Mailing List Archive

Still struggling to understand Xen
Xen seems to be different to most other forms of virtualisation in the
way it presents hardware to the guest. For so-called HVM guests I
understand everything:

Hypervisor in conjunction with dom0 provides disk and network devices
on PCI busses that can be viewed, enumerated with standard
off-the-shelf Linux drivers and tools. This is all good. My
confusion kicks in when the subject of PV drivers comes up.

From what I understand (not clearly documented anywhere that I could
find) the hypervisor/dom0 combination somehow switches mode in
response to something the DomU guest does. What exactly? Don't know.
But by the time you've booted using the HVM hardware it seems the door
is shut, and any attempt to load front-end drivers will then result in
'device not found' messages or whatever. That is, assuming my kernel
is configured correctly.

So this is presumably why most guests 'connect' to the PV back-end in
the initrd. I couldn't really understand if it's the loading of the
conventional SCSI driver, or the detection of a SCSI device, or the
opening of a conventional SCSI device to mount as root that shuts the
above 'door'. Unfortunately there isn't much documentation about
kernel configurations for Xen and what documentation I found seemed to
be out of date.

It's also unclear to me if the back-end drivers in a typical dom0 that
you might get from for example XCP-ng, or XenServer, or even AWS can
somehow be incompatible with the latest and greatest domU Linux
kernels. Is there some kind of interface versioning or are all
versions forward and backward compatible?

I've been through pretty much all drivers related to Xen, compiled
them into my kernel and selected /dev/xvda1 device on boot, but it's
still not working for me, the Xen 'hardware' is not being detected, so
would appreciate any guidance you can offer.

Regards,
Mark.
Re: Still struggling to understand Xen
On 02.07.20 14:00, Biff Eros wrote:
> Xen seems to be different to most other forms of virtualisation in the
> way it presents hardware to the guest. For so-called HVM guests I
> understand everything:
>
> Hypervisor in conjunction with dom0 provides disk and network devices
> on PCI busses that can be viewed, enumerated with standard
> off-the-shelf Linux drivers and tools. This is all good. My
> confusion kicks in when the subject of PV drivers comes up.
>
> From what I understand (not clearly documented anywhere that I could
> find) the hypervisor/dom0 combination somehow switches mode in
> response to something the DomU guest does. What exactly? Don't know.
> But by the time you've booted using the HVM hardware it seems the door
> is shut, and any attempt to load front-end drivers will then result in
> 'device not found' messages or whatever. That is, assuming my kernel
> is configured correctly.
>
> So this is presumably why most guests 'connect' to the PV back-end in
> the initrd. I couldn't really understand if it's the loading of the
> conventional SCSI driver, or the detection of a SCSI device, or the
> opening of a conventional SCSI device to mount as root that shuts the
> above 'door'. Unfortunately there isn't much documentation about
> kernel configurations for Xen and what documentation I found seemed to
> be out of date.

A typical HVM domain boots with the emulated devices active (e.g.
hda, hdb, ...). The switch to PV devices is normally done before
mounting root, so that root can be mounted on the PV device (for
performance reasons).

When PV devices are active, the guest kernel writes to a special
I/O port emulated by qemu in order to deactivate (unplug) the emulated
devices. This makes sure there are no ambiguous devices (otherwise each
device with a PV driver would show up twice, once via the emulated
driver and once via the PV driver). Unplugging the emulated devices can
be prevented via the guest kernel boot parameter "xen_emul_unplug".
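For example, to keep the emulated devices you can boot the guest kernel
with something along these lines (the full list of accepted values,
such as "ide-disks", "nics", "all", "unnecessary" and "never", is in
the kernel's kernel-parameters.txt):

xen_emul_unplug=never

As far as I know the Linux disk frontend then declines to attach to
devices which still have an emulated counterpart, precisely to avoid
the duplication described above.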

New PV devices can be added at runtime, but they have to be assigned to
the guest from dom0 first. New devices and their parameters are
advertised via Xenstore (the guest needs the xenbus driver for that
purpose).
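As an illustration (paths from memory, they may differ slightly between
toolstacks): from dom0 you can inspect what has been advertised to a
guest with

xenstore-ls /local/domain/<domid>/device

and inside a Linux guest the xenbus devices are visible under

ls /sys/bus/xen/devices

typically with names like vbd-51712 or vif-0.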

> It's also unclear to me if the back-end drivers in a typical dom0 that
> you might get from for example XCP-ng, or XenServer, or even AWS can
> somehow be incompatible with the latest and greatest domU Linux
> kernels. Is there some kind of interface versioning or are all
> versions forward and backward compatible?

The basic protocol is compatible. Optional features are advertised by
the backend in Xenstore, so the frontend knows which features it is
allowed to use. The frontend will then set feature values in Xenstore
to tell the backend how it wants to operate the device.
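A sketch of what that negotiation looks like on the block side (key
names from memory, they vary between backends and versions): in dom0,

xenstore-ls /local/domain/0/backend/vbd/<domid>/<devid> | grep feature

typically shows entries such as feature-flush-cache, feature-barrier,
feature-persistent or feature-discard, while the frontend writes its
own keys (ring references, event channel, the features it accepts) on
its side of the device path.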

> I've been through pretty much all drivers related to Xen, compiled
> them into my kernel and selected /dev/xvda1 device on boot, but it's
> still not working for me, the Xen 'hardware' is not being detected, so
> would appreciate any guidance you can offer.

Are you using Linux or another OS?

In Linux you need the Xen platform PCI device (see the source
drivers/xen/platform-pci.c in the kernel tree). platform_pci_probe()
contains all the function calls needed to initialize the basic
environment for PV drivers.
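A rough checklist for the guest kernel config (option names shift a
little between kernel versions, so treat this as a sketch rather than a
definitive list):

grep -E 'CONFIG_HYPERVISOR_GUEST|CONFIG_PARAVIRT=|CONFIG_XEN' .config

On a recent kernel you would hope to see all of these enabled, either
built in or as modules that end up in your initrd:

CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_XEN=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_NETDEV_FRONTEND=y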


Juergen
Re: Still struggling to understand Xen
On Thursday, 2 July 2020 at 14:00:25 CEST, Biff Eros wrote:
> Hypervisor in conjunction with dom0 provides disk and network devices
> on PCI busses that can be viewed, enumerated with standard
> off-the-shelf Linux drivers and tools. This is all good. My
> confusion kicks in when the subject of PV drivers comes up.
Just a few points, briefly:

While HVM virtualizes a "full platform" (like most heavyweight virtualization solutions), PV virtualizes resources.

This means:
In HVM, "standard drivers" can be used, thanks to that full virtualization of the hardware.

The reason behind PV: it avoids much of the overhead of emulating virtual "real" hardware on real hardware - its main work is "just" providing and switching resources, transparently to the virtual devices in the different DomUs.

PV drivers such as Xen storage and Xen net present the corresponding devices to the DomU ("guest") through special frontend drivers. That is, you see no "buses" behind your storage device (disk or partition), nor do you have to deal with any, and there is no access to these devices without the special PV drivers; the real hardware drivers are handled by Xen / dom0 ("host").
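A quick way to see this from inside a Linux DomU (a sketch - the sysfs
layout may vary with kernel version): the PV disk is not parented by a
PCI or SCSI controller but by a xenbus device, so e.g.

readlink -f /sys/block/xvda

points to something like /sys/devices/vbd-51712/block/xvda rather than
to a path under a PCI controller.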


> It's also unclear to me if the back-end drivers in a typical dom0 that
> you might get from for example XCP-ng, or XenServer, or even AWS can
> somehow be incompatible with the latest and greatest domU Linux
> kernels. Is there some kind of interface versioning or are all
> versions forward and backward compatible?
The DomU drivers "should" be backward compatible (I have some Xen 3.x drivers somewhere running against a much newer Xen 4.x, but that seems extreme - if I remember right there was some more significant change in one of the 3.x releases or so). They are, at least on my platforms.

If I'm right, the supported interface versions can be read with e.g.

xl info |grep xen_caps

xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
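From inside the DomU you can cross-check what the guest kernel itself
detected, e.g. (the sysfs file is only there if the Xen sysfs support
is built into the kernel):

dmesg | grep -i xen
cat /sys/hypervisor/type

The dmesg output should mention the Xen version and, for HVM guests,
typically also whether the emulated devices were unplugged.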


> I've been through pretty much all drivers related to Xen, compiled
> them into my kernel and selected /dev/xvda1 device on boot, but it's
> still not working for me, the Xen 'hardware' is not being detected, so
> would appreciate any guidance you can offer.

The device name does not look like an HVM (emulated) device, so I'm a bit confused.

Which block devices are available to a DomU can be listed with:

xl block-list <yourdomu>

and they should follow the naming of the Xen storage driver within your DomU (this may depend on the DomU OS - e.g. Linux and the *BSDs use different naming schemes).
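For what it's worth, the output (column layout from memory, so treat
this as a sketch with made-up values) looks roughly like

Vdev   BE  handle state evt-ch ring-ref BE-path
51712  0   ...    4     ...    ...      /local/domain/0/backend/vbd/<domid>/51712

where state 4 means "Connected" in xenbus terms, and the Vdev number
encodes the device name (51712 should correspond to xvda, i.e. major
202, minor 0), so you can confirm the disk you configured in dom0 is
really the one the guest should see as /dev/xvda.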

if it hangs "before" kernel boot: how do you boot your DomU (pygrub)?
if it hangs during boot (i.e. mounting root fs) show / grep kernel output regarding xen drivers ("blkfront", "xv")
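For example, something along these lines from the DomU console:

dmesg | grep -iE 'xen|blkfront|xvd'

If there is no blkfront / xvda line at all, the frontend is most likely
not in your kernel or initrd; if blkfront probes but finds nothing, the
device is probably not attached on the dom0 side (see xl block-list
above).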


niels.
--
---
Niels Dettenbach
Syndicat IT & Internet
http://www.syndicat.com
PGP: https://syndicat.com/pub_key.asc
---