Mailing List Archive

Error migrating HVM DomU in heterogeneous cluster
Hello.

I am trying to live migrate a Windows HVM DomU between two Dom0s running Debian Buster with Xen 4.11.2-pre and kernel 4.19.0-4-amd64,
and I always get the error below (a PV Debian Linux DomU migrates fine).
The two Dom0s have different CPUs.
I have tried masking the CPU features with the tool xen_maskcalc.py and configuring the DomU accordingly, but the error persists (see the cfg below).
The storage is shared over NFS.
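
For reference, I start the migration with something like this (host name anonymised):

  # live migrate the DomU "windows" to the other Dom0
  xl migrate windows other-dom0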

Can anybody help me?

Thanks,
Agustin

-----------------------------------------------
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/2193)
Loading new save file <incoming migration stream> (new xl fmt info 0x3/0x0/2193)
 Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Saving domain 2, type x86 HVM
xc: info: Found x86 HVM domain from Xen 4.11
xc: info: Restoring domain
xc: error: Mapping pfn 0xaf000 (mfn 0xaf000, type 0) failed with -22: Internal error
xc: error: Restore failed (0 = Success): Internal error
libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done: restoring domain: Success
libxl: error: libxl_create.c:1267:domcreate_rebuild_done: Domain 30:cannot (re-)build domain: -3
libxl: error: libxl_domain.c:1034:libxl__destroy_domid: Domain 30:Non-existant domain
libxl: error: libxl_domain.c:993:domain_destroy_callback: Domain 30:Unable to destroy guest
libxl: error: libxl_domain.c:920:domain_destroy_cb: Domain 30:Destruction of domain failed
migration target: Domain creation failed (code -3).
libxl: error: libxl_utils.c:510:libxl_read_exactly: file/stream truncated reading ipc msg header from domain 2 save/restore helper stdout pipe
libxl: error: libxl_exec.c:129:libxl_report_child_exitstatus: domain 2 save/restore helper [9870] died due to fatal signal Broken pipe
migration sender: libxl_domain_suspend failed (rc=-3)
libxl: info: libxl_exec.c:118:libxl_report_child_exitstatus: migration transport process [9869] exited with error status 1
Migration failed, resuming at sender.
xc: error: Dom 2 not suspended: (shutdown 0, reason 255): Internal error
libxl: error: libxl_dom_suspend.c:472:libxl__domain_resume: Domain 2:xc_domain_resume failed: Invalid argument
-----------------------------------------------
windows.cfg

builder='hvm'
memory='4000'
shadow_memory=8
disk=['file:/Xxxxxxxxxxxxxxx/windows-disk.raw,ioemu:hda,w']
name='windows'
vif=['type=ioemu,bridge=mybridge,mac=0e:00:00:00:01:1e']
vfb=['type=vnc,vnclisten=localhost,vncdisplay=1']
boot='c'
vcpus=4
ne2000=1
apic=1
acpi=1
sdl=0
vnc=1
pae=1
stdvga=0
vncconsole=1
vncviewer=1
vncserver=1
nographic=0
usbdevice='tablet'
serial='pty'
#soundhw='sb16'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
cpuid = [
  "0x00000001:ecx=x000xx000x00xx0xxxxxxxxx00xxxx0x",
  "0x00000007,0x00:ebx=xxxxxxxxxxxxxxxxxxxxxx0x0xxxxxx0"
]
-----------------------------------------------

Re: Error migrating HVM DomU in heterogeneous cluster
Hi,

On 4/8/19 6:50 PM, Agustín López wrote:
>
> I am trying to live migrate a Windows HVM DomU between two Dom0s running
> Debian Buster with Xen 4.11.2-pre and kernel 4.19.0-4-amd64, and I always
> get the error below (a PV Debian Linux DomU migrates fine).
> The two Dom0s have different CPUs.

Are those the only two servers you have? The error you pasted looks like a
toolstack error. Usually (but that's just my been-there-done-that) wrong
masks show themselves inside the virtual machine after resuming, when the
VM is using functionality that suddenly disappeared (like a console full
of oops messages, etc.).

> I have tried masking the CPU features with the tool xen_maskcalc.py and
> configuring the DomU accordingly,

Regardless of what I just said, xen_maskcalc.py should not be used any
more, and maybe you also found my blog post about xen and cpuid from
2014, which is also outdated and technically incorrect nowadays. If
you're lucky, it works for PV, but HVM has different masks.

Xen 4.11 has the xen-cpuid program. The output shows different kinds of
cpuid information.

Here's a program a colleague in my team at work wrote to calculate
things based on the xen-cpuid program output:

http://paste.debian.net/hidden/2bb6a753/
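
The idea boils down to something like this rough sketch (this is not that
program, and the hex feature words at the bottom are made-up placeholders;
take the real values from the xen-cpuid output on both of your hosts):

#!/usr/bin/env python3
# Rough sketch: given the 32-bit feature word for one cpuid leaf/register
# from each of the two hosts, print an xl cpuid= bit string that turns off
# every feature bit that only one of the hosts has.

def mask_string(host_a, host_b):
    common = host_a & host_b        # features both hosts have
    either = host_a | host_b        # features at least one host has
    bits = []
    for bit in range(31, -1, -1):   # cpuid= strings are written MSB first
        if either & (1 << bit) and not common & (1 << bit):
            bits.append("0")        # only one host has it: hide it
        else:
            bits.append("x")        # leave it at Xen's default
    return "".join(bits)

# made-up example: leaf 0x1 ecx words from the two Dom0s
print('"0x00000001:ecx=%s",' % mask_string(0x7ffafbff, 0x77fafbbf))

The resulting lines go into the cpuid = [ ... ] list in the domU config,
like the one you already have.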

Also, the new PVH mode uses the same masks as HVM.

And yes, I should find out how to change that old blog post because it's
misleading now.

> but the error persists (see the cfg below).
> The storage is shared over NFS.
>
> Can anybody help me?
>
> Thanks,
> Agustin

Hans

Re: Error migrating HVM DomU in heterogeneous cluster
Whoops, I forgot to put the OP in To: and only posted to the list.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-users