Mailing List Archive

a ton of kernel issues
Hi folks,

please forgive me if this isn't 100% the right list, but I'm going mad
over current Xen development as far as the pv_ops kernels are
involved. I've been using Xen since 3.0 in various production setups
without any serious problems for years now.

And now, since support for the "patched" distro kernels like Debian 5's
has run out, I'm trying to set up a working Xen + pv_ops kernel with
working CPU and RAM hotplug and live migration. Nothing special, I
thought, but hell, I'm totally lost: after two months of testing dozens
of combinations I was not able to find a single kernel/Xen combination
which supports the above-mentioned features. Some crash after hotplug on
a migrated domU, some crash right at the migration, and some crash right
at the start if vcpu_avail > vcpus.

I've tested:

xen-4.0
xen-4.1

Debian 6, 32 and 64 bit (2.6.32-x kernels)
Ubuntu Lucid 32 and 64 bit kernels, including Karmic, Natty and even
Oneiric backports

I've posted some questions on the users mailing list and wrote bug reports,
but today, after two months of work on this, I'm as stuck as one can get.

Is it possible that there is not a single working xen+pv_ops setup out
there which matches my needs? I can't believe it.

I appreciate _any_ hint!

Regards

Tim

http://lists.xen.org/archives/html/xen-users/2011-11/msg00553.html
http://lists.xen.org/archives/html/xen-users/2011-11/msg00523.html
http://lists.xen.org/archives/html/xen-users/2011-11/msg00295.html
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/893177

Re: a ton of kernel issues [ In reply to ]
On Mon, Dec 12, 2011 at 10:44:08PM +0100, Tim Evers wrote:
> Hi folks,
>
> please forgive me if this isn't 100% the right list, but I'm going mad
> over current Xen development as far as the pv_ops kernels are
> involved. I've been using Xen since 3.0 in various production setups
> without any serious problems for years now.
>
> And now, since support for the "patched" distro kernels like Debian 5's
> has run out, I'm trying to set up a working Xen + pv_ops kernel with
> working CPU and RAM hotplug and live migration. Nothing special, I
> thought, but hell, I'm totally lost: after two months of testing dozens
> of combinations I was not able to find a single kernel/Xen combination
> which supports the above-mentioned features. Some crash after hotplug on

Did you try the latest and greatest? Meaning anything that is based on a
3.0 (or higher) kernel?

> a migrated domU, some crash right at the migration, and some crash right
> at the start if vcpu_avail > vcpus.

Is this with more than 32 CPUs?

>
> I've tested:
>
> xen-4.0
> xen-4.1
>
> Debian 6, 32 and 64 bit (2.6.32-x kernels)
> Ubuntu Lucid 32 and 64 bit kernels, including Karmic, Natty and even
> Oneiric backports

backports? Why not just .

Looking at your bug report, did you check the .config to see whether
you had MAXSMP or NR_CPUS defined? If you had MAXSMP, there was a bug
around that fixed in the 2.6.39 time-frame (I think). And if you look
at NR_CPUS, is it greater than the number of virtual CPUs?
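
A quick way to check, assuming a stock Debian/Ubuntu kernel that ships
its config under /boot (adjust the path if yours lives elsewhere):

grep -E 'CONFIG_(MAXSMP|NR_CPUS)' /boot/config-$(uname -r)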


Otherwise, does the virgin kernel boot if you specify maxvcpus=vcpus
and then later on use xm to take CPUs away?
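
For example, something along these lines (xm config syntax; the domain
name is made up):

# domU config: bring all vCPUs online at boot (no vcpu_avail restriction)
vcpus = 2

# then, from dom0, unplug one and re-plug it:
xm vcpu-set mydomu 1
xm vcpu-set mydomu 2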


>
> I've posted some questions on the users mailing list and wrote bug reports,
> but today, after two months of work on this, I'm as stuck as one can get.

Uh? Hm somehow we [kernel engineers] never got copied on it.
>
> Is it possible that there is not a single working xen+pv_ops setup out
> there which matches my needs? I can't believe it.
> I appreciate _any_ hint!
>
> Regards
>
> Tim
>
> http://lists.xen.org/archives/html/xen-users/2011-11/msg00553.html
> http://lists.xen.org/archives/html/xen-users/2011-11/msg00523.html
> http://lists.xen.org/archives/html/xen-users/2011-11/msg00295.html
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/893177

OK, so it says it brought 8 CPUs on .. and then it crashes afterwards.
So is that kernel compiled with CONFIG_NR_CPUS=8 or some higher value?

Re: a ton of kernel issues [ In reply to ]
Use SUSE and CentOS 5 (not 6) kernels. They work better with Xen than
the others. openSUSE releases pretty often, so they have at least
2.6.34, 2.6.37 and 3.1. Or you can use the old CentOS 2.6.18.

All those kernels are '-xen', not vanilla. pv_ops still has some
issues with memory limits, but any new kernel (3.0+) will boot normally
and operate with very minor glitches. Older pv_ops (e.g. Debian's
2.6.32) have some more major issues.

Re: a ton of kernel issues [ In reply to ]
George,

> pv_ops still has some issues with memory limits, but any
> new kernel (3.0+) will boot normally and operate with very
> minor glitches. Older pv_ops (e.g. Debian's 2.6.32) have some
> more major issues.

what glitches should one expect with 3.0+, and having the choice,
would it be better to go with 3.1 or even 3.2?

thank you!
-Alessandro-
 Here i am, A young man,
 A crashing computer program,
 Here is a pen, write out my name...

(from: The Servant - Orchestra)



Re: a ton of kernel issues [ In reply to ]
On 13.12.2011 14:19, Alessandro Salvatori wrote:
>
>> pv_ops still has some issues with memory limits, but any
>> new kernel (3.0+) will boot normally and operate with very
>> minor glitches. Older pv_ops (e.g. Debian's 2.6.32) have some
>> more major issues.
> what glitches should one expect with 3.0+, and having the choice,
> would it be better to go with 3.1 or even 3.2?
>
>
Right now I know about two of them:
When you set up memory for a virtual machine using xenballoon, the
value in dom0 differs from the value in domU. The issue is that -xen
kernels 'hide' some memory inside 'used' memory, while pv-ops just
reduces MemTotal to a value excluding that memory. Practically, that
means if you set a domain's memory to 2GiB, the client will see only
1.95GiB, and so on.

The second issue is the lack of support for a 'pre-inflated balloon',
meaning you cannot set memory-static-max to 2GiB and target to 1GiB,
and later do a 'memory grow' from 1G to 2G without a VM reboot. -xen
kernels allow this (up to the memory-static-max limit).
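
To see the first glitch concretely, compare dom0's view with the
guest's (assuming xm and a PV guest):

xm list <domain>               # dom0's idea of the allocation, in MiB
grep MemTotal /proc/meminfo    # inside the guest, in KiB; lower on pv-ops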



Re: a ton of kernel issues [ In reply to ]
On 13/12/11 10:36, George Shuklin wrote:
> Right now I know about two of them:
> When you set up memory for a virtual machine using xenballoon, the
> value in dom0 differs from the value in domU. The issue is that -xen
> kernels 'hide' some memory inside 'used' memory, while pv-ops just
> reduces MemTotal to a value excluding that memory. Practically, that
> means if you set a domain's memory to 2GiB, the client will see only
> 1.95GiB, and so on.

This really makes no practical difference. The memory is "used" in
either case, and the different reporting is a side-effect of the change
in how certain memory allocations are done.

> The second issue is the lack of support for a 'pre-inflated balloon',
> meaning you cannot set memory-static-max to 2GiB and target to 1GiB,
> and later do a 'memory grow' from 1G to 2G without a VM reboot. -xen
> kernels allow this (up to the memory-static-max limit).

This should work if memory hotplug is enabled.

It is also supported without memory hotplug but this requires that the
tools supply a suitable memory map that covers the largest
memory-static-max limit you wish to support. I'm not sure if the tools
can do this yet.
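
For reference, enabling that in the guest kernel means roughly these
.config options (names as of the 3.x Kconfig; double-check against
your tree):

CONFIG_MEMORY_HOTPLUG=y
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y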

David

Re: a ton of kernel issues [ In reply to ]
On Tue, 2011-12-13 at 13:17 +0000, David Vrabel wrote:
> > The second issue is the lack of support for a 'pre-inflated balloon',
> > meaning you cannot set memory-static-max to 2GiB and target to 1GiB,
> > and later do a 'memory grow' from 1G to 2G without a VM reboot. -xen
> > kernels allow this (up to the memory-static-max limit).
>
> This should work if memory hotplug is enabled.
>
> It is also supported without memory hotplug but this requires that the
> tools supply a suitable memory map that covers the largest
> memory-static-max limit you wish to support. I'm not sure if the tools
> can do this yet.

With xl this should work using the "maxmem" option. (xm probably uses
the same name)

Ian.

Re: a ton of kernel issues [ In reply to ]
On Tue, Dec 13, 2011 at 02:36:04PM +0400, George Shuklin wrote:
> Right now I know about two of them:
> When you set up memory for a virtual machine using xenballoon, the
> value in dom0 differs from the value in domU. The issue is that -xen
> kernels 'hide' some memory inside 'used' memory, while pv-ops just
> reduces MemTotal to a value excluding that memory. Practically, that
> means if you set a domain's memory to 2GiB, the client will see only
> 1.95GiB, and so on.
>
> The second issue is the lack of support for a 'pre-inflated balloon',
> meaning you cannot set memory-static-max to 2GiB and target to 1GiB,
> and later do a 'memory grow' from 1G to 2G without a VM reboot. -xen
> kernels allow this (up to the memory-static-max limit).

Is all of this with 'xm' or with 'xl' tools? What happens when you 'grow'
the memory from 1G to 2G? Are you using the hotplug memory balloon or
the older one?

Re: a ton of kernel issues [ In reply to ]
On 13.12.2011 17:37, Ian Campbell wrote:
>> This should work if memory hotplug is enabled.
>>
>> It is also supported without memory hotplug but this requires that the
>> tools supply a suitable memory map that covers the largest
>> memory-static-max limit you wish to support. I'm not sure if the tools
>> can do this yet.
> With xl this should work using the "maxmem" option. (xm probably uses
> the same name)
>
I'm not sure what maxmem does, but I can say this option does not allow
going beyond the initial boot memory for pv_ops kernels, or beyond
static-max memory for -xen kernels.

I tested it with the following Python script (under XCP, with squeezed
shut down to be sure not to get 'rebalanced'):

import xen.lowlevel.xc, xen.lowlevel.xs
xc = xen.lowlevel.xc.xc()   # hypervisor control interface
xs = xen.lowlevel.xs.xs()   # xenstore handle
# domid/memory are set elsewhere; the xenstore memory/* keys take KiB
xc.domain_setmaxmem(domid,memory)
xs.write("","/local/domain/%i/memory/dynamic-min"%domid,str(memory*1024))
xs.write("","/local/domain/%i/memory/dynamic-max"%domid,str(memory*1024))
xs.write("","/local/domain/%i/memory/static-max"%domid,str(memory*1024))
xs.write("","/local/domain/%i/memory/target"%domid,str(memory*1024))

It works fine within the noted ranges, but simply ignores any request
beyond them.


Re: a ton of kernel issues [ In reply to ]
On 13.12.2011 17:17, David Vrabel wrote:
>> Right now I know about two of them:
>> When you set up memory for a virtual machine using xenballoon, the
>> value in dom0 differs from the value in domU. The issue is that -xen
>> kernels 'hide' some memory inside 'used' memory, while pv-ops just
>> reduces MemTotal to a value excluding that memory. Practically, that
>> means if you set a domain's memory to 2GiB, the client will see only
>> 1.95GiB, and so on.
> This really makes no practical difference. The memory is "used" in
> either case, and the different reporting is a side-effect of the change
> in how certain memory allocations are done.
Well... for us it makes a HUGE difference. We run a cloud with very
precise accounting of fast automatic memory allocation for customers'
domains, and we bill them per... em... KiB*hr. So if we bill domains
based on the xc.get_domain_memkb value and customers see a difference
between MemTotal and our values, this will make them feel like we are
cheating. We are not greedy and are ready to 'cut' our mem_kb value
down to their MemTotal, but I haven't found any formula for that
(calculating MemTotal from static-max and dom_kb values), and I can't
trust any data from customers' domains (except requests for memory,
which we accept, serve and happily take money for as 'more memory').

It sounds ridiculous, but we avoid pv_ops kernels because of that
little ~50MB steal. We are stuck with -xen kernels (forward-ported from
SUSE, plus native CentOS 2.6.18-xen). I don't know what we will do in
the future, but right now pv-ops is not good...

>> The second issue is the lack of support for a 'pre-inflated balloon',
>> meaning you cannot set memory-static-max to 2GiB and target to 1GiB,
>> and later do a 'memory grow' from 1G to 2G without a VM reboot. -xen
>> kernels allow this (up to the memory-static-max limit).
> This should work if memory hotplug is enabled.
>
> It is also supported without memory hotplug but this requires that the
> tools supply a suitable memory map that covers the largest
> memory-static-max limit you wish to support. I'm not sure if the tools
> can do this yet.
>
Well... I posted a few weeks ago about problems with memory hotplug - I
see this message in dmesg:

[1935380.223401] System RAM resource 18000000 - 27ffffff cannot be added
[1935380.223414] xen_balloon: reserve_additional_memory: add_memory()
failed: -17

I posted about that:
http://old-list-archives.xen.org/archives/html/xen-users/2011-11/msg00278.html


Re: a ton of kernel issues [ In reply to ]
On 13.12.2011 18:18, Konrad Rzeszutek Wilk wrote:

> Is all of this with 'xm' or with 'xl' tools? What happens when you 'grow'
> the memory from 1G to 2G? Are you using the hotplug memory balloon or
> the older one?
This is XCP (with squeezed shut down; that's not a problem), so we use
direct calls to xc (domain_setmaxmem), but I can repeat this with xl
(which is the one and only one available in XCP), with the same
results. If I create a VM with static-max 2GiB, dynamic-max 1GiB, and
boot it, then:

1) with pv_ops I can go lower than 1GiB, but never higher than 1GiB
2) with -xen kernels I can go up to 2GiB.

If I compile the kernel with the xenballoon hotplug option enabled,
then after some (very small) memory growth (within preallocated
memory?) further attempts give me this message:

[1935380.223401] System RAM resource 18000000 - 27ffffff cannot be added
[1935380.223414] xen_balloon: reserve_additional_memory: add_memory()
failed: -17

(tested with vanilla 3.0, 3.1 and some RCs of 3.2)


Re: a ton of kernel issues [ In reply to ]
On Wed, Dec 14, 2011 at 01:10:39AM +0400, George Shuklin wrote:
> On 13.12.2011 18:18, Konrad Rzeszutek Wilk wrote:
> >Is all of this with 'xm' or with 'xl' tools? What happens when you 'grow'
> >the memory from 1G to 2G? Are you using the hotplug memory balloon or
> >the older one?
> This is XCP (with squeezed shut down; that's not a problem), so we use
> direct calls to xc (domain_setmaxmem), but I can repeat this with xl
> (which is the one and only one available in XCP), with the same
> results. If I create a VM with static-max 2GiB, dynamic-max 1GiB, and
> boot it, then:
>
> 1) with pv_ops I can go lower than 1GiB, but never higher than 1GiB

I just tried this, and had these two options in my guest config:

maxmem=2048
mem=1024

booted up the guest and from dom0 did "xm mem-set latest 2048"
and it expanded to 2G. Did "xm mem-set latest 1024" and it went down.

Is this how you are doing it as well? (I have to confess I don't know
much about XCP so don't know if those 'static-max, dynamic-max'
are the same thing).

Is this PV bootup? Or HVM?

> 2) with -xen kernels I can go up to 2GiB.
>
> If I compile the kernel with the xenballoon hotplug option enabled,
> then after some (very small) memory growth (within preallocated
> memory?) further attempts give me this message:
>
> [1935380.223401] System RAM resource 18000000 - 27ffffff cannot be added
> [1935380.223414] xen_balloon: reserve_additional_memory: add_memory()
> failed: -17

Hm, that looks like a bug. But the old style balloon code does work.
Not sure why with a default .config you are hitting these issues.
>
> (tested with vanilla 3.0, 3.1 and some RCs of 3.2)
Re: a ton of kernel issues [ In reply to ]
On Wed, Dec 14, 2011 at 01:05:31AM +0400, George Shuklin wrote:
>
> On 13.12.2011 17:17, David Vrabel wrote:
> > This really makes no practical difference. The memory is "used" in
> > either case, and the different reporting is a side-effect of the change
> > in how certain memory allocations are done.

David,

You are thinking that this is the vmalloc vs kmalloc memory for the
frontends?

> Well... for us it makes a HUGE difference. We run a cloud with very
> precise accounting of fast automatic memory allocation for customers'
> domains, and we bill them per... em... KiB*hr. So if we bill domains
> based on the xc.get_domain_memkb value and customers see a difference
> between MemTotal and our values, this will make them feel like we are
> cheating. We are not greedy and are ready to 'cut' our mem_kb value
> down to their MemTotal, but I haven't found any formula for that
> (calculating MemTotal from static-max and dom_kb values), and I can't
> trust any data from customers' domains (except requests for memory,
> which we accept, serve and happily take money for as 'more memory').
>
> It sounds ridiculous, but we avoid pv_ops kernels because of that
> little ~50MB steal. We are stuck with -xen kernels (forward-ported from
> SUSE, plus native CentOS 2.6.18-xen). I don't know what we will do in
> the future, but right now pv-ops is not good...

50MB is rather a large amount - it should be more like a couple of MB -
unless there are other things in the picture (P2M? per-cpu areas?). Have
you tried running the same kernel version in both flavours - a SuSE
2.6.X kernel against a pvops 2.6.X, with the same X? If so, were there
any obvious discrepancies in the e820 maps or in the "Memory reserved"
values? Were the .configs similar?
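
E.g. boot the same guest under each kernel and capture the early boot
output, then diff the two files (a sketch; the grep pattern is just a
starting point):

dmesg | grep -iE 'e820|memory' > memlog-$(uname -r).txt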


Re: a ton of kernel issues [ In reply to ]
On Tue, 2011-12-13 at 20:59 +0000, George Shuklin wrote:
> On 13.12.2011 17:37, Ian Campbell wrote:
> > With xl this should work using the "maxmem" option. (xm probably uses
> > the same name)
> >
> I'm not sure what maxmem does

maxmem sets static-max for the guest in the xl toolstack.

> but I can say this option does not allow
> going beyond the initial boot memory for pv_ops kernels, or beyond
> static-max memory for -xen kernels.

AFAIK the toolstack support for memory hotplug (i.e. growing beyond
static-max) does not yet exist.

> I tested it with the following Python script (under XCP, with squeezed
> shut down to be sure not to get 'rebalanced'):
>
> xc.domain_setmaxmem(domid,memory)
> xs.write("","/local/domain/%i/memory/dynamic-min"%domid,str(memory*1024))
> xs.write("","/local/domain/%i/memory/dynamic-max"%domid,str(memory*1024))
> xs.write("","/local/domain/%i/memory/static-max"%domid,str(memory*1024))
> xs.write("","/local/domain/%i/memory/target"%domid,str(memory*1024))
>
> It works fine within the noted ranges, but simply ignores any request
> beyond them.

If you are stepping outside the toolstack and doing this sort of thing
behind its back then all bets are off and you can't really expect people
here to help you.

Please try and reproduce the issue using the standard toolstack options.

Ian.


Re: a ton of kernel issues [ In reply to ]
On 14.12.2011 02:30, Ian Campbell wrote:
>> I'm not sure what maxmem does
> maxmem sets static-max for the guest in the xl toolstack.

Well, I understand that; I simply don't understand what maxmem does to
the domain in Xen (the hypervisor part). My hypothesis is that it just
puts a limit on the number of PFNs allocated to the domain.

>
>> but I can say this option does not allow
>> going beyond the initial boot memory for pv_ops kernels, or beyond
>> static-max memory for -xen kernels.
> AFAIK the toolstack support for memory hotplug (i.e. growing beyond
> static-max) does not yet exist.

As far as I know, the toolstack does not really do much work for this (if
I'm wrong I'll really appreciate the information). For memory control we
need to set maxmem and send a request to the guest xenballoon for a new
target (via xenstore). This works without xapi and squeezed running (of
course, for VM start/reboot we need them), but simple memory management
can be done manually without the help of xapi/squeezed. (Again, if I'm
wrong, I'd really like to know where.) As far as I understand, xapi writes
dynamic-min and dynamic-max to xenstore for the domain and calls squeezed
via xenstore-rpc to rebalance memory; squeezed reads the request, reads
those values, calculates a new target and writes it to xenstore for each
domain.

My current description (I may be wrong) of what happens during memory
resizing (step 2 is sketched after the list):
1) Set maxmem
2) Send the new target to xenballoon via memory/target
3) Xenballoon requests new memory from the hypervisor (or returns some of it)
4) Xenballoon manipulates domU memory management to make the new pages
usable (or to snatch pages away).
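
Step 2, for example, is just a xenstore write; the target is in KiB, so
2GiB is 2097152 (domid made up):

xenstore-write /local/domain/<domid>/memory/target 2097152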
>> I tested it with the following Python script (under XCP, with squeezed
>> shut down to be sure not to get 'rebalanced'):
>>
>> xc.domain_setmaxmem(domid,memory)
>> xs.write("","/local/domain/%i/memory/dynamic-min"%domid,str(memory*1024))
>> xs.write("","/local/domain/%i/memory/dynamic-max"%domid,str(memory*1024))
>> xs.write("","/local/domain/%i/memory/static-max"%domid,str(memory*1024))
>> xs.write("","/local/domain/%i/memory/target"%domid,str(memory*1024))
>>
>> It works fine within the noted ranges, but simply ignores any request
>> beyond them.
> If you are stepping outside the toolstack and doing this sort of thing
> behind its back then all bets are off and you can't really expect people
> here to help you.
>
> Please try and reproduce the issue using the standard toolstack options.
>
I understand that very well, so I've tested this (not with all kernel
versions) with a normal XCP setup with default memory management:
set up a VM, make dynamic-max 1GiB and static-max 2GiB, start the VM,
use xe vm-memory-target-set to 2GiB - and memory is still 1GiB.
I've tested this with a few pv_ops kernels, including those on
xs-tools.iso (tested on XCP 0.5, 1.0 and 1.1, and even on XenServer 6) -
same result.


P.S. Thank you for your attention to this issue.

Re: a ton of kernel issues [ In reply to ]
On Tue, 2011-12-13 at 22:53 +0000, George Shuklin wrote:
> Well, I understand that; I simply don't understand what maxmem does to
> the domain in Xen (the hypervisor part). My hypothesis is that it just
> puts a limit on the number of PFNs allocated to the domain.

It controls precisely the behaviour you need! Try "maxmem=2048" and
"memory=1024" in your guest configuration, it should boot with 1G of RAM
and allow you to balloon to 2G and back.

The maxmem option mainly passes an appropriate e820 table to the guest
to cause it to allocate enough space for 2G of pages.

On some versions of Linux you might also need to add mem=2G to the
kernel command line to work around a bug. That is fixed in the most
recent versions though.
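
Putting that together, a minimal sketch (xm syntax; the guest name is
made up):

# guest config
name   = "balloon-test"
maxmem = 2048   # static maximum, in MiB
memory = 1024   # initial allocation, in MiB

# from dom0, once the guest has booted:
xm mem-set balloon-test 2048   # balloon up to the static max
xm mem-set balloon-test 1024   # and back down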

> As far as I know, the toolstack does not really do much work for this (if
> I'm wrong I'll really appreciate the information).

I'm not really sure what you are referring to here. Memory hotplug does
not currently work, so let's leave it aside and talk only about the
booting-with-the-balloon-pre-inflated case, since that should work.

You seem to be using XAPI in which case xen-api@ is the right list to
ask for information. I believe you can set various VM.memory-* options
on the VM to control all this stuff.

> For memory control we
> need to set maxmem and send a request to the guest xenballoon for a new
> target (via xenstore). This works without xapi and squeezed running (of
> course, for VM start/reboot we need them), but simple memory management
> can be done manually without the help of xapi/squeezed.

No. You might find that it works sometimes but going behind the
toolstack's back is not supported.

> (Again, if I'm wrong, I'd
> really like to know where.) As far as I understand, xapi writes
> dynamic-min and dynamic-max to xenstore for the domain and calls squeezed
> via xenstore-rpc to rebalance memory; squeezed reads the request, reads
> those values, calculates a new target and writes it to xenstore for each
> domain.

You'd have to ask xen-api@ about that stuff. I'm sure you can disable
squeezed for a host or a domain without killing the daemon(s) and doing
things manually. Setting dynamic-min and max to the same thing probably
has the same effect.
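
In XCP terms that is presumably something like the following - parameter
names per the xe CLI, so check your version's documentation:

xe vm-memory-limits-set uuid=<vm-uuid> static-min=1GiB dynamic-min=1GiB \
dynamic-max=1GiB static-max=2GiB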

> My current description (I may be wrong) of what happens during memory
> resizing:

Step "0) Boot guest with static-max larger than initial allocation" is
important for this to work.

> 1) Set maxmem
> 2) Send the new target to xenballoon via memory/target
> 3) Xenballoon requests new memory from the hypervisor (or returns some of it)
> 4) Xenballoon manipulates domU memory management to make the new pages
> usable (or to snatch pages away).
> >> I tested it with the following Python script (under XCP, with squeezed
> >> shut down to be sure not to get 'rebalanced'):
> >>
> >> xc.domain_setmaxmem(domid,memory)
> >> xs.write("","/local/domain/%i/memory/dynamic-min"%domid,str(memory*1024))
> >> xs.write("","/local/domain/%i/memory/dynamic-max"%domid,str(memory*1024))
> >> xs.write("","/local/domain/%i/memory/static-max"%domid,str(memory*1024))
> >> xs.write("","/local/domain/%i/memory/target"%domid,str(memory*1024))
> >>
> >> It works fine within the noted ranges, but simply ignores any request
> >> beyond them.
> > If you are stepping outside the toolstack and doing this sort of thing
> > behind its back then all bets are off and you can't really expect people
> > here to help you.
> >
> > Please try and reproduce the issue using the standard toolstack options.
> >
> I understand that very well, so I've tested this (not with all kernel
> versions) with a normal XCP setup with default memory management:
> set up a VM, make dynamic-max 1GiB and static-max 2GiB, start the VM,
> use xe vm-memory-target-set to 2GiB - and memory is still 1GiB.
> I've tested this with a few pv_ops kernels, including those on
> xs-tools.iso (tested on XCP 0.5, 1.0 and 1.1, and even on XenServer 6) -
> same result.

And which result is that? What does "doesn't work" actually mean?

Please take a look at
http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen and consider
including your precise kernel configuration, appropriate log files,
dmesg etc etc. You might also be best off reporting as a separate thread
and since you are using XCP you should include xen-api@.

>
>
> P.S. Thank you for your attention to this issue.



Re: a ton of kernel issues [ In reply to ]
On Wed, Dec 14, 2011 at 4:05 AM, George Shuklin
<george.shuklin@gmail.com> wrote:
> Well... for us it makes a HUGE difference. We run a cloud with very precise
> accounting of fast automatic memory allocation for customers' domains, and
> we bill them per... em... KiB*hr. So if we bill domains based on the
> xc.get_domain_memkb value and customers see a difference between MemTotal
> and our values, this will make them feel like we are cheating. We are not
> greedy and are ready to 'cut' our mem_kb value down to their MemTotal, but
> I haven't found any formula for that (calculating MemTotal from static-max
> and dom_kb values), and I can't trust any data from customers' domains
> (except requests for memory, which we accept, serve and happily take money
> for as 'more memory').
>
> It sounds ridiculous, but we avoid pv_ops kernels because of that little
> ~50MB steal. We are stuck with -xen kernels (forward-ported from SUSE, plus
> native CentOS 2.6.18-xen). I don't know what we will do in the future, but
> right now pv-ops is not good...

Why not just explain what really happened in noob-friendly language?

For example, when buying a computer with a built-in graphics card,
MemTotal is always lower than the amount of memory actually
installed. Yet a simple explanation, "some of the memory is used by the
built-in graphics card", is enough, without having to go into the
details of the formula for how much memory is actually used.

--
Fajar

Re: a ton of kernel issues [ In reply to ]
On 12.12.11 22:56, Konrad Rzeszutek Wilk wrote:

> Did you try the latest and greatest? Meaning anything that is based on a
> 3.0 (or higher) kernel?

Not the latest. The Oneiric kernel is 3.0. Unfortunately I'm short on
time right now, but I will set up a complete test scenario and report back.

>> a migrated domU, some crash right at the migration, and some crash right
>> at the start if vcpu_avail > vcpus.
>
> Is this with more than 32 CPUs?

No, it happens with vcpu_avail=2 and vcpus=1.

>>
>> I've tested:
>>
>> xen-4.0
>> xen-4.1
>>
>> Debian 6, 32 and 64 bit (2.6.32-x kernels)
>> Ubuntu Lucid 32 and 64 bit kernels, including Karmic, Natty and even
>> Oneiric backports
>
> backports? Why not just .

Didn't get that. Until now I was trying to get by with the standard
distros' resources, since I have to hand some of the work to others whom
I do not want to confront with kernel-compiling stuff.

> Looking at your bug report, did you check the .config to see whether
> you had MAXSMP or NR_CPUS defined? If you had MAXSMP, there was a bug
> around that fixed in the 2.6.39 time-frame (I think). And if you look
> at NR_CPUS, is it greater than the number of virtual CPUs?
>
>
> Otherwise, does the virgin kernel boot if you specify maxvcpus=vcpus
> and then later on use xm to take CPUs away?

Yes, but it crashes sometimes if I re-plug the CPUs. I will include this
in my test scenario.

>
>>
>> I've posted some questions on the users mailing list and wrote bug reports,
>> but today, after two months of work on this, I'm as stuck as one can get.
>
> Uh? Hm somehow we [kernel engineers] never got copied on it.

Well, thanks for your prompt reply just now. I will do my best to put
all my findings together in a way that helps solve the problems (or
find the point where I made the big mistake :)

Regards,

Tim


Re: a ton of kernel issues [ In reply to ]
On Wed, 2011-12-14 at 07:25 +0000, Ian Campbell wrote:
>
> It controls precisely the behaviour you need! Try "maxmem=2048" and
> "memory=1024" in your guest configuration, it should boot with 1G of
> RAM and allow you to balloon to 2G and back.

I take it back, there is indeed a bug in the PV ops kernel in this
regard.

It works with xm/xend because they set the maximum reservation for
guests to static-max on boot. xl (and, I think, xapi) instead set the
maximum reservation to the current balloon target and change it
dynamically as the target is changed (as a method of enforcing the
targets). However the pvops kernel incorrectly uses the maximum
reservation at boot to size the physical address space for guests.

The patch below fixes this.

Ian.

8<-------------------------------------------------------------

From 649ca3b7ddca1cdda85c27e34f806f30484172ec Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 14 Dec 2011 12:00:38 +0000
Subject: [PATCH] xen: only limit memory map to maximum reservation for domain 0.

d312ae878b6a "xen: use maximum reservation to limit amount of usable RAM"
clamped the total amount of RAM to the current maximum reservation. This is
correct for dom0 but is not correct for guest domains. In order to boot a guest
"pre-ballooned" (e.g. with memory=1G but maxmem=2G) in order to allow for
future memory expansion the guest must derive max_pfn from the e820 provided by
the toolstack and not the current maximum reservation (which can reflect only
the current maximum, not the guest lifetime max). The existing algorithm
already behaves this correctly if we do not artificially limit the maximum
number of pages for the guest case.

For a guest booted with maxmem=512, memory=128 this results in:
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] Xen: 0000000000000000 - 00000000000a0000 (usable)
[ 0.000000] Xen: 00000000000a0000 - 0000000000100000 (reserved)
-[ 0.000000] Xen: 0000000000100000 - 0000000008100000 (usable)
-[ 0.000000] Xen: 0000000008100000 - 0000000020800000 (unusable)
+[ 0.000000] Xen: 0000000000100000 - 0000000020800000 (usable)
...
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI not present or invalid.
[ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
[ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
-[ 0.000000] last_pfn = 0x8100 max_arch_pfn = 0x1000000
+[ 0.000000] last_pfn = 0x20800 max_arch_pfn = 0x1000000
[ 0.000000] initial memory mapped : 0 - 027ff000
[ 0.000000] Base memory trampoline at [c009f000] 9f000 size 4096
-[ 0.000000] init_memory_mapping: 0000000000000000-0000000008100000
-[ 0.000000] 0000000000 - 0008100000 page 4k
-[ 0.000000] kernel direct mapping tables up to 8100000 @ 27bb000-27ff000
+[ 0.000000] init_memory_mapping: 0000000000000000-0000000020800000
+[ 0.000000] 0000000000 - 0020800000 page 4k
+[ 0.000000] kernel direct mapping tables up to 20800000 @ 26f8000-27ff000
[ 0.000000] xen: setting RW the range 27e8000 - 27ff000
[ 0.000000] 0MB HIGHMEM available.
-[ 0.000000] 129MB LOWMEM available.
-[ 0.000000] mapped low ram: 0 - 08100000
-[ 0.000000] low ram: 0 - 08100000
+[ 0.000000] 520MB LOWMEM available.
+[ 0.000000] mapped low ram: 0 - 20800000
+[ 0.000000] low ram: 0 - 20800000

With this change "xl mem-set <domain> 512M" will successfully increase the
guest RAM (by reducing the balloon).

There is no change for dom0.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: stable@kernel.org
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
arch/x86/xen/setup.c | 18 +++++++++++++++---
1 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 38d0af4..a54ff1a 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -173,9 +173,21 @@ static unsigned long __init xen_get_max_pages(void)
domid_t domid = DOMID_SELF;
int ret;

- ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
- if (ret > 0)
- max_pages = ret;
+ /*
+ * For the initial domain we use the maximum reservation as
+ * the maximum page.
+ *
+ * For guest domains the current maximum reservation reflects
+ * the current maximum rather than the static maximum. In this
+ * case the e820 map provided to us will cover the static
+ * maximum region.
+ */
+ if (xen_initial_domain()) {
+ ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
+ if (ret > 0)
+ max_pages = ret;
+ }
+
return min(max_pages, MAX_DOMAIN_PAGES);
}

--
1.7.2.5




Re: a ton of kernel issues [ In reply to ]
On 14/12/11 12:16, Ian Campbell wrote:
> From 649ca3b7ddca1cdda85c27e34f806f30484172ec Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 14 Dec 2011 12:00:38 +0000
> Subject: [PATCH] xen: only limit memory map to maximum reservation for domain 0.
>
[...]
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: stable@kernel.org
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: David Vrabel <david.vrabel@citrix.com>

or Reviewed-by if that's more appropriate.

David


Re: a ton of kernel issues [ In reply to ]
On 13/12/11 21:45, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 14, 2011 at 01:05:31AM +0400, George Shuklin wrote:
>
> David,
>
> You are thinking that this is the vmalloc vs kmalloc memory for the
> frontends?

That wasn't what I was thinking. When I looked (not very hard) at this
in dom0 I thought most of it was the swiotlb buffer.

David

Re: a ton of kernel issues [ In reply to ]
>>> On 14.12.11 at 13:16, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> It works with xm/xend because they set the maximum reservation for
> guests to static-max on boot. xl (and, I think, xapi) instead set the
> maximum reservation to the current balloon target and change it

Hmm, is that even compatible with the old (non-pvops) kernels? I
always assumed that the maximum reservation was a true maximum,
not something that can change at any time during the life of a VM.
What's the purpose of this being adjustable and there also being a
current target?

Is the old tool stack also setting up a suitable E820 table? Assuming
so, is it at all reasonable to have the outer world establish a static
upper bound on the VM if such an immutable upper bound doesn't
really exist (rather than having the VM determine for itself - via
"mem=" - how far it wants to be able to go)?

Jan

Re: a ton of kernel issues [ In reply to ]
On Wed, 2011-12-14 at 13:11 +0000, Jan Beulich wrote:
> >>> On 14.12.11 at 13:16, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > It works with xm/xend because they set the maximum reservation for
> > guests to static-max on boot. xl (and, I think, xapi) instead set the
> > maximum reservation to the current balloon target and change it
>
> Hmm, is that even compatible with the old (non-pvops) kernels? I
> always assumed that the maximum reservation was a true maximum,
> not something that can change at any time during the life of a VM.
> What's the purpose of this being adjustable and there also being a
> current target?

The target is a value which we ask the guest to aim for; the maximum is
a hard limit which Xen will actually enforce.

The toolstack can set them to the same value if it wants to make sure
that the guest does not try to exceed its current target. Setting the
maximum doesn't reduce the memory a guest is currently using, but it
does ensure that progress towards the target never regresses.

The xapi toolstack has been doing this for several years and we aren't
aware of any incompatibilities. There were some bugs in various kernels
(they would give up ballooning altogether if they hit the hard limit),
but those were all fixed quite some time ago, IIRC.

> Is the old tool stack also setting up a suitable E820 table?

I believe so; this behaviour long predates xl. The difference with xend
is that it sets the maximum not to the target but to the static-max;
this is a toolstack policy decision.

> Assuming
> so, is it at all reasonable to have the outer world establish a static
> upper bound on the VM if such an immutable upper bound doesn't
> really exist

Yes, this is what many host administrators want: to control this via
the host configuration APIs rather than by digging into each guest.

> (rather than having the VM determine for itself - via
> "mem=" - how far it wants to be able to go)?

mem= takes precedence over the e820 table, so we effectively have the
best of both worlds.

Ian.

>
> Jan
>
> > [quoted patch snipped; the full patch appears later in this thread]



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: a ton of kernel issues [ In reply to ]
On Wed, Dec 14, 2011 at 12:28:04PM +0000, David Vrabel wrote:
> On 13/12/11 21:45, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 14, 2011 at 01:05:31AM +0400, George Shuklin wrote:
> >>
> >> On 13.12.2011 17:17, David Vrabel wrote:
> >>>>>> pv_ops still has some issues with memory limits, but any new
> >>>>>> kernel (3.0+) will boot normally and operate with very minor
> >>>>>> glitches. Older pv_ops kernels (e.g. debian 2.6.32) have some
> >>>>>> more major issues.
> >>>>> what glitches should one expect with 3.0+, and having the choice,
> >>>>> would it be better to go with 3.1 or even 3.2?
> >>>> Right now I know about two of them:
> >>>> When you set up memory for a virtual machine using xenballoon,
> >>>> the value in dom0 differs from the value in domU. The issue is
> >>>> that -xen kernels 'hide' some memory in 'used' memory, while
> >>>> pv-ops just reduces TotalMem to a value excluding that memory.
> >>>> Practically, that means if you set a domain's memory to 2GiB the
> >>>> client will see only 1.95GiB, and so on.
> >>> This really makes no practical difference. The memory is "used" in
> >>> either case, and the different reporting is a side-effect of the
> >>> change in how certain memory allocations are done.
> >
> > David,
> >
> > Are you thinking that this is the vmalloc vs kmalloc memory for the
> > frontends?
>
> That wasn't what I was thinking. When I looked (not very hard) at this
> in dom0 I thought most of it was the swiotlb buffer.

Ok, that is dom0, but for domU the swiotlb buffer should not be enabled.
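
For anyone who wants to see the discrepancy George describes, a trivial
check (an editorial sketch; the exact gap depends on the kernel) is to
read MemTotal inside the guest and compare it with the toolstack's view
of the domain:

#include <stdio.h>
#include <string.h>

/* Print the guest's own idea of its total memory (KiB); compare the
 * result with what "xl list" reports for the domain in dom0. */
int main(void)
{
    char line[128];
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "MemTotal:", 9))
            fputs(line, stdout);
    fclose(f);
    return 0;
}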

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: a ton of kernel issues [ In reply to ]
On Wed, Dec 14, 2011 at 12:16:08PM +0000, Ian Campbell wrote:
> On Wed, 2011-12-14 at 07:25 +0000, Ian Campbell wrote:
> >
> > It controls precisely the behaviour you need! Try "maxmem=2048" and
> > "memory=1024" in your guest configuration; it should boot with 1G of
> > RAM and allow you to balloon to 2G and back.
>
> I take it back: there is indeed a bug in the PV ops kernel in this
> regard.
>
> It works with xm/xend because they set the maximum reservation for
> guests to static-max on boot. xl (and, I think, xapi) instead set the
> maximum reservation to the current balloon target and change it
> dynamically as the target is changed (as a method of enforcing the
> targets). However the pvops kernel incorrectly uses the maximum
> reservation at boot to size the physical address space for guests.
>
> The patch below fixes this.

Yup. It fixes it when using 'xl'. Thx
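
(A hypothetical session exercising the fix; the guest name and values
are illustrative:

  xl create guest.cfg     # config has memory=128, maxmem=512
  xl mem-set guest 512M   # balloon up to the static maximum
  xl list guest           # the Mem column should now read ~512

Without the patch, a pre-ballooned guest sizes its physical address
space from the boot-time maximum reservation and cannot balloon above
its boot-time allocation.)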
>
> Ian.
>
> 8<-------------------------------------------------------------
>
> From 649ca3b7ddca1cdda85c27e34f806f30484172ec Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 14 Dec 2011 12:00:38 +0000
> Subject: [PATCH] xen: only limit memory map to maximum reservation for domain 0.
>
> d312ae878b6a "xen: use maximum reservation to limit amount of usable RAM"
> clamped the total amount of RAM to the current maximum reservation. This is
> correct for dom0 but not for guest domains. To boot a guest "pre-ballooned"
> (e.g. with memory=1G but maxmem=2G), allowing for future memory expansion,
> the guest must derive max_pfn from the e820 map provided by the toolstack
> and not from the current maximum reservation (which can reflect only the
> current maximum, not the guest lifetime max). The existing algorithm
> already behaves correctly if we do not artificially limit the maximum
> number of pages for the guest case.
>
> For a guest booted with maxmem=512, memory=128 this results in:
> [ 0.000000] BIOS-provided physical RAM map:
> [ 0.000000] Xen: 0000000000000000 - 00000000000a0000 (usable)
> [ 0.000000] Xen: 00000000000a0000 - 0000000000100000 (reserved)
> -[ 0.000000] Xen: 0000000000100000 - 0000000008100000 (usable)
> -[ 0.000000] Xen: 0000000008100000 - 0000000020800000 (unusable)
> +[ 0.000000] Xen: 0000000000100000 - 0000000020800000 (usable)
> ...
> [ 0.000000] NX (Execute Disable) protection: active
> [ 0.000000] DMI not present or invalid.
> [ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
> [ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
> -[ 0.000000] last_pfn = 0x8100 max_arch_pfn = 0x1000000
> +[ 0.000000] last_pfn = 0x20800 max_arch_pfn = 0x1000000
> [ 0.000000] initial memory mapped : 0 - 027ff000
> [ 0.000000] Base memory trampoline at [c009f000] 9f000 size 4096
> -[ 0.000000] init_memory_mapping: 0000000000000000-0000000008100000
> -[ 0.000000] 0000000000 - 0008100000 page 4k
> -[ 0.000000] kernel direct mapping tables up to 8100000 @ 27bb000-27ff000
> +[ 0.000000] init_memory_mapping: 0000000000000000-0000000020800000
> +[ 0.000000] 0000000000 - 0020800000 page 4k
> +[ 0.000000] kernel direct mapping tables up to 20800000 @ 26f8000-27ff000
> [ 0.000000] xen: setting RW the range 27e8000 - 27ff000
> [ 0.000000] 0MB HIGHMEM available.
> -[ 0.000000] 129MB LOWMEM available.
> -[ 0.000000] mapped low ram: 0 - 08100000
> -[ 0.000000] low ram: 0 - 08100000
> +[ 0.000000] 520MB LOWMEM available.
> +[ 0.000000] mapped low ram: 0 - 20800000
> +[ 0.000000] low ram: 0 - 20800000
>
> With this change "xl mem-set <domain> 512M" will successfully increase the
> guest RAM (by reducing the balloon).
>
> There is no change for dom0.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: stable@kernel.org
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> arch/x86/xen/setup.c | 18 +++++++++++++++---
> 1 files changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 38d0af4..a54ff1a 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -173,9 +173,21 @@ static unsigned long __init xen_get_max_pages(void)
> domid_t domid = DOMID_SELF;
> int ret;
>
> - ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
> - if (ret > 0)
> - max_pages = ret;
> + /*
> + * For the initial domain we use the maximum reservation as
> + * the maximum page.
> + *
> + * For guest domains the current maximum reservation reflects
> + * the current maximum rather than the static maximum. In this
> + * case the e820 map provided to us will cover the static
> + * maximum region.
> + */
> + if (xen_initial_domain()) {
> + ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
> + if (ret > 0)
> + max_pages = ret;
> + }
> +
> return min(max_pages, MAX_DOMAIN_PAGES);
> }
>
> --
> 1.7.2.5
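
(A quick editorial sanity check on the log above: last_pfn = 0x8100 is
33024 pages, i.e. 33024 * 4 KiB = 129 MiB, matching the old "129MB
LOWMEM" line, while last_pfn = 0x20800 is 133120 pages = 520 MiB,
matching the new line and, modulo reserved low ranges and toolstack
slack, the maxmem=512 configuration.)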

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel