Mailing List Archive

[Patch][resend] implementation of cpupool support in xl
Hi,

attached patch implements cpupool support in xl. Rewrite of previous patch
after rework of libxl.


Juergen

--
Juergen Gross Principal Developer Operating Systems
TSP ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28 Internet: ts.fujitsu.com
D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html
Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
> diff -r 3985fea87987 tools/libxl/libxl.idl
> --- a/tools/libxl/libxl.idl Fri Sep 10 19:06:33 2010 +0100
> +++ b/tools/libxl/libxl.idl Wed Sep 15 09:19:02 2010 +0200
> @@ -43,7 +43,11 @@ SHUTDOWN_* constant."""),
> ], destructor_fn=None)
>
> libxl_poolinfo = Struct("poolinfo", [
> - ("poolid", uint32)
> + ("poolid", uint32),
> + ("sched_id", uint32),
> + ("n_dom", uint32),
> + ("cpumap_size", uint32),
> + ("cpumap", libxl_cpumap)
> ], destructor_fn=None)
>
> libxl_vminfo = Struct("vminfo", [

Does the addition of the cpumap field here mean that we now need to
generate a destructor function (by removing destructor_fn=None) and call
it e.g. from main_pool*?

Would it make sense to turn libxl_cpumap into a struct containing both
the size and the data pointer?
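For illustration, the combined type being proposed could look roughly like this in C (a sketch only: the field names are guesses, and the real type would be generated from libxl.idl):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical shape of a libxl_cpumap that carries its own size;
 * the real layout would be generated from the IDL. */
typedef struct {
    uint32_t size;   /* size of the map, in bytes */
    uint64_t *map;   /* bitmap of CPUs, one bit per CPU */
} libxl_cpumap;

/* With the size embedded, a destructor only needs the struct itself. */
static void libxl_cpumap_destroy(libxl_cpumap *c)
{
    free(c->map);
    c->map = NULL;
    c->size = 0;
}
```

With the size inside the struct, callers no longer have to pass a separate `cpumap_size` field alongside every map.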

> diff -r 3985fea87987 tools/libxl/libxl.h
> --- a/tools/libxl/libxl.h Fri Sep 10 19:06:33 2010 +0100
> +++ b/tools/libxl/libxl.h Wed Sep 15 09:19:02 2010 +0200
> @@ -471,6 +471,15 @@ int libxl_device_net2_del(libxl_ctx *ctx
> int libxl_device_net2_del(libxl_ctx *ctx, libxl_device_net2 *net2,
> int wait);
>
> +int libxl_get_freecpus(libxl_ctx *ctx, int *n_cpus, uint64_t **cpumap);
> +int libxl_create_cpupool(libxl_ctx *ctx, char *name, int schedid,
> + uint64_t *cpumap, int n_cpus, libxl_uuid *uuid,
> + uint32_t *poolid);

Should these cpumap parameters be libxl_cpumap* or are they a different sort of cpumap?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: [Patch][resend] implementation of cpupool support in xl
On 09/15/10 10:29, Ian Campbell wrote:
> On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
>> diff -r 3985fea87987 tools/libxl/libxl.idl
>> --- a/tools/libxl/libxl.idl Fri Sep 10 19:06:33 2010 +0100
>> +++ b/tools/libxl/libxl.idl Wed Sep 15 09:19:02 2010 +0200
>> @@ -43,7 +43,11 @@ SHUTDOWN_* constant."""),
>> ], destructor_fn=None)
>>
>> libxl_poolinfo = Struct("poolinfo", [
>> - ("poolid", uint32)
>> + ("poolid", uint32),
>> + ("sched_id", uint32),
>> + ("n_dom", uint32),
>> + ("cpumap_size", uint32),
>> + ("cpumap", libxl_cpumap)
>> ], destructor_fn=None)
>>
>> libxl_vminfo = Struct("vminfo", [
>
> Does the addition of the cpumap field here mean that we now need to
> generate a destructor function (by removing destructor_fn=None) and call
> it e.g. from main_pool*?

I took care of this by allocating the space for the cpumap(s) together with
the poolinfo structure(s).
If you don't like this, a destructor would be the correct solution, I think.

> Would it make sense to turn libxl_cpumap into a struct containing both
> the size and the data pointer?

IMO this would make sense. You ALWAYS need the size of a cpumap to handle it
correctly.

>
>> diff -r 3985fea87987 tools/libxl/libxl.h
>> --- a/tools/libxl/libxl.h Fri Sep 10 19:06:33 2010 +0100
>> +++ b/tools/libxl/libxl.h Wed Sep 15 09:19:02 2010 +0200
>> @@ -471,6 +471,15 @@ int libxl_device_net2_del(libxl_ctx *ctx
>> int libxl_device_net2_del(libxl_ctx *ctx, libxl_device_net2 *net2,
>> int wait);
>>
>> +int libxl_get_freecpus(libxl_ctx *ctx, int *n_cpus, uint64_t **cpumap);
>> +int libxl_create_cpupool(libxl_ctx *ctx, char *name, int schedid,
>> + uint64_t *cpumap, int n_cpus, libxl_uuid *uuid,
>> + uint32_t *poolid);
>
> Should these cpumap parameters be libxl_cpumap* or are they a different sort of cpumap?

You are right, these should be libxl_cpumap.
I'll update the patch. It would be nice to know whether you are planning to
change libxl_cpumap to include the size or not.


Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 09:45 +0100, Juergen Gross wrote:
> On 09/15/10 10:29, Ian Campbell wrote:
> > On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
> >> diff -r 3985fea87987 tools/libxl/libxl.idl
> >> --- a/tools/libxl/libxl.idl Fri Sep 10 19:06:33 2010 +0100
> >> +++ b/tools/libxl/libxl.idl Wed Sep 15 09:19:02 2010 +0200
> >> @@ -43,7 +43,11 @@ SHUTDOWN_* constant."""),
> >> ], destructor_fn=None)
> >>
> >> libxl_poolinfo = Struct("poolinfo", [
> >> - ("poolid", uint32)
> >> + ("poolid", uint32),
> >> + ("sched_id", uint32),
> >> + ("n_dom", uint32),
> >> + ("cpumap_size", uint32),
> >> + ("cpumap", libxl_cpumap)
> >> ], destructor_fn=None)
> >>
> >> libxl_vminfo = Struct("vminfo", [
> >
> > Does the addition of the cpumap field here mean that we now need to
> > generate a destructor function (by removing destructor_fn=None) and call
> > it e.g. from main_pool*?
>
> I took care of this by allocating the space for the cpumap(s) together with
> the poolinfo structure(s).
> If you don't like this, a destructor would be the correct solution, I think.

Personally I would prefer using the destructor style for consistency
with other libxl types.

> I'll update the patch. It would be nice to know whether you are planning to
> change libxl_cpumap to include the size or not.

I wasn't immediately planning on it but I can if you don't want to do it
as part of this patchset.

Thanks,
Ian.



Re: [Patch][resend] implementation of cpupool support in xl
On 09/15/10 11:16, Ian Campbell wrote:
> On Wed, 2010-09-15 at 09:45 +0100, Juergen Gross wrote:
>> On 09/15/10 10:29, Ian Campbell wrote:
>>> On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
>>>> diff -r 3985fea87987 tools/libxl/libxl.idl
>>>> --- a/tools/libxl/libxl.idl Fri Sep 10 19:06:33 2010 +0100
>>>> +++ b/tools/libxl/libxl.idl Wed Sep 15 09:19:02 2010 +0200
>>>> @@ -43,7 +43,11 @@ SHUTDOWN_* constant."""),
>>>> ], destructor_fn=None)
>>>>
>>>> libxl_poolinfo = Struct("poolinfo", [
>>>> - ("poolid", uint32)
>>>> + ("poolid", uint32),
>>>> + ("sched_id", uint32),
>>>> + ("n_dom", uint32),
>>>> + ("cpumap_size", uint32),
>>>> + ("cpumap", libxl_cpumap)
>>>> ], destructor_fn=None)
>>>>
>>>> libxl_vminfo = Struct("vminfo", [
>>>
>>> Does the addition of the cpumap field here mean that we now need to
>>> generate a destructor function (by removing destructor_fn=None) and call
>>> it e.g. from main_pool*?
>>
>> I took care of this by allocating the space for the cpumap(s) together with
>> the poolinfo structure(s).
>> If you don't like this, a destructor would be the correct solution, I think.
>
> Personally I would prefer using the destructor style for consistency
> with other libxl types.

Okay, I'll change it accordingly.

>
>> I'll update the patch. It would be nice to know whether you are planning to
>> change libxl_cpumap to include the size or not.
>
> I wasn't immediately planning on it but I can if you don't want to do it
> as part of this patchset.

I think it shouldn't be included in this patch. :-)
And I'm not sure I'll get the generation of the bindings right.
So yes, please do it!


Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 10:23 +0100, Juergen Gross wrote:
>
> And I'm not sure I'll get the generation of the bindings right.
> So yes, please do it!

Will do.

From looking at your patch it seems that the intention is that the size
is the number of bytes in the map, rather than the number of CPUs which
can be represented or the number of uint64_t's, is that right and/or
what you would like to use?

Ian.


Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 10:37 +0100, Ian Campbell wrote:
> On Wed, 2010-09-15 at 10:23 +0100, Juergen Gross wrote:
> >
> > And I'm not sure I'll get the generation of the bindings right.
> > So yes, please do it!
>
> Will do.
>
> From looking at your patch it seems that the intention is that the size
> is the number of bytes in the map, rather than the number of CPUs which
> can be represented or the number of uint64_t's, is that right and/or
> what you would like to use?

Seems like libxc counts bytes but libxl (with your patch) counts 64 bit
words.

There don't seem to be any users of the size currently, so I went with
bytes in the below (compile tested + examined generated code only) but
since your patch adds the first actual user feel free to change it if
that makes the libxl interface more suitable for your needs or whatever.

Ian.

diff -r 098790dd9327 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c Thu Sep 09 17:59:33 2010 +0100
+++ b/tools/libxl/libxl.c Wed Sep 15 11:19:35 2010 +0100
@@ -2924,8 +2924,9 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ct

num_cpuwords = ((physinfo.max_cpu_id + 64) / 64);
for (*nb_vcpu = 0; *nb_vcpu <= domaininfo.max_vcpu_id; ++*nb_vcpu, ++ptr) {
- ptr->cpumap = malloc(num_cpuwords * sizeof(*ptr->cpumap));
- if (!ptr->cpumap) {
+ ptr->cpumap.size = num_cpuwords * sizeof(*ptr->cpumap.map);
+ ptr->cpumap.map = malloc(ptr->cpumap.size);
+ if (!ptr->cpumap.map) {
return NULL;
}
if (xc_vcpu_getinfo(ctx->xch, domid, *nb_vcpu, &vcpuinfo) == -1) {
@@ -2933,7 +2934,7 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ct
return NULL;
}
if (xc_vcpu_getaffinity(ctx->xch, domid, *nb_vcpu,
- ptr->cpumap, ((*nrcpus) + 7) / 8) == -1) {
+ ptr->cpumap.map, ((*nrcpus) + 7) / 8) == -1) {
LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "getting vcpu affinity");
return NULL;
}
diff -r 098790dd9327 tools/libxl/libxl.h
--- a/tools/libxl/libxl.h Thu Sep 09 17:59:33 2010 +0100
+++ b/tools/libxl/libxl.h Wed Sep 15 11:19:35 2010 +0100
@@ -138,8 +138,6 @@ typedef char **libxl_string_list;
typedef char **libxl_string_list;

typedef char **libxl_key_value_list;
-
-typedef uint64_t *libxl_cpumap;

typedef uint32_t libxl_hwcap[8];

diff -r 098790dd9327 tools/libxl/libxl.idl
--- a/tools/libxl/libxl.idl Thu Sep 09 17:59:33 2010 +0100
+++ b/tools/libxl/libxl.idl Wed Sep 15 11:19:35 2010 +0100
@@ -14,8 +14,6 @@ libxl_nic_type = Builtin("nic_type")

libxl_string_list = Builtin("string_list", destructor_fn="libxl_string_list_destroy", passby=PASS_BY_REFERENCE)
libxl_key_value_list = Builtin("key_value_list", destructor_fn="libxl_key_value_list_destroy", passby=PASS_BY_REFERENCE)
-
-libxl_cpumap = Builtin("cpumap", destructor_fn="free")

libxl_hwcap = Builtin("hwcap")

@@ -42,6 +40,11 @@ SHUTDOWN_* constant."""),
("vcpu_online", uint32),
], destructor_fn=None)

+libxl_cpumap = Struct("cpumap", [
+ ("size", uint32),
+ ("map", Reference(uint64)),
+ ])
+
libxl_poolinfo = Struct("poolinfo", [
("poolid", uint32)
], destructor_fn=None)
diff -r 098790dd9327 tools/libxl/libxltypes.py
--- a/tools/libxl/libxltypes.py Thu Sep 09 17:59:33 2010 +0100
+++ b/tools/libxl/libxltypes.py Wed Sep 15 11:19:35 2010 +0100
@@ -119,7 +119,10 @@ class Reference(Type):
kwargs.setdefault('passby', PASS_BY_VALUE)

kwargs.setdefault('namespace', ty.namespace)
- typename = ty.typename[len(kwargs['namespace']):]
+
+ typename = ty.typename
+ if ty.namespace:
+ typename = typename[len(kwargs['namespace']):]
Type.__init__(self, typename + " *", **kwargs)

#
diff -r 098790dd9327 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c Thu Sep 09 17:59:33 2010 +0100
+++ b/tools/libxl/xl_cmdimpl.c Wed Sep 15 11:19:35 2010 +0100
@@ -3289,7 +3289,7 @@ static void print_vcpuinfo(uint32_t tdom
printf("%9.1f ", ((float)vcpuinfo->vcpu_time / 1e9));
/* CPU AFFINITY */
pcpumap = nr_cpus > 64 ? (uint64_t)-1 : ((1ULL << nr_cpus) - 1);
- for (cpumap = vcpuinfo->cpumap; nr_cpus; ++cpumap) {
+ for (cpumap = vcpuinfo->cpumap.map; nr_cpus; ++cpumap) {
if (*cpumap < pcpumap) {
break;
}
@@ -3304,7 +3304,7 @@ static void print_vcpuinfo(uint32_t tdom
if (!nr_cpus) {
printf("any cpu\n");
} else {
- for (cpumap = vcpuinfo->cpumap; nr_cpus; ++cpumap) {
+ for (cpumap = vcpuinfo->cpumap.map; nr_cpus; ++cpumap) {
pcpumap = *cpumap;
for (i = 0; !(pcpumap & 1); ++i, pcpumap >>= 1)
;



Re: [Patch][resend] implementation of cpupool support in xl
On 09/15/10 12:22, Ian Campbell wrote:
> On Wed, 2010-09-15 at 10:37 +0100, Ian Campbell wrote:
>> On Wed, 2010-09-15 at 10:23 +0100, Juergen Gross wrote:
>>>
>>> And I'm not sure I'll get the generation of the bindings right.
>>> So yes, please do it!
>>
>> Will do.
>>
>> From looking at your patch it seems that the intention is that the size
>> is the number of bytes in the map, rather than the number of CPUs which
>> can be represented or the number of uint64_t's, is that right and/or
>> what you would like to use?
>
> Seems like libxc counts bytes but libxl (with your patch) counts 64 bit
> words.
>
> There don't seem to be any users of the size currently, so I went with
> bytes in the below (compile tested + examined generated code only) but
> since your patch adds the first actual user feel free to change it if
> that makes the libxl interface more suitable for your needs or whatever.

Thanks, updated patch is attached.
I didn't add the destructor, because this would have made things much more
complicated, as libxl_list_pool returns an array of libxl_poolinfo elements.
Doing only one free() on the caller side is much easier than cycling through
the elements and freeing all the extra allocated cpumaps.


Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 12:28 +0100, Juergen Gross wrote:
> On 09/15/10 12:22, Ian Campbell wrote:
> > On Wed, 2010-09-15 at 10:37 +0100, Ian Campbell wrote:
> >> On Wed, 2010-09-15 at 10:23 +0100, Juergen Gross wrote:
> >>>
> >>> And I'm not sure I'll get the generation of the bindings right.
> >>> So yes, please do it!
> >>
> >> Will do.
> >>
> >> From looking at your patch it seems that the intention is that the size
> >> is the number of bytes in the map, rather than the number of CPUs which
> >> can be represented or the number of uint64_t's, is that right and/or
> >> what you would like to use?
> >
> > Seems like libxc counts bytes but libxl (with your patch) counts 64 bit
> > words.
> >
> > There don't seem to be any users of the size currently, so I went with
> > bytes in the below (compile tested + examined generated code only) but
> > since your patch adds the first actual user feel free to change it if
> > that makes the libxl interface more suitable for your needs or whatever.
>
> Thanks, updated patch is attached.
> I didn't add the destructor, because this would have made things much more
> complicated, as libxl_list_pool returns an array of libxl_poolinfo elements.
> Doing only one free() on the caller side is much easier than cycling through
> the elements and freeing all the extra allocated cpumaps.

I'm afraid I think you do need to cycle through the elements calling
libxl_poolinfo_destroy on each. This is consistent with how all other
libxl types are handled and means that users of the library are isolated
from a certain amount of change to the underlying types.

Often you can destroy each element as you walk over the list, doing
whatever it is you needed the list for; for example, see how
xl_cmdimpl.c:print_domain_vcpuinfo does it. libxl_list_vcpu has
basically the same semantics as libxl_list_pool should have.
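The consume-and-destroy pattern being described, sketched with stand-in types (the names poolinfo, poolinfo_destroy and show_pools are illustrative; real code would use libxl_poolinfo and libxl_poolinfo_destroy):

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for the libxl types, just to make the pattern concrete. */
typedef struct {
    unsigned poolid;
    unsigned long *cpumap;  /* per-element allocation */
} poolinfo;

static void poolinfo_destroy(poolinfo *p)
{
    free(p->cpumap);
    p->cpumap = NULL;
}

/* Use each element, release it, then free the array itself
 * with a single free() at the end. */
static void show_pools(poolinfo *info, int nb_pool)
{
    int i;
    for (i = 0; i < nb_pool; i++) {
        printf("pool %u\n", info[i].poolid);  /* use the element ... */
        poolinfo_destroy(&info[i]);           /* ... then release it */
    }
    free(info);
}
```

The per-element destructor keeps callers insulated from whatever the struct allocates internally, at the cost of one extra call per element in the walk.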

Ian.


Re: [Patch][resend] implementation of cpupool support in xl
On 09/15/10 17:35, Ian Campbell wrote:
> I'm afraid I think you do need to cycle through the elements calling
> libxl_poolinfo_destroy on each. This is consistent with how all other
> libxl types are handled and means that users of the library are isolated
> from a certain amount of change to the underlying types.
>
> Often you can destroy each element as you walk over the list, doing
> whatever it is you needed the list for; for example, see how
> xl_cmdimpl.c:print_domain_vcpuinfo does it. libxl_list_vcpu has
> basically the same semantics as libxl_list_pool should have.

Okay, new patch attached.


Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 15/09/2010 at 09:26 +0200, Juergen Gross wrote:
> Hi,
>
> attached patch implements cpupool support in xl. Rewrite of previous patch
> after rework of libxl.
>
>
> Juergen

Sorry if this breaks the thread, but what are the benefits of using cpu
pools in Xen? Can you provide some use cases for cpu pools?

Thank you.

--
Vasiliy G Tolstov <v.tolstov@selfip.ru>
Selfip.Ru


Re: [Patch][resend] implementation of cpupool support in xl
On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
> Hi,
>
> attached patch implements cpupool support in xl. Rewrite of previous patch
> after rework of libxl.

Thanks!

Please could you provide a changelog message? It seems you needed to
modify the libxc interface in order to support libxl; it would be worth
commenting on what you changed and why.

In several places I notice you calculate (X + 64) / 64. Normally I would
expect a (X + 63) / 64 pattern to round something up (for example in
other places you use + 7 / 8). Is this deliberate? Probably worth a
comment, or maybe wrap the concept into a function with an appropriate
name?
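To spell out the arithmetic (my own illustration, not from the patch): (X + 63) / 64 rounds a CPU *count* up to a number of 64-bit words, whereas (X + 64) / 64 gives the same result only when X is a maximum CPU ID, i.e. one less than the count:

```c
/* Round a number of CPUs up to the number of uint64_t map words. */
static unsigned words_for_cpus(unsigned ncpus)
{
    return (ncpus + 63) / 64;
}

/* The (X + 64) / 64 form takes a max CPU ID, which is ncpus - 1,
 * so the two expressions agree:
 * (max_id + 64) / 64 == ((ncpus - 1) + 64) / 64 == (ncpus + 63) / 64 */
static unsigned words_for_max_id(unsigned max_cpu_id)
{
    return (max_cpu_id + 64) / 64;
}
```

So the pattern is deliberate when the input is max_cpu_id, but it looks like an off-by-one next to the usual (X + 7) / 8 idiom, which is why a named helper is worth having.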

I think the units of libxl_cpumap.size might be worth a comment too,
especially since appears to differ from the units used in the libxc
interface. You can do this in the IDL with e.g.
- ("size", uint32),
+ ("size", uint32, False, "number of <bananas> in map"),
("map", Reference(uint64)),

I think <bananas> == "uint64_t sized words"? Perhaps size would be better
represented as the maximum CPU ID in the map? The fact that the map is
rounded up to the next 64 bit boundary isn't too interesting to callers,
is it?

I saw a "size * 8" which I think would be better as "size *
sizeof(*....map)", although perhaps this usage reinforces the idea that
the size should be in CPUs not multiples of some word size? Especially
given that this is seen in the caller who I guess shouldn't really need
to know these details.

Similarly callers seem to be doing "map[i / 64] & (i % 64)" type
arithmetic; I think providing a libxl_cpumap_cpu_present helper (and
_set/_clear if necessary) would help here.
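A sketch of what such helpers might look like (hypothetical standalone versions; the real ones would be libxl_ functions operating on a libxl_cpumap):

```c
#include <stdint.h>

/* Hypothetical cpumap bit helpers as suggested above; 'map' is an
 * array of uint64_t words with one bit per CPU. */
static int cpumap_cpu_present(const uint64_t *map, int cpu)
{
    return !!(map[cpu / 64] & ((uint64_t)1 << (cpu % 64)));
}

static void cpumap_set(uint64_t *map, int cpu)
{
    map[cpu / 64] |= (uint64_t)1 << (cpu % 64);
}

static void cpumap_clear(uint64_t *map, int cpu)
{
    map[cpu / 64] &= ~((uint64_t)1 << (cpu % 64));
}
```

Note that a correct presence test needs a 1 << (cpu % 64) mask rather than the bare (i % 64) shorthand above, which is exactly the kind of detail a helper hides from callers.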

Ian.


Re: [Patch][resend] implementation of cpupool support in xl
On 09/16/10 12:45, Ian Campbell wrote:
> On Wed, 2010-09-15 at 08:26 +0100, Juergen Gross wrote:
>> Hi,
>>
>> attached patch implements cpupool support in xl. Rewrite of previous patch
>> after rework of libxl.
>
> Thanks!
>
> Please could you provide a changelog message? It seems you needed to
> modify the libxc interface in order to support libxl; it would be worth
> commenting on what you changed and why.

Okay, done.

> In several places I notice you calculate (X + 64) / 64. Normally I would
> expect a (X + 63) / 64 pattern to round something up (for example in
> other places you use + 7 / 8). Is this deliberate? Probably worth a
> comment, or maybe wrap the concept into a function with an appropriate
> name?

This was the conversion from max_cpu_id (which is 1 less than the max
number of cpus) to the number of map elements. I've moved this into its
own function for allocating a cpumap.map.
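Such an allocation helper might look like the following (a sketch under assumed names; the actual function is in the updated patch, which is not reproduced here):

```c
#include <stdint.h>
#include <stdlib.h>

/* Allocate a zeroed cpumap large enough for CPUs 0..max_cpu_id.
 * Returns the map and stores its size in bytes in *size_out.
 * Names are illustrative, not the ones from the patch. */
static uint64_t *alloc_cpumap(uint32_t max_cpu_id, uint32_t *size_out)
{
    uint32_t words = (max_cpu_id + 64) / 64;  /* max ID -> word count */
    uint64_t *map = calloc(words, sizeof(*map));

    if (map)
        *size_out = words * sizeof(*map);
    return map;
}
```

Centralising the (max_cpu_id + 64) / 64 conversion in one place removes the ambiguity Ian pointed out about whether the rounding is deliberate.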

> I think the units of libxl_cpumap.size might be worth a comment too,
> especially since appears to differ from the units used in the libxc
> interface. You can do this in the IDL with e.g.
> - ("size", uint32),
> + ("size", uint32, False, "number of <bananas> in map"),
> ("map", Reference(uint64)),

Okay.

> I think <bananas> == "uint64_t sized words"? Perhaps size would be better
> represented as the maximum CPU ID in the map? The fact that the map is
> rounded up to the next 64 bit boundary isn't too interesting to callers,
> is it?

I kept your proposal to keep the size in bytes (like in xc).

> I saw a "size * 8" which I think would be better as "size *
> sizeof(*....map)",

The 8 is the number of bits in a byte (size being number of bytes).

> Similarly callers seem to be doing "map[i / 64] & (i % 64)" type
> arithmetic; I think providing a libxl_cpumap_cpu_present helper (and
> _set/_clear if necessary) would help here.

Done that, too.


Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On 09/16/10 07:58, Vasiliy G Tolstov wrote:
> Sorry if this breaks the thread, but what are the benefits of using cpu
> pools in Xen? Can you provide some use cases for cpu pools?

Without cpu pools the whole Xen server is running one scheduler which is
scheduling all domains on all cpus (possibly with some restrictions due to
cpu pinning). With cpu pools each pool is running its own scheduler
responsible only for some cpus and some domains. This may be important if
you want to:
- isolate some domains from others regarding cpu usage
- restrict a group of domains to a subset of cpus (e.g. due to license
restrictions), while scheduling parameters should still be working (this was
our main problem without cpu pools: scheduling weights and cpu pinning are
not working together very well)
- use different scheduling strategies for different domains (e.g. credit
scheduler for some domains and credit2 or sedf for others)

For an extended discussion on cpu pools please see:
http://lists.xensource.com/archives/html/xen-devel/2009-07/msg01116.html


HTH Juergen

Re: [Patch][resend] implementation of cpupool support in xl
On Thu, 16/09/2010 at 15:01 +0200, Juergen Gross wrote:
> On 09/16/10 07:58, Vasiliy G Tolstov wrote:
> > Sorry if this breaks the thread, but what are the benefits of using cpu
> > pools in Xen? Can you provide some use cases for cpu pools?
>
> Without cpu pools the whole Xen server is running one scheduler which is
> scheduling all domains on all cpus (possibly with some restrictions due to
> cpu pinning). With cpu pools each pool is running its own scheduler
> responsible only for some cpus and some domains. This may be important if
> you want to:
> - isolate some domains from others regarding cpu usage
> - restrict a group of domains to a subset of cpus (e.g. due to license
> restrictions), while scheduling parameters should still be working (this was
> our main problem without cpu pools: scheduling weights and cpu pinning are
> not working together very well)
> - use different scheduling strategies for different domains (e.g. credit
> scheduler for some domains and credit2 or sedf for others)
>
> For an extended discussion on cpu pools please see:
> http://lists.xensource.com/archives/html/xen-devel/2009-07/msg01116.html
>
>
> HTH Juergen
>

Thank you very much!

Re: [Patch][resend] implementation of cpupool support in xl
Juergen Gross writes ("Re: [Xen-devel] [Patch][resend] implementation of cpupool support in xl"):
> To be able to support arbitrary numbers of physical cpus it was necessary to
> include the size of cpumaps in the xc-interfaces for cpu pools as well.
> These were:
> definition of xc_cpupoolinfo_t
> xc_cpupool_getinfo()
> xc_cpupool_freeinfo()

I think it would make sense to separate out this part of the patch
(which also touches tools/python). Can the consequences in libxl of
this change to the xc_ functions be decoupled from the extra
functionality of the new libxl and xl features?

I think ideally this would be three patches:
libxc: cpupools: support arbitrary number of physical cpus
libxl: support more cpupool operations
xl: support cpupools
although splitting the 2nd and 3rd apart is trivial so I could do it
myself :-).

Ian.

Re: [Patch][resend] implementation of cpupool support in xl
Juergen Gross writes ("Re: [Xen-devel] [Patch][resend] implementation of cpupool support in xl"):
> Okay, new patch attached.

Thanks. See the comments in my other mail, just sent. Also, can you
please refresh your patch(es) against xen-unstable staging? There
are a few conflicts when I test-apply it right now.

Ian.
