Mailing List Archive

[PATCH 0/2] enable event channel wake-up for mem_event interfaces
Currently the mem_event code requires a domctl to kick the hypervisor
and unpause vcpus. An event channel is used to notify dom0 of
requests placed in the ring, and it can similarly be used to notify
Xen, so as not to overuse domctls when running many domains with
mem_event interfaces (domctls are not a great interface for this sort
of thing, because they will all be serialized).

This patch set enables the use of the event channel to signal when a
response is placed in a mem_event ring.
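
To make this concrete, here is roughly what the dom0 side then looks
like: the helper pushes its response into the back ring and kicks Xen
over the event channel it already has bound, instead of issuing the
resume domctl. This is only an illustrative sketch, not the patch
itself; put_response_and_kick() is a made-up name, and the ring and
libxc calls follow the public headers and existing tools rather than
the exact code in the series.

/*
 * Illustrative sketch only.  Assumes the shared ring has been mapped
 * and an event channel bound to Xen with xc_evtchn_bind_interdomain();
 * "port" is the local port returned by that call.  Include paths and
 * the helper name are illustrative.
 */
#include <string.h>
#include <xenctrl.h>
#include <xen/mem_event.h>
#include <xen/io/ring.h>

void put_response_and_kick(xc_evtchn *xce, evtchn_port_t port,
                           mem_event_back_ring_t *back_ring,
                           mem_event_response_t *rsp)
{
    RING_IDX rsp_prod = back_ring->rsp_prod_pvt;

    /* Place the response in the shared ring. */
    memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp));
    back_ring->rsp_prod_pvt = rsp_prod + 1;
    RING_PUSH_RESPONSES(back_ring);

    /* Kick Xen over the event channel; previously a resume domctl
     * (e.g. xc_mem_paging_resume()) was needed before the paused
     * vCPU would run again. */
    xc_evtchn_notify(xce, port);
}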

The two patches are as follows:
- The first patch tweaks the mem_event code to ensure that no events
are lost. Instead of calling get_response once per kick, we may have
to pull out multiple events at a time (a rough sketch of this loop
appears right after this list). This doesn't affect normal operation
with the domctls.
This patch also ensures that each vCPU can generate a request in each
mem_event ring (i.e. put_request will always work), by appropriately
pausing vCPUs after requests are placed.
- The second patch breaks the Xen-side event channel handling out into
a new arch-specific file "events.c", and adds cases for the different
Xen-handled event channels. This formalizes the tiny exception that
was in place for just qemu in event_channel.c.
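
As a rough illustration of the first patch's drain loop (mentioned in
the list above), the Xen-side consumer ends up looking something like
the following. Again, this is only a sketch: mem_event_drain_responses()
and handle_one_response() are illustrative names rather than the
functions in the patch, and the include paths and ring macros follow
the public headers.

/*
 * Illustrative sketch only.  Drain every pending response per
 * notification instead of assuming one kick means one response.
 */
#include <xen/sched.h>
#include <public/mem_event.h>
#include <public/io/ring.h>

/* Stand-in for the existing per-response handling, e.g. unpausing
 * the vCPU that raised the matching request. */
void handle_one_response(struct domain *d, mem_event_response_t *rsp);

void mem_event_drain_responses(struct domain *d,
                               mem_event_front_ring_t *front_ring)
{
    mem_event_response_t rsp;

    while ( RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
    {
        /* Copy the response out of the shared ring and consume it... */
        rsp = *RING_GET_RESPONSE(front_ring, front_ring->rsp_cons);
        front_ring->rsp_cons++;

        /* ...then handle it exactly as the domctl path does. */
        handle_one_response(d, &rsp);
    }
}

The loop matters because event channel notifications can coalesce, so
a single kick may cover several responses.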

All the domctls are still in place and everything should be backwards
compatible.

Cheers,
-Adin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

Re: [PATCH 0/2] enable event channel wake-up for mem_event interfaces
On 28/09/2011 14:22, "Adin Scannell" <adin@gridcentric.com> wrote:

> Currently the mem_event code requires a domctl to kick the hypervisor
> and unpause vcpus. An event channel is used to notify dom0 of
> requests placed in the ring, and it can similarly be used to notify
> Xen, so as not to overuse domctls when running many domains with
> mem_event interfaces (domctls are not a great interface for this sort
> of thing, because they will all be serialized).
>
> This patch set enables the use of the event channel to signal when a
> response is placed in a mem_event ring.

I don't have an opinion on patch 1/2. I'll leave it to someone else, like
Tim, to comment.

Patch 2/2 I don't mind the principle, but the implementation is not very
scalable. I will post a rewritten version to the list. It might be early
next week before I do so.

-- Keir

Re: [PATCH 0/2] enable event channel wake-up for mem_event interfaces
On 29/09/2011 07:21, "Keir Fraser" <keir.xen@gmail.com> wrote:

> Patch 2/2 I don't mind the principle, but the implementation is not very
> scalable. I will post a rewritten version to the list. It might be early
> next week before I do so.

I've attached it. Let me know how it works for you.

-- Keir

Re: [PATCH 0/2] enable event channel wake-up for mem_event interfaces
On 30/09/2011 12:39, "Keir Fraser" <keir.xen@gmail.com> wrote:

> I've attached it. Let me know how it works for you.

By the way my patch doesn't hook up event notification for the d->mem_share
structure. It doesn't look like d->mem_share.xen_port ever gets set up, and
your patches didn't appear to fix that either.

Re: [PATCH 0/2] enable event channel wake-up for mem_event interfaces
>>> Patch 2/2 I don't mind the principle, but the implementation is not very
>>> scalable. I will post a rewritten version to the list. It might be early
>>> next week before I do so.
>>
>> I've attached it. Let me know how it works for you.

Seems to work for me, thanks!

> By the way my patch doesn't hook up event notification for the d->mem_share
> structure. It doesn't look like d->mem_share.xen_port ever gets set up, and
> your patches didn't appear to fix that either.

Yeah, it seems that it is currently unused (unimplemented). I assume the
idea was to put OOM notifications (or maybe unshare notifications) in
that ring.

Once I hear back on the first patch, I will resend as a series (the
event mechanism for paging requires the first patch for correctness).

Cheers,
-Adin

Re: [PATCH 0/2] enable event channel wake-up for mem_event interfaces
On 30/09/2011 13:57, "Adin Scannell" <adin@gridcentric.com> wrote:

> Seems to work for me, thanks!
>
> Once I hear back on the first patch, I will resend as a series (the
> event mechanism for paging requires the first patch for correctness).

You can put my sign-off on the redone second patch when you re-send it:
Signed-off-by: Keir Fraser <keir@xen.org>

Also, most of the reviewers on this list prefer it if you can send patches
in-line in plain text rather than as an attachment. Makes it easier to make
detailed comments.

-- Keir