[PATCH 0 of 2] Mem event ring management overhaul
Ensure no guest events are ever lost in the mem event ring.

This is one of two outstanding proposals to solve this issue. One
key difference between them is that ours does not necessitate wait
queues.

Instead, we rely on foreign domain retry (already in place), preempting
hypercalls that may cause unbounded guest events (such as
decrease_reservation), and ensuring there is always space left in the
ring for each guest vcpu to place at least one event.
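
To make the last point concrete, the accounting amounts to roughly the
following (a standalone sketch only; names, fields and the slot count are
illustrative and do not match the actual patch):

/* Sketch of the ring accounting rule: an optional (foreign) event is only
 * accepted if every guest vcpu could still place at least one event
 * afterwards.  Names and fields are illustrative, not the patch's. */
#include <stdbool.h>

#define RING_SLOTS 64              /* total request slots in the ring   */

struct ring_state {
    unsigned int in_flight;        /* requests queued, not yet consumed */
    unsigned int vcpus_reserved;   /* vcpus that may still need a slot  */
};

/* Guest-generated events always fit: one slot is reserved per vcpu.
 * The vcpu then sleeps, so it cannot generate a second event. */
static void claim_slot_guest(struct ring_state *r)
{
    r->in_flight++;
    r->vcpus_reserved--;
}

/* Foreign/optional events must leave the per-vcpu reservation intact;
 * otherwise the caller backs off and retries (or the hypercall is
 * preempted and restarted). */
static bool claim_slot_foreign(struct ring_state *r)
{
    if (r->in_flight + r->vcpus_reserved >= RING_SLOTS)
        return false;
    r->in_flight++;
    return true;
}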

The patch has been refreshed to apply on top of 62ff6a318c5d, and untangled
from other mem event modifications that are essentially orthogonal and can
go in independently.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: Adin Scannell <adin@scannell.ca>


 xen/common/memory.c             |   29 +++++-
 xen/arch/x86/hvm/hvm.c          |   21 ++-
 xen/arch/x86/mm/mem_event.c     |  203 +++++++++++++++++++++++++++++----------
 xen/arch/x86/mm/mem_sharing.c   |   17 ++-
 xen/arch/x86/mm/p2m.c           |   47 +++++----
 xen/common/memory.c             |    7 +-
 xen/include/asm-x86/mem_event.h |   16 ++-
 xen/include/asm-x86/p2m.h       |    6 +-
 xen/include/xen/mm.h            |    2 +
 xen/include/xen/sched.h         |    5 +-
 10 files changed, 257 insertions(+), 96 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: [PATCH 0 of 2] Mem event ring management overhaul
On Mon, Dec 05, Andres Lagar-Cavilla wrote:

> Ensure no guest events are ever lost in the mem event ring.
>
> This is one of two outstanding proposals to solve this issue. One
> key difference between them is that ours does not necessitate wait
> queues.
>
> Instead, we rely on foreign domain retry (already in place), preempting
> hypercalls that may cause unbounded guest events (such as
> decrease_reservation), and ensuring there is always space left in the
> ring for each guest vcpu to place at least one event.

That's not enough. Cases like hvm_copy and the emulator currently do
not retry; instead they get an invalid mfn and crash the guest. It's
possible to code around that in some places, as shown in the URL below,
but wouldn't it make sense to just stop execution until the expected
condition is met?
It's not clear to me how to properly handle a full ring in
get_gfn_type_access() with your proposal.
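
What I mean by stopping execution is roughly this (just a sketch, all
helper names are made up; in practice it would sit on top of a wait
queue):

/* Sketch only: instead of failing (or handing back an invalid mfn) when
 * the ring is full, the faulting path blocks until the pager has made
 * room.  All helper names are made up for illustration. */
#include <stdbool.h>

extern bool ring_has_space(void);
extern void ring_put_request(void);
extern void block_current_vcpu(void);   /* woken when the pager responds */

static void put_request_or_wait(void)
{
    while (!ring_has_space())
        block_current_vcpu();           /* sleep, then re-check          */
    ring_put_request();                 /* guaranteed to fit now         */
}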

Olaf

http://old-list-archives.xen.org/archives/html/xen-devel/2011-01/msg01121.html

Re: [PATCH 0 of 2] Mem event ring management overhaul
> On Mon, Dec 05, Andres Lagar-Cavilla wrote:
>
>> Ensure no guest events are ever lost in the mem event ring.
>>
>> This is one of two outstanding proposals to solve this issue. One
>> key difference between them is that ours does not necessitate wait
>> queues.
>>
>> Instead, we rely on foreign domain retry (already in place), preempting
>> hypercalls that may cause unbounded guest events (such as
>> decrease_reservation), and ensuring there is always space left in the
>> ring for each guest vcpu to place at least one event.
>
> That's not enough. Cases like hvm_copy and the emulator currently do
> not retry; instead they get an invalid mfn and crash the guest. It's
> possible to code around that in some places, as shown in the URL below,
> but wouldn't it make sense to just stop execution until the expected
> condition is met?
These things are completely unrelated. *Foreign* domains retry. With our
patch, events caused by the guest itself are guaranteed to go in, no
retry.

The fact that hvm_copy et al. currently crash the guest is independent of
ring management. They crash the guest after placing the event in the ring.
And that's where the wait queues are expected to save the day.

> It's not clear to me how to properly handle a full ring in
> get_gfn_type_access() with your proposal.

If the request comes from a foreign domain, and there is no space in the
ring, ENOENT goes upwards.

If the request comes from the guest itself, and it's p2m_query, no event
needs to be generated.

If the request comes from the guest itself, and it requires
paging_populate (!= p2m_query), the event is guaranteed to be put in the
ring, and then the vcpu goes to sleep.
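
In code, the decision boils down to something like this (purely
illustrative; the helper names are made up and do not match the patch):

/* Illustrative decision logic for a paged-out gfn; helpers and names are
 * hypothetical and stand in for the real ring/scheduler code. */
#include <errno.h>
#include <stdbool.h>

typedef enum { p2m_query, p2m_populate } access_kind_t;

struct mem_event_request { int placeholder; };

extern bool ring_has_foreign_space(void);
extern void ring_put_guaranteed(const struct mem_event_request *req);
extern void vcpu_sleep_self(void);

static int handle_paged_out_gfn(bool foreign, access_kind_t kind,
                                const struct mem_event_request *req)
{
    if (foreign) {
        if (!ring_has_foreign_space())
            return -ENOENT;         /* propagate; foreign caller retries */
        ring_put_guaranteed(req);   /* space was checked, cannot fail    */
        return 0;
    }

    if (kind == p2m_query)
        return 0;                   /* query only: no event needed       */

    /* Guest access needing paging_populate: one slot per vcpu is
     * reserved, so the event always fits; the vcpu then sleeps until
     * the pager responds. */
    ring_put_guaranteed(req);
    vcpu_sleep_self();
    return 0;
}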

Easy-peasy
Andres
>
> Olaf
>
> http://old-list-archives.xen.org/archives/html/xen-devel/2011-01/msg01121.html
>



Re: [PATCH 0 of 2] Mem event ring management overhaul
On Tue, Dec 06, Andres Lagar-Cavilla wrote:

> If the request comes from the guest itself, and it requires
> paging_populate (!= p2m_query), the event is guaranteed to be put in the
> ring, and then the vcpu goes to sleep.
>
> Easy-peasy

If only everything were so easy. ;-)

I will try to combine your patch with my paging waitqueue change.
Perhaps your change should disallow paging if max_cpus is larger than
RING_SIZE().
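
Something along these lines is what I have in mind (sketch only, names
are illustrative):

/* Sketch of the suggested guard when enabling paging for a domain: with
 * one ring slot reserved per vcpu, a domain with more vcpus than slots
 * could never be guaranteed space.  Names are illustrative. */
#include <errno.h>

static int paging_enable_check(unsigned int max_vcpus,
                               unsigned int ring_slots)
{
    if (max_vcpus > ring_slots)
        return -EINVAL;    /* refuse to enable paging for this domain */
    return 0;
}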


Olaf

Re: [PATCH 0 of 2] Mem event ring management overhaul
> On Tue, Dec 06, Andres Lagar-Cavilla wrote:
>
>> If the request comes from the guest itself, and it requires
>> paging_populate (!= p2m_query), the event is guaranteed to be put in the
>> ring, and then the vcpu goes to sleep.
>>
>> Easy-peasy
>
> If only everything were so easy. ;-)
>
> I will try to combine your patch with my paging waitqueue change.
> Perhaps your change should disallow paging if max_cpus is larger than
> RING_SIZE().
That sounds excellent. I can do those changes as well, if you don't want
to be burdened by that.

I think it's wise, though, to suspend forward motion on either "branch" of
ring management until we get a verdict. The fundamentals of each philosophy
have been laid out. Further code has a 50% chance of being throw-away.

Thanks
Andres
>
>
> Olaf
>