Mailing List Archive

[PATCH] Fix locking bug in vcpu_migrate
c/s 22957:c5c4688d5654 (unstable) changed the locking scheme for
vcpu_migrate so that the lock with the lowest lock address is acquired
first. However, the code does not release the locks in the reverse
order: it always releases new_lock first when it differs from old_lock,
but new_lock is not the last lock acquired when old_lock > new_lock.
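
For reference, the acquire/release discipline the patch aims for looks
roughly like this (a simplified sketch against Xen's spinlock API, not
the actual schedule.c code; lock_a and lock_b stand for the two per-CPU
schedule locks, old_lock and new_lock in the real code):

    unsigned long flags;

    /* Acquire in ascending lock-address order, saving irq state with
     * the first lock taken. */
    if ( lock_a == lock_b )
        spin_lock_irqsave(lock_a, flags);
    else if ( lock_a < lock_b )
    {
        spin_lock_irqsave(lock_a, flags);
        spin_lock(lock_b);
    }
    else
    {
        spin_lock_irqsave(lock_b, flags);
        spin_lock(lock_a);
    }

    /* ... critical section ... */

    /* Release in the reverse order, restoring irq state only on the
     * last unlock. */
    if ( lock_a == lock_b )
        spin_unlock_irqrestore(lock_a, flags);
    else if ( lock_a < lock_b )
    {
        spin_unlock(lock_b);
        spin_unlock_irqrestore(lock_a, flags);
    }
    else
    {
        spin_unlock(lock_a);
        spin_unlock_irqrestore(lock_b, flags);
    }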

As a result of this bug, under credit2, domUs would sometimes take a
long time to start, and there was an occasional panic.

This fix should be applied to both xen-unstable and xen-4.1-testing. I
have tested and seen the problem with both, and also tested to confirm
an improvement for both.

Signed-off-by: John Weekes <lists.xen@nuclearfallout.net>

diff -r eb4505f8dd97 xen/common/schedule.c
--- a/xen/common/schedule.c Wed Apr 20 12:02:51 2011 +0100
+++ b/xen/common/schedule.c Fri Apr 22 03:46:00 2011 -0500
@@ -455,9 +455,20 @@
pick_called = 0;
}

- if ( old_lock != new_lock )
+ if ( old_lock == new_lock )
+ {
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else if ( old_lock < new_lock )
+ {
spin_unlock(new_lock);
- spin_unlock_irqrestore(old_lock, flags);
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else
+ {
+ spin_unlock(old_lock);
+ spin_unlock_irqrestore(new_lock, flags);
+ }
}

/*
@@ -468,9 +479,20 @@
if ( v->is_running ||
!test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
{
- if ( old_lock != new_lock )
+ if ( old_lock == new_lock )
+ {
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else if ( old_lock < new_lock )
+ {
spin_unlock(new_lock);
- spin_unlock_irqrestore(old_lock, flags);
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else
+ {
+ spin_unlock(old_lock);
+ spin_unlock_irqrestore(new_lock, flags);
+ }
return;
}

@@ -491,9 +513,20 @@
*/
v->processor = new_cpu;

- if ( old_lock != new_lock )
+ if ( old_lock == new_lock )
+ {
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else if ( old_lock < new_lock )
+ {
spin_unlock(new_lock);
- spin_unlock_irqrestore(old_lock, flags);
+ spin_unlock_irqrestore(old_lock, flags);
+ }
+ else
+ {
+ spin_unlock_irqrestore(old_lock, flags);
+ spin_unlock(new_lock);
+ }

if ( old_cpu != new_cpu )
evtchn_move_pirqs(v);

Re: [PATCH] Fix locking bug in vcpu_migrate
On 22/04/2011 11:00, "John Weekes" <lists.xen@nuclearfallout.net> wrote:

> c/s 22957:c5c4688d5654 (unstable) changed the locking scheme for
> vcpu_migrate so that the lock with the lowest lock address is acquired
> first. However, the code does not release the locks in the reverse
> order: it always releases new_lock first when it differs from old_lock,
> but new_lock is not the last lock acquired when old_lock > new_lock.
>
> As a result of this bug, under credit2, domUs would sometimes take a
> long time to start, and there was an occasional panic.
>
> This fix should be applied to both xen-unstable and xen-4.1-testing. I
> have tested and seen the problem with both, and also tested to confirm
> an improvement for both.

Nice that empirical evidence supports the patch; however, I'm being dense
and don't understand why the order of lock release matters. The question
matters because this idiom of ordering lock acquisition by lock address,
but not caring about release order, is seen elsewhere (in
common/timer.c:migrate_timer(), for example). Perhaps the release order
matters there too, and I never realised. Or perhaps you've merely
perturbed a fragile pos code that's broken in some other way.

So I would need an explanation of how this improves guest startup so
dramatically before I could apply it; something beyond "it is bad to
release locks in a different order to that in which they were acquired".

Also the last hunk of your patch is broken -- in the final else clause you
call spin_unlock_irqrestore on the wrong lock. This is very definitely a
bug, as irqs should never be enabled while any schedule_lock is held.
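
Presumably that branch should mirror the earlier two hunks, releasing
old_lock first and restoring irqs only on the final unlock, i.e.:

    spin_unlock(old_lock);
    spin_unlock_irqrestore(new_lock, flags);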

Thanks,
Keir

Re: [PATCH] Fix locking bug in vcpu_migrate
> Nice that empirical evidence supports the patch; however, I'm being dense
> and don't understand why the order of lock release matters.

I was taught long ago to release locks in reverse order, but as I think
about it, I agree with you that there doesn't seem to be a scenario in
which the release order would matter here: deadlock avoidance depends
only on a consistent acquisition order, and once both locks are held,
the order in which they are dropped cannot create a circular wait.

It's odd that it seemed to lead to such a big difference for me, then.
I'll do some further tests -- maybe I changed something else to cause
the behavior, or the problem is more random than I thought and just
hasn't occurred for me yet in all the new tests.

> perhaps you've merely perturbed a fragile pos code that's
> broken in some other way.

That's entirely possible.

> Also the last hunk of your patch is broken -- in the final else clause you
> call spin_unlock_irqrestore on the wrong lock. This is very definitely a
> bug, as irqs should never be enabled while any schedule_lock is held.

Definitely. I had fixed that here but sent an old version of the patch
-- a boneheaded mistake.

-John

Re: [PATCH] Fix locking bug in vcpu_migrate
On 22/04/2011 19:23, "John Weekes" <lists.xen@nuclearfallout.net> wrote:

>
>> Nice that empirical evidence supports the patch; however, I'm being dense
>> and don't understand why the order of lock release matters.
>
> I was taught long ago to release locks in reverse order, but as I think
> about it, I agree with you that there doesn't seem to be a scenario in
> which the release order would matter.
>
> It's odd that it seemed to lead to such a big difference for me, then.
> I'll do some further tests -- maybe I changed something else to cause
> the behavior, or the problem is more random than I thought and just
> hasn't occurred for me yet in all the new tests.

Thanks!

-- Keir

Re: [PATCH] Fix locking bug in vcpu_migrate
On 4/22/2011 11:43 AM, Keir Fraser wrote:
> > It's odd that it seemed to lead to such a big difference for me, then.
> > I'll do some further tests -- maybe I changed something else to cause
> > the behavior, or the problem is more random than I thought and just
> > hasn't occurred for me yet in all the new tests.

I did further testing and determined that my domU only appeared to be
starting properly because I had tested just once or twice with Debian
Squeeze after applying the patch; the more extensive testing afterwards
was done only under a Win2k3 domU, and it seems that Win2k3 domUs don't
have the same issue.

Back on the Squeeze domU, I am reliably seeing the BUG again, with
either configuration.

I have rolled back schedule.c to pre-22948, when it was much simpler,
and that seems to have resolved this particular bug.

Now a different credit2 bug has occurred, though only once for me so
far; with the other bug I was seeing a panic on every one or two domU
startups, whereas I have seen the new bug in only one test out of 15.

Specifically, I have triggered the BUG_ON in csched_dom_cntl. The line
number is not the standard one because I have added further debugging,
but the BUG_ON is:

BUG_ON(svc->rqd != RQD(ops, svc->vcpu->processor));

The backtrace is:

(XEN) [<ffff82c480119578>] csched_dom_cntl+0x11a/0x185
(XEN) [<ffff82c48011f24d>] sched_adjust+0x102/0x1f9
(XEN) [<ffff82c480102ee5>] do_domctl+0xb25/0x1250
(XEN) [<ffff82c4801ff0e8>] syscall_enter+0xc8/0x122

Also, in three of those last 15 startups, my domU froze (consuming no
CPU and seemingly doing nothing) somewhere in this block of code in
ring_read in tools/firmware/hvmloader/xenbus.c -- I added debug
information that allowed me to narrow it down. This function is called
while hvmloader is writing the SMBIOS tables. I can't tell whether this
is related to the credit2 problem. (The domU can be "destroyed" to get
out of it.)
    /* Don't overrun the producer pointer */
    while ( (part = MASK_XENSTORE_IDX(rings->rsp_prod -
                                      rings->rsp_cons)) == 0 )
        ring_wait();
    /* Don't overrun the end of the ring */
    if ( part > (XENSTORE_RING_SIZE - MASK_XENSTORE_IDX(rings->rsp_cons)) )
        part = XENSTORE_RING_SIZE - MASK_XENSTORE_IDX(rings->rsp_cons);
    /* Don't read more than we were asked for */
    if ( part > len )
        part = len;

Note that I am using stubdoms.

I would be happy to temporarily turn over the reins on this machine to
you or George, if you'd like to debug any of these issues directly. I
may not be able to continue experimenting in the short term here myself
due to time constraints.

-John

Re: [PATCH] Fix locking bug in vcpu_migrate
On 22/04/2011 23:33, "John Weekes" <lists.xen@nuclearfallout.net> wrote:

> I would be happy to temporarily turn over the reins on this machine to
> you or George, if you'd like to debug any of these issues directly. I
> may not be able to continue experimenting in the short term here myself
> due to time constraints.

Given that these are all apparently credit2 (non-default) scheduler issues,
I'll leave it to George.

-- Keir


