[xen-unstable] cpupool: Avoid race when moving cpu between cpupools
# HG changeset patch
# User Juergen Gross <juergen.gross@ts.fujitsu.com>
# Date 1298633295 0
# Node ID 2d35823a86e7fbab004125591e56cd14aeaffcb3
# Parent 598d1fc295b6e88c6ff226b461553eaea61e2043
cpupool: Avoid race when moving cpu between cpupools

Moving cpus between cpupools is done under the schedule lock of the
moved cpu. When checking whether a cpu is a member of a cpupool, this
must be done with the lock of that cpu held. Hot-unplugging of
physical cpus might encounter the same problems, but this should
happen only very rarely.

Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
Acked-by: Andre Przywara <andre.przywara@amd.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
---
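
A minimal standalone sketch (not Xen code) of the "pick, trylock, re-check"
pattern the locking rule above leads to, assuming pthread mutexes stand in
for the per-cpu schedule locks; fake_pick_cpu(), pool_valid[], cpu_lock[] and
NR_FAKE_CPUS are made-up names used only for illustration:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_FAKE_CPUS 4

/* One "schedule lock" per fake cpu. */
static pthread_mutex_t cpu_lock[NR_FAKE_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* Which cpus currently belong to the pool; may change concurrently. */
static bool pool_valid[NR_FAKE_CPUS] = { true, true, true, false };

/* Stand-in for the scheduler's pick_cpu hook; its result may already be
 * stale by the time we act on it. */
static int fake_pick_cpu(int old_cpu)
{
    return (old_cpu + 1) % NR_FAKE_CPUS;
}

/* Returns the chosen cpu with old_cpu's lock held and, if the target uses
 * a different lock, the target's lock held as well, with pool membership
 * re-checked under that lock.  A stale pick is simply retried. */
static int pick_and_lock(int old_cpu)
{
    int new_cpu;

    for ( ;; )
    {
        pthread_mutex_lock(&cpu_lock[old_cpu]);

        new_cpu = fake_pick_cpu(old_cpu);
        if ( new_cpu == old_cpu )   /* same lock: nothing more to take */
            break;

        /* Trylock, not lock: spinning here could deadlock with another
         * thread doing the same dance in the opposite direction. */
        if ( pthread_mutex_trylock(&cpu_lock[new_cpu]) != 0 )
        {
            pthread_mutex_unlock(&cpu_lock[old_cpu]);
            continue;
        }

        /* Membership is only trustworthy while new_cpu's lock is held. */
        if ( pool_valid[new_cpu] )
            break;

        /* Stale pick: new_cpu left the pool between pick and lock. */
        pthread_mutex_unlock(&cpu_lock[new_cpu]);
        pthread_mutex_unlock(&cpu_lock[old_cpu]);
    }

    return new_cpu;
}

int main(void)
{
    int new_cpu = pick_and_lock(0);

    printf("picked cpu %d with lock(s) held\n", new_cpu);

    if ( new_cpu != 0 )
        pthread_mutex_unlock(&cpu_lock[new_cpu]);
    pthread_mutex_unlock(&cpu_lock[0]);
    return 0;
}

In the real patch below the membership check is
cpu_isset(new_cpu, v->domain->cpupool->cpu_valid), and the "same lock" case
exists because a scheduler may share one schedule lock between several cpus;
the sketch collapses that to "same cpu".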


diff -r 598d1fc295b6 -r 2d35823a86e7 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c Thu Feb 24 09:33:19 2011 +0000
+++ b/xen/common/sched_credit.c Fri Feb 25 11:28:15 2011 +0000
@@ -1268,7 +1268,8 @@
         /*
          * Any work over there to steal?
          */
-        speer = csched_runq_steal(peer_cpu, cpu, snext->pri);
+        speer = cpu_isset(peer_cpu, *online) ?
+            csched_runq_steal(peer_cpu, cpu, snext->pri) : NULL;
         pcpu_schedule_unlock(peer_cpu);
         if ( speer != NULL )
         {
diff -r 598d1fc295b6 -r 2d35823a86e7 xen/common/schedule.c
--- a/xen/common/schedule.c Thu Feb 24 09:33:19 2011 +0000
+++ b/xen/common/schedule.c Fri Feb 25 11:28:15 2011 +0000
@@ -394,8 +394,32 @@
 {
     unsigned long flags;
     int old_cpu, new_cpu;
+    int same_lock;
 
-    vcpu_schedule_lock_irqsave(v, flags);
+    for (;;)
+    {
+        vcpu_schedule_lock_irqsave(v, flags);
+
+        /* Select new CPU. */
+        old_cpu = v->processor;
+        new_cpu = SCHED_OP(VCPU2OP(v), pick_cpu, v);
+        same_lock = (per_cpu(schedule_data, new_cpu).schedule_lock ==
+                     per_cpu(schedule_data, old_cpu).schedule_lock);
+
+        if ( same_lock )
+            break;
+
+        if ( !pcpu_schedule_trylock(new_cpu) )
+        {
+            vcpu_schedule_unlock_irqrestore(v, flags);
+            continue;
+        }
+        if ( cpu_isset(new_cpu, v->domain->cpupool->cpu_valid) )
+            break;
+
+        pcpu_schedule_unlock(new_cpu);
+        vcpu_schedule_unlock_irqrestore(v, flags);
+    }
 
     /*
      * NB. Check of v->running happens /after/ setting migration flag
@@ -405,14 +429,13 @@
     if ( v->is_running ||
          !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
     {
+        if ( !same_lock )
+            pcpu_schedule_unlock(new_cpu);
+
         vcpu_schedule_unlock_irqrestore(v, flags);
         return;
     }
 
-    /* Select new CPU. */
-    old_cpu = v->processor;
-    new_cpu = SCHED_OP(VCPU2OP(v), pick_cpu, v);
-
     /*
      * Transfer urgency status to new CPU before switching CPUs, as once
      * the switch occurs, v->is_urgent is no longer protected by the per-CPU
@@ -424,9 +447,15 @@
         atomic_dec(&per_cpu(schedule_data, old_cpu).urgent_count);
     }
 
-    /* Switch to new CPU, then unlock old CPU. This is safe because
-     * the lock pointer cant' change while the current lock is held. */
+    /*
+     * Switch to new CPU, then unlock new and old CPU. This is safe because
+     * the lock pointer can't change while the current lock is held.
+     */
     v->processor = new_cpu;
+
+    if ( !same_lock )
+        pcpu_schedule_unlock(new_cpu);
+
     spin_unlock_irqrestore(
         per_cpu(schedule_data, old_cpu).schedule_lock, flags);


_______________________________________________
Xen-changelog mailing list
Xen-changelog@lists.xensource.com
http://lists.xensource.com/xen-changelog