Hi all,
I tested the xen vm disk I/O this weekend and had some interesting
observations:
I ran TPC-C benchmarks (mostly small random disk reads) in two PV VMs
with exactly the same resource and software configuration. I started the
benchmarks in the two VMs at the same time (via a script; the start-time
difference is within a few ms). The Xen VM scheduler seems to always favor
one VM, which gives it up to 50% better performance than the other. I
changed the sequence of VM creation and the application start order, but the
same specific VM always got better performance, 30%-50% better.
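For reference, the simultaneous start was along these lines (a sketch only;
`sleep 1` stands in for the actual benchmark invocation in each guest, which
I have left out):

```shell
#!/bin/sh
# Sketch of the launcher: kick off both workloads in the background
# and record each start timestamp (nanoseconds) to measure the skew.
start1=$(date +%s%N)
sleep 1 &                 # stand-in for: launching TPC-C in VM 1
pid1=$!
start2=$(date +%s%N)
sleep 1 &                 # stand-in for: launching TPC-C in VM 2
pid2=$!
wait "$pid1" "$pid2"      # wait for both runs to finish
echo "start skew: $(( (start2 - start1) / 1000000 )) ms"
```

With this, the two starts are only a few ms apart, since only a fork
separates the two background jobs.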
What could be the reason that Xen always favors a specific VM?
I repeated the above test several more times. Between runs, I purged the
cached data within each VM so that the I/O demand stayed the same. It is
interesting that the performance gap between the two VMs became smaller and
smaller; after 6 runs, the performance was almost the same.
Does anyone have any idea? Does the VM scheduler schedule VMs based on history?
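The cache purge between runs was something like the following, run as root
inside each guest (the drop_caches knob exists since kernel 2.6.16):

```shell
#!/bin/sh
# Drop the Linux page cache (plus dentries and inodes) so each run
# starts with a cold cache. Needs root.
sync                                    # write back dirty pages first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 3 = pagecache + slab objects
else
    echo "not root: cannot drop caches" >&2
fi
```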
I have already tried file-based, physical-partition, and LVM-backed virtual
disks, with similar results.
I am using Xen 3.3.1, CentOS 5.1, Linux 2.6.18, x86_64.
Each VM has 512M RAM and 2 VCPUs, not pinned. Dom0 has 512M RAM and 8 VCPUs, not pinned.
Host: Dell PowerEdge 1950 with 8G RAM and two quad-core Intel Xeons.
Thanks in advance,
Jia.