Mailing List Archive

Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
Hi Ian and John,

I'm also doing some performance analysis of Linux native, dom0 and domU
(para-virtualized). Here is a brief comparison for 256K sequential
read/write. The testing was done using a JBOD based on 8 Maxtor SAS Atlas 2
15K drives with an LSI SAS HBA.

256K Sequential Read
Linux Native: 559.6MB/s
Xen Domain0: 423.3MB/s
Xen DomainU: 555.9MB/s

256K Sequential Write
Linux Native: 668.9MB/s
Xen Domain0: 708.7MB/s
Xen DomainU: 373.5MB/s
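
(For reference, a minimal sketch of how a 256K sequential throughput run of
this kind might be driven from the command line; the tool, device path and
transfer size are assumptions, not necessarily what Liang used:)

  # 256K sequential read: stream 4 GB off the raw device with O_DIRECT;
  # dd prints the elapsed time and transfer rate when it finishes
  # (the device path is a placeholder)
  dd if=/dev/sdb of=/dev/null bs=256k count=16384 iflag=direct

  # 256K sequential write (this overwrites the target device)
  dd if=/dev/zero of=/dev/sdb bs=256k count=16384 oflag=direct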

Just two questions:

It seems the para-virtualized DomU outperforms Dom0 in terms of sequential read
and is very close to Linux native performance. However, DomU does show poor
(only 50%) sequential write performance compared with Linux native and Dom0.

Could you explain the reasons behind this?

Thanks,

Liang


----- Original Message -----
From: "Ian Pratt" <m+Ian.Pratt@cl.cam.ac.uk>
To: "John Byrne" <john.l.byrne@hp.com>
Cc: "xen-devel" <xen-devel@lists.xensource.com>; "Emmanuel Ackaouy"
<ack@xensource.com>
Sent: Tuesday, November 07, 2006 10:20 AM
Subject: RE: [Xen-devel] Direct I/O to domU seeing a 30% performance hit


> Both dom0 and the domU are SLES 10, so I don't know why the "idle"
> performance of the two should be different. The obvious asymmetry is the
> disk. Since the disk isn't direct, any disk I/O by the domU would
> certainly impact dom0, but I don't think there should be much, if any. I
> did run a dom0 test with the domU started but idle, and there was no
> real change to dom0's numbers.
>
> What's the best way to gather information about what is going on with
> the domains without perturbing them? (Or, at least, perturbing everyone
> equally.)
>
> As to the test, I am running netperf 2.4.1 on an outside machine to the
> dom0 and the domU. (So the doms are running the netserver portion.) I
> was originally running it in the doms to the outside machine, but when
> the bad numbers showed up I moved it to the outside machine because I
> wondered if the bad numbers were due to something happening to the
> system time in domU. The numbers in the "outside" test to domU look worse.
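
(A minimal sketch of the netperf setup described in the quoted text above;
host names and test length are placeholders:)

  # In dom0 and in the domU (the systems under test): start the receiver
  netserver

  # On the outside machine: stream TCP traffic towards each guest, which
  # exercises the guest's receive path
  netperf -H <dom0-ip> -t TCP_STREAM -l 60
  netperf -H <domU-ip> -t TCP_STREAM -l 60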


It might be worth checking that there's no interrupt sharing happening.
While running the test against the domU, see how much CPU dom0 burns in
the same period using 'xm vcpu-list'.
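
(For example, a sketch of those two checks; the 60-second window is arbitrary:)

  # In dom0: see whether the NIC shares an IRQ line with another device
  cat /proc/interrupts

  # While the netperf run against the domU is in progress, sample the
  # accumulated CPU time of each domain's VCPUs before and after
  xm vcpu-list
  sleep 60
  xm vcpu-list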

To keep things simple, have dom0 and domU as uniprocessor guests.

Ian


> Ian Pratt wrote:
> >
> >> There have been a couple of network receive throughput
> >> performance regressions to domUs over time that were
> >> subsequently fixed. I think one may have crept in to 3.0.3.
> >
> > The report was (I believe) with a NIC directly assigned to the domU, so
> > not using netfront/back at all.
> >
> > John: please can you give more details on your config.
> >
> > Ian
> >
> >> Are you seeing any dropped packets on the vif associated with
> >> your domU in your dom0? If so, propagating changeset
> >> 11861 from unstable may help:
> >>
> >> changeset: 11861:637eace6d5c6
> >> user: kfraser@localhost.localdomain
> >> date: Mon Oct 23 11:20:37 2006 +0100
> >> summary: [NET] back: Fix packet queuing so that packets
> >> are drained if the
> >>
> >>
> >> In the past, we also had receive throughput issues to domUs
> >> that were due to socket buffer size logic but those were
> >> fixed a while ago.
> >>
> >> Can you send netstat -i output from dom0?
> >>
> >> Emmanuel.
> >>
> >>
> >> On Mon, Nov 06, 2006 at 09:55:17PM -0800, John Byrne wrote:
> >>> I was asked to test direct I/O to a PV domU. Since I had a system
> >>> with two NICs, I gave one to a domU and one to dom0. (Each is
> >>> running the same kernel: xen 3.0.3 x86_64.)
> >>>
> >>> I'm running netperf from an outside system to the domU and dom0, and I
> >>> am seeing 30% less throughput for the domU vs. dom0.
> >>>
> >>> Is this to be expected? If so, why? If not, does anyone have a guess
> >>> as to what I might be doing wrong or what the issue might be?
> >>>
> >>> Thanks,
> >>>
> >>> John Byrne
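
(A minimal sketch of the checks Emmanuel suggests in the quoted message above;
the vif name is a placeholder and depends on the domU's domain ID:)

  # In dom0: per-interface packet, error and drop counters, including the
  # backend vif for the domU
  netstat -i

  # Counters for one specific vif (the name vif1.0 is a placeholder)
  ifconfig vif1.0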


RE: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
> I'm also doing some performance analysis of Linux native, dom0 and domU
> (para-virtualized). Here is a brief comparison for 256K sequential
> read/write. The testing was done using a JBOD based on 8 Maxtor SAS Atlas 2
> 15K drives with an LSI SAS HBA.
>
> 256K Sequential Read
> Linux Native: 559.6MB/s
> Xen Domain0: 423.3MB/s
> Xen DomainU: 555.9MB/s

This doesn't make a lot of sense. The only thing I can think of is that
there must be some extra prefetching going on in the domU case. It still
doesn't explain why the dom0 result is so much worse than native.

It might be worth repeating with both native and dom0 booted with
maxcpus=1.
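
(For example, after rebooting native Linux with maxcpus=1 on the kernel command
line and Xen with dom0_max_vcpus=1, a quick sanity check in each environment
might look like this; a sketch only:)

  # Native case: maxcpus=1 should appear on the kernel command line
  cat /proc/cmdline

  # Both cases: confirm only one CPU is visible before re-running the tests
  grep -c '^processor' /proc/cpuinfo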

Are you using near-identical kernels in both cases? Same drivers, same
part of the disk for the tests, etc?

How are you doing the measurement? A timed 'dd'?

Ian


Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
Hi Ian,

I already set dom0_max_vcpus=1 for domain0 when I was doing the testing. Also,
the Linux native kernel and the domU kernel are both compiled in uni-processor
mode. All the testing for Linux native, domain0 and domainU is exactly the
same. All used Linux kernel 2.6.16.29.

Regards,

Liang

RE: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
> I already set dom0_max_vcpus=1 for domain0 when I was doing the testing. Also,
> the Linux native kernel and the domU kernel are both compiled in uni-processor
> mode. All the testing for Linux native, domain0 and domainU is exactly the
> same. All used Linux kernel 2.6.16.29.

Please could you post a 'diff' of the two kernel configs.

It might be worth diff'ing the boot messages in both cases too.
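
(A sketch of one way to produce those diffs; the config and log file names are
placeholders:)

  # Diff the native and Xen kernel configs
  diff /boot/config-2.6.16.29 /boot/config-2.6.16.29-xen > config.diff

  # Capture the boot messages in each environment, then diff them
  dmesg > boot-messages-$(uname -r).txt
  diff boot-messages-2.6.16.29.txt boot-messages-2.6.16.29-xen.txt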

Thanks,
Ian


Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
Attached is the diff of the two kernel configs.

Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
It looks like you're using totally different disk schedulers:

90c87
< CONFIG_DEFAULT_CFQ=y
---
> # CONFIG_DEFAULT_CFQ is not set
92c89
< CONFIG_DEFAULT_IOSCHED="cfq"
---
> CONFIG_DEFAULT_IOSCHED="anticipatory"

Try changing them both to the same thing, and seeing what happens...
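
(For example, the scheduler can be checked and switched per device at runtime
through sysfs; the device name is a placeholder, and each disk in the JBOD
would need the same treatment:)

  # Show the available schedulers; the active one is shown in brackets
  cat /sys/block/sda/queue/scheduler

  # Switch this device to cfq so both kernels use the same scheduler
  echo cfq > /sys/block/sda/queue/scheduler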

-George

RE: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
> Attached is the diff of the two kernel configs.

There are a *lot* of differences between those kernel configs. A cursory
glance spots such gems as:

< CONFIG_DEFAULT_IOSCHED="cfq"
---
> CONFIG_DEFAULT_IOSCHED="anticipatory"

All bets are off.

Ian


Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
I already changed the Linux I/O scheduler for all block devices to
anticipatory before I started the testing. This was done at runtime.

Liang

Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
One of the strange things, though, is that the difference should be so
big between domU and dom0, which are using the exact same kernel
(correct me if I'm wrong).

I'm not familiar with the I/O scheduling. Is it possible that the I/O
scheduling inside the domU is interacting poorly with the I/O
scheduling in dom0? That's one hypothesis for why domU writes are
slower than dom0 writes; but that doesn't seem to explain why domU
reads would be *faster* than dom0 reads.

-George

On 11/7/06, Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk> wrote:
> > Attached is the diff of the two kernel configs.
>
> There are a *lot* of differences between those kernel configs. A cursory
> glance spots such gems as:
>
> < CONFIG_DEFAULT_IOSCHED="cfq"
> ---
> > CONFIG_DEFAULT_IOSCHED="anticipatory"
>
> All bets are off.
>
> Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
Hi Ian,

I have already tested the performance of all four I/O schedulers: noop,
deadline, anticipatory and CFQ. The choice of I/O scheduler does have an
impact, but the difference between the four is less than 10% in every case.

I found anticipatory to be the best choice, so I used it as the Linux I/O
scheduler for all my testing: Linux native, Xen Domain0 and DomainU (this
change does not show up in the kernel config files).

Liang
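
Since the scheduler was switched at runtime rather than rebuilt into the
kernel, CONFIG_DEFAULT_IOSCHED in the .config will not reflect it. One way
to make the choice explicit for a whole boot is the elevator= kernel
parameter; the boot entry below is purely illustrative (on 2.6 kernels the
anticipatory scheduler registers under the name "anticipatory"):

    # GRUB entry for the native kernel; for dom0 the parameter goes on the
    # module line that loads the dom0 kernel under "kernel /xen.gz"
    kernel /vmlinuz-2.6.16.29 root=/dev/sda1 ro elevator=anticipatory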

----- Original Message -----
From: "Ian Pratt" <m+Ian.Pratt@cl.cam.ac.uk>
To: "Liang Yang" <multisyncfe991@hotmail.com>; "John Byrne"
<john.l.byrne@hp.com>
Cc: "xen-devel" <xen-devel@lists.xensource.com>; "Emmanuel Ackaouy"
<ack@xensource.com>; <ian.pratt@cl.cam.ac.uk>
Sent: Tuesday, November 07, 2006 11:45 AM
Subject: RE: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re:
[Xen-devel] Direct I/O to domU seeing a 30% performance hit


> Attached is the diff of the two kernel configs.

There are a *lot* of differences between those kernel configs. A cursory
glance spots such gems as:

< CONFIG_DEFAULT_IOSCHED="cfq"
---
> CONFIG_DEFAULT_IOSCHED="anticipatory"

All bets are off.

Ian
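
A raw diff of two .config files is noisy, so it can help to narrow it to the
block-layer and elevator options before drawing conclusions. A rough filter,
assuming the two configs are saved as config-native and config-xen0:

    diff <(grep -E 'IOSCHED|CONFIG_BLK' config-native | sort) \
         <(grep -E 'IOSCHED|CONFIG_BLK' config-xen0   | sort)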




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
George Dunlap wrote:
> One of the strange things, though, is that the difference is so big
> between domU and dom0, which are using the exact same kernel (correct me
> if I'm wrong).
>
> I'm not familiar with the I/O scheduling. Is it possible that the I/O
> scheduling inside the domU is interacting poorly with the I/O
> scheduling in dom0? That's one hypothesis for why domU writes are
> slower than dom0 writes; but that doesn't seem to explain why domU
> reads would be *faster* than dom0 reads.
>
> -George

Correct me if I'm wrong, but wouldn't a file-backed block device for the
domU be reading from memory if it's cached in the dom0? That would be a
bit faster.

Also, in the past it seemed the domU schedulers were ignored and were
implicitly noop, relying on the dom0's scheduler (which did all of the
real I/O). Has this changed?

--
Christopher G. Stach II
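
If dom0-side caching is suspected, a quick sanity check is to rerun the read
test with dom0's page cache dropped and with O_DIRECT reads, so both domains
actually hit the disks. A sketch, with example device names and assuming a
dd new enough to support iflag=direct:

    # dom0, kernel 2.6.16 or later: flush the page cache before the run
    sync; echo 1 > /proc/sys/vm/drop_caches
    # 1 GB of sequential 256K reads that bypass the reader's page cache
    dd if=/dev/sdb of=/dev/null bs=256k count=4096 iflag=direct

If the domU's read advantage disappears under those conditions, caching of
the backing store in dom0 would be the likely explanation.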

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: Direct I/O to domU seeing a 30% performance hit
Hi,

I did not use any file-backed block devices. To get the real disk I/O
performance for the guest domains, each guest domain OS is installed on a
separate logical volume (each guest domain sits on its own independent hard
drives).

I tried changing the I/O scheduler to anticipatory in both the guest domain
(where noop is the default) and domain0, but the results are the same as
those I collected before.

Regards,

Liang
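
For reference, whether a guest is handed a physical volume or a loopback
file is visible in its domain configuration; with LVM-backed guests the disk
line looks something like the first form below (volume and device names are
made up for illustration):

    # physical (LVM) backend: blkback submits I/O to the device directly
    disk = [ 'phy:/dev/vg_xen/domU1-root,xvda,w' ]
    # file-backed (loopback) backend: reads can be served from dom0's page cache
    # disk = [ 'file:/var/lib/xen/images/domU1.img,xvda,w' ]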

----- Original Message -----
From: "Christopher G. Stach II" <cgs@ldsys.net>
To: "George Dunlap" <gdunlap@xensource.com>
Cc: "Ian Pratt" <m+Ian.Pratt@cl.cam.ac.uk>; "Emmanuel Ackaouy"
<ack@xensource.com>; "Liang Yang" <multisyncfe991@hotmail.com>; "xen-devel"
<xen-devel@lists.xensource.com>; "John Byrne" <john.l.byrne@hp.com>
Sent: Monday, November 13, 2006 8:49 AM
Subject: Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re:
[Xen-devel] Direct I/O to domU seeing a 30% performance hit


> Correct me if I'm wrong, but wouldn't a file-backed block device for the
> domU be reading from memory if it's cached in the dom0? That would be a
> bit faster.
>
> Also, in the past it seemed the domU schedulers were ignored and were
> implicitly noop, relying on the dom0's scheduler (which did all of the
> real I/O). Has this changed?
>
> --
> Christopher G. Stach II
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel