Network performance - sending from VM to VM using TCP
Hi,

I have been simulating a network using dummynet and evaluating it
using netperf. Xen3.0-unstable is used and the VMs are
vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
Using netperf, I sent data using TCP from domain-0 of machine 1 to
domain-0 of machine 2. Then I repeated the experiment, but this time
from VM-1 of machine 1 to VM-1 of machine 2.
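(Each run was driven with something along these lines - this sketch
is approximate, not the literal command line I used:)

    # One netperf run: an 80-second TCP_STREAM test against the remote
    # netserver, using netperf 2.x option syntax.
    import subprocess

    subprocess.run(["netperf",
                    "-H", "dw10.ucsd.edu",   # remote netserver
                    "-l", "80",              # test length in seconds
                    "-t", "TCP_STREAM"])     # bulk TCP throughput test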

However, the performance across the two VMs is substantially worse
than that across domain-0. Here are the results:

FROM VM to VM:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
(172.19.222.210) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 65536 65536 80.28 24.83


FROM domain-0 to domain-0:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
(137.110.222.236) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 65536 65536 80.11 280.62

Here's the setting of the network buffer:

net.core.wmem_max = 8388608
net.core.rmem_max = 8388608
net.ipv4.tcp_bic = 1
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608

Does anyone know why the performance across the two VMs is so bad? Is
there any fix for it? Thank you.

Cherie

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Are you using FreeBSD or Linux?

--
"I will not be pushed, filed, stamped, indexed, briefed, debriefed or numbered.
My life is my own."

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Cherie Cheung wrote:
> Hi,
>
> I have been simulating a network using dummynet and evaluating it

I haven't played with dummynet and don't know if there are
additional issues inherent in using dummynet itself...

> using netperf. Xen3.0-unstable is used and the VMs are
> vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
> Using netperf, I sent data using TCP from domain-0 of machine 1 to
> domain-0 of machine 2. Then I repeated the experiment, but this time
> from VM-1 of machine 1 to VM-1 of machine 2.
>
> However, the performance across the two VMs is substantially worse
> than that across domain-0. Here are the results:
>
> FROM VM to VM:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
> (172.19.222.210) port 0 AF_INET
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 65536 65536 80.28 24.83

Your send message size is exactly your socket size. It is also
the size of the default write buffer. The kernel uses roughly
half the buffer for data.

Were you testing with 65536 bytes exactly for some reason?
This is stop and go traffic and normally the kernel doesn't
use the entire buffer to store data - it's roughly half...

Could you test with different send sizes?
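Something like this throwaway script would sweep a few sizes in one
go (an untested sketch - netperf 2.x option syntax, and the host name
is just a placeholder):

    # Sweep the netperf send message size while leaving everything
    # else alone, to see whether throughput tracks the message size.
    import subprocess

    HOST = "dw10.ucsd.edu"   # the receiving netserver
    for msize in (1024, 4096, 16384, 65536):
        print("send message size:", msize)
        subprocess.run(["netperf", "-H", HOST, "-l", "30",
                        "-t", "TCP_STREAM",
                        "--",               # test-specific options follow
                        "-m", str(msize)])  # send message size in bytes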

> FROM domain-0 to domain-0:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
> (137.110.222.236) port 0 AF_INET
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 65536 65536 80.11 280.62
>
> Here's the setting of the network buffer:
>
> net.core.wmem_max = 8388608
> net.core.rmem_max = 8388608
> net.ipv4.tcp_bic = 1
> net.ipv4.tcp_rmem = 4096 87380 8388608
> net.ipv4.tcp_wmem = 4096 65536 8388608
>
> Does anyone know why the performance across the two VMs is so bad? Is
> there any fix for it? Thank you.

If you just want to improve your performance, increase your
buffer sizes!

For example:
tcp_rmem = 4096 1398080 8388608
tcp_wmem = 4096 1398080 8388608
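(If it helps, a minimal sketch of applying those from inside the
guest - writing /proc/sys directly is equivalent to sysctl -w and
needs root:)

    # Raise the TCP buffer autotuning limits; each value string is
    # min / default / max in bytes.
    settings = {
        "/proc/sys/net/ipv4/tcp_rmem": "4096 1398080 8388608",
        "/proc/sys/net/ipv4/tcp_wmem": "4096 1398080 8388608",
    }
    for path, value in settings.items():
        with open(path, "w") as f:
            f.write(value)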

Were you seeing losses, queue overflows?

More importantly, how much memory do you have in the system and
how were you allocating it?


thanks,
Nivedita

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Hi,

Thanks for answering me. Here's what I have:

> Were you testing with 65536 bytes exactly for some reason?
> This is stop and go traffic and normally the kernel doesn't
> use the entire buffer to store data - it's roughly half...
>
> Could you test with different send sizes?

No special reason for that. What do you mean by the kernel not using
the entire buffer to store the data? I have tried different send
sizes. It doesn't make any noticeable difference.

> If you just want to improve your performance, increase your
> buffer sizes!
>
> For example:
> tcp_rmem = 4096 1398080 8388608
> tcp_wmem = 4096 1398080 8388608

The performance only improved a little.

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
(172.19.222.215) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

1398080 1398080 1398080 80.39 26.55

It can't compare with that of domain0 to domain0.

> Were you seeing losses, queue overflows?
How do I check that?

> More importantly, how much memory do you have in the system and
> how were you allocating it?
It said 127MB in sudo xm list.

Is it really a problem with the buffer size and send size? domain0
can achieve such good performance under the same settings. Is the
bottleneck the overhead in the VM itself?

Also, I performed some more tests, this time with bandwidth
150Mbit/s and RTT 40ms:

domain0 to domain0
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 65536 65536 80.17 135.01

vm to vm
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 65536 65536 80.55 134.80

Under these settings, VM to VM performed as well as domain0 to
domain0. If I increased or decreased the BDP, the performance dropped
again.

Any idea what is causing the problem?

Thanks.

Cherie

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Cherie Cheung wrote:
> Under these settings, VM to VM performed as well as domain0 to
> domain0. If I increased or decreased the BDP, the performance
> dropped again.

Hi Cherie,

Please pardon my ignorance. What is BDP?

TIA

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Bandwidth Delay Product - google can give you better examples than I.
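For the two links in this thread, the arithmetic looks like this - a
connection needs roughly this much data in flight to keep the pipe
full:

    # Bandwidth-delay product: link rate (bits/s) times RTT (s),
    # divided by 8 to get bytes.
    def bdp_bytes(rate_bps, rtt_s):
        return rate_bps * rtt_s / 8

    print(bdp_bytes(300e6, 0.080))   # 300 Mbps, 80 ms -> 3,000,000 bytes
    print(bdp_bytes(150e6, 0.040))   # 150 Mbps, 40 ms ->   750,000 bytes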

Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Cherie Cheung wrote:
>
>>Could you test with different send sizes?
>
>
> No special reason for that. What do you mean by the kernel not using
> the entire buffer to store the data? I have tried different send
> sizes. It doesn't make any noticeable difference.

Normally, if you do a write that fits in the send buffer,
the write will return immediately. If there isn't enough
room, it will block until the buffer drains and there is
enough room. The kernel also reserves a fraction of the
socket buffer space for internal data management: if you do
a setsockopt() of 128K bytes, for instance, and then do a
getsockopt(), you will notice that the kernel reports twice
what you asked for.
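You can see the doubling for yourself with a few lines (a sketch -
the factor of two is a Linux implementation detail, and the result
is capped by net.core.wmem_max):

    # Ask for a 128K send buffer; on Linux, getsockopt() reports the
    # doubled value the kernel actually reserved.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 131072)
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))  # 262144
    s.close()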


> The performance only improved a little.
>
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
> (172.19.222.215) port 0 AF_INET
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 1398080 1398080 1398080 80.39 26.55

Ah, the idea is not to use such a large send message
size! Increase your buffer sizes - but not your send
message size. Not sure if netperf handles that well -
this is a memory allocation issue. netperf is an intensive
application in TCP stream tests - it does no disk
activity, it generates data on the fly and does repeated
writes of that amount. You might just be blocking on
memory.

I'd be very interested in what you get with those buffer
sizes and 1K, 4K, 16K message sizes.

> It can't compare with that of domain0 to domain0.

So both domains have 128MB? Can you bump that up to, say, 512MB?

>>Were you seeing losses, queue overflows?
>
> How do I check that?

You can do a netstat -s or ifconfig, for instance.
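If you want something quick to eyeball, the TCP counters also live
in /proc/net/snmp - a climbing RetransSegs between runs means you
are retransmitting (a sketch):

    # Print the kernel's TCP counters from /proc/net/snmp. The file
    # has a header row of counter names followed by a row of values.
    with open("/proc/net/snmp") as f:
        tcp_rows = [line.split()[1:] for line in f
                    if line.startswith("Tcp:")]
    names, values = tcp_rows
    print(dict(zip(names, values)))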

> Is it really a problem with the buffer size and send size? domain0
> can achieve such good performance under the same settings. Is the
> bottleneck the overhead in the VM itself?
>
> Also, I performed some more tests, this time with bandwidth
> 150Mbit/s and RTT 40ms:
>
> domain0 to domain0
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 65536 65536 80.17 135.01
>
> vm to vm
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 65536 65536 80.55 134.80
>
> Under these settings, VM to VM performed as well as domain0 to
> domain0. If I increased or decreased the BDP, the performance
> dropped again.

Very interesting - possibly you're managing to send
closer to your real bandwidth-delay product? It would be
interesting to get the numbers across a range of RTTs.

thanks,
Nivedita



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel