Mailing List Archive

Directional network performance issues with Neutron + OpenvSwitch

Hi Folks

I'm seeing an odd directional performance issue with my Havana test
rig which I'm struggling to debug; details:

Ubuntu 12.04 with Linux 3.8 backports kernel, Havana Cloud Archive
(currently Havana b3, OpenvSwitch 1.10.2), OpenvSwitch plugin with GRE
overlay networks.

I've configured the MTUs on all of the physical host network
interfaces to 1546 to add capacity for the GRE encapsulation headers.
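
For reference, one plausible accounting of the extra 46 bytes (1546 - 1500) is sketched below; the exact field sizes depend on the tunnel options (GRE key, inner VLAN tagging), so treat the breakdown as illustrative rather than definitive:

```shell
# Header overhead a GRE-encapsulated tenant frame adds on the physical
# wire, assuming a keyed GRE tunnel carrying a VLAN-tagged inner
# Ethernet frame:
outer_ipv4=20   # outer IPv4 header
gre_base=4      # mandatory GRE header
gre_key=4       # optional GRE key (OVS uses it to carry the tunnel ID)
inner_eth=14    # encapsulated Ethernet header
inner_vlan=4    # encapsulated 802.1Q tag, if present
echo $((outer_ipv4 + gre_base + gre_key + inner_eth + inner_vlan))  # prints 46
```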

Performance between instances within a single tenant network on
different physical hosts is as I would expect (near 1 Gbit/s), but I
see issues when data transits the Neutron L3 gateway - in the example
below, churel is a physical host on the same network as the layer 3
gateway:

ubuntu@churel:~$ scp hardware.dump 10.98.191.103:
hardware.dump                                 100%   67MB   4.8MB/s   00:14

ubuntu@churel:~$ scp 10.98.191.103:hardware.dump .
hardware.dump                                 100%   67MB  66.8MB/s   00:01

As you can see, pushing data to the instance (via a floating IP,
10.98.191.103) is painfully slow, whereas pulling the same data is
10x+ faster (and closer to what I would expect).

iperf confirms the same:

ubuntu@churel:~$ iperf -c 10.98.191.103 -m
------------------------------------------------------------
Client connecting to 10.98.191.103, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  3] local 10.98.191.11 port 55330 connected with 10.98.191.103 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  60.8 MBytes  50.8 Mbits/sec
[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

ubuntu@james-page-bastion:~$ iperf -c 10.98.191.11 -m
------------------------------------------------------------
Client connecting to 10.98.191.11, TCP port 5001
TCP window size: 23.3 KByte (default)
------------------------------------------------------------
[  3] local 10.5.0.2 port 52190 connected with 10.98.191.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.07 GBytes  918 Mbits/sec
[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)


918 Mbit/s vs 50 Mbit/s.

I tcpdump'ed the traffic and I see a lot of duplicate ACKs, which
makes me suspect some sort of packet fragmentation, but it's got me
puzzled.

Anyone have any ideas about how to debug this further? Or has anyone
seen anything like this before?

Cheers

James


--
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Re: Directional network performance issues with Neutron + OpenvSwitch
On 10/02/2013 02:14 AM, James Page wrote:

>
> I tcpdump'ed the traffic and I see a lot of duplicate ACKs which makes
> me suspect some sort of packet fragmentation, but it's got me puzzled.
>
> Anyone have any ideas about how to debug this further? or has anyone
> seen anything like this before?

Duplicate ACKs can be triggered by missing or out-of-order TCP segments.
Presumably that would show up in the tcpdump trace, though it might be
easier to see if you run the .pcap file through tcptrace -G.
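
Something along these lines would get a trace for tcptrace to chew on (interface name, host filter, and file names are just examples):

```shell
# Capture the slow-direction transfer, then feed the pcap to tcptrace
# for time-sequence graphs; -s 96 keeps just the headers.
sudo tcpdump -i eth0 -s 96 -w slow-direction.pcap host 10.98.191.103
# ... reproduce the slow scp/iperf run, stop the capture, then:
tcptrace -G slow-direction.pcap   # emits xplot-format graph files
# Look for runs of duplicate ACKs / retransmissions in the tsg graphs.
```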

Iperf may have a similar option, but if there are actual TCP
retransmissions during the run, netperf can be told to tell you about
them (when running under Linux):

netperf -H <remote> -t TCP_STREAM -- -o
throughput,local_transport_retrans,remote_transport_retrans

will measure the transfer to <remote>

and

netperf -H <remote> -t TCP_MAERTS -- -o
throughput,local_transport_retrans,remote_transport_retrans

will measure the transfer from <remote>. Or you can take snapshots of
netstat -s output from before and after your iperf run(s) and do the
math by hand.
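
The by-hand approach might look something like this (counter names vary a bit across kernel versions, so the grep is deliberately loose):

```shell
# Snapshot Linux TCP retransmission counters around the test run and
# compare; a large delta points at loss on that path.
netstat -s | grep -i retrans > /tmp/before.txt
iperf -c 10.98.191.103 -t 10        # substitute the slow-direction run
netstat -s | grep -i retrans > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt
```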

rick jones
if the netperf in multiverse isn't new enough to grok the -o option, you
can grab the top-of-trunk from http://www.netperf.org/svn/netperf2/trunk
via svn.

Re: Directional network performance issues with Neutron + OpenvSwitch
Hi James, have you tried setting the MTU to a lower number of bytes,
instead of a higher-than-1500 setting? Say... 1454 instead of 1546?

Curious to see if that resolves the issue. If it does, then perhaps
there is a path somewhere that had a <1546 PMTU?
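
For a quick test that would just be the following inside the guest (1454 is an example value, and the dnsmasq approach for making it persistent is a common pattern rather than anything James has configured):

```shell
# Drop the vNIC MTU inside the instance for a one-off test:
sudo ip link set dev eth0 mtu 1454

# If that helps, the usual way to make it stick is to have the Neutron
# DHCP agent push DHCP option 26 (interface MTU) via a dnsmasq config
# file referenced by dnsmasq_config_file in dhcp_agent.ini:
#   dhcp-option-force=26,1454
```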

-jay

On 10/02/2013 05:14 AM, James Page wrote:
> I've configured the MTUs on all of the physical host network
> interfaces to 1546 to add capacity for the GRE network headers.
> [...]
> As you can see, pushing data to the instance (via a floating IP
> 10.98.191.103) is painfully slow, whereas pulling the same data is
> 10x+ faster (and closer to what I would expect).
> [...]
> 918 Mbit/s vs 50 Mbit/s.

Re: Directional network performance issues with Neutron + OpenvSwitch

Hi Jay

On 02/10/13 16:37, Jay Pipes wrote:
> Hi James, have you tried setting the MTU to a lower number of
> bytes, instead of a higher-than-1500 setting? Say... 1454 instead
> of 1546?
>
> Curious to see if that resolves the issue. If it does, then
> perhaps there is a path somewhere that had a <1546 PMTU?

Do you mean in instances, or on the physical servers?

For context, I hit this problem prior to tweaking MTUs (defaults of
1500 everywhere).


--
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org

Re: Directional network performance issues with Neutron + OpenvSwitch
http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html


-----Original Message-----
From: James Page [mailto:james.page@ubuntu.com]
Sent: Wednesday, October 02, 2013 9:17 AM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Directional network performance issues with Neutron + OpenvSwitch


Hi Jay

On 02/10/13 16:37, Jay Pipes wrote:
> Hi James, have you tried setting the MTU to a lower number of bytes,
> instead of a higher-than-1500 setting? Say... 1454 instead of 1546?
>
> Curious to see if that resolves the issue. If it does, then perhaps
> there is a path somewhere that had a <1546 PMTU?

Do you mean in instances, or on the physical servers?

For context, I hit this problem prior to tweaking MTUs (defaults of
1500 everywhere).


--
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org

Re: Directional network performance issues with Neutron + OpenvSwitch
On 10/02/2013 12:17 PM, James Page wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hi Jay
>
> On 02/10/13 16:37, Jay Pipes wrote:
>> Hi James, have you tried setting the MTU to a lower number of
>> bytes, instead of a higher-than-1500 setting? Say... 1454 instead
>> of 1546?
>>
>> Curious to see if that resolves the issue. If it does, then
>> perhaps there is a path somewhere that had a <1546 PMTU?
>
> Do you mean in instances, or on the physical servers?

I mean on the instance vNICs.

> For context I hit this problem prior to tweaking MTU's (defaults of
> 1500 everywhere).

Right, I'm just curious :)

-jay



Re: Directional network performance issues with Neutron + OpenvSwitch

Hi Gangur

On 02/10/13 17:24, Gangur, Hrushikesh (R & D HP Cloud) wrote:
> http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html

Yeah - I read that already:

sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221 \
    traceroute -n 10.5.0.2 -p 44444 --mtu
traceroute to 10.5.0.2 (10.5.0.2), 30 hops max, 65000 byte packets
 1  10.5.0.2  0.950 ms F=1500  0.598 ms  0.566 ms

The PMTU from the L3 gateway to the instance looks OK to me.

> On 02/10/13 16:37, Jay Pipes wrote:
>> Hi James, have you tried setting the MTU to a lower number of
>> bytes, instead of a higher-than-1500 setting? Say... 1454 instead
>> of 1546?
>
>> Curious to see if that resolves the issue. If it does, then
>> perhaps there is a path somewhere that had a <1546 PMTU?
>
> Do you mean in instances, or on the physical servers?
>
> For context I hit this problem prior to tweaking MTU's (defaults
> of 1500 everywhere).

--
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org

Re: Directional network performance issues with Neutron + OpenvSwitch

On 02/10/13 17:28, Jay Pipes wrote:
>> On 02/10/13 16:37, Jay Pipes wrote:
>>> Hi James, have you tried setting the MTU to a lower number of
>>> bytes, instead of a higher-than-1500 setting? Say... 1454
>>> instead of 1546?
>>>
>>> Curious to see if that resolves the issue. If it does, then
>>> perhaps there is a path somewhere that had a <1546 PMTU?
>>
>> Do you mean in instances, or on the physical servers?
>
> I mean on the instance vNICs.

Yeah - that's what I thought - that makes no difference either.

--
James Page
Technical Lead
Ubuntu Server Team
james.page@canonical.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJSTE7zAAoJEL/srsug59jDwswP/AwQarblKhDnAe+aGYVn1hKs
g/BPiyqovNtBNNXKj5FLIaDDQnpLueIDxoX0lHPZkKLpDJybrsQBtqwnol2qcBa3
rBfb/yt92vL8wDlRBEsbh1qr/2EmErksFjcIMIltqBNXP5gGR3ADS9DIJ65GUIFY
Aipsk03bu3pn2FiCJo/cbbKBT96bbQg9vNgbUi8Eu8vWW7wpEq90njlDrVh02u/o
ioME0Ja8DnFrPNmIx8kaaOdXSY9e3YmWfjImQbi/O7lVwUHV7ZA+4szSrQiCmPn3
eHUGTblLP2yEmETu3rF7hxB1bn2H3bxZ+C1vg7k3ABNlTMrDPHTQv+iRSCA9WDcf
yMNjCD5dTI10gx+OTDjEIg+z2yEA4fqmYqHgHsuPyCBdRs6CX1qIJPywFZlFDglC
AC1R6PMtpVTlcUXlLX/3QJc63/n+3nX6R56iOmAxgDIaVLy5+Hh52g+5vY1T5Nl8
B0aqM60Duxvpf6/9wkgSHcjp7MHBp1IEoT8b+aD5xwSZjG+gqW2wClCGx6ktOfnN
vwxmaTT+rY2vqLNXd51PF2Tfl5+cfK2Sws3lnmJwh5PxZtcwfY42wiBAJWbuJMDT
EIurmHqSPhBkylZlONWto7oNyDSaiqYczbTXGM3eYw/ZqTpgN/X9JuCpMAxt51oI
ALR0na+J0AIQcRUS0P4M
=CQbq
-----END PGP SIGNATURE-----

Re: Directional network performance issues with Neutron + OpenvSwitch

On 02/10/13 17:33, James Page wrote:
> On 02/10/13 17:24, Gangur, Hrushikesh (R & D HP Cloud) wrote:
>> http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html
>
> Yeah - I read that already:
>
> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221 \
>     traceroute -n 10.5.0.2 -p 44444 --mtu
> traceroute to 10.5.0.2 (10.5.0.2), 30 hops max, 65000 byte packets
>  1  10.5.0.2  0.950 ms F=1500  0.598 ms  0.566 ms
>
> The PMTU from the l3 gateway to the instance looks OK to me.

I spent a bit more time debugging this; performance from within the
router netns on the L3 gateway node looks good in both directions when
accessing via the tenant network (10.5.0.2) over the qr-XXXXX
interface, but when accessing through the external network from within
the netns I see the same performance choke upstream into the tenant
network.

Which would indicate that my problem lies somewhere around the
qg-XXXXX interface in the router netns - just trying to figure out
exactly what - maybe iptables is doing something wonky?
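
A couple of commands I can run from inside the namespace to narrow it down (the router UUID is the one from the traceroute above; qg-XXXXX stays a placeholder until I read the real name off ip addr in the netns):

```shell
# Watch traffic on the external-facing interface inside the router
# namespace while reproducing the slow transfer:
sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221 \
    tcpdump -n -i qg-XXXXX 'tcp and host 10.98.191.103'

# And dump the NAT rules Neutron installed for the floating IP:
sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221 \
    iptables -t nat -L -n -v
```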

--
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org

Re: Directional network performance issues with Neutron + OpenvSwitch
Hi James,

Let me ask you something...

Are you using the package `openvswitch-datapath-dkms' from Havana Ubuntu
Cloud Archive with Linux 3.8?

I am unable to compile that module on top of Ubuntu 12.04.3 (with Linux
3.8) and I'm wondering if it is still required or not...

Thanks!
Thiago


On 2 October 2013 06:14, James Page <james.page@ubuntu.com> wrote:

> I'm seeing an odd directional performance issue with my Havana test rig
> which I'm struggling to debug; details:
>
> Ubuntu 12.04 with Linux 3.8 backports kernel, Havana Cloud Archive
> (currently Havana b3, OpenvSwitch 1.10.2), OpenvSwitch plugin with GRE
> overlay networks.
> [...]
Re: Directional network performance issues with Neutron + OpenvSwitch
I believe it's still needed: upstream kernel developers have pushed back
against the modules it provides, but Neutron needs them to deliver the
GRE tunnels.

-Rob

On 3 October 2013 13:15, Martinx - ジェームズ <thiagocmartinsc@gmail.com> wrote:
> Hi James,
>
> Let me ask you something...
>
> Are you using the package `openvswitch-datapath-dkms' from Havana Ubuntu
> Cloud Archive with Linux 3.8?
>
> I am unable to compile that module on top of Ubuntu 12.04.3 (with Linux 3.8)
> and I'm wondering if it is still required or not...
>
> Thanks!
> Thiago
>
> [...]



--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud

Re: Directional network performance issues with Neutron + OpenvSwitch
Mmm... I am unable to compile openvswitch-datapath-dkms from Havana Ubuntu
Cloud Archive (on top of a fresh install of Ubuntu 12.04.3), look:

------
root@havabuntu-1:~# uname -a
Linux havabuntu-1 3.8.0-31-generic #46~precise1-Ubuntu SMP Wed Sep 11
18:21:16 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

root@havabuntu-1:~# dpkg -l | grep openvswitch-datapath-dkms
ii  openvswitch-datapath-dkms  1.10.2-0ubuntu1~cloud0  Open vSwitch datapath module source - DKMS version

root@havabuntu-1:~# dpkg-reconfigure openvswitch-datapath-dkms

------------------------------
Deleting module version: 1.10.2
completely from the DKMS tree.
------------------------------
Done.

Creating symlink /var/lib/dkms/openvswitch/1.10.2/source ->
/usr/src/openvswitch-1.10.2

DKMS: add completed.

Kernel preparation unnecessary for this kernel. Skipping...

Building module:
cleaning build area....(bad exit status: 2)
./configure --with-linux='/lib/modules/3.8.0-31-generic/build' && make -C datapath/linux............(bad exit status: 2)
Error! Bad return status for module build on kernel: 3.8.0-31-generic
(x86_64)
Consult /var/lib/dkms/openvswitch/1.10.2/build/make.log for more
information.
------

Contents of /var/lib/dkms/openvswitch/1.10.2/build/make.log:

http://paste.openstack.org/show/47888/

I also have the packages: build-essential, linux-headers, etc, installed...

So, James, do you have this module compiled on your test environment? I
mean, does the command "dpkg-reconfigure openvswitch-datapath-dkms" work
for you?!

NOTE: It also doesn't compile with Linux 3.2 (Ubuntu 12.04.1).

Thanks,
Thiago


On 2 October 2013 22:28, Robert Collins <robertc@robertcollins.net> wrote:

> I believe it's still needed: upstream kernel have pushed back against
> the modules it provides, but neutron needs them to deliver the gre
> tunnels.
>
> -Rob
>
> On 3 October 2013 13:15, Martinx - ジェームズ <thiagocmartinsc@gmail.com>
> wrote:
> > Hi James,
> >
> > Let me ask you something...
> >
> > Are you using the package `openvswitch-datapath-dkms' from Havana Ubuntu
> > Cloud Archive with Linux 3.8?
> >
> > I am unable to compile that module on top of Ubuntu 12.04.3 (with Linux
> 3.8)
> > and I'm wondering if it is still required or not...
> >
> > Thanks!
> > Thiago
> >
> >
> > On 2 October 2013 06:14, James Page <james.page@ubuntu.com> wrote:
> >>
> >> -----BEGIN PGP SIGNED MESSAGE-----
> >> Hash: SHA256
> >>
> >> Hi Folks
> >>
> >> I'm seeing an odd direction performance issue with my Havana test rig
> >> which I'm struggling to debug; details:
> >>
> >> Ubuntu 12.04 with Linux 3.8 backports kernel, Havana Cloud Archive
> >> (currently Havana b3, OpenvSwitch 1.10.2), OpenvSwitch plugin with GRE
> >> overlay networks.
> >>
> >> I've configured the MTU's on all of the physical host network
> >> interfaces to 1546 to add capacity for the GRE network headers.
> >>
> >> Performance between instances within a single tenant network on
> >> different physical hosts is as I would expect (near 1GBps), but I see
> >> issues when data transits the Neutron L3 gateway - in the example
> >> below churel is a physical host on the same network as the layer 3
> >> gateway:
> >>
> >> ubuntu@churel:~$ scp hardware.dump 10.98.191.103:
> >> hardware.dump
> >> 100% 67MB 4.8MB/s
> >> 00:14
> >>
> >> ubuntu@churel:~$ scp 10.98.191.103:hardware.dump .
> >> hardware.dump
> >> 100% 67MB
> >> 66.8MB/s 00:01
> >>
> >> As you can see, pushing data to the instance (via a floating ip
> >> 10.98.191.103) is painfully slow, whereas pulling the same data is
> >> x10+ faster (and closer to what I would expect).
> >>
> >> iperf confirms the same:
> >>
> >> ubuntu@churel:~$ iperf -c 10.98.191.103 -m
> >> - ------------------------------------------------------------
> >> Client connecting to 10.98.191.103, TCP port 5001
> >> TCP window size: 22.9 KByte (default)
> >> - ------------------------------------------------------------
> >> [ 3] local 10.98.191.11 port 55330 connected with 10.98.191.103 port
> 5001
> >> [ ID] Interval Transfer Bandwidth
> >> [ 3] 0.0-10.0 sec 60.8 MBytes 50.8 Mbits/sec
> >> [ 3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
> >>
> >> ubuntu@james-page-bastion:~$ iperf -c 10.98.191.11 -m
> >>
> >>
> >> - ------------------------------------------------------------
> >> Client connecting to 10.98.191.11, TCP port 5001
> >> TCP window size: 23.3 KByte (default)
> >> - ------------------------------------------------------------
> >> [ 3] local 10.5.0.2 port 52190 connected with 10.98.191.11 port 5001
> >> [ ID] Interval Transfer Bandwidth
> >> [ 3] 0.0-10.0 sec 1.07 GBytes 918 Mbits/sec
> >> [ 3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
> >>
> >>
> >> 918Mbit vs 50Mbits.
> >>
> >> I tcpdump'ed the traffic and I see alot of duplicate acks which makes
> >> me suspect some sort of packet fragmentation but its got me puzzled.
> >>
> >> Anyone have any ideas about how to debug this further? or has anyone
> >> seen anything like this before?
> >>
> >> Cheers
> >>
> >> James
> >>
> >>
> >> - --
> >> James Page
> >> Ubuntu and Debian Developer
> >> james.page@ubuntu.com
> >> jamespage@debian.org
> >> -----BEGIN PGP SIGNATURE-----
> >> Version: GnuPG v1.4.14 (GNU/Linux)
> >> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
> >>
> >> iQIcBAEBCAAGBQJSS+QSAAoJEL/srsug59jD8ZcQAKbZDVU8KKa7hsic7+ulqWQQ
> >> EFbq8Im5x4mQY7htIvIOM26BR0ktAO5luE7zMBXsA4AwPud1BQSGhw89/NvNhADT
> >> TLcGdQADsomeiBpJebzwUmvL/tYUoMDRA3O96mUn2pi0fySWbEuEgMDjDJ/ow23D
> >> Y7nEv0mItaZ4MBSI9RZcqsDUl7UbbdlGejSWhJcwp/127HMU9nYwWNz5UHJjsGZ1
> >> eITyv1WZH/dYPQ1SES41qD1WvkTBugopGJvptEyrcO62A+akGOvnqpsHgPECbLb+
> >> b/8rk8nB1HB74Wh+tQP4WRQCZYso15nB6ukIyIU24Qti2tXtXDdKwszEoblCwCT3
> >> YZJTERNOENURlUEFwgi6FNL+nZomSG0UJU6qqDGiUJkbSF7SwJm4y8/XRlJM2Ihn
> >> wyxFB0qe3YdMqgDLZn11GwCDqn3g11hYaocHNUyRaj/tgxhGKbOFvix5kz3I4V7T
> >> gd+sqUySMVd9wCRXBzDDhCuG9xf/QY2ZQxXzyfPJWd9svPh/O6osTSQzaI1eZl9/
> >> jVRejMAFr6Rl11GPKd3DYi32GXa896QELjBmJ9Kof0NDlCcDuUKpVeifIhcbQZZV
> >> sWyQmbb6Z/ypFV9xXiLRfH2fW2bAQQHgiQGvy9apoE78BWYdnsD8Q3Ekwag6lFqp
> >> yUwt/RcRXS1PbLG4EGFW
> >> =HTvW
> >> -----END PGP SIGNATURE-----
> >>
> >
> >
> >
> >
>
>
>
> --
> Robert Collins <rbtcollins@hp.com>
> Distinguished Technologist
> HP Converged Cloud
>
Re: Directional network performance issues with Neutron + OpenvSwitch
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 03/10/13 04:43, Martinx - ジェームズ wrote:
> Mmm... I am unable to compile openvswitch-datapath-dkms from
> Havana Ubuntu Cloud Archive (on top of a fresh install of Ubuntu
> 12.04.3), look:

There is a bug in that version; I'm deploying from
ppa:ubuntu-cloud-archive/havana-staging, which has a version that does
work - we are testing everything prior to pushing through to proposed
and updates for rc1 (i.e. this week).


- --
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJSTQh2AAoJEL/srsug59jDYvgQAIFpc/NTKGHBUSCRX3JiRVru
iBK2EuPZeNhh9Y4oXO14/zhDNp4/vnDQcJMNAZskUxuA5HcAnLp9oZbleKqG/r7W
w0s9fpkPzzYabaKR431QzJhm+3NIuMqtSgNy0ZX7zO9om3vkSAtLLTUlyYIHxTj3
owPpndN527XUuYalwFF7ffdZK0oIOX65XEUehmX1SPEeOGNhrWjnLH8rcr5XcCbL
VaGPMcqkJLjW+aKTjr4Xi0R6geQ+BjM7g+FNtu7BR4V+laxLyKz9f+WPdrdfcFQP
PLt6gBG6/OVzmZD8Fxs2iD0ox/KaC7gfhxF7ffF1aFwZIhzMZhUYtmCxNSPx80lG
FXOG9R54kDzvPzPNdZLS+dYUcuSBjFLw3Wjrplxzlok+cLjlqjfoABHXlhFjfcuM
Qr5QeUnJc9at+2p8JBjBRK1uxLgV2G+R7umIcjS9SIiD0kK9mKHGDbdKHJ4pvto8
sMAtIDAYMT+hEPWZ7i7x3lqbd/G2ipwKi2exgKy2VVfxB11qTY07boqNztd905NG
iOpusyvFqouHZZJ4SC5OziTTa3rcy2nhta2uYT946aS22z3BxESePlzi/PCJ5faU
h6HA7qIZyr4aUH75I/FBBmDasFrSKA7xJUYXPHa5wV1pnBvSs6QA14P0q43OsmwX
OQyC1OFfgRfE49kX14QZ
=TjDN
-----END PGP SIGNATURE-----

Re: Directional network performance issues with Neutron + OpenvSwitch
Cool! The `ppa:ubuntu-cloud-archive/havana-staging' is the repository I was
looking for. It works now... Thanks!

On 3 October 2013 03:02, James Page <james.page@ubuntu.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 03/10/13 04:43, Martinx - ジェームズ wrote:
> > Mmm... I am unable to compile openvswitch-datapath-dkms from
> > Havana Ubuntu Cloud Archive (on top of a fresh install of Ubuntu
> > 12.04.3), look:
>
> There is a bug in that version; I'm deploying from
> ppa:ubuntu-cloud-archive/havana-staging which has a version that does
> work - we are testing everything prior to push through to proposed and
> updates for rc1 (i.e. this week).
>
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.page@ubuntu.com
> jamespage@debian.org
>
Re: Directional network performance issues with Neutron + OpenvSwitch
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 02/10/13 22:49, James Page wrote:
>> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>>> ms F=1500 0.598 ms 0.566 ms
>>>
>>> The PMTU from the l3 gateway to the instance looks OK to me.
> I spent a bit more time debugging this; performance from within
> the router netns on the L3 gateway node looks good in both
> directions when accessing via the tenant network (10.5.0.2) over
> the qr-XXXXX interface, but when accessing through the external
> network from within the netns I see the same performance choke
> upstream into the tenant network.
>
> Which would indicate that my problem lies somewhere around the
> qg-XXXXX interface in the router netns - just trying to figure out
> exactly what - maybe iptables is doing something wonky?

OK - I found a fix, but I'm not sure why it makes a difference; neither
my l3-agent nor my dhcp-agent configuration had 'ovs_use_veth = True';
I switched this on, cleared everything down, rebooted, and now I see
good symmetric performance across all neutron routers.

This would point to some sort of underlying bug when ovs_use_veth = False.


- --
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
jkJM4Y1BUV+2L5Rrf3sc
=4lO4
-----END PGP SIGNATURE-----

Re: Directional network performance issues with Neutron + OpenvSwitch
James,

I think I'm hitting this problem.

I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and
L3+DHCP Network Node.

The connectivity from behind my Instances is very slow. It takes an
eternity to finish "apt-get update".

If I run "apt-get update" from within tenant's Namespace, it goes fine.

If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I and
unable to start new Ubuntu Instances and login into them... Look:

--
cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
2013-10-22 06:01:42,989 - util.py[WARNING]: '
http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]:
url error [[Errno 113] No route to host]
2013-10-22 06:01:45,988 - util.py[WARNING]: '
http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]:
url error [[Errno 113] No route to host]
--

Is this problem still around?!

Should I stay away from GRE tunnels when with Havana + Ubuntu 12.04.3?

Is it possible to re-enable Metadata when ovs_use_veth = true ?

Thanks!
Thiago


On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 02/10/13 22:49, James Page wrote:
> >> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
> >>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
> >>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
> >>> ms F=1500 0.598 ms 0.566 ms
> >>>
> >>> The PMTU from the l3 gateway to the instance looks OK to me.
> > I spent a bit more time debugging this; performance from within
> > the router netns on the L3 gateway node looks good in both
> > directions when accessing via the tenant network (10.5.0.2) over
> > the qr-XXXXX interface, but when accessing through the external
> > network from within the netns I see the same performance choke
> > upstream into the tenant network.
> >
> > Which would indicate that my problem lies somewhere around the
> > qg-XXXXX interface in the router netns - just trying to figure out
> > exactly what - maybe iptables is doing something wonky?
>
> OK - I found a fix but I'm not sure why this makes a difference;
> neither my l3-agent or dhcp-agent configuration had 'ovs_use_veth =
> True'; I switched this on, clearing everything down, rebooted and now
> I seem symmetric good performance across all neutron routers.
>
> This would point to some sort of underlying bug when ovs_use_veth = False.
>
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.page@ubuntu.com
> jamespage@debian.org
>
>
Re: Directional network performance issues with Neutron + OpenvSwitch
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Martinx

On 21/10/13 23:52, Martinx - ジェームズ wrote:
> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels
> and L3+DHCP Network Node.
>
> The connectivity from behind my Instances is very slow. It takes
> an eternity to finish "apt-get update".
>
> If I run "apt-get update" from within tenant's Namespace, it goes
> fine.
>
> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working
> and I and unable to start new Ubuntu Instances and login into
> them... Look:
>
> -- cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up
> 4.01 seconds 2013-10-22 06:01:42,989 - util.py[WARNING]:
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> [3/120s]: url error [[Errno 113] No route to host] 2013-10-22
> 06:01:45,988 - util.py[WARNING]:
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> [6/120s]: url error [[Errno 113] No route to host] --
>
> Is this problem still around?!

Definitely sounds similar; I'd ensure that all of the namespaces on
the gateway/data-forwarding node are correct by giving it a reboot.

I think this needs a bug; Neutron should be OK without the use of veth
- I'll get to that today.

Cheers

James

- --
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJSZoqMAAoJEL/srsug59jDlYAQAJkzfeVcWElQbB9LWQ4CRwjy
KwiAsFN6UVVnUgh4gZtS6Nb9xUtA4oQN/X8hVbSK9Ng5bSErot1NrjRITnWH0Wjl
70Tg4vh4ofufrYzzvGcUVGJ0FB1V+pf/XDAk5vMNEF6iMs7/XETWsabN15dPPUOv
Hq+YKo+8eeDgASVszelb8Hy14oZ7mJ1uaGIUTCqXH3Zbrkcwqw9Cp0AJ621pQ6K4
W0deiyy89+Br/FF65pi358949o1z7xexo+R74i9mPwUyeEuR27EeZEo9sM2LgLkR
kvk4jhndAZNgnK4ijc6ATqKuiDqgyUbrwJi4MTIbN2iFKtEV9gwftW/LRBwL5ihN
CgTgUw3ocKRudstgqUJ4Y1UjAmeztnrdQ3ZYuj1IXqqnpjvWvBxE87ajmoj6xhEL
miaxEKHkQuiM6XTuSmmoUvVQw5H77ZaRBTUCtTr2yUbaHArrBgjCwdAWsXjv2jp0
OO59k6Und6Mugi1tpUOWgrupgcrqG0Bc0W9XC+Q11WhYVYaoDh6QEjGFY8/5H5Mp
gUfu6jvGA891eDbYDMFclB2XDAKDxKGvMsnJbJ3UbC/tQBmmviemKgbKqRAO3Pt7
692bLGwuTy/t69EbTqs/+USaJGn9G2l2pZk8CgvmmHEU4dqdKqtFsZCfn4X3+w41
sl0NaHdulfF8HRgQN6ES
=kaLf
-----END PGP SIGNATURE-----

Re: Directional network performance issues with Neutron + OpenvSwitch
Hi James!

Any updates about this issue?!

I am unable to provide good connectivity for my tenants. I have already
tried everything I could, without success... I tried installing it again,
from scratch, in parallel (another isolated lab), using new hardware;
same result.

I'm trying to use the "Per-Tenant Routers with Private Networks" topology,
and it is useless... I'm using GRE tunnels, and a Network Node with 3
ethernets.

Also, when I enable ovs_use_veth, DHCP / Metadata stops working, and
reboots don't fix it. So, I am unable to check whether ovs_use_veth
fixes the performance issue.

I'm a bit tired of GRE tunnels, too much trouble... Maybe it is time to try
VXLAN... But it will be a shot in the dark... :-/

Grizzly Reference HowTo, used to guide my Havana deployment:


https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst

Thanks!
Thiago


On 22 October 2013 12:24, James Page <james.page@ubuntu.com> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hi Martinx
>
> On 21/10/13 23:52, Martinx - ジェームズ wrote:
> > I'm using "Per-Tenant Routers with Private Networks", GRE tunnels
> > and L3+DHCP Network Node.
> >
> > The connectivity from behind my Instances is very slow. It takes
> > an eternity to finish "apt-get update".
> >
> > If I run "apt-get update" from within tenant's Namespace, it goes
> > fine.
> >
> > If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working
> > and I and unable to start new Ubuntu Instances and login into
> > them... Look:
> >
> > -- cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up
> > 4.01 seconds 2013-10-22 06:01:42,989 - util.py[WARNING]:
> > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> > [3/120s]: url error [[Errno 113] No route to host] 2013-10-22
> > 06:01:45,988 - util.py[WARNING]:
> > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
> > [6/120s]: url error [[Errno 113] No route to host] --
> >
> > Is this problem still around?!
>
> Definatetly sounds similar; I'd ensure that all of the namespaces on
> the gateways/data forwarding node are correct by giving it a reboot;
>
> I think this needs a bug; Neutron should be OK without the use of veth
> - - I'll get to that today.
>
> Cheers
>
> James
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.page@ubuntu.com
> jamespage@debian.org
>
Re: Directional network performance issues with Neutron + OpenvSwitch
On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ジェームズ <thiagocmartinsc@gmail.com
> wrote:

> James,
>
> I think I'm hitting this problem.
>
> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and
> L3+DHCP Network Node.
>
> The connectivity from behind my Instances is very slow. It takes an
> eternity to finish "apt-get update".
>


I'm curious if you can do the following tests to help pinpoint the
bottleneck:

Run iperf or netperf between:
- two instances on the same hypervisor (if performance is bad here, it
points to the virtualization driver);
- two instances on different hypervisors;
- one instance and the namespace of the l3 agent.
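When comparing results across those runs, it can help to normalize the
summary lines iperf prints into a single unit. A small helper for that
(a sketch; it assumes the classic iperf2 summary format shown elsewhere
in this thread):

```python
import re

# Matches classic iperf2 summary lines, e.g.:
# "[  3]  0.0-10.0 sec   237 MBytes   198 Mbits/sec"
SUMMARY = re.compile(
    r"\[\s*\d+\]\s+[\d.]+-\s*[\d.]+\s+sec\s+"
    r"([\d.]+)\s+([KMG])Bytes\s+([\d.]+)\s+([KMG])bits/sec"
)

UNIT = {"K": 1e3, "M": 1e6, "G": 1e9}

def bandwidth_mbps(line):
    """Return the reported bandwidth in Mbit/s, or None if no match."""
    m = SUMMARY.search(line)
    if m is None:
        return None
    return float(m.group(3)) * UNIT[m.group(4)] / 1e6

print(bandwidth_mbps("[  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec"))  # 939.0
```

Feeding each test's summary line through this makes the per-path
comparison (same hypervisor vs. cross-hypervisor vs. via the l3 agent)
easier to tabulate.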






>
> If I run "apt-get update" from within tenant's Namespace, it goes fine.
>
> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I and
> unable to start new Ubuntu Instances and login into them... Look:
>
> --
> cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
> 2013-10-22 06:01:42,989 - util.py[WARNING]: '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]:
> url error [[Errno 113] No route to host]
> 2013-10-22 06:01:45,988 - util.py[WARNING]: '
> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]:
> url error [[Errno 113] No route to host]
> --
>


Do you see anything interesting in the neutron-metadata-agent log? Or
does it look like your instance doesn't have a route to the default
gateway?
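The "No route to host" errors above come down to which route the guest
picks for 169.254.169.254; a minimal sketch of the longest-prefix lookup
the kernel performs (the route table here is hypothetical, modelled on
the `ip route` output quoted later in the thread):

```python
import ipaddress

def route_for(dest, routes):
    """Longest-prefix match of dest against 'ip route'-style entries.

    routes: list of (cidr_or_'default', nexthop_or_dev) tuples;
    returns the matching next hop / device, or None.
    """
    dest = ipaddress.ip_address(dest)
    best = None
    for cidr, via in routes:
        net = ipaddress.ip_network("0.0.0.0/0" if cidr == "default" else cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, via)
    return best[1] if best else None

# With only a default route, metadata traffic goes to the tenant router,
# where the metadata proxy must intercept it:
routes = [("default", "192.168.210.1"), ("192.168.210.0/24", "eth0")]
print(route_for("169.254.169.254", routes))  # 192.168.210.1
```

If the guest has no default route (or the router namespace is broken),
that lookup fails and cloud-init reports exactly the Errno 113 seen above.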


>
> Is this problem still around?!
>
> Should I stay away from GRE tunnels when with Havana + Ubuntu 12.04.3?
>
> Is it possible to re-enable Metadata when ovs_use_veth = true ?
>
> Thanks!
> Thiago
>
>
> On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:
>
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> On 02/10/13 22:49, James Page wrote:
>> >> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>> >>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>> >>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>> >>> ms F=1500 0.598 ms 0.566 ms
>> >>>
>> >>> The PMTU from the l3 gateway to the instance looks OK to me.
>> > I spent a bit more time debugging this; performance from within
>> > the router netns on the L3 gateway node looks good in both
>> > directions when accessing via the tenant network (10.5.0.2) over
>> > the qr-XXXXX interface, but when accessing through the external
>> > network from within the netns I see the same performance choke
>> > upstream into the tenant network.
>> >
>> > Which would indicate that my problem lies somewhere around the
>> > qg-XXXXX interface in the router netns - just trying to figure out
>> > exactly what - maybe iptables is doing something wonky?
>>
>> OK - I found a fix but I'm not sure why this makes a difference;
>> neither my l3-agent or dhcp-agent configuration had 'ovs_use_veth =
>> True'; I switched this on, clearing everything down, rebooted and now
>> I seem symmetric good performance across all neutron routers.
>>
>> This would point to some sort of underlying bug when ovs_use_veth = False.
>>
>>
>> - --
>> James Page
>> Ubuntu and Debian Developer
>> james.page@ubuntu.com
>> jamespage@debian.org
>>
>>
>
>
>
>
Re: Directional network performance issues with Neutron + OpenvSwitch
On 10/23/2013 05:40 PM, Aaron Rosen wrote:
> I'm curious if you can do the following tests to help pinpoint the
> bottle neck:
>
> Run iperf or netperf between:
> two instances on the same hypervisor - this will determine if it's a
> virtualization driver issue if the performance is bad.
> two instances on different hypervisors.
> one instance to the namespace of the l3 agent.

If you happen to run netperf, I would suggest something like:

netperf -H <otherinstance> -t TCP_STREAM -l 30 -- -m 64K -o
throughput,local_transport_retrans

If you need data flowing the other direction, then I would suggest:

netperf -H <otherinstance> -t TCP_MAERTS -l 30 -- -m ,64K -o
throughput,remote_transport_retrans


You could add ",transport_mss" to those lists after the -o option if you
want.

What you will get is throughput (in 10^6 bits/s) and the number of TCP
retransmissions for the data connection (assuming the OS running in the
instances is Linux). Netperf will present 64KB of data to the transport
in each send call, and will run for 30 seconds. The socket buffer sizes
will be at their defaults - which under linux means they will autotune.

happy benchmarking,

rick jones

For extra credit :) you can run:

netperf -t TCP_RR -H <otherinstance> -l 30

if you are curious about latency.

Re: Directional network performance issues with Neutron + OpenvSwitch
Hi Aaron,

Thanks for answering! =)

Lets work...

---

TEST #1 - iperf between Network Node and its Uplink router (Data Center's
gateway "Internet") - OVS br-ex / eth2

# Tenant Namespace route table

root@net-node-1:~# ip netns exec
qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ip route
default via 172.16.0.1 dev qg-50b615b7-c2
172.16.0.0/20 dev qg-50b615b7-c2 proto kernel scope link src 172.16.0.2
192.168.210.0/24 dev qr-a1376f61-05 proto kernel scope link src
192.168.210.1

# there is a "iperf -s" running at 172.16.0.1 "Internet", testing it

root@net-node-1:~# ip netns exec
qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -c 172.16.0.1
------------------------------------------------------------
Client connecting to 172.16.0.1, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 5] local 172.16.0.2 port 58342 connected with 172.16.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 668 MBytes 559 Mbits/sec
---

---

TEST #2 - iperf on one instance to the Namespace of the L3 agent + uplink
router

# iperf server running within Tenant's Namespace router

root@net-node-1:~# ip netns exec
qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -s

-

# from instance-1

ubuntu@instance-1:~$ ip route
default via 192.168.210.1 dev eth0 metric 100
192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.2

# instance-1 performing tests against net-node-1 Namespace above

ubuntu@instance-1:~$ iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 43739 connected with 192.168.210.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 484 MBytes 406 Mbits/sec

# still on instance-1, now against "External IP" of its own Namespace /
Router

ubuntu@instance-1:~$ iperf -c 172.16.0.2
------------------------------------------------------------
Client connecting to 172.16.0.2, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 34703 connected with 172.16.0.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 520 MBytes 436 Mbits/sec

# still on instance-1, now against the Data Center UpLink Router

ubuntu@instance-1:~$ iperf -c 172.16.0.1
------------------------------------------------------------
Client connecting to 172.16.0.1, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.4 port 38401 connected with 172.16.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 324 MBytes 271 Mbits/sec
---

This latest test shows only 271 Mbit/s! I think it should be at least
400~430 Mbit/s... Right?!

---

TEST #3 - Two instances on the same hypervisor

# iperf server

ubuntu@instance-2:~$ ip route
default via 192.168.210.1 dev eth0 metric 100
192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.4

ubuntu@instance-2:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 45800
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec

# iperf client

ubuntu@instance-1:~$ iperf -c 192.168.210.4
------------------------------------------------------------
Client connecting to 192.168.210.4, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 45800 connected with 192.168.210.4 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec
---

---

TEST #4 - Two instances on different hypervisors - over GRE

root@instance-2:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 34640
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 237 MBytes 198 Mbits/sec


root@instance-1:~# iperf -c 192.168.210.4
------------------------------------------------------------
Client connecting to 192.168.210.4, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 34640 connected with 192.168.210.4 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 237 MBytes 198 Mbits/sec
---

I just realized how slow my intra-cloud (VM-to-VM) communication is... :-/

---

TEST #5 - Two hypervisors - "GRE TUNNEL LAN" - OVS local_ip / remote_ip

# Same path as "TEST #4", but testing the physical path (where the GRE
traffic actually flows)

root@hypervisor-2:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.20.2.57 port 5001 connected with 10.20.2.53 port 51694
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec

root@hypervisor-1:~# iperf -c 10.20.2.57
------------------------------------------------------------
Client connecting to 10.20.2.57, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 3] local 10.20.2.53 port 51694 connected with 10.20.2.57 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
---

About Test #5: I don't know why the GRE traffic (Test #4) doesn't reach
1 Gbit/s (only ~200 Mbit/s?), since its physical path is much faster
(gigabit LAN). Plus, Test #3 shows a pretty fast speed when traffic stays
within a single hypervisor (3.96 Gbit/s).

Tomorrow I'll repeat these tests with netperf.
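For what it's worth, a usual suspect for a Test #4-style collapse over a fast physical path is MTU/fragmentation on the tunnel: if any hop carrying the encapsulated traffic is still at a 1500-byte MTU, full-size inner packets get fragmented. A sketch of the GRE overhead arithmetic (plain IPv4 GRE assumed; the header sizes are the standard ones, the helper name is mine):

```python
# Outer overhead for GRE over IPv4: 20-byte outer IP header plus a 4-byte
# base GRE header; a GRE key (as OVS tunnels use) adds 4 more bytes.
OUTER_IP = 20
GRE_BASE = 4
GRE_KEY = 4

def outer_payload(inner_packet: int, keyed: bool = True) -> int:
    """Bytes of outer IP packet needed to carry one inner packet."""
    return inner_packet + OUTER_IP + GRE_BASE + (GRE_KEY if keyed else 0)

print(outer_payload(1500))               # 1528: too big for a 1500-byte MTU
print(outer_payload(1500, keyed=False))  # 1524
# Alternative to raising the physical MTU: shrink the instance MTU so the
# encapsulated packet still fits in 1500 bytes.
print(1500 - OUTER_IP - GRE_BASE - GRE_KEY)  # 1472
```

Checking the instance and hypervisor MTUs, and watching the tunnel interface with tcpdump for fragments, would confirm or rule this out.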

NOTE: I'm using Open vSwitch 1.11.0, compiled for Ubuntu 12.04.3 via
"dpkg-buildpackage" and installed the Debian/Ubuntu way. If I downgrade
to 1.10.2 from the Havana Cloud Archive, I get the same results... I can
downgrade it, if you guys tell me to do so.

BTW, I'll install another "Region", based on Havana on Ubuntu 13.10, with
exactly the same configuration as my current Havana + Ubuntu 12.04.3, on
top of the same hardware, to see if the problem still persists.

Regards,
Thiago

On 23 October 2013 22:40, Aaron Rosen <arosen@nicira.com> wrote:

>
>
>
> On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ジェームズ <
> thiagocmartinsc@gmail.com> wrote:
>
>> James,
>>
>> I think I'm hitting this problem.
>>
>> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and
>> L3+DHCP Network Node.
>>
>> The connectivity from behind my Instances is very slow. It takes an
>> eternity to finish "apt-get update".
>>
>
>
> I'm curious if you can run the following tests to help pinpoint the
> bottleneck:
>
> Run iperf or netperf between:
> two instances on the same hypervisor - if performance is bad here, it
> points to a virtualization driver issue.
> two instances on different hypervisors.
> one instance and the namespace of the l3 agent.
>
>>
>> If I run "apt-get update" from within tenant's Namespace, it goes fine.
>>
>> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I
>> am unable to start new Ubuntu Instances and log into them... Look:
>>
>> --
>> cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
>> 2013-10-22 06:01:42,989 - util.py[WARNING]: '
>> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
>> [3/120s]: url error [[Errno 113] No route to host]
>> 2013-10-22 06:01:45,988 - util.py[WARNING]: '
>> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
>> [6/120s]: url error [[Errno 113] No route to host]
>> --
>>
>
>
> Do you see anything interesting in the neutron-metadata-agent log? Or does
> it look like your instance doesn't have a route to the default gateway?
>
>
>>
>> Is this problem still around?!
>>
>> Should I stay away from GRE tunnels when with Havana + Ubuntu 12.04.3?
>>
>> Is it possible to re-enable Metadata when ovs_use_veth = true ?
>>
>> Thanks!
>> Thiago
>>
>>
>> On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:
>>
>>> -----BEGIN PGP SIGNED MESSAGE-----
>>> Hash: SHA256
>>>
>>> On 02/10/13 22:49, James Page wrote:
>>> >> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>>> >>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>>> >>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>>> >>> ms F=1500 0.598 ms 0.566 ms
>>> >>>
>>> >>> The PMTU from the l3 gateway to the instance looks OK to me.
>>> > I spent a bit more time debugging this; performance from within
>>> > the router netns on the L3 gateway node looks good in both
>>> > directions when accessing via the tenant network (10.5.0.2) over
>>> > the qr-XXXXX interface, but when accessing through the external
>>> > network from within the netns I see the same performance choke
>>> > upstream into the tenant network.
>>> >
>>> > Which would indicate that my problem lies somewhere around the
>>> > qg-XXXXX interface in the router netns - just trying to figure out
>>> > exactly what - maybe iptables is doing something wonky?
>>>
>>> OK - I found a fix, but I'm not sure why it makes a difference:
>>> neither my l3-agent nor dhcp-agent configuration had 'ovs_use_veth =
>>> True'; I switched this on, cleared everything down, rebooted, and now
>>> I see good symmetric performance across all neutron routers.
>>>
>>> This would point to some sort of underlying bug when ovs_use_veth =
>>> False.
>>>
>>>
>>> - --
>>> James Page
>>> Ubuntu and Debian Developer
>>> james.page@ubuntu.com
>>> jamespage@debian.org
>>> -----BEGIN PGP SIGNATURE-----
>>> Version: GnuPG v1.4.14 (GNU/Linux)
>>> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>>>
>>> iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
>>> fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
>>> CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
>>> qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
>>> Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
>>> 7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
>>> SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
>>> P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
>>> UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
>>> 0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
>>> DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
>>> jkJM4Y1BUV+2L5Rrf3sc
>>> =4lO4
>>> -----END PGP SIGNATURE-----
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
Re: Directional network performance issues with Neutron + OpenvSwitch [ In reply to ]
Precisely!

The doc currently says to disable namespaces when using GRE; I never did
this before, look:

http://docs.openstack.org/trunk/install-guide/install/apt/content/install-neutron.install-plugin.ovs.gre.html

But in this very same doc, they say to enable it... Who knows?! =P

http://docs.openstack.org/trunk/install-guide/install/apt/content/section_networking-routers-with-private-networks.html

I'm sticking with namespaces enabled...
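For reference, these are the two agent options being discussed; an illustrative Havana-era fragment (file paths and values are my understanding of the agents, double-check them against your own deployment):

```ini
# /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini (Havana-era)
# Keep per-tenant network namespaces on:
use_namespaces = True
# The setting James toggled to get symmetric performance (but which broke
# Metadata/DHCP for me):
ovs_use_veth = True
```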

Let me ask you something: when you enable ovs_use_veth, do Metadata and
DHCP still work?!

Cheers!
Thiago


On 24 October 2013 12:22, Speichert,Daniel <djs428@drexel.edu> wrote:

> Hello everyone,
>
> It seems we also ran into the same issue.
>
> We are running Ubuntu Saucy with OpenStack Havana from the Ubuntu Cloud
> archives (precise-updates).
>
> The download speed to the VMs increased from 5 Mbps to maximum after
> enabling ovs_use_veth. Upload speed from the VMs is still terrible (max 1
> Mbps, usually 0.04 Mbps).
>
> Here is the iperf between the instance and the L3 agent (network node),
> inside the namespace.
>
> root@cloud:~# ip netns exec qrouter-a29e0200-d390-40d1-8cf7-7ac1cef5863a
> iperf -c 10.1.0.24 -r
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> ------------------------------------------------------------
> Client connecting to 10.1.0.24, TCP port 5001
> TCP window size: 585 KByte (default)
> ------------------------------------------------------------
> [  7] local 10.1.0.1 port 37520 connected with 10.1.0.24 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  7]  0.0-10.0 sec   845 MBytes   708 Mbits/sec
> [  6] local 10.1.0.1 port 5001 connected with 10.1.0.24 port 53006
> [  6]  0.0-31.4 sec   256 KBytes  66.7 Kbits/sec
>
> We are using Neutron OpenVSwitch with GRE and namespaces.
>
> A side question: the documentation says to disable namespaces with GRE and
> enable them with VLANs. It always worked well for us on Grizzly with GRE
> and namespaces, and we could never get it to work without namespaces. Is
> there any specific reason why the documentation advises disabling it?
>
> Regards,
> Daniel
>
> [Thiago's earlier message (Tests #1-#5 and the prior thread) quoted in
> full; trimmed]
Re: Directional network performance issues with Neutron + OpenvSwitch [ In reply to ]
On Thu, Oct 24, 2013 at 10:37 AM, Martinx - ジェームズ <thiagocmartinsc@gmail.com
> wrote:

> Precisely!
>
> The doc currently says to disable namespaces when using GRE; I never did
> this before, look:
>
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/install-neutron.install-plugin.ovs.gre.html
>
> But in this very same doc, they say to enable it... Who knows?! =P
>
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/section_networking-routers-with-private-networks.html
>
> I'm sticking with namespaces enabled...
>
>
Just a reminder: /trunk/ links are works in progress. Thanks for bringing
the mismatch to our attention; we already have a doc bug filed:

https://bugs.launchpad.net/openstack-manuals/+bug/1241056

Review this patch: https://review.openstack.org/#/c/53380/

Anne




> Let me ask you something: when you enable ovs_use_veth, do Metadata and
> DHCP still work?!
>
> Cheers!
> Thiago
>
>
> On 24 October 2013 12:22, Speichert,Daniel <djs428@drexel.edu> wrote:
>
>> [Daniel's message and the earlier test results, quoted in full above;
>> trimmed]
>>
>> ------------------------------------------------------------****
>>
>> Client connecting to 10.20.2.57, TCP port 5001****
>>
>> TCP window size: 22.9 KByte (default)****
>>
>> ------------------------------------------------------------****
>>
>> [ 3] local 10.20.2.53 port 51694 connected with 10.20.2.57 port 5001****
>>
>> [ ID] Interval Transfer Bandwidth****
>>
>> [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec****
>>
>> ---****
>>
>> ** **
>>
>> About Test #5, I don't know why the GRE traffic (Test #4) doesn't reach
>> 1Gbit/sec (only ~200Mbit/s ?), since its physical path is much faster
>> (GIGALan). Plus, Test #3 shows a pretty fast speed when traffic flows only
>> within a hypervisor (3.96Gbit/sec).****
>>
>> ** **
>>
>> Tomorrow, I'll do this tests with netperf.****
>>
>> ** **
>>
>> NOTE: I'm using Open vSwitch 1.11.0, compiled for Ubuntu 12.04.3, via
>> "dpkg-buildpackage" and installed via "Debian / Ubuntu way". If I downgrade
>> to 1.10.2 from Havana Cloud Archive, same results... I can downgrade it, if
>> you guys tell me to do so.****
>>
>> ** **
>>
>> BTW, I'll install another "Region", based on Havana on Ubuntu 13.10, with
>> exactly the same configurations from my current Havana + Ubuntu 12.04.3, on
>> top of the same hardware, to see if the problem still persist.****
>>
>> ** **
>>
>> Regards,****
>>
>> Thiago****
>>
>> ** **
>>
>> On 23 October 2013 22:40, Aaron Rosen <arosen@nicira.com> wrote:****
>>
>> ** **
>>
>> ** **
>>
>> On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ジェームズ <
>> thiagocmartinsc@gmail.com> wrote:****
>>
>> James,****
>>
>> ** **
>>
>> I think I'm hitting this problem.****
>>
>> ** **
>>
>> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and
>> L3+DHCP Network Node.****
>>
>> ** **
>>
>> The connectivity from behind my Instances is very slow. It takes an
>> eternity to finish "apt-get update".****
>>
>> ** **
>>
>> ** **
>>
>> I'm curious if you can do the following tests to help pinpoint the bottle
>> neck: ****
>>
>> ** **
>>
>> Run iperf or netperf between:****
>>
>> two instances on the same hypervisor - this will determine if it's a
>> virtualization driver issue if the performance is bad. ****
>>
>> two instances on different hypervisors.****
>>
>> one instance to the namespace of the l3 agent. ****
>>
>> ** **
>>
>> ** **
>>
>> ** **
>>
>> ** **
>>
>> ****
>>
>> ** **
>>
>> If I run "apt-get update" from within tenant's Namespace, it goes fine.**
>> **
>>
>> ** **
>>
>> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I
>> and unable to start new Ubuntu Instances and login into them... Look:****
>>
>> ** **
>>
>> --****
>>
>> cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
>> ****
>>
>> 2013-10-22 06:01:42,989 - util.py[WARNING]: '
>> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
>> [3/120s]: url error [[Errno 113] No route to host]****
>>
>> 2013-10-22 06:01:45,988 - util.py[WARNING]: '
>> http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
>> [6/120s]: url error [[Errno 113] No route to host]****
>>
>> --****
>>
>> ** **
>>
>> ** **
>>
>> Do you see anything interesting in the neutron-metadata-agent log? Or it
>> looks like your instance doesn't have a route to the default gw? ****
>>
>> ****
>>
>> ** **
>>
>> Is this problem still around?!****
>>
>> ** **
>>
>> Should I stay away from GRE tunnels when with Havana + Ubuntu 12.04.3?***
>> *
>>
>> ** **
>>
>> Is it possible to re-enable Metadata when ovs_use_veth = true ?****
>>
>> ** **
>>
>> Thanks!****
>>
>> Thiago****
>>
>> ** **
>>
>> On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:****
>>
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256****
>>
>> On 02/10/13 22:49, James Page wrote:
>> >> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>> >>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>> >>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>> >>> ms F=1500 0.598 ms 0.566 ms
>> >>>
>> >>> The PMTU from the l3 gateway to the instance looks OK to me.
>> > I spent a bit more time debugging this; performance from within
>> > the router netns on the L3 gateway node looks good in both
>> > directions when accessing via the tenant network (10.5.0.2) over
>> > the qr-XXXXX interface, but when accessing through the external
>> > network from within the netns I see the same performance choke
>> > upstream into the tenant network.
>> >
>> > Which would indicate that my problem lies somewhere around the
>> > qg-XXXXX interface in the router netns - just trying to figure out
>> > exactly what - maybe iptables is doing something wonky?****
>>
>> OK - I found a fix but I'm not sure why this makes a difference;
>> neither my l3-agent or dhcp-agent configuration had 'ovs_use_veth =
>> True'; I switched this on, clearing everything down, rebooted and now
>> I seem symmetric good performance across all neutron routers.
>>
>> This would point to some sort of underlying bug when ovs_use_veth = False.
>> ****
>>
>>
>>
>> - --
>> James Page
>> Ubuntu and Debian Developer
>> james.page@ubuntu.com
>> jamespage@debian.org
>> -----BEGIN PGP SIGNATURE-----
>> Version: GnuPG v1.4.14 (GNU/Linux)
>> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/****
>>
>> iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
>> fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
>> CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
>> qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
>> Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
>> 7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
>> SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
>> P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
>> UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
>> 0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
>> DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
>> jkJM4Y1BUV+2L5Rrf3sc
>> =4lO4****
>>
>> -----END PGP SIGNATURE-----
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack****
>>
>> ** **
>>
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack****
>>
>> ** **
>>
>> ** **
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
> _______________________________________________
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
Re: Directional network performance issues with Neutron + OpenvSwitch [ In reply to ]
We managed to bring the upload speed on the instances back to maximum by following this guide:
http://docs.openstack.org/trunk/openstack-network/admin/content/openvswitch_plugin.html

Basically, the MTU needs to be lowered for GRE tunnels. It can be done with DHCP as explained in the new trunk manual.
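For anyone finding this in the archives, the change that guide describes boils down to handing instances a smaller MTU via DHCP. A rough sketch follows; it writes into the current directory so it can be dry-run anywhere, and the real paths (/etc/neutron/...) and the 1400-byte value are typical examples, not taken from Daniel's setup:

```shell
# Dry-run sketch: writes into the current directory; on a real network node
# these files normally live under /etc/neutron.
conf_dir=${conf_dir:-.}

# dnsmasq override -- DHCP option 26 is the interface MTU; 1400 leaves ample
# room for GRE encapsulation overhead on a 1500-byte physical network.
cat > "$conf_dir/dnsmasq-neutron.conf" <<'EOF'
dhcp-option-force=26,1400
EOF

# Point the DHCP agent at the override (normally in /etc/neutron/dhcp_agent.ini),
# then restart neutron-dhcp-agent and renew the instances' DHCP leases.
cat >> "$conf_dir/dhcp_agent.ini" <<EOF
dnsmasq_config_file = $conf_dir/dnsmasq-neutron.conf
EOF

cat "$conf_dir/dnsmasq-neutron.conf"
```

Instances only pick the new MTU up when they renew their lease, so a reboot (or dhclient run) inside each guest is needed after the agent restart.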

Regards,
Daniel

From: annegentle@justwriteclick.com [mailto:annegentle@justwriteclick.com] On Behalf Of Anne Gentle
Sent: Thursday, October 24, 2013 12:08 PM
To: Martinx - ジェームズ
Cc: Speichert,Daniel; openstack@lists.openstack.org
Subject: Re: [Openstack] Directional network performance issues with Neutron + OpenvSwitch



On Thu, Oct 24, 2013 at 10:37 AM, Martinx - ジェームズ <thiagocmartinsc@gmail.com> wrote:
Precisely!

The doc currently says to disable Namespaces when using GRE; I never did this before, look:

http://docs.openstack.org/trunk/install-guide/install/apt/content/install-neutron.install-plugin.ovs.gre.html

But in this very same doc, they say to enable it... Who knows?! =P

http://docs.openstack.org/trunk/install-guide/install/apt/content/section_networking-routers-with-private-networks.html

I stick with Namespace enabled...


Just a reminder, /trunk/ links are works in progress; thanks for bringing the mismatch to our attention. We already have a doc bug filed:

https://bugs.launchpad.net/openstack-manuals/+bug/1241056

Review this patch: https://review.openstack.org/#/c/53380/

Anne



Let me ask you something: when you enable ovs_use_veth, do Metadata and DHCP still work?!

Cheers!
Thiago

On 24 October 2013 12:22, Speichert,Daniel <djs428@drexel.edu> wrote:
Hello everyone,

It seems we also ran into the same issue.

We are running Ubuntu Saucy with OpenStack Havana from Ubuntu Cloud archives (precise-updates).

The download speed to the VMs increased from 5 Mbps to maximum after enabling ovs_use_veth. Upload speed from the VMs is still terrible (max 1 Mbps, usually 0.04 Mbps).

Here is the iperf run between the instance and the L3 agent (network node), inside the namespace.

root@cloud:~# ip netns exec qrouter-a29e0200-d390-40d1-8cf7-7ac1cef5863a iperf -c 10.1.0.24 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.0.24, TCP port 5001
TCP window size: 585 KByte (default)
------------------------------------------------------------
[ 7] local 10.1.0.1 port 37520 connected with 10.1.0.24 port 5001
[ ID] Interval Transfer Bandwidth
[ 7] 0.0-10.0 sec 845 MBytes 708 Mbits/sec
[ 6] local 10.1.0.1 port 5001 connected with 10.1.0.24 port 53006
[ 6] 0.0-31.4 sec 256 KBytes 66.7 Kbits/sec

We are using Neutron OpenVSwitch with GRE and namespaces.

A side question: the documentation says to disable namespaces with GRE and enable them with VLANs. It always worked well for us on Grizzly with GRE and namespaces, and we could never get it to work without namespaces. Is there any specific reason why the documentation advises disabling it?

Regards,
Daniel

From: Martinx - ジェームズ [mailto:thiagocmartinsc@gmail.com]
Sent: Thursday, October 24, 2013 3:58 AM
To: Aaron Rosen
Cc: openstack@lists.openstack.org

Subject: Re: [Openstack] Directional network performance issues with Neutron + OpenvSwitch

Hi Aaron,

Thanks for answering! =)

Lets work...

---

TEST #1 - iperf between Network Node and its Uplink router (Data Center's gateway "Internet") - OVS br-ex / eth2

# Tenant Namespace route table

root@net-node-1:~# ip netns exec qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ip route
default via 172.16.0.1 dev qg-50b615b7-c2
172.16.0.0/20 dev qg-50b615b7-c2 proto kernel scope link src 172.16.0.2
192.168.210.0/24 dev qr-a1376f61-05 proto kernel scope link src 192.168.210.1

# there is a "iperf -s" running at 172.16.0.1 "Internet", testing it

root@net-node-1:~# ip netns exec qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -c 172.16.0.1
------------------------------------------------------------
Client connecting to 172.16.0.1, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 5] local 172.16.0.2 port 58342 connected with 172.16.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 668 MBytes 559 Mbits/sec
---

---

TEST #2 - iperf on one instance to the Namespace of the L3 agent + uplink router

# iperf server running within Tenant's Namespace router

root@net-node-1:~# ip netns exec qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -s

-

# from instance-1

ubuntu@instance-1:~$ ip route
default via 192.168.210.1 dev eth0 metric 100
192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.2

# instance-1 performing tests against net-node-1 Namespace above

ubuntu@instance-1:~$ iperf -c 192.168.210.1
------------------------------------------------------------
Client connecting to 192.168.210.1, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 43739 connected with 192.168.210.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 484 MBytes 406 Mbits/sec

# still on instance-1, now against "External IP" of its own Namespace / Router

ubuntu@instance-1:~$ iperf -c 172.16.0.2
------------------------------------------------------------
Client connecting to 172.16.0.2, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 34703 connected with 172.16.0.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 520 MBytes 436 Mbits/sec

# still on instance-1, now against the Data Center UpLink Router

ubuntu@instance-1:~$ iperf -c 172.16.0.1
------------------------------------------------------------
Client connecting to 172.16.0.1, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.4 port 38401 connected with 172.16.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 324 MBytes 271 Mbits/sec
---

This latest test shows only 271 Mbits/s! I think it should be at least 400~430 Mbits/s... Right?!

---

TEST #3 - Two instances on the same hypervisor

# iperf server

ubuntu@instance-2:~$ ip route
default via 192.168.210.1 dev eth0 metric 100
192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.4

ubuntu@instance-2:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 45800
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec

# iperf client

ubuntu@instance-1:~$ iperf -c 192.168.210.4
------------------------------------------------------------
Client connecting to 192.168.210.4, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 45800 connected with 192.168.210.4 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 4.61 GBytes 3.96 Gbits/sec
---

---

TEST #4 - Two instances on different hypervisors - over GRE

root@instance-2:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 34640
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 237 MBytes 198 Mbits/sec


root@instance-1:~# iperf -c 192.168.210.4
------------------------------------------------------------
Client connecting to 192.168.210.4, TCP port 5001
TCP window size: 21.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.210.2 port 34640 connected with 192.168.210.4 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 237 MBytes 198 Mbits/sec
---

I just realized how slow my intra-cloud (inter-VM) communication is... :-/

---

TEST #5 - Two hypervisors - "GRE TUNNEL LAN" - OVS local_ip / remote_ip

# Same path as "TEST #4", but testing the physical GRE path (where GRE traffic flows)

root@hypervisor-2:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.20.2.57 port 5001 connected with 10.20.2.53 port 51694
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec

root@hypervisor-1:~# iperf -c 10.20.2.57
------------------------------------------------------------
Client connecting to 10.20.2.57, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[ 3] local 10.20.2.53 port 51694 connected with 10.20.2.57 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
---

About Test #5: I don't know why the GRE traffic (Test #4) doesn't reach 1 Gbit/sec (only ~200 Mbit/s?), since its physical path is much faster (Gigabit LAN). Plus, Test #3 shows a pretty fast speed when traffic flows only within a hypervisor (3.96 Gbit/sec).

Tomorrow, I'll redo these tests with netperf.

NOTE: I'm using Open vSwitch 1.11.0, compiled for Ubuntu 12.04.3 via "dpkg-buildpackage" and installed the "Debian / Ubuntu way". If I downgrade to 1.10.2 from the Havana Cloud Archive, I get the same results... I can downgrade it, if you guys tell me to do so.

BTW, I'll install another "Region", based on Havana on Ubuntu 13.10, with exactly the same configuration as my current Havana + Ubuntu 12.04.3, on top of the same hardware, to see if the problem still persists.

Regards,
Thiago

On 23 October 2013 22:40, Aaron Rosen <arosen@nicira.com> wrote:


On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ジェームズ <thiagocmartinsc@gmail.com> wrote:
James,

I think I'm hitting this problem.

I'm using "Per-Tenant Routers with Private Networks", GRE tunnels and an L3+DHCP Network Node.

The connectivity from behind my Instances is very slow. It takes an eternity to finish "apt-get update".


I'm curious if you can do the following tests to help pinpoint the bottleneck:

Run iperf or netperf between:
two instances on the same hypervisor - if the performance is bad, this will point to a virtualization driver issue.
two instances on different hypervisors.
one instance to the namespace of the l3 agent.






If I run "apt-get update" from within the tenant's Namespace, it goes fine.

If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I am unable to start new Ubuntu Instances and log in to them... Look:

--
cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
2013-10-22 06:01:42,989 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: url error [[Errno 113] No route to host]
2013-10-22 06:01:45,988 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]: url error [[Errno 113] No route to host]
--


Do you see anything interesting in the neutron-metadata-agent log? Or does it look like your instance doesn't have a route to the default gw?


Is this problem still around?!

Should I stay away from GRE tunnels when using Havana + Ubuntu 12.04.3?

Is it possible to re-enable Metadata when ovs_use_veth = true ?

Thanks!
Thiago

On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 02/10/13 22:49, James Page wrote:
>> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>>> ms F=1500 0.598 ms 0.566 ms
>>>
>>> The PMTU from the l3 gateway to the instance looks OK to me.
> I spent a bit more time debugging this; performance from within
> the router netns on the L3 gateway node looks good in both
> directions when accessing via the tenant network (10.5.0.2) over
> the qr-XXXXX interface, but when accessing through the external
> network from within the netns I see the same performance choke
> upstream into the tenant network.
>
> Which would indicate that my problem lies somewhere around the
> qg-XXXXX interface in the router netns - just trying to figure out
> exactly what - maybe iptables is doing something wonky?
OK - I found a fix but I'm not sure why this makes a difference;
neither my l3-agent nor dhcp-agent configuration had 'ovs_use_veth =
True'; I switched this on, cleared everything down, rebooted, and now
I see symmetric, good performance across all neutron routers.

This would point to some sort of underlying bug when ovs_use_veth = False.


- --
James Page
Ubuntu and Debian Developer
james.page@ubuntu.com
jamespage@debian.org
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
jkJM4Y1BUV+2L5Rrf3sc
=4lO4
-----END PGP SIGNATURE-----

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: Directional network performance issues with Neutron + OpenvSwitch [ In reply to ]
OK, so that says that PMTUd is failing, probably due to a
bug/limitation in Open vSwitch. Can we please make sure a bug is filed
- both on Neutron and on the upstream component - as soon as someone
tracks it down: manual MTU lowering is only needed when a network
component is failing to report failed delivery of DF packets
correctly.
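The encapsulation arithmetic behind the MTU workarounds is easy to sanity-check. A quick sketch, using plain GRE-over-IPv4 figures (tunnels that set a GRE key, as OVS tunnels commonly do, add another 4 bytes):

```shell
# Plain GRE-over-IPv4 overhead per packet: 20-byte outer IPv4 header plus a
# 4-byte base GRE header.
outer_ip=20
gre=4

# Largest tenant-side MTU that still fits an untouched 1500-byte physical link:
tenant_mtu=$(( 1500 - outer_ip - gre ))
echo "tenant MTU over a stock 1500-byte link: $tenant_mtu"   # 1476

# The other direction: raise the physical MTU so tenants can keep 1500.
needed_phys=$(( 1500 + outer_ip + gre ))
echo "physical MTU needed for 1500-byte tenant frames: $needed_phys"   # 1524
```

The values used in this thread (DHCP-pushed MTUs well below 1476, and the 1546-byte physical MTU mentioned at the top) simply build extra headroom on top of these minimums; none of it would be necessary if failed delivery of DF packets were reported correctly.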

-Rob

On 25 October 2013 08:38, Speichert,Daniel <djs428@drexel.edu> wrote:
> We managed to bring the upload speed back to maximum on the instances
> through the use of this guide:
>
> http://docs.openstack.org/trunk/openstack-network/admin/content/openvswitch_plugin.html
>
> Basically, the MTU needs to be lowered for GRE tunnels. It can be done with
> DHCP as explained in the new trunk manual.
>
> Regards,
> Daniel
>
>
>
> From: annegentle@justwriteclick.com [mailto:annegentle@justwriteclick.com]
> On Behalf Of Anne Gentle
> Sent: Thursday, October 24, 2013 12:08 PM
> To: Martinx - ジェームズ
> Cc: Speichert,Daniel; openstack@lists.openstack.org
>
>
> Subject: Re: [Openstack] Directional network performance issues with Neutron
> + OpenvSwitch
>
>
>
>
>
>
>
> On Thu, Oct 24, 2013 at 10:37 AM, Martinx - ジェームズ
> <thiagocmartinsc@gmail.com> wrote:
>
> Precisely!
>
>
>
> The doc currently says to disable Namespace when using GRE, never did this
> before, look:
>
>
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/install-neutron.install-plugin.ovs.gre.html
>
>
>
> But on this very same doc, they say to enable it... Who knows?! =P
>
>
>
> http://docs.openstack.org/trunk/install-guide/install/apt/content/section_networking-routers-with-private-networks.html
>
>
>
> I stick with Namespace enabled...
>
>
>
>
>
> Just a reminder, /trunk/ links are works in progress, thanks for bringing
> the mismatch to our attention, and we already have a doc bug filed:
>
>
>
> https://bugs.launchpad.net/openstack-manuals/+bug/1241056
>
>
>
> Review this patch: https://review.openstack.org/#/c/53380/
>
>
>
> Anne
>
>
>
>
>
>
>
> Let me ask you something, when you enable ovs_use_veth, que Metadata and
> DHCP still works?!
>
>
>
> Cheers!
>
> Thiago
>
>
>
> On 24 October 2013 12:22, Speichert,Daniel <djs428@drexel.edu> wrote:
>
> Hello everyone,
>
>
>
> It seems we also ran into the same issue.
>
>
>
> We are running Ubuntu Saucy with OpenStack Havana from Ubuntu Cloud archives
> (precise-updates).
>
>
>
> The download speed to the VMs increased from 5 Mbps to maximum after
> enabling ovs_use_veth. Upload speed from the VMs is still terrible (max 1
> Mbps, usually 0.04 Mbps).
>
>
>
> Here is the iperf between the instance and L3 agent (network node) inside
> namespace.
>
>
>
> root@cloud:~# ip netns exec qrouter-a29e0200-d390-40d1-8cf7-7ac1cef5863a
> iperf -c 10.1.0.24 -r
>
> ------------------------------------------------------------
>
> Server listening on TCP port 5001
>
> TCP window size: 85.3 KByte (default)
>
> ------------------------------------------------------------
>
> ------------------------------------------------------------
>
> Client connecting to 10.1.0.24, TCP port 5001
>
> TCP window size: 585 KByte (default)
>
> ------------------------------------------------------------
>
> [ 7] local 10.1.0.1 port 37520 connected with 10.1.0.24 port 5001
>
> [ ID] Interval Transfer Bandwidth
>
> [ 7] 0.0-10.0 sec 845 MBytes 708 Mbits/sec
>
> [ 6] local 10.1.0.1 port 5001 connected with 10.1.0.24 port 53006
>
> [ 6] 0.0-31.4 sec 256 KBytes 66.7 Kbits/sec
>
>
>
> We are using Neutron OpenVSwitch with GRE and namespaces.
>
>
> A side question: the documentation says to disable namespaces with GRE and
> enable them with VLANs. It was always working well for us on Grizzly with
> GRE and namespaces and we could never get it to work without namespaces. Is
> there any specific reason why the documentation is advising to disable it?
>
>
>
> Regards,
>
> Daniel
>
>
>
> From: Martinx - ジェームズ [mailto:thiagocmartinsc@gmail.com]
> Sent: Thursday, October 24, 2013 3:58 AM
> To: Aaron Rosen
> Cc: openstack@lists.openstack.org
>
>
> Subject: Re: [Openstack] Directional network performance issues with Neutron
> + OpenvSwitch
>
>
>
> Hi Aaron,
>
>
>
> Thanks for answering! =)
>
>
>
> Lets work...
>
>
>
> ---
>
>
>
> TEST #1 - iperf between Network Node and its Uplink router (Data Center's
> gateway "Internet") - OVS br-ex / eth2
>
>
>
> # Tenant Namespace route table
>
>
>
> root@net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 ip route
>
> default via 172.16.0.1 dev qg-50b615b7-c2
>
> 172.16.0.0/20 dev qg-50b615b7-c2 proto kernel scope link src 172.16.0.2
>
> 192.168.210.0/24 dev qr-a1376f61-05 proto kernel scope link src
> 192.168.210.1
>
>
>
> # there is a "iperf -s" running at 172.16.0.1 "Internet", testing it
>
>
>
> root@net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -c 172.16.0.1
>
> ------------------------------------------------------------
>
> Client connecting to 172.16.0.1, TCP port 5001
>
> TCP window size: 22.9 KByte (default)
>
> ------------------------------------------------------------
>
> [ 5] local 172.16.0.2 port 58342 connected with 172.16.0.1 port 5001
>
> [ ID] Interval Transfer Bandwidth
>
> [ 5] 0.0-10.0 sec 668 MBytes 559 Mbits/sec
>
> ---
>
>
>
> ---
>
>
>
> TEST #2 - iperf on one instance to the Namespace of the L3 agent + uplink
> router
>
>
>
> # iperf server running within Tenant's Namespace router
>
>
>
> root@net-node-1:~# ip netns exec
> qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1 iperf -s
>
>
>
> -
>
>
>
> # from instance-1
>
>
>
> ubuntu@instance-1:~$ ip route
>
> default via 192.168.210.1 dev eth0 metric 100
>
> 192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.2
>
>
>
> # instance-1 performing tests against net-node-1 Namespace above
>
>
>
> ubuntu@instance-1:~$ iperf -c 192.168.210.1
>
> ------------------------------------------------------------
>
> Client connecting to 192.168.210.1, TCP port 5001
>
> TCP window size: 21.0 KByte (default)
>
> ------------------------------------------------------------
>
> [ 3] local 192.168.210.2 port 43739 connected with 192.168.210.1 port 5001
>
> [ ID] Interval Transfer Bandwidth
>
> [ 3] 0.0-10.0 sec 484 MBytes 406 Mbits/sec
>
>
>
> # still on instance-1, now against the "External IP" of its own Namespace / Router
>
> ubuntu@instance-1:~$ iperf -c 172.16.0.2
> ------------------------------------------------------------
> Client connecting to 172.16.0.2, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.210.2 port 34703 connected with 172.16.0.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   520 MBytes   436 Mbits/sec
>
>
>
> # still on instance-1, now against the Data Center UpLink Router
>
> ubuntu@instance-1:~$ iperf -c 172.16.0.1
> ------------------------------------------------------------
> Client connecting to 172.16.0.1, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.210.4 port 38401 connected with 172.16.0.1 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   324 MBytes   271 Mbits/sec
>
> ---
>
> This last test shows only 271 Mbits/sec! I think it should be at least
> 400~430 Mbits/sec... Right?!
>
> ---
>
> TEST #3 - Two instances on the same hypervisor
>
> # iperf server
>
> ubuntu@instance-2:~$ ip route
> default via 192.168.210.1 dev eth0 metric 100
> 192.168.210.0/24 dev eth0 proto kernel scope link src 192.168.210.4
>
> ubuntu@instance-2:~$ iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 45800
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec  4.61 GBytes  3.96 Gbits/sec
>
> # iperf client
>
> ubuntu@instance-1:~$ iperf -c 192.168.210.4
> ------------------------------------------------------------
> Client connecting to 192.168.210.4, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.210.2 port 45800 connected with 192.168.210.4 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  4.61 GBytes  3.96 Gbits/sec
>
> ---
>
> TEST #4 - Two instances on different hypervisors - over GRE
>
> root@instance-2:~# iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 192.168.210.4 port 5001 connected with 192.168.210.2 port 34640
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec   237 MBytes   198 Mbits/sec
>
> root@instance-1:~# iperf -c 192.168.210.4
> ------------------------------------------------------------
> Client connecting to 192.168.210.4, TCP port 5001
> TCP window size: 21.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.210.2 port 34640 connected with 192.168.210.4 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   237 MBytes   198 Mbits/sec
>
> ---
>
> I just realized how slow my intra-cloud (VM-to-VM) communication is... :-/
>
> ---
>
>
>
> TEST #5 - Two hypervisors - "GRE TUNNEL LAN" - OVS local_ip / remote_ip
>
> # Same path as "TEST #4", but testing the physical GRE path (where the GRE
> # traffic flows)
>
> root@hypervisor-2:~# iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 10.20.2.57 port 5001 connected with 10.20.2.53 port 51694
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
>
> root@hypervisor-1:~# iperf -c 10.20.2.57
> ------------------------------------------------------------
> Client connecting to 10.20.2.57, TCP port 5001
> TCP window size: 22.9 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.20.2.53 port 51694 connected with 10.20.2.57 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
>
> ---
>
> About Test #5: I don't know why the GRE traffic (Test #4) doesn't reach
> 1 Gbit/sec (only ~200 Mbit/sec?), since its physical path is much faster
> (Gigabit LAN). Plus, Test #3 shows a pretty fast speed when traffic flows
> only within a hypervisor (3.96 Gbit/sec).
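
(Editor's note: one thing worth double-checking on the Test #4 path is MTU/segmentation, since GRE-over-IPv4 adds an outer IP header plus a GRE header to every tenant packet - this is why James raised his physical MTUs to 1546 earlier in the thread. A back-of-envelope sketch, assuming a plain 4-byte GRE header with no key/checksum options:)

```shell
# Overhead added by GRE-over-IPv4 encapsulation (assumption: basic 4-byte
# GRE header; tunnels carrying a GRE key use an 8-byte header instead).
OUTER_IP=20                         # outer IPv4 header
GRE_HDR=4                           # basic GRE header
OVERHEAD=$((OUTER_IP + GRE_HDR))    # bytes added to every tenant packet

# Underlay MTU needed to carry a full 1500-byte tenant packet un-fragmented:
MIN_UNDERLAY_MTU=$((1500 + OVERHEAD))

# Conversely, the largest tenant-side MTU a stock 1500-byte underlay carries:
MAX_TENANT_MTU=$((1500 - OVERHEAD))

echo "overhead=${OVERHEAD} min_underlay_mtu=${MIN_UNDERLAY_MTU} max_tenant_mtu=${MAX_TENANT_MTU}"
```

If the underlay MTU is too small, the encapsulated packets get fragmented (or dropped), which can easily explain a ~200 Mbit/sec tunnel on a gigabit link.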
>
>
>
> Tomorrow, I'll run these tests again with netperf.
>
> NOTE: I'm using Open vSwitch 1.11.0, compiled for Ubuntu 12.04.3 via
> "dpkg-buildpackage" and installed the "Debian / Ubuntu way". If I downgrade
> to 1.10.2 from the Havana Cloud Archive, I get the same results... I can
> downgrade it if you guys tell me to do so.
>
>
>
> BTW, I'll install another "Region", based on Havana on Ubuntu 13.10, with
> exactly the same configuration as my current Havana + Ubuntu 12.04.3, on
> top of the same hardware, to see if the problem persists.
>
>
>
> Regards,
>
> Thiago
>
>
>
> On 23 October 2013 22:40, Aaron Rosen <arosen@nicira.com> wrote:
>
>
>
>
>
> On Mon, Oct 21, 2013 at 11:52 PM, Martinx - ジェームズ
> <thiagocmartinsc@gmail.com> wrote:
>
> James,
>
>
>
> I think I'm hitting this problem.
>
> I'm using "Per-Tenant Routers with Private Networks", GRE tunnels, and an
> L3+DHCP Network Node.
>
> The connectivity from behind my Instances is very slow. It takes an eternity
> to finish "apt-get update".
>
>
>
>
>
> I'm curious if you can run the following tests to help pinpoint the
> bottleneck:
>
> Run iperf or netperf between:
>
> - two instances on the same hypervisor - if performance is bad here, that
>   points to a virtualization driver issue.
> - two instances on different hypervisors.
> - one instance and the namespace of the l3 agent.
>
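
(Editor's note: a sketch of how the three suggested measurements map onto commands; the instance IPs and the qrouter UUID are the hypothetical placeholders from this thread - substitute your own:)

```shell
# Sketch of the three iperf measurements (placeholder values from this
# thread; the live commands are shown as comments since they need a
# running deployment).
ROUTER_NS="qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1"
SERVER_VM="192.168.210.4"

# 1) Same hypervisor:       instance-2$ iperf -s
#                           instance-1$ iperf -c "$SERVER_VM"
# 2) Different hypervisors: identical commands, with the two instances
#                           scheduled onto different compute nodes.
# 3) Instance <-> l3 agent namespace (server runs inside the namespace):
#                           net-node$   sudo ip netns exec "$ROUTER_NS" iperf -s
#                           instance-1$ iperf -c 192.168.210.1
echo "namespace under test: $ROUTER_NS"
```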
>
> If I run "apt-get update" from within the tenant's Namespace, it goes fine.
>
> If I enable "ovs_use_veth", Metadata (and/or DHCP) stops working and I am
> unable to start new Ubuntu Instances and log into them... Look:
>
> --
> cloud-init start running: Tue, 22 Oct 2013 05:57:39 +0000. up 4.01 seconds
> 2013-10-22 06:01:42,989 - util.py[WARNING]:
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]:
> url error [[Errno 113] No route to host]
> 2013-10-22 06:01:45,988 - util.py[WARNING]:
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]:
> url error [[Errno 113] No route to host]
> --
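
(Editor's note: two hedged checks for the metadata failure above. In this era of Neutron, 169.254.169.254 is intercepted by a NAT REDIRECT rule inside the qrouter namespace and served by a neutron-ns-metadata-proxy, which normally listens on port 9697; the namespace name below is the one from this thread:)

```shell
# Run on the network node; the live commands are commented out because they
# need a running deployment - this block only records the plan.
NS="qrouter-46cb8f7a-a3c5-4da7-ad69-4de63f7c34f1"

# 1) Is the metadata REDIRECT rule present in the router namespace?
#      sudo ip netns exec "$NS" iptables -t nat -S | grep 169.254.169.254
# 2) Is neutron-ns-metadata-proxy listening on its usual port (9697)?
#      sudo ip netns exec "$NS" netstat -lntp | grep 9697
echo "metadata checks target namespace: $NS"
```

If either check fails after toggling ovs_use_veth, that would explain the "No route to host" errors from cloud-init.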
>
>
>
>
>
> Do you see anything interesting in the neutron-metadata-agent log? Or does
> it look like your instance doesn't have a route to the default gw?
>
> Is this problem still around?!
>
> Should I stay away from GRE tunnels with Havana + Ubuntu 12.04.3?
>
> Is it possible to re-enable Metadata when ovs_use_veth = true?
>
> Thanks!
> Thiago
>
>
>
> On 3 October 2013 06:27, James Page <james.page@ubuntu.com> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 02/10/13 22:49, James Page wrote:
>>> sudo ip netns exec qrouter-d3baf1b1-55ee-42cb-a3f6-9629288e3221
>>>> traceroute -n 10.5.0.2 -p 44444 --mtu traceroute to 10.5.0.2
>>>> (10.5.0.2), 30 hops max, 65000 byte packets 1 10.5.0.2 0.950
>>>> ms F=1500 0.598 ms 0.566 ms
>>>>
>>>> The PMTU from the l3 gateway to the instance looks OK to me.
>> I spent a bit more time debugging this; performance from within
>> the router netns on the L3 gateway node looks good in both
>> directions when accessing via the tenant network (10.5.0.2) over
>> the qr-XXXXX interface, but when accessing through the external
>> network from within the netns I see the same performance choke
>> upstream into the tenant network.
>>
>> Which would indicate that my problem lies somewhere around the
>> qg-XXXXX interface in the router netns - just trying to figure out
>> exactly what - maybe iptables is doing something wonky?
>
> OK - I found a fix but I'm not sure why this makes a difference;
> neither my l3-agent nor dhcp-agent configuration had 'ovs_use_veth =
> True'; I switched this on, cleared everything down, rebooted, and now
> I see good, symmetric performance across all neutron routers.
>
> This would point to some sort of underlying bug when ovs_use_veth = False.
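
(Editor's note: for anyone reproducing the workaround, the flag lives in both agent configuration files - a sketch assuming the stock Ubuntu paths:)

```ini
; /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini
; Workaround from this thread, not a root-cause fix; restart the
; neutron-l3-agent and neutron-dhcp-agent services after changing it.
[DEFAULT]
ovs_use_veth = True
```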
>
>
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.page@ubuntu.com
> jamespage@debian.org
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.14 (GNU/Linux)
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQIcBAEBCAAGBQJSTTh6AAoJEL/srsug59jDmpEP/jaB5/yn9+Xm12XrVu0Q3IV5
> fLGOuBboUgykVVsfkWccI/oygNlBaXIcDuak/E4jxPcoRhLAdY1zpX8MQ8wSsGKd
> CjSeuW8xxnXubdfzmsCKSs3FCIBhDkSYzyiJd/raLvCfflyy8Cl7KN2x22mGHJ6z
> qZ9APcYfm9qCVbEssA3BHcUL+st1iqMJ0YhVZBk03+QEXaWu3FFbjpjwx3X1ZvV5
> Vbac7enqy7Lr4DSAIJVldeVuRURfv3YE3iJZTIXjaoUCCVTQLm5OmP9TrwBNHLsA
> 7W+LceQri+Vh0s4dHPKx5MiHsV3RCydcXkSQFYhx7390CXypMQ6WwXEY/a8Egssg
> SuxXByHwEcQFa+9sCwPQ+RXCmC0O6kUi8EPmwadjI5Gc1LoKw5Wov/SEen86fDUW
> P9pRXonseYyWN9I4MT4aG1ez8Dqq/SiZyWBHtcITxKI2smD92G9CwWGo4L9oGqJJ
> UcHRwQaTHgzy3yETPO25hjax8ZWZGNccHBixMCZKegr9p2dhR+7qF8G7mRtRQLxL
> 0fgOAExn/SX59ZT4RaYi9fI6Gng13RtSyI87CJC/50vfTmqoraUUK1aoSjIY4Dt+
> DYEMMLp205uLEj2IyaNTzykR0yh3t6dvfpCCcRA/xPT9slfa0a7P8LafyiWa4/5c
> jkJM4Y1BUV+2L5Rrf3sc
> =4lO4
>
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>
>



--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud

