Mailing List Archive

[neutron][devstack][network] Question on the OVS configuration
Hi All,

Let me briefly state my question: I have two NIC ports, one connected to the internal network (eno1) and the other (eno2) to the external network. My OpenStack cluster is based on DevStack, and Neutron is configured as OVS + VXLAN. How can I make my VM able to reach the external network through the eno2 port?

I tried to add the port to br-ex, but it doesn't seem to work; the VM's traffic still follows the default route on the physical node, which is via eno1.

$ ovs-vsctl add-port br-ex eno2


Thanks in advance for any comments!


Best Regards,
Dave Chen

From: Chen2, Dave
Sent: Friday, June 15, 2018 4:17 PM
To: 'openstack-dev@lists.openstack.org'
Cc: Chen2, Dave
Subject: [neutron] Question on the OVS configuration

Dear folks,

I have set up a fairly simple OpenStack cluster in our lab based on DevStack. A couple of guest VMs are running on one controller node (this doesn't look like the right behavior anyway). The Neutron network is configured as OVS + VXLAN, and the bridge "br-ex" is configured as below:

    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.8.0"



As you can see, there is no external physical NIC bound to "br-ex", so I guess the traffic from the VM to the external network uses the default route set on the controller node. Since there is a NIC (eno2) that can reach the external network, I bound it to "br-ex" like this: ovs-vsctl add-port br-ex eno2. Now "br-ex" is configured as below:

    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        *Port "eno2"*
            Interface "eno2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.8.0"



This looks like how it should be configured according to the many wiki/blog suggestions I have googled, but it doesn't work as expected: pinging from the VM, tcpdump shows the traffic still goes out "eno1", which carries the default route on the controller node.

Inside the VM:
ubuntu@test-br:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms
...

Dumping the traffic on "eno2" catches nothing:
$ sudo tcpdump -nn -i eno2 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes
...

Dumping the traffic on "eno1" (the internal NIC) catches it:
$ sudo tcpdump -nn -i eno1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes
16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 1, length 64
16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 1, length 64
16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 2, length 64
16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 2, length 64


$ sudo ip route
default via 192.168.18.1 dev eno1 proto static metric 100
default via 192.168.8.1 dev eno2 proto static metric 101
169.254.0.0/16 dev docker0 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 metric 100
192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 metric 100
192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1
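With two default routes present, the kernel simply picks the one with the lower metric. Which path the kernel would choose for a given destination can be checked with `ip route get` (a diagnostic sketch; the addresses match the table above):

```shell
# Ask the kernel which route it would use for the ping target.
# With eno1's default route at metric 100 and eno2's at metric 101,
# this is expected to report "dev eno1", matching the tcpdump result.
ip route get 8.8.8.8
```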


What's going wrong here? Am I missing something? Does some service need to be restarted?

Could anyone help me out? This question has bothered me for days! Huge thanks in advance!


Best Regards,
Dave
Re: [neutron][devstack][network] Question on the OVS configuration [ In reply to ]
Did you follow the networking guide
https://docs.openstack.org/devstack/latest/guides/neutron.html? It has
instructions for single- and multiple-NIC nodes.

I believe the secret is in this setting:

PUBLIC_INTERFACE=eth0

Plugging your eno2 into the bridge is not enough; as far as I understand
it, you also need to move the IP address to the bridge and ensure that
the bridge is UP. Perhaps the routing table must be tweaked as well, but
in any case, the guides should achieve your goal.
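For completeness, a minimal sketch of what this amounts to by hand (the interface name, addresses, and metric are assumptions taken from the routing table in the question; DevStack performs the equivalent during stack.sh when PUBLIC_INTERFACE is set):

```shell
# Assumed setup: eno2 currently holds 192.168.8.101/24 with gateway 192.168.8.1.
# Run this from a console, not over an SSH session on eno2, since
# flushing the address temporarily drops connectivity on that NIC.

# 1. Attach the physical NIC to the external bridge.
sudo ovs-vsctl --may-exist add-port br-ex eno2

# 2. Move the IP address from the NIC to the bridge; otherwise the
#    kernel keeps routing via eno2 directly and bypasses the bridge.
sudo ip addr flush dev eno2
sudo ip addr add 192.168.8.101/24 dev br-ex
sudo ip link set br-ex up

# 3. Re-point the external default route at the bridge.
sudo ip route replace default via 192.168.8.1 dev br-ex metric 101
```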

Bernd Bausch


Re: [neutron][devstack][network] Question on the OVS configuration [ In reply to ]
Thanks, Bernd, for pointing me to the link; that is definitely very helpful!

Best Regards,
Dave Chen
