Mailing List Archive

Problems with network-route/vif-route scripts
I've been attempting to get the network-route/vif-route scripts running
instead of using the traditional bridging setup, but I'm running into some
puzzling issues.

I changed the network-script & vif-script settings in xend-config.sxp to
the appropriate values. In the guest config I set an explicit IP address,
as required for routing:

vif = [ 'mac=00:16:3e:1e:7c:a6, ip=10.13.4.222' ]

Inside the guest, I configured eth0 with the matching IP address. This
all works fine - the host can connect to the guest, and vice-versa.
Off-host communication to/from the guest, however, gets no traffic through
at all. I do have a regular Fedora iptables ruleset loaded which denies all
incoming traffic except SSH, but the vif-route script adds a number of
iptables rules which are supposed to deal with this, allowing all traffic
to go straight to the guest.
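
For reference, the xend-config.sxp settings mentioned above are just the
stock script names:

(network-script network-route)
(vif-script vif-route)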

After a little debugging, I came across several separate issues with
the vif-route script which all conspire to prevent off-host networking from
working as expected:

- The iptables rule is only added to the FORWARD chain - it also needs
to be added to the INPUT chain, otherwise Dom0 firewall rules will hit
DomU traffic too

- The iptables rule is appended to the end of the FORWARD chain, so if you
have an existing catch-all DENY/REJECT rule already, the Xen rule
will never be matched

- The rule uses '-m physdev --physdev-in $vif' to match guest traffic.
The 'physdev' module, however, only matches on interfaces which are
part of a network bridge - obviously not the case for a routed networking
config - so even at the correct location in FORWARD the rule doesn't match

- While the guest can transmit, it never receives anything back because
the remote hosts can't do ARP lookups for the guest's IP address. The
vif-route script turns on proxy_arp on the $vif, but the proxy_arp setting
is also needed on Dom0's public interface (e.g. eth0) - see the sketch below
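
In concrete terms, with eth0 standing in for whatever the public interface
happens to be:

echo 1 > /proc/sys/net/ipv4/conf/${vif}/proxy_arp   # what vif-route already does
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp     # what is additionally needed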

Based on this, it would seem we need to change the current rule

iptables -A FORWARD --source $ip -m physdev --physdev-in $vif -j ACCEPT

to instead do:

iptables -I INPUT --source $ip -i $vif -j ACCEPT
iptables -I FORWARD --source $ip -i $vif -j ACCEPT
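
To confirm the rules actually land ahead of any catch-all entry, a quick
check (not part of the proposed change) is:

iptables -L INPUT -n --line-numbers
iptables -L FORWARD -n --line-numbers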

Since this stuff is dealt with in vif-common.sh, it looks like we'll need to
remove that commonality between the route & bridge scripts.

And add some logic to network-route which does:

dev=....discover primary public interface...
sysctl -w net.ipv4.conf.$dev.proxy_arp=1
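
One plausible way to fill in the interface discovery (an illustrative
assumption, not something the current scripts do) is to pick the device
carrying the default route:

# assumes a "default via <gw> dev <ifname>" route; adjust if the format differs
dev=$(ip route list | awk '/^default via/ { print $5; exit }')
sysctl -w net.ipv4.conf.$dev.proxy_arp=1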


I'm rather wondering if I've missed something incredibly stupid which would
avoid all these issues, but it certainly seems that the vif-route scripts
just won't work in their current state.

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Re: Problems with network-route/vif-route scripts
On Thu, Oct 26, 2006 at 05:17:50PM +0100, Daniel P. Berrange wrote:
> I've been attempting to get the network-route/vif-route scripts running
> instead of using the traditional bridging setup, but I'm running into some
> puzzling issues.

Are you using an hvm or pv guest?

Routing under hvm requires a bit of hacking. Qemu uses /dev/tap*
devices rather than the vif*.* devices and is by default invoked in a
bridging mode (the default /etc/xen/qemu-ifup script that is invoked by
qemu-dm on launch simply adds the tap device to the bridge xenbr0).

I've found that turning the vif script executed by xend into a no-op,
changing the qemu-ifup script to invoke a vif-like script with the
correct vif and XENBUS_PATH defined, and modifying image.py to invoke
qemu-dm without the bridging does the trick.
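
Roughly, the qemu-ifup replacement ends up looking something like the sketch
below. The vif-route invocation and the XENBUS_PATH value are illustrative
assumptions - the real path has to be derived from the domain id, which qemu
does not hand to the script itself:

#!/bin/sh
# hypothetical /etc/xen/qemu-ifup: route the tap device instead of bridging it
tap="$1"
domid="$2"   # assumed to be supplied by the caller; qemu passes only the tap name
env vif="$tap" XENBUS_PATH="backend/vif/${domid}/0" \
    /etc/xen/scripts/vif-route online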

There is a patch for image.py in
http://lists.xensource.com/archives/html/xen-users/2006-09/msg00976.html

To the list -- is there a chance that the qemu-dm network device
interaction will follow the pv naming scheme anytime soon?

-John McCullough

Re: Problems with network-route/vif-route scripts
On Sun, Oct 29, 2006 at 06:08:52PM -0800, John McCullough wrote:
> On Thu, Oct 26, 2006 at 05:17:50PM +0100, Daniel P. Berrange wrote:
> > I've been attempting to get the network-route/vif-route scripts running
> > instead of using the traditional bridging setup, but I'm running into some
> > puzzling issues.
>
> Are you using an hvm or pv guest?

Nope, this was all with paravirt guests. I think I've got the problem
sorted out now, so I'll try & post a patch for the pv network scripts
tomorrow sometime.

> Routing under hvm requires a bit of hacking. Qemu uses /dev/tap*
> devices rather than the vif*.* devices and is by default invoked in a
> bridging mode (the default /etc/xen/qemu-ifup script that is invoked by
> qemu-dm on launch simply adds the tap device to the bridge xenbr0).
>
> I've found that turning the vif script executed by xend into a no-op,
> changing the qemu-ifup script to invoke a vif-like script with the
> correct vif and XENBUS_PATH defined, and modifying image.py to invoke
> qemu-dm without the bridging does the trick.

Ok, I'll take a look at the hvm stuff too - we ought to make sure both
pv & hvm work in the same way if feasible.


Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

RE: Problems with network-route/vif-route scripts
> Are you using an hvm or pv guest?
>
> Routing under hvm requires a bit of hacking. Qemu uses /dev/tap*
> devices rather than the vif*.* devices and is by default invoked in a
> bridging mode (the default /etc/xen/qemu-ifup script that is invoked by
> qemu-dm on launch simply adds the tap device to the bridge xenbr0).
>
> I've found that turning the vif script executed by xend into a no-op,
> changing the qemu-ifup script to invoke a vif-like script with the
> correct vif and XENBUS_PATH defined, and modifying image.py to invoke
> qemu-dm without the bridging does the trick.

It's much better for us to fix xend and its normal scripts than to work
around them in qemu-ifup.

> To the list -- is there a chance that the qemu-dm network device
> interaction will follow the pv naming scheme anytime soon?

When qemu-dm moves into a stub domain this will get unified
automatically. In the meantime, I suggest we just rename the tapX device
to vifX.Y, which should allow the normal scripts to work rather more
easily.
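
For illustration, the rename could be done along these lines (device names
are examples, and the interface has to be down while it is renamed):

ip link set tap0 down
ip link set tap0 name vif1.0
ip link set vif1.0 up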

Thanks,
Ian


Re: Problems with network-route/vif-route scripts
On Thu, Oct 26, 2006 at 05:17:50PM +0100, Daniel P. Berrange wrote:
> After a little debugging, I came across several separate issues with
> the vif-route script which all conspire to prevent off-host networking from
> working as expected:
>
> - The iptables rule is only added to the FORWARD chain - it also needs
> to be added to the INPUT chain, otherwise Dom0 firewall rules will hit
> DomU traffic too
>
> - The iptables rule is appended to the end of the FORWARD chain, so if you
> have an existing catch-all DENY/REJECT rule already, the Xen rule
> will never be matched
>
> - The rule uses '-m physdev --physdev-in $vif' to match guest traffic.
> The 'physdev' module, however, only matches on interfaces which are
> part of a network bridge - obviously not the case for a routed networking
> config - so even at the correct location in FORWARD the rule doesn't match
>
> - While the guest can transmit, it never receives anything back because
> the remote hosts can't do ARP lookups for the guest's IP address. The
> vif-route script turns on proxy_arp on the $vif, but the proxy_arp setting
> is also needed on Dom0's public interface (e.g. eth0)
>
> Based on this, it would seem we need to change the current rule
>
> iptables -A FORWARD --source $ip -m physdev --physdev-in $vif -j ACCEPT
>
> to instead do:
>
> iptables -I INPUT --source $ip -i $vif -j ACCEPT
> iptables -I FORWARD --source $ip -i $vif -j ACCEPT
>
> Since this stuff is dealt with in vif-common.sh, it looks like we'll need to
> remove that commonality between the route & bridge scripts.

I'm attaching a patch which does 3 things to the iptables rules:

- Use -I instead of -A so that rules get inserted at the start
of the chain - avoiding other custom rules such as a catch-all -j REJECT

- Use -i $vif instead of --physdev-in $vif for routed / NAT based
networking. Bridged networking still uses --physdev-in

- Add the rules to both the FORWARD & INPUT chains instead of just the
FORWARD chain

This fixes up the iptables bit of the routed networking (resulting rules
sketched below).
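
For clarity, the rules the patched script produces look roughly like this
(a sketch based on the description above, not the patch text itself):

# routed / NAT case:
iptables -I INPUT --source $ip -i $vif -j ACCEPT
iptables -I FORWARD --source $ip -i $vif -j ACCEPT
# bridged case keeps the physdev match:
iptables -I FORWARD --source $ip -m physdev --physdev-in $vif -j ACCEPT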

> And add some logic to network-route which does:
>
> dev=....discover primary public interface...
> sysctl -w net.ipv4.conf.$dev.proxy_arp=1

I've not sorted out a patch to discover the primary interface, so for now
I'm testing with 'echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp', which
enables proxy_arp on all interfaces. I could submit a patch doing that, but
I think it is overkill, so I want to get the correct per-interface patch
done instead.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|