Mailing List Archive

kernel oops/IRQ exception when networking between many domUs
Hi,

I am trying to build experimental networks with Xen and stumbled over the same
problem that was described quite well by Mark Doll in his posting
"xen_net: Failed to connect all virtual interfaces: err=-100"
here:

http://lists.xensource.com/archives/html/xen-users/2005-04/msg00447.html

As it was still present in 2.0.6, I tried 3.0-devel and found that NR_PIRQS
and NR_DYNIRQS had been adjusted there - so I hoped for the best. I was
then able to fire up my virtual test network and get it running with 20
nodes and approx. 120 interfaces, without problems at first. The vifs
are wired to ~60 bridge interfaces, 2 vifs each, and I can access all
domU nodes via the console etc. The kernel version is 2.6.11, as shipped
with xen-unstable as of May 31.

The problem: after allowing free packet delivery within the network by
issuing a

sysctl -w net.bridge.bridge-nf-call-iptables=0

(which had until then been set to 1, with my iptables rules blocking all
traffic), the whole machine froze after a very short time (immediately
to 2-3 seconds later), apparently as soon as the first packet travelled
through the network. No output, no kernel oops, nothing to see, and magic
sysrq was gone as well(!). This behaviour was deterministic. I had quite
some difficulty getting more information - what I finally did was to set
the sysctl *before* starting the domUs. Funnily, nothing happened after
starting the first 10-12 nodes, but after "xm create"ing one or two more
nodes, the system oopsed with at least some info, but sysrq gone as
well. So I wrote it down on a piece of paper ;-) , hopefully someone
can make sense of it:


Stack:
00000000 d06cea20 2f001020 c8b04780 c0403f1c c028cbfa 0002f001 0000000d
ffffffff 08b78020 00000052 00000001 00000028 0000005e 00008b85 d21fe000
00000006 c0457824 0000011d c0453240 00283d58 e01c3a6e c0403cec da6bccd0

Call Trace:
[<c0109c51>] show_stack+0x80/0x96
[<c0100de1>] show_registers+0x15a/0x1d1
[<c010a001>] die+0x106/0x1c4
[<c010a4aa>] do_invalid_op+0xb5/0xbf
[<c010985b>] error_code+0x2b/0x30
[<c028cbfa>] net_rx_action+0x484/0x4df
[<c01239a9>] tasklet_action+0x7b/0xe0
[<c0123533>] __do_softirq+0x6f/0xef
[<c0123632>] do_softirq+0x7f/0x97
[<c0123706>] irq_exit+0x3a/0x3c
[<c010d819>] do_IRQ+0x25/0x2c
[<c0105efe>] evtchn_do_upcall+0x62/0x82
[<c010988c>] hypervisor_callback+0x2c/0x34
[<c0107673>] cpu_idle+0x33/0x41
[<c04047a9>] start_kernel+0x196/0x1e8
[<c010006c>] 0xc010006c

Code: 08 a8 75 30 83 c4 5b 5e 5f 5d c3 bb 01 00 00 00 31 f6 b8 0c 00 00 00 bf f0 7f 00 00 8d 4d 08 89 da cd 82 83 e8 01 2e 74 8e <0f> 0b 66 00 2c 7a 35 c0 eb 84 e8 f8 b1 09 00 eb c9 e8 f6 98 e7

<0>Kernel panic - not syncing: Fatal exception in interrupt



Any suggestions?


Regards,

Birger

PS: I have attached the scripts that start the virtual network, for the
interested user. Beware, they are not nicely designed but mere hacks.
The root filesystem used is available here:

http://www.iem.uni-due.de/~birger/downloads/root_fs
Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On 4 Jun 2005, at 18:05, Birger Tödtmann wrote:

> Funnily, nothing happened after
> starting the first 10-12 nodes, but after "xm create"ing one or two
> more nodes, the system oopsed with at least some info, but sysrq gone
> as well. So I wrote it down on a piece of paper ;-) , hopefully someone
> can make sense of it:

Do you have the vmlinux file? It would be useful to know where in
net_rx_action the crash is happening.

-- Keir


_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Re: kernel oops/IRQ exception when networking between many domUs
Re-post without attachments for list readers.


Keir Fraser wrote on Sun, Jun 05, 2005 at 05:52:13PM +0100:
>
> On 4 Jun 2005, at 18:05, Birger Tödtmann wrote:
>
> >Funnily, nothing happened after
> >starting the first 10-12 nodes, but after "xm create"ing one or two
> >more nodes, the system oopsed with at least some info, but sysrq gone
> >as well. So I wrote it down on a piece of paper ;-) , hopefully someone
> >can make sense of it:
>
> Do you have the vmlinux file? It would be useful to know where in
> net_rx_action the crash is happening.

Apparently it is happening somewhere here:

[...]
0xc028cbe5 <net_rx_action+1135>: test %eax,%eax
0xc028cbe7 <net_rx_action+1137>: je 0xc028ca82 <net_rx_action+780>
0xc028cbed <net_rx_action+1143>: mov %esi,%eax
0xc028cbef <net_rx_action+1145>: shr $0xc,%eax
0xc028cbf2 <net_rx_action+1148>: mov %eax,(%esp)
0xc028cbf5 <net_rx_action+1151>: call 0xc028c4c4 <free_mfn>
0xc028cbfa <net_rx_action+1156>: mov $0xffffffff,%ecx
^^^^^^^^^^
0xc028cbff <net_rx_action+1161>: jmp 0xc028ca82 <net_rx_action+780>
0xc028cc04 <net_rx_action+1166>: call 0xc02c59fe <net_ratelimit>
0xc028cc09 <net_rx_action+1171>: test %eax,%eax
0xc028cc0b <net_rx_action+1173>: jne 0xc028cc47 <net_rx_action+1233>
0xc028cc0d <net_rx_action+1175>: mov 0xc0378b60,%eax
[...]


which is, I presume, reflected by this section within net_rx_action():


[...]
    /* Check the reassignment error code. */
    status = NETIF_RSP_OKAY;
    if ( unlikely(mcl[1].args[5] != 0) )
    {
        DPRINTK("Failed MMU update transferring to DOM%u\n", netif->domid);
        free_mfn(mdata >> PAGE_SHIFT);
        status = NETIF_RSP_ERROR;
    }
[...]
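
(For context, mdata is derived a little earlier in net_rx_action() from the
socket buffer, roughly as follows - a sketch of the gist, not the verbatim
source:)

    /* Sketch: how mdata is obtained earlier in net_rx_action(). */
    vdata = (unsigned long)skb->data;  /* dom0 virtual address of the packet data  */
    mdata = virt_to_machine(vdata);    /* converted to a machine (physical) address */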


Kernel image and System.map attached.


Regards,
--
Birger Tödtmann
Technik der Rechnernetze, Institut für Experimentelle Mathematik und Institut
für Informatik und Wirtschaftsinformatik, Universität Duisburg-Essen
email:btoedtmann@iem.uni-due.de skype:birger.toedtmann pgp:0x6FB166C9 icq:294947817


Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On 5 Jun 2005, at 17:57, Birger Toedtmann wrote:

> Apparently it is happening somewhere here:
>
> [...]
> 0xc028cbe5 <net_rx_action+1135>: test %eax,%eax
> 0xc028cbe7 <net_rx_action+1137>: je 0xc028ca82 <net_rx_action+780>
> 0xc028cbed <net_rx_action+1143>: mov %esi,%eax
> 0xc028cbef <net_rx_action+1145>: shr $0xc,%eax
> 0xc028cbf2 <net_rx_action+1148>: mov %eax,(%esp)
> 0xc028cbf5 <net_rx_action+1151>: call 0xc028c4c4 <free_mfn>
> 0xc028cbfa <net_rx_action+1156>: mov $0xffffffff,%ecx
> ^^^^^^^^^^

Most likely the driver has tried to send a bogus page to a domU.
Because it's bogus the transfer fails. The driver then tries to free
the page back to Xen, but that also fails because the page is bogus.
This confuses the driver, which then BUG()s out.

It's not at all clear where the bogus address comes from: the driver
basically just reads the address out of an skbuff, and converts it from
virtual to physical address. But something is obviously going wrong,
perhaps under memory pressure. :-(
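
A sketch of that chain of events (illustrative only: transfer_page_to_guest()
and hypervisor_take_frame_back() are stand-in names, not the real netback
symbols; free_mfn() and the NETIF_RSP_* codes are from the snippet quoted
earlier in the thread):

    /* Illustrative sketch of the failure path described above. */
    static void free_mfn(unsigned long mfn)
    {
        /* Return the machine frame to Xen.  A bogus frame is rejected,
         * leaving nothing sensible to do: BUG() raises the ud2 /
         * invalid-opcode exception seen in the original oops. */
        if ( hypervisor_take_frame_back(mfn) != 1 )
            BUG();
    }

    static int deliver_page_to_domU(netif_t *netif, unsigned long mdata)
    {
        int status = NETIF_RSP_OKAY;

        if ( transfer_page_to_guest(netif, mdata) != 0 )  /* bogus page: transfer fails */
        {
            free_mfn(mdata >> PAGE_SHIFT);                /* fails too, and BUG()s      */
            status = NETIF_RSP_ERROR;
        }
        return status;
    }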

-- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On Monday, 06.06.2005, at 09:23 +0100, Keir Fraser wrote:
> On 5 Jun 2005, at 17:57, Birger Toedtmann wrote:
>
> > Apparently it is happening somewhere here:
> >
> > [...]
> > 0xc028cbe5 <net_rx_action+1135>: test %eax,%eax
> > 0xc028cbe7 <net_rx_action+1137>: je 0xc028ca82 <net_rx_action+780>
> > 0xc028cbed <net_rx_action+1143>: mov %esi,%eax
> > 0xc028cbef <net_rx_action+1145>: shr $0xc,%eax
> > 0xc028cbf2 <net_rx_action+1148>: mov %eax,(%esp)
> > 0xc028cbf5 <net_rx_action+1151>: call 0xc028c4c4 <free_mfn>
> > 0xc028cbfa <net_rx_action+1156>: mov $0xffffffff,%ecx
> > ^^^^^^^^^^
>
> Most likely the driver has tried to send a bogus page to a domU.
> Because it's bogus the transfer fails. The driver then tries to free
> the page back to Xen, but that also fails because the page is bogus.
> This confuses the driver, which then BUG()s out.

I commented out the free_mfn() and status= lines: the kernel now reports
the following after it configured the 10th domU and ~80th vif, with
approx. 20-25 bridges up. Just an idea: the number of vifs + bridges is
somewhere around the magic 128 (NR_IRQS problem in 2.0.x!) when the
crash happens - could this hint at something?
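
(The change amounts to roughly the following, shown against the snippet
quoted earlier in the thread, just to make explicit what was disabled:)

    /* Error branch from net_rx_action(), with the two lines disabled: */
    status = NETIF_RSP_OKAY;
    if ( unlikely(mcl[1].args[5] != 0) )
    {
        DPRINTK("Failed MMU update transferring to DOM%u\n", netif->domid);
        /* free_mfn(mdata >> PAGE_SHIFT); */
        /* status = NETIF_RSP_ERROR;      */
    }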


[...]
Jun 6 10:12:14 lomin kernel: 10.2.23.8: port 2(vif10.3) entering forwarding state
Jun 6 10:12:14 lomin kernel: 10.2.35.16: topology change detected, propagating
Jun 6 10:12:14 lomin kernel: 10.2.35.16: port 2(vif10.4) entering forwarding state
Jun 6 10:12:14 lomin kernel: 10.2.35.20: topology change detected, propagating
Jun 6 10:12:14 lomin kernel: 10.2.35.20: port 2(vif10.5) entering forwarding state
Jun 6 10:12:20 lomin kernel: c014cea4
Jun 6 10:12:20 lomin kernel: [do_page_fault+643/1665] do_page_fault+0x469/0x738
Jun 6 10:12:20 lomin kernel: [<c0115720>] do_page_fault+0x469/0x738
Jun 6 10:12:20 lomin kernel: [fixup_4gb_segment+2/12] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [<c0109a7e>] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [do_page_fault+49/1665] do_page_fault+0x217/0x738
Jun 6 10:12:20 lomin kernel: [<c01154ce>] do_page_fault+0x217/0x738
Jun 6 10:12:20 lomin kernel: [fixup_4gb_segment+2/12] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [<c0109a7e>] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: PREEMPT
Jun 6 10:12:20 lomin kernel: Modules linked in: dm_snapshot pcmcia bridge ipt_REJECT ipt_state iptable_filter ipt_MASQUERADE iptable_nat ip_conntrack ip_tables autofs4 snd_seq snd_seq_device evdev usbhid rfcomm l2cap bluetooth dm_mod cryptoloop snd_pcm_oss snd_mixer_oss snd_intel8x0 snd_ac97_codec snd_pcm snd_timer snd soundcore snd_page_alloc tun uhci_hcd usb_storage usbcore irtty_sir sir_dev ircomm_tty ircomm irda yenta_socket rsrc_nonstatic pcmcia_core 3c59x
Jun 6 10:12:20 lomin kernel: CPU: 0
Jun 6 10:12:20 lomin kernel: EIP: 0061:[do_wp_page+622/1175] Not tainted VLI
Jun 6 10:12:20 lomin kernel: EIP: 0061:[<c014cea4>] Not tainted VLI
Jun 6 10:12:20 lomin kernel: EFLAGS: 00010206 (2.6.11.11-xen0)
Jun 6 10:12:20 lomin kernel: EIP is at handle_mm_fault+0x5d/0x222
Jun 6 10:12:20 lomin kernel: eax: 15555b18 ebx: d8788000 ecx: 00000b18 edx: 15555b18
Jun 6 10:12:20 lomin kernel: esi: dcfc3b4c edi: dcaf5580 ebp: d8789ee4 esp: d8789ebc
Jun 6 10:12:20 lomin kernel: ds: 0069 es: 0069 ss: 0069
Jun 6 10:12:20 lomin kernel: Process python (pid: 4670, threadinfo=d8788000 task=de1a1520)
Jun 6 10:12:20 lomin kernel: Stack: 00000040 00000001 d40e687c d40e6874 00000006 d40e685c d8789f14 dcaf5580
Jun 6 10:12:20 lomin kernel: dcaf55ac d40e6b1c d8789fbc c01154ce dcaf5580 d40e6b1c b4ec6ff0 00000001
Jun 6 10:12:20 lomin kernel: 00000001 de1a1520 b4ec6ff0 00000006 d8789fc4 d8789fc4 c03405b0 00000006
Jun 6 10:12:20 lomin kernel: Call Trace:
Jun 6 10:12:20 lomin kernel: [dump_stack+16/32] show_stack+0x80/0x96
Jun 6 10:12:20 lomin kernel: [<c0109c51>] show_stack+0x80/0x96
Jun 6 10:12:20 lomin kernel: [show_registers+384/457] show_registers+0x15a/0x1d1
Jun 6 10:12:20 lomin kernel: [<c0109de1>] show_registers+0x15a/0x1d1
Jun 6 10:12:20 lomin kernel: [die+301/458] die+0x106/0x1c4
Jun 6 10:12:20 lomin kernel: [<c010a001>] die+0x106/0x1c4
Jun 6 10:12:20 lomin kernel: [do_page_fault+675/1665] do_page_fault+0x489/0x738
Jun 6 10:12:20 lomin kernel: [<c0115740>] do_page_fault+0x489/0x738
Jun 6 10:12:20 lomin kernel: [fixup_4gb_segment+2/12] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [<c0109a7e>] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [do_page_fault+49/1665] do_page_fault+0x217/0x738
Jun 6 10:12:20 lomin kernel: [<c01154ce>] do_page_fault+0x217/0x738
Jun 6 10:12:20 lomin kernel: [fixup_4gb_segment+2/12] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: [<c0109a7e>] page_fault+0x2e/0x34
Jun 6 10:12:20 lomin kernel: Code: 8b 47 1c c1 ea 16 83 43 14 01 8d 34 90 85 f6 0f 84 52 01 00 00 89 f2 8b 4d 10 89 f8 e8 4a d1 ff ff 85 c0 89 c2 0f 84 3c 01 00 00 <8b> 00 a8 81 75 3d 85 c0 0f 84 01 01 00 00 a8 40 0f 84 a4 00 00


>
> It's not at all clear where the bogus address comes from: the driver
> basically just reads the address out of an skbuff, and converts it from
> virtual to physical address. But something is obviously going wrong,
> perhaps under memory pressure. :-(

Where, within the domUs or in dom0? The latter has lots of memory at hand;
the domUs are quite strapped for memory. I'll try to find out...


Regards,
--
Birger Tödtmann
Technik der Rechnernetze, Institut für Experimentelle Mathematik
Universität Duisburg-Essen, Campus Essen email:btoedtmann@iem.uni-due.de
skype:birger.toedtmann pgp:0x6FB166C9

Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On Monday, 06.06.2005, at 10:52 +0200, Birger Tödtmann wrote:
[...]
>
> I commented out the free_mfn() and status= lines: the kernel now reports
> the following after it configured the 10th domU and ~80th vif, with
> approx. 20-25 bridges up. Just an idea: the number of vifs + bridges is

Correction: I meant 40-45 bridge devices are then up and running.

> somewhere around the magic 128 (NR_IRQS problem in 2.0.x!) when the
> crash happens - could this hint at something?
>


--
Birger Tödtmann
Technik der Rechnernetze, Institut für Experimentelle Mathematik
Universität Duisburg-Essen, Campus Essen email:btoedtmann@iem.uni-due.de
skype:birger.toedtmann pgp:0x6FB166C9

Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On 6 Jun 2005, at 09:52, Birger Tödtmann wrote:

> I commented out the free_mfn() and status= lines: the kernel now
> reports
> the following after it configured the 10th domU and ~80th vif, with
> approx. 20-25 bridges up. Just an idea: the number of vifs + bridges
> is
> somewhere around the magic 128 (NR_IRQS problem in 2.0.x!) when the
> crash happens - could this hint at something?

The crashes you see with free_mfn removed will be impossible to debug
-- things are very screwed by that point. Even the crash within
free_mfn might be far removed from the cause of the crash, if it's due
to memory corruption.

It's perhaps worth investigating what critical limit you might be
hitting, and what resource it is that's limited. e.g., can you
create a few vifs, but connected together by some very large number of
bridges (daisy chained together)? Or can you create a large number of
vifs if they are connected together by just one bridge?

This kind of thing will give us an idea of where the bug might be
lurking.

-- Keir


Re: [Xen-devel] kernel oops/IRQ exception when networking between many domUs
On Monday, 06.06.2005, at 10:26 +0100, Keir Fraser wrote:
[...]
> > somewhere around the magic 128 (NR_IRQS problem in 2.0.x!) when the
> > crash happens - could this hint at something?
>
> The crashes you see with free_mfn removed will be impossible to debug
> -- things are very screwed by that point. Even the crash within
> free_mfn might be far removed from the cause of the crash, if it's due
> to memory corruption.
>
> It's perhaps worth investigating what critical limit you might be
> hitting, and what resource it is that's limited. e.g., can you
> create a few vifs, but connected together by some very large number of
> bridges (daisy chained together)? Or can you create a large number of
> vifs if they are connected together by just one bridge?

This is getting really weird - as I found out, I encounter problems
with far fewer vifs/bridges than suspected. I just fired up a network
with 7 nodes, each with four interfaces connected to the same four
bridge interfaces. The nodes can ping through the network; however,
after a short time the system (dom0) crashes as well. This time it
dies in net_rx_action() at a slightly different place:

[...]
[<c02b6e15>] kfree_skbmem+0x12/0x29
[<c02b6ed1>] __kfree_skb+0xa5/0x13f
[<c028c9b3>] net_rx_action+0x23d/0x4df
[...]

Funnily, I cannot reproduce this with 5 nodes (domUs) running. I'm a
bit unsure where to go from here... Maybe I should try a different
machine for further testing.


Regards
--
Birger Tödtmann
Technik der Rechnernetze, Institut für Experimentelle Mathematik
Universität Duisburg-Essen, Campus Essen email:btoedtmann@iem.uni-due.de
skype:birger.toedtmann pgp:0x6FB166C9
