Mailing List Archive

Pb with modules ...
Hi,

I've got trouble with LVS v0.1.2 on kernel 2.4.0.
The LB box worked properly for 3 days, but this morning
the ipvs module was using about 607M of RAM (in the lsmod output).

Connections couldn't be made anymore.

I tried to unload the module with no result.
There were about 600 connections on each real server (there are 2
configured in my LB box).

Does anyone know anything about this problem?
Is there any way to find out why this happens?

Thx,

--
Lifo.
Re: Pb with modules ... [ In reply to ]
Hello,

On Mon, 12 Feb 2001, Sebastien COUREAU wrote:

> Hi,
>
> I've got trouble with LVS v0.1.2 on kernel 2.4.0.
> The LB box worked properly for 3 days, but this morning
> the ipvs module was using about 607M of RAM (in the lsmod output).

Hm, memory leak? Can you give more information:

cat /proc/slabinfo
The used scheduler
The used forwarding method: NAT?

BTW, as we already discussed with you, LVS/NAT does not work
properly with netfilter NAT rules. This is fixed in the latest 0.2.x
versions.

> Connections couldn't be made anymore.
>
> I tried to unload the module with no result.
> There were about 600 connections on each real server (there are 2
> configured in my LB box).
>
> Does anyone know anything about this problem?
> Is there any way to find out why this happens?

It shouldn't be difficult to find the problem, if there is
one.

> Thx,
>
> --
> Lifo.


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: Pb with modules ... [ In reply to ]
> cat /proc/slabinfo
The computer has been rebooted ...

> The used scheduler
wlc and rr, because the LB box has 3 real servers, chosen depending
on the dest IP and dest port.

> The used forwarding method: NAT?
Yes, NAT.

> BTW, as we already discussed with you, LVS/NAT does not work
> properly with netfilter NAT rules. This is fixed in the latest 0.2.x
> versions.

There are no NAT rules; netfilter is compiled into the kernel, but
there are no iptables rules and no NAT rules.

> It shouldn't be difficult to find the problem, if there is
> one.

OK, but if there isn't one, why did this trouble come up?

--
Lifo.
Re: Pb with modules ... [ In reply to ]
Hello,

On Mon, 12 Feb 2001, Sebastien COUREAU wrote:

> > cat /proc/slabinfo
> The computer has been rebooted ...

Periodically execute the above command and see whether the
ip_vs* or ip_conntrack entries increase. Here is the description of the
/proc/slabinfo columns:

1. name
2. active objects
3. number of objects
4. object size
5. active slabs
6. number of slabs
7. gfporder (order of pgs per slab)

I'm sure that if there is a memory leak in LVS you will see
columns 2 and 3 increase, with values greater than the sum of
all active+inactive connections. Maybe you can see them right now.
If this leak exists, lsmod should report it even now, without waiting
such a long period of time. I assume the connection rate has now
stabilized and the size cannot increase much.
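Julian's periodic check could be sketched as a small script (not from the thread; a hypothetical helper that follows the column layout described above and the slab cache names `ip_vs*` / `ip_conntrack`):

```python
# Sketch: sample /proc/slabinfo text and flag LVS / conntrack caches
# whose active-object count is growing. Column layout per the
# description above: name, active objects, total objects, object size,
# active slabs, total slabs, gfporder.

def parse_slabinfo(text):
    """Return {cache_name: (active_objs, total_objs)} for the
    ip_vs* and ip_conntrack slab caches."""
    counts = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and (
            fields[0].startswith("ip_vs") or fields[0] == "ip_conntrack"
        ):
            counts[fields[0]] = (int(fields[1]), int(fields[2]))
    return counts

def grown(before, after):
    """Cache names whose active-object count increased between samples."""
    return [name for name, (active, _total) in after.items()
            if active > before.get(name, (0, 0))[0]]

# The two slabinfo lines quoted later in this thread:
SAMPLE = """\
ip_vs 0 30 128 0 1 1
ip_conntrack 3 22 352 1 2 1
"""
print(parse_slabinfo(SAMPLE))
# -> {'ip_vs': (0, 30), 'ip_conntrack': (3, 22)}
```

On the live box you would read /proc/slabinfo every minute or so; if `grown()` keeps reporting `ip_vs` while the connection count reported by `ipvsadm -Ln` stays stable, that points at a leak.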

> > It shouldn't be difficult to find the problem, if there is
> > one.
>
> OK, but if there isn't one, why did this trouble come up?

It is a surprise to me :) I asked you about the scheduler because
some schedulers allocate more memory. But you don't see such big
sizes for the LVS scheduler modules, do you?

> --
> Lifo.


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: Pb with modules ... [ In reply to ]
Julian,

----- Original Message -----
From: "Julian Anastasov" <ja@ssi.bg>
To: "Sebastien COUREAU" <lifo@elma.fr>
Cc: <lvs-users@LinuxVirtualServer.org>; "Herve Piedvache" <herve@elma.fr>;
"Jean-Christophe Boggio" <cat@thefreecat.org>
Sent: Monday, February 12, 2001 8:11 AM
Subject: Re: Pb with modules ...


>
> Hello,
>
> On Mon, 12 Feb 2001, Sebastien COUREAU wrote:
>
> > Hi,
> >
> > I've got trouble with LVS v0.1.2 on kernel 2.4.0.
> > The LB box worked properly for 3 days, but this morning
> > the ipvs module was using about 607M of RAM (in the lsmod output).
>
> Hm, memory leak? Can you give more information:
>

Have you considered using http://realty.sgi.com/boehm_mti/gc.html ? It is a
garbage collector for C/C++ that can also be used as a leak detector. I
don't know much about the lvs source, so take this as a "blind" suggestion.

Regards,

Ivan
Re: Pb with modules ... [ In reply to ]
Hello,

On Mon, 12 Feb 2001, Ivan Figueredo wrote:

> Julian,
>
> Have you considered using http://realty.sgi.com/boehm_mti/gc.html ? It is a
> garbage collector for C/C++ that can also be used as a leak detector. I
> don't know much about the lvs source, so take this as a "blind" suggestion.

:))) You are killing me :)

There are not many places where allocations occur.
But let's see the evidence first: the counters. Maybe there is
something we are missing. Maybe there is another reason for such leaks.

> Regards,
>
> Ivan


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: Pb with modules ... [ In reply to ]
> Periodically execute the above command and see whether the
> ip_vs* or ip_conntrack entries increase. Here is the description of the
> /proc/slabinfo columns:
>
> 1. name
> 2. active objects
> 3. number of objects
> 4. object size
> 5. active slabs
> 6. number of slabs
> 7. gfporder (order of pgs per slab)

Thx!

> I'm sure that if there is a memory leak in LVS you will see
> columns 2 and 3 increase, with values greater than the sum of
> all active+inactive connections. Maybe you can see them right now.
> If this leak exists, lsmod should report it even now, without waiting
> such a long period of time. I assume the connection rate has now
> stabilized and the size cannot increase much.

Yep, that's the case...
On a small LB rule, I can see:

#ipvsadm -Ln
IP Virtual Server version 0.1.2 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 195.68.41.202:80 rr
-> 10.1.0.97:80 Masq 1 0 0

#cat /proc/slabinfo
ip_vs 0 30 128 0 1 1
ip_conntrack 3 22 352 1 2 1

A few minutes ago, there were more active objects, but no more connections...

Is the fix to upgrade my LVS software?

Regards,

--
Lifo.
Re: Pb with modules ... [ In reply to ]
Hello,

On Mon, 12 Feb 2001, Sebastien COUREAU wrote:

> #ipvsadm -Ln
> IP Virtual Server version 0.1.2 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 195.68.41.202:80 rr
> -> 10.1.0.97:80 Masq 1 0 0
>
> #cat /proc/slabinfo
> ip_vs 0 30 128 0 1 1
^

> ip_conntrack 3 22 352 1 2 1
^

>
> A few minutes ago, there were more active objects, but no more connections...

ipvsadm shows 0 connections and /proc/slabinfo:ip_vs shows 0
entries too. So, there is no problem with the LVS connections.
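The cross-check being made here by eye (connections reported by ipvsadm vs. active slab objects) can be sketched as a hypothetical helper, assuming the `ipvsadm -Ln` output format shown above:

```python
# Sketch: total up ActiveConn + InActConn from `ipvsadm -Ln` output,
# for comparison with the active-object count of the ip_vs slab cache.

def total_conns(ipvsadm_output):
    """Sum ActiveConn + InActConn over the real-server ('->') lines."""
    total = 0
    for line in ipvsadm_output.splitlines():
        fields = line.split()
        # Real-server lines start with '->' and end with two numeric
        # counters; the header line ending in 'InActConn' is skipped.
        if fields and fields[0] == "->" and fields[-1].isdigit():
            total += int(fields[-2]) + int(fields[-1])
    return total

# The output quoted above:
OUTPUT = """\
IP Virtual Server version 0.1.2 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 195.68.41.202:80 rr
  -> 10.1.0.97:80 Masq 1 0 0
"""
print(total_conns(OUTPUT))
# -> 0, matching the 0 active ip_vs objects in /proc/slabinfo
```

If this total stays near zero while the ip_vs slab's active-object count keeps climbing, the extra objects are unaccounted for and a leak becomes the likely explanation.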

> Is the fix to upgrade my LVS software?

I'm not sure whether an upgrade will change things, but that is my
recommendation. The interaction of LVS 0.1.2 with the netfilter
connection tracking and NAT is not explored. I'm not sure what kind of
problem we can hit in this situation. I assume you will upgrade to the
latest kernel and modutils too :)

> Regards,
>
> --
> Lifo.


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: Pb with modules ... [ In reply to ]
Hello,

Why don't you use the stable version of LVS (I mean 1.0.5 on a 2.2
kernel)? For a production environment it is highly recommended.

Best regards,

Alexandre

At 14:25 12/02/2001 +0100, you wrote:
>Hi,
>
>I've got trouble with LVS v0.1.2 on kernel 2.4.0.
>The LB box worked properly for 3 days, but this morning
>the ipvs module was using about 607M of RAM (in the lsmod output).
>
>Connections couldn't be made anymore.
>
>I tried to unload the module with no result.
>There were about 600 connections on each real server (there are 2
>configured in my LB box).
>
>Does anyone know anything about this problem?
>Is there any way to find out why this happens?
Re: Pb with modules ... [ In reply to ]
> Hello,

Hi,

>
> Why don't you use the stable version of LVS (I mean 1.0.5 on a 2.2
> kernel)? For a production environment it is highly recommended.

For only one reason: the kernel we intend to run is 2.4.0 (2.4.1,
anyway), because of all the network work done on it, because of
netfilter, and so on...
While it seems possible to upgrade any firewall from 2.2.x/ipchains
to 2.4.x/netfilter, the "reverse" is not as easy, as we can see...

Regards,

--
Lifo.