Mailing List Archive

modify the inActConn timeout Setting ...
Hi all

we are using an LVS in NAT mode and everything works fine ...
The only problem seems to be the huge number of (idle)
connection entries.

ipvsadm shows a lot of InActConn entries (more than 10000 per
realserver).
ipchains -M -L -n shows that these connections last 2 minutes.
Is it possible to reduce this time, to keep the masquerading table
small? e.g. 10 seconds ...

thanks in advance




Hendrik Thiel
Falk eSolutions AG
Tel: 02841/9097355
Fax: 02841-9097331
Re: modify the inActConn timeout Setting ...
Hendrik Thiel wrote:
>
> Hi all
>
> we are using an LVS in NAT mode and everything works fine ...
> The only problem seems to be the huge number of (idle)
> connection entries.
>
> ipvsadm shows a lot of InActConn entries (more than 10000 per
> realserver).
> ipchains -M -L -n shows that these connections last 2 minutes.

FIN timeout is 2 mins by default.

> Is it possible to reduce this time, to keep the masquerading table
> small? e.g. 10 seconds ...

yes

http://www.linuxvirtualserver.org/Joseph.Mack/HOWTO/LVS-HOWTO_1.0-10.html#ss10.9

unless this is causing you problems, you don't need to change your FIN timeout.
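
For reference, on a 2.2 director the masquerading timeouts are set with
ipchains -M -S, which takes the TCP, TCP FIN and UDP timeouts in seconds.
The numbers below are only an illustration, not a recommendation:

  # set masquerading timeouts: TCP, TCP FIN, UDP (seconds)
  ipchains -M -S 7200 120 300

  # list the current masquerading entries with their remaining timers
  ipchains -M -L -n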

Joe

--
Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
contractor to the National Environmental Supercomputer Center,
mailto:mack.joseph@epa.gov ph# 919-541-0007, RTP, NC, USA
Re: modify the inActConn timeout Setting ...
hi,

thanks for the quick answer....

ipchains -M -S 900 10 300 has no impact.
The FIN timeout stays at 2 minutes. Strange. How many entries
can the LVS handle, or is memory the only limit?

I just found the sysctl variables net.ipv4.vs.timeout*,
maybe this is the place to modify the timeout settings?! ..
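
A quick way to see them all (the /proc path below is just the usual
sysctl mapping, so treat it as a guess):

  # list all LVS-related sysctls
  sysctl -a | grep '^net.ipv4.vs'

  # the same values live under /proc
  ls /proc/sys/net/ipv4/vs/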



> Hendrik Thiel wrote:
> >
> > Hi all
> >
> > we are using an LVS in NAT mode and everything works fine ...
> > The only problem seems to be the huge number of (idle)
> > connection entries.
> >
> > ipvsadm shows a lot of InActConn entries (more than 10000 per
> > realserver).
> > ipchains -M -L -n shows that these connections last 2 minutes.
>
> FIN timeout is 2 mins by default.
>
> > Is it possible to reduce this time, to keep the masquerading table
> > small? e.g. 10 seconds ...
>
> yes
>
> http://www.linuxvirtualserver.org/Joseph.Mack/HOWTO/LVS-HOWTO_1.0-10.html#ss10.9
>
> unless this is causing you problems, you don't need to change your FIN timeout.
>
> Joe
>
> --
> Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
> contractor to the National Environmental Supercomputer Center,
> mailto:mack.joseph@epa.gov ph# 919-541-0007, RTP, NC, USA
>
> _______________________________________________
> LinuxVirtualServer.org mailing list - lvs-users@LinuxVirtualServer.org
> Send requests to lvs-users-request@LinuxVirtualServer.org
> or go to http://www.in-addr.de/mailman/listinfo/lvs-users



Hendrik Thiel
Falk eSolutions AG
Tel: 02841/9097355
Fax: 02841-9097331
Re: modify the inActConn timeout Setting ...
Hendrik Thiel wrote:

> ipchains -M -S 900 10 300 has no impact.
> The FIN timeout stays at 2 minutes. Strange. How many entries
> can the LVS handle, or is memory the only limit?
>
> I just found the sysctl variables net.ipv4.vs.timeout*,
> Maybe this is the place to modify the timeout settings?! ..

don't know sorry

we'll have to wait for Julian for an answer on this

Joe
--
Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
contractor to the National Environmental Supercomputer Center,
mailto:mack.joseph@epa.gov ph# 919-541-0007, RTP, NC, USA
Re: modify the inActConn timeout Setting ...
Hendrik Thiel wrote:
>
> hi,
>
> thanks for the quick answer....
>
> ipchains -M -S 900 10 300 has no impact.

are you using a 2.4.x series director?
if so you'll have to run the appropriate iptables commands.
I don't know what they are yet (the iptables man page,
down at the bottom, says that -M -S has changed),
but we ought to find out.

Joe
--
Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
contractor to the National Environmental Supercomputer Center,
mailto:mack.joseph@epa.gov ph# 919-541-0007, RTP, NC, USA
Re: modify the inActConn timeout Setting ...
> Hendrik Thiel wrote:
> >
> > hi,
> >
> > thanks for the quick answer....
> >
> > ipchains -M -S 900 10 300 has no impact.
>
> are you using a 2.4.x series director?
> if so you'll have to run the appropriate iptables commands.
> I don't know what they are yet (the iptables man page,
> down at the bottom, says that -M -S has changed),
> but we ought to find out.

:) i am using mandrake 7.2 (2.2.17-21mdksmp)
i figured out that the changes i made with ipchains -M -S
were written to net.ipv4.vs.timeout_udp and two more
of the timeout sysctl variables, but the connections still stay 2
minutes in the masq. table...
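
To watch where the ipchains values end up (exactly which timeout
variables they map to is my guess - check on your own box):

  # the udp value is one of them; compare before and after ipchains -M -S
  cat /proc/sys/net/ipv4/vs/timeout_udp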

to all: what is your count of InActConn entries
(in peak times)? any known max. values?

bye



Hendrik Thiel
Falk eSolutions AG
Tel: 02841/9097355
Fax: 02841-9097331
Re: modify the inActConn timeout Setting ...
Hello,

On Mon, 19 Mar 2001, Hendrik Thiel wrote:

> Hi all
>
> we are using an LVS in NAT mode and everything works fine ...
> The only problem seems to be the huge number of (idle)
> connection entries.
>
> ipvsadm shows a lot of InActConn entries (more than 10000 per
> realserver).
> ipchains -M -L -n shows that these connections last 2 minutes.
> Is it possible to reduce this time, to keep the masquerading table
> small? e.g. 10 seconds ...

http://marc.theaimsgroup.com/?t=98227299800016&w=2&r=1
http://www.linux-vs.org/defense.html

You can edit ip_masq.c to reduce them by hand, or enable
the secure_tcp strategy and alter the proc values. One entry
occupies 128 bytes, so 10k entries mean 1.28MB of memory. Maybe this
is fatal sometimes. You need to alter the TIME_WAIT value; FIN_WAIT
can be changed with ipchains.
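
Something like this, as an untested sketch (the proc entries correspond
to the net.ipv4.vs.* sysctls):

  # always enable the secure_tcp strategy
  echo 3 > /proc/sys/net/ipv4/vs/secure_tcp

  # with secure_tcp on, shrink the TIME_WAIT timeout (seconds)
  echo 20 > /proc/sys/net/ipv4/vs/timeout_timewait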

> thanks in advance
>
> Hendrik Thiel
> Falk eSolutions AG
> Tel: 02841/9097355
> Fax: 02841-9097331
>

Regards

--
Julian Anastasov <ja@ssi.bg>
Re: modify the inActConn timeout Setting ...
Julian Anastasov wrote:
>
> Hello,
>
> On Mon, 19 Mar 2001, Hendrik Thiel wrote:
>
> > Hi all
> >
> > we are using an LVS in NAT mode and everything works fine ...
> > The only problem seems to be the huge number of (idle)
> > connection entries.
> >
> > ipvsadm shows a lot of InActConn entries (more than 10000 per
> > realserver).
> > ipchains -M -L -n shows that these connections last 2 minutes.
> > Is it possible to reduce this time, to keep the masquerading table
> > small? e.g. 10 seconds ...

Hendrik,

I'm trying to reproduce this problem here. I don't have a client
that can produce this many InActConn. Using Julian's testlvs I can only
get about 500. Hendrik has a production LVS with many clients
from outside.

Any better client I can try?

You are just looking with ipvsadm and ipchains on the director? (just
so I can reproduce what you are doing)

Julian,

How do you do ipchains -M -L with iptables?

Joe
--
Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
contractor to the National Environmental Supercomputer Center,
mailto:mack.joseph@epa.gov ph# 919-541-0007, RTP, NC, USA
Re: modify the inActConn timeout Setting ...
Hello,

On Tue, 20 Mar 2001, Joseph Mack wrote:

> Hendrik,
>
> I'm trying to reproduce this problem here. I don't have a client
> that can produce this many InActConn. Using Julian's testlvs I can only
> get about 500. Hendrik has a production LVS with many clients

500 is a very big value. testlvs is very restrictive and its
default values prevent errors. By default testlvs sends from 254
different sources. If you change -srcnum you may overload your LAN :)

> from outside.
>
> Any better client I can try?
>
> You are just looking with ipvsadm and ipchains on the director? (just
> so I can reproduce what you are doing)
>
> Julian,
>
> How do you do ipchains -M -L with iptables?

ipvsadm -Lcn
Not sure if there is such support in iptables

> Joe


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: modify the inActConn timeout Setting ...
>
> Hello,
>
> On Tue, 20 Mar 2001, Joseph Mack wrote:
>
> > Hendrik,
> >
> > I'm trying to reproduce this problem here. I don't have a client
> > that can produce this many InActConn. Using Julian's testlvs I can only
> > get about 500. Hendrik has a production LVS with many clients
>
> 500 is a very big value. testlvs is very restrictive and its
> default values prevent errors. By default testlvs sends from 254
> different sources. If you change -srcnum you may overload your LAN :)
>
> > from outside.
> >
> > Any better client I can try?
> >
> > You are just looking with ipvsadm and ipchains on the director? (just
> > so I can reproduce what you are doing)
> >
> > Julian,
> >
> > How do you do ipchains -M -L with iptables?
>
> ipvsadm -Lcn
> Not sure if there is such support in iptables
>
> > Joe
>
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>
>

Hi,

(lvs 0.9.14, kernel 2.2.17)

i managed to get a lower expire time.
an ipchains -M -S 1200 20 0 was not enough.

I did a "sysctl -w net.ipv4.vs.secure_tcp=3"
and "sysctl -w net.ipv4.vs.timeout_timewait=20",
and that did what we want ... The expire time is now set to 20
seconds. The question now is, what exactly does secure_tcp=3 do?
"http://www.linuxvirtualserver.org/defense.html" says only a little
about it. Didn't quite figure out what it's all about...

with this lower expire time, we get a far lower amount of "inactconn"
and everything seems to be all right...

net.ipv4.vs.timeout_icmp = 60
net.ipv4.vs.timeout_udp = 180
net.ipv4.vs.timeout_synack = 100
net.ipv4.vs.timeout_listen = 90
net.ipv4.vs.timeout_lastack = 30
net.ipv4.vs.timeout_closewait = 60
net.ipv4.vs.timeout_close = 10
net.ipv4.vs.timeout_timewait = 20
net.ipv4.vs.timeout_finwait = 10
net.ipv4.vs.timeout_synrecv = 10
net.ipv4.vs.timeout_synsent = 60
net.ipv4.vs.timeout_established = 1200
net.ipv4.vs.secure_tcp = 3
net.ipv4.vs.drop_packet = 0
net.ipv4.vs.drop_entry = 0
net.ipv4.vs.am_droprate = 10
net.ipv4.vs.amemthresh = 1024

these are our settings right now... anything not recommended?

with "ab -n 3000 -c 1024 <url>" (apachebench with 3000requests
and 1024 concurrent
connections) we got 50-60 active connections and 500-600
inactconnections.....with -c above 1024 we get an "socket: too
many open files error" client side error i think...
The interesting thing to know is, what are the Limits for the LVS
(with NAT)...The number of available sockets? 65535 simultanous
connections? the memory ? the masq table?

we don't have it in production yet (saturday, i think). If something
goes wrong we have a Bigip as a backup system :) ... but only for backup ...

cu ...

Hendrik Thiel
Falk eSolutions AG
Tel: 02841/9097355
Fax: 02841-9097331
Re: modify the inActConn timeout Setting ...
* On 03/21/01 thiel@newstrader.de wrote:
>
> Hi,
>
> (lvs 0.9.14, kernel 2.2.17)
>
> i managed to get a lower expire time.
> an ipchains -M -S 1200 20 0 was not enough.
>
> I did a "sysctl -w net.ipv4.vs.secure_tcp=3"
> and "sysctl -w net.ipv4.vs.timeout_timewait=20",
> and that did what we want ... The expire time is now set to 20
> seconds. The question now is, what exactly does secure_tcp=3 do?
> "http://www.linuxvirtualserver.org/defense.html" says only a little
> about it. Didn't quite figure out what it's all about...
>
> with this lower expire time, we get a far lower amount of "inactconn"
> and everything seems to be all right...
>
> net.ipv4.vs.timeout_icmp = 60
> net.ipv4.vs.timeout_udp = 180
> net.ipv4.vs.timeout_synack = 100
> net.ipv4.vs.timeout_listen = 90
> net.ipv4.vs.timeout_lastack = 30
> net.ipv4.vs.timeout_closewait = 60
> net.ipv4.vs.timeout_close = 10
> net.ipv4.vs.timeout_timewait = 20
> net.ipv4.vs.timeout_finwait = 10
> net.ipv4.vs.timeout_synrecv = 10
> net.ipv4.vs.timeout_synsent = 60
> net.ipv4.vs.timeout_established = 1200
> net.ipv4.vs.secure_tcp = 3
> net.ipv4.vs.drop_packet = 0
> net.ipv4.vs.drop_entry = 0
> net.ipv4.vs.am_droprate = 10
> net.ipv4.vs.amemthresh = 1024
>
> these are our settings right now... anything not recommended?
>
> with "ab -n 3000 -c 1024 <url>" (apachebench with 3000requests
> and 1024 concurrent
> connections) we got 50-60 active connections and 500-600
> inactconnections.....with -c above 1024 we get an "socket: too
> many open files error" client side error i think...

Try:

ulimit -a

You should get something like:

core file size (blocks) 1000000
data seg size (kbytes) unlimited
file size (blocks) unlimited
max memory size (kbytes) unlimited
stack size (kbytes) 8192
cpu time (seconds) unlimited
max user processes 2048
pipe size (512 bytes) 8
open files 1024
virtual memory (kbytes) 2105343

Where the maximum number of open files is 1024

(for linux kernel 2.2.x)
Unfortunately, there does not seem to be a way to dynamically
raise this per process limit (correct me if I'm wrong) without
recompiling the kernel and resetting NR_OPEN in
/usr/src/linux/include/linux/limits.h
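
You can at least inspect both limits from userspace (the fs.file-max knob
below is the system-wide file handle limit, not the per-process NR_OPEN
one, so treat this as a side note):

  # per-process open file limit (the 1024 shown above)
  ulimit -n

  # system-wide file handle limit, tunable at runtime
  cat /proc/sys/fs/file-max
  echo 8192 > /proc/sys/fs/file-max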

-------------------------------------

I'm using LVS/DR and have been playing around with the same
issues. But in this case, since the return path is not through
the LVS, I'm wondering if making these types of changes on the LVS box
requires some additional coordination/changes on my "real"
web servers.


> The interesting thing to know is, what are the limits for the LVS
> (with NAT)? The number of available sockets? 65535 simultaneous
> connections? The memory? The masq table?
>
> we don't have it in production yet (saturday, i think). If something
> goes wrong we have a Bigip as a backup system :) ... but only for backup ...
>
> cu ...
>
> Hendrik Thiel
> Falk eSolutions AG
> Tel: 02841/9097355
> Fax: 02841-9097331
>

--
Will
w@sibertron.com
Re: modify the inActConn timeout Setting ...
* On 03/21/01 whc2u@leptonics.com wrote:
>
> (for linux kernel 2.2.x)
> Unfortunately, there does not seem to be a way to dynamically
> raise this per process limit (correct me if I'm wrong) without
> recompiling the kernel and resetting NR_OPEN in
> /usr/src/linux/include/linux/limits.h
>

Then again, I could be wrong and it's (1024 * 1024)

> -------------------------------------
>
> I'm using LVS/DR and have been playing around with the same
> issues. But in this case, since the return path is not through
> the LVS, I'm wondering if making these types of changes on the LVS box
> requires some additional coordination/changes on my "real"
> web servers.
>
>
> > The interesting thing to know is, what are the limits for the LVS
> > (with NAT)? The number of available sockets? 65535 simultaneous
> > connections? The memory? The masq table?
> >
> > we don't have it in production yet (saturday, i think). If something
> > goes wrong we have a Bigip as a backup system :) ... but only for backup ...
> >
> > cu ...
> >
> > Hendrik Thiel
> > Falk eSolutions AG
> > Tel: 02841/9097355
> > Fax: 02841-9097331
> >
>
-- Will
w@sibertron.com
Re: modify the inActConn timeout Setting ...
Hello,

On Wed, 21 Mar 2001, Hendrik Thiel wrote:

> seconds. The question now is, what exactly does secure_tcp=3 do?
> "http://www.linuxvirtualserver.org/defense.html" says only a little
> about it. Didn't quite figure out what it's all about...

Read again. It contains:

The valid values are from 0 to 3, where 0 means that this strategy is
always disabled, 1 and 2 mean automatic modes (when there is not enough
available memory, the strategy is enabled and the variable is
automatically set to 2; otherwise the strategy is disabled and the
variable is set to 1), and 3 means that the strategy is always enabled.



The secure_tcp mode does not listen to the client's TCP flags
and in this way prevents long state timeouts caused by external
attackers. All strategies try to keep free memory in the director.
This is the reason you want to reduce the timeouts, no?
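
If you would rather not force it on permanently, the automatic mode is
just (a sketch using the same sysctl as above):

  # enable secure_tcp only when the director runs low on memory
  sysctl -w net.ipv4.vs.secure_tcp=1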

> with this lower expire time, we get a far lower amount of "inactconn"
> and everything seems to be all right...

Yep, more free memory.

> The interesting thing to know is, what are the limits for the LVS
> (with NAT)? The number of available sockets? 65535 simultaneous
> connections? The memory? The masq table?

The free memory, unlimited, 128 bytes/connection. LVS does
not use system sockets. The masq table is used only for LVS/NAT FTP
or for normal MASQ connections not part of LVS (by default 40960
connections per protocol). LVS has its own connection table, no
limits.
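
(As a rough worked example: even 200000 concurrent entries would only
cost about 200000 * 128 bytes = 25.6MB of director memory.)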


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: modify the inActConn timeout Setting ...
Hi,

> Hello,
>
> On Wed, 21 Mar 2001, Hendrik Thiel wrote:
>
> > seconds. The question now is, what exactly does secure_tcp=3 do?
> > "http://www.linuxvirtualserver.org/defense.html" says only a little
> > about it. Didn't quite figure out what it's all about...
>
> Read again. It contains:
>
> The valid values are from 0 to 3, where 0 means that this strategy is
> always disabled, 1 and 2 mean automatic modes (when there is not enough
> available memory, the strategy is enabled and the variable is
> automatically set to 2; otherwise the strategy is disabled and the
> variable is set to 1), and 3 means that the strategy is always enabled.

the default is set to 0. This feature seems to make sense, so why not
set it to 3, or one of the automatic values? :)

>
>
>
> The secure_tcp mode does not listen to the client's TCP flags
> and in this way prevents long state timeouts caused by external
> attackers. All strategies try to keep free memory in the director.
> This is the reason you want to reduce the timeouts, no?

yes and no. The reason was reducing the masq table entries,
because i've been told that this might be a bottleneck... might?!
I got a lot of masq entries (3 realservers, 5000 InActConn per
realserver) ... and i am afraid of running into problems with such large
numbers of InActConn ... so i reduced the timewait variable to get
20-second connections instead of 2 minutes. i had to set
"net.ipv4.vs.secure_tcp=3", because that seems to be the only method
to successfully lower the idle timeout settings.

do i have any alternatives? or can the masq table handle that
many idle connections without getting into trouble?

>
> > with this lower expire time, we get a far lower amount of "inactconn"
> > and everything seems to be all right...
>
> Yep, more free memory.
>
> > The interesting thing to know is, what are the limits for the LVS
> > (with NAT)? The number of available sockets? 65535 simultaneous
> > connections? The memory? The masq table?
>
> The free memory, unlimited, 128 bytes/connection. LVS does
> not use system sockets. The masq table is used only for LVS/NAT FTP
> or for normal MASQ connections not part of LVS (by default 40960
> connections per protocol). LVS has its own connection table, no
> limits.


so the Masq Table is the weakness when using LVS/NAT ... ?

regards,



Hendrik Thiel
Falk eSolutions AG
Tel: 02841/9097355
Fax: 02841-9097331