[lvs-users] LVS and Squid client with persistent connection
A Squid client holding an open persistent connection causes LVS to
forward all new connections to the other node only.

LVS configuration and status:
~ $ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.x.y.55:3128 lblcr
-> 10.x.y.50:3128 Route 1 26 440
-> 10.x.y.51:3128 Route 1 1 0
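
For reference, a configuration like the one above would presumably have
been created along these lines; the addresses, scheduler, and direct
routing ("Route") are taken from the output above, the commands
themselves are a sketch:

~ $ ipvsadm -A -t 10.x.y.55:3128 -s lblcr                    # virtual service with lblcr
~ $ ipvsadm -a -t 10.x.y.55:3128 -r 10.x.y.50:3128 -g -w 1   # real server, direct routing
~ $ ipvsadm -a -t 10.x.y.55:3128 -r 10.x.y.51:3128 -g -w 1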

Once Squid on the .51 node is restarted, connections are distributed
across both nodes again. After "some time", only one connection remains
open/active via the .51 node, always caused by the same client opening
an HTTP persistent connection to a specific domain.
This is running on an up-to-date Debian 9.
~ $ uname -a
Linux proxy01 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04)
x86_64 GNU/Linux

Changing the weight to 20 caused connections to go through both nodes again.
~ $ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.x.y.55:3128 lblcr
-> 10.x.y.50:3128 Route 20 21 248
-> 10.x.y.51:3128 Route 20 25 42
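
(The weight change itself can be applied in place; a sketch matching the
addresses above:)

~ $ ipvsadm -e -t 10.x.y.55:3128 -r 10.x.y.50:3128 -g -w 20
~ $ ipvsadm -e -t 10.x.y.55:3128 -r 10.x.y.51:3128 -g -w 20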

It seems that opening 20 persistent connections via one node causes
the same behavior as one connection with the weight set to one.

With a weight of 200 set on both nodes, all connections were forwarded
to one node only.

How should the weight be understood for the lblcr scheduler? What is
the best practice for balancing persistent connections?
According to the LVS HOWTO [1], setting LVS persistence for the VIP
might help LVS forward connections across all nodes independently of
the weight value. Is this assumption correct?
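
(If persistence were to be tried, it would be enabled on the virtual
service roughly like this; the 300-second timeout is only an
illustrative value:)

~ $ ipvsadm -E -t 10.x.y.55:3128 -s lblcr -p 300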

[1] http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.persistent_connection.html

--
Peter

_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@LinuxVirtualServer.org
Send requests to lvs-users-request@LinuxVirtualServer.org
or go to http://lists.graemef.net/mailman/listinfo/lvs-users

Re: [lvs-users] LVS and Squid client with persistent connection
Hello Peter,

> A Squid client holding an open persistent connection causes LVS to
> forward all new connections to the other node only.

Yeah, while there's an established connection over IPVS, all subsequent
new connections from the same source IP are routed to the same backend
server.

Weight just specifies how traffic from different source IPs is
distributed when using the w* load-balancing algorithms.
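
(For comparison, selecting one of those weighted schedulers, e.g. wlc or
wrr, would be a one-line change; a sketch using the service address from
the original post:)

~ $ ipvsadm -E -t 10.x.y.55:3128 -s wlc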

I hate to suggest this, but have you considered using haproxy to distribute
requests to your squid servers?


Best regards,
pp. Thomas Bätzler
--
BRINGE Informationstechnik GmbH
Zur Seeplatte 12
D-76228 Karlsruhe
Germany

Phone: +49 721 94246-0
Phone: +49 171 5438457
Fax: +49 721 94246-66
Web: http://www.bringe.de/

Managing Director: Dipl.-Ing. (FH) Martin Bringe
VAT ID: DE812936645, HRB 108943 Mannheim

Re: [lvs-users] LVS and Squid client with persistent connection
Hello Thomas,

On Wed, Dec 5, 2018 at 3:26 PM Thomas Bätzler <t.baetzler@bringe.com> wrote:
> > A Squid client holding an open persistent connection causes LVS to
> > forward all new connections to the other node only.
>
> Yeah, while there's an established connection over ipvs all subsequent new
> connections from the same source ip are routed to the same backend server.

The lblcr algorithm should not balance on the source IP, but on the
destination IP (roughly "dh with wlc"), though the algorithm is not
entirely clear to me. I would not expect the described situation to
happen.

>
> I hate to suggest this, but have you considered using haproxy to distribute
> requests to your squid servers?

I would like to stay with LVS and avoid user-level balancing if possible.

--
Peter

Re: [lvs-users] LVS and Squid client with persistent connection
Hello,

On Wed, 5 Dec 2018, Peter Viskup wrote:

> A Squid client holding an open persistent connection causes LVS to
> forward all new connections to the other node only.
>
> LVS configuration and status:
> ~ $ ipvsadm -Ln
> IP Virtual Server version 1.2.1 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 10.x.y.55:3128 lblcr
> -> 10.x.y.50:3128 Route 1 26 440
> -> 10.x.y.51:3128 Route 1 1 0
>
> Once Squid on the .51 node is restarted, connections are distributed
> across both nodes again. After "some time", only one connection remains
> open/active via the .51 node, always caused by the same client opening
> an HTTP persistent connection to a specific domain.
> This is running on an up-to-date Debian 9.
> ~ $ uname -a
> Linux proxy01 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04)
> x86_64 GNU/Linux
>
> Changing the weight to 20 caused connections to go through both nodes again.
> ~ $ ipvsadm -Ln
> IP Virtual Server version 1.2.1 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 10.x.y.55:3128 lblcr
> -> 10.x.y.50:3128 Route 20 21 248
> -> 10.x.y.51:3128 Route 20 25 42
>
> It seems that opening 20 persistent connections via one node causes
> the same behavior as one connection with the weight set to one.
>
> With a weight of 200 set on both nodes, all connections were forwarded
> to one node only.
>
> How should the weight be understood for the lblcr scheduler? What is
> the best practice for balancing persistent connections?

LBLC[R] works by directing client traffic based on the destination
address, i.e. the remote web server IP. So, we try to forward every
client that browses some site to the same proxy server, with the idea
of reusing the cached reply. When some proxy is overloaded (its number
of TCP connections exceeds the configured weight value) and we notice
an imbalance (another proxy is lightly loaded, with TCP connections
below half its weight), we decide to forward clients to such proxies.
As a result, more proxies start to cache the same remote site.
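
(A quick, illustrative way to spot that state in the ipvsadm output
shown earlier; the comparison mirrors the overload rule described here,
not the exact kernel code:)

~ $ ipvsadm -Ln | awk '$1 == "->" && $5+0 > $4+0 { print $2, "ActiveConn", $5, "exceeds weight", $4 }'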

So, you should set the weight to a value that represents the maximum
number of established TCP connections the proxy can handle
simultaneously before it is considered overloaded, e.g. before reaching
some resource limit such as bandwidth, memory, CPU, or storage. If you
set too large a value, you risk slowdowns, delays, resets, etc.
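
(As a sketch under that guidance: if each proxy had been measured to
sustain, say, around 5000 concurrent TCP connections (a number you have
to determine yourself), the weights would be set accordingly:)

~ $ ipvsadm -e -t 10.x.y.55:3128 -r 10.x.y.50:3128 -g -w 5000
~ $ ipvsadm -e -t 10.x.y.55:3128 -r 10.x.y.51:3128 -g -w 5000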

> According to the LVS HOWTO [1], setting LVS persistence for the VIP
> might help LVS forward connections across all nodes independently of
> the weight value. Is this assumption correct?

Persistence provides stickiness based on the client IP/subnet.
It is stricter by definition, because it is used to direct multiple
connections from the same client "session" (TLS or other) to the same
real server, in cases where the client must not be moved to another
server. For a web proxy this is not so critical: we can reach the same
remote site via more than one proxy. I don't know if there is any
benefit to using persistence for proxy setups; each client would be
forwarded via a single proxy only, and as a result the remote site will
be cached on more proxy servers as more clients access it.
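
(For completeness, persistence with client-subnet granularity is
configured with -p and -M; the values here are only illustrative:)

~ $ ipvsadm -E -t 10.x.y.55:3128 -s lblcr -p 360 -M 255.255.255.0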

Regards

--
Julian Anastasov <ja@ssi.bg>
