Mailing List Archive

[lvs-users] feedback loop
Hi, I have two nodes running ipvs/keepalived and syslog-ng for the load
balanced service. Both nodes have a single network interface in production,
but two in my local test kitchen. (eth0 for vagrant, eth1 for the multi
node comms).

I have discovered a feedback loop between both directors causing 100%
network utilization. The same packets are being played over and over again
(verified by packet contents timestamp).

I have read this, but the solution is not clear.
http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.localnode.html

When running both the ipvs director and the real server on the same box, do
I need to use firewall marks and -t mangle based on mac-source of the other
box?

lo:1 has the vip.
sysctl.d
net.ipv4.conf.eth1.arp_ignore = 1
net.ipv4.conf.eth1.arp_announce = 2
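For completeness, the real-server side of this DR setup is just the VIP on loopback plus the ARP sysctls above; roughly (the /32 on lo is how I have it here):

```shell
# Put the VIP on loopback with a /32 so the box accepts DR traffic
# for the VIP without ever ARPing for it on the wire.
ip addr add 192.168.11.31/32 dev lo label lo:1

# Don't answer or announce ARP for the VIP on the real interface.
sysctl -w net.ipv4.conf.eth1.arp_ignore=1
sysctl -w net.ipv4.conf.eth1.arp_announce=2
```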

I disabled the LVS sync daemon, but that did not improve the problem.

The real servers are .41 and .42; the VIP is .31.

[root@local-v-test-log-02 vagrant]# ipvsadm --save -n
-A -t 192.168.11.31:601 -s rr -p 60
-a -t 192.168.11.31:601 -r 192.168.11.41:601 -g -w 100
-a -t 192.168.11.31:601 -r 192.168.11.42:601 -g -w 100
-A -t 192.168.11.31:6514 -s rr -p 60
-a -t 192.168.11.31:6514 -r 192.168.11.41:6514 -g -w 100
-a -t 192.168.11.31:6514 -r 192.168.11.42:6514 -g -w 100
-A -u 192.168.11.31:514 -s rr
-a -u 192.168.11.31:514 -r 192.168.11.41:514 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.41:5141 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.41:5142 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.41:5143 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.42:514 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.42:5141 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.42:5142 -g -w 100
-a -u 192.168.11.31:514 -r 192.168.11.42:5143 -g -w 100


global_defs {
router_id LVS_LOG
}

vrrp_instance VI_1 {
state BACKUP
interface eth1
! lvs_sync_daemon_interface eth1
virtual_router_id 101
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.11.31
}
}

! tcp 6514 for syslog-tls
virtual_server 192.168.11.31 6514 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 60
protocol TCP

real_server 192.168.11.41 6514 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 6514 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
}

! tcp 601 for syslog-tcp
virtual_server 192.168.11.31 601 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 60
protocol TCP

real_server 192.168.11.41 601 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 601 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
}

! udp 514 for syslog-udp
virtual_server 192.168.11.31 514 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol UDP

real_server 192.168.11.41 514 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 514 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
real_server 192.168.11.41 5141 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 5141 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
real_server 192.168.11.41 5142 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 5142 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
real_server 192.168.11.41 5143 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.41"
misc_timeout 5
}
}
real_server 192.168.11.42 5143 {
weight 100
MISC_CHECK {
misc_path "/usr/libexec/keepalived/chk-syslog.sh 192.168.11.42"
misc_timeout 5
}
}
}
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@LinuxVirtualServer.org
Send requests to lvs-users-request@LinuxVirtualServer.org
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
Re: [lvs-users] feedback loop
On 2/3/2017 20:40, Zetan Drableg wrote:
> Hi, I have two nodes running ipvs/keepalived and syslog-ng for the load
> balanced service. Both nodes have a single network interface in production,
> but two in my local test kitchen. (eth0 for vagrant, eth1 for the multi
> node comms).
>
> I have discovered a feedback loop between both directors causing 100%
> network utilization. The same packets are being played over and over again
> (verified by packet contents timestamp).
>
> I have read this, but the solution is not clear.
> http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.localnode.html
>
> When running both the ipvs director and the real server on the same box, do
> I need to use firewall marks and -t mangle based on mac-source of the other
> box?
>

I have not configured such a setup, but according to the page you
referenced, as well as the basic logic of the situation, you need to
change your configuration from IP and port to firewall mark.

You want to mark all traffic on the target ports, UNLESS it's coming
from the other director, as that would allow the loop condition.

Here's an iptables option string from that page:

-t mangle -I PREROUTING -d $VIP -p tcp -m tcp --dport $VPORT \
  -m mac ! --mac-source $MAC_NODE2 -j MARK --set-mark 0x6

Here's something that might work for your syslog-tls service (use the
correct MAC, of course):

-t mangle -I PREROUTING -d 192.168.11.31 -p tcp --dport 6514 \
  -m mac ! --mac-source aa:bb:cc:dd:ee:ff -j MARK --set-mark 6514

Then instead of this:

! tcp 6514 for syslog-tls
virtual_server 192.168.11.31 6514 {

You'd use this:

! tcp 6514 for syslog-tls
virtual_server fwmark 6514 {

On the other director, you'd do the same thing; the only difference is that the firewall mark rules would exclude the first director's MAC address instead.

The firewall mark number is arbitrary, of course. Using the port number just makes it easy to keep track of things, unless you end up needing multiple IPs on the same port.
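Putting the pieces together, a sketch of the mangle rules for all three of your services on one director might look like this (the MAC and mark values are placeholders, and I haven't tested this exact set):

```shell
VIP=192.168.11.31
MAC_OTHER=aa:bb:cc:dd:ee:ff   # the OTHER director's MAC (placeholder)

# Mark VIP traffic for each service unless it came from the other
# director, so peer-forwarded packets are never balanced a second time.
iptables -t mangle -I PREROUTING -d "$VIP" -p tcp --dport 601 \
  -m mac ! --mac-source "$MAC_OTHER" -j MARK --set-mark 601
iptables -t mangle -I PREROUTING -d "$VIP" -p tcp --dport 6514 \
  -m mac ! --mac-source "$MAC_OTHER" -j MARK --set-mark 6514
iptables -t mangle -I PREROUTING -d "$VIP" -p udp --dport 514 \
  -m mac ! --mac-source "$MAC_OTHER" -j MARK --set-mark 514
```

Each corresponding keepalived block then becomes "virtual_server fwmark 601 { ... }" and so on, instead of the IP-and-port form.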


Re: [lvs-users] feedback loop
Hello,

On Fri, 3 Feb 2017, Zetan Drableg wrote:

> Hi, I have two nodes running ipvs/keepalived and syslog-ng for the load
> balanced service. Both nodes have a single network interface in production,
> but two in my local test kitchen. (eth0 for vagrant, eth1 for the multi
> node comms).
>
> I have discovered a feedback loop between both directors causing 100%
> network utilization. The same packets are being played over and over again
> (verified by packet contents timestamp).
>
> I have read this, but the solution is not clear.
> http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.localnode.html
>
> When running both the ipvs director and the real server on the same box, do
> I need to use firewall marks and -t mangle based on mac-source of the other
> box?

If you have more than one director with the same
IPVS rules (to support backup mode), and a director in
backup mode is also a real server used by the master
director, then you need either filtering by MAC or, as a
second option, the backup_only=1 sysctl flag on the backup
box (present in kernels 3.9+).
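For example, on the backup box (a sketch; kernels 3.9+ only):

```shell
# Disable the director function while in BACKUP state, so traffic
# forwarded by the master is delivered to the local stack instead
# of being load-balanced again.
sysctl -w net.ipv4.vs.backup_only=1

# Or persistently via sysctl.d (filename is just an example):
echo 'net.ipv4.vs.backup_only = 1' > /etc/sysctl.d/90-vs-backup-only.conf
```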

Its purpose, if enabled, is to disable the director
function (forwarding of traffic to real servers based on the
IPVS rules) when we are currently in backup mode for all
virtual services. Currently, we do not support disabling
the director function per virtual service.

As a result, when traffic arrives, such a backup
server will assume that another director (the master) is
using it as a real server, and will deliver the traffic
to the local stack. With backup_only=0, we instead assume
that clients sent the traffic to us, and the director
function can cause a loop to another director if it is
present as a real server in our rules.

Whatever solution you decide to use, its purpose
is to decide whether traffic comes from clients (then
we can forward it to real servers) or from another
director (then we are its real server).

Regards

--
Julian Anastasov <ja@ssi.bg>
