Mailing List Archive

[lvs-users] help: lvs performance testing
hi everyone:
I am testing the throughput of an LVS director running in FULLNAT mode; there is only one director. The testing tool is webbench. Once throughput reaches about 50,000 it stops increasing, and I don't know why. I have already tuned kernel parameters, including the open-file limit and the TCP connection settings. Thank you.

my ipvs version:
[root@xxx.xxx.xxx.xxx ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4194304)

kernel:
Linux lvs-test 2.6.32 #1 SMP Wed May 21 14:31:51 CST 2014 x86_64 x86_64 x86_64 GNU/Linux

optimization of kernel:
net.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_tcp_timeout_established = 1500
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 100
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.route.flush = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 819200
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_max_tw_buckets = 819200
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.core.netdev_max_backlog = 500000
net.ipv4.ip_forward=1
net.ipv4.ip_conntrack_max=1024000
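A quick way to see whether any of these limits is actually being hit during a run is to watch the kernel's drop counters while the test is going. This is a minimal sketch, assuming a standard Linux /proc layout; paths may differ on other distros:

```shell
# Sketch: check two common director-side bottleneck counters during a load test.

# 1) Per-CPU softirq backlog drops: column 2 of /proc/net/softnet_stat (hex).
#    A non-zero, growing value means netdev_max_backlog (or a single CPU
#    doing all the packet processing) is the bottleneck.
total_dropped=0
cpu=0
while read -r processed dropped rest; do
    d=$(printf '%d' "0x$dropped")
    printf 'cpu%d dropped=%d\n' "$cpu" "$d"
    total_dropped=$((total_dropped + d))
    cpu=$((cpu + 1))
done < /proc/net/softnet_stat

# 2) Conntrack table usage vs. its configured maximum (if conntrack is loaded).
cc=/proc/sys/net/netfilter/nf_conntrack_count
cm=/proc/sys/net/netfilter/nf_conntrack_max
if [ -r "$cc" ]; then
    echo "conntrack: $(cat "$cc") / $(cat "$cm")"
fi
```

If the dropped counters stay at zero while QPS plateaus, the bottleneck is likely elsewhere (NIC queues, CPU, or the test client itself) rather than these sysctls.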


[root@xxx.xxx.xxx.xxx ~]# ulimit -n
655350

log:
Aug 14 15:06:07 10 Keepalived_healthcheckers: TCP connection to [10.153.75.152]:80 success.
Aug 14 15:06:07 10 Keepalived_healthcheckers: Enabling service [10.153.75.152]:80 to VS [test]:0
Aug 14 15:06:15 10 Keepalived_healthcheckers: TCP connection to [10.153.74.85]:80 failed !!!
Aug 14 15:06:15 10 Keepalived_healthcheckers: Disabling service [10.153.74.85]:80 from VS [test]:0
Aug 14 15:06:18 10 Keepalived_healthcheckers: TCP connection to [10.153.74.139]:80 failed !!!
Aug 14 15:06:18 10 Keepalived_healthcheckers: Disabling service [10.153.74.139]:80 from VS [test]:0
Aug 14 15:06:22 10 Keepalived_healthcheckers: TCP connection to [10.153.74.85]:80 success.
Aug 14 15:06:22 10 Keepalived_healthcheckers: Enabling service [10.153.74.85]:80 to VS [test]:0
Aug 14 15:06:25 10 Keepalived_healthcheckers: TCP connection to [10.153.74.139]:80 success.
Aug 14 15:06:25 10 Keepalived_healthcheckers: Enabling service [10.153.74.139]:80 to VS [test]:0
Aug 14 15:06:43 10 Keepalived_healthcheckers: TCP connection to [10.153.75.153]:80 failed !!!
Aug 14 15:06:43 10 Keepalived_healthcheckers: Disabling service [10.153.75.153]:80 from VS [test]:0
Aug 14 15:06:50 10 Keepalived_healthcheckers: TCP connection to [10.153.75.153]:80 success.
Aug 14 15:06:50 10 Keepalived_healthcheckers: Enabling service [10.153.75.153]:80 to VS [test]:0
Aug 14 15:06:52 10 Keepalived_healthcheckers: TCP connection to [10.153.74.140]:80 failed !!!
Aug 14 15:06:52 10 Keepalived_healthcheckers: Disabling service [10.153.74.140]:80 from VS [test]:0
Aug 14 15:06:59 10 Keepalived_healthcheckers: TCP connection to [10.153.74.140]:80 success.
Aug 14 15:06:59 10 Keepalived_healthcheckers: Enabling service [10.153.74.140]:80 to VS [test]:0
Aug 14 15:07:12 10 Keepalived_healthcheckers: TCP connection to [10.153.75.56]:80 failed !!!
Aug 14 15:07:12 10 Keepalived_healthcheckers: Disabling service [10.153.75.56]:80 from VS [test]:0
Aug 14 15:07:19 10 Keepalived_healthcheckers: TCP connection to [10.153.75.56]:80 success.
Aug 14 15:07:19 10 Keepalived_healthcheckers: Enabling service [10.153.75.56]:80 to VS [test]:0
Aug 14 15:07:24 10 Keepalived_healthcheckers: TCP connection to [10.153.74.85]:80 failed !!!
Aug 14 15:07:24 10 Keepalived_healthcheckers: Disabling service [10.153.74.85]:80 from VS [test]:0
Aug 14 15:07:28 10 Keepalived_healthcheckers: TCP connection to [10.153.75.153]:80 failed !!!
Aug 14 15:07:28 10 Keepalived_healthcheckers: Disabling service [10.153.75.153]:80 from VS [test]:0
Aug 14 15:07:31 10 Keepalived_healthcheckers: TCP connection to [10.153.74.85]:80 success.
Aug 14 15:07:31 10 Keepalived_healthcheckers: Enabling service [10.153.74.85]:80 to VS [test]:0
Aug 14 15:07:35 10 Keepalived_healthcheckers: TCP connection to [10.153.75.153]:80 success.
Aug 14 15:07:35 10 Keepalived_healthcheckers: Enabling service [10.153.75.153]:80 to VS [test]:0


iptables: iptables is stopped

route table:
[root@10.153.72.2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.27.0.0 172.27.0.9 255.255.255.252 UG 101 0 0 eth1
172.27.0.4 172.27.0.9 255.255.255.252 UG 101 0 0 eth1
172.27.0.8 0.0.0.0 255.255.255.252 U 0 0 0 eth1
172.27.0.16 172.27.0.9 255.255.255.252 UG 101 0 0 eth1
172.27.0.20 172.27.0.9 255.255.255.252 UG 101 0 0 eth1
10.153.72.0 0.0.0.0 255.255.248.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
10.0.0.0 10.153.79.254 255.0.0.0 UG 0 0 0 eth0
0.0.0.0 172.27.0.9 0.0.0.0 UG 0 0 0 eth1


keepalived conf:
there are 14 real servers in total:

global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_connect_timeout 30
router_id LVS_DEVEL
}

local_address_group laddr_g1 {
10.153.72.2
}

vrrp_instance VI_1 {
state MASTER
interface eth1
virtual_router_id 200
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
xxx.xxx.xxx.xxx/32
}
}
virtual_server_group test {
xxx.xxx.xxx.xxx 80 # vip1
}


virtual_server group test {
delay_loop 7
lb_algo rr
lb_kind FNAT
laddr_group_name laddr_g1
protocol TCP
# syn_proxy
# persistence_timeout 50
omega
quorum 1
alpha
hysteresis 0

real_server 10.153.75.118 80 {
weight 1
inhibit_on_failure
TCP_CHECK {
connect_timeout 3
nb_get_retry 2
delay_before_retry 5
}
}

real_server 10.153.74.139 80 {
weight 1
inhibit_on_failure
TCP_CHECK {
connect_timeout 3
nb_get_retry 2
delay_before_retry 5
}
}

real_server 10.153.74.140 80 {
weight 1
inhibit_on_failure
TCP_CHECK {
connect_timeout 3
#nb_get_retry 2
#delay_before_retry 5
}
}
.
.
.

}




_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@LinuxVirtualServer.org
Send requests to lvs-users-request@LinuxVirtualServer.org
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
Re: [lvs-users] help: lvs performance testing
Do you mean 50,000 TPS? That is about the maximum performance I would
expect on a gigabit network.
We can do 110,000+ on a 10G network (both tests with a single quad-core
Xeon chip + Intel NICs)
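Rough arithmetic behind the gigabit estimate, as a sketch (the ~2.5 KB per-request wire cost is an assumption for a tiny benchmark page, covering handshake, headers, body, and teardown):

```shell
# Upper bound on requests/s if the link itself is the only constraint:
#   rps = link_bits_per_second / (bytes_per_request * 8)
per_req_bytes=2500   # assumed wire cost of one small HTTP request+response

gig_rps=$((1000000000 / (per_req_bytes * 8)))
teng_rps=$((10000000000 / (per_req_bytes * 8)))

echo "1G  link: ~${gig_rps} req/s"    # ~50,000: matches the observed plateau
echo "10G link: ~${teng_rps} req/s"
```

With a ~2.5 KB per-request cost, a gigabit link tops out right around the 50,000 req/s the original poster is seeing, which is why the link speed question matters.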




On 14 August 2014 08:23, yige2008123 <yige2008123@126.com> wrote:

>
>
> hi everyone:
> I do test throughput of lvs server, the mode is fullnat. Num of
> director is only one.The testing tool is webbentch, when throughput more
> than 50,000, It's not increase, I do not know what is the reason, I
> optimization params of kernal, include limits of open file, tcp connection.
> etc, thank you
>
> my ipvs version:
> [root@xxx.xxx.xxx.xxx ~]# ipvsadm
> IP Virtual Server version 1.2.1 (size=4194304)
>
> kernel:
> Linux lvs-test 2.6.32 #1 SMP Wed May 21 14:31:51 CST 2014 x86_64 x86_64
> x86_64 GNU/Linux
>
> optimization of kernel:
> net.nf_conntrack_max = 25000000
> net.netfilter.nf_conntrack_max = 25000000
> net.netfilter.nf_conntrack_tcp_timeout_established = 1500
> net.ipv4.tcp_max_tw_buckets = 6000
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_rmem = 4096 87380 4194304
> net.ipv4.tcp_wmem = 4096 16384 4194304
> net.core.wmem_default = 8388608
> net.core.rmem_default = 8388608
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.core.netdev_max_backlog = 262144
> net.core.somaxconn = 262144
> net.ipv4.tcp_max_orphans = 3276800
> net.ipv4.tcp_max_syn_backlog = 262144
> net.ipv4.tcp_timestamps = 0
> net.ipv4.tcp_synack_retries = 1
> net.ipv4.tcp_syn_retries = 1
> net.ipv4.tcp_tw_recycle = 1
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_mem = 94500000 915000000 927000000
> net.ipv4.tcp_fin_timeout = 100
> net.ipv4.tcp_keepalive_time = 30
> net.ipv4.ip_local_port_range = 1024 65000
> net.ipv4.route.flush = 1
> net.ipv4.conf.default.rp_filter = 1
> net.ipv4.conf.all.rp_filter = 1
> net.ipv4.tcp_syncookies = 0
> net.ipv6.conf.all.disable_ipv6 = 1
> net.ipv4.tcp_max_syn_backlog = 819200
> net.ipv4.tcp_synack_retries = 1
> net.ipv4.tcp_max_tw_buckets = 819200
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_tw_recycle = 1
> net.ipv4.conf.all.arp_ignore = 1
> net.ipv4.conf.all.arp_announce = 2
> net.core.netdev_max_backlog = 500000
> net.ipv4.ip_forward=1
> net.ipv4.ip_conntrack_max=1024000
>
>
> [root@xxx.xxx.xxx.xxx ~]# ulimit -n
> 655350
>
> log:
> Aug 14 15:06:07 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.152]:80 success.
> Aug 14 15:06:07 10 Keepalived_healthcheckers: Enabling service
> [10.153.75.152]:80 to VS [test]:0
> Aug 14 15:06:15 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.85]:80 failed !!!
> Aug 14 15:06:15 10 Keepalived_healthcheckers: Disabling service
> [10.153.74.85]:80 from VS [test]:0
> Aug 14 15:06:18 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.139]:80 failed !!!
> Aug 14 15:06:18 10 Keepalived_healthcheckers: Disabling service
> [10.153.74.139]:80 from VS [test]:0
> Aug 14 15:06:22 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.85]:80 success.
> Aug 14 15:06:22 10 Keepalived_healthcheckers: Enabling service
> [10.153.74.85]:80 to VS [test]:0
> Aug 14 15:06:25 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.139]:80 success.
> Aug 14 15:06:25 10 Keepalived_healthcheckers: Enabling service
> [10.153.74.139]:80 to VS [test]:0
> Aug 14 15:06:43 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.153]:80 failed !!!
> Aug 14 15:06:43 10 Keepalived_healthcheckers: Disabling service
> [10.153.75.153]:80 from VS [test]:0
> Aug 14 15:06:50 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.153]:80 success.
> Aug 14 15:06:50 10 Keepalived_healthcheckers: Enabling service
> [10.153.75.153]:80 to VS [test]:0
> Aug 14 15:06:52 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.140]:80 failed !!!
> Aug 14 15:06:52 10 Keepalived_healthcheckers: Disabling service
> [10.153.74.140]:80 from VS [test]:0
> Aug 14 15:06:59 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.140]:80 success.
> Aug 14 15:06:59 10 Keepalived_healthcheckers: Enabling service
> [10.153.74.140]:80 to VS [test]:0
> Aug 14 15:07:12 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.56]:80 failed !!!
> Aug 14 15:07:12 10 Keepalived_healthcheckers: Disabling service
> [10.153.75.56]:80 from VS [test]:0
> Aug 14 15:07:19 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.56]:80 success.
> Aug 14 15:07:19 10 Keepalived_healthcheckers: Enabling service
> [10.153.75.56]:80 to VS [test]:0
> Aug 14 15:07:24 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.85]:80 failed !!!
> Aug 14 15:07:24 10 Keepalived_healthcheckers: Disabling service
> [10.153.74.85]:80 from VS [test]:0
> Aug 14 15:07:28 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.153]:80 failed !!!
> Aug 14 15:07:28 10 Keepalived_healthcheckers: Disabling service
> [10.153.75.153]:80 from VS [test]:0
> Aug 14 15:07:31 10 Keepalived_healthcheckers: TCP connection to
> [10.153.74.85]:80 success.
> Aug 14 15:07:31 10 Keepalived_healthcheckers: Enabling service
> [10.153.74.85]:80 to VS [test]:0
> Aug 14 15:07:35 10 Keepalived_healthcheckers: TCP connection to
> [10.153.75.153]:80 success.
> Aug 14 15:07:35 10 Keepalived_healthcheckers: Enabling service
> [10.153.75.153]:80 to VS [test]:0
>
>
> iptables: iptables is stopped
>
> route table:
> [root@10.153.72.2 ~]# route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref Use
> Iface
> 172.27.0.0 172.27.0.9 255.255.255.252 UG 101 0 0
> eth1
> 172.27.0.4 172.27.0.9 255.255.255.252 UG 101 0 0
> eth1
> 172.27.0.8 0.0.0.0 255.255.255.252 U 0 0 0
> eth1
> 172.27.0.16 172.27.0.9 255.255.255.252 UG 101 0 0
> eth1
> 172.27.0.20 172.27.0.9 255.255.255.252 UG 101 0 0
> eth1
> 10.153.72.0 0.0.0.0 255.255.248.0 U 0 0 0
> eth0
> 169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0
> eth0
> 169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0
> eth1
> 10.0.0.0 10.153.79.254 255.0.0.0 UG 0 0 0
> eth0
> 0.0.0.0 172.27.0.9 0.0.0.0 UG 0 0 0
> eth1
>
>
> keepalived conf:
> the num of realserver is 14:
>
> gobal_defs {
> notification_email {
> acassen@firewall.loc
> failover@firewall.loc
> sysadmin@firewall.loc
> }
> notification_email_from Alexandre.Cassen@firewall.loc
> smtp_connect_timeout 30
> router_id LVS_DEVEL
> }
>
> local_address_group laddr_g1 {
> 10.153.72.2
> }
>
> vrrp_instance VI_1 {
> state MASTER
> interface eth1
> virtual_router_id 200
> priority 150
> advert_int 1
> authentication {
> auth_type PASS
> auth_pass 123456
> }
> virtual_ipaddress {
> xxx.xxx.xxx.xxx/32
> }
> }
> virtual_server_group test {
> xxx.xxx.xxx.xxx 80 //vip1
> }
>
>
> virtual_server group test {
> delay_loop 7
> lb_algo rr
> lb_kind FNAT
> laddr_group_name laddr_g1
> protocol TCP
> # syn_proxy
> # persistence_timeout 50
> omega
> quorum 1
> alpha
> hysteresis 0
>
> real_server 10.153.75.118 80 {
> weight 1
> inhibit_on_failure
> TCP_CHECK {
> connect_timeout 3
> nb_get_retry 2
> delay_before_retry 5
> }
> }
>
> real_server 10.153.74.139 80 {
> weight 1
> inhibit_on_failure
> TCP_CHECK {
> connect_timeout 3
> nb_get_retry 2
> delay_before_retry 5
> }
> }
>
> real_server 10.153.74.140 80 {
> weight 1
> inhibit_on_failure
> TCP_CHECK {
> connect_timeout 3
> #nb_get_retry 2
> #delay_before_retry 5
> }
> }
> .
> .
> .
>
> }
>
>
>
>
> _______________________________________________
> Please read the documentation before posting - it's available at:
> http://www.linuxvirtualserver.org/
>
> LinuxVirtualServer.org mailing list - lvs-users@LinuxVirtualServer.org
> Send requests to lvs-users-request@LinuxVirtualServer.org
> or go to http://lists.graemef.net/mailman/listinfo/lvs-users




--
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/
Re: [lvs-users] help: lvs performance testing
No, the 50,000+ figure is QPS. The NIC is an "Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)".
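On a 10G X540, a plateau around 50k QPS often means all packet processing is landing on a single core. A minimal sketch for checking whether the NIC's interrupts are spread across CPUs (eth1 is the interface name from this thread; substitute your own):

```shell
iface=eth1   # interface name taken from the poster's setup; adjust as needed

# Count how many IRQ lines (i.e. MSI-X queues) the NIC registered. One line
# only => a single RX queue, so one CPU handles every incoming packet.
irq_lines=$(grep -c "$iface" /proc/interrupts 2>/dev/null || true)
echo "$iface IRQ lines: ${irq_lines:-0}"

# If there is just one queue, software steering (RPS) can spread the load:
#   echo f > /sys/class/net/$iface/queues/rx-0/rps_cpus
# (shown as a comment only -- verify the queue names on your system first)
```

Watching `top` for one CPU pinned at 100% softirq (`si`) during the test points to the same conclusion.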


On 2014-08-14 05:00:46, "Malcolm Turnbull" <malcolm@loadbalancer.org> wrote:
>Do you mean 50,000 TPS? That is the kind of maximum performance I would
>expect on a gigabit network.
>We can do 110,000+ on a 10G network (both tests with a single Xeon Quad
>core chip + intel NICs)