Mailing List Archive

How can I run Win 2000 as a real server in Direct Routing?
Hi,

How can I run Windows 2000 as a real server in Direct Routing?
I can run Windows NT as a real server by adding the MS Loopback
adapter, but Windows 2000 does not work correctly this way.
I also want to run the director and the real servers on the same
subnet. How can I do that?




Re: How can I run Win 2000 as a real server in Direct Routing?
Why not try Win2000's built-in load balancing module?

"±è¿µÁý" wrote:

> Hi,
>
> How can I run Windows 2000 as a real server in Direct Routing?
> I can run Windows NT as a real server by adding the MS Loopback
> adapter, but Windows 2000 does not work correctly this way.
> I also want to run the director and the real servers on the same
> subnet. How can I do that?
Re: How can I run Win 2000 as a real server in Direct Routing?
zhangxch wrote:
>
> Why not try Win2000's built-in load balancing module?

Finally someone brings it up. I didn't dare to mention it, but
you might have a look at:

http://www.microsoft.com/TechNet/win2000/nlbovw.asp?a=printable

This really is the first text from Microsoft that has impressed
me quite a bit. They have a very advanced approach to clustering
and load balancing. Some highlights (for those who don't want to read it):

o They take a slightly different architectural approach than LVS-DR.
  It looks very promising and I'd love to test it, but I can't,
  since I don't have a copy of W2K.
o They claim to have a real cluster with fully distributed software
architecture.
o Because of the different design they can achieve statistically
  well-balanced services even with the sticky option (see the sketch
  after this list). Example:
"When inspecting an arriving packet, all hosts simultaneously perform
a statistical mapping to quickly determine which host should handle
the packet. The mapping uses a randomization function that calculates
a host priority based on the client's IP address, port, and other state
information maintained to optimize load balance. The corresponding
host forwards the packet up the network stack to TCP/IP, and the other
cluster hosts discard it."
o They have some kind of SSL Termination and persistent cookie support.
o Interesting arguments such as: "This architecture maximizes throughput
by using the broadcast subnet to deliver incoming network traffic to
all cluster hosts and by eliminating the need to route incoming
packets to individual cluster hosts. Since filtering unwanted packets
is faster than routing packets (which involves receiving, examining,
rewriting, and resending), Network Load Balancing delivers higher
network throughput than dispatcher-based solutions. As network and
server speeds grow, its throughput also grows proportionally, thus
eliminating any dependency on a particular hardware routing implementation.
For example, Network Load Balancing has demonstrated 250 megabits per
second (Mbps) throughput on Gigabit networks."
o They have PPC, a persistency mask, different schedulers, failover,
  health checks, state transition table synchronisation (see
  Convergence), something like LVS-DR (roughly described under
  "Distribution of Cluster Traffic #1"), event logging, a special
  EtherType value [0x886F], and simple filtering.
o Oh, I forgot to tell you that you can only go up to 32 servers :)
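
Here is the minimal sketch referenced from the sticky-option bullet
above, showing how I imagine such a "statistical mapping" could work:
every host sees every packet, each computes a pseudo-random priority
from (client IP, client port, own host id), and only the host with
the highest priority passes the packet up the stack. The mixing
function and the host numbering are my own invention for
illustration, not Microsoft's actual algorithm:

/* Sketch of the "statistical mapping": all hosts compute the same
 * deterministic priorities, so no per-packet coordination is
 * needed. Ties are ignored for brevity. */
#include <stdint.h>
#include <stdio.h>

#define N_HOSTS 4

/* Toy mixing function (stand-in for the "randomization function"). */
static uint32_t mix(uint32_t a, uint32_t b, uint32_t c)
{
    uint32_t h = a * 2654435761u ^ b * 40503u ^ c * 2246822519u;
    h ^= h >> 15;
    h *= 2654435761u;
    return h ^ (h >> 13);
}

/* A host passes the packet up iff its priority is the maximum. */
static int host_accepts(uint32_t my_id, uint32_t saddr, uint16_t sport)
{
    uint32_t my_prio = mix(saddr, sport, my_id);
    for (uint32_t h = 0; h < N_HOSTS; h++)
        if (h != my_id && mix(saddr, sport, h) > my_prio)
            return 0;   /* another host outranks us: discard */
    return 1;           /* we win: hand the packet to TCP/IP */
}

int main(void)
{
    uint32_t client = 0x0a000001;                /* 10.0.0.1 */
    for (unsigned port = 1024; port < 1032; port++)
        for (uint32_t h = 0; h < N_HOSTS; h++)
            if (host_accepts(h, client, (uint16_t)port))
                printf("client port %u -> host %u\n", port, h);
    return 0;
}

The nice property is that every host evaluates the same function on
the same inputs, so exactly one host (ties aside) accepts each packet
without any per-packet negotiation, and per-client stickiness falls
out of simply leaving the port out of the hash.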

But hey, maybe he doesn't have a W2K farm :)
To wrap up and come back to your initial question: doesn't W2K have
something like the MS Loopback adapter? In the paper above they
mention something like it (the cluster adapter).

Best regards,
Roberto Nibali, ratz

--
mailto: `echo NrOatSz@tPacA.cMh | sed 's/[NOSPAM]//g'`
Re: How can I run Win 2000 as a real server in Direct Routing?
Hi,

From the implementation viewpoint, Windows Load Balancing Service
implements a local filter between the NIC driver and the TCP/IP stack.
There is a filtering function which maps incoming packets to a
cluster node based on the source IP address and port number, and
passes each packet to the upper layer on exactly one node. If some
nodes fail or new nodes are added, all the cluster nodes need to
negotiate a new filtering function. I guess that each node keeps
state for its established connections, so that even under a new
filtering function the packets destined for the local node can still
be passed to the upper layer; the new filtering function then only
applies to new connections (SYN packets). However, for persistent
services a new filtering function breaks all persistence, no matter
whether clients were bound to the surviving nodes or to the failed
ones. It affects all the nodes. I see that as the big shortcoming,
but the distributed filter architecture does avoid the dispatcher as
a single point of failure.

It is simple to write a filter, but it must be complicated to write
the convergence code (negotiating a new function).
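
To make the convergence problem concrete, here is a minimal
reconstruction of the scheme described above (my own sketch, not
WLBS code): a mod-N filtering function that is consulted only for
SYN packets, while established connections are accepted from a local
state table. As soon as the node set changes, the function changes,
and any mapping that is not protected by the connection table, e.g.
persistence by client IP, silently moves to another node:

/* Hash (source IP, source port) and take it modulo the number of
 * alive nodes to pick the single owner node for a connection. */
#include <stdint.h>
#include <stdio.h>

static uint32_t conn_hash(uint32_t saddr, uint16_t sport)
{
    return (saddr ^ (saddr >> 16) ^ sport) * 2654435761u;
}

/* Owner node under the current filtering function. */
static uint32_t owner(uint32_t saddr, uint16_t sport, uint32_t n_alive)
{
    return conn_hash(saddr, sport) % n_alive;
}

int main(void)
{
    uint32_t client = 0x0a000002;   /* 10.0.0.2 */
    uint16_t sport  = 40000;

    /* Before convergence: 8 nodes alive; after: one node failed. */
    uint32_t before = owner(client, sport, 8);
    uint32_t after  = owner(client, sport, 7);

    printf("owner with 8 nodes: %u, with 7 nodes: %u\n", before, after);

    /* A node keeps accepting packets for connections found in its
     * local table, so live TCP sessions survive the remap; but the
     * client's *next* connection follows the new function, which is
     * exactly why persistence breaks for all nodes at once. */
    if (before != after)
        printf("persistent client moves from node %u to node %u\n",
               before, after);
    return 0;
}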

I think maybe we can learn something from it and investigate a
mechanism to implement active-active load balancers.

Cheers,

Wensong


On Thu, 4 Jan 2001, ratz wrote:

>
> Finally someone brings it up. I didn't dare to mention it, but
> you might have a look at:
>
> http://www.microsoft.com/TechNet/win2000/nlbovw.asp?a=printable
>
> This really is the first text from Microsoft that has impressed
> me quite a bit. They have a very advanced approach to clustering
> and load balancing. Some highlights (for those who don't want to read it):
>
> o They take a slightly different architectural approach than LVS-DR.
>   It looks very promising and I'd love to test it, but I can't,
>   since I don't have a copy of W2K.
> o They claim to have a real cluster with fully distributed software
> architecture.
> o Because of the different design they can achieve statistically
>   well-balanced services even with the sticky option. Example:
> "When inspecting an arriving packet, all hosts simultaneously perform
> a statistical mapping to quickly determine which host should handle
> the packet. The mapping uses a randomization function that calculates
> a host priority based on the client's IP address, port, and other state
> information maintained to optimize load balance. The corresponding
> host forwards the packet up the network stack to TCP/IP, and the other
> cluster hosts discard it."
> o They have some kind of SSL Termination and persistent cookie support.
> o Interesting arguments such as: "This architecture maximizes throughput
> by using the broadcast subnet to deliver incoming network traffic to
> all cluster hosts and by eliminating the need to route incoming
> packets to individual cluster hosts. Since filtering unwanted packets
> is faster than routing packets (which involves receiving, examining,
> rewriting, and resending), Network Load Balancing delivers higher
> network throughput than dispatcher-based solutions. As network and
> server speeds grow, its throughput also grows proportionally, thus
> eliminating any dependency on a particular hardware routing implementation.
> For example, Network Load Balancing has demonstrated 250 megabits per
> second (Mbps) throughput on Gigabit networks."
> o They have PPC, a persistency mask, different schedulers, failover,
>   health checks, state transition table synchronisation (see
>   Convergence), something like LVS-DR (roughly described under
>   "Distribution of Cluster Traffic #1"), event logging, a special
>   EtherType value [0x886F], and simple filtering.
> o Oh, I forgot to tell you that you can only go up to 32 servers :)
>
> But hey, maybe he doesn't have a W2K farm :)
> To wrap up and come back to your initial question: doesn't W2K have
> something like the MS Loopback adapter? In the paper above they
> mention something like it (the cluster adapter).
>
> Best regards,
> Roberto Nibali, ratz
>
> --
> mailto: `echo NrOatSz@tPacA.cMh | sed 's/[NOSPAM]//g'`
Re: How can I run Win 2000 as a real server in Direct Routing?
Hi,

> o Interesting arguments such as:" This architecture maximizes throughput
> by using the broadcast subnet to deliver incoming network traffic to
> all cluster hosts and by eliminating the need to route incoming
> packets to individual cluster hosts. Since filtering unwanted packets
> is faster than routing packets (which involves receiving, examining,
> rewriting, and resending), Network Load Balancing delivers higher
> network throughput than dispatcher-based solutions. As network and
> server speeds grow, its throughput also grows proportionally, thus
> eliminating any dependency on a particular hardware routing
> implementation.
> For example, Network Load Balancing has demonstrated 250 megabits per
> second (Mbps) throughput on Gigabit networks."

This is very interesting. Some questions:

1. What happens if the clients share the same subnet as the cluster
farm? Will the clients still work fine under this massive broadcast
load, or do we always need to dedicate a subnet behind a router to
the farm? And how can we use it in an intranet environment, where a
small or medium-sized company typically uses a single subnet for the
whole company?

2. Although the reasoning sounds quite good, it seems a little
strange to me. By not examining the packets, the Network Load
Balancing hosts can achieve much better dispatching throughput, but
that is only true for the dispatching itself. Since the task of
discarding unwanted packets now moves to all hosts in the farm, each
one has to work harder, because it is bombarded by the broadcast
packets. Moreover, for a host to reject a packet, it seems that it
has to receive the packet, examine the header, and discard it, and
this happens most of the time. This does not seem to be a task that
the NIC can do automatically. Doesn't this slow down each host, since
it must handle a massive number of interrupts from the NIC all the
time? OK, I am not sure how the network subsystem works, so I would
like to get an expert opinion on this.

Again, I am not an expert, so I might be wrong. Please correct me.

Thanks,

Putchong
Re: How can I run Win 2000 as a real server in Direct Routing?
Hello,

On Thu, 4 Jan 2001, Wensong Zhang wrote:

> Hi,
>
> From the implementation viewpoint, Windows Load Balancing Service
> implements a local filter between the NIC driver and the TCP/IP stack.
> There is a filtering function which maps incoming packets to a
> cluster node based on the source IP address and port number, and
> passes each packet to the upper layer on exactly one node. If some
> nodes fail or new nodes are added, all the cluster nodes need to
> negotiate a new filtering function. I guess that each node keeps
> state for its established connections, so that even under a new
> filtering function the packets destined for the local node can still
> be passed to the upper layer; the new filtering function then only
> applies to new connections (SYN packets). However, for persistent
> services a new filtering function breaks all persistence, no matter
> whether clients were bound to the surviving nodes or to the failed
> ones. It affects all the nodes. I see that as the big shortcoming,
> but the distributed filter architecture does avoid the dispatcher as
> a single point of failure.
>
> It is simple to write a filter, but it must be complicated to write
> the convergence code (negotiating a new function).
>
> I think maybe we can learn something from it and investigate a
> mechanism to implement active-active load balancers.

Hm, interesting reading. I'm thinking about the load
percentages, and whether collisions or other factors could lead to
situations where two real servers reply to the same request.

I'm looking at my stats for the banner servers that serve only
static images over LVS/DR. Interestingly, I have the stats in
packets/sec, not bytes/sec. Can you believe it: the input packets
are 90% of the output packets, and we are not even talking about
bytes. So, if a web server receives 90 packets and sends 100 packets
in LVS/DR, and if there are 10 real servers, each web server in
WIN2K/NLB mode will receive 90*10 packets and send 100 packets, a
9:1 ratio, a picture very different from the usual assumption of
short web requests and long answers. I'm not sure the packet size
matters much for the handling. IMO, we don't even waste CPU cycles
on checksumming when forwarding the packets. On the real servers,
maybe NICs with hardware checksumming help the WIN2K/NLB mode not to
waste CPU cycles checksumming the (N-1)/N of the incoming packets,
i.e. the packets that will not be accepted locally.
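
Just to put these numbers in one place, a back-of-the-envelope check
(using the measured 90-in/100-out per-server ratio above and an
assumed farm of 10 real servers):

/* Per-server packet rates: in LVS/DR a real server only sees its
 * own share of the client traffic; in an NLB-style broadcast setup
 * it also receives the packets destined for all its peers. */
#include <stdio.h>

int main(void)
{
    const double in_pkts  = 90.0;   /* measured input, per server */
    const double out_pkts = 100.0;  /* measured output, per server */
    const int    n        = 10;     /* real servers in the farm */

    printf("LVS/DR: %.0f in / %.0f out = %.1f:1\n",
           in_pkts, out_pkts, in_pkts / out_pkts);
    printf("NLB   : %.0f in / %.0f out = %.1f:1\n",
           in_pkts * n, out_pkts, in_pkts * n / out_pkts);
    /* With 32 servers and roughly equal in/out counts, each server
     * would receive about 32 packets per packet it sends. */
    return 0;
}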

OK, now I'm looking at my other web servers, the ones that also
connect to the databases. Can you believe it: input packets are 98%
of the output. Again, all hosts are in an LVS/DR setup, but this is
not only LVS/DR traffic to/from the clients.

So, it seems all my real servers have roughly equal numbers of
incoming and outgoing packets. If I have 32 real servers, then for
every 32 packets received I would send only one output packet in
WIN2K/NLB mode. Oh, yes, there are full-duplex links too.

Guys, what do your stats show for the incoming and outgoing
packets on your real servers? And only for LVS/DR traffic, i.e.
static web for example, or traffic that includes packets to and from
the client only. Are my assumptions correct? Maybe for FTP the
picture will be different, i.e. a small request followed by long
data. But there are ACKs too; not every packet contains data. Still,
the incoming packets should be few compared to the output packets in
long FTP downloads, you know, delayed ACKs, etc.

> Cheers,
>
> Wensong


Regards

--
Julian Anastasov <ja@ssi.bg>
Re: How can I run Win 2000 as a real server in Direct Routing?
On Fri, 5 Jan 2001, Julian Anastasov wrote:

>
> Hello,
>
> On Thu, 4 Jan 2001, Wensong Zhang wrote:
>
> > Hi,
> >
> > From the implementation viewpoint, Windows Load Balancing Service
> > implements a local filter between the NIC driver and the TCP/IP stack.
> > There is a filtering function which maps incoming packets to a
> > cluster node based on the source IP address and port number, and
> > passes each packet to the upper layer on exactly one node. If some
> > nodes fail or new nodes are added, all the cluster nodes need to
> > negotiate a new filtering function. I guess that each node keeps
> > state for its established connections, so that even under a new
> > filtering function the packets destined for the local node can still
> > be passed to the upper layer; the new filtering function then only
> > applies to new connections (SYN packets). However, for persistent
> > services a new filtering function breaks all persistence, no matter
> > whether clients were bound to the surviving nodes or to the failed
> > ones. It affects all the nodes. I see that as the big shortcoming,
> > but the distributed filter architecture does avoid the dispatcher as
> > a single point of failure.
> >
> > It is simple to write a filter, but it must be complicated to write
> > the convergence code (negotiating a new function).
> >
> > I think maybe we can learn something from it and investigate a
> > mechanism to implement active-active load balancers.
>
> Hm, interesting reading. I'm thinking about the load
> percentages, and whether collisions or other factors could lead to
> situations where two real servers reply to the same request.
>
> I'm looking at my stats for the banner servers that serve only
> static images over LVS/DR. Interestingly, I have the stats in
> packets/sec, not bytes/sec. Can you believe it: the input packets
> are 90% of the output packets, and we are not even talking about
> bytes. So, if a web server receives 90 packets and sends 100 packets
> in LVS/DR, and if there are 10 real servers, each web server in
> WIN2K/NLB mode will receive 90*10 packets and send 100 packets, a
> 9:1 ratio, a picture very different from the usual assumption of
> short web requests and long answers. I'm not sure the packet size
> matters much for the handling. IMO, we don't even waste CPU cycles
> on checksumming when forwarding the packets. On the real servers,
> maybe NICs with hardware checksumming help the WIN2K/NLB mode not to
> waste CPU cycles checksumming the (N-1)/N of the incoming packets,
> i.e. the packets that will not be accepted locally.
>

Good point. Local filtering does some unnecessary checksumming on
(N-1)/N of a huge incoming traffic volume, so it requires that the
real servers have a good hardware configuration. With the dispatcher
method, we can optimize the load balancer with good hardware, such as
a 2-way box with Gigabit cards, while the real servers can still use
commodity hardware.

Thanks,

Wensong

> OK, now I'm looking at my other web servers, the ones that also
> connect to the databases. Can you believe it: input packets are 98%
> of the output. Again, all hosts are in an LVS/DR setup, but this is
> not only LVS/DR traffic to/from the clients.
>
> So, it seems all my real servers have roughly equal numbers of
> incoming and outgoing packets. If I have 32 real servers, then for
> every 32 packets received I would send only one output packet in
> WIN2K/NLB mode. Oh, yes, there are full-duplex links too.
>
> Guys, what do your stats show for the incoming and outgoing
> packets on your real servers? And only for LVS/DR traffic, i.e.
> static web for example, or traffic that includes packets to and from
> the client only. Are my assumptions correct? Maybe for FTP the
> picture will be different, i.e. a small request followed by long
> data. But there are ACKs too; not every packet contains data. Still,
> the incoming packets should be few compared to the output packets in
> long FTP downloads, you know, delayed ACKs, etc.
>
> > Cheers,
> >
> > Wensong
>
>
> Regards
>
> --
> Julian Anastasov <ja@ssi.bg>