Mailing List Archive

[mod_backhand-users] backhand & load balancer
Hi,

We're getting a coyote point systems E350 load balancer for our cluster.

Reading the docs, it has all kinds of balancing and failover mechanisms
using server agents, connection response times, active weighting
distribution etc.

What I'm wondering is, does mod_backhand help much behind a load
balancer like this? Does it help or hinder things? Is anyone familiar
with the coyote point LB, and used it with mod_backhand?

TIA
Monte
[mod_backhand-users] backhand & load balancer [ In reply to ]
On Monday, September 17, 2001, at 12:38 PM, Monte Ohrt wrote:
> We're getting a coyote point systems E350 load balancer for our cluster.
>
> Reading the docs, it has all kinds of balancing and failover mechanisms
> using server agents, connection response times, active weighting
> distribution etc.
>
> What I'm wondering is, does mod_backhand help much behind a load
> balancer like this? Does it help or hinder things? Is anyone familiar
> with the coyote point LB, and used it with mod_backhand?

Almost all of the load balancers support subsets or supersets of these
features. I have used mod_backhand behind both BIG/ip and Arrowpoint in
production. I have also tested mod_backhand behind ServerIrons.

There are certain things that none of those load balancers can do.
One is sticking sessions to a web server regardless of the protocol
(HTTP, HTTPS). Another is complicated content-to-resource association
(balancing based on URL paths and cookies in a non-trivial way).

The important thing to remember is that mod_backhand induces close to
zero overhead when it is not in use. So, if you use byLoad <bias> with
a decent bias, mod_backhand will only trigger if things get
"out of whack."

Hardware load balancers are very useful and most people will find that
they will not need something like mod_backhand if they use one.
However, as web applications and overall site architectures become more
and more complex, mod_backhand can provide some otherwise unavailable
features.

--
Theo Schlossnagle
1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7
[mod_backhand-users] backhand & load balancer [ In reply to ]
Theo Schlossnagle wrote:
>
> On Monday, September 17, 2001, at 12:38 PM, Monte Ohrt wrote:
> > We're getting a coyote point systems E350 load balancer for our cluster.
> >
> > Reading the docs, it has all kinds of balancing and failover mechanisms
> > using server agents, connection response times, active weighting
> > distribution etc.
> >
> > What I'm wondering is, does mod_backhand help much behind a load
> > balancer like this? Does it help or hinder things? Is anyone familiar
> > with the coyote point LB, and used it with mod_backhand?
>
> Almost all of the load balancers support subset or supersets of these
> features. I have used mod_backhand behind both BIG/ip and Arrowpoint in
> production. I have also tested mod_backhand behind server irons.
>
> There are certain things that all of those load balancers cannot do.
> One is stick sessions to a web servers irregardless of the protocol
> (HTTP, HTTPS). Another is complicated content to resource association
> (balancing based on URL paths and cookie in a non-trivial way).

I think coyote point accomplishes this by making "sticky clusters", so
for instance one cluster is www.domain.com:http and one is
www.domain.com:https, then making a session stick no matter which
cluster they access. I think the hostname must be the same though.

>
> The important thing to remember is that mod_backhand induces close to
> zero overhead when it is not in use. So, if you are to use byLoad
> <bias> with a decent bias, mod_backhand will only trigger if things get
> "out of whack."
>
> Hardware load balancers are very useful and most people will find that
> they will not need something like mod_backhand if they use one.
> However, as web applications and overall site architectures become more
> and more complex, mod_backhand can provide some otherwise unavailable
> features.

Thanks for the info!

Monte
[mod_backhand-users] backhand & load balancer [ In reply to ]
On Monday, September 17, 2001, at 02:30 PM, Monte Ohrt wrote:
> Theo Schlossnagle wrote:
>
> I think coyote point accomplishes this by making "sticky clusters", so
> for instance one cluster is www.domain.com:http and one is
> www.domain.com:https, then making a session stick no matter which
> cluster they access. I think the hostname must be the same though.

This isn't what I was describing. That is easy to do. I am saying that
if you visit a site over HTTPS and hit machine C, when you visit the
site again over HTTP you must again be delivered to machine C. If your
application has this requirement, there is no existing hardware load
balancing solution that solves this, AFAIK.

--
Theo Schlossnagle
1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7
[mod_backhand-users] backhand & load balancer [ In reply to ]
I thought that was what I was describing? See page 46 of the user manual
about "inter-cluster stickiness."

http://www.coyotepoint.com/manual.pdf



Theo Schlossnagle wrote:
>
> On Monday, September 17, 2001, at 02:30 PM, Monte Ohrt wrote:
> > Theo Schlossnagle wrote:
> >
> > I think coyote point accomplishes this by making "sticky clusters", so
> > for instance one cluster is www.domain.com:http and one is
> > www.domain.com:https, then making a session stick no matter which
> > cluster they access. I think the hostname must be the same though.
>
> This isn't what I was describing. That is easy to do. I am saying that
> if you visit a site over HTTPS and hit machine C, when you visit the
> site again over HTTP you must again be delivered to machine C. If you
> application has this requirement, there is no existing hardware load
> balancing solution that solves this AFAIK.
>
> --
> Theo Schlossnagle
> 1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
> 2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
On Monday, September 17, 2001, at 03:00 PM, Monte Ohrt wrote:
> I thought that is what I was describing? See page 46 of the user manual
> about "inter-cluster stickyness."
>
> http://www.coyotepoint.com/manual.pdf

Very interesting. They say they do it, but they don't say how. I am
interested in how they accomplish this. If anyone knows, please post.

I have an eerie feeling that they stick the end user based on the IP
they come from. This method is fundamentally broken. AOL users, and
users of other ISPs that do aggressive transparent caching, can come
from different IP addresses for the images on the same page! If this is
the method that is used, I suggest never using it or you will see all
sorts of problems.

--
Theo Schlossnagle
1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7
[mod_backhand-users] backhand & load balancer [ In reply to ]
I believe you are correct, they do use the IP number as the basis of
stickiness. But they have reasons to believe that this shouldn't be a
problem.

See "sticky network aggregation", page 6.
See "sticky network aggregation", page 56.


Basically, they say that even though all AOL traffic gets routed to one
server, the other visitors get spread across the other servers anyway,
so the load does get evenly balanced in the big picture. I suppose a
potential problem would be if so many people come from AOL that they
saturate the resources on that one server.

They also have the ability to put a subnet mask on a sticky connection
for ISPs that use multiple outgoing proxies.


Theo Schlossnagle wrote:
>
> On Monday, September 17, 2001, at 03:00 PM, Monte Ohrt wrote:
> > I thought that is what I was describing? See page 46 of the user manual
> > about "inter-cluster stickyness."
> >
> > http://www.coyotepoint.com/manual.pdf
>
> Very interesting. They say they do it, but they don't say how. I am
> interested in how they accomplish this. If anyone knows, please post.
>
> I have an eerie feeling that they stick the end user based on the IP
> they come from. This method is fundamentally broken. For AOL users and
> other users of ISPs who do aggressive transparent caching can come from
> different IP addresses for the image on the same page! If this is the
> method that is used, I suggest never using it or you will see all sorts
> of problems.
>
> --
> Theo Schlossnagle
> 1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
> 2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
On Monday, September 17, 2001, at 03:34 PM, Monte Ohrt wrote:
> I believe you are correct, they do use the IP number as the basis of
> stickyness. But, they have reasons to believe that this shouldn't be a
> problem.
>
> See "sticky network aggregation", page 6.

Thanks for the pointer. It does appear that they use the network
portion... that is evil. I will assume that they assume that every
network is a class C. Two issues:

o proxies are not always on the same network (many are multihomed).
This breaks that concept.

o For those ISPs that have several proxies on the same network: let's
think about this... Why would they have more than one proxy on the same
/24? Capacity and fault-tolerance. If an ISP has so many users that it
needs that capacity and fault-tolerance, do you really want them all
stuck to the same server? -- Absolutely not.

Each user should be stuck to a server based on available resources, not
IP space.
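Concretely, network-aggregated stickiness boils down to something like
this (an illustrative sketch in C; the mask, hash, addresses and server
count are all made up -- the actual Coyote algorithm isn't documented):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    /* Pick a server by hashing only the network portion of the client
     * address.  Every client behind the same /24 -- e.g. a whole bank
     * of ISP proxies -- lands on the same back-end server. */
    static int pick_server(const char *client_ip, uint32_t mask, int nservers)
    {
        struct in_addr a;
        if (inet_aton(client_ip, &a) == 0)
            return -1;                          /* bad address */
        uint32_t net = ntohl(a.s_addr) & mask;  /* 0xFFFFFF00 == class C */
        return (int)(net % (uint32_t)nservers);
    }

    int main(void)
    {
        /* Two different proxies on the same /24 stick to the same server, */
        printf("%d\n", pick_server("205.188.192.10", 0xFFFFFF00, 4));
        printf("%d\n", pick_server("205.188.192.77", 0xFFFFFF00, 4));
        /* while a multihomed proxy on another network does not. */
        printf("%d\n", pick_server("172.16.4.9", 0xFFFFFF00, 4));
        return 0;
    }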

Actually, my opinion is that people should never use server stickiness.
That is last year's technology. The session state that backs your web
applications should be distributed across your cluster so that you can
truly withstand a failure. However, some applications still require
this antiquated feature, and the methodology that the Coyote (and every
other hardware/blackbox solution) employs is subpar.

Don't get me wrong, I have never used them but I hear great things about
the Coyote boxes. I do experience the same problems with ServerIrons,
BIG/ips, Arrowpoints, ACE Directors and the like. All of these have
some form of stickiness that is similar to that described in the Coyote
docs. They all simply have limitations that can be compensated for by
the _addition_ of mod_backhand at no cost.
[mod_backhand-users] backhand & load balancer [ In reply to ]
Our applications handle session data across the cluster, so we don't
even need any of these sticky session features. But I thought it was an
interesting read anyway. I suppose this is a good solution for getting
legacy code ported into a cluster environment without a lot of code
rewriting.

Theo Schlossnagle wrote:
>
> On Monday, September 17, 2001, at 03:34 PM, Monte Ohrt wrote:
> > I believe you are correct, they do use the IP number as the basis of
> > stickyness. But, they have reasons to believe that this shouldn't be a
> > problem.
> >
> > See "sticky network aggregation", page 6.
>
> Thanks for the pointer. It does appear that they use the network
> portion... that is evil. I will assume that they assume that every
> network is a class C. Two issues:
>
> o proxies are not always on the same network (many are multihomed).
> This breaks that concept.
>
> o For those ISPs that have several proxies on the same network. Let's
> think about this... Why would they have more than one proxy on the same
> /24? Capacity and fault-tolerance. If you have so many users (that
> they need the capacity and faul-tolerance) do you really want them all
> stuck to the same server? -- Absolutely not.
>
> Each user should be stuck to a server based on available resources, not
> IP space.
>
> Actually, my opinion is that people should never use server stickiness.
> That is last years technology. The session state that backs your web
> applications should be distributed across your cluster so that you can
> truly withstand a failure. However, some applications still require
> this antiquated feature, and the methodolgy that the Coyote employees
> (and every other hardware/blackbox solution) is subpar.
>
> Don't get me wrong, I have never used them but I hear great things about
> the Coyote boxes. I do experience the same problems with ServerIrons,
> BIG/ips, Arrowpoints, ACE Directors and the like. All of these have
> some form of stickiness that is similar to that described in the Coyote
> docs. They all simply have limitations that can be compensated for by
> the _addition_ of mod_backhand at no cost.

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
Theo Schlossnagle wrote:
>
> On Monday, September 17, 2001, at 03:34 PM, Monte Ohrt wrote:
> > I believe you are correct, they do use the IP number as the basis of
> > stickyness. But, they have reasons to believe that this shouldn't be a
> > problem.
> >
> > See "sticky network aggregation", page 6.
>
> Thanks for the pointer. It does appear that they use the network
> portion... that is evil. I will assume that they assume that every
> network is a class C. Two issues:


It looks like you can set the sticky session mask to either a class C, B
or A. Quite rudimentary and still has its own problems.


>
> o proxies are not always on the same network (many are multihomed).
> This breaks that concept.
>
> o For those ISPs that have several proxies on the same network. Let's
> think about this... Why would they have more than one proxy on the same
> /24? Capacity and fault-tolerance. If you have so many users (that
> they need the capacity and faul-tolerance) do you really want them all
> stuck to the same server? -- Absolutely not.
>
> Each user should be stuck to a server based on available resources, not
> IP space.
>
> Actually, my opinion is that people should never use server stickiness.
> That is last years technology. The session state that backs your web
> applications should be distributed across your cluster so that you can
> truly withstand a failure. However, some applications still require
> this antiquated feature, and the methodolgy that the Coyote employees
> (and every other hardware/blackbox solution) is subpar.
>
> Don't get me wrong, I have never used them but I hear great things about
> the Coyote boxes. I do experience the same problems with ServerIrons,
> BIG/ips, Arrowpoints, ACE Directors and the like. All of these have
> some form of stickiness that is similar to that described in the Coyote
> docs. They all simply have limitations that can be compensated for by
> the _addition_ of mod_backhand at no cost.

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
> Actually, my opinion is that people should never use server stickiness.
> That is last years technology. The session state that backs your web
> applications should be distributed across your cluster so that you can
> truly withstand a failure.

It's a good idea, but is it really possible? You can use a tool like Spread
for distributing the data, but the distribution of changes may not be fast
enough to handle requests in quick succession.

The common approach to state management on a cluster is to use a central
database, so that any machine can handle any request. However, when people
try to scale this up they often end up needing to do a write-through cache
of data from the database, and if that data is user-specific and updateable
then the user needs to come back to the same cache and thus the same
machine.

At the moment, it looks to me like sticky sessions are still the only game
in town for large sites with intensive database activity, but I'm keeping an
eye out for alternatives.

By the way, I only have experience with big/ip, but I believe that they can
do sticky sessions without regard for http/https by using cookies. Not a
good solution if you hate cookies, but it works for environments where
cookies are a given.

- Perrin
[mod_backhand-users] backhand & load balancer [ In reply to ]
Just to throw out a question (probably an FAQ), how about using an
NFS-mounted file system for session data? I don't mean the traditional
NFS, I mean the newer versions. I've heard good things about the
performance and reliability of NFS v3 on Solaris, for example. I've
also heard of Coda, but haven't had much experience with it.

With session data, there shouldn't be a file-locking contention issue,
since session data should be unique per visitor.

Perrin Harkins wrote:
>
> > Actually, my opinion is that people should never use server stickiness.
> > That is last years technology. The session state that backs your web
> > applications should be distributed across your cluster so that you can
> > truly withstand a failure.
>
> It's a good idea, but is it really possible? You can use a tool like Spread
> for distributing the data, but the distribution of changes may not be fast
> enough to handle requests in quick succession.
>
> The common approach to state management on a cluster is to use a central
> database, so that any machine can handle any request. However, when people
> try to scale this up they often end up needing to do a write-through cache
> of data from the database, and if that data is user-specific and updateable
> then the user needs to come back to the same cache and thus the same
> machine.
>
> At the moment, it looks to me like sticky sessions are still the only game
> in town for large sites with intensive database activity, but I'm keeping an
> eye out for alternatives.
>
> By the way, I only have experience with big/ip, but I believe that they can
> do sticky sessions without regard for http/https by using cookies. Not a
> good solution if you hate cookies, but it works for environments where
> cookies are a given.
>
> - Perrin

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
> Just to through out a question (probably an FAQ), how about using an NFS
> mounted file system for session data?

In all the tests I've done, databases have always scaled better. I've tried
this on NetApp servers, and they just fall apart when you hit them with too
many simultaneous updates. Databases do too eventually, but not as quickly.

> With session data, there shouldn't be a file locking retension issue
> since session data should be unique per visitor.

But if you aren't using sticky sessions, the same user could be accessing
this data from different machines...

- Perrin
[mod_backhand-users] backhand & load balancer [ In reply to ]
Perrin Harkins wrote:
>
> > Just to through out a question (probably an FAQ), how about using an NFS
> > mounted file system for session data?
>
> In all the tests I've done, databases have always scaled better. I've tried
> this on NetApp servers, and they just fall apart when you hit them with too
> many simultaneous updates. Databases do too eventually, but not as quickly.
>
> > With session data, there shouldn't be a file locking retension issue
> > since session data should be unique per visitor.
>
> But if you aren't using sticky sessions, the same user could be accessing
> this data from different machines...

But not at the same time, right? Thus the reason to mount it on all the
cluster servers.

>
> - Perrin

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
> > > With session data, there shouldn't be a file locking retension issue
> > > since session data should be unique per visitor.
> >
> > But if you aren't using sticky sessions, the same user could be accessing
> > this data from different machines...
>
> But not at the same time right? Thus the reason to mount on all the
> cluster servers.

Imagine a situation like a site with frames (ugh) or a user who hits reload
quickly or accidentally submits twice. These requests could be processed
simultaneously on different machines and both need to update the same
record.
- Perrin
[mod_backhand-users] backhand & load balancer [ In reply to ]
OK, so lock contention _can_ happen, but only on a few requests at any
given moment, and this shouldn't hurt anything. My concern would be
lock contention with potentially hundreds of connections (like some
sort of shared session file), which I'm sure NFS (and databases, for
that matter) could have serious performance problems with.

Perrin Harkins wrote:
>
> > > > With session data, there shouldn't be a file locking retension issue
> > > > since session data should be unique per visitor.
> > >
> > > But if you aren't using sticky sessions, the same user could be accessing
> > > this data from different machines...
> >
> > But not at the same time right? Thus the reason to mount on all the
> > cluster servers.
>
> Imagine a situation like a site with frames (ugh) or a user who hits reload
> quickly or accidentally submits twice. These requests could be processed
> simultaneously on different machines and both need to update the same
> record.
> - Perrin

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
> > Just to through out a question (probably an FAQ), how about using an NFS
> > mounted file system for session data?
>
> In all the tests I've done, databases have always scaled better. I've tried
> this on NetApp servers, and they just fall apart when you hit them with too
> many simultaneous updates. Databases do too eventually, but not as quickly.

(subliminal message)
Check out spread, kicks the snot out of databases:

http://www.spread.org/
(/subliminal message)

-sc

--
Sean Chittenden
[mod_backhand-users] backhand & load balancer [ In reply to ]
Are there any working examples of using Spread as a distributed file
system? We use it for broadcasting Apache log data in a cluster to a
log server, but nothing beyond that. I didn't see any examples on the
web site. The web site also mentions using Spread for database
replication. How would you, for example, use this to replicate MySQL
database information? Build completely from scratch with the C API?

TIA
Monte

Sean Chittenden wrote:
>
> > > Just to through out a question (probably an FAQ), how about using an NFS
> > > mounted file system for session data?
> >
> > In all the tests I've done, databases have always scaled better. I've tried
> > this on NetApp servers, and they just fall apart when you hit them with too
> > many simultaneous updates. Databases do too eventually, but not as quickly.
>
> (subliminal message)
> Check out spread, kicks the snot out of databases:
>
> http://www.spread.org/
> (/subliminal message)
>
> -sc
>
> --
> Sean Chittenden

--
Monte Ohrt <monte@ispi.net>
http://www.ispi.net/
[mod_backhand-users] backhand & load balancer [ In reply to ]
> Are there any working examples using spread as a distributed file
> system? We use it for broadcasting apache log data in a cluster to a log
> server, but nothing beyond that. I didn't see any examples on the web
> site. The web site also mentions using spread for database replication.
> How would you, for example, use this to replicate mysql database
> information? Build completely from scratch with the C API?

Hmm... I wouldn't use MySQL. ;) I'd wait for Postgres 7.2 (or 7.3,
can't remember which now), which will have multi-master database
replication support.

What I've done in the past, however, is mimic mod_log_spread with
readers and writers, then had a polling mechanism for collecting state
information from the readers. Check out Ruby and rb_spread. Perl works
too, but I've put that language on my shit list recently and prefer the
former. HTH. -sc

PS: Eventually I want to write an open-source version of that in Ruby
and release it as a part of ruby-session... if there's sufficient
interest, I'll spend more time working on that than on the other
projects I've got in the air.

--
Sean Chittenden
[mod_backhand-users] backhand & load balancer [ In reply to ]
Sean Chittenden wrote:
> Check out spread, kicks the snot out of databases:

I mentioned Spread, and my concern about it. It works fine for static
data, but I'm not confident that it will replicate data fast enough to
handle updates to data that is accessed by multiple requests in rapid
succession, i.e. a user changes data on machine A and then wants to use
it on machine B immediately afterward. There is a potential for
introducing race conditions which are not there when using sticky
sessions to keep a user on one machine. Maybe some kind of distributed
locking mechanism could fix this problem, or maybe Spread is so fast
that this would only be a concern for sites that have to be very careful
about data integrity, i.e. banks as opposed to discussion boards.
- Perrin
[mod_backhand-users] backhand & load balancer [ In reply to ]
On Monday, September 17, 2001, at 10:19 PM, Perrin Harkins wrote:
> Sean Chittenden wrote:
>> Check out spread, kicks the snot out of databases:
>
> I mentioned Spread, and my concern about it. It works fine for static
> data, but I'm not confident that it will replicate data fast enough to
> handle updates to data that is accessed by multiple requests in rapid
> succession, i.e. a user changes data on machine A and then wants to use
> it on machine B immediately afterward. There is a potential for
> introducing race conditions which are not there when using sticky

I don't know if I agree. The argument about frames in a web page
doesn't hold. The reason is that you wouldn't have a deterministic
outcome even if it was all handled on one machine. The user can request
the frames all at the same time, and chance plays a role in which gets
serviced first.

> sessions to keep a user on one machine. Maybe some kind of distributed
> locking mechanism could fix this problem, or maybe Spread is so fast
> that this would only be a concern for sites that have to be very careful
> about data integrity, i.e. banks as opposed to discussion boards.

I think this type of integrity is important whether you are a bank or a
discussion forum.

Distributed locking with Spread is quite easy if you have already
exerted the effort to handle remedial content replication with AGREED
messages. However, if a machine dies or gets "stuck" (app crashes,
etc.) you need to be careful to drop or time out the lock, respectively.

But, I think that locks are unnecessary.

If you design your data store as something more intelligent than simple
put/retrieve, then it is possible to do some cool things. Let's say one
of the operations on your data store (be it session state or something
else) is: find element A in the data store and decrement it, then find
element B and increment it. Assume, to keep this interesting, you need
to make this atomic. One option is to acquire a lock, perform both
actions and then release the lock. This is a clean and flexible way to
do it. But, a pain in the ass.

Instead, use a concept like stored procedures and create an "action" on
the data store that takes A and B as arguments and performs this
operation. So, only one AGREED message (foo(A,B)) is passed through
Spread and the action happens IN the data store. Locks are not
necessary and the speed will be frightening :-)
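For a sense of what that looks like with the Spread C API, here is a
minimal sketch (the group name, the one-line "transfer A B 1" message
format, and the apply step are invented for illustration; real code
needs error handling and the membership handling mentioned below):

    #include <stdio.h>
    #include <string.h>
    #include <sp.h>               /* Spread toolkit, http://www.spread.org/ */

    #define STATE_GROUP "session-state"

    int main(void)
    {
        mailbox mbox;
        char    priv[MAX_GROUP_NAME];
        char    sender[MAX_GROUP_NAME], groups[16][MAX_GROUP_NAME];
        char    msg[1024];
        service svc;
        int16   mtype;
        int     ngroups, endian;

        /* Join the replication group on the local Spread daemon. */
        if (SP_connect("4803@localhost", "node1", 0, 1, &mbox, priv)
                != ACCEPT_SESSION)
            return 1;
        SP_join(mbox, STATE_GROUP);

        /* The "stored procedure" call: one small, totally-ordered message. */
        const char *action = "transfer A B 1";
        SP_multicast(mbox, AGREED_MESS, STATE_GROUP, 0,
                     (int)strlen(action) + 1, action);

        /* Every member sees the same AGREED order, so each local data
         * store can apply the action deterministically -- no locks. */
        for (;;) {
            int len = SP_receive(mbox, &svc, sender, 16, &ngroups, groups,
                                 &mtype, &endian, (int)sizeof(msg), msg);
            if (len < 0)
                break;
            if (Is_regular_mess(svc))
                printf("apply in order: %s\n", msg);
            /* membership changes (joins, partitions, merges) arrive here too */
        }
        SP_disconnect(mbox);
        return 0;
    }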

This is an excellent approach if you have a firm grasp of the complete
set of operations you will be performing on your data, and I do not
think that requirement is so silly considering many of today's OLTP
systems. It is like writing a "data store procedure" instead of SQL
transactions. I would wager most would be fairly simple (as SQL is
often overkill for most of these transactions anyway).

Using this, Spread can pass well over 1000 AGREED messages per second if
they are small (as these would definitely be).

This doesn't tackle network partitions, network merges or fresh arrivals
(joins). That is a slightly more complicated issue :-) Nevertheless it
is an obstacle that can be tackled as well.

--
Theo Schlossnagle
1024D/82844984/95FD 30F1 489E 4613 F22E 491A 7E88 364C 8284 4984
2047R/33131B65/71 F7 95 64 49 76 5D BA 3D 90 B9 9F BE 27 24 E7