Hi,
[Copied to linux-ha-dev as this looks like a private email turning
into a discussion worth sharing...]
On Mon, 31 Jan 2000 13:07:25 -0500, Keith Barrett <kbarrett@redhat.com>
said:
> I've had some private communications with Stephen, and his feeling
> is that going straight to socket calls is as much of a standard
> messaging layer as he wants in the communication system.
We need it. Sockets are what existing applications use. The
socket-related Unix event APIs are what all existing application event
loops are built around. Sockets are what all existing support
libraries, such as xdr/rpc, run on top of. As such, I'd need an
overwhelming reason not to use that as the primary communications
architecture for clustering.
> I feel we gain a great deal of portability and flexibility by
> providing our own messaging API layer -- creating independence from
> protocols and providing an API for connecting other platforms (even if
> initially, this API just rerouted the argument list to a socket
> call).
Umm, sockets *are* the Unix transport-independent messaging layer. What
I've already suggested is that we have a name service which lets us open
or connect to a socket by the cluster node ID, rather than by address.
That already gives us full protocol independence. We can run over IP,
IPX, VIA or whatever if we have that in place, and we don't need to
throw away the socket infrastructure to get it.
> I also know that even without clustering, there is a tremendous
> opportunity to provide GPL Message Queuing services in Linux (which
> will give application programmers a lot of power, and increase Linux
> integration with other platforms).
I agree, but I just don't think that we want to build it as a core
cluster component on which the functioning cluster services will
depend. Rather it should be something we can offer to applications
later.
> Messaging basically turns the network into a local bus. For example,
> rather than the cluster daemons having to handle their own
> cross-connections, and notifications of loss of connectivity, they
> would simply connect to the messaging system, declare that they want
> clustering messages, and all the connections and notifications are
> handled elsewhere.
Cross-connection management is a service which we will want for the core
clustering component. I'd definitely support building an infrastructure
for that on top of the socket transport layers. Loss of connectivity is
something which we already need to hide in that layer, as any cluster
transition will necessarily force all cluster services to reset their
peer-to-peer communications at the very least, and in some cases
reestablish them. Doing that in a common manner is already something I
expect the API-lib bit of the core software to perform, and a
message-queue type of architecture may be appropriate.
Doing protocol transparency in the same layer, however, is probably just
daft, as Unix already provides protocol transparency (it's called the
BSD socket API).
--Stephen