Mailing List Archive

(no subject)
Aaron Stone writes:

> An interesting front end is SQL Relay, http://firstworks.com/
>
> My thinking on the database redundancy issue is a compromise with the
> full-failover and the write-master+read-slaves ideas; There's a main
> server with read/write (the master) and a slave with read only.

Geesh, why does everyone want to drop one of the servers from normal
operation into read-only mode? I personally can't accept that. However,
I'm working further on this issue; for now I have a simple algorithm which
solves the unique-id issue, negotiates the backend grouping and makes the
structure more generic, since obviously this can't be done at the db layer,
thanks to the primitive[1] replication in MySQL, which doesn't do any
merging (none that I'm aware of) and gets so scared when it meets a few ids
with the same values that it stops the whole replication process, which
IMHO is absolutely unacceptable. The problems in PgSQL are a bit different,
since it doesn't have native replication ready for production use, but it's
in development and doing quite well according to my research.
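For what it's worth, the usual application-layer workaround for colliding
ids across two writable servers is to hand each server a disjoint,
interleaved id sequence, so concurrent inserts can never produce the same
value. This is only a sketch of that idea (the names and layout are mine,
not dbmail's actual algorithm):

```c
#include <stdint.h>

/* Hypothetical sketch: each server hands out ids from an interleaved
 * sequence (server 0 of 2 gives 0,2,4,...; server 1 gives 1,3,5,...),
 * so two masters can insert concurrently without ever colliding.
 * server_index and n_servers are assumed configuration values. */
typedef struct {
    uint64_t next;   /* next id this server will hand out */
    uint64_t step;   /* total number of servers in the group */
} id_source_t;

void id_source_init(id_source_t *s, uint64_t server_index, uint64_t n_servers)
{
    s->next = server_index;   /* 0-based offset keeps the ranges disjoint */
    s->step = n_servers;
}

uint64_t id_source_next(id_source_t *s)
{
    uint64_t id = s->next;
    s->next += s->step;       /* skip over the other servers' slots */
    return id;
}
```

Adding a server later means re-negotiating the step for the whole group,
which is exactly the kind of "backend grouping" negotiation mentioned above.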
Now we have an issue which is obviously called 'data sharing', since there
is _a_ difference between clustering db servers and having a backup or
redundancy of any kind (that was the idea from the beginning). If we're
doing clustering, it is not a single issue; it will create more sub-issues
at its own layer. Redundancy is quite different. The simplest form is 'if
server1 is down, server2 takes over and continues operating without
downtime', and personally that's quite enough for me, since there won't be
any downtime. But if we add more servers we need a more sophisticated and
complex approach than the first one: we need consistency, and consistency
itself is a vast topic in terms of communication and data sharing[2]. We
know what might and might not happen, but assuming that something *is*
happening is not the right way to build something generic and therefore
portable, because the false positives on the network simply grow: more
members, more participants, etc.

What I want to say is something I've already said: it's a dbmail problem,
because we cannot force the db developers to go where we want them to, and
since that won't happen in the next few months or whatever stretch of time,
I prefer solving the problem at the dbmail layer.
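The connect-then-fall-back idea from the patch quoted further down can be
sketched in a self-contained way like this (the function-pointer
indirection and all names here are illustrative, not the actual dbmail
API):

```c
#include <stdio.h>

/* Illustrative type only -- a connect attempt returns 0 on success,
 * non-zero on failure. */
typedef int (*db_connect_fn)(void);

/* Always try the primary first on every attempt, so we move back to db1
 * as soon as it recovers; fall back to db2 only when db1 is unreachable,
 * and report an error only when both are down. */
int db_connect_with_fallback(db_connect_fn connect_db1,
                             db_connect_fn connect_db2)
{
    if (connect_db1() == 0)
        return 0;
    fprintf(stderr, "primary db unreachable, trying fallback\n");
    if (connect_db2() == 0)
        return 0;
    fprintf(stderr, "fallback db unreachable as well\n");
    return -1;
}

/* Example stubs simulating an unreachable db and a healthy one. */
static int conn_fail(void) { return -1; }
static int conn_ok(void)   { return 0; }
```

The point of retrying the primary on every attempt is that the fallback
stays a temporary detour rather than silently becoming the new master.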

> The primary and secondary MX idea is a very good one, and when the
> primary
> goes down, the secondary will receive the mail but not be allowed to
> insert it.
> Incoming messages will queue up between postfix and dbmail
> (must be tuned not to attempt immediate retries, though) and once the
> master is alive, will be delivered. In the mean time, everyone can still
> read their mail from the slave (but no flagging, deleting or moving).

That was my idea in the beginning, but I want both of my databases to be
able to share, and therefore modify, the data whenever they want, in a way
that won't cause any collisions. However, we do have the power to minimize
the possibility of collisions, and I suppose that's the first thing we
should do before everything fails. Somehow I personally don't accept
downtime versus a wrongly inserted message[2].

I'm happy with Jose's example about the whole thing, but he's taking it
too far into the future, because this is the first step we take on this
road, and it's not the last; it will evolve into something more
sophisticated in terms of consistency, redundancy and, in general,
distributed systems, grids, blah blah.

[1] primitive in terms of no grouping, no failover, no decent multi-master
replication, nor any advanced data-merging abilities.
[2] client1 gets client2's message; obviously there is a workaround, since
we can control how and where the message is inserted. That's not an issue
in my design, though, nor are the random subscribers we might get in
Jose's example.

I'll continue to work on it; if anyone is interested in any way, ask.

cheers,
-lou

>
>
> On Thu, 27 Mar 2003, Roel Rozendaal - IC&S wrote:
>
>> Hi lou,
>>
>> The problem here is that database consistency is not guaranteed -
>> the databases are not synchronized, so behaviour seems pretty undefined
>> when, for example, the imap daemon connects to another database in the
>> middle of a session. The unique-id's and message_idnr's are no longer
>> unique, nor will the message_idnr guarantee the correct order of
>> delivery; some messages/folders will suddenly no longer be available
>> when a system fails, and others again will no longer be available
>> once the first system is up again.
>>
>> We are still looking for some good replication functionality, but it
>> seems that the logic for such a failover system should be a database
>> issue and not a dbmail one - the ultimate system would allow dbmail to
>> connect to some front-end (preferably local, so network failure is
>> shielded from dbmail) SQL interface which would implement all the
>> failover functionality we desire: different groups of replicating
>> clusters spread out over the world :)
>>
>> regards roel
>>
>>
>> lou wrote on Wednesday, 26 Mar 2003 at 20:57 (Europe/Amsterdam):
>>
>> > Ello group{s},
>> >
>> > some time ago I mentioned something about having a fallback database
>> > in case the first one doesn't respond. I found it really useful in
>> > the following scenario.
>> > I have domain.com and two MX records for it, mx1.domain.com (5) and
>> > mx2.domain.com (10), and I'm using dbmail. Say the db on mx1 is gone;
>> > what happens? mx2 won't help me. But with this patch, if the dbmail
>> > service on mx1 can't connect to its primary db, it'll connect to the
>> > secondary at mx2, where db1 and db2 are kept aware of each other's
>> > data by replication.
>> >
>> > kinda:
>> > if (connect(db1) fails) { tellus; connect(db2);
>> >     if (connect(db2) fails) { tellus; return _err; } }
>> > of course, on each connect attempt it'll try db1 first before falling
>> > back to db2;
>> >
>> > lightly tested with pgsql/mysql against dbmail-1.1 (from
>> > http://www.dbmail.org); it's quite simple, though I can't say how
>> > it'll work on your mailservers.
>> >
>> >
>> > let me know if I did something wrong. Sometime (when I find the
>> > time) I'll try to change the code to use more than 2 dbs and not be
>> > so static. I hope Eelco and Roel would be keen on implementing
>> > something like this permanently?
>> > (patch attached)
>> >
>> >
>> > cheers
>> > -lou
>> >
>> > --
>> >
>> > Lou Kamenov AEYE R&D lou.kamenov@aeye.net
>> > FreeBSD BGUG http://www.freebsd-bg.org lou@FreeBSD-bg.org
>> > Secureroot UK http://secureroot.org.uk phayze@secureroot.org.uk
>> > Key Fingerprint - 936F F64A AD50 2D27 07E7 6629 F493 95AE A297 084A
>> > One advantage of talking to yourself is that you know at least
>> > somebody's listening. - Franklin P. Jones
>> > <dbmail-fallback.patch.gz>
>>
>> _________________________
>> R.A. Rozendaal
>> ICT Manager
>> IC&S
>> T: +31 30 2322878
>> F: +31 30 2322305
>> www.ic-s.nl
>>
>> _______________________________________________
>> Dbmail-dev mailing list
>> Dbmail-dev@dbmail.org
>> http://twister.fastxs.net/mailman/listinfo/dbmail-dev
>>
>