Mailing List Archive

HA MediaWiki
Hi!

We would like to increase the stability of our MediaWiki setup by
running several (identical) instances behind a load balancer.

We are using a PostgreSQL cluster as the database, so I am reluctant to set
up a multi-master solution à la $wgLBFactoryConf. Instead I would like to
use $wgSharedDB/$wgSharedTables to have all MW frontends access the same
database, and eventually share the necessary directories via NFS (coming
from an HA fileserver). The plan is to run the frontends active/passive,
i.e. usually only one frontend will write to the database.
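
Roughly, I imagine every frontend carrying the same LocalSettings.php along
these lines (hostnames and paths are only placeholders of mine):

  $wgDBtype   = 'postgres';
  $wgDBserver = 'pg-cluster.example.org';   # virtual address of our PostgreSQL cluster (placeholder)
  $wgDBname   = 'mediawiki';
  # my idea: point $wgSharedDB/$wgSharedTables at the one common database
  $wgSharedDB     = 'mediawiki';
  $wgSharedTables = [ 'user', 'user_properties' ];
  # shared files via NFS from the HA fileserver (placeholder path)
  $wgUploadDirectory = '/mnt/nfs/mediawiki/images';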

Is this a supported/tested configuration? The warning not to share
certain tables is obvious for wikis with different content, but it is
not clear to me whether problems are also to be expected if I actually
want identical content across all wikis.

If it is possible in principle, are there any other gotchas? E.g.
disabling $wgPHPSessionHandling or something similar?
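
For example, I wonder whether something along these lines is needed so that
a session survives a switch to another frontend (the cache backend here is
just a guess on my part):

  # keep sessions out of local PHP session files, so any frontend can pick them up
  $wgSessionCacheType   = CACHE_DB;        # or a shared cache such as Memcached
  $wgPHPSessionHandling = 'disable';       # let MediaWiki's own session manager handle everything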

--
Jörn Clausen
Plattformen & Serverdienste
BITS - Bielefelder IT-Servicezentrum
https://www.uni-bielefeld.de/bits

Re: HA MediaWiki
Generally no - $wgSharedDB is used to override which database specific
tables live in; it's not a high-availability mechanism.
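
Its typical use is a wiki family that shares user accounts, roughly like
this (the wiki/database names are just examples):

  # on each wiki of the family, in addition to its own $wgDBname:
  $wgSharedDB     = 'commonwiki';                    # the database whose tables are shared
  $wgSharedTables = [ 'user', 'user_properties' ];   # read from commonwiki instead of the local db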

If you want multiple PHP application servers to use the same database, you
don't have to do anything special - just give all the servers the same config.
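
That is, every application server just gets the same database settings
(hostname and credentials below are examples only):

  # identical on every PHP application server
  $wgDBtype     = 'postgres';
  $wgDBserver   = 'db.example.org';   # the one database server they all talk to
  $wgDBname     = 'mediawiki';
  $wgDBuser     = 'wikiuser';
  $wgDBpassword = 'secret';           # example only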

If you want multiple database servers (in a master/slave setup), it is highly
recommended to use MySQL/MariaDB instead. All of MediaWiki's high-performance
testing is done using MariaDB.
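
A rough sketch of what that usually looks like with $wgDBservers (hostnames
and credentials are examples only; the first entry is the master):

  $wgDBservers = [
      [
          'host'     => 'db-master.example.org',
          'dbname'   => 'mediawiki',
          'user'     => 'wikiuser',
          'password' => 'secret',
          'type'     => 'mysql',
          'flags'    => DBO_DEFAULT,
          'load'     => 0,   # send reads to the replicas, not the master
      ],
      [
          'host'     => 'db-replica1.example.org',
          'dbname'   => 'mediawiki',
          'user'     => 'wikiuser',
          'password' => 'secret',
          'type'     => 'mysql',
          'flags'    => DBO_DEFAULT,
          'load'     => 1,   # read load weight
      ],
  ];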

If setting up multiple PHP application servers, it is recommended to set up
a Memcached server as a shared cache for all of them (if both Memcached and
an accelerator like APCu are available, MediaWiki will use Memcached for
shared caching and APCu for things that are appropriate to cache per server).
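
Something along these lines on every application server (the address is an
example):

  $wgMainCacheType    = CACHE_MEMCACHED;
  $wgMemCachedServers = [ 'memcached.example.org:11211' ];   # shared by all app servers
  $wgSessionCacheType = CACHE_MEMCACHED;                     # sessions work on any app server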

There is some experimental work on having MediaWiki run in multiple clusters,
where one cluster handles read/write traffic and the others handle read-only
traffic. This is still very experimental, probably not complete (not sure),
and probably not remotely ready for production use yet.

--
Brian

_______________________________________________
MediaWiki-l mailing list
To unsubscribe, go to:
https://lists.wikimedia.org/mailman/listinfo/mediawiki-l