Mailing List Archive

Multiple live cib in one pacemaker
Hello,
I am using Pacemaker 1.1.12 together with Corosync 2.4.3 in my cluster
environment. I have two nodes in the cluster, located in different LANs.
It may happen that the nodes cannot reach each other because of a network
failure. In that situation a split brain can occur if we add new
configuration on both nodes, and that configuration differs between the
nodes.

1. After the network failure is repaired, both nodes will rejoin the
cluster, but there are two different configurations. Which CIB
configuration will be used? Is it possible that the configuration on the
first node will be overwritten by the second node?
2. Is there any way to run multiple live CIB configurations to avoid
losing configuration in a split-brain situation?

My split brain example:

1. Both nodes are up and the CIB configuration is in sync
2. node1 has vip1 as a resource, node2 has vip2 as a resource
3. A network failure occurs and the nodes can no longer sync their
configuration
4. An admin adds vip2 on node1
5. A second admin adds vip3 on node2 (now the configuration is
split-brained)
6. The network failure is resolved and the nodes rejoin the cluster
7. The CIB configuration is synced, and the configuration from node1 or
node2 is probably lost

How can this problem be solved?

Thank you in advance!
Best regards,
Adam Blaszczykowski
Re: Multiple live cib in one pacemaker
Fixed by using a third node.
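To expand on that a little: with a third vote in the cluster, at most one
side of a network split can keep quorum, so only one partition can go on
modifying the configuration. A minimal corosync votequorum stanza for a
three-node setup might look like this (values are illustrative, adapt to
your environment):

```
# corosync.conf (fragment) -- illustrative sketch, not a tested config
quorum {
    provider: corosync_votequorum
    # three nodes, one vote each: a partition needs 2 votes to stay
    # quorate, so at most one side of a split can apply changes
    expected_votes: 3
}
```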

On Thu, Feb 19, 2015 at 8:26 AM, Adam Błaszczykowski <
adam.blaszczykowski@gmail.com> wrote:

> Hello,
> I am using Pacemaker 1.1.12 together with Corosync 2.4.3 in my cluster
> environment. I have two nodes in the cluster, located in different LANs.
> It may happen that the nodes cannot reach each other because of a network
> failure. In that situation a split brain can occur if we add new
> configuration on both nodes, and that configuration differs between the
> nodes.
>
> 1. After the network failure is repaired, both nodes will rejoin the
> cluster, but there are two different configurations. Which CIB
> configuration will be used? Is it possible that the configuration on the
> first node will be overwritten by the second node?
> 2. Is there any way to run multiple live CIB configurations to avoid
> losing configuration in a split-brain situation?
>
> My split brain example:
>
> 1. Both nodes are up and the CIB configuration is in sync
> 2. node1 has vip1 as a resource, node2 has vip2 as a resource
> 3. A network failure occurs and the nodes can no longer sync their
> configuration
> 4. An admin adds vip2 on node1
> 5. A second admin adds vip3 on node2 (now the configuration is
> split-brained)
> 6. The network failure is resolved and the nodes rejoin the cluster
> 7. The CIB configuration is synced, and the configuration from node1 or
> node2 is probably lost
>
> How can this problem be solved?
>
> Thank you in advance!
> Best regards,
> Adam Blaszczykowski
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>


--
~ Mark Gardner ~
markgard@gmail.com
Re: Multiple live cib in one pacemaker
Adam Błaszczykowski <adam.blaszczykowski@gmail.com> writes:

> Hello,
> I am using Pacemaker 1.1.12 together with Corosync 2.4.3 in my cluster
> environment. I have two nodes in the cluster, located in different LANs.
> It may happen that the nodes cannot reach each other because of a network
> failure. In that situation a split brain can occur if we add new
> configuration on both nodes, and that configuration differs between the
> nodes.

Running a single cluster across different LANs with an unreliable
network connection doesn't seem like a situation supported by
Pacemaker. You probably need to look at something like booth [1] to get
this to work reliably.

Booth allows resources to migrate between different clusters, using
tickets and an arbiter in a third location.

[1]: https://github.com/ClusterLabs/booth
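To give a rough idea of the shape of it, a minimal booth.conf for two
sites plus an arbiter at a third location could look something like this
(all addresses and the ticket name are placeholders):

```
# /etc/booth/booth.conf -- illustrative sketch, not a tested configuration
transport = UDP
port = 9929
# one address per cluster site, plus the arbiter at a third location
site = 192.168.1.100
site = 192.168.2.100
arbiter = 192.168.3.100
# resources are tied to this ticket inside each cluster
ticket = "service-ticket"
```

Each site's Pacemaker then only runs the protected resources while it
holds the ticket, via rsc_ticket constraints in the CIB.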


--
// Kristoffer Grönlund
// kgronlund@suse.com

Re: Multiple live cib in one pacemaker
> On 20 Feb 2015, at 1:52 am, Kristoffer Grönlund <kgronlund@suse.com> wrote:
>
> Adam Błaszczykowski <adam.blaszczykowski@gmail.com> writes:
>
>> Hello,
>> I am using Pacemaker 1.1.12 together with Corosync 2.4.3 in my cluster
>> environment. I have two nodes in the cluster, located in different LANs.
>> It may happen that the nodes cannot reach each other because of a network
>> failure. In that situation a split brain can occur if we add new
>> configuration on both nodes, and that configuration differs between the
>> nodes.
>
> Running a single cluster across different LANs with an unreliable
> network connection doesn't seem like a situation supported by
> Pacemaker. You probably need to look at something like booth [1] to get
> this to work reliably.
>
> Booth allows resources to migrate between different clusters, using
> tickets and an arbiter in a third location.

And probably sbd for fencing.
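As a rough sketch of what that involves (the device path and agent
choice are placeholders, adapt to your storage setup):

```
# /etc/sysconfig/sbd -- illustrative; the shared disk must be visible
# to every node so a partitioned node can still be fenced via storage
SBD_DEVICE="/dev/disk/by-id/shared-disk-for-sbd"
SBD_WATCHDOG_DEV="/dev/watchdog"

# plus a matching fencing resource in the CIB, e.g. with crmsh:
#   crm configure primitive stonith-sbd stonith:external/sbd
#   crm configure property stonith-enabled=true
```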

>
> [1]: https://github.com/ClusterLabs/booth
>
>
> --
> // Kristoffer Grönlund
> // kgronlund@suse.com
>

