Mailing List Archive

tie-breaker node promotion
Hi drbd users,

I’ve set up a two-node configuration, and it works as expected, protecting me from a single node failure.
Now I want to also be protected against a network failure and the resulting split-brain state.
I’ve added a third, diskless node to serve as a tiebreaker. But now, when I test the failure of a working
node, the third node sometimes gets promoted to the active role.

The configuration is as follows:

resource "data" {
    device minor 1;
    disk /dev/sda;
    meta-disk internal;

    on "worker1" {
        node-id 0;
        address 172.16.3.113:7789;
    }
    on "worker2" {
        node-id 1;
        address 172.16.3.114:7789;
    }
    on "breaker" {
        node-id 2;
        address 172.16.3.115:7789;
        disk none;
    }

    connection-mesh {
        hosts "worker1" "worker2" "breaker";
    }
}

What am I doing wrong here?
Can DRBD be configured so the diskless node never gets promoted?
Or is there another way to get what I need?
Thanks a lot in advance!
--
Regards, — Boris.
Re: tie-breaker node promotion
On 3/7/24 04:40, Boris Tobotras wrote:
> Hi drbd users,
>
> I’ve set up a two-node configuration, and it works as expected, protecting me from a single node failure.
> Now I want to also be protected against a network failure and the resulting split-brain state.
> I’ve added a third, diskless node to serve as a tiebreaker. But now, when I test the failure of a working
> node, the third node sometimes gets promoted to the active role.


I believe that you need 3 replicating nodes to use quorum. Diskless
nodes do not participate in quorum, since they do not themselves have a
copy of the replicated data and so cannot vouch for its validity.

An alternative is to add redundancy to the network itself.
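
If you do go the quorum route, it is enabled in the resource's options
section. A minimal sketch, with option names taken from the DRBD 9
guide (not tested against your setup):

    options {
        quorum majority;        # a partition must see a majority of nodes to keep quorum
        on-no-quorum io-error;  # I/O fails on the partition that lost quorum, instead of diverging
    }

With that in place, a node cut off from the majority refuses writes
rather than carrying on as a second primary.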


Re: tie-breaker node promotion
https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-configuring-quorum-tiebreaker
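
If I read that section correctly, a diskless node can act as the quorum
tiebreaker once quorum is enabled on the resource, so something along
these lines (a sketch; the option names are from the guide, adapt the
rest to your config):

resource "data" {
    options {
        quorum majority;        # the diskless "breaker" node then counts as a tiebreaker vote
        on-no-quorum io-error;  # the partition without quorum stops serving I/O
    }
    ...
}

And if something on the tiebreaker host is opening the DRBD device and
triggering promotion, auto-promote (on by default in DRBD 9) can also
be disabled in the same options section, assuming your cluster manager
then promotes the worker nodes explicitly.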

On Fri, Mar 8, 2024 at 4:51 PM Josh Fisher <jfisher@jaybus.com> wrote:
