Mailing List Archive

allow-two-primaries (hot migration) and three nodes...
Hi all,

Apart from the extra network consumption, is there a way to replicate
across 3 servers while still being able to hot-migrate an instance to one
of the two other servers?

The "allow-two-primaries" parameter seems to suggest otherwise, but I'm
asking the question anyway...

Many thanks for your advice...


#----------------------------------------------------------------------------
#--- DRBD instance conf - Configuration DRBD pour GENESIX v3 (dom0 Xen)
#----------------------------------------------------------------------------

resource i1c1-disk {

device /dev/drbd1;
disk /dev/vg-gnx-001/i1c1-disk;
meta-disk /dev/vg-gnx-001/i1c1-meta;

on n201c1.genesix.org {
address 10.0.1.201:63001;
node-id 1;

}

on n202c1.genesix.org {
address 10.0.1.202:63001;
node-id 2;

}

on n203c1.genesix.org {
address 10.0.1.203:63001;
node-id 3;

}

connection-mesh {
hosts n201c1.genesix.org n202c1.genesix.org n203c1.genesix.org;
}

net {
sndbuf-size 4M;
allow-two-primaries yes; <<<< IS THIS ALLOWED ?
}

}

#----------------------------------------------------------------------------
#--- EOF
#----------------------------------------------------------------------------

--
Stéphane Rivière
Ile d'Oléron - France
_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Stéphane,

despite its name, this option simply means "allow many primaries" as of DRBD 9.
I’ve got a similar setup with two Diskful and one Diskless nodes. I don’t know why this shouldn’t work with three Diskful nodes.
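
For reference, a permanently diskless peer is declared with "disk none" in
the resource file. Roughly like this, sketched from memory rather than from
a tested config (the host name here is made up):

on n3-diskless.example.org {   # hypothetical diskless peer (e.g. a witness node)
    address 10.0.1.203:63001;
    node-id 3;
    volume 0 {
        device /dev/drbd1;
        disk none;             # no backing disk: joins the mesh but stores no data
    }
}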

--
Best regards
Thomas Keppler

Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Thomas,

> despite its name, this option simply means "allow many primaries" as of DRBD 9.

I suspected as much, but I preferred to have an "enlightened" opinion :)

> I’ve got a similar setup with two Diskful and one Diskless nodes. I don’t know why this shouldn’t work with three Diskful nodes.

Many thanks for your answer. I'll test it this weekend.

All the best from Oleron Island

--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi again Thomas

> despite its name, this option simply means "allow many primaries" as of DRBD 9.
> I’ve got a similar setup with two Diskful and one Diskless nodes. I don’t know why this shouldn’t work with three Diskful nodes.

I can confirm it, after many tests across the three servers.
So we can implement Tier-III instances.
DRBD is great.

Friendly greetings from Oléron Island

--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Stéphane,

Maybe I'm wrong, but with DRBD 9 you can't have 2 primary servers working
properly. Release 9.1 can do this.

Anthony

Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
AFAIK the recommendation is to have dual-primary mode enabled _only_ during
the live migration stage and, once that's completed, one must revert to
single-primary mode. LINBIT folks have warned multiple times against using
dual-primary mode _permanently_, for reasons that are obvious to me.


Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi,

> AFAIK the recommendation is to have dual-primary mode enabled _only_ during the live migration stage and, once that's completed, one must revert to single-primary mode. LINBIT folks have warned multiple times against using dual-primary mode _permanently_, for reasons that are obvious to me.

Yes, to be totally safe, one should use drbdsetup to change this value only temporarily. So the workflow would look something like this:

1.) drbdsetup net-options <resource> --allow-two-primaries yes
2.) xl migrate --live <domu> <host>
3.) drbdsetup net-options <resource> --allow-two-primaries no

...and the resource would be back to "no".

However, this really depends on your own judgement and on how many people will actually manage the servers.

> Maybe I'm wrong, but with DRBD 9 you can't have 2 primary servers working properly. Release 9.1 can do this

Using the connection-mesh, you can have up to 32 nodes per resource.

--
Best regards
Thomas Keppler

Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Anthony, Hi Gianni,

First of all, thanks for your friendly advice.

And I confirm I'm new to today's hot-migration landscape :) More precisely,
I did some successful experiments almost ten years ago with Xen, DRBD and
Remus (true HA). Very resource-intensive, but an interesting experience.

I've carefully checked each migration step with "drbdadm status
<drbd-volume>".

From my understanding and observations (feel free to correct me):

- The "allow-two-primaries" parameter is just permission to have two
primaries at the same time, not a constant state with two primaries active.

- I only ever saw one primary at a time across the three servers.

- When I launch "xl migrate <domu> <destination_server>", a Xen/DRBD
script is executed that manages (among other things) the primary state on
<destination_server> when necessary.

I confirm I don't have to run "drbdadm primary <drbd-volume>" on
<destination_server> before "xl migrate <domu> <destination_server>".

I migrated a test instance many times from server 1 > server 2 > server 3 >
server 1 > server 2 > and so on, while a simple process ran inside the
instance and printed text to a console.
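
In command form, that test loop looked roughly like this (a sketch with a
placeholder domU name, run from an admin shell):

# Round-robin migration test (untested sketch, placeholder names)
src=n201c1.genesix.org
for dst in n202c1.genesix.org n203c1.genesix.org n201c1.genesix.org; do
    ssh "$src" xl migrate domu-test "$dst"    # migrate from the current host to the next one
    ssh "$dst" drbdadm status i1c1            # check the DRBD roles after each hop
    src="$dst"
done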

On 12/12/2020 at 20:47, Gianni Milo wrote:
> AFAIK the recommendation is to have dual-primary mode enabled _only_
> during the live migration stage and, once that's completed, one must
> revert to single-primary mode. LINBIT folks have warned multiple times
> against using dual-primary mode _permanently_, for reasons that are
> obvious to me.

Sounds logical. Very hazardous indeed. I understand that "drbdadm primary
<drbd-volume>" is like a mount with read/write rights.

The management of DRBD and migrations is done only through scripts. So I
hope to have a correct, error-free process. It's a hope, not a certainty :)

--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Thomas,

> Yes, to be totally safe, one should use drbdsetup to change this value only temporarily. So the workflow would look something like this:
>
> 1.) drbdsetup net-options <resource> --allow-two-primaries yes
> 2.) xl migrate --live <domu> <host>
> 3.) drbdsetup net-options <resource> --allow-two-primaries no

I will script it following this wise advice :)

Many thanks for the suggestion...

--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Thomas,

> Yes, to be totally safe, one should use drbdsetup to change this value only temporarily. So the workflow would look something like this:
>
> 1.) drbdsetup net-options <resource> --allow-two-primaries yes
> 2.) xl migrate --live <domu> <host>
> 3.) drbdsetup net-options <resource> --allow-two-primaries no
>
> ...and the resource would be back to "no".

After following your advice, I'm facing a strange behaviour with a
three-node DRBD setup.

The command "drbdsetup net-options <resource> --allow-two-primaries yes"
seems to have no effect...

Obviously, the dual-primary mode parameter in each global_conf has been
deleted and all resources have been stopped...

On the primary node and the destination node, I ran (even though it is now
the default): drbdsetup net-options <resource> --allow-two-primaries no

Then, I can play with drbdsetup primary/secondary on resources, hot-migrate
instances, etc...

When migrating, I set the destination volume as primary¹ and, once the
migration is done, I set the former volume as secondary...

¹ So, at this very moment, I have two primary volumes: the running one
and the destination one.

I don't use the "Xen DRBD script automation" and have not replaced "phy"
with "drbd" in the Xen disk parameter of the instance config file, as
everything is managed by our own scripts...
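
Spelled out as commands, the sequence described above is roughly this
(a sketch with placeholder names; each command runs on the node indicated):

# on <destination_server>:
drbdadm primary i1c1
# on the current server:
xl migrate <domu> <destination_server>
# once the migration is done, on the former server:
drbdadm secondary i1c1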


So I'm sure I'm doing something wrong with this command.
I wonder what its correct use and scope are.

How do you use it?


Friendly greetings from Oléron Island



#----------------------------------------------------------------------------
#--- DRBD instance conf - Configuration DRBD pour GENESIX v3 (dom0 Xen)
#----------------------------------------------------------------------------

resource i1c1 {

device /dev/drbd1;
disk /dev/vg-gnx-001/i1c1-disk;
meta-disk /dev/vg-gnx-001/i1c1-meta;

on n201c1.genesix.org {
address 10.0.1.201:63001;
node-id 1;
}

on n202c1.genesix.org {
address 10.0.1.202:63001;
node-id 2;
}

on n250c1.genesix.org {
address 10.0.1.250:63001;
node-id 3;
}

connection-mesh {
hosts n201c1.genesix.org n202c1.genesix.org n250c1.genesix.org;
}
}

#----------------------------------------------------------------------------
#--- EOF
#----------------------------------------------------------------------------


#----------------------------------------------------------------------------
#--- DRBD global conf - Configuration DRBD pour GENESIX v3 (dom0 Xen)
#----------------------------------------------------------------------------

global {
minor-count 200; # 200 devices max per node, for 200 instances
# (more a theoretical limit than anything else)
udev-always-use-vnr; # treat implicit the same as explicit volumes
usage-count yes; # Statistics count for DRBD authors
}
common {

handlers {
}
startup {
}
options {
# bit positions: 76543210
cpu-mask FF; # e.g. 04 -> 00000100 -> CPU 2 only; FF -> all 8 CPUs
}
disk {
on-io-error detach;
resync-rate 1500M;
}
net {
protocol C;
verify-alg sha1;
}
}

#----------------------------------------------------------------------------
#--- EOF
#----------------------------------------------------------------------------


--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Stéphane,

> The command "drbdsetup net-options <resource> --allow-two-primaries yes"
> seems to have no effect...

That is unfortunate to hear. What does `drbdsetup show <resource>` say after you've executed the command? Does it tell the same story on all nodes taking part in that volume? Any interesting logs in the journal?
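
For instance, something along these lines (just a sketch):

drbdsetup show <resource> | grep -i allow-two-primaries  # options left at their default may not be listed
journalctl -k | grep -i drbd | tail -n 20                # recent DRBD kernel messages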

> On the primary node and the destination node, I ran (even though it is now
> the default): drbdsetup net-options <resource> --allow-two-primaries no
>
> Then, I can play with drbdsetup primary/secondary on resources, hot-migrate
> instances, etc...

Something seems wrong here. In a standard config, only one primary can exist. So it is weird that you can have two primaries when specifically resetting this parameter.

> How do you use it ?

Right now, I go the "lazy way" by just leaving it turned on and living with the associated risk. Ultimately, I want to move in the same direction I described to you in the other mail.
I should have mentioned that this was just from my notes, without proper testing. I apologise.

--
Sincerely
Thomas Keppler
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
Hi Thomas,

> That is unfortunate to hear. What does `drbdsetup show <resource>` say after you've executed the command? Does it tell the same story on all nodes taking part in that volume?
I had checked before posting, and the answer was "allow-two-primaries no".

> Any interesting logs in the journal?

My mistake... I hadn't dug into kern.log and... I saw this morning that my
syslog-ng rules are not perfect on our new cluster. So I've fixed the rules
and taken the opportunity to create specific rules for DRBD, to get a clean
drbd.log :)

Then I took the time to test again carefully and will come back with the feedback :)

> Something seems wrong here. In a standard config, only one primary can exist. So it is weird that you can have two primaries when specifically resetting this parameter.

Yes, I'll find out why. I'm sure I'm doing something wrong and I want to
understand why... This cluster is not /yet/ in production :)

--
Stéphane Rivière
Ile d'Oléron - France
Re: allow-two-primaries (hot migration) and three nodes... [ In reply to ]
So...

I have the answer, after many tries. The right syntax is...
with an equals sign between the parameter and the value :)

drbdadm net-options i1c1 --allow-two-primaries=no
drbdadm net-options i1c1 --allow-two-primaries=yes

But the command did not return an error message (sneakily :)

I was alerted because I use this trick in my prompt¹ :

root@n202c1 ???? ~ >drbdadm status
4-leaf clover=0

root@n202c1 ???? ~ >drbdadm net-options i1c1 --allow-two-primaries no
8-billiard ball > 0

root@n201c1 ???? ~ >drbdadm net-options i1c1 --allow-two-primaries=no
4-leaf clover=happy !

And... then... all went "logical"

root@n201c1 ???? ~ >drbdadm status i1c1
i1c1 role:Secondary
disk:UpToDate
n202c1.genesix.org role:Primary
peer-disk:UpToDate
n250c1.genesix.org role:Secondary
peer-disk:UpToDate

root@n201c1 ???? ~ >drbdadm primary i1c1
i1c1: State change failed: (-1) Multiple primaries not allowed by config
Command 'drbdsetup primary i1c1' terminated with exit code 11
root@n201c1 ???? ~ >drbdadm net-options i1c1 --allow-two-primaries=yes
root@n201c1 ???? ~ >drbdadm primary i1c1
root@n201c1 ???? ~ >

root@n201c1 ???? ~ >drbdadm status i1c1
i1c1 role:Primary
disk:UpToDate
n202c1.genesix.org role:Primary
peer-disk:UpToDate
n250c1.genesix.org role:Secondary
peer-disk:UpToDate
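
So the scripted sequence will end up looking roughly like this (a sketch
with placeholder names; each command runs on the node indicated):

drbdadm net-options i1c1 --allow-two-primaries=yes  # on the nodes involved (note the "=")
drbdadm primary i1c1                                # on the destination node
xl migrate <domu> <destination_server>              # on the source node
drbdadm secondary i1c1                              # on the source node, once the migration is done
drbdadm net-options i1c1 --allow-two-primaries=no   # back to single-primary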


All the best from Oleron Island, Thomas,

And thanks again for pointing me to the "best practice" of turning
"allow-two-primaries" on only when appropriate.

¹ This trick saved my day !!!


In .bashrc:

emoji_err_code() {
    local errcode="$?"
    if [[ "$errcode" -eq 0 ]]; then
        ErrSym=$(printf '\xF0\x9F\x8D\x80')   # four-leaf clover: last command succeeded
    else
        ErrSym=$(printf '\xF0\x9F\x8E\xB1')   # billiard ball: last command returned an error
    fi
    # vecb, blcb, vert, gris and c_of are colour variables defined elsewhere in .bashrc
    PS1=''${vecb}'\u'${blcb}'@'${vert}'\h '${ErrSym}' '${gris}'\w\n'${blcb}''${c_of}'${STY}>'
}
export PROMPT_COMMAND=emoji_err_code

--
Stéphane Rivière
Ile d'Oléron - France
_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user