Mailing List Archive

Replacing sbd devices in running cluster
Hi Pacemaker Experts

I have inherited a pacemaker cluster running on the SLES11SP1 stack.
Sadly there is no support contract any more.

# rpm -qa | egrep 'pacemaker'
pacemaker-mgmt-2.0.0-0.5.5
pacemaker-mgmt-client-2.0.0-0.5.5
pacemaker-1.1.5-5.9.11.1
libpacemaker3-1.1.5-5.9.11.1
drbd-pacemaker-8.3.11-0.3.1

I need to migrate both SBD devices, i.e. /dev/sdX and /dev/sdY, to
/dev/sdA and /dev/sdB.

It is required to do this without service downtime.

The sbd resource is running OK:

# crm_mon -1 | grep sbd
stonith_sbd_is (stonith:external/sbd): Started node1

and is configured as follows

# crm configure show | grep sbd
primitive stonith_sbd_is stonith:external/sbd \
params sbd_device="/dev/sdX;/dev/sdY"

and sbd processes are running from this config

# cat /etc/sysconfig/sbd
SBD_DEVICE="/dev/sdX;/dev/sdY"
SBD_OPTS="-W"

# pstree -halp | grep sbd
|-sbd,8797
| |-sbd,8798
| `-sbd,8799

(similarly on node2)

The resources are all running, and both nodes are OK

# crm_mon
Last updated: Thu Jan 29 18:52:53 2015
Stack: openais
Current DC: s987l1020 - partition with quorum
Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
2 Nodes configured, 2 expected votes
61 Resources configured.
============

Online: [ node1 node2 ]

stonith_sbd_is (stonith:external/sbd): Started node1
....


Is this change possible without stopping the cluster? If so, how
should I best implement it?

With downtime, I guess all I would need to do is change the
/etc/sysconfig/sbd file, change the settings within the cluster via
crm, and restart everything. But I cannot see the cluster surviving
that without downtime.

Best
GH

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: Replacing sbd devices in running cluster
From one of the two nodes, tell the sbd daemons watching the current
devices /dev/sdX and /dev/sdY to exit:

sbd -d "/dev/sdX;/dev/sdY" message node1 exit
sbd -d "/dev/sdX;/dev/sdY" message node2 exit

Then initialize the new devices:

sbd -d /dev/sdA create && sbd -d /dev/sdB create

Now, on every cluster node, start the watcher on the new devices:

sbd -d "/dev/sdA;/dev/sdB" -D -W watch


If you specify your sbd devices in /etc/sysconfig/sbd, you don't need
to use params sbd_device="/dev/sdX;/dev/sdY" on the primitive.
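Before starting the watchers on the new devices, it may be worth
sanity-checking that the create step worked. A minimal check, assuming
the new devices are /dev/sdA and /dev/sdB as above, could be:

```shell
# Dump the sbd metadata header of each new device
# (should show the slot count and the configured timeouts)
sbd -d /dev/sdA dump
sbd -d /dev/sdB dump

# List the allocated message slots; a node's slot appears here
# once its sbd daemon has registered on the device
sbd -d /dev/sdA list
sbd -d /dev/sdB list
```

dump and list are read-only operations, so they should be safe to run
on a live cluster.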

2015-01-29 19:00 GMT+01:00 Gregory House <raintown.us@gmail.com>:



--
this is my life and I live it for as long as God wills

Re: Replacing sbd devices in running cluster
On 2015-01-30T08:29:18, emmanuel segura <emi2fast@gmail.com> wrote:

> from one of two:
>
> /dev/sdX and /dev/sdY
>
> sbd -d "/dev/sdX;/dev/sdY" message node1 exit
> sbd -d "/dev/sdX;/dev/sdY" message node2 exit
>
> sbd -d /dev/sdA create && sbd -d /dev/sdB create
>
> Now in every cluster node
>
> sbd -d "/dev/sdA;/dev/sdB" -D -W watch

This command will not work. You need to specify the devices separately
for sbd:

sbd -d /dev/sdX -d /dev/sdY
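Applied to the migration steps above, the corrected invocations would
then presumably be (same procedure, but each device passed as its own
-d argument):

```shell
# Tell the running sbd daemons, still watching the old devices, to exit
sbd -d /dev/sdX -d /dev/sdY message node1 exit
sbd -d /dev/sdX -d /dev/sdY message node2 exit

# On every cluster node: start watchers on the new devices
sbd -d /dev/sdA -d /dev/sdB -D -W watch
```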


Regards,
Lars

--
Architect Storage/HA
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


Re: Replacing sbd devices in running cluster
Yes Lars, you are right. I forgot to look in /etc/init.d/openais:

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

SBD_DEVS=${SBD_DEVICE%;}
SBD_DEVICE=${SBD_DEVS//;/ -d }

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
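So the init script itself rewrites the semicolon-separated SBD_DEVICE
list into separate -d arguments before starting the daemon. A quick
self-contained bash illustration of those two expansions (the device
names are just placeholders):

```shell
#!/bin/bash
# Semicolon-separated list, as it appears in /etc/sysconfig/sbd
SBD_DEVICE="/dev/sdA;/dev/sdB"

# Strip one trailing semicolon, if present
SBD_DEVS=${SBD_DEVICE%;}

# Replace every remaining semicolon with " -d "
SBD_DEVICE=${SBD_DEVS//;/ -d }

# The daemon therefore receives one -d flag per device:
echo "sbd -d $SBD_DEVICE -D -W watch"
# prints: sbd -d /dev/sdA -d /dev/sdB -D -W watch
```

Note that ${var//pattern/string} is a bashism rather than POSIX sh,
which is fine here since the SLES init scripts run under bash.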

Thanks

2015-01-30 12:47 GMT+01:00 Lars Marowsky-Bree <lmb@suse.com>:
> This command will not work. You need to specify the devices separately
> for sbd:
>
> sbd -d /dev/sdX -d /dev/sdY
>
>
> Regards,
> Lars



--
this is my life and I live it for as long as God wills
