Mailing List Archive

Fwd: Re: drbd / libvirt / Pacemaker Cluster?
Hello,

OK, I have now configured it in Pacemaker / crm.

Since the config has stonith/fencing enabled it has caused many problems:
after a reboot the nodes are unclean, and so on. I need an
automatic hot standby...

When I power off the master box, the resources do not come up on the
slave; the slave still reports the master as "online" even though the
machine is powered off...
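
What the surviving node actually believes can be checked directly on the
slave while the master is powered off. A quick sketch with standard tools
(output omitted; corosync 2.x assumed):

    crm_mon -1              # one-shot cluster status: node states and resource placement
    corosync-quorumtool -s  # membership and quorum as corosync sees it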

---

Logs that may be interesting:
master corosync[1350]: [QUORUM] This node is within the non-primary
component and will NOT provide any services.
master warning: do_state_transition: Only 1 of 2 cluster nodes are
eligible to run resources - continue 0
notice: pcmk_quorum_notification: Membership 900: quorum lost (1)
notice: crm_update_peer_state: pcmk_quorum_notification: Node
slave[1084777474] - state is now lost (was member)
notice: stonith_device_register: Added 'st-null:0' to the device list (2
active devices)
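
"Quorum lost" is expected here: with only two nodes the survivor holds
just one of two votes, which is not a majority. Corosync's votequorum
has a dedicated two-node mode for this case; a sketch of the relevant
corosync.conf section, assuming corosync 2.x (not taken from this
cluster's actual configuration):

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }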

---

The config is now as follows. (Why does the crm shell output so many
"^O" characters? I never typed them, but the shell itself works.)

node $id="1084777473" master \
    attributes standby="off" maintenance="off"
node $id="1084777474" slave \
    attributes maintenance="off" standby="off"
primitive libvirt upstart:libvirt-bin \
    op start timeout="120s" interval="0" \
    op stop timeout="120s" interval="0" \
    op monitor interval="30s" \
    meta target-role="Stopped"
primitive st-null stonith:null \
    params hostlist="master slave"
primitive vmdata ocf:linbit:drbd \
    params drbd_resource="vmdata" \
    op monitor interval="29s" role="Master" timeout="20" \
    op monitor interval="31s" role="Slave" timeout="20" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"
primitive vmdata_fs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/vmdata" fstype="ext4" \
    meta target-role="Started" \
    op monitor interval="20" timeout="40" \
    op start timeout="30" interval="0" \
    op stop timeout="30" interval="0"
ms drbd_master_slave vmdata \
    meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true" target-role="Started"
clone fencing st-null
location PrimaryNode-libvirt libvirt 200: master
location PrimaryNode-vmdata_fs vmdata_fs 200: master
location SecondaryNode-libvirt libvirt 10: slave
location SecondaryNode-vmdata_fs vmdata_fs 10: slave
colocation libvirt-with-fs inf: libvirt vmdata_fs
colocation services_colo inf: vmdata_fs drbd_master_slave:Master
order fs_after_drbd inf: drbd_master_slave:promote vmdata_fs:start libvirt:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-42f2063" \
    cluster-infrastructure="corosync" \
    stonith-enabled="true" \
    no-quorum-policy="ignore" \
    last-lrm-refresh="1416390260"
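
Note that stonith:null is a test-only device: it reports success without
ever powering anything off, so it is not suitable for real failover. A
rough sketch of what a real fencing setup could look like, here using the
external/ipmi plugin (the agent choice, addresses and credentials are
placeholders, not taken from this cluster):

    primitive st-master stonith:external/ipmi \
        params hostname="master" ipaddr="192.168.0.10" userid="admin" \
            passwd="secret" interface="lan" \
        op monitor interval="60s"
    primitive st-slave stonith:external/ipmi \
        params hostname="slave" ipaddr="192.168.0.11" userid="admin" \
            passwd="secret" interface="lan" \
        op monitor interval="60s"
    location l-st-master st-master -inf: master
    location l-st-slave st-slave -inf: slave

Such primitives would replace st-null and its fencing clone; the location
constraints keep each device off the node it is meant to fence.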

hm

On 11/14/14 at 14:00, emmanuel segura wrote:
> you need to configure fencing in Pacemaker
>
> 2014-11-14 13:04 GMT+01:00 Heiner Meier <linuxforums@erwo.net>:
>> Hello,
>>
>> I have now configured fencing in DRBD:
>>
>> disk {
>>     fencing resource-only;
>> }
>> handlers {
>>     fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
>>     after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>> }
>>
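
For context, the disk {} and handlers {} sections quoted above belong
inside a DRBD resource definition (or the common {} section of
global_common.conf). A sketch for the vmdata resource used in this
cluster; backing devices and replication addresses are placeholders:

    resource vmdata {
        disk {
            fencing resource-only;
        }
        handlers {
            fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        on master {
            device    /dev/drbd0;
            disk      /dev/sdb1;          # backing device: placeholder
            address   192.168.1.1:7789;   # replication address: placeholder
            meta-disk internal;
        }
        on slave {
            device    /dev/drbd0;
            disk      /dev/sdb1;          # placeholder
            address   192.168.1.2:7789;   # placeholder
            meta-disk internal;
        }
    }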




_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: Fwd: Re: drbd / libvirt / Pacemaker Cluster?
export TERM=linux and resend your config
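
The "^O" characters are terminal control codes that the crm shell emits
for the terminal type the session advertises; with a plain TERM the
output comes out clean. A minimal sketch:

    export TERM=linux     # plain terminal type, no control sequences in the output
    crm configure show    # dump the configuration again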

2014-12-02 11:04 GMT+01:00 Heiner Meier <linuxforums@erwo.net>:



--
this is my life and I live it as long as God wills
