
Fencing dependency between bare metal host and its VM guests
Hello,

I finally managed to integrate my VM into corosync, and dlm/clvm/GFS2
are running on it.

Now I have one issue: when the bare metal host on which the VM is
running dies, the VM is lost and cannot be fenced.

Is there a way to make pacemaker acknowledge the fencing of the VM running on a
host when the host itself is fenced?

Regards.

--
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
Re: Fencing dependency between bare metal host and its VM guests
On Fri, 07 Nov 2014 17:46:40 +0100
Daniel Dehennin <daniel.dehennin@baby-gnu.org> wrote:

> Hello,
>
> I finally managed to integrate my VM into corosync, and dlm/clvm/GFS2
> are running on it.
>
> Now I have one issue: when the bare metal host on which the VM is
> running dies, the VM is lost and cannot be fenced.
>
> Is there a way to make pacemaker acknowledge the fencing of the VM running on a
> host when the host itself is fenced?
>

Yes, you can define multiple stonith agents and set priorities between them.

http://clusterlabs.org/wiki/Fencing_topology
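
For illustration, a two-level topology for one node might look like the
sketch below in crm shell syntax (the device names are placeholders, not
from this thread); each whitespace-separated entry is a fencing level,
and levels are tried in order until one succeeds:

    # Sketch only: two fencing devices registered for nebula1, IPMI tried
    # first, SBD as fallback. The names must match existing stonith
    # primitives in the actual configuration.
    crm configure fencing_topology \
        nebula1: fence-nebula1-ipmi fence-nebula1-sbd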

> Regards.
>
Re: Fencing dependency between bare metal host and its VM guests
Andrei Borzenkov <arvidjaar@gmail.com> writes:


[...]

>> Now I have one issue: when the bare metal host on which the VM is
>> running dies, the VM is lost and cannot be fenced.
>>
>> Is there a way to make pacemaker acknowledge the fencing of the VM running on a
>> host when the host itself is fenced?
>>
>
> Yes, you can define multiple stonith agents and set priorities between them.
>
> http://clusterlabs.org/wiki/Fencing_topology

Hello,

If I understand correctly, fencing topology is a way to define several
fencing devices for a node and try them in order until one succeeds.

In my configuration, I group each VM stonith agent with the
corresponding VM resource so that they move together[1].
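
For reference, such a group might look roughly like this in crm shell
syntax (a sketch only: the stonith resource name, paths and parameter
values are illustrative assumptions, not the actual configuration):

    # A VM resource plus the stonith agent that fences that VM, grouped
    # so they are colocated and move together. Names, paths and parameter
    # values are placeholders.
    primitive ONE-Frontend ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/one-frontend.xml" hypervisor="qemu:///system"
    primitive stonith-one-frontend stonith:external/libvirt \
        params hostlist="one-frontend" hypervisor_uri="qemu:///system"
    group ONE-Frontend-Group ONE-Frontend stonith-one-frontend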

Here is my use case:

1. Resource ONE-Frontend-Group runs on nebula1
2. nebula1 is fenced
3. Node one-frontend cannot be fenced

Is there a way to say that the life of node one-frontend is related to
the state of resource ONE-Frontend?

In that case, when node nebula1 is fenced, pacemaker should be aware that
resource ONE-Frontend is not running any more, so node one-frontend is
OFFLINE and not UNCLEAN.

Regards.

Footnotes:
[1] http://oss.clusterlabs.org/pipermail/pacemaker/2014-October/022671.html

--
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
Re: Fencing dependency between bare metal host and its VM guests
I think the suggestion was to put shooting the host in the fencing path
of the VM. This way, if you can't get the host to fence the VM (because
the host is already dead), you just check whether the host was fenced.
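
In crm shell terms that could look roughly like the sketch below (all
device names, addresses and credentials are placeholders, and letting
the host's fence device also target the VM node via pcmk_host_list is
an assumption here): level 1 tries the VM's own stonith agent, level 2
falls back to shooting the bare metal host that carries it.

    # Sketch only: the host's IPMI device also lists the VM node, so
    # fencing the host is available as a second fencing level for the VM.
    primitive stonith-nebula1-ipmi stonith:external/ipmi \
        params hostname="nebula1" ipaddr="192.0.2.10" userid="admin" passwd="secret" \
               pcmk_host_list="nebula1 one-frontend"
    fencing_topology \
        one-frontend: stonith-one-frontend stonith-nebula1-ipmi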

Daniel Dehennin <daniel.dehennin@baby-gnu.org> wrote:
>[...]
>
>Is there a way to say that the life of node one-frontend is related to
>the state of resource ONE-Frontend?
>
>In that case, when node nebula1 is fenced, pacemaker should be aware that
>resource ONE-Frontend is not running any more, so node one-frontend is
>OFFLINE and not UNCLEAN.
>
>[...]

--
Sent with K-9 Mail.
Re: Fencing dependency between bare metal host and its VM guests
On Mon, 10 Nov 2014 10:07:18 +0100
Tomasz Kontusz <tomasz.kontusz@gmail.com> wrote:

> I think the suggestion was to put shooting the host in the fencing path
> of the VM. This way, if you can't get the host to fence the VM (because
> the host is already dead), you just check whether the host was fenced.
>

Exactly. One thing I do not know is how it will behave with multiple
VMs on the same host, i.e. whether pacemaker will try to fence the host
for every VM or recognize that all the VMs are dead after the first
time the agent is invoked.

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org