Mailing List Archive

STONITH
hi,
i want to configure STONITH in my two-node openais (suse 11) cluster. i am
looking for a document on this. i saw Configuration 1.0 Explained
<http://clusterlabs.org/mediawiki/images/f/fb/Configuration_Explained.pdf> on
the clusterlabs site, but unfortunately the stonith section does not have any
information. if anyone can send me a link, it will be very helpful.


if no such good document is available, can anyone send me the steps for
configuring STONITH? do we configure it as a clone?

i am interested in suicide STONITH. i am running two STONITH suicide
resources as a clone on both nodes, but when i bring down the network
interface on a node it is not being reset.

Thanks a lot in advance,
Priyanka.
Re: STONITH [ In reply to ]
On Mon, Feb 2, 2009 at 09:16, Priyanka Ranjan <priyanka3rdfeb@gmail.com> wrote:
> hi,
> i want to configure STONITH in my two-node openais (suse 11) cluster. i am
> looking for a document on this. i saw Configuration 1.0 Explained on
> the clusterlabs site, but unfortunately the stonith section does not have any
> information. if anyone can send me a link, it will be very helpful.
>
>
> if no such good document is available, can anyone send me the steps for
> configuring STONITH? do we configure it as a clone?
>
> i am interested in suicide STONITH. i am running two STONITH suicide
> resources as a clone on both nodes, but when i bring down the network
> interface on a node it is not being reset.

why would it?

the node has no reason to kill itself (it cannot differentiate between
the other side being dead and a network failure).
and since the network is down, neither side can contact the other to
tell it that it should die.

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Re: STONITH [ In reply to ]
Andrew Beekhof wrote:
> On Mon, Feb 2, 2009 at 09:16, Priyanka Ranjan <priyanka3rdfeb@gmail.com> wrote:
>> hi,
>> i want to configure STONITH in my two-node openais (suse 11) cluster. i am
>> looking for a document on this. i saw Configuration 1.0 Explained on
>> the clusterlabs site, but unfortunately the stonith section does not have any
>> information. if anyone can send me a link, it will be very helpful.
>>
>>
>> if no such good document is available, can anyone send me the steps for
>> configuring STONITH? do we configure it as a clone?
>>
>> i am interested in suicide STONITH. i am running two STONITH suicide
>> resources as a clone on both nodes, but when i bring down the network
>> interface on a node it is not being reset.
>
> why would it?
>
> the node has no reason to kill itself (it cannot differentiate between
> the other side being dead and a network failure).
> and since the network is down, neither side can contact the other to
> tell it that it should die.

what about connecting this to a pingd resource?

cheers,
raoul
--
____________________________________________________________________
DI (FH) Raoul Bhatia M.Sc. email. r.bhatia@ipax.at
Technischer Leiter

IPAX - Aloy Bhatia Hava OEG web. http://www.ipax.at
Barawitzkagasse 10/2/2/11 email. office@ipax.at
1190 Wien tel. +43 1 3670030
FN 277995t HG Wien fax. +43 1 3670030 15
____________________________________________________________________

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Re: STONITH [ In reply to ]
On Mon, Feb 2, 2009 at 3:35 PM, Raoul Bhatia [IPAX] <r.bhatia@ipax.at> wrote:

> Andrew Beekhof wrote:
> > On Mon, Feb 2, 2009 at 09:16, Priyanka Ranjan <priyanka3rdfeb@gmail.com> wrote:
> >> hi,
> >> i want to configure STONITH in my two-node openais (suse 11) cluster. i am
> >> looking for a document on this. i saw Configuration 1.0 Explained on
> >> the clusterlabs site, but unfortunately the stonith section does not have any
> >> information. if anyone can send me a link, it will be very helpful.
> >>
> >>
> >> if no such good document is available, can anyone send me the steps for
> >> configuring STONITH? do we configure it as a clone?
> >>
> >> i am interested in suicide STONITH. i am running two STONITH suicide
> >> resources as a clone on both nodes, but when i bring down the network
> >> interface on a node it is not being reset.
> >
> > why would it?
> >
> > the node has no reason to kill itself (it cannot differentiate between
> > the other side being dead and a network failure).
> > and since the network is down, neither side can contact the other to
> > tell it that it should die.
>

the network interface is down on only one node, hence only one node is offline;
the other node is online.
so the node which is online knows that something happened to the first node and
that it is not reachable.

ok, i am a bit confused here. can you tell me in which scenario a node reset
will happen in the case of suicide STONITH?

can we monitor a STONITH resource?

>
>
> what about connecting this to a pingd resource?
>

could you please explain in more detail how to connect it to a pingd resource?


>
> cheers,
> raoul
Re: STONITH [ In reply to ]
On 02.02.2009 18:54, Priyanka Ranjan wrote:
> can we monitor a STONITH resource?

to my knowledge: yes. my stonith configuration looks like this:
> <clone id="DoFencing">
>   <meta_attributes id="doFancing_m">
>     <nvpair id="doFencing_ma_globally-unique" name="globally-unique" value="true"/>
>     <nvpair id="doFencing_ma_clone-max" name="clone-max" value="2"/>
>     <nvpair id="doFencing_ma_clone-node-max" name="clone-node-max" value="1"/>
>   </meta_attributes>
>   <primitive id="stonith_rackpdu" class="stonith" type="external/rackpdu">
>     <operations>
>       <op id="stonith_rackpdu_monitor" name="monitor" interval="5s" timeout="20s" requires="nothing"/>
>     </operations>
>     <instance_attributes id="stonith_rackpdu_ia">
>       <nvpair id="stonith_rackpdu_ia_hostlist" name="hostlist" value="wc01 wc02"/>
>       <nvpair id="stonith_rackpdu_ia_pduip" name="pduip" value="pdu02.r02.ipax.at"/>
>       <nvpair id="stonith_rackpdu_ia_community" name="community" value="wc"/>
>     </instance_attributes>
>   </primitive>
> </clone>

so i have a 5s monitoring operation for my stonith resource.
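
in crm shell syntax that is roughly the following (hand-translated from the
XML above, so double-check it before using it anywhere):

# stonith device for wc01/wc02, monitored every 5s
primitive stonith_rackpdu stonith:external/rackpdu \
        params hostlist="wc01 wc02" pduip="pdu02.r02.ipax.at" community="wc" \
        op monitor interval="5s" timeout="20s" requires="nothing"
# two globally-unique clone instances, at most one per node
clone DoFencing stonith_rackpdu \
        meta globally-unique="true" clone-max="2" clone-node-max="1"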

>> what about connecting this to a pingd resource?
>
> could you please explain in more detail how to connect it to a pingd resource?

i only started to play around with pingd in the last few days. it was
just a *wild* guess - e.g. disable the suicide stonith resource if you
find that you are still connected to e.g. your default gateway.
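
something along these lines perhaps (completely untested - the resource ids,
the ping target 192.168.1.1 and the suicide stonith resource name
"st_suicide" are only examples, adjust them to your setup):

# pingd updates a node attribute ("pingd") with the connectivity score
primitive p_pingd ocf:pacemaker:pingd \
        params host_list="192.168.1.1" multiplier="100" \
        op monitor interval="15s" timeout="20s"
clone c_pingd p_pingd \
        meta globally-unique="false"
# forbid the suicide stonith resource on nodes that can still reach the ping target
location l_suicide_only_when_isolated st_suicide \
        rule -inf: defined pingd and pingd gt 0

i.e. the suicide resource would only be allowed to run on a node that has lost
connectivity - no idea if that really helps with the scenario here, though.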

any thoughts?

cheers,
raoul
--
____________________________________________________________________
DI (FH) Raoul Bhatia M.Sc. email. r.bhatia@ipax.at
Technischer Leiter

IPAX - Aloy Bhatia Hava OEG web. http://www.ipax.at
Barawitzkagasse 10/2/2/11 email. office@ipax.at
1190 Wien tel. +43 1 3670030
FN 277995t HG Wien fax. +43 1 3670030 15
____________________________________________________________________

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Re: STONITH [ In reply to ]
On 2009-02-02T23:24:41, Priyanka Ranjan <priyanka3rdfeb@gmail.com> wrote:

> ok, i am a bit confused here. can you tell me in which scenario a node reset
> will happen in the case of suicide STONITH?

The suicide plugin only works in the case where the semi-healthy node needs to be
reset - i.e., when it is still reachable on the network, but needs to be
recovered because of a failed stop, for instance.

It is NOT usable for fencing needs.


Regards,
Lars

--
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde


_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Re: STONITH [ In reply to ]
Ok, as you said, suicide STONITH is not usable for fencing, so i am trying to use
stonith external/riloe.

i used the stonith command to test external/riloe; it is working
fine.

Could you please tell me what is required to configure stonith properly? i
have done the following:

1) stonith-enabled=true
2) stonith-action=reboot
3) no-quorum-policy=suicide

i have added a monitor op to the stonith resource and set "on-fail" to fence.
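
what i have configured is roughly along these lines (the ids, hostnames and
the external/riloe parameter names here are only placeholders from memory -
"stonith -t external/riloe -n" prints the exact parameter names):

# cluster properties
property stonith-enabled="true" stonith-action="reboot" no-quorum-policy="suicide"
# one riloe stonith resource per node to be fenced
primitive st_node1 stonith:external/riloe \
        params hostlist="node1" ilo_hostname="ilo-node1.example.com" \
               ilo_user="Administrator" ilo_password="secret" \
        op monitor interval="60s" timeout="60s" on-fail="fence"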

can you guide me on how to debug this, to find out why it is not getting executed? i
am using this in a two-node cluster. does a node require quorum to fence the
other node?

Thanks & Regards


On Tue, Feb 3, 2009 at 4:07 PM, Lars Marowsky-Bree <lmb@suse.de> wrote:

> On 2009-02-02T23:24:41, Priyanka Ranjan <priyanka3rdfeb@gmail.com> wrote:
>
> > ok, i am a bit confused here. can you tell me in which scenario a node reset
> > will happen in the case of suicide STONITH?
>
> The suicide plugin only works in the case where the semi-healthy node needs to be
> reset - i.e., when it is still reachable on the network, but needs to be
> recovered because of a failed stop, for instance.
>
> It is NOT usable for fencing needs.
>
>
> Regards,
> Lars
Re: stonith [ In reply to ]
for information: i am using Pacemaker 1.1.12 on Debian wheezy

-----Original Message-----
Sent: Friday, 17 April 2015, 12:36:00
From: "Thomas Manninger" <DBGTMaster@gmx.at>
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] stonith


Hi list,
 
i have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over the ipmi interface.

My problem is that sometimes the wrong node gets stonithed.
As an example:
I have 4 servers: node1, node2, node3, node4.

I trigger a hardware reset on node1, but both node1 and node3 get stonithed.
 
In cluster.log, i found the following entries:
Apr 17 11:02:41 [20473] node2   stonithd:    debug: stonith_action_create:       Initiating action reboot for agent fence_legacy (target=node1)
Apr 17 11:02:41 [20473] node2   stonithd:    debug: make_args:   Performing reboot action for node 'node1' as 'port=node1'
Apr 17 11:02:41 [20473] node2   stonithd:    debug: internal_stonith_action_execute:     forking
Apr 17 11:02:41 [20473] node2   stonithd:    debug: internal_stonith_action_execute:     sending args
Apr 17 11:02:41 [20473] node2   stonithd:    debug: stonith_device_execute:      Operation reboot for node node1 on p_stonith_node3 now running with pid=113092, timeout=60s
 
node1 is being reset with the stonith primitive of node3?? Why?
 
my stonith config:
primitive p_stonith_node1 stonith:external/ipmi \
        params hostname=node1 ipaddr=10.100.0.2 passwd_method=file passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus priv=OPERATOR \
        op monitor interval=3s timeout=20s \
        meta target-role=Started failure-timeout=30s
primitive p_stonith_node2 stonith:external/ipmi \
        op monitor interval=3s timeout=20s \
        params hostname=node2 ipaddr=10.100.0.4 passwd_method=file passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus priv=OPERATOR \
        meta target-role=Started failure-timeout=30s
primitive p_stonith_node3 stonith:external/ipmi \
        op monitor interval=3s timeout=20s \
        params hostname=node3 ipaddr=10.100.0.6 passwd_method=file passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus priv=OPERATOR \
        meta target-role=Started failure-timeout=30s
primitive p_stonith_node4 stonith:external/ipmi \
        op monitor interval=3s timeout=20s \
        params hostname=node4 ipaddr=10.100.0.8 passwd_method=file passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus priv=OPERATOR \
        meta target-role=Started failure-timeout=30s
 
Can somebody help me?
Thanks!
 
Regards,
Thomas

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: stonith [ In reply to ]
On 2015-04-17 12:36, Thomas Manninger wrote:
> Hi list,
>
> i have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over
> the ipmi interface.
>
> My problem is that sometimes the wrong node gets stonithed.
> As an example:
> I have 4 servers: node1, node2, node3, node4.
>
> I trigger a hardware reset on node1, but both node1 and node3 get stonithed.

You have to tell pacemaker explicitly which stonith resource can fence which
node if the stonith agent you are using does not support the "list" action.

Do this by adding "pcmk_host_check=static-list" and "pcmk_host_list" to
every stonith resource, like:

primitive p_stonith_node3 stonith:external/ipmi \
        op monitor interval=3s timeout=20s \
        params hostname=node3 ipaddr=10.100.0.6 passwd_method=file \
               passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus \
               priv=OPERATOR \
               pcmk_host_check="static-list" pcmk_host_list="node3"

... see "man stonithd".
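
Once that is in place you can double-check which devices stonithd considers
capable of fencing a particular node (option name from memory, see
"stonith_admin --help"):

# list the devices that can fence node1 - should now only show p_stonith_node1
stonith_admin -l node1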

Best regards,
Andreas

>
> In cluster.log, i found the following entries:
> Apr 17 11:02:41 [20473] node2 stonithd: debug: stonith_action_create: Initiating action reboot for agent fence_legacy (target=node1)
> Apr 17 11:02:41 [20473] node2 stonithd: debug: make_args: Performing reboot action for node 'node1' as 'port=node1'
> Apr 17 11:02:41 [20473] node2 stonithd: debug: internal_stonith_action_execute: forking
> Apr 17 11:02:41 [20473] node2 stonithd: debug: internal_stonith_action_execute: sending args
> Apr 17 11:02:41 [20473] node2 stonithd: debug: stonith_device_execute: Operation reboot for node node1 on p_stonith_node3 now running with pid=113092, timeout=60s
>
> node1 is being reset with the stonith primitive of node3?? Why?
>
> my stonith config:
> primitive p_stonith_node1 stonith:external/ipmi \
>         params hostname=node1 ipaddr=10.100.0.2 passwd_method=file \
>                passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus \
>                priv=OPERATOR \
>         op monitor interval=3s timeout=20s \
>         meta target-role=Started failure-timeout=30s
> primitive p_stonith_node2 stonith:external/ipmi \
>         op monitor interval=3s timeout=20s \
>         params hostname=node2 ipaddr=10.100.0.4 passwd_method=file \
>                passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus \
>                priv=OPERATOR \
>         meta target-role=Started failure-timeout=30s
> primitive p_stonith_node3 stonith:external/ipmi \
>         op monitor interval=3s timeout=20s \
>         params hostname=node3 ipaddr=10.100.0.6 passwd_method=file \
>                passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus \
>                priv=OPERATOR \
>         meta target-role=Started failure-timeout=30s
> primitive p_stonith_node4 stonith:external/ipmi \
>         op monitor interval=3s timeout=20s \
>         params hostname=node4 ipaddr=10.100.0.8 passwd_method=file \
>                passwd="/etc/stonith_ipmi_passwd" userid=stonith interface=lanplus \
>                priv=OPERATOR \
>         meta target-role=Started failure-timeout=30s
>
> Can somebody help me?
> Thanks!
>
> Regards,
> Thomas
Re: stonith [ In reply to ]
On Sun, 19 Apr 2015 14:23:27 +0200, Andreas Kurz <andreas.kurz@gmail.com> wrote:

> On 2015-04-17 12:36, Thomas Manninger wrote:
> > Hi list,
> >
> > i have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over
> > the ipmi interface.
> >
> > My problem is that sometimes the wrong node gets stonithed.
> > As an example:
> > I have 4 servers: node1, node2, node3, node4.
> >
> > I trigger a hardware reset on node1, but both node1 and node3 get stonithed.
>
> You have to tell pacemaker explicitly which stonith resource can fence which
> node if the stonith agent you are using does not support the "list" action.
>

pacemaker is expected to get this information dynamically from the stonith
agent.

> Do this by adding "pcmk_host_check=static-list" and "pcmk_host_list" to
> every stonith resource, like:
>

The default for pcmk_host_check is "dynamic"; why does it not work in this
case? I use external/ipmi myself and I do not remember ever fiddling
with a static list.

Re: stonith [ In reply to ]
> On 19 Apr 2015, at 11:37 pm, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
>
> On Sun, 19 Apr 2015 14:23:27 +0200, Andreas Kurz <andreas.kurz@gmail.com> wrote:
>
>> On 2015-04-17 12:36, Thomas Manninger wrote:
>>> Hi list,
>>>
>>> i have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over
>>> the ipmi interface.
>>>
>>> My problem is that sometimes the wrong node gets stonithed.
>>> As an example:
>>> I have 4 servers: node1, node2, node3, node4.
>>>
>>> I trigger a hardware reset on node1, but both node1 and node3 get stonithed.
>>
>> You have to tell pacemaker explicitly which stonith resource can fence which
>> node if the stonith agent you are using does not support the "list" action.
>>
>
> pacemaker is expected to get this information dynamically from the stonith
> agent.

Only from those agents that support it.

>
>> Do this by adding "pcmk_host_check=static-list" and "pcmk_host_list" to
>> every stonith resource, like:
>>
>
> The default for pcmk_host_check is "dynamic"; why does it not work in this
> case?

Because IPMI usually has no notion of host names?



_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org