Mailing List Archive

move service based on both connection status and hostname
Hi Guys,
I am having some trouble with a location constraint.

In particular, what I want to achieve is to run my service on a host; if
the IP interconnection fails I want to migrate it to another host, but on
IP connectivity restoration the resource should move back to the primary
node.

So, I have this configuration:

primitive vfy_ether ocf:pacemaker:l2check \
    params nic_list="eth1 eth2" debug="false" dampen="1s" \
    op monitor interval="2s"
clone ck_ether vfy_ether
location cli-ethercheck MCluster \
    rule $id="cli-prefer-rule-ethercheck" -inf: not_defined l2ckd or l2ckd lt 2
location cli-prefer-masterIP MCluster \
    rule $id="cli-prefer-rule-masterIP" 50: #uname eq GHA-MO-1


When connectivity fails on the primary node, the resource is correctly
moved to the secondary one.
But on IP connectivity restoration, the resource stays on the secondary
node (and does not move back to the primary one).

How can I solve that?
Any hint? :-)

thanks,
stefano
Re: move service based on both connection status and hostname
On 11/09/2015 10:00 AM, Stefano Sasso wrote:
> In particular, what I want to achieve is to run my service on a host; if
> the IP interconnection fails I want to migrate it to another host, but on
> IP connectivity restoration the resource should move back to the primary
> node.
> [...]
> When connectivity fails on the primary node, the resource is correctly
> moved to the secondary one.
> But on IP connectivity restoration, the resource stays on the secondary
> node (and does not move back to the primary one).
>
> How can I solve that?

Most likely, you have a default resource-stickiness set. That tells
Pacemaker to keep services where they are if possible. You can either
delete the stickiness setting or make sure it has a lower score than your
location preference (50 for GHA-MO-1).
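
For example, with the crm shell, something along these lines (a minimal
sketch, assuming the stickiness was set cluster-wide via rsc_defaults):

    # show the current configuration and look for resource-stickiness
    crm configure show | grep stickiness

    # either drop the default stickiness entirely...
    crm configure rsc_defaults resource-stickiness=0

    # ...or keep some, but below the 50-point preference for GHA-MO-1
    crm configure rsc_defaults resource-stickiness=25

With the stickiness below 50, the cli-prefer-masterIP rule wins again once
connectivity (and the l2ckd attribute) recovers, and the resource moves back
to the primary node.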

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org