Mailing List Archive

breaking resource dependencies by replacing resource group with co-location constraints
Hi,

I have a cluster running on SLES 11 SP3 with a resource group defined. Every entry in that resource group is a hard dependency for the following resource in the group, so if even a single resource fails on both nodes, res_MyApplication won't be started. I want to change that behavior: the cluster should start the resources on just one host, and the local resources should start on that same host regardless of the status of the CIFS shares. If the CIFS shares are available, they have to be mounted on the node running the application. I would try to accomplish this by deleting the resource group grp_application and creating co-location constraints for the local filesystem, service IP and application. Additionally, I would create a co-location constraint for each CIFS share so it gets mounted on the node running the application, roughly as sketched below. Any hints/thoughts on that? Is that the "right way" to achieve what I want?
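
Something like this is what I have in mind (an untested sketch; the constraint ids are just placeholders I made up):

colocation col_ip-with-fs inf: res_Service-IP res_mount-application
colocation col_app-with-ip inf: res_MyApplication res_Service-IP
order ord_fs-then-ip inf: res_mount-application res_Service-IP
order ord_ip-then-app inf: res_Service-IP res_MyApplication
colocation col_cifs1-with-app inf: res_mount-CIFSshareData res_MyApplication
colocation col_cifs2-with-app inf: res_mount-CIFSshareData2 res_MyApplication
colocation col_cifs3-with-app inf: res_mount-CIFSshareData3 res_MyApplication

The intention is that each CIFS mount depends on where the application runs, but the application does not depend on the CIFS mounts, so a share failing on both nodes should no longer keep res_MyApplication from starting.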

Here's the currently running config:

node mynodea \
attributes standby="off"
node mynodeb \
attributes standby="off"
primitive res_Service-IP ocf:heartbeat:IPaddr2 \
params ip="192.168.10.120" cidr_netmask="24" \
op monitor interval="10s" timeout="20s" depth="0" \
meta target-role="started"
primitive res_MyApplication lsb:myapp \
operations $id="res_MyApplication-operations" \
op start interval="0" timeout="180" \
op stop interval="0" timeout="180" \
op status interval="0" timeout="600" \
op monitor interval="15" timeout="15" start-delay="120" \
op meta-data interval="0" timeout="600" \
meta target-role="Started"
primitive res_mount-CIFSshareData ocf:heartbeat:Filesystem \
params device="//CIFS/Share1" directory="/Datafstype="cifs" options="uid=30666,gid=30666" \
op monitor interval="20" timeout="40" depth="0" \
meta target-role="started"
primitive res_mount-CIFSshareData2 ocf:heartbeat:Filesystem \
params device="//CIFS/Share2" directory="/Data2" fstype="cifs" options="uid=30666,gid=30666" \
op monitor interval="20" timeout="40" depth="0" \
meta target-role="started"
primitive res_mount-CIFSshareData3 ocf:heartbeat:Filesystem \
params device="//CIFS/Share3" directory="/Data3" fstype="cifs" options="uid=30666,gid=30666" \
op monitor interval="20" timeout="40" depth="0" \
meta target-role="started"
primitive res_mount-application ocf:heartbeat:Filesystem \
params device="/dev/disk/by-id/scsi-36000d56e0000324561d9dcae19034e90-part1" directory="/MyApp" fstype="ext3" \
op monitor interval="20" timeout="40" depth="0" \
meta target-role="started"
primitive stonith-sbd stonith:external/sbd \
params sbd_device="/dev/disk/by-id/scsi-36000d56e0000a31a36d8dcaebaf5a439" \
meta target-role="Started"
group grp_application res_mount-application res_Service-IP res_mount-CIFSshareData res_mount-CIFSshareData2 res_mount-CIFSshareData3 res_MyApplication
location prefer-application grp_application 50: mynodea
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
no-quorum-policy="ignore" \
placement-strategy="balanced" \
dc-version="1.1.9-2db99f1" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
last-lrm-refresh="1412758622" \
stonith-action="poweroff" \
stonith-timeout="216s" \
maintenance-mode="true"
rsc_defaults $id="rsc-options" \
resource-stickiness="100" \
migration-threshold="3"
op_defaults $id="op-options" \
timeout="600" \
record-pending="true"

Thanks and kind regards

Sven

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: breaking resource dependencies by replacing resource group with co-location constraints
group grp_application res_mount-application res_Service-IP res_mount-CIFSshareData res_mount-CIFSshareData2 res_mount-CIFSshareData3 res_MyApplication

is just a shortcut for

colocation res_Service-IP with res_mount-application
colocation res_mount-CIFSshareData with res_Service-IP
...

and

order res_mount-application then res_Service-IP
order res_Service-IP then res_mount-CIFSshareData
...

Just re-create the constraints in the order you want and drop any dependency you don't need anymore.
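
One thing to watch: the order of the resources in a colocation constraint matters. "colocation c inf: A B" places A where B is running, so make each CIFS mount the dependent (first) resource, e.g. (untested, id made up):

colocation col_cifs1-with-app inf: res_mount-CIFSshareData res_MyApplication

That way a share that can't be mounted anywhere shouldn't prevent res_MyApplication from starting. Also, a group implies mandatory ordering, so if you simply don't re-create order constraints for the CIFS mounts, the application won't wait for them.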



> On 21 Jan 2015, at 3:43 am, Sven Moeller <smoeller@nichthelfer.de> wrote:
> [snip]

