FYI: Added two new RAs to ocf:xola: bfa-npiv, multipath
Hi!

Just in case anybody is interested in reviewing or testing (please refer to "resource-agents-xola-0.2-0.4"): I wrote two more OCF RAs:

bfa-npiv that manages NPort ID Virtualization for the Brocade bfa driver

multipath that can add, check and remove multipath maps

Both are needed if you want to dynamically add disks to a node (e.g. to host a virtual machine). The drawback is that "live migration" will not work, as the disk is only visible on one node at a time.

One would typically group one or more NPIV resources and one or more multipath resources together with the actual VM startup...
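To illustrate the grouping idea, here is a hypothetical crm configuration sketch (all resource names, the VirtualDomain primitive, and the parameter values are invented for illustration and untested):

```
primitive vm1-npiv ocf:xola:bfa-npiv \
    params WWNN="6E6D-FB8F-D879-7938" WWPN="6E6D-FB8F-D879-DADE" \
        fabric_list="1000-2000-3000-4000:.*" \
    op monitor interval="5m" timeout="10s"
primitive vm1-mpath ocf:xola:multipath \
    params mapname="3600508b4001085e30000f000034d0000" \
    op monitor interval="5m" timeout="20s"
primitive vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml"
group g-vm1 vm1-npiv vm1-mpath vm1
```

The ordering within the group ensures the virtual port and the multipath map exist before the VM starts, and are torn down after it stops.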

As a small demonstration, I used these parameters:

RA=/usr/lib/ocf/resource.d/xola/bfa-npiv
/usr/sbin/ocf-tester -n bfa-npiv \
-o WWNN="6E6D-FB8F-D879-7938" \
-o WWPN="6E6D-FB8F-D879-DADE" \
-o fabric_list="1000-0800-88e3-5f7d:h01,1000-0005-335b-ec63:h05,1000-2000-3000-4000:.*" \
$RA

RA=/usr/lib/ocf/resource.d/xola/multipath
/usr/sbin/ocf-tester -n multipath \
-o mapname="3600508b4001085e30000f000034d0000" \
$RA

The docs say:
# crm ra info bfa-npiv
OCF Resource Agent implementing NPIV for "bfa" driver (ocf:xola:bfa-npiv)

OCF Resource Agent implementing NPIV for Brocade "bfa"

This RA manages "NPort Id Virtualization" (NPIV) for the Brocade Fibre Channel
HBA (BFA). This agent relies on the /sys structure that driver "bfa" v2.1.2.1
provides.

Virtual FibreChannel ports work very much like normal ports (if the
infrastructure supports them): A new instance of a FibreChannel host is
created together with a new instance of a SCSI host. These hosts disappear
again if the virtual port is destroyed. Otherwise those hosts work very much
like normal hosts.

As long as a virtual port exists on only one machine at a time, storage devices
assigned solely to that port are protected from multiple concurrent access.

Obviously the "start" method creates the specified virtual port on the first
HBA that is connected to the specified fabric, while the "stop" method deletes
the specified virtual port correspondingly. Method "monitor" scans all HBAs
connected to the specified fabric to locate the specified virtual port.
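For a sense of the mechanics: the Linux FC transport class exposes a `vport_create` attribute under /sys/class/fc_host. The sketch below only composes and prints the write instead of touching sysfs, since the exact value format ("WWPN:WWNN") and the host number are my assumptions, not taken from the RA:

```shell
# All values hypothetical; host5 stands for the HBA that the RA
# matched against the configured fabric.
WWPN=6E6DFB8FD879DADE
WWNN=6E6DFB8FD8797938
HOST=host5
cmd="echo ${WWPN}:${WWNN} > /sys/class/fc_host/${HOST}/vport_create"
printf '%s\n' "$cmd"   # prints the write that would create the vport
```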

For the higher levels of the storage infrastructure, removing a virtual port is
very much like pulling out the cable to the device. They probably want to be
informed beforehand, so that no endless stream of errors occurs.

WWNs are a sequence of 16 hexadecimal characters. For improved readability, a
WWN may include a hyphen after every four characters (e.g. "1234-5678-9abc-def0").
Fabrics use the same syntax as WWNs do.
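A small sketch of handling that syntax (my own helper, not part of the RA): strip the optional hyphens and validate the result.

```shell
# Sketch only: normalize a WWN to 16 lowercase hex characters and
# check that it really is one.
normalize_wwn() {
    printf '%s\n' "$1" | tr -d '-' | tr 'A-F' 'a-f'
}
is_wwn() {
    normalize_wwn "$1" | grep -Eq '^[0-9a-f]{16}$'
}

normalize_wwn "1234-5678-9abc-def0"          # prints 123456789abcdef0
is_wwn "1234-5678-9abc-def0" && echo valid   # prints valid
```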

Parameters (* denotes required, [] the default):

WWNN* (string): World-Wide Node Number
World-Wide Node Number specifies the unique node number to be used
for the virtual device.

WWPN* (string): World-Wide Port Number
World-Wide Port Number specifies the unique port
number to be used for the virtual device.

fabric_list* (string): List of "fabric_token"s (used to select HBA)
The Fabric List indirectly selects the Fibre Channel HBA, where the new
virtual device is to be created:
A "fabric_list" consists of one or more "fabric_token"s which are separated by
a comma. Each "fabric_token" consists of two elements that are separated by a
colon: The first element is the fabric number, while the second element is a
regular expression as understood by egrep. The regular expression is used to
select the fabric number corresponding to the hostname where the RA is running
(the regular expression is anchored automatically).

All "fabric_token"s are examined until a regular expression matches the
hostname where the RA is running. If one matches, the corresponding fabric
number is used to search all HBAs until the specified fabric number is found.

As a fabric number is required, a regular expression matching any hostname may
be used. The simplest example of a "fabric_list" may look like this:
1111-2222-3333-4444:.*

If different nodes see different fabric numbers, the "fabric_list" may look
like this:
1111-2222-3333-4444:host01|host02,1111-2222-3333-4567:host0[345],1111-2222-3333-5678:.*
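The selection logic described above can be sketched as follows (this is my own illustration, not the RA's actual code):

```shell
# Walk the comma-separated "fabric_token"s and print the fabric number
# whose hostname regex matches, anchoring the regex automatically as
# the documentation describes. Returns non-zero if nothing matches.
select_fabric() {
    fabric_list=$1 host=$2
    oldIFS=$IFS; IFS=,
    for token in $fabric_list; do
        IFS=$oldIFS
        fabric=${token%%:*}    # part before the colon: fabric number
        regex=${token#*:}      # part after the colon: egrep regex
        if printf '%s\n' "$host" | grep -Eq "^(${regex})\$"; then
            printf '%s\n' "$fabric"
            return 0
        fi
    done
    IFS=$oldIFS
    return 1
}

select_fabric '1111-2222-3333-4444:host01|host02,1111-2222-3333-5678:.*' host05
# prints 1111-2222-3333-5678 (host05 falls through to the catch-all token)
```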

Operations' defaults (advisory minimum):

start timeout=10s
stop timeout=10s
monitor interval=5m timeout=10s start-delay=5s


# crm ra info multipath
OCF Resource Agent managing multipath maps (ocf:xola:multipath)

OCF Resource Agent managing multipath maps

This RA manages multipath maps as typically needed for SAN or iSCSI.

Parameters (* denotes required, [] the default):

mapname* (string): Name of multipath map
Name of the multipath map as found in /etc/multipath.conf.

Operations' defaults (advisory minimum):

start timeout=20s
stop timeout=20s
monitor interval=5m timeout=20s start-delay=5s
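One can picture the multipath RA's actions as thin wrappers over multipath(8) (the -ll and -f flags are real; the internal logic here is my guess, not the RA's code). A stub stands in for the binary so the sketch runs anywhere:

```shell
# Stub replacing the real multipath(8) binary for this sketch only.
multipath() {
    case "$1 $2" in
        "-ll 3600508b4001085e30000f000034d0000") echo "active map" ;;
        "-f "*) return 0 ;;   # -f flushes (removes) a map
        *) return 1 ;;
    esac
}

# Hypothetical monitor/stop actions in terms of multipath(8):
monitor_map() { [ -n "$(multipath -ll "$1" 2>/dev/null)" ]; }
stop_map()    { multipath -f "$1"; }

monitor_map 3600508b4001085e30000f000034d0000 && echo running   # prints running
```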

Regards,
Ulrich Windl

_______________________________________________
ha-wg-technical mailing list
ha-wg-technical@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/ha-wg-technical
Re: FYI: Added two new RAs to ocf:xola: bfa-npiv, multipath
Hi,

On 8/9/2011 4:58 PM, Ulrich Windl wrote:
> Hi!
>
> Just in case anybody is interested to review or test (please refer to "resource-agents-xola-0.2-0.4"): I wrote two more OCF RAs:
>
> bfa-npiv that manages NPort ID Virtualization for the Brocade bfa driver
>
> multipath that can add, check and remove multipath maps
>

I am not entirely sure I understand the use case for the multipath RA.
multipathd does that already, and it can run on multiple nodes at the
same time.

multipathd also monitors changes to devices and binds them to the correct
entry in /etc/multipath.conf etc. etc. etc.

On top of that, it also executes kpartx to scan for partitions on the
devices, all driven via udev/kernel sockets. So as soon as the kernel
discovers the device, it is managed by multipathd.

Fabio