How to mirror array LUNs on a host with two paths to the array

If a host has two paths to an array but no multipathing software, this procedure explains how to add some resilience should a path fail.

Multipathing software may be unavailable because the version of AIX does not support it. For instance, multipathing software is not available for AIX 4.3.

Whilst this approach can help if a path to the array fails, multipathing is a much better solution and should be implemented where available. This approach also uses twice as much disk space as a multipathing solution.

Create two LUNs of equal size on the array (e.g. 30GB); because they will be mirrored, the size of one LUN should equal the storage capacity required on the host.

Before mapping the new LUNs to the host, check the current number and IDs of physical volumes already configured on the host by typing the following.

lspv

Map only one LUN to the host, then type the following to configure the new devices on the host. (Mapping one LUN at a time makes it a little easier to see which hdisks to add to the volume group.)

cfgmgr

Type lspv again and you should see two additional hdisk devices listed. (There are two devices because the same LUN is visible down two paths.) The output of the command should show the two additional disks similar to the following.

hdisk11	none	None
hdisk12	none	None

To ensure that both LUNs are presented through different fibre channel controllers, type the command below.

lsdev -Cc adapter | grep fcs

fcs3    Available 20-58    FC Adapter
fcs2    Available 10-58    FC Adapter
fcs0    Available 30-70    FC Adapter
fcs1    Available 40-60    FC Adapter

The third column in the above output shows the location ID of the fibre channel controller.

Check the fibre channel controller location code associated with each new hdisk as follows.

lscfg -vl hdisk11

DEVICE            LOCATION          DESCRIPTION

hdisk11           10-58-01          Other FC SCSI Disk Drive

        Manufacturer................IBM
        Machine Type and Model......1815      FAStT
        ROS Level and ID............30393135
        Serial Number...............
        Device Specific.(Z0)........0000053245004033
        Device Specific.(Z1)........

Where hdisk11 should be replaced with the hdisk you intend to add to the volume group.

Ensure that both hdisks that are added to the volume group have different location codes.
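As a sketch of this check, the controller portion of a location code such as 10-58-01 can be compared between the two disks. The location strings below are hypothetical examples of what lscfg might report; on the host you would substitute the real values.

```shell
# Hypothetical location codes as reported by "lscfg -vl hdiskN";
# assumed format: controller-bus-position (e.g. 10-58-01).
LOC1="10-58-01"   # e.g. from hdisk11
LOC2="20-60-01"   # e.g. from hdisk13
CTRL1=${LOC1%-*}  # strip the last field, leaving the controller part "10-58"
CTRL2=${LOC2%-*}
if [ "$CTRL1" != "$CTRL2" ]; then
    echo "disks are on different controllers"
else
    echo "WARNING: both disks use the same controller"
fi
```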

Type the following to see how many free PPs are currently in the volume group.

lsvg appvg

Where appvg is the name of your volume group.

Type the following to add one hdisk device to the volume group you would like to increase in size.

extendvg appvg hdisk11

Where hdisk11 should be replaced with the hdisk you would like to add.

Again type lsvg appvg to see how many free PPs are now available in the volume group. The number of free PPs should have increased, and the free megabytes should have increased by the size of the LUN just added.
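As a quick sanity check, the expected increase in free PPs can be computed from the LUN size and the volume group's PP size (256MB in the lslv example later in this document; confirm the real value from the lsvg output):

```shell
# Expected number of new free PPs after adding one LUN to the VG.
LUN_MB=$((30 * 1024))   # a 30GB LUN, as in the example above
PP_MB=256               # PP SIZE, taken from the output of "lsvg appvg"
echo "expected new free PPs: $((LUN_MB / PP_MB))"
```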

On typing lspv again you should see an output similar to the following, where the columns display information in this order: hdisk name, PVID, volume group name.

hdisk11	0050f4ba28948322	appvg
hdisk12	none			None

Now map the second LUN to the host and type the following.

cfgmgr

Type lspv again and you should see another two hdisk devices listed. (There are two devices because the same LUN is visible down two paths.) The output of the command should show the two additional disks similar to the following.

hdisk13	none	None
hdisk14	none	None

Type the following to add one of these additional hdisk devices to the volume group you would like to increase in size, ensuring this hdisk is on a different fibre channel controller to the one already added (see above).

extendvg appvg hdisk13

Where hdisk13 should be replaced with the hdisk you would like to add.

Again type lsvg appvg to see how many free PPs are now available in the volume group. The number of free PPs should have increased, and the free megabytes should have increased again by the size of the LUN just added. The total of additional megabytes should now be twice the amount you require; this is again because of the need for mirroring.

On typing lspv again you should see an output similar to the following, where the columns display information in this order: hdisk name, PVID, volume group name.

hdisk13	0050f4ba28928465	appvg
hdisk14	none			None

Now let us suppose we have a logical volume in our volume group that we wish to increase in size by 10GB of our available 30GB.

Type the following to see the current configuration of the logical volume. (Where applv should be replaced by the name of your logical volume).

lslv applv

LOGICAL VOLUME:     applv             	   VOLUME GROUP:   appvg
LV IDENTIFIER:      0050f4baf319a0e2.1     PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs                    WRITE VERIFY:   off
MAX LPs:            1024                   PP SIZE:        256 megabyte(s)
COPIES:             2                      SCHED POLICY:   parallel
LPs:                638                    PPs:            1276
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        /apps              	   LABEL:          /apps
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes

We can see from the output above that "COPIES:" is equal to 2. This means that each logical partition maps to two physical partitions, one per copy, which is what provides the mirroring. The value for "EACH LP COPY ON A SEPARATE PV ?:" is set to yes. This ensures that the two physical partitions backing a single logical partition are located on separate physical volumes, in this case separate LUNs on the array. Together these settings ensure the logical partitions are mirrored across both paths and both LUNs.

If this is a new LV, these values need to be specified when the LV is created.
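For a new LV, a hedged sketch of the creation command is shown below, using the standard AIX mklv flags: -c sets the number of copies and -s y requests strict allocation, placing each copy on a separate physical volume. The LV name, VG name, and LP count are placeholders taken from this document's examples.

```shell
# Build the mklv command for a new mirrored LV (names/sizes are placeholders).
LV=applv    # logical volume name
VG=appvg    # volume group name
LPS=40      # number of logical partitions (40 x 256MB PPs = 10GB)
# -c 2 : two copies (mirrored)
# -s y : strict allocation, each copy on a separate physical volume
CMD="mklv -y $LV -t jfs -c 2 -s y $VG $LPS"
echo "$CMD"   # run this command on the AIX host
```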

So with the above settings set to the values shown, we can be confident that an increase in the size of the LV will be mirrored correctly. We now need to increase the size of the logical volume to use 10GB of our newly created available space in the volume group.

To do this, make a note of how many PPs are currently assigned to the LV by typing the following.

lslv applv

Now increase the size of the filesystem and underlying LV by 10GB by typing the following.

chfs -a size=+20971520 /apps

The chfs command above increased the /apps filesystem by 20971520 x 512-byte blocks, which equals 10GB. Later versions of AIX allow you to specify 10G instead of the number of 512-byte blocks.
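The block count can be derived as follows, expressing 10GB in 512-byte blocks:

```shell
# 10GB in 512-byte blocks: 10 x 1024 (MB) x 1024 (KB) x 2 (512-byte blocks per KB)
GB=10
BLOCKS=$((GB * 1024 * 1024 * 2))
echo "$BLOCKS"   # pass this to: chfs -a size=+$BLOCKS /apps
```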

Have a look at the number of PPs assigned to the LV again. If mirroring is working correctly, the PPs assigned should account for twice the capacity specified in the chfs command: at a 256MB PP size, a 10GB increase adds 40 LPs and therefore 80 PPs.
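Using the example figures from the lslv output above (638 LPs, 1276 PPs, 256MB PP size), a small sketch of the check after a 10GB increase; the "after" values are assumed, not taken from a real host:

```shell
# Example values as would be read from "lslv applv" before and after the chfs.
LPS_BEFORE=638; PPS_BEFORE=1276
LPS_AFTER=678;  PPS_AFTER=1356    # after adding 40 LPs (10GB / 256MB per PP)
echo "added $((LPS_AFTER - LPS_BEFORE)) LPs, $((PPS_AFTER - PPS_BEFORE)) PPs"
if [ "$PPS_AFTER" -eq $((2 * LPS_AFTER)) ]; then
    echo "each LP still has two copies"
fi
```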

That's it. Now if a path fails we will lose one half of the mirror but still have access to the other half.

When the path comes back online, the mirror will have to be resynchronized.
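A hedged sketch of the resynchronization step, using the standard AIX syncvg command (the VG and LV names are assumptions carried over from the examples above):

```shell
# Resynchronize stale partitions in the volume group once the path is back.
# Run on the AIX host:
#   syncvg -v appvg
# Then confirm the mirror is clean; "lslv applv" should report STALE PPs: 0.
STALE_PPS=0   # example value read from the lslv output after the resync
if [ "$STALE_PPS" -eq 0 ]; then
    echo "mirror resynchronized"
fi
```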