Friday, 19 May 2017

Online migration of a file system to a smaller physical volume


Use Logical Volume Manager (LVM) to reclaim disk space without an outage
The problem: Downsizing a physical volume
The IBM AIX LVM has several features that allow you to reclaim unused disk space without downtime. You can reduce a file system using chfs, and you can remove unused physical volumes (PVs) from volume groups (VGs) so that you can allocate the storage somewhere else.
However, if you reduce the size of an AIX PV to reclaim unused disk space, you will damage the data on the PV. If you have a large SAN LUN with a significant number of unused physical partitions (PPs), you could back up the data, reduce the LUN size, and restore the data to the new, smaller PV, but that could involve unacceptable downtime. If a large LUN needs to have some of its space reclaimed after a data cleanup, the procedure should be as seamless as possible.
The solution: Migrate to a new, smaller physical volume
It may not be possible to downsize a PV that is in use, but you can create a new, smaller SAN LUN, add it to the existing VG, and then migrate data from the larger PV to the smaller one. When that is done, the original oversized PV can be removed from the VG. The hdisk can then be taken out of the Object Data Manager (ODM) using rmdev. Then, you can recycle the SAN storage for some other use.
This procedure requires the new PV to have a suitable size and characteristics for adding to the existing VG. It must be large enough to hold all the data that is on the original PV (the used PPs). The procedure also assumes that there is no logical volume (LV) striping, which would restrict your ability to mirror the LV using mklvcopy.
For this exercise, there is a VG called datavg with a 50 GB PV. The lspv command shows the total size of the PV as well as the free and used PPs (see Listing 1).

Listing 1. Displaying physical volume characteristics

 
# lspv hdisk1
PHYSICAL VOLUME:    hdisk1                   VOLUME GROUP:     datavg
PV IDENTIFIER:      00cb07a45a12b4ca VG IDENTIFIER     00cb07a400004c00000001345a26db3e
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            512 megabyte(s)          LOGICAL VOLUMES:  1
TOTAL PPs:          99 (50688 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           99 (50688 megabytes)     MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  20..20..19..20..20
MIRROR POOL:        None
There is a single enhanced journaled file system (JFS2) called /scratch that has more than 35 GB free of its 49.50 GB allocation. This file system was created with an INLINE JFS2 log:
# df -gI /scratch
Filesystem    GB blocks      Used      Free %Used Mounted on
/dev/scratchlv     49.50     14.20     35.30   29% /scratch
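If you want to confirm that the log really is inline rather than on a separate JFS2 log device, lsfs with the -q flag queries the file system and includes the inline log details in its output (look for inline log: yes and the inline log size). This check is optional:
# lsfs -q /scratch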

The fine print

This solution applies to LVs that are not striped at the LVM level. LVM striping introduces its own restrictions that are beyond the scope of this article. The file system in this example uses an INLINE JFS2 log.
The solution here also assumes that there is sufficient storage to create a new SAN LUN of similar performance and redundancy to the original so that the end result will not affect system response times. The new LUN should have sufficient space to fit the used PPs from the original, larger LUN.
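If you are not sure whether an LV is striped, one quick way to check is lslv: a striped LV reports stripe attributes such as STRIPE WIDTH and STRIPE SIZE, so a simple grep (a rough sketch, not part of the procedure itself) should come back empty for a non-striped LV such as this one:
# lslv scratchlv | grep -i stripe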
Reduce the file system
Because the file system is using less than one third of its allocation, its total disk allocation can be reduced. You can reduce the file system size using chfs. The following command reduces it by 30 GB:
# chfs -a size=-30G /scratch
Filesystem size changed to 40894464
Inlinelog size changed to 78 MB.
The total disk allocation for the file system has been reduced to 19.50 GB (the figure reported by chfs is in 512-byte blocks):
# df -gI /scratch
Filesystem    GB blocks      Used      Free %Used Mounted on
/dev/scratchlv     19.50     14.08      5.42   73% /scratch
This process has freed up some PPs on the PV, as the lspv command in Listing 2 shows.

Listing 2. lspv showing free physical partitions

 
# lspv hdisk1
PHYSICAL VOLUME:    hdisk1                   VOLUME GROUP:     datavg
PV IDENTIFIER:      00cb07a45a12b4ca VG IDENTIFIER     00cb07a400004c00000001345a26db3e
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            512 megabyte(s)          LOGICAL VOLUMES:  1
TOTAL PPs:          99 (50688 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           60 (30720 megabytes)     HOT SPARE:        no
USED PPs:           39 (19968 megabytes)     MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..01..19..20..20
USED DISTRIBUTION:  20..19..00..00..00
MIRROR POOL:        None
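Listing 2 also tells you how big the replacement LUN needs to be. The new PV must hold at least the used PPs, and 39 PPs of 512 MB each comes to 19968 MB, so a 20 GB (20480 MB) LUN leaves a little headroom. The arithmetic is easy enough to do at the shell prompt:
# echo $(( 39 * 512 ))
19968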
Add a smaller physical volume to the volume group
The next step is to add a new, smaller PV to the existing VG. This PV should have at least the same redundancy and input/output (I/O) performance as the original, larger PV. For example, it should come from an equivalent Redundant Array of Independent Disks (RAID) configuration. Any other tuning characteristics, such as queue depth, should be set so that system performance remains comparable to the original, larger PV.
Create a new LUN, and allocate it to the AIX logical partition (LPAR). In this example, the new LUN is 20 GB:
# cfgmgr
The output of the lspv command shows that the new disk is called hdisk2 (see Listing 3) and it does not yet belong to a VG.

Listing 3. Listing the new disk

 
# lspv
hdisk0          00c5a47e3f356f3c                    rootvg          active
hdisk1          00cb07a45a12b4ca                    datavg          active
hdisk2          none                                None
You can view the size of the disk even before it is added to a VG using the getconf command. This reports the size in MB:
# getconf DISK_SIZE /dev/hdisk2
20480
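Picking up the earlier point about tuning characteristics: while the new disk is still outside the VG is a good time to compare its attributes with the original disk and adjust them. As a sketch only (the attribute names depend on the disk driver and multipathing software, and the value to set is whatever matches your environment):
# lsattr -El hdisk1 -a queue_depth
# lsattr -El hdisk2 -a queue_depth
# chdev -l hdisk2 -a queue_depth=<value to match hdisk1>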
At 20480 MB, the new LUN comfortably exceeds the 19968 MB of used PPs on hdisk1, so it has room for all the data. Add the disk to the existing VG using the extendvg command:
# extendvg datavg hdisk2
0516-1254 extendvg: Changing the PVID in the ODM.
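To confirm that hdisk2 has joined the VG, and to see how many PPs it contributes, list the PVs in the VG with lsvg -p, which reports the state, total PPs, and free PPs of each disk:
# lsvg -p datavg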
Mirror or migrate the logical partitions to a new physical volume
You can mirror the LV to the new PV and then remove the copy from the original PV once all the PPs have been synchronized between the two PVs. Specify hdisk2 as the target so that the new copy lands on the new disk, and use the -k flag to synchronize the new copy straight away:
# mklvcopy -k scratchlv 2 hdisk2

Migrate instead of mirror

Instead of mirroring, you can migrate the LV using migratepv. Refer to Resources for a link to more information.
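As a rough sketch (not part of the main walkthrough), migrating only the scratchlv partitions from the old disk to the new one would look like this, and the file system stays mounted the whole time:
# migratepv -l scratchlv hdisk1 hdisk2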
There are other options for mklvcopy. For example, you can postpone the synchronization until a quieter time, or specify the disk allocation policy. The official documentation for mklvcopy has the necessary details (refer to Resources).
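For example, creating the copy without the -k flag leaves the new partitions stale so that you can synchronize them later with syncvg, and you can watch progress through the stale PP count. A sketch of that deferred approach:
# mklvcopy scratchlv 2 hdisk2
# syncvg -l scratchlv
# lslv scratchlv | grep STALE
The lsvg -l datavg output also shows the LV state, which changes from stale to syncd once the copies match.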
Remove the copy from the original physical volume
When the synchronization is complete, you can remove the copy from the original PV using rmlvcopy. Be sure to specify the PV that has the copy you want to remove.
# rmlvcopy scratchlv 1 hdisk1
You can use the lspv command to confirm that there are no more used PPs on the original PV. If there are still some PPs in use, you can list the LVs on the PV using lspv -l PVNAME. When you are sure that all the PPs have been moved to other PVs, the original PV can be removed from the VG using reducevg:
# reducevg datavg hdisk1
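If you want to double-check that the disk has really left the VG before going any further, lspv should now list hdisk1 without a volume group:
# lspv | grep hdisk1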
You should be able to remove the original larger PV from the ODM using rmdev:
# rmdev -d -l hdisk1
Finally, you can remove the LUN or allocate it for some other use.
Cutting out outages
As you can see, the LVM features allow you to move data around, even to smaller disks, without unnecessary user impact. By taking advantage of the LVM mirroring and migration capabilities, you can keep your system up and still reclaim much-needed storage space that has been over-allocated.
