Assign path priorities to virtualized disks
Automation method for load balancing SAN traffic
Summary:
This article describes a method for assigning physical path priorities to
virtual SCSI paths on VIO servers, based on the even/odd numbers associated
with each disk and each path to disk. The script is useful in a virtualized
environment that uses VIO servers with MPIO on the client LPARs. It also
gives a system administrator the ability to manually load balance SAN
traffic from client LPARs between dual VIO servers and across all physical
adapters on the VIO servers.
Introduction
This article discusses standardized procedures for prioritizing the storage
communication on an AIX client LPAR utilizing dual VIO servers.
Prioritizing the communication paths allows an administrator to maximize
the utilization of all available SAN fabric bandwidth and to distribute
the SAN traffic across all available hardware paths.
The methodology described can be utilized on a standalone AIX system or on
AIX LPARs that obtain their storage from one or more VIO servers. In a
VIO environment, the communication path priorities should be set on all
VIO servers as well as the client AIX LPARs.
The methodology of maximizing the utilization of SAN fabric bandwidth is
dependent upon the implementation of a naming and numbering scheme that
will be fully described in this article. The automated procedure to
assign priorities to each SAN fabric path is based upon the
implementation of this naming and numbering scheme.
The first task is to perform an AIX installation; this can be done from
distribution media, from mksysb media, or from a NIM server (the preferred
method). It is suggested that a standard build be created and a mksysb
image be saved from that build. The NIM server can then use the
standard-build mksysb to perform all AIX installations. Once a system is
installed, a configuration script customizes the standard build according
to the individual system's needs and requirements.
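As an illustration, a NIM-driven installation from a standard-build mksysb could look like the following sketch. The resource and client names (golden_mksysb, golden_spot, mtxapora00) are illustrative assumptions, not values defined in this article:

# On the NIM master: install the client from the standard-build mksysb image.
# Resource and client names are examples only.
nim -o bos_inst \
    -a source=mksysb \
    -a mksysb=golden_mksysb \
    -a spot=golden_spot \
    -a accept_licenses=yes \
    -a no_client_boot=yes \
    mtxapora00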
A recommended configuration for a virtualized environment includes
redundant VIO servers, which provide redundant access to storage and
networking devices. This means that each VIO server is configured with
multiple physical adapters for access to storage and multiple physical
devices for access to networking. Typically, each VIO server is configured
with two or three Fibre Channel adapters providing access to storage.
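To confirm what physical Fibre Channel hardware a VIO server actually has, the standard device listing commands can be used; a small sketch (device names such as fcs0 are examples):

# From the VIOS padmin restricted shell: list the physical Fibre Channel adapters
lsdev | grep fcs

# From an AIX root shell (oem_setup_env), the equivalent information:
lsdev -Cc adapter | grep fcs
lscfg -vl fcs0        # physical location code and WWPN of one FC adapter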
Configuration of the virtual SCSI adapters requires knowledge of the disk
layouts as well as the networking configuration. The virtual SCSI adapters
require server-side and client-side adapters to be configured on the HMC.
The server-side portion of the SCSI adapter, configured on the VIO server,
requires the definition of a frame-wide unique slot number. For high
availability, the server-side portion of the SCSI adapter must be
configured on both VIO servers in a dual VIO configuration.
For each client LPAR that uses virtual disks or logical volumes, a
client-side virtual SCSI adapter must be configured on the HMC, one for
each VIO server. The client-side virtual SCSI adapter requires additional
information, and its settings must correspond with those of the server-side
SCSI adapter. Coordinating the slot numbers defined here makes debugging
and tracking of problems much easier and is highly desirable.
To define a slot numbering standard for VIO client/server environments, the
slot numbers should be divided on an even/odd basis. Even numbered slots
are used only on even numbered VIO servers; odd numbered slots are used
only on odd numbered VIO servers. This provides an easy mechanism to
determine which slot is served by which VIO server.
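Because the convention is purely arithmetic, it can be checked in a few lines of Korn shell; a minimal sketch, using the example host names shown below:

# Report which VIO server should serve a given virtual SCSI slot number.
SLOT=103
if (( SLOT % 2 == 0 ))
then
    print "Slot ${SLOT} belongs on the even numbered VIO server (e.g. mtxapvio00)"
else
    print "Slot ${SLOT} belongs on the odd numbered VIO server (e.g. mtxapvio01)"
fi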
To keep track of each slot and to provide a recognizable pattern to slot
numbering, a range of slot numbers has been arbitrarily selected for
assignment to virtual SCSI adapters. This standard specifies the range of
slot numbers to be those between 10 and 499.
Using the following example VIO server host names (in Table 1 below),
virtual SCSI slot numbers can be assigned based on the ultimate purpose of
the storage attached to the virtual SCSI adapter (in Table 2 below):
Table 1. Example VIO server host names
VIO server name | Description | Managed frame name |
---|---|---|
mtxapvio00 | First VIO Server node on the frame | Server-9119-590-SN12A345B |
mtxapvio01 | Second VIO Server node on the frame | Server-9119-590-SN12A345B |
Table 2. Virtual SCSI slot number assignments by purpose
Virtual SCSI adapter slot number | VIO server numbering (even/odd) | VIO server (Example) | Purpose |
---|---|---|---|
100 | even | mtxapvio00 | Operating system storage |
101 | odd | mtxapvio01 | Operating system storage |
102 | even | mtxapvio00 | Database/Application storage |
103 | odd | mtxapvio01 | Database/Application storage |
104 | even | mtxapvio00 | Database/Application storage |
105 | odd | mtxapvio01 | Database/Application storage |
106 | even | mtxapvio00 | Database/Application storage |
107 | odd | mtxapvio01 | Database/Application storage |
108 | even | mtxapvio00 | Miscellaneous |
109 | odd | mtxapvio01 | Miscellaneous |
110 | even | mtxapvio00 | Operating system storage |
111 | odd | mtxapvio01 | Operating system storage |
112 | even | mtxapvio00 | Database/Application storage |
113 | odd | mtxapvio01 | Database/Application storage |
114 | even | mtxapvio00 | Database/Application storage |
115 | odd | mtxapvio01 | Database/Application storage |
116 | even | mtxapvio00 | Database/Application storage |
117 | odd | mtxapvio01 | Database/Application storage |
118 | even | mtxapvio00 | Miscellaneous |
119 | odd | mtxapvio01 | Miscellaneous |
120 | even | mtxapvio00 | Operating system storage |
121 | odd | mtxapvio01 | Operating system storage |
122 | even | mtxapvio00 | Database/Application storage |
123 | odd | mtxapvio01 | Database/Application storage |
124 | even | mtxapvio00 | Database/Application storage |
125 | odd | mtxapvio01 | Database/Application storage |
126 | even | mtxapvio00 | Database/Application storage |
127 | odd | mtxapvio01 | Database/Application storage |
128 | even | mtxapvio00 | Miscellaneous |
129 | odd | mtxapvio01 | Miscellaneous |
Under this standard, virtual SCSI adapters that are assigned slot numbers ending in 0 or 1 are used for communication with operating system storage. Virtual SCSI adapters that are assigned slot numbers ending in 2, 3, 4, 5, 6, or 7 are used for communication with application or database storage. Finally, virtual SCSI adapters that are assigned slot numbers ending in 8 or 9 are used for communication with miscellaneous other storage.
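This last-digit rule is easy to automate; the following Korn shell sketch classifies a slot number by its final digit according to the standard (the slot value shown is just an example):

# Classify a virtual SCSI slot number by its last digit.
SLOT=126
case "${SLOT}" in
    *[01])     PURPOSE="Operating system storage" ;;
    *[234567]) PURPOSE="Database/Application storage" ;;
    *[89])     PURPOSE="Miscellaneous storage" ;;
esac
print "Slot ${SLOT}: ${PURPOSE}"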
Create the virtual SCSI adapters according to the standardized slot
numbering scheme described above: create virtual SCSI adapters for all
slots numbered between 10 and 499 that end in 0, 1, 2, 3, 8, or 9. The even
numbered slots should be created on the even numbered VIO server and the
odd numbered slots on the odd numbered VIO server. The purpose of
segmenting the SCSI adapters on an even/odd basis is to provide the
administrator with a mechanism to easily identify and maintain these
resources. Using this standard, up to 49 client LPARs can be configured on
any single frame; slight modifications to the standard would allow as many
client LPARs as desired to be configured.
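Generating the two slot lists is also easy to script rather than work out by hand; a minimal Korn shell sketch under the assumptions above (slots 10 through 499 whose last digit is 0, 1, 2, 3, 8, or 9, split by parity):

# Build the list of virtual SCSI slots to create on each VIO server.
EVEN_SLOTS=""
ODD_SLOTS=""
SLOT=10
while (( SLOT <= 499 ))
do
    case "${SLOT}" in
        *[012389])
            if (( SLOT % 2 == 0 ))
            then
                EVEN_SLOTS="${EVEN_SLOTS} ${SLOT}"
            else
                ODD_SLOTS="${ODD_SLOTS} ${SLOT}"
            fi
            ;;
    esac
    (( SLOT = SLOT + 1 ))
done
print "Slots for the even numbered VIO server (mtxapvio00):${EVEN_SLOTS}"
print "Slots for the odd numbered VIO server (mtxapvio01):${ODD_SLOTS}"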
Table 3 below shows an example slot numbering sequence for dual VIO
servers providing virtual SCSI adapters for three client LPARs:
Table 3. Example slot numbering sequence for dual VIO servers serving three client LPARs
Client LPAR name | Client LPAR slots (from VIO) | VIO Server mtxapvio00 slot number (even) | VIO Server mtxapvio01 slot number (odd) | Storage purpose |
---|---|---|---|---|
mtxapora00 | 100/101 | 100 | 101 | Operating system storage |
mtxapora00 | 102/103 | 102 | 103 | Database storage |
mtxapora00 | 104/105 | 104 | 105 | Database storage |
mtxapora00 | 106/107 | 106 | 107 | Database storage |
mtxapora00 | 108/109 | 108 | 109 | Paging |
mtxapora01 | 110/111 | 110 | 111 | Operating system storage |
mtxapora01 | 112/113 | 112 | 113 | Database storage |
mtxapora01 | 114/115 | 114 | 115 | Database storage |
mtxapora01 | 116/117 | 116 | 117 | Database storage |
mtxapora01 | 118/119 | 118 | 119 | Paging |
mtxapora02 | 120/121 | 120 | 121 | Operating system storage |
mtxapora02 | 122/123 | 122 | 123 | Application storage |
mtxapora02 | 124/125 | 124 | 125 | Database storage |
mtxapora02 | 126/127 | 126 | 127 | Database storage |
mtxapora02 | 128/129 | 128 | 129 | Paging |
This standardized method of assigning slot numbers to virtual SCSI adapters greatly enhances the administrator's ability to build, modify, and maintain the client LPARs and VIO servers. The administrator can immediately recognize which VIO server is providing active (or inactive) paths to storage from the client LPAR and where problems exist. The administrator is also able to take specific virtual SCSI adapters offline for maintenance or reconfiguration without affecting storage attached to other virtual adapters. Create the virtual SCSI adapters on each VIO server based on the even/odd numbering scheme; however, it is up to the administrator to assign these adapters to each client LPAR according to the standard described here.
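Taking one adapter's paths offline is done on the client LPAR with the standard AIX path management commands; a hedged sketch, assuming the odd numbered VIO server (behind vscsi1) is the one being serviced and that the device names are illustrative:

# Disable every MPIO path that runs through vscsi1 before VIO server maintenance.
for DISK in $( lspath -p vscsi1 -F name | sort -u )
do
    chpath -l ${DISK} -p vscsi1 -s disable
done

# After maintenance, re-enable the paths the same way:
# for DISK in $( lspath -p vscsi1 -F name | sort -u )
# do
#     chpath -l ${DISK} -p vscsi1 -s enable
# done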
Normally, LUNs from a storage array are assigned to the physical adapters
associated with the dual VIO servers. An individual LUN would be
assigned to both VIO servers through the virtual SCSI adapters and
presented to a client LPAR. The Multi-Path I/O (MPIO) drivers on the
client LPAR then recognize that a single LUN is presented through
multiple virtual paths (one from each VIO Server), and a single "hdisk"
is discovered by the "cfgmgr" command.
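On the client LPAR, the result can be verified with the lspath command, which should show one path to each hdisk through each vscsi adapter (one per VIO server); hdisk0 below is just an example device:

# Show the MPIO paths for one virtual disk: expect one path per VIO server.
lspath -l hdisk0 -F "status name parent connection"

# Show all paths on the LPAR, grouped by parent vscsi adapter.
lspath -F "status name parent" | sort -k3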
The LPAR to be configured will have multiple resources provided by the dual
VIO servers, including one or more vscsi adapters. Additional physical
devices can be manually configured in the LPAR configuration; however, this
procedure describes only the configuration associated with the virtual
devices.
Adapter type | Example slot | Example device name | Purpose |
---|---|---|---|
Virtual SCSI adapter | 100 | vscsi0 | Operating system disks |
Virtual SCSI adapter | 101 | vscsi1 | Operating system disks |
Virtual SCSI adapter | 102 | vscsi2 | Data/Application disks |
Virtual SCSI adapter | 103 | vscsi3 | Data/Application disks |
Virtual SCSI adapter | 104 | vscsi4 | Data/Application disks |
Virtual SCSI adapter | 105 | vscsi5 | Data/Application disks |
Virtual SCSI adapter | 106 | vscsi6 | Data/Application disks |
Virtual SCSI adapter | 107 | vscsi7 | Data/Application disks |
Virtual SCSI adapter | 108 | vscsi8 | Miscellaneous disks |
Virtual SCSI adapter | 109 | vscsi9 | Miscellaneous disks |
The disk configuration described in this article assumes that each virtualized disk has multiple paths to the storage SAN, one path from each VIO server. The default configuration of the Multi-Path I/O (MPIO) is to give all paths the same priority; this has the effect of directing all SAN traffic across the first VIO server. To distribute the storage communication traffic evenly across both VIO servers, each path in an MPIO configuration must be assigned a path priority. The goal is to distribute the load evenly across the VIO servers; however, it is undesirable for all "hdisk0" disks to have their highest priority path always point to the even numbered VIO server. Therefore, the following logic should be implemented when assigning path priorities:
- Even numbered disk + even numbered client LPAR host name = highest priority path is the even numbered VIO server
- Odd numbered disk + odd numbered client LPAR host name = highest priority path is the even numbered VIO server
- Even numbered disk + odd numbered client LPAR host name = highest priority path is the odd numbered VIO server
- Odd numbered disk + even numbered client LPAR host name = highest priority path is the odd numbered VIO server
This path priority logic is captured in a shell script called "vscsiPriority.ksh", which automatically prioritizes all disks on a VIO server and/or client LPAR. The script can be downloaded from the following URL: http://www.mtxia.com/js/Downloads/Scripts/Korn/Functions/vscsiPriority.txt.
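The heart of that logic can be illustrated in a few lines of Korn shell using the standard AIX chpath command. The sketch below is a simplified illustration only, not the vscsiPriority.ksh script itself; it assumes vscsi0 is served by the even numbered VIO server, vscsi1 by the odd numbered VIO server, and that the even/odd information is carried by the last digit of the host name and of the hdisk name:

# Simplified even/odd path priority assignment (priority 1 = most preferred path).
HNAME=$( uname -n )
case "${HNAME}" in
    *[02468]) HPARITY=even ;;
    *)        HPARITY=odd ;;
esac
for DISK in $( lsdev -Cc disk -F name )
do
    case "${DISK}" in
        *[02468]) DPARITY=even ;;
        *)        DPARITY=odd ;;
    esac
    if [[ "${HPARITY}" = "${DPARITY}" ]]
    then
        # Matching parity: the even numbered VIO server gets the preferred path.
        chpath -l ${DISK} -p vscsi0 -a priority=1
        chpath -l ${DISK} -p vscsi1 -a priority=2
    else
        # Differing parity: the odd numbered VIO server gets the preferred path.
        chpath -l ${DISK} -p vscsi1 -a priority=1
        chpath -l ${DISK} -p vscsi0 -a priority=2
    fi
done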
The next task is to configure the virtual storage assigned to the client
LPAR (illustrating the VSCSI slot numbering standards and why they are
useful). The VSCSI slot numbers associated with each client LPAR are known
on the VIO server; on the client LPAR, however, the only important part of
the slot number is the last digit. Slots that end with a zero (0) or one
(1) are used for operating system disks. Slots that end with a two (2) or
three (3) are used for application and data disks. A slot ending in a two
(2) is served by the VIO server whose host name is even numbered; a slot
ending in a three (3) is served by the VIO server whose host name is odd
numbered. Therefore, on the client LPAR, all disks whose parent VSCSI
adapter has a slot number ending in a two (2) or three (3) can be
automatically detected and assembled into a volume group. The disks within
a volume group can then be divided into logical volumes and file systems.
The following Korn shell statement uses the output from the "lscfg" command
to obtain a list of disks associated with VSCSI adapters at slot numbers
ending in two (2) or three (3):
VGDISKS=$( lscfg -l 'hdisk*' | egrep -- '-C[0-9]*[23]-' | awk '{ print $1 }' | sort -n )
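The same pattern extends to the other storage classes; for example, a parallel statement (an assumption following the same standard, not taken from the script) would collect the operating system disks served through slots ending in zero (0) or one (1):

OSDISKS=$( lscfg -l 'hdisk*' | egrep -- '-C[0-9]*[01]-' | awk '{ print $1 }' | sort -n )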
The following Korn shell code defines several values used during the automated virtual storage configuration and then creates the volume group, log logical volume, application/data logical volume, and file system; the initial values for HNAME and LVID are illustrative.
# Obtain the short host name of the client LPAR
# (the source of HNAME is assumed; the original script may set it elsewhere)
HNAME=$( uname -n )
# Determine the length of the host name
LEN="${#HNAME}"
# Extract the last two characters of the host name, assumed to be a two digit number
SEQNBR=${HNAME:LEN-2:2}
# Assign a resource group name which will be used to define the VG, LVs, and file systems
RG="mtxdev${SEQNBR}"
# Assign the volume group major number
VGMJ="8${SEQNBR}"
# Assign a unique identifier to use during the creation of the volume group
VGID="00vg"
# Assign a unique identifier to use during the creation of the log logical volume
LGID="jfs2"
# Assign a unique identifier to use during the creation of the application/data
# logical volume (example value; it was not defined in the original listing)
LVID="01"
# Assign a directory mount point for the file system using the resource group name
MTPT="/${RG}"
# Create the volume group using the previously defined values
mkvg -f -y ${RG}${VGID} -V ${VGMJ} ${VGDISKS}
# Create the log logical volume using the previously defined values
/usr/sbin/mklv -y ${RG}${LGID}lv -t jfs2log -a e "${RG}${VGID}" 1
# Determine the number of free physical partitions associated with the volume group
FREEPPS=$( print "a=0; $( lsvg -p ${RG}${VGID} | sed -e '1,2 d' | awk '{ print $4 }' | sed -e 's/^/a=a+/' ); a" | bc )
# Assign the number of physical partitions to use for the application/data logical volume
LVSIZE=$(( FREEPPS - 5 ))
# Create the application/data logical volume
/usr/sbin/mklv -y ${RG}${LVID}lv \
    -t jfs2 \
    -x 5000 \
    -a e \
    "${RG}${VGID}" \
    ${LVSIZE}
# Create the application/data file system
/usr/sbin/crfs -v jfs2 \
    -d "${RG}${LVID}lv" \
    -m "${MTPT}" \
    -A y \
    -p rw \
    -a agblksize=4096 \
    -a logname="${RG}${LGID}lv"
# Mount the newly created file system
mount /${RG}
The result of the previous commands is a fully configured file system
mounted on a directory identified by the resource group name. The point of
all of the shell script commands shown in this article is to reinforce the
business continuity mentality associated with standardized procedures for
all aspects of system administration, including the build-out of networking
and storage. Standardized procedures such as these lead quickly into
process automation and data center automation.
With the automated build-out of networking and storage comes the ability to
consider other components for process automation (such as application
deployment, database deployment, workload manager configuration, high
availability (HACMP) implementation, disaster recovery implementation,
automated documentation, audit compliance, and audit response).
Prioritizing the SAN fabric communication paths provides multiple benefits:
- Evenly distributes SAN traffic across multiple VIO servers.
- Evenly distributes SAN traffic across multiple physical adapters.
- Requires that naming standards be established and implemented, which enhances business continuity efforts.
- Requires that vSCSI slot numbering standards be established and implemented, which enhances business continuity efforts.
- Reduces hardware requirements by fully utilizing existing infrastructure.
- Increases return on investment (ROI) by fully utilizing existing infrastructure.
Most importantly, establishing path priorities provides a standardized,
repeatable, teachable methodology that can be maintained across multiple
platforms and generations of administrators. A consistent approach to
managing and distributing SAN traffic can be documented, tested, and
tracked. IT management can use this method to encourage optimization of
physical resources and obtain the highest return on investment from
those resources.
Get products and technologies
- Download the vscsiPriority.ksh shell script.