Tuesday 30 May 2017

PowerVM QuickStart I-Overview


  1. Virtualization Components
  2. PowerVM Acronyms & Definitions
  3. CPU Allocations
  4. Shared Processor Pools
  5. PowerVM Types
  6. VIOS (Virtual I/O Server)
  7. VIO Redundancy


1. Virtualization Components

• (Whole) CPUs, (segments of) memory, and (physical) I/O devices can be assigned to an LPAR (Logical Partition). This is commonly referred to as a dedicated resource partition (or a Power 4 LPAR, as this capability was introduced with the Power 4 based systems).

• DLPAR is the ability to dynamically add or remove resources from a running partition. All modern LPARs are DLPAR capable (even if the OS running on them is not). For this reason, DLPAR has more to do with the capabilities of the OS running in the LPAR than with the LPAR itself. The acronym DLPAR is typically used as a verb (to add/remove a resource) rather than to define a type of LPAR. Limits can be placed on DLPAR operations, but these are primarily user-imposed limits, not limitations of the hypervisor.
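For example, processing units or memory can be DLPARed into or out of a running partition from the HMC command line with chhwres. A minimal sketch, assuming a managed system named mysys and a partition named lpar1 (both names are hypothetical):

  # Add 0.2 processing units to a running micro-partition
  chhwres -m mysys -r proc -o a -p lpar1 --procunits 0.2
  # Remove 512 MB of memory from the same partition
  chhwres -m mysys -r mem -o r -p lpar1 -q 512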

• Slices of a CPU can be assigned to back a virtual processor in a partition. This is commonly referred to as a micro-partition (or a Power 5 LPAR, as this capability was introduced with the Power 5 based systems). Micro-partitions consume CPU from a shared pool of CPU resources.
• Power 5 and later systems introduced the concept of VIOS (Virtual I/O Server) that effectively allowed physical I/O resources to be shared amongst partitions.

• The IBM System p hypervisor is a Type 1 hypervisor with the option to provide shared I/O via a Type 2-ish software (VIOS) partition. The hypervisor does not own I/O resources (such as an Ethernet card); these can only be assigned to an LPAR. When owned by a VIOS LPAR, they can be shared with other LPARs.

• Micro-Partitions on Power 6 systems can utilize multiple SPPs (shared processor pools) to control CPU utilization for groups of micro-partitions.

• Some Power 6 systems introduced hardware-based network virtualization with the IVE (Integrated Virtual Ethernet) device, which allows multiple partitions to share a single Ethernet connection without the use of VIOS.

• Newer HBAs that offer NPIV (N Port ID Virtualization) can be used in conjunction with the appropriate VIOS version to present a virtualized HBA with a unique WWN directly to a client partition.
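On the VIOS side, an NPIV mapping ties a virtual Fibre Channel server adapter to a physical NPIV-capable port. A minimal sketch from the VIOS restricted shell; the adapter names vfchost0 and fcs0 are illustrative:

  # Map the virtual FC server adapter to the physical FC port
  vfcmap -vadapter vfchost0 -fcp fcs0
  # Verify the NPIV mappings
  lsmap -all -npiv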

• VIOS 2.1 introduced memory sharing (on Power 6 systems only) that allows memory to be shared between two or more LPARs. Overcommitted memory can be paged to disk using a VIOS partition.

• VIOS and Micro-Partition technologies can be implemented independently. For example, a (CPU) Micro-Partition can be created with or without the use of VIOS.
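From inside an AIX partition, lparstat -i reports which of these technologies are in effect for that LPAR. An illustrative invocation with a few of the fields it returns (exact output varies by AIX level):

  lparstat -i
  # Type             : Shared-SMT    <- dedicated vs. shared processors
  # Mode             : Uncapped      <- capped vs. uncapped
  # Entitled Capacity: 0.50          <- physical CPU entitlement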

2. PowerVM Acronyms & Definitions

CoD - Capacity on Demand. The ability to add compute capacity in the form of CPU or memory to a running system by simply activating it. The resources must be pre-staged in the system prior to use and are (typically) turned on with an activation key. There are several different pricing models for CoD.
DLPAR - Dynamic Logical Partition. This was originally used as a further clarification of the LPAR concept: an LPAR that can have resources dynamically added or removed. The most popular usage is as a verb, i.e., to DLPAR (add) resources to a partition.
HEA - Host Ethernet Adapter. The physical port of the IVE interface on some of the Power 6 systems. An HEA port can be added to a port group and shared amongst LPARs, or placed in promiscuous mode and used by a single LPAR (typically a VIOS LPAR).
HMC - Hardware Management Console. An "appliance" server that is used to manage Power 4, 5, and 6 hardware. The primary purpose is to enable / control the virtualization technologies as well as provide call-home functionality, remote console access, and gather operational data.
IVE - Integrated Virtual Ethernet. The capability to provide virtualized Ethernet services to LPARs without the need of VIOS. This functionality was introduced on several Power 6 systems.
IVM - Integrated Virtualization Manager. This is a management interface that installs on top of the VIOS software and provides much of the HMC functionality. It can be used instead of an HMC on some systems. It is the only option for virtualization management on the blades, as they cannot have HMC connectivity.
LHEA - Logical Host Ethernet Adapter. The virtual interface of an IVE in a client LPAR. These communicate via an HEA to the outside / physical world.
LPAR - Logical Partition. This is a collection of system resources that can host an operating system. To the operating system, this collection of resources appears to be a complete physical system. Some or all of the resources on an LPAR may be shared with other LPARs in the physical system.
Lx86 - Additional software that allows x86 Linux binaries to run on Power Linux without recompilation.
MES - Miscellaneous Equipment Specification. This is a change order to a system, typically in the form of an upgrade. An RPO MES is for Record Purposes Only. Both specify to IBM changes that are made to a system.
MSPP - Multiple Shared Processor Pools. This is a Power 6 capability that allows for more than one SPP.
SEA - Shared Ethernet Adapter. This is a VIOS mapping of a physical to a virtual Ethernet adapter. A SEA is used to extend the physical network (from a physical Ethernet switch) into the virtual environment where multiple LPARs can access that network.
SPP - Shared Processor Pool. This is an organizational grouping of CPU resources that allows caps and guaranteed allocations to be set for an entire group of LPARs. Power 5 systems have a single SPP, Power 6 systems can have multiple.
VIOC - Virtual I/O Client. Any LPAR that utilizes VIOS for resources such as disk and network.
VIOS - Virtual I/O Server. The LPAR that owns physical resources and maps them to virtual adapters so VIOCs can share those resources.
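The VIOS-to-VIOC relationships can be inspected from the VIOS restricted shell with lsmap. A minimal sketch:

  # List all virtual SCSI mappings (vhost adapters and their backing devices)
  lsmap -all
  # List the network mappings (each SEA with its physical and virtual adapters)
  lsmap -all -net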

3. CPU Allocations

• Shared (virtual) processor partitions (Micro-Partitions) can utilize additional resources from the shared processor pool when available. Dedicated processor partitions can only use the "desired" amount of CPU, and can go above that amount only if another CPU is (dynamically) added to the LPAR.
• An uncapped partition can only consume up to the number of virtual processors that it has. (i.e., an LPAR with 5 virtual CPUs that is backed by a minimum of 0.5 physical CPUs can consume up to 5 whole physical CPUs.) A capped partition can only consume up to its entitled CPU value. Allocations are in increments of 1/100th of a CPU; the minimum allocation is 1/10th of a CPU for each virtual CPU.
• All Micro-Partitions are guaranteed at least their entitled CPU value. Uncapped partitions can consume beyond that value; capped partitions cannot. Both relinquish unused CPU to the shared pool. Dedicated CPU partitions are guaranteed their capacity, cannot consume beyond it, and, on Power 6 systems, can relinquish unused capacity to a shared pool.
• All uncapped micro-partitions using the shared processor pool compete for the remaining resources in the pool. When there is no contention for unused resources, a micro-partition can consume up to the number of virtual processors it has or the amount of CPU resources available to the pool.
• The physical CPU entitlement is set with the "processing units" values during LPAR setup on the HMC. (These values can be read back from the HMC command line; see the lshwres example below.) The values are defined as:
 › Minimum: The minimum physical CPU resource required for this partition to start.
 › Desired: The desired physical CPU resource for this partition. In most situations this will be the CPU entitlement. The entitlement can be higher if resources were DLPARed in, or lower if the LPAR started closer to the minimum value.
 › Maximum: This is the maximum amount of physical CPU resources that can be DLPARed into the partition. This value does not have a direct bearing on capped or uncapped CPU utilization.
• The virtual CPU entitlement is set in the LPAR configuration much like the physical CPU allocation. Virtual CPUs are allocated in whole integer values. The difference with virtual CPUs (from physical entitlements) is that they are not a potentially constrained resource and the desired number is always received upon startup. The minimum and maximum numbers are effectively limits on DLPAR operations.
• Processor folding is an AIX CPU affinity method that ensures an AIX partition uses only as many CPUs as it requires. This is achieved by ensuring that the LPAR runs on a minimal set of physical CPUs and idles those it does not need. The benefit is that the system sees a reduced impact from configuring additional virtual CPUs. Processor folding was introduced in AIX 5.3 TL3; a tuning sketch follows below.
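Folding behavior is controlled by the schedo tunable vpm_xvcpus. A minimal sketch; the values shown are the commonly documented defaults:

  # Display the current folding tunable (0 = folding enabled, the default)
  schedo -o vpm_xvcpus
  # Disable processor folding entirely
  schedo -o vpm_xvcpus=-1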

• When multiple uncapped micro-partitions compete for remaining CPU resources, the uncapped weight is used to calculate the CPU available to each partition. The uncapped weight is a value from 0 to 255. The weights of all partitions requesting additional resources are added together and then used to divide the available resources; the amount of CPU received by a competing micro-partition is determined by the ratio of that partition's weight to the total. (The weight is not a nice value like in Unix.) For example, if two partitions with weights 128 and 64 compete for 1.5 unused CPUs, they receive 1.0 and 0.5 CPUs respectively. The default weight is 128. A partition with a weight of 0 is effectively a capped partition.

[Figure 0: Virtualized and dedicated CPUs in a four CPU system with a single SPP.]
• Dedicated CPU partitions do not have a setting for virtual processors. LPAR 3 in Figure 0 has a single dedicated CPU.
• LPAR 1 and LPAR 2 in Figure 0 are Micro-Partitions with a total of five virtual CPUs backed by three physical CPUs. On a Power 6 system, LPAR 3 can be configured to relinquish unused CPU cycles to the shared pool, where they become available to LPAR 1 and LPAR 2 (provided they are uncapped).
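The allocation values discussed above can be read back from the HMC command line with lshwres. A minimal sketch, assuming a managed system named mysys; the -F field names are from memory and worth verifying against your HMC level:

  # Show each LPAR's processing unit minimum / entitlement / maximum,
  # virtual processor count, sharing mode, and uncapped weight
  lshwres -m mysys -r proc --level lpar \
    -F lpar_name,curr_min_proc_units,curr_proc_units,curr_max_proc_units,curr_procs,curr_sharing_mode,curr_uncapped_weight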

4. Shared Processor Pools

• While the SPP (Shared Processor Pool) is a Power 6 convention, Power 5 systems have a single SPP commonly referred to as the "Physical Shared Processor Pool". Any activated CPU that is not reserved for, or in use by, a dedicated CPU LPAR is assigned to this pool.
• All Micro-Partitions on Power 5 systems consume and relinquish resources from/to the single / physical SPP.
• The default configuration of a Power 6 system behaves like a Power 5 system with respect to SPPs. By default, all LPARs are placed in SPP0, the physical SPP.
• Power 6 systems can have up to 64 SPPs. (The limit is set to 64 because the largest Power 6 system has 64 processors, and a SPP must have at least 1 CPU assigned to it.)
• Power 6 SPPs have additional constraints to control CPU resource utilization in the system (these can be listed from the HMC command line; see the sketch after this list). They are:
 › Desired Capacity - The total of all CPU entitlements for each member LPAR (at least .1 physical CPU for each LPAR). This value is changed by adding LPARs to a pool.
 › Reserved Capacity - Additional (guaranteed) CPU resources that are assigned to the pool of partitions above the desired capacity. By default this is set to 0. This value is changed from the HMC as an attribute of the pool.
 › Entitled Capacity - The total guaranteed processor capacity for the pool. It includes the guaranteed processor capacity of each partition (aka: Desired) as well as the Reserved pool capacity. This is a derived value that is not changed directly; it is the sum of the other two values.
 › Maximum Pool Capacity - The maximum amount of CPU that all partitions assigned to this pool can consume together. This is effectively a processor cap that is extended to a group of partitions rather than a single partition. This value is changed from the HMC as an attribute of the pool.
• Uncapped Micro-Partitions can consume up to the total of the virtual CPUs defined, the maximum pool capacity, or the unused capacity in the system - whichever is smallest.
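The pools and their capacity values can be listed from the HMC command line. A minimal sketch, assuming a managed system named mysys:

  # List the shared processor pools and their capacity attributes
  lshwres -m mysys -r procpool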

5. PowerVM Types

• PowerVM inherits nearly all the APV (Advanced Power Virtualization) concepts from Power 5 based systems. It uses the same VIOS software and options as the previous APV.
• PowerVM is Power 6 specific only in that it enables features available exclusively on Power 6 based systems (e.g., multiple shared processor pools and partition mobility are not available on Power 5 systems).
• PowerVM (and its APV predecessor) are optional licenses / activation codes that are ordered for a server. Each is licensed by CPU.
• PowerVM, unlike its APV predecessor, comes in several different versions, each tiered for price and features. These versions are documented in the table below.
• PowerVM activation codes can be checked from the HMC, IVM, or from IBM records at this URL: http://www-912.ibm.com/pod/pod. The activation codes on the IBM web site will show up as type "VET" codes.
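On the HMC itself, the activation information can be listed with lsvet; a sketch, assuming a managed system named mysys (check the exact syntax against your HMC level):

  # List the VET activation code information for a managed system
  lsvet -m mysys -t code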


Express:
• Only available on limited lower-end P6 systems
• Maximum of 3 LPARs (1 VIOS and 2 VIOC)
• CPU Micro-Partitions with a single processor pool
• VIOS/IVM only, no HMC support
• Dedicated I/O resources (in later versions)

Standard:
• Up to the maximum partitions supported on each server
• Multiple shared processor pools
• IVM or HMC managed

Enterprise:
• All options in Standard, plus Live Partition Mobility

• The VET codes from the IBM activation code web site can be decoded using the following "key":
 XXXXXXXXXXXXXXXXXXXXXXXX0000XXXXXX Express
 XXXXXXXXXXXXXXXXXXXXXXXX2C00XXXXXX Standard
 XXXXXXXXXXXXXXXXXXXXXXXX2C20XXXXXX Enterprise

6. VIOS (Virtual I/O Server)

• VIOS is a special purpose partition that can serve I/O resources to other partitions. The LPAR type is set at creation. The VIOS LPAR type allows for the creation of virtual server adapters, which a regular AIX/Linux LPAR does not.
• VIOS works by owning a physical resource and mapping that physical resource to virtual resources. Client LPARs connect to the physical resource via these mappings. (A mapping sketch is shown at the end of this section.)
• VIOS is not a hypervisor, nor is it required for sub-CPU virtualization. VIOS can be used to manage other partitions in some situations when an HMC is not used. This is called IVM (Integrated Virtualization Manager).
• Depending on the configuration, VIOS may or may not be a single point of failure. When client partitions access I/O via a single path that is delivered by a single VIOS, that VIOS represents a potential single point of failure for those client partitions.
• VIOS is typically configured in pairs along with various multipathing / failover methods in the client for virtual resources to prevent the VIOS from becoming a single point of failure.
• Active memory sharing and partition mobility require a VIOS partition. The VIOS partition acts as the controlling device for the backing store used by active memory sharing. For partition mobility, all I/O to the partition must be handled by VIOS, which also handles the process of shipping memory between physical systems.
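The mappings described above are created on the VIOS with mkvdev. A minimal sketch from the VIOS restricted shell; all device names are illustrative:

  # Map a physical disk to a virtual SCSI server adapter for a client
  mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpar1_rootvg
  # Bridge a physical adapter and a virtual trunk adapter into a SEA
  mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1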

7. VIO Redundancy

• Fault tolerance for client LPARs in a VIOS configuration is provided by configuring pairs of VIOS to serve redundant networking and disk resources. Additional fault tolerance, in the form of NIC link aggregation and/or disk multipathing, can be provided at the physical layer to each VIOS. Multiple paths or aggregations from a single VIOS to a VIOC do not provide additional fault tolerance; failover methods for the client are provided by multiple VIOS LPARs. In this configuration, an entire VIOS can be lost (e.g., rebooted during an upgrade) without I/O interruption to the client LPARs.
• In most cases (when using AIX) no additional configuration is required in the VIOC for this capability. (See the discussion below regarding SEA failover vs. NIB for network redundancy, and MPIO vs. LVM for disk redundancy; a quick client-side check is sketched at the end of this section.)
• Both virtualized network and disk redundancy methods tend to be active / passive. For example, it is not possible to run EtherChannel within the system, from a VIOC to a single VIOS.
• It is important to understand that the performance considerations of an active / passive configuration are not relevant inside the system: all VIOS can utilize pooled processor resources, so active / active configurations gain no significant (internal) performance benefit. The performance benefits of active / active configurations are realized when connecting to outside / physical resources, such as EtherChannel (port aggregation) from the VIOS to a physical switch.
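From an AIX client, disk multipathing across a VIOS pair can be verified with lspath. A minimal sketch; device names are illustrative, and each virtual disk should show one path per serving VIOS:

  lspath -l hdisk0
  # Enabled hdisk0 vscsi0   <- path through the first VIOS
  # Enabled hdisk0 vscsi1   <- path through the second VIOS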
