Credit: IBM
Comparing performance with and without AIX flash cache
Introduction to flash cache
Flash cache is an emerging technology whose main advantage is faster access to data. IBM® has brought this technology to IBM Power® servers by introducing flash cache in IBM AIX® 7.2
and later versions, giving enterprise-class customers the faster data access that their
production environments require. To make use of flash cache, you need a set of flash
drives and a solid-state drive (SSD) adapter connected to the flash drives.
AIX 7.2 improves data access times by caching frequently read data on flash drives. Flash
cache technology in AIX can be used in the following three different ways:
- Attaching an SSD adapter to an IBM AIX logical partition (LPAR)
- Attaching an SSD adapter to a Virtual I/O Server (VIOS) and assigning flash drives to AIX clients using the virtual Small Computer System Interface (vSCSI) technology
- Attaching an SSD adapter to VIOS and assigning flash drives to AIX clients using N_Port ID Virtualization (NPIV) technology
Scope of this article
Flash caching technology in IBM Power Systems™ can be used in the following three
different ways:
- Assigning an SSD adapter to an AIX LPAR: In this method, the SSD adapter is assigned
directly to the AIX LPAR, and the flash drives appear in the AIX LPAR through the SSD
adapter.
- Assigning an SSD adapter to VIOS and assigning a cache disk to the AIX LPAR using the
vSCSI technology: In this method, the SSD adapter is assigned to VIOS, and the flash
drives appear in VIOS through the SSD adapter. Cache disks are then assigned from VIOS
to the AIX client using the vSCSI technology.
- Assigning an SSD adapter to VIOS and assigning a cache disk to the AIX LPAR using the
NPIV technology: In this method, the SSD adapter is assigned to VIOS, and the flash
drives appear in VIOS through the SSD adapter. A cache disk is then assigned from VIOS
to the AIX client using the NPIV technology.
All three of these methods use caching software developed by IBM. The caching software
consists of cache management software and a cache engine.
The cache management software provides a set of commands, available on both AIX and VIOS,
to create a cache pool and cache partitions. You assign a cache partition to the disks
that are present on the AIX LPAR and then start caching. The cache engine is the component
that decides which blocks of a disk need to be cached and whether requested data should
be fetched from the cache disk or from primary storage.
Figure 1 shows a pictorial representation of method 2 explained earlier.
Figure 1. AIX flash cache using vSCSI technology
Performance measurement with and without flash cache
This section explains how performance differs with and without flash caching. Before
measuring performance, make sure that the following system configuration and
prerequisites are in place.
System configuration:
- Power 795 (Type/model – 9119-FHB)
- AIX 7.2
- VIOS 2.2.6.0
- Flash drives, both locally attached and storage area network (SAN) subsystem based,
assigned to VIOS
You need to perform the following tunable changes for performance evaluation.
On both VIOS and AIX clients:
Set the max_transfer attribute to 0x100000 using the chdev command for all SAN subsystem
disks and cache disks.
# chdev -l hdisk5 -a max_transfer=0x100000
hdisk5 changed
Set the queue_depth attribute to 256 using the chdev command for all SAN subsystem disks
and cache disks.
# chdev -l hdisk5 -a queue_depth=256
hdisk5 changed
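When many disks need the same two attribute changes, the chdev calls can be generated in one pass. The sketch below is a minimal dry-run loop; the disk names in DISKS are hypothetical placeholders, so substitute the SAN and cache disks reported by lspv on your system before applying anything.

```shell
# Hypothetical disk list; replace with the SAN subsystem and cache disks
# on your system (see 'lspv').
DISKS="hdisk3 hdisk4 hdisk5"

# Emit one chdev command per attribute per disk (dry run only).
tune_cmds() {
    for d in $DISKS; do
        echo "chdev -l $d -a max_transfer=0x100000"
        echo "chdev -l $d -a queue_depth=256"
    done
}

tune_cmds          # review the generated commands first
# tune_cmds | sh   # uncomment on AIX to actually apply them
```

Printing the commands before piping them to sh gives a chance to catch a wrong disk name before any attribute is changed.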
Next, we need to configure AIX flash cache and run the performance benchmark with and
without flash cache to see the improvement.
Configuration on VIOS:
- Create a cache pool on VIOS using the following command:
$ cache_mgt pool create -d hdisk5
Pool cmpool0 created with devices hdisk5.
- Create a cache partition on the cache pool (created on VIOS) using the following
command:
$ cache_mgt partition create -s 4G -P cache_partition0
Partition cache_partition0 created in pool cmpool0.
$ cache_mgt partition create -s 4G -P cache_partition1
Partition cache_partition1 created in pool cmpool0.
- Assign cache partition to a vSCSI adapter present on VIOS using the following
command:
$ cache_mgt partition assign -P cache_partition0 -v vhost0
Partition cache_partition0 assigned to vSCSI Host Adapter vhost0.
$ cache_mgt partition assign -P cache_partition1 -v vhost1
Partition cache_partition1 assigned to vSCSI Host Adapter vhost1.
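Before moving on to the client, it is worth confirming that both partitions actually landed in cmpool0. The comma-separated partition,pool,size shape used below is only an assumption about what a partition listing prints at this VIOS level; verify the real output on your system and adjust the check accordingly.

```shell
# Assumed sample output of a VIOS partition listing
# (fields: partition,pool,size); verify the real format on your system.
parts="cache_partition0,cmpool0,4G
cache_partition1,cmpool0,4G"

# Both partitions should report cmpool0 as their pool before being
# assigned to the vhost adapters.
in_pool=$(printf '%s\n' "$parts" | grep -c ',cmpool0,')
echo "$in_pool partition(s) in cmpool0"
```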
Configuration on the AIX client:
- Run the cfgmgr command on the AIX client to discover the cache disk presented from
VIOS.
- Assign the cache disk to the storage disks using the following commands on LPAR1:
# cache_mgt partition assign -P cachedisk0 -t hdisk3
Partition cachedisk0 assigned to target hdisk3.
# cache_mgt partition assign -P cachedisk0 -t hdisk4
Partition cachedisk0 assigned to target hdisk4.
- Perform step 2 on LPAR2 as well.
# cache_mgt partition assign -P cachedisk0 -t hdisk0
Partition cachedisk0 assigned to target hdisk0.
# cache_mgt partition assign -P cachedisk0 -t hdisk1
Partition cachedisk0 assigned to target hdisk1.
- Start caching on both LPARs, LPAR1 and LPAR2, using the following command:
# cache_mgt cache start -t all
All caches have been started.
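After starting caching, every assigned target disk should report an active state. The parsing sketch below assumes a comma-separated target,partition,state listing; that shape is an assumption, so check what the cache listing actually prints on your AIX level before relying on the check.

```shell
# Assumed sample of a cache listing on LPAR1
# (fields: target,cache partition,state); verify on your system.
cache_list="hdisk3,cachedisk0,active
hdisk4,cachedisk0,active"

# Count targets that are not yet caching; a non-zero count means caching
# did not start on every assigned disk.
inactive=$(printf '%s\n' "$cache_list" | awk -F, '$3 != "active"' | wc -l)
echo "inactive targets: $inactive"
```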
Now, let us look at the I/O storage workload used in the benchmark test. The workload
performs I/O operations against the storage devices and reports performance metrics as
IOPS and as a data transfer rate per second.
The workload can issue I/O requests in either a random or a sequential fashion, and you
can apply stress by defining the number of threads to run simultaneously. You can
customize the parameter file accordingly; the following parameters can be changed as
required:
NBROFPROC       -> 1 4 8 24 48 96 128 256   - Number of threads
DURATION        -> 5                        - Duration of the run for each thread count
ACCESS          -> R                        - Random (R) or sequential (S) access
RESPONSETIME    -> 0.005 0.010 0.015 0.020 0.030 0.050 0.100 0.200
BUCKETS         -> .002 .005 .010
READPROBABILITY -> 0.70                     - Read/write mix (here 70% read, 30% write)
READLOCALITY    -> (0.5 0.5)
WRITELOCALITY   -> (0.33 0.33 0.34)
RECPERTRK       -> 1000
SECPERTRK       -> 128
RECSIZE         -> 4096                     - Block size (default is 4 KB)
PDEV            -> 240                      - Number of physical devices used
NBROFDEV        -> 240                      - Number of devices used
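The NBROFPROC sweep above can be scripted rather than edited by hand between runs. The harness sketch below is hypothetical: the article does not name the benchmark tool, so iobench and params.tmpl are placeholder names for the benchmark binary and its parameter file; substitute your own.

```shell
# Thread counts taken from the NBROFPROC line of the parameter file.
THREADS="1 4 8 24 48 96 128 256"

# Emit the per-run commands (dry run): rewrite NBROFPROC in a copy of the
# template, then invoke the benchmark on it. 'iobench' and 'params.tmpl'
# are hypothetical names.
gen_runs() {
    for n in $THREADS; do
        echo "sed 's/^NBROFPROC.*/NBROFPROC -> $n/' params.tmpl > params.$n"
        echo "iobench -f params.$n -o results.$n"
    done
}

gen_runs           # review; pipe to sh on the test LPAR to execute
```

Keeping one parameter file and one results file per thread count makes it easier to compare the cache-enabled and cache-disabled runs afterward.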
Now, let’s look at the storage I/O performance benchmark numbers while running this workload on
AIX LPARs (LPAR1 and LPAR2) with and without flash cache.
The chart in Figure 2 compares the IOPS metric with cache enabled versus disabled; we can
see approximately a 2.7 times improvement when cache is enabled.
Figure 2. Performance measurement with and without AIX flash cache with respect to IOPS
parameter
The chart in Figure 3 compares the data transfer rate in MBps with cache enabled versus
disabled; we can see approximately a 2.5 times improvement when cache is enabled.
Figure 3. Performance comparison with cache enabled versus disabled
Conclusion
This article shows that AIX flash cache delivers approximately a 2.5 to 2.7 times
improvement in the measured data metrics. This performance increase is significant for a
large number of workloads, and flash cache plays a major role here. The improvement holds
even with virtualization on IBM Power servers.