When planning a new storage system, we often want to estimate the maximum number of IOPS the system can deliver. This methodology provides a simple way to evaluate storage system performance.
Vendors rarely publish these values for their systems, or make them available only to partners, and the various available benchmarks, tied to specific applications and specific configurations of those applications, do not always allow us to estimate the maximum IOPS.
Note that these estimates are not official vendor figures and do not claim to be highly accurate; nevertheless, they provide an independent performance classification of the systems.
Basic assumptions used for the calculation:
- Storage vendors balance the maximum performance of the controllers against the maximum disk configuration (BackEnd) of the system.
- We assume that the maximum performance of the disk configuration matches the performance of the controllers, and we do not account for resources consumed by additional services (Replication, Snapshots, Cloning, Deduplication, etc.)
- We use our storage backend calculator to estimate the IOPS of the disk configuration.
- Vendors often publish results with a huge number of IOPS obtained under a specific workload with high data locality. In that case data is read from the controller cache and the BackEnd is hardly used at all (Read Hits). Such results are useless for planning real systems and serve mainly marketing and competitive purposes. When the data does not all fit in the cache, performance drops sharply, because the system starts working actively with the BackEnd devices, which are much slower than controller memory. This situation corresponds far better to actual workloads. Therefore, we will evaluate performance assuming that most operations go to the disks (Read Miss).
- Since IOPS performance depends on the workload profile, we will use the configuration that gives the maximum number of IOPS: a profile with a high percentage of read operations – 99% (R/W – 99/1).
- We will also make calculations for a more realistic workload profile, typical of OLTP databases (DB-like). In this case the percentage of read operations is about 70% (R/W – 70/30).
- IOPS performance also depends on the size of data blocks, so we assume that small blocks (less than 32K) are used.
- Most storage systems today are hybrid, and the difference in performance between Flash drives and SAS or FC drives is very large. There is also a specific class of systems, All Flash arrays, which use Flash drives only. Vendors claim that a fairly large number of Flash drives can be installed in hybrid arrays; in practice, however, hybrid arrays support far fewer Flash drives than All Flash configurations. Thus, the maximum number of Flash drives in a hybrid array is a very important factor in determining the array's maximum performance.
- We will estimate the maximum number of Flash drives for each model of storage system depending on its features.
- We will also evaluate the ratio of IOPS to the total frequency of the processor cores in GHz (IOPS per GHz). This parameter characterizes the performance of the storage controllers when the BackEnd is not a bottleneck and the controller software works efficiently.
- We define the total frequency as the total number of cores multiplied by the frequency of a core.
- Note that IOPS per GHz is a useful metric because it does not vary much across midrange systems with modern Intel processors: the average is about 10,000 IOPS per GHz, varying by roughly +/- 50%. These limits of variation hold only for the maximum-IOPS estimate; the IOPS per GHz ratio will be different for the DB-like workload profile.
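
The controller-side estimate above can be sketched in a few lines of code. This is only an illustration of the arithmetic, not a vendor tool: the ~10,000 IOPS per GHz average and the +/- 50% spread come from the assumptions above, and the example controller configuration is hypothetical.

```python
# Controller-side maximum-IOPS estimate from the total core frequency.
# Assumed figures (see the list above): ~10,000 IOPS per GHz on average
# for the maximum-IOPS (R/W 99/1) profile, varying by roughly +/- 50%.

IOPS_PER_GHZ = 10_000   # average ratio, assumed
VARIATION = 0.5         # +/- 50% spread, assumed

def sum_frequency_ghz(controllers: int, cores_per_controller: int,
                      core_freq_ghz: float) -> float:
    """Total frequency: total number of cores times the core frequency."""
    return controllers * cores_per_controller * core_freq_ghz

def max_iops_estimate(sum_ghz: float) -> tuple[float, float, float]:
    """Return (low, average, high) maximum-IOPS estimates."""
    avg = sum_ghz * IOPS_PER_GHZ
    return avg * (1 - VARIATION), avg, avg * (1 + VARIATION)

# Hypothetical midrange array: 2 controllers, 8 cores each, 2.5 GHz cores.
ghz = sum_frequency_ghz(2, 8, 2.5)   # 40 GHz total
low, avg, high = max_iops_estimate(ghz)
print(f"{ghz:g} GHz -> {low:,.0f} .. {avg:,.0f} .. {high:,.0f} IOPS")
```

For the hypothetical 40 GHz configuration this gives an average estimate of 400,000 IOPS, with a plausible range of 200,000 to 600,000.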
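
The BackEnd side of the estimate depends on the R/W mix (99/1 vs. 70/30 above). The author's backend calculator itself is not shown here; the sketch below uses the standard formula that relates front-end IOPS to raw disk IOPS through the RAID write penalty, with hypothetical disk counts and per-disk figures.

```python
# Generic BackEnd (disk-side) IOPS estimate for a given R/W mix.
# Standard formula: each front-end write costs `raid_write_penalty`
# disk operations (e.g. 2 for RAID 10, 4 for RAID 5).

def backend_iops(n_disks: int, iops_per_disk: float,
                 read_fraction: float, raid_write_penalty: int) -> float:
    """Front-end IOPS the disk set can sustain for the given mix."""
    write_fraction = 1.0 - read_fraction
    raw = n_disks * iops_per_disk          # total raw disk IOPS
    return raw / (read_fraction + write_fraction * raid_write_penalty)

# Hypothetical: 48 x 15k SAS disks (~180 IOPS each) in RAID 10.
maximum = backend_iops(48, 180, 0.99, 2)   # R/W 99/1 profile
db_like = backend_iops(48, 180, 0.70, 2)   # R/W 70/30 (DB-like) profile
print(f"max profile: {maximum:,.0f} IOPS, DB-like: {db_like:,.0f} IOPS")
```

The same 48 disks sustain noticeably fewer front-end IOPS under the DB-like 70/30 profile than under 99/1, which is why the methodology evaluates both.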