On February 19, 2014, the SPC-1 performance testing report for the NetApp FAS8040 in a 2-node clustered Data ONTAP (C-Mode) configuration was published. In this article we offer a short analysis of the tested configuration and the results obtained.
Let’s consider the test results:
And the most interesting part is the figure showing how the response time of the storage system depends on the workload:
As the figure shows, the system delivers 86K IOPS at 100% utilization. A situation with 100% utilization, of course, is not that interesting: such results mainly serve vendors' marketing wars. What we really want to know is how the system performs at a high but realistic utilization (for example, 70%), where the response time stays within acceptable limits of 3-4 ms, and how those limits are reached.
Let's consider the specification of the tested system:
| Part number | Description | Qty | Unit price | Total price |
|---|---|---|---|---|
| FAS8040A-001-R6 | FAS8040 High Availability System | 2 | $22,350.00 | $44,700.00 |
| SW-2-8040A-FCP-C | FCP protocol license | 2 | $7,000.00 | $14,000.00 |
| X-6505-24-16G-1R-R6 | Switch, Brocade 6505 24-Pt w/16Gb SWL SFP+ Ent | 2 | $19,580.00 | $39,160.00 |
| X-SFP-H10GB-CU1M-R6-C | Cisco N5020 10GBase Copper SFP+ cable, 1m, -C, R6 (cluster interconnect) | 2 | $112.00 | $224.00 |
| X1973A-R6 | Flash Cache 512GB PCIe Module | 2 | $27,050.00 | $54,100.00 |
| X2065A-EN-R6-C | HBA SAS 4-Port Copper 3/6 Gb QSFP PCIe, EN, -C | 2 | $1,400.00 | $2,800.00 |
| X1095A-R6 | HBA QLogic QLE2562 2-Port 8Gb PCIe | 4 | $2,005.00 | $8,020.00 |
| X6558-R6-C | Cable, SAS Cntlr-Shelf/Shelf-Shelf/HA, 2m, -C | 16 | $125.00 | $2,000.00 |
| X8712C-R6-C | PDU, 1-Phase, 24 Outlet, 30A, NEMA, -C, R6 | 2 | $550.00 | $1,100.00 |
| X870D-EN-R6-C | Cab, Deep, Heavy Duty, Empty, No PDU, No Rail, EN, -C | 1 | $3,595.00 | $3,595.00 |
| X8778-R6-C | Mounting Bracket, Tie-Down, 32X0, -C, R6 | 2 | $50.00 | $100.00 |
| CS-A-INST-4R | SupportEdge Standard Replace 4hr, hardware support, 3 years | 1 | $40,925.43 | $40,925.43 |
So, what we have:
- 192 SAS 10K 450GB drives (8 shelves with 24 drives each).
- RAID-DP is used for the SAS drives, but the report does not specify the group layout, so we will assume the default RAID-DP 14+2 configuration.
- 2 Flash Cache 512GB PCIe cards, one installed in a PCIe slot of each controller.
- 64GB of memory on the HA pair (32GB per controller).
Clearly, 192 drives alone cannot deliver ~80K IOPS, so we want to know how many IOPS the Flash Cache cards contribute. We also assume that the amount of controller memory is not large enough to noticeably affect IOPS when data locality is poor and utilization is high; in that case, performance is limited by the Flash Cache cards and the backend drives.
So, let's estimate the performance of the 192 SAS 10K drives.
We will use the Storage Backend Calculator for this.
The base data for the calculation:
- 2 root aggregates in RAID-DP 2+2 configuration;
- the remaining SAS drives in RAID-DP 14+2 (16-drive groups);
- a maximum-read profile (99% reads), to obtain the maximum IOPS.
The result of the calculation is shown here:
Thus, 192 SAS 10K drives can deliver about 26K IOPS.
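The calculator's figure can be sanity-checked with back-of-the-envelope spindle math. This is only a sketch, not the vendor's calculator: the per-drive figure (~140 IOPS for a 10K SAS drive) and the RAID-DP write penalty of 6 backend I/Os per frontend write are our assumptions.

```python
def backend_iops(total_drives, root_drives, iops_per_drive=140,
                 read_ratio=0.99, write_penalty=6):
    """Rough frontend IOPS a spindle pool can sustain at a given read ratio."""
    spindles = total_drives - root_drives   # drives left for data aggregates
    raw = spindles * iops_per_drive         # raw backend IOPS
    # Each frontend write costs `write_penalty` backend I/Os in RAID-DP.
    return raw / (read_ratio + (1 - read_ratio) * write_penalty)

# 192 drives total, minus 8 drives in the two RAID-DP 2+2 root aggregates
print(round(backend_iops(192, 8)))  # -> 24533, in the same ballpark as ~26K
```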
Earlier we said that we are more interested in the IOPS values at which the response time stays within normal limits (3-4 ms). As the figure above shows, this response-time interval corresponds to 65K-75K IOPS. For our estimate we take the average value: 70K IOPS.
Now it is easy to estimate the contribution of the Flash Cache cards to the overall performance: 70K - 26K = 44K IOPS for the two Flash Cache cards, or 22K IOPS per card.
This is quite a good result at normal response-time values.
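The arithmetic is trivial, but spelling it out makes the inputs explicit (the 70K target and the 26K spindle estimate both come from the earlier steps):

```python
target_iops = 70_000        # mid-point of the 65K-75K range at 3-4 ms
spindle_iops = 26_000       # backend estimate for the 192 SAS 10K drives
flash_cache_cards = 2

cache_iops = target_iops - spindle_iops          # Flash Cache contribution
per_card_iops = cache_iops // flash_cache_cards
print(cache_iops, per_card_iops)  # -> 44000 22000
```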
Now let's estimate how many flash drives would be needed to achieve the same result without the Flash Cache cards.
It turns out that matching this IOPS figure would take about 15 flash drives (RAID-DP 12+2 plus one hot spare).
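The drive count can be reproduced with the same kind of sketch; the per-SSD figure (~3,700 sustained IOPS under this read-heavy profile) is our assumption, not a number from the report:

```python
import math

cache_iops = 44_000      # the Flash Cache contribution we want to replace
iops_per_ssd = 3_700     # assumed sustained IOPS per flash drive

data_drives = math.ceil(cache_iops / iops_per_ssd)  # -> 12
raid_dp_parity = 2       # RAID-DP: two parity drives per group
hot_spare = 1
print(data_drives + raid_dp_parity + hot_spare)     # -> 15
```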
To be fair, one Flash Cache 512GB card is more expensive ($27K, US list price) than a shelf with 24 SAS 10K 450GB drives ($25K, US list price), but it is probably still cheaper than a shelf with 15 costly flash drives.
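A rough cost-per-IOPS comparison makes the point sharper. The list prices are from the BOM above; the IOPS figures are our own earlier estimates, and the shelf's throughput is taken as the pro-rata share of 24 drives out of the 184 data spindles in the ~26K backend estimate:

```python
# Rough $/IOPS comparison -- list prices from the BOM, IOPS from our
# earlier estimates (approximations, not vendor data).
flash_cache_card = (27_050, 22_000)        # ($, IOPS per card)
sas_shelf = (25_000, 26_000 * 24 / 184)    # 24 drives' share of spindle IOPS

for name, (price, iops) in [("Flash Cache card", flash_cache_card),
                            ("24x SAS 10K shelf", sas_shelf)]:
    print(f"{name}: ${price / iops:.2f} per IOPS")
```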