QNAP TS-879U-RP 10GbE NAS Server
Reviews - Featured Reviews: Network
Written by Bruce Normann
Monday, 19 March 2012
NAS System Overhead Measurements
I've discussed the potential impact the NAS hardware has on performance in general terms so far. The hard reality is that the CPU, drive controllers, memory, and network subsystems have a direct and profound impact on the throughput of a NAS device. In extreme cases where multiple drives (4+) are arranged in higher-order RAID configurations, the CPU has a ton of work to do, calculating parity and parceling the data out across multiple drives. In-line data encryption adds another potential load to the infrastructure. In this section, I'm going to look at some results from the System Monitor capability that is available on the QNAP Turbo NAS server.
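The parity work itself is conceptually simple, which is why its cost scales with write bandwidth: every stripe written to a RAID 5 array needs an XOR across all of its data blocks, and a lost block is rebuilt the same way. Here's a minimal Python sketch of that math (illustrative only, not QNAP's actual RAID implementation):

```python
# Byte-wise XOR parity, the core of RAID 5 (stripe layout simplified away).
def parity(blocks):
    """XOR all data blocks together to produce the parity block."""
    p = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            p[i] ^= b
    return bytes(p)

def rebuild(surviving_blocks, parity_block):
    """Recover a lost data block by XOR-ing the parity with the survivors."""
    return parity(surviving_blocks + [parity_block])

data = [b'\x0f\xf0', b'\xaa\x55', b'\x33\xcc']   # three "drives" worth of data
p = parity(data)
# Simulate losing the middle block and rebuilding it from the rest + parity.
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
```

With 7 data blocks per stripe on an 8-drive array, the NAS has to run this XOR over every byte it writes, which is exactly the kind of streaming workload that separates a real CPU from an embedded one.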
Let's start off looking at network bandwidth usage on the NAS server. During straight data transfers to and from the PC, with 8 disks configured as RAID 5 on the TS-879U-RP, the results show the single 10GbE connection consistently pushing and pulling over 500 MB/s of data through the wire. No real surprises here, just secondary confirmation that the data is actually being moved around from one place to another. You never know when an unsuspected buffer will decide to make its presence known. The peak transfer rate during these tests is shown by a marker on the chart, and it's sitting at 586 MB/s. That's about 20% higher than the average throughput, which makes sense when you consider the effect of various system buffers and wait states. These charts had a lot more detail in them when each transfer took about 100 seconds to complete; now that transfers are over in about 20-25 seconds, the refresh rate of the chart is a little too low to capture much detail.
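The peak-versus-average relationship is easy to sanity-check from a trace. The sample values below are hypothetical stand-ins, not the actual System Monitor data, but they show how the 586 MB/s marker relates to a sustained average in the low 500s:

```python
# Hypothetical bandwidth samples (MB/s) standing in for a monitor trace.
samples = [505, 470, 520, 498, 540, 586, 450, 510, 430, 495]

avg = sum(samples) / len(samples)
peak = max(samples)
overshoot = 100 * (peak / avg - 1)   # percent by which the peak exceeds the average
```

With an average around 500 MB/s and a momentary peak of 586 MB/s, the peak lands roughly 15-20% above the mean, which is the kind of gap buffering and wait states normally produce.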
Now let's look at CPU usage on the NAS server for the same set of transfers. During straight data transfers, the results show the Intel Core i3-2120 CPU still not being pushed to the max. Data writes to the NAS still load more cores than reads do, but the load never really gets higher than 50% on any of the cores. During read tests, some of the additional "hyper-threaded" cores are doing close to nothing; they're involved, but only in a peripheral way. This is in marked contrast to every other NAS I've tested, where the CPU is maxed out at 100% when doing anything involving RAID. The Intel Atoms hold their own for the most part, but the Marvell processors have been a major bottleneck in my experience. Finally, with this corporate beast, we have a CPU that can handle the load. The memory subsystem on the QNAP TS-879U-RP is not being taxed by these file transfers at all; it's not even worth looking at the chart.
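The per-core figures the System Monitor draws come from the same place any Linux-based appliance gets them. Here's a small sketch of sampling per-core utilization from /proc/stat; the QNAP firmware is Linux-based, but this is my own generic illustration, not QNAP's monitoring code:

```python
# Sample per-core CPU utilization from /proc/stat (Linux).
import time

def read_cpu_times():
    """Return {core_name: (busy_jiffies, total_jiffies)} for each logical CPU."""
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            # Per-core lines look like "cpu0 ...", "cpu1 ..."; skip the aggregate "cpu" line.
            if line.startswith("cpu") and line[3].isdigit():
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]          # idle + iowait
                total = sum(fields)
                times[line.split()[0]] = (total - idle, total)
    return times

def utilization(interval=0.5):
    """Percent busy per logical core over a short sampling interval."""
    a = read_cpu_times()
    time.sleep(interval)
    b = read_cpu_times()
    return {c: 100.0 * (b[c][0] - a[c][0]) / max(1, b[c][1] - a[c][1])
            for c in b}
```

On a Hyper-Threaded chip like this one, a chart like QNAP's shows four logical CPUs even though only two physical cores are doing the real work, which is worth remembering when reading the 50% figure above.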
Write tests with AES-256 volume encryption slow the transfer rate down quite a bit, and you can see from the marker on the chart that the peak was only 145 MB/s. The overall traces are pretty consistent, but the multiple small peaks in each transfer show some short-term variation in bandwidth. No surprises there: the refresh rate is pretty slow on these charts, and the various buffers and wait states always throw a couple of wrinkles into any computer performance chart. In the next chart we'll see that the CPU gets hit hard, and in spikes, which is a factor that impacts the network throughput traces as well.
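The size of the penalty is easy to put a number on, using the two peak figures read off the charts:

```python
# Peak write rates taken from the System Monitor charts (MB/s).
plain_peak = 586    # unencrypted RAID 5 write
aes_peak = 145      # AES-256 encrypted volume write

slowdown = plain_peak / aes_peak   # encryption cuts peak throughput roughly 4x
```

A 4x haircut on peak throughput is a clear sign the bottleneck has moved off the disks and network and onto the CPU.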
Finally, let's look again at the CPU workload during disk write tasks with 256-bit encryption enabled. (Reading the encrypted data back doesn't tax the system as heavily, as far as I could see.) With data encryption in the mix, the load on each of the CPU cores is much higher, quite often spiking up to 100%. Remember that these are virtual CPUs: the Intel Core i3-2120 has only two physical cores, but it supports Hyper-Threading. Also, the Core i3 does not support the recent AES-NI instructions, so it's using brute force to encrypt this data. With the 10GbE interface keeping the bandwidth pipeline open, it looks like the CPU may have a bit of headroom left, but not much.
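A back-of-envelope estimate shows why software AES pins this CPU. The 145 MB/s peak comes from the chart above; the 3.3 GHz clock is the i3-2120's rated speed, and the "two cores' worth of effective work" figure is my own rough assumption from the load traces:

```python
# Rough cycles-per-byte estimate for software (non-AES-NI) encryption here.
clock_hz = 3.3e9          # i3-2120 rated clock
busy_cores = 2            # assumed effective parallelism (2 physical cores near full load)
throughput_bps = 145e6    # peak encrypted write rate from the chart, bytes/s

cycles_per_byte = clock_hz * busy_cores / throughput_bps
# Tens of cycles per byte is in the right range for table-based software AES;
# AES-NI hardware brings that down to the low single digits.
```

That gap between tens of cycles per byte and the few that AES-NI needs is exactly the headroom this CPU is missing.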
I hope this section showed you some objective reasons why the infrastructure that any NAS product brings to the table is important to its overall performance. As the number of drive bays goes up, the hardware requirements increase as well, and the price has to follow. I know it's disheartening to see that you don't get great economies of scale on the larger NAS units, but it would be even more of a shame if they didn't perform up to their true capabilities because the hardware was holding them back. In this case, the network interface that was definitely holding the system back before is no longer an issue, and the system is showing the balanced performance that is typical of a well-designed NAS.
Now that we've shown you all the performance information, I'll share some Final Thoughts and then move on to our Conclusion page.