QNAP TS-879U-RP NAS Network Storage Rack Server
Reviews - Featured Reviews: Network
Written by Bruce Normann
Tuesday, 07 February 2012
Page 14 of 16
NAS System Overhead Measurements
I've discussed the potential impact the NAS hardware has on performance in general terms so far. The hard reality is that the CPU, drive controllers, memory, and network subsystems have a direct and profound impact on the throughput of a NAS device. In extreme cases where multiple drives (4+) are arranged in higher-order RAID configurations, the CPU has a ton of work to do, calculating parity bits and parsing them out to multiple data streams. In-line data encryption adds another potential load to the infrastructure. In this section, I'm going to look at some results from the System Monitor capability that is available on the QNAP Turbo NAS server.
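To make that parity workload concrete, here is a minimal Python sketch, my own illustration rather than anything from QNAP's firmware, of the XOR parity math a RAID 5 array performs on every stripe:

```python
# A minimal sketch of the RAID 5 parity math the NAS CPU performs.
# For every stripe, the parity block is the XOR of the data blocks,
# so any single lost block can be rebuilt from the survivors.

def raid5_parity(blocks):
    """XOR a list of equal-length data blocks into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Three data drives plus one parity drive (a 4-disk RAID 5 stripe).
data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]
parity = raid5_parity(data)

# Rebuild drive 1 after a failure: XOR the survivors with the parity.
rebuilt = raid5_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Multiply that inner loop by every byte in a multi-gigabyte transfer and it's clear why a weak CPU becomes the bottleneck in parity RAID.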
Let's start off looking at CPU usage on the NAS server. During a straight data transfer from the PC to 4 disks configured as RAID 5 on the TS-879U-RP, the results show the Intel Core i3-2120 CPU coasting along at close to 25% on both cores, with the additional "hyper-threaded" cores doing close to nothing. They're involved, but only in a peripheral way. This is in marked contrast with every other NAS I've tested, where the CPU is maxed out at 100% when doing anything involving RAID. The Intel Atoms hold their own for the most part, but the Marvell processors have been a major bottleneck in my experience. Finally, with this corporate beast, we have a CPU that can handle the load. The memory subsystem on the TS-879U-RP is not being taxed by these file transfers at all. It's not even worth looking at the chart.
The host system is also tooling along at about 25% on the CPU (see the green trace in every graph below...), barely breathing hard. The disk subsystem is having an even easier time of it, cruising at less than 15% of its throughput capacity. That's what you would expect for a 3rd-generation SSD capable of transferring over 500 MB/s in sequential tests. The Network trace is where we see the real issue. That GbE NIC is working overtime and still can't keep up! I'm sad to say, we are completely limited in this test set by the network interface. It's maxed out, and we aren't going to get any more unless we spend an additional $1,000 on a new pair of 10GbE NICs from Intel.
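For reference, a bit of back-of-the-envelope arithmetic (using typical frame-overhead figures I'm assuming here, not measurements from this review) shows why a single GbE link tops out right around the numbers seen in these tests:

```python
# Rough estimate of the payload ceiling on one Gigabit Ethernet link,
# assuming standard 1500-byte frames and typical TCP/IPv4 overhead.

LINK_RATE_BPS = 1_000_000_000      # 1 Gb/s raw signalling rate

PAYLOAD = 1460                     # TCP payload per standard frame
HEADERS = 40                       # IPv4 (20) + TCP (20) headers
ETHERNET = 14 + 4                  # Ethernet header + frame check sequence
PREAMBLE_GAP = 8 + 12              # preamble/SFD + inter-frame gap

wire_bytes = PAYLOAD + HEADERS + ETHERNET + PREAMBLE_GAP
efficiency = PAYLOAD / wire_bytes
goodput_mb_s = LINK_RATE_BPS / 8 * efficiency / 1e6

print(f"{efficiency:.1%} efficient -> ~{goodput_mb_s:.0f} MB/s of payload")
```

That works out to roughly 118 MB/s of file data before any protocol chatter, which is right in line with the mid-110s MB/s wall these transfers keep hitting.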
Let's take a look at the network interface on the NAS side. This particular chart was produced during five disk-writing tests, with both Ethernet connections set up for teaming via IEEE 802.3ad Link Aggregation. The first thing to inspect is the green trace, which shows Packets Received by the NAS. Since this was a "Write to NAS" test, you can see that the data throughput into the NAS is pretty well maxed out on the Ethernet 2 connection, at 116.1 MB/s (929 Mbps). Over on the Ethernet 1 connection, you see much less data being sent from the NAS back to the host computer, only 1.7 MB/s. This is likely just housekeeping data, checksums and such. So in theory, teaming the two GbE NICs together allows for double the network throughput; in reality, it only does that if you have equal amounts of data being transferred IN and OUT of the device. In real-world usage, that's a distinct possibility for some applications, but in my experience many data storage systems get hit asymmetrically all the time. In a typical tech office, everyone needs to check their work out of the "vault" when they get started in the morning, and they all need to check it back in before they leave for the night.
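There's another reason a single big transfer can't spread across the team: 802.3ad balances traffic per flow, not per packet, typically by hashing address fields, so every frame between one host and the NAS lands on the same physical link. A toy sketch of the idea (the hash function and MAC addresses here are hypothetical, chosen purely for illustration):

```python
# Toy illustration of 802.3ad-style per-flow link selection.
# Real implementations hash MAC/IP/port fields; the point is that
# the hash is constant for one flow, so one transfer uses one link.

def pick_link(src_mac, dst_mac, n_links=2):
    """Toy flow-hash: XOR the low bytes of the two MACs, mod link count."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_links

host = bytes.fromhex("001b21a0b0c1")   # hypothetical MAC addresses
nas  = bytes.fromhex("00089bd0e0f2")

# Every frame between this host/NAS pair hashes to the same link,
# so one large file copy is still capped at a single GbE link's rate.
links = {pick_link(host, nas) for _ in range(1000)}
assert len(links) == 1
```

Teaming pays off with many clients hitting the NAS at once, since different flows hash to different links, but a single benchmark stream sees only one GbE pipe.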
In an earlier review, I said, "One day, I'm going to load up one of the big NAS units with high end SSDs in RAID 0 and let it rip; then we'll see where the system bottlenecks are." Well, here it is: it's the industry standard network interface that's holding the big rigs back. Once a pair of 10GbE NICs are brought onto the team, I'm sure the load will balance out and the other team members will be pulling more weight.
Just to show the contrast between the TS-x79 series and the lesser models in the product line, take a look at the CPU utilization from an earlier test. The NAS CPU is being taxed to the max during these file transfers, with either small numbers of large files or a large number of small files. In the chart below, you can see some occasional dips where individual, smaller files were transferred. The system buffers are getting bounced around during this scenario and you see some sharp drops, with a corresponding sharp recovery. There's nowhere for the CPU to hide in a high performance NAS appliance, and the ARM processor in QNAP's lower-priced models gets hammered pretty badly in typical use cases.
Finally, I'll give you a glimpse of some further testing I plan to do on the TS-879U-RP. I reformatted all eight drives in RAID 5, but this time I selected the option to encrypt the data to the AES 256-bit standard. Now, during the disk write tasks, the CPU gets a little more of a workload. Reading the encrypted data doesn't tax the system as heavily, as far as I could see. In the CPU chart at the beginning of this section, you can see that CPU 2 and CPU 4 were just along for the ride; there was nothing for them to do. With data encryption in the mix, the load on them is much higher, spiking up to 100% quite often. Remember that these are virtual CPUs, as the Intel Core i3-2120 has only two physical cores, but it supports Hyper-Threading. Also, the Core i3 does not support the recent AES-NI enhancements, so it's using brute force to encrypt this data. With the GbE interface keeping the throughput artificially low, it looks like the CPU still has some headroom left.
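Some rough arithmetic backs up that headroom claim. The cycles-per-byte figures below are my own ballpark assumptions for table-based versus hardware-accelerated AES, not measurements from the TS-879U-RP:

```python
# Ballpark single-thread AES throughput on the Core i3-2120,
# using assumed cycles-per-byte costs (not measured values).

CLOCK_HZ = 3.3e9          # Core i3-2120 clock speed
SW_CYCLES_PER_BYTE = 25   # assumed cost of software AES, no AES-NI
NI_CYCLES_PER_BYTE = 1.5  # assumed cost with AES-NI, for comparison

def aes_mb_per_sec(cycles_per_byte, clock_hz=CLOCK_HZ):
    """Single-thread AES throughput estimate in MB/s."""
    return clock_hz / cycles_per_byte / 1e6

print(f"software AES: ~{aes_mb_per_sec(SW_CYCLES_PER_BYTE):.0f} MB/s per thread")
print(f"with AES-NI:  ~{aes_mb_per_sec(NI_CYCLES_PER_BYTE):.0f} MB/s per thread")
```

Even at a pessimistic 25 cycles per byte, one thread of brute-force AES pushes well past a single GbE link's ~116 MB/s, which is consistent with the CPU spiking but never becoming the bottleneck in these tests.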
I hope this section showed you some objective reasons why the infrastructure that any NAS product brings to the table is important to its overall performance. As the number of drive bays goes up, the hardware requirements increase as well, and the price has to follow. I know it's disheartening to see that you don't get great economies of scale on the larger NAS units, but it would be even more of a shame if they didn't perform up to their true capabilities because the hardware was holding them back. In this case, the network interface is definitely holding the system back, and I hope to rectify that in the future.
Now, let's look at some Final Thoughts, and then move on to our Conclusion and Product Ratings.