QNAP TS-219P+ NAS Network Server
Reviews - Featured Reviews: Network
|Written by Bruce Normann|
|Thursday, 30 June 2011|
NAS System Overhead Measurements
I've discussed in general terms the potential impact the NAS hardware has on performance so far. The consensus is that the CPU, drive controllers, memory, and network subsystems all have a direct and profound impact on the throughput of a NAS device. In extreme cases, where multiple drives (4+) are arranged in higher-order RAID configurations, the CPU has a ton of work to do, calculating parity bits and parceling them out across multiple data streams. In this section, I'm going to look at some results from the System Monitor capability that is available both on the host PC and on the QNAP Turbo NAS server.
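To give a feel for the parity work mentioned above, here is a minimal Python sketch of the core RAID 5 operation: the parity block of a stripe is the XOR of its data blocks, and a lost block is rebuilt by XORing the survivors with the parity. The toy 4-byte block size is an assumption for illustration; real arrays use much larger stripes, which is exactly why the CPU load adds up.

```python
# Minimal sketch of the parity math a NAS CPU performs for RAID 5.
# Block contents and the tiny block size are illustrative assumptions.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks in one stripe (toy 4-byte blocks).
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"
parity = xor_blocks([d0, d1, d2])

# If d1's drive fails, XOR of the surviving blocks and parity recovers it.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1
```

The same XOR pass runs over every stripe written, which is why a low-power SoC can become the bottleneck in higher-order RAID setups.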
Let's start off looking at CPU usage on the NAS server. During a straight data transfer from the PC to the single disk on the TS-219P+, the results show the Marvell 88F6282 SoC device completely maxed out at 100%. The two blocks in the chart represent two different sets of files being transferred, with a break between them. The first block represents ten 1GB files transferred by one Windows command, and you can see little dips where the CPU paused between each file. The second block represents one 10GB file, transferred the same way. Any dips in CPU utilization in this section are due to system wait states thrown in by some other sub-system crying "Uncle" for a brief period of time. The little blip on the left occurred when I deleted 20GB of data from the target directory on the NAS, to get it ready to accept the file transfers from the PC. It's interesting how you can get a 50% load just by deleting 11 files from the allocation table.
The memory subsystem on the QNAP TS-219P+ is not being taxed by these file transfers at all. Unless you plan to use the NAS for all of the "extra" things it can do, as a media server and such, don't worry about the fact that it only comes with 512MB of memory capacity. That's plenty, at least for the basic disk functions.
The network interface is getting more of a workout than the memory, but it is still running well below the throughput limits of the Gigabit Ethernet (GbE) interface. There's a lot of extra capacity here, as there should be, given that many of the larger NAS devices are running multiple disks in RAID configurations with a single GbE connection. The larger and more highly specified QNAP units (TS-x59...) all have dual Ethernet connections that allow for teaming via IEEE 802.3ad Link Aggregation, which in certain cases allows for almost double the network throughput. This is only going to be required in rare cases, where both systems connected this way have the raw transfer speed to make it necessary. That's the sort of thing you're only going to see in a corporate LAN room, at least for now. One day, I'm going to load up one of the big NAS units with high end SSDs in RAID 0 and let it rip; then we'll see where the system bottlenecks are.
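The "almost double" ceiling from link aggregation is easy to put numbers on. A quick back-of-the-envelope calculation, using theoretical line rates and ignoring protocol overhead (and the fact that 802.3ad distributes individual flows across links, so a single stream still sees only one link's worth):

```python
# Theoretical aggregate throughput of two teamed GbE links.
# Line-rate figures only; real-world numbers are lower due to
# Ethernet/TCP overhead and per-flow hashing in 802.3ad.
GBE_LINE_RATE_MBPS = 1000   # one Gigabit Ethernet link, in Mb/s
LINKS = 2                   # dual ports teamed via 802.3ad

aggregate_mbps = GBE_LINE_RATE_MBPS * LINKS
aggregate_mb_per_s = aggregate_mbps / 8   # bits -> bytes

print(f"{aggregate_mbps} Mb/s aggregate, ~{aggregate_mb_per_s:.0f} MB/s ceiling")
```

That ~250 MB/s theoretical ceiling is why only fast multi-client or RAID-backed setups can actually exploit a teamed pair.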
The network throughput scales right along with the disk throughput, as seen here. In this test, I was using a more sophisticated file transfer program, called Rich Copy. It's sort of a Microsoft product, in that it was developed by Ken Tamaru while he worked at Microsoft, but it was never a real product with Marketing support; it was just thrown in as a freebie on several versions of their server software over the years. It is still available for download via TechNet, but it is not supported by Microsoft, and some bugs that have been reported have not been acknowledged or addressed by the developer. Its main strength is that it copies several files simultaneously in a multi-threaded operation, which can drastically reduce the time required for multi-gigabyte file transfers. It also has an easy-to-use GUI, with a status window that aids in benchmarking. It will not copy open files, so it has limited usefulness as a backup utility, but for trying to squeeze the maximum throughput out of a NAS, it's excellent. In the following chart, you see the effect on the network load as a multi-file transfer operation finishes up, and the number of active threads goes down in discrete increments toward the end. The last file was much bigger than the others, so the long plateau at ~20 MB/s is the throughput with one thread running.
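Rich Copy's multi-threaded approach can be sketched in a few lines of Python: hand each file in a directory tree to a thread pool, so several copies are in flight at once. This is only an illustration of the technique, not Rich Copy's actual implementation; the paths and worker count are assumptions.

```python
# Sketch of a multi-threaded file copy, the core idea behind Rich Copy.
# Paths and the worker count are illustrative assumptions.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_threaded(src: Path, dst: Path, workers: int = 4) -> int:
    """Copy every file under src into dst using `workers` parallel threads.

    Returns the number of files copied.
    """
    files = [p for p in src.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for f in files:
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            pool.submit(shutil.copy2, f, target)
        # Leaving the 'with' block waits for all pending copies to finish.
    return len(files)
```

With many small-to-medium files, overlapping the per-file latency this way is what lets a multi-threaded copier fill the pipe better than a one-file-at-a-time `copy` command; as the chart shows, throughput steps down as the last few threads drain.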
In contrast, the host PC is just loafing along during these file transfers, with the notable exception of the HDD, which is spinning right along at 10,000 RPM, with the heads chunking away. The CPU is running below 10% utilization, memory usage is essentially zero, and the network interface is running around 300 Mb/s, well below its 1,000 Mb/s capacity.
Now, let's look at some Final Thoughts, and then move on to our Conclusion and Product Ratings.
NAS Comparison Products