
QNAP TS-219P+ NAS Network Server
Reviews - Featured Reviews: Network
Written by Bruce Normann   
Thursday, 30 June 2011
Table of Contents: Page Index
QNAP TS-219P+ NAS Network Server
QNAP v3.4 New Features
Closer Look: QNAP TS-219P
Insider Details: QNAP TS-219P
QNAP Turbo NAS Features
QNAP TS-219P NAS Hardware
QNAP TS-219P Software
QPKG Center Software Expansion
NAS Testing Methodology
Basic-Disk Test Results
NAS System Overhead Measurements
NAS Server Final Thoughts
QNAP TS-219P Conclusion

NAS System Overhead Measurements

I've discussed the potential impact the NAS hardware has on performance in general terms so far. The consensus is that the CPU, drive controllers, memory, and network subsystems have a direct and profound impact on the throughput of a NAS device. In extreme cases where multiple drives (4+) are arranged in higher-order RAID configurations, the CPU has a great deal of work to do, calculating parity and distributing the data across multiple streams. In this section, I'm going to look at some results from the System Monitor capability that is available both on the host PC and on the QNAP Turbo NAS server.
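To illustrate why parity RAID loads the CPU so heavily, here is a minimal sketch of the XOR parity calculation a RAID 5 array performs per stripe. This is a pure-Python illustration of the principle, far slower than a real controller or kernel implementation:

```python
# Minimal illustration of RAID 5-style parity: the parity block is the
# byte-wise XOR of the data blocks in a stripe. The CPU (or controller)
# must do this for every stripe written -- which is why parity RAID
# works a small SoC much harder than a single-disk volume does.

def parity_block(data_blocks):
    """XOR all data blocks in a stripe together to produce the parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild_block(surviving_blocks, parity):
    """Recover a lost block: XOR the parity with the surviving data blocks."""
    return parity_block(list(surviving_blocks) + [parity])

# Three data blocks on a four-drive array:
stripe = [b"AAAA", b"BBBB", b"CCCC"]
p = parity_block(stripe)

# Simulate losing the second drive, then rebuild its block from the rest:
recovered = rebuild_block([stripe[0], stripe[2]], p)
assert recovered == b"BBBB"
```

The XOR trick is what makes single-drive recovery possible, but it also means every write touches the CPU; RAID 6 adds a second, more expensive parity calculation on top.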

Let's start off looking at CPU usage on the NAS server. During a straight data transfer from the PC to the single disk on the TS-219P+, the results show the Marvell 88F6282 SoC device completely maxed out at 100%. The two blocks in the chart represent two different sets of files being transferred, with a break between them. The first block represents ten 1GB files transferred by one Windows command, and you can see little dips where the CPU paused between each file. The second block represents one 10GB file, transferred the same way. Any dips in CPU utilization in this section are due to system wait states thrown in by some other sub-system crying "Uncle" for a brief period of time. The little blip on the left occurred when I deleted 20GB of data from the target directory on the NAS, to get it ready to accept the file transfers from the PC. It's interesting how you can get a 50% load by just deleting 11 files from the allocation table.
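If you want to reproduce this kind of sequential, timed transfer for your own benchmarking, the sketch below copies files one after another, the way a plain Windows copy command does, and reports aggregate throughput. The file count and sizes here are small stand-ins, not the 1GB files used in this review; scale them up for a meaningful NAS test:

```python
# Sequential file-copy timing sketch -- copies each file one at a time
# and reports total throughput. The 1 MB stand-in files here should be
# replaced with large (e.g. 1 GB) files, and dst_dir pointed at a mapped
# NAS share, for a real benchmark run.
import os
import shutil
import tempfile
import time

def timed_sequential_copy(sources, dest_dir):
    """Copy files one at a time; return (total_bytes, elapsed_seconds)."""
    total = 0
    start = time.perf_counter()
    for src in sources:
        shutil.copy(src, dest_dir)
        total += os.path.getsize(src)
    return total, time.perf_counter() - start

# Demo with small temporary files standing in for the 1 GB test files:
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
sources = []
for n in range(10):
    path = os.path.join(src_dir, f"test{n}.bin")
    with open(path, "wb") as f:
        f.write(os.urandom(1024 * 1024))  # 1 MB stand-in file
    sources.append(path)

nbytes, secs = timed_sequential_copy(sources, dst_dir)
print(f"{nbytes / 1e6:.0f} MB in {secs:.2f} s -> {nbytes / 1e6 / secs:.1f} MB/s")
```

Timing on the client side like this captures end-to-end throughput, including any pauses between files, which is exactly what shows up as the little dips in the CPU chart.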


The memory subsystem on the QNAP TS-219P+ is not being taxed by these file transfers at all. Unless you plan to use the NAS for all of the "extra" things it can do, as a media server and such, don't worry about the fact that it only comes with 512MB of memory. That's plenty, at least for the basic disk functions.


The network interface is getting more of a workout than the memory, but it is still running well below the throughput limits of the Gigabit Ethernet (GbE) interface. There's a lot of extra capacity here, as there should be, given that many of the larger NAS devices are running multiple disks in RAID configurations with a single GbE connection. The larger and more highly specified QNAP units (TS-x59...) all have dual Ethernet connections that allow for teaming via IEEE 802.3ad Link Aggregation, which in certain cases allows for almost double the network throughput. This is only going to be required in rare cases, where both systems connected this way have the raw transfer speed to make it necessary. That's the sort of thing you're only going to see in a corporate LAN room, at least for now. One day, I'm going to load up one of the big NAS units with high-end SSDs in RAID 0 and let it rip; then we'll see where the system bottlenecks are.


The network throughput scales right along with the disk throughput, as seen here. In this test, I was using a more sophisticated file transfer program called Rich Copy (v4.0.217.0). It's sort of a Microsoft product, in that it was developed by Ken Tamaru while he worked at Microsoft, but it was never a real product with marketing support; it was just thrown in as a freebie on several versions of their server software over the years. It is still available for download via TechNet, but it is not supported by Microsoft, and some bugs that have been reported have not been acknowledged or addressed by the developer. Its main strength is that it copies several files simultaneously in a multi-threaded operation, which can drastically reduce the time required for multi-gigabyte file transfers. It also has an easy-to-use GUI, with a status window that aids in benchmarking. It will not copy open files, so it has limited usefulness as a backup utility, but for trying to squeeze the maximum throughput out of a NAS, it's excellent. In the following chart, you see the effect on the network load as a multi-file transfer operation finishes up, and the number of active threads goes down in discrete increments toward the end. The last file was much bigger than the others, so the long plateau at ~20 MB/s is the throughput with one thread running.
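Rich Copy's main trick, copying several files at once, is easy to approximate with a thread pool. The sketch below is my own illustration of the technique, not Rich Copy's actual implementation:

```python
# Multi-threaded copy sketch in the spirit of Rich Copy: several files
# are copied concurrently, so as the queue drains the number of active
# threads steps down -- the stair-step behavior visible in the
# network-load chart. This is an illustration, not Rich Copy's code.
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

def multithreaded_copy(sources, dest_dir, threads=4):
    """Copy files concurrently; return the list of destination paths."""
    def copy_one(src):
        dst = os.path.join(dest_dir, os.path.basename(src))
        shutil.copy(src, dst)  # like Rich Copy, this fails on locked/open files
        return dst
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(copy_one, sources))

# Demo with small temporary files:
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
sources = []
for n in range(8):
    path = os.path.join(src_dir, f"file{n}.bin")
    with open(path, "wb") as f:
        f.write(b"x" * 4096)
    sources.append(path)

copied = multithreaded_copy(sources, dst_dir)
print(f"copied {len(copied)} files")
```

With many small-to-medium files, the concurrent threads keep the network pipe fuller than a single sequential stream can; with one huge file left at the end, throughput drops back to single-thread speed, which is the long plateau in the chart.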


In contrast, the host PC is just loafing along during these file transfers, with the notable exception of the HDD, which is spinning right along at 10,000 RPM, with the heads chunking away. The CPU is running below 10% utilization, memory usage is essentially zero, and the network interface is running around 300 Mb/s, well below its 1,000Mb/s capacity.


I hope this section showed you some objective reasons why the infrastructure that any NAS product brings to the table is important to its overall performance. As the number of drive bays goes up, the hardware requirements increase as well, and the price has to follow. I know it's disheartening to see that you don't get great economies of scale on the larger NAS units, but it would be even more of a shame if they didn't perform up to their true capabilities because the hardware was holding them back.

Now, let's look at some Final Thoughts, and then move on to our Conclusion and Product Ratings.




# Why no consumer drives? (Dirk, 2011-07-23 23:23)

Regarding the "Cons" in the conclusion:
Why are consumer hard disks often not the right choice for drive arrays, even a simple RAID-1?

I've heard about it before, but didn't find a real explanation. If you activate HDD sleep after xx idle minutes, the maximum hours of operation should be limited. What else?
# RE: Why no consumer drives? (Bruce, 2011-07-24 06:55)
QNAP has a detailed compatibility list on their site, but you have to read between the lines to find out WHY consumer drives don't always cut it in RAID applications. Two things are primarily responsible: a software setting in the drive itself and the mechanical design of the platter bearings.

The consumer drives lack a firmware feature called "Time-Limited Error Recovery" (TLER); when a consumer drive drops into a deep error-recovery cycle, the RAID controller can time out and kick the drive out of the array. There's tons of info on the web about it, including on the major drive manufacturers' sites.

The second factor is that the drive spindles can wear out quickly from excessive vibration when many, many drives are all chattering away in the same rack. So, some drives (WD Black, for instance) are approved by the manufacturer for RAID 0 or RAID 1 when there are only two drives in the enclosure. This is great news for all the two-bay NAS owners...
# RE: RE: Why no consumer drives? (Dirk, 2011-07-24 08:32)
I see, and I remember that I've read complaints about WD's "deep error recovery" with consumer drives. Too bad, because most home users might prefer the energy efficient drives.

By the way: Thanks for the extensive review, Bruce!

Your measured power consumption on the page "insider details" (8 W in sleep mode) was with or without drives installed? In many reviews, the sleep mode consumption with discs amounts to 12-13W, which is on par with the comparable Synology DS-211+.
# Sleep Mode (Bruce, 2011-07-24 12:14)
There was one drive installed at the time I did the power measurement. In sleep mode, the drive is not spinning, that's why the power usage was lower.

