|QNAP TS-879U-RP 10GbE NAS Server|
|Reviews - Featured Reviews: Network|
|Written by Bruce Normann|
|Monday, 19 March 2012|
Closer Look: QNAP TS-879U-RP NAS with 10GbE
The QNAP TS-879U-RP shares the same basic technology platform as all the new TS-x79 models, and it's actually on the low end of this series, believe it or not. There are 8, 10, and 12 bay units available, in both tower and rack-mount formats. The size and weight are substantial: 88mm(H) x 439mm(W) x 520mm(D), and 27.6 pounds without drives installed. Each HDD you install will add about 1-1/2 pounds, depending on your choice of drive. Multiple SATA 6Gb/s drives can be installed as: a single disk, RAID 0 (Disk Striping), RAID 1 (Disk Mirroring), RAID 5 (Block-level striping with distributed parity), RAID 6 (Block-level striping with double distributed parity), RAID 10 (AKA RAID 1+0, a stripe of mirrors), or JBOD (Linear Disk Volume). RAID 5 is a very popular arrangement, and all testing for this phase was done with all eight drive bays loaded and configured as a single RAID 5 volume.
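The main trade-off between those RAID levels is how much raw drive space each one gives up to redundancy. The rules of thumb below are a minimal sketch of the standard capacity math, not anything specific to QNAP's implementation; the 3 TB drive size in the example is a hypothetical figure, not the drives used in this review.

```python
def usable_capacity(raid_level, n_drives, drive_tb):
    """Approximate usable capacity (TB) for the RAID levels the TS-879U-RP supports.

    Standard textbook rules of thumb; assumes all drives are the same size
    and ignores filesystem and metadata overhead.
    """
    if raid_level in ("single", "JBOD", "RAID0"):
        return n_drives * drive_tb          # no redundancy overhead
    if raid_level == "RAID1":
        return drive_tb                     # every drive is a copy of one drive's data
    if raid_level == "RAID5":
        return (n_drives - 1) * drive_tb    # one drive's worth of distributed parity
    if raid_level == "RAID6":
        return (n_drives - 2) * drive_tb    # two drives' worth of distributed parity
    if raid_level == "RAID10":
        return (n_drives // 2) * drive_tb   # half the drives mirror the other half
    raise ValueError(f"unknown RAID level: {raid_level}")

# Eight hypothetical 3 TB drives in a single RAID 5 volume, as in this test setup:
print(usable_capacity("RAID5", 8, 3))   # → 21 (TB)
print(usable_capacity("RAID6", 8, 3))   # → 18 (TB)
```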
Here's what makes this extended test session possible, and it's something you don't see every day on a NAS server: two x8 PCI Express slots. This is where you have to go if you want to get the full performance that the TS-879U-RP is capable of. Plain old 1000BASE-T limits real-world throughput to about 120 MB/s, while this model has the potential for well over 1000 MB/s. You really only need one of these PCIe slots, since most 10GbE NICs come in a dual-port configuration, but products of this caliber need to have some degree of future-proofing built into them. 10GbE is definitely the future; it's just not that widely implemented at this time. One million 10GbE ports shipped in 2007, two million in 2009, and three million in 2010. That's a pretty slow and linear adoption rate, and it's a measure of how entrenched one-Gigabit Ethernet is in the networking world.
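That ~120 MB/s figure falls straight out of the arithmetic: divide the raw line rate by 8 bits per byte, then shave a few percent for Ethernet/IP/TCP framing overhead. The sketch below uses an assumed 97% efficiency factor for illustration; real overhead varies with frame size and protocol.

```python
def line_rate_mbs(gbits_per_s, efficiency=0.97):
    """Approximate payload throughput in MB/s for a given Ethernet line rate.

    The efficiency factor is an assumed figure covering Ethernet, IP, and TCP
    framing overhead; actual values depend on frame size (jumbo frames help).
    """
    bytes_per_s = gbits_per_s * 1e9 / 8      # raw line rate in bytes/s
    return bytes_per_s * efficiency / 1e6    # payload rate in MB/s

print(round(line_rate_mbs(1)))    # → 121, matching the ~120 MB/s seen on 1000BASE-T
print(round(line_rate_mbs(10)))   # ten times that, well over 1000 MB/s
```

The tenfold jump in ceiling is the whole point of the 10GbE upgrade: the network link stops being the bottleneck, and the drives and RAID engine become the limiting factor instead.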
The thin 2U profile of the TS-879U-RP limits the form factor for expansion cards to "low profile", or "½ height", as I like to call them. Fortunately, most of the high-end network cards are intended to be used in just this type of rack-mount hardware, so they come with low profile I/O plates. Either they ship that way as standard, or the plate is included as an accessory. The Intel E10G42BT, X520-T2, 10Gbps Ethernet NIC, looks right at home here in its PCIe 2.0 x8 expansion slot. There's no interference with any of the other components, and the airflow from the centrally mounted fan module blows straight down the length of the card. Network cards with this level of performance need a fair amount of cooling. They don't need as much as a video card, but note that there are two heat sinks on the card and one of them has an integral cooling fan.
Looking at the back panel of the TS-879U-RP, you can see the business end of the Intel X520-T2 and its twin RJ-45 connectors. The two ports are identical, and there are a number of different ways of configuring them from within the QNAP system software, either individually or as a bonded pair. While the configuration options are not as broad as those offered by the Intel Advanced Networking Services driver, the NAS system software does provide the most common and useful alternatives. Both PCI Express expansion slots have access to the outside world through the two removable covers on the back panel. This provides a degree of flexibility in setting up the networking connection on the TS-879U-RP. In a corporate LAN environment, there are some potential advantages to having up to four network ports on a storage server, both for redundancy, and the opportunity to establish a few critical connections directly, instead of running everything through a switch.
Once the additional NIC is installed, its ports are configured from the Network tab in the System Administration section of QSM (QNAP Storage Manager) 3.5. If a network port has a physical connection to another device, then it automatically enters the "active" state and the status is shown on this screen. There were no additional steps needed to install or initialize the new 10GbE interface. All the drivers are already loaded on the NAS, and the new device is automatically detected and configured without any user involvement. The downside to proprietary driver support is usually the limited number of devices that are supported. The upside is the way they are integrated into the overall package, which is seamless in this case.
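QNAP's firmware is Linux-based, so that "active" indicator most likely reflects the kernel's per-interface link state. QSM's own implementation isn't public, but the sketch below shows the generic Linux mechanism, reading each interface's `operstate` from sysfs; it is an illustration of the concept, not QNAP's code.

```python
import glob

def port_states():
    """Report link state per network interface, similar in spirit to what
    QSM's Network screen shows.

    Reads the standard Linux sysfs files /sys/class/net/<iface>/operstate;
    'up' means a cable is connected and the link has negotiated.
    Returns an empty dict on systems without that sysfs layout.
    """
    states = {}
    for path in glob.glob("/sys/class/net/*/operstate"):
        iface = path.split("/")[4]          # the <iface> component of the path
        with open(path) as f:
            states[iface] = f.read().strip()
    return states

for iface, state in sorted(port_states().items()):
    print(f"{iface}: {state}")
```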
During NAS operations, the QNAP Resource Monitor, in the System Status section of QSM 3.5, shows the actual bandwidth usage of each available connection. In this set of testing, I used a single 10GbE connection between the NAS and the host PC. The pink and green traces for Ethernet 4 show the bandwidth used during both Read and Write testing of the NAS. The green trace shows Packets Received, and the pink trace shows Packets Sent from the NAS. These charts offer a useful window into the inner operation of the NAS. Even though they don't provide the precision necessary to generate accurate benchmark performance results, they certainly offer a solid means of keeping an eye on the system during the tests. It's one way of making sure that there aren't some hidden anomalies occurring that might affect the results.
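Under the hood, a monitor like this only needs to sample the interface's byte counters twice and divide by the interval. The sketch below does exactly that using the standard Linux sysfs statistics; the interface name is an assumption for illustration, and this is a generic technique, not the Resource Monitor's actual code.

```python
import time

def throughput_mbs(iface, interval=1.0):
    """Return (rx, tx) throughput in MB/s for one interface, roughly what
    the QNAP Resource Monitor plots per connection.

    Samples the standard Linux counters in
    /sys/class/net/<iface>/statistics/ twice, <interval> seconds apart.
    The interface name (e.g. 'eth4') is an assumed example.
    """
    def read(counter):
        with open(f"/sys/class/net/{iface}/statistics/{counter}") as f:
            return int(f.read())

    rx0, tx0 = read("rx_bytes"), read("tx_bytes")
    time.sleep(interval)
    rx1, tx1 = read("rx_bytes"), read("tx_bytes")
    return ((rx1 - rx0) / interval / 1e6,
            (tx1 - tx0) / interval / 1e6)
```

During a large sequential read from the NAS, the transmit figure for the 10GbE port should sit near the ~1000 MB/s ceiling discussed earlier, which is the same sanity check the Resource Monitor traces provide.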
That's it for the upgrade; there's not a lot to it besides picking the 10GbE NIC that best meets your needs. Most of the decision-making process will involve selecting the most appropriate interconnect specification. CAT 6 was the easiest and cheapest for me to implement, but most people will have to focus more on interoperability with existing hardware on their network. All the most popular connection types are covered by the QNAP compatibility list, so no one should be left out.
Let's take a brief look again at the hardware specs, since this follow-up review is almost exclusively focused on performance.