OCZ Vertex SSD RAID-0 Performance
Reviews - Featured Reviews: Storage
Written by Olin Coles   
Friday, 03 April 2009
Table of Contents: Page Index
OCZ Vertex SSD RAID-0 Performance
Features and Specifications
First Look: OCZ Vertex SSD
Vertex SSD Internal Components
SSD Testing Methodology
Random Access Time Benchmark
Basic IOPS Performance
Linear Bandwidth Speed
I/O Response Time
Buffered Transaction Speed
Windows XP Startup Times
The Truth Behind Heat Output
Solid State Drive Final Thoughts
Vertex RAID-0 Conclusion

Iometer IOPS Performance

EDITOR'S NOTE 06/01/2009: Benchmark Reviews added the Iometer results to this article after it was originally published, in response to reader requests and suggestions.

Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. Iometer does for a computer's I/O subsystem what a dynamometer does for an engine: it measures performance under a controlled load. Iometer was originally developed by Intel Corporation and was formerly known as "Galileo". Intel has since discontinued work on Iometer and gifted it to the Open Source Development Lab (OSDL).

Iometer is both a workload generator (that is, it performs I/O operations in order to stress the system) and a measurement tool (that is, it examines and records the performance of its I/O operations and their impact on the system). It can be configured to emulate the disk or network I/O load of any program or benchmark, or can be used to generate entirely synthetic I/O loads. It can generate and measure loads on single or multiple (networked) systems.

Benchmark Reviews has resisted publishing Iometer results because there are hundreds of different configuration variables available, making it impossible to reproduce our tests without our Iometer configuration file. To measure random I/O response time as well as total I/Os per second, Iometer is set to use 4KB transfer sizes with a 100% random distribution. The tests are given a 50% read and 50% write distribution. Our charts show read and write IOPS performance as well as I/O response time (measured in ms).
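
While we cannot publish every Iometer variable here, readers who want to approximate this workload can script something roughly equivalent. The following is only a rough Python sketch of the access pattern described above, not our Iometer profile: it issues 4KB requests at random offsets with a 50/50 read/write mix against an ordinary test file (the file path and working-set size are placeholder assumptions), and it does not bypass the operating system cache, so its absolute numbers will not match Iometer's.

# Rough emulation of the access pattern above: 4KB transfers, 100% random
# offsets, 50% reads / 50% writes, run for a fixed duration. The file path
# and working-set size are illustrative assumptions, and the OS page cache
# is not bypassed, so results are only indicative.
import os
import random
import time

PATH = "iometer_like_test.bin"   # hypothetical test file, not a raw device
FILE_SIZE = 256 * 1024 * 1024    # 256MB working set (assumption)
BLOCK = 4096                     # 4KB transfer size, as in the Iometer setup
DURATION = 120                   # seconds, matching the article's run length

with open(PATH, "wb") as f:      # pre-allocate the working file
    f.truncate(FILE_SIZE)

ops = 0
total_latency = 0.0
payload = os.urandom(BLOCK)
deadline = time.time() + DURATION

with open(PATH, "r+b", buffering=0) as f:
    while time.time() < deadline:
        offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
        start = time.perf_counter()
        f.seek(offset)
        if random.random() < 0.5:    # 50% read / 50% write distribution
            f.read(BLOCK)
        else:
            f.write(payload)
        total_latency += time.perf_counter() - start
        ops += 1

print(f"IOPS: {ops / DURATION:.0f}")
print(f"Average response time: {total_latency / ops * 1000:.2f} ms")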

[Chart: Iometer random IOPS performance (Intel ICH10)]

Iometer was configured to test for 120 seconds, and the average of five test runs is displayed in our benchmark results. The first tests measured random read and write IOPS performance, where higher numbers are better. In this test the single-level cell OCZ Vertex EX rendered 3106/3091 I/Os and outperformed all other products. A RAID-0 set of Vertex (v1.10 firmware) 120GB MLC SSDs performed at 1517/1515, just slightly ahead of a single Vertex SSD, which rendered 1197 for both read and write IOPS. The OCZ Summit MLC SSD completed 730/733 I/Os. All other products performed far below this group, and are not suggested for high input/output applications.

The Mtron MOBI 3000 performed 107 read and write IOPS, while the Western Digital WD5001AALS rendered 86 and the Seagate 7200.11 completed 77. The newer Mtron MOBI 3500 rendered 58 IOPS, which was worse than the older 3000 model. The OCZ Apex struggled to complete 9 IOPS, and its identically-designed G.Skill Titan managed only 8 IOPS. Clearly, the twin JMicron controllers in these internally RAID-0 drives are built for sequential speed and not input/output operations. Next came the average I/O response time tests...

[Chart: Iometer average response time]

The Iometer random IOPS average response time results were nearly the inverse of the IOPS performance ranking. It's no surprise that SLC drives handle I/O requests far better than their MLC counterparts, but that gap is slowly closing as controller technology improves and cache buffer space grows. In our read/write IOPS performance the SLC OCZ Vertex EX achieves a dramatic lead over the other SSDs tested.

OCZ's Vertex EX offered the fastest read and write response times, measuring 0.26/0.06ms and showing particular strength on write requests. The RAID-0 set of Vertex MLC SSDs scored 0.58/0.07ms, dramatically improving the write response time over a single Vertex SSD, which measured 0.42/0.77ms. The OCZ Summit responded to read requests in 0.78ms, while write requests were a bit quicker at 0.59ms. These times were collectively the best available, as every product measured hereafter performed much slower.

The Mtron MOBI 3000 offered a fast 0.42ms read response time, but suffered a slower 8.97ms write response. Both the WD5001AALS and Seagate 7200.11 hard drives performed around 11ms for reads and 1.2ms for writes. Mtron's newer MOBI 3500 offered great read response times at 0.19ms, but suffered poor write responses at 17.19ms. The worst was yet to come, as the G.Skill Titan and OCZ Apex offered decent 0.42ms read response times but absolutely unacceptable 127ms write times.

Comments 

 
# MR - Anthony 2010-03-18 04:56
I'm always wary of Mbps (bits) and MB (bytes); too many people use them interchangeably. The ads on the same page for this product say "250MB", not bits, so what is the ATTO 249 MB/s maximum read bandwidth??? Bizarre?
 
 
# El Presidente' - Marko 2010-11-27 01:09
Anthony, typically Mbps (megabits per second) refers to a transfer speed, whereas MB (megabytes) refers to a capacity. Whether ignorant people use them interchangeably or not, using this guideline you should always be able to figure out which it is. :)
 
 
# RE: El Presidente' - Olin Coles 2010-11-27 07:51
I'm not exactly clear which side of the argument you're on here, Marko. Read up on the specifications for any SSD product, and you'll see their bandwidth speed represented as MB/s.
 
 
# DKSG - DKSG 2012-04-18 00:41
MB, when used in an advertised capacity, is not megabytes; it's millions of bytes. Bytes or bits are denoted by B or b respectively. When the vendor advertises 250MB, it means 250 million bytes, which is approximately 238.42 binary megabytes (MiB). That 238.42 figure is raw capacity and does not include partitioning and other possible overhead used by the system, which may yield an even lower usable capacity.

When MB is used for bandwidth, make sure you fully understand what the figure measures. In different network or cable setups the bandwidth may be shared, so a single device does not usually get that kind of bandwidth on average. On networks, vendors typically mean megabytes when they write MB unless the fine print says otherwise, but the usual practice is to quote megabits, which looks a lot better on paper.
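
For anyone who wants to double-check that conversion, here is a quick back-of-the-envelope sketch in Python (the 250MB figure comes from the comment above; everything else is standard unit arithmetic):

# Convert a vendor's decimal "MB" figure into binary mebibytes (MiB).
advertised_mb = 250                     # vendor figure, decimal megabytes
raw_bytes = advertised_mb * 1_000_000   # vendors count 1MB as 1,000,000 bytes
mib = raw_bytes / (1024 * 1024)         # operating systems report 2^20-byte units

print(f"{advertised_mb} MB (decimal) = {raw_bytes:,} bytes = {mib:.2f} MiB")
# Output: 250 MB (decimal) = 250,000,000 bytes = 238.42 MiB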
 
 
# RAID-0 Setup - typoknig 2010-05-10 09:52
How exactly did you have your RAID-0 setup during this test? For instance, were you using the Intel Matrix Storage Manager or some other method?
 
 
# Intel ICH10 - Olin Coles 2010-05-10 15:11
RAID-0 was built using the motherboard's Intel ICH10 controller.
 
 
# Stripe size - J Walsh 2010-05-12 08:46
What stripe size was used in the RAID 0 setup and why?
 
 
# 128KB Stripe Size - Olin Coles 2010-05-12 08:49
This article used a 128KB stripe size, which is the largest the Intel ICH10 controller allows for RAID-0 sets.
 
 
# Benchmarking A Bigger RAID 0 Array - typoknig 2010-06-08 22:06
Hi, I have been running the same benchmarks you ran on my RAID 0 array, which has three 120GB OCZ Vertex drives compared to the two used in this benchmark. My results have not been anywhere close to what I thought I would be getting after reading this review. I have posted some info about my results here:

##overclock.net/benchmarking-software-discussion/750979-benchmarking-3-120gb-ocz-vertex-ssds.html

Maybe you can take a look at my results and tell me why my linear read in Everest does not produce a flat line like yours (I realize I used a 512MB block size, but the 1MB block size produced identical results), and why my numbers are so much lower when they should be higher. I have also run the benchmarks without an OS (or any data) on the array at all, and the results are very similar. Any thoughts?
 
 
# Partition alignment - Olin Coles 2010-06-14 19:43
I'm betting that our results are higher because of drive conditioning: partition alignment, diskpart clean all, secure erase, etc. Since TRIM doesn't always pass through to RAID arrays, used drives will produce lower performance results.
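
For readers unfamiliar with the "diskpart clean all" step mentioned above, the command sequence looks roughly like the example below. The disk number here is only an assumption for illustration; confirm it with "list disk" first, because "clean all" zero-fills and wipes the selected drive.

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean all
DISKPART> exit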
 
 
# sanitary erase - typoknig 2010-11-27 21:23
I have had this problem fixed for quite some time now. If you go to the link I provided in my last comment you will see that using sanitary erase did the trick for me... so as you said, "drive conditioning" was my problem. To keep my drives as clean as possible I use the "Wipe Free Space" feature of CCleaner. It does the same thing as wiper.exe, but it works when drives are in RAID (unlike wiper.exe).
 
 
# flash, not DRAM - scott 2010-12-02 15:43
Samsung K9HCG08U1M-PCB00 is flash memory, not DRAM... this is why we call it an SSD
 
 
# 4-drive RAID 0 - Remo 2010-12-23 08:42
Mr. Coles, do you have any idea how a 4-SSD RAID-0 would perform? Would you use it as the boot drive in a Windows 7 system?
 
 
# RE: 4-drive RAID 0 - Olin Coles 2010-12-23 08:44
You should look into the OCZ RevoDrive 2 PCI-Express SSDs, which fit four SSDs into RAID-0 on one board. Our review is here:

benchmarkreviews.com/index.php?option=com_content&task=view&id=635
 
 
# RE: RE: 4-drive RAID 0 - Remo 2010-12-23 08:57
I definitely will look into it. But how much gain in performance would you expect when upgrading from a 2-drive RAID-0 to a 4-drive RAID-0?
 

