SSD Benchmark Tests: SATA IDE vs AHCI Mode
Articles - Featured Guides
Written by Olin Coles
Thursday, 15 April 2010
In a recent Benchmark Reviews technical article, SSD performance was tested in AHCI and IDE mode using only the HD-Tune software to produce results. It wasn't intentional, but our test results were flawed by this single-threaded benchmark tool, and our conclusion did not properly illustrate IOPS performance. In this article, Benchmark Reviews tests the latest JMicron, Indilinx, and SandForce SSDs using a combination of tools to illustrate the true difference between SATA IDE and AHCI mode, and demonstrates which SATA mode is better suited to each drive.
Solid State Drive technology is unfamiliar to many consumers, and as long as there are different ways an SSD can operate, there will always be questions that need answering. Benchmark Reviews offers various SSD tests, but proving performance speeds and matching manufacturer claims is only part of the story. Each SSD processor has unique behaviors: some work well with TRIM and offer improved performance in AHCI mode, while others include Garbage Collection (GC) and work best in IDE mode. In this article, Benchmark Reviews demonstrates how SSDs are tested by the manufacturer, and illustrates how real-world performance is different for end-users.
When Benchmark Reviews first started testing SSD storage products two years ago, we discovered that there were just too many variables that could alter benchmark results. For example, the exact same Solid State Drive may offer one specific write speed while connected to the Intel ICH10 controller, yet could operate up to 30% slower on a Marvell or JMicron SATA controller. Additionally, immature driver software can further degrade performance, or optimized drivers can restore speeds. Complicating matters even more, SSD-specific firmware can add features and improvements, but may also reduce operational speeds.
Readers familiar with the new technology have learned to read reviews from as many sources as possible, which is exactly what we've been suggesting in our product conclusions for almost two years. Some websites don't use special drive-conditioning tools such as DISKPART or Sanitary Erase, and others do. The same is true for SSD owners, who span from hardware enthusiasts to basic computer users. Benchmark Reviews has concentrated on including several different tools for producing quantitative performance results in our articles, while other websites use real-world file transfers and application routines. It's all relative, subjective, and impossible to determine which method is best.
For this article, our tests will focus on three of the most widely used SSD processors: Indilinx, JMicron, and SandForce. Using the Intel ICH10 SATA 3.0Gb/s controller, our SSD tests will benchmark read and write speeds under pristine 'fresh' NAND conditions, using Sanitary Erase (for Indilinx) and DISKPART with the 'clean all' command for drive conditioning. Operational IOPS performance will be tested in much the same way, and will reveal which SATA mode is best suited for SSDs.
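As a rough illustration of the drive-conditioning step described above, the Python sketch below scripts DISKPART's 'clean all' command. The disk number is a hypothetical placeholder and this is not part of the original test procedure; 'clean all' destroys all data on the selected disk, so treat the sketch as an outline only.

    import os
    import subprocess
    import tempfile

    # Hypothetical disk number -- confirm it with DISKPART's "list disk" first.
    DISK_NUMBER = 1

    # DISKPART reads its commands from a script file passed with /s.
    # "clean all" overwrites every sector with zeros, returning the drive to a
    # conditioned state before benchmarking (this erases everything on the disk).
    script = "select disk {}\nclean all\nexit\n".format(DISK_NUMBER)

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name

    try:
        # Must be run from an elevated (Administrator) command prompt.
        subprocess.run(["diskpart", "/s", script_path], check=True)
    finally:
        os.remove(script_path)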
Comments
#support.microsoft.com/kb/977178
Ever tested SCSI versus SATA at the same RPM? Well, there you will see exactly the benefit of access times. Speed is rubbish when the access time isn't good.
Test controllers on access time first, speed second, and the picture will be a lot different in favor of IDE mode.
Same as with the Internet: bandwidth is nice, but if your ping is crap it won't be fast either.
You focus on the wrong points to measure a drive.
It would be more like this:
IOPS is the ship's speed, bandwidth is the cargo capacity.
Why:
If you want to build a small house, and you need parts that can be delivered by either:
1x Big slow ship, that has 100x the capacity required
or
1x Small fast ship, that has just the capacity required
you'll be able to start building your house sooner with the fast ship, as it will arrive sooner.
Another analogy would be to compare bandwidth and IOPS to a highway. Bandwidth determines the number of lanes, and IOPS determines how many travelers get through.
The lower the access time, the closer they can drive behind each other, which also raises the IOPS.
As such, access time is the most important number of them all.
IOPS cannot be high if your access time is slow.
The access time is the time the device needs to find the data and start delivering it.
There is no way on earth your IOPS can be high with a slow access time.
Sorry Olin, but you'd better find a new job if you don't understand the importance of access time.
And yes, I have read your article, and it's way too focused on bandwidth.
Typical mistake: why do you think SCSI drives are twice as fast as the fastest IDE drive at the same RPM and bandwidth?
Exactly, it's the low access time that causes this... not throughput.
Because of your wrong interpretation of access time, I consider your entire article and conclusion useless for anybody.
Do research on the matter and you will find I'm right.
Your IOPS-versus-access-time comparison is completely wrong.
IOPS means I/O operations per second; if your access time is high, the drive will do FEWER I/O operations per second, as it needs to wait for the device to access the data.
Do your research before writing rubbish, as you make a fool of yourself.
Practice ... preach ... honestly.
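To put rough numbers on the access-time argument in the comment above, here is a minimal Python sketch. The latencies are illustrative assumptions, not measured figures; the point is simply that at queue depth 1 the IOPS ceiling is the reciprocal of the access time.

    # Illustrative latencies only -- assumptions for the sake of the arithmetic.
    access_time_hdd_ms = 12.0   # rough 7,200 RPM hard drive seek + rotational latency
    access_time_ssd_ms = 0.1    # rough SSD NAND access latency

    def iops_ceiling_at_qd1(access_time_ms):
        # At queue depth 1 each request waits for the previous access to finish,
        # so the best possible IOPS is simply 1 / access time.
        return 1000.0 / access_time_ms

    print("HDD ceiling: {:6.0f} IOPS".format(iops_ceiling_at_qd1(access_time_hdd_ms)))  # ~83
    print("SSD ceiling: {:6.0f} IOPS".format(iops_ceiling_at_qd1(access_time_ssd_ms)))  # ~10000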
It would be nice to see some tests/reviews of benchmarks that provide results for both camps (Intel & AMD).
I would also appreciate seeing Microsoft's AHCI driver tested/included, as well as AMD's ATI AHCI driver, and the Marvell AHCI driver that is more commonly seen on AMD platforms versus what shows up on Intel platforms.
Thanks for this review, but it is hard to apply what I have read to my AMD platform, which uses the ATI AHCI driver and/or Microsoft's AHCI driver if I use the generic MS Windows 7 AHCI driver.
If you would like to push a drive to its maximum performance in a real-world situation, you should put a single large paging file on the drive and use programs that drive the paging rate as high as possible. When that settles down to a constant rate over time, a measurement of the paging rate will tell you what the maximum performance is in a real-world situation. I am not sure that TRIM will actually improve performance on a busy drive, and it certainly loses the ability to recover any deleted file. A smart TRIM handling on a drive would queue the TRIM requests up and satisfy them when the drive is not otherwise busy, rather than giving them the priority they have under the existing classification. TRIM could certainly help in a situation where the read-to-write ratio is high and the drive is not particularly busy, but this also would lessen the advantage the SSD has over a drive with a large built-in cache.
...continued...
There is no point in doing NCQ, as there is no rotation of the drive where command ordering is needed or even wanted.
NCQ has been proven to hinder performance, as it can stall the controller by making the computer WAIT before giving new commands.
An SSD is a matrix, and ordering is silly as all cells deliver data at the same speed; there is no ordering wanted to optimise around rotation when reading data.
But AHCI still does it, and as such it lowers performance. This can be noticed when using better benchmarks that do proper I/O reading and writing combined with multiple commands at the same time. That benchmark has been around for ages: it's called ATTO.
Give ATTO a high command queue depth and see what happens; it will show the difference. As ATTO gives read and write speeds, for me it's the best hard disk/SSD benchmark on the planet when used properly.
Meaning a VERY large block size and a high queue depth; then it gives pretty accurate numbers.
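For readers who want to see what 'queue depth' means in practice, here is a minimal Python sketch of the idea under stated assumptions: this is not ATTO, the file path is hypothetical and must point at a large pre-created file on the drive under test, and the operating system's cache means the numbers only illustrate the method rather than replace a real benchmark.

    import os
    import random
    import threading
    import time

    TEST_FILE = r"D:\testfile.bin"   # hypothetical pre-created file on the drive under test
    BLOCK_SIZE = 4096                # 4 KiB random reads
    QUEUE_DEPTH = 32                 # concurrent workers standing in for queued commands
    RUN_SECONDS = 10

    completed = 0
    lock = threading.Lock()

    def worker(stop_time, file_size):
        global completed
        # One unbuffered handle per worker so their seeks do not interfere.
        with open(TEST_FILE, "rb", buffering=0) as f:
            while time.time() < stop_time:
                f.seek(random.randrange(0, file_size - BLOCK_SIZE, BLOCK_SIZE))
                f.read(BLOCK_SIZE)
                with lock:
                    completed += 1

    file_size = os.path.getsize(TEST_FILE)
    stop_time = time.time() + RUN_SECONDS
    threads = [threading.Thread(target=worker, args=(stop_time, file_size))
               for _ in range(QUEUE_DEPTH)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("Approximate 4K random-read IOPS at QD{}: {:.0f}".format(
        QUEUE_DEPTH, completed / RUN_SECONDS))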
That's only applicable in a single-threaded system.
Your modern PC is more akin to a large downtown reconstruction effort after Katrina...
Also, comparing the HDD with a big cargo ship...
On a large cargo ship, unless your goods are all in one container, or all the containers with your goods have been stacked on top of each other, they will need to unload some containers in between yours, and then probably reload those in-between containers.
A better analogy would be a long freight-train carrying cargo containers that passes under a single crane that can lift containers off the wagons.
What is more effective?
1. Unloading the containers belonging to one customer (1, 15, 33, 7 - the last one was a recent addition, fitted into a slot freed when another container was removed at an earlier stop), then the next customer (4, 8, 12, 13, 14, 53)?
2. Or unloading them in sequential order?
Remember, the crane can only offload containers at a fixed speed, and the train can only be moved forward or back with low acceleration...
Vertex 3 120GB SSD
WD 1.5TB SATA2
DVD/CD-RW
A recent BIOS update added VRM MOS protection.
This disables access to the BIOS settings during POST.
If the Windows 7-based TouchBIOS utility is used to change the SATA type from IDE to AHCI in CMOS and the system is rebooted, Windows renames the boot drive from C:\ to E:\ and blue screens during loading. Normally this wouldn't be a big problem, since the BIOS setting could be changed back during POST. However, the VRM MOS protection forbids changing those settings during POST. The BIOS must be reflashed with a legacy BIOS (F6) or earlier, one that does not have VRM MOS protection, in order to restore the IDE type support that Windows was installed with. I believe this VRM MOS protection is a standard part of the new types of BIOS required by Windows 8, so it must be enabled during OS installation or it could be lost.
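For what it's worth, the widely documented workaround for the IDE-to-AHCI blue screen on Windows 7 is to enable the Microsoft AHCI driver in the registry before changing the BIOS setting. Below is a small Python sketch of that registry change; it assumes Windows 7 with the stock msahci driver, requires an elevated prompt, and edits boot-critical settings, so back up the registry and confirm the fix applies to your system before trying it.

    import winreg

    # On Windows 7, setting the msahci service's Start value to 0 makes Windows
    # load the AHCI driver at boot, so the SATA mode can then be switched from
    # IDE to AHCI in the BIOS without the 0x7B blue screen.
    key_path = r"SYSTEM\CurrentControlSet\services\msahci"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 0)

    print("msahci Start set to 0; reboot and change the SATA mode to AHCI in the BIOS.")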