DDR3 RAM: System Memory Technology Explained
Articles - Featured Guides
Written by Olin Coles
Sunday, 11 May 2008
Page 5 of 5
Final Opinion on DDR3 RAM
When I first began this article, it felt to me like this kind of information should be required reading for anyone who considers themselves a hardware enthusiast or overclocker. Even after discussing the topic with some of my colleagues, it was clear that the misconceptions had already entrenched themselves deep into the everyman. I can't give up hope, not yet, because if you've made it this far into the article then you've probably picked up a thing or two about the technology.
Retracing my key points, there are a few major features worth mentioning again for those who like skipping to the end (statistically 70% of visitors). To begin with, DDR3 RAM modules can conserve up to 32% of the energy used on system memory, while at the same time saving money on maintenance costs for facility HVAC systems. Next on the list is the data prefetch buffer, which has doubled from only 4 bits per cycle to a full 8 bits with each pass. Then comes the new fly-by topology, which removes the mechanical limitations of physical trace-length balancing by replacing it with an automatically controlled and calibrated signal time delay. After that come latencies, which are lower relative to clock speed than the previous generation's, and in some cases offer 50% better timings per MHz. Finally, we have all of the extra perks.
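The prefetch doubling is easiest to see with a little arithmetic. The sketch below is my own illustration (not from the article): it assumes the memory core clock stays the same and shows how the prefetch depth alone doubles the transfer rate at the pins.

```python
# Illustrative sketch: effective transfers per second per data pin is the
# internal (core) clock multiplied by the prefetch depth.
def effective_transfer_rate(core_clock_mhz, prefetch_bits):
    """Return the external data rate in megatransfers per second."""
    return core_clock_mhz * prefetch_bits

# Same 200 MHz core clock, different prefetch depths:
ddr2 = effective_transfer_rate(200, 4)  # 4-bit prefetch -> 800 MT/s (DDR2-800)
ddr3 = effective_transfer_rate(200, 8)  # 8-bit prefetch -> 1600 MT/s (DDR3-1600)
print(ddr2, ddr3)
```

In other words, DDR3-1600 runs its memory cells no faster than DDR2-800 does; the extra bandwidth comes from fetching twice as many bits per internal cycle.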
The first few perks are more technical advantages than anything else. At the beginning of this article I mentioned the introduction of an asynchronous reset pin, which gives DDR3 the ability to complete a device reset without interfering with the operation of the entire computer system. Additionally, DDR3 can also complete a partial refresh, so energy isn't wasted on refreshing memory that isn't active.
The concept that appears to be gaining momentum is onboard intelligence for the memory modules. For instance, the JEDEC standard allows for an optional on-die thermal sensor that can detect when the memory is nearing a temperature threshold, and shorten the refresh intervals if necessary. This fail-safe offers the memory an opportunity to reduce temperatures and consume less power.
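To make the refresh-throttling idea concrete, here is a hypothetical sketch. The 7.8 µs base interval and 85 °C threshold follow the common JEDEC convention of refreshing twice as often at elevated temperature, but the control function itself is invented for illustration and is not part of the specification.

```python
# Hypothetical sketch of temperature-triggered refresh throttling.
# Base interval (7.8 us) and threshold (85 C) mirror the common JEDEC
# convention; the decision logic here is my own illustration.
def refresh_interval_us(temp_c, base_us=7.8, threshold_c=85.0):
    """Halve the time between refreshes when the on-die sensor
    reports the module is at or above the temperature threshold."""
    return base_us / 2 if temp_c >= threshold_c else base_us

print(refresh_interval_us(45))  # normal operation: 7.8 us between refreshes
print(refresh_interval_us(90))  # hot module: refresh twice as often
```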
I consider another major perk to be XMP (eXtended Memory Profile), which I have personally seen in action. One simple decision to enable the profile (or a particular profile, if more than one exists), and your system is automatically adjusted to a pre-defined overclock, voltages and all. This is going to be a great feature for anyone who just isn't ready to burn up their investment while trying to discover the mystery overclocking sweet spot.
Another perk is the increased front side bus speed, which allows for extremely high overclocks and excellent bandwidth throughput. Some will argue that this comes at the expense of higher latency, but let's be realistic: you can't reach 100 MPH in a car without traveling a long distance first. This analogy is just as true for system memory as it is for cars: the faster you want your top speed to be, the farther you'll have to travel before you reach it.
There are other benefits to the new standard, but the last of the major differences is capacity. DDR3 allows for chip densities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes. This should (hopefully) help move the computing world into 64-bit computing with a more compelling force.
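The 16 GB figure follows directly from the chip density. As a back-of-the-envelope check (my own arithmetic, assuming a typical 16-chip, non-stacked module):

```python
# Module capacity = chip density (gigabits) x chips per module / 8 bits per byte.
# The 16-chip count is an assumption for a typical double-sided module.
def module_capacity_gb(chip_density_gbit, chips_per_module=16):
    return chip_density_gbit * chips_per_module / 8

print(module_capacity_gb(8))    # 8 Gb chips  -> 16.0 GB module
print(module_capacity_gb(0.5))  # 512 Mb chips -> 1.0 GB module
```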
With every action comes an equal and opposite reaction. I am constantly reminded of this, because whenever I'm feeling especially good about something there will always be something to bring me right back down.
When you compare DDR3 to previous SDRAM generations, it inherently carries a higher CAS latency. The higher timings may be compensated for by higher bandwidth, which increases overall system performance, but they aren't nullified.
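Converting CAS latency from clock cycles into nanoseconds shows why DDR3's bigger CL numbers aren't as bad as they first appear. The example parts and timings below are my own illustration, not figures from this article:

```python
# Absolute CAS latency = cycle time x number of cycles. Since DDR transfers
# twice per clock, the I/O clock in MHz is half the data rate in MT/s.
def cas_latency_ns(cl_cycles, data_rate_mts):
    clock_mhz = data_rate_mts / 2
    return cl_cycles * 1000 / clock_mhz  # 1000/clock_mhz = cycle time in ns

print(cas_latency_ns(5, 800))   # DDR2-800  CL5 -> 12.5 ns
print(cas_latency_ns(9, 1600))  # DDR3-1600 CL9 -> 11.25 ns
```

So even though the DDR3 part waits nearly twice as many cycles, each cycle is half as long, and the real-world access delay ends up roughly the same or slightly better.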
Additionally, this is new technology, and it wears the new-technology price tag. DDR3 generally costs more if you compare the price-per-megahertz ratio; the same was true when DDR2 replaced DDR years ago. In fact, I still have the receipt for a nearly $400 set of Corsair Dominator 1066 MHz DDR2 from just under two years ago. For that same amount today, I could get a lot more performance for my dollar.
There are also a few technical difficulties which must be overcome in order to take full advantage of DDR3. For example, to achieve maximum memory efficiency at the system level, the system's front side bus frequency must also scale to that level. In most cases, it's best to have the front side bus operate at a matching memory frequency. While this obviously won't be a problem as 1600 MHz FSB processors become mainstream, it still places a burden on the processor and motherboard chipset to make accommodations.
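A rough sketch of why the frequencies want to match: both the Intel front side bus and a single DDR3 channel are 64 bits wide, so equal transfer rates yield equal peak bandwidth, and neither side leaves the other waiting. This is my own simplification, ignoring protocol overhead and multi-channel configurations:

```python
# Peak bandwidth = transfers/sec x bus width in bytes. Both the FSB and a
# DDR3 channel use a 64-bit data path, so matched MT/s means matched GB/s.
def peak_bandwidth_gbs(transfers_mts, bus_width_bits=64):
    return transfers_mts * (bus_width_bits / 8) / 1000

print(peak_bandwidth_gbs(1600))  # a 1600 MT/s FSB moves 12.8 GB/s,
                                 # the same as one DDR3-1600 channel
```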
But we're not quite out of the woods yet... a higher operating frequency also means more signal integrity issues. Both motherboard and memory module design engineers now have to master new techniques and purchase very expensive test equipment just to verify specific performance characteristics. In the end, the lab facility costs will be passed along to you know who. This might explain how $300+ DDR3 motherboards have become such a common sight.
System memory has had the opportunity to evolve and improve, but it hasn't been alone. Processor and motherboard technology have also moved forward, at what might be considered a faster rate of development. Just as the speed of system memory has increased, the amount of onboard processor cache memory has also increased. As I write this article, I have a set of DDR3 memory modules running at 2000 MHz, and an Intel E8200 processor with 6 MB of cache buffer. It seems that at some point in the upcoming wave of product evolution, my computer may not see the need to call on system memory unless I'm utilizing a graphics-intensive application. If the trend continues, as it likely will, we might not see any benefit from the ever-increasing operating frequency of system memory, because the processor will have a large amount of buffer operating at a far faster speed.
Another concern is scalability and expansion. While I admire the brilliance of JEDEC in bringing a more efficient module into mainstream use, I sometimes wonder how they arrive at other decisions. One key issue that may become a problem down the road is the specification calling for a maximum of two dual-rank modules per channel at 800-1333 MHz frequencies. It gets worse: only one memory slot is allowed at the present specification's top operational frequency of 1600 MHz.
All in all, DDR3 isn't perfect. It's unquestionably better than its predecessor, but I think my points have illustrated that the good also comes with the bad. For the past year our concentration here at Benchmark Reviews has constantly centered around DDR3, as if it's a new toy to play with. But it's not; DDR3 is here to stay, and whether you want it to or not, the market will soon be treating DDR2 the same way it presently treats DDR. You can cling to your old technology, but at this point that would be like reverting to AGP discrete graphics... which also costs a lot less than PCI Express. But that's for another article.
Questions? Comments? Benchmark Reviews really wants your feedback. We invite you to leave your remarks in our Discussion Forum.
Intel XMP Technology Standards: http://download.intel.com/personal/gaming/367654.pdf
JEDEC DDR3 SDRAM Standard JESD79-3B: http://www.jedec.org/download/search/JESD79-3B.pdf
JEDEC Specialty DDR2-1066 SDRAM Standard JESD208: http://www.jedec.org/download/search/JESD208.pdf
JEDEC DDR2 SDRAM Standard JESD79-2E: http://www.jedec.org/download/search/JESD79-2E.pdf