ASUS GeForce GTX-465 Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Monday, 21 June 2010
Table of Contents: Page Index
ASUS GeForce GTX-465 Video Card
Features and Specifications
NVIDIA GF100 GPU Fermi Architecture
Closer Look: ASUS GeForce GTX-465
Video Card Testing Methodology
DX10: 3DMark Vantage
DX10: Crysis Warhead
DX10: Far Cry 2
DX10: Resident Evil 5
DX11: Aliens vs Predator
DX11: Battlefield Bad Company 2
DX11: BattleForge
DX11: Metro 2033
DX11: Unigine Heaven 2.1
NVIDIA APEX PhysX Enhancements
NVIDIA 3D-Vision Effects
GeForce GTX465 Temperatures
VGA Power Consumption
ASUS SmartDoctor and GamerOSD
Editor's Opinion: Fermi GF100
ASUS ENGTX465 Conclusion

Editor's Opinion: NVIDIA Fermi

My opinion of NVIDIA's Fermi architecture has changed over the past several months, as the company has developed its graphics processor into the full embodiment of what was originally an unclear long-term plan. Testing NVIDIA's GF100 GPU held its own set of challenges, and the video cards based on this graphics processor often seemed condemned by an inherited legacy of problems. From the flagship GeForce GTX 480 down to the GTX 465, Fermi impressed gamers with strong FPS performance... and that was about it. Thermal output and power consumption were unfashionably high, a weakness that ATI constantly and consistently targeted with its marketing attacks. Then along came the GF104 on the GeForce GTX 460.

NVIDIA's GeForce GTX 460 not only changes the collective opinion of their Fermi architecture, it also changes the GPU landscape. ATI held the upper hand by releasing a DirectX-11 video card first, but they've painted themselves into a corner with their Evergreen GPU. Unlike NVIDIA's Fermi architecture, which can shape-shift as desired, ATI's Cedar, Redwood, and Juniper GPUs are all simply slices of the same processor: Cypress. This is where intelligent consumers will spot the flaw: ATI came to the (video) card game and showed their entire hand from the first deal, while NVIDIA had a few spare aces up their sleeves. NVIDIA's GeForce GTX 480 is only 15/16 of the complete GF100 package, and we're just beginning to see what's possible with a 7/8-whole GF104 GPU. It's unknown what NVIDIA has planned for the GF102, GF106, and GF108... although the speculation is rampant.

So now ATI and NVIDIA are even-Steven in the running for DirectX-11, and all they need are video games that increase demand for their products. This becomes a real problem (for them both), because very few existing games demand any more graphical processing power than games demanded back in 2006. Video cards have certainly gotten bigger and faster, but video games have lacked fresh development. DirectX-10 helped the industry, but every step forward came with two steps back because of the widespread dislike for Microsoft's Windows Vista O/S. DirectX-11, introduced with Windows 7 (and also available for Windows Vista through an update), now gives enthusiasts added detail and special effects in their video games.


NVIDIA GeForce Fermi Graphics Card Family

Even if you're only after raw gaming performance and have no real-world interest in CUDA, there's reason to appreciate the GF100 GPU. New enhancement products, such as the NVIDIA GeForce 3D Vision Gaming Kit, double the demands on frame rate output and hence require more powerful graphics processing. This is where products like the GeForce GTX 470 and GTX 480 deliver the performance necessary to enjoy the extended gaming experience. I'm a huge fan of GeForce 3D-Vision, which is why it earned our Editor's Choice Award, and Fermi delivers the power necessary to drive up to three monitors. The newly dubbed NVIDIA 3D-Vision Surround (stereo) requires three 3D-Vision capable LCD, projector, or DLP devices and offers bezel correction support. Alternatively, NVIDIA Surround (non-stereo) supports mixed displays with common resolution/timing.

Even some older game titles benefit from the Fermi architecture, beyond just an increase in frame rates. For example, Far Cry 2 will receive 32x CSAA functionality native to the game, and future NVIDIA Forceware driver updates could add new features to other existing co-developed video games. Additionally, NVIDIA NEXUS technology brings CPU and GPU code development together in Microsoft Visual Studio 2008 for a shared process timeline, and it also introduces the first hardware-based shader debugger. NVIDIA's GF100 is the first GPU ever to offer full C++ support, the programming language of choice among game developers.

Fermi is also the first GPU to support Error Correcting Code (ECC) based protection of data in memory. ECC was requested by GPU computing users to enhance data integrity in high performance computing environments. ECC is a highly desired feature in areas such as medical imaging and large-scale cluster computing. Naturally occurring radiation can cause a bit stored in memory to be altered, resulting in a soft error. ECC technology detects and corrects single-bit soft errors before they affect the system. Fermi's register files, shared memories, L1 caches, L2 cache, and DRAM memory are ECC protected, making it not only the most powerful GPU for HPC applications, but also the most reliable. In addition, Fermi supports industry standards for checking of data during transmission from chip to chip. All NVIDIA GPUs include support for the PCI Express standard for CRC check with retry at the data link layer. Fermi also supports the similar GDDR5 standard for CRC check with retry (aka "EDC") during transmission of data across the memory bus.
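The detect-and-correct behavior described above can be sketched with a classic Hamming(7,4) code, which fixes any single flipped bit in a 7-bit codeword. This is a simplified illustration of the general principle, not Fermi's actual ECC scheme, which protects much wider words across registers, caches, and DRAM:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit soft error, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-based position of the flipped bit; 0 if clean
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1         # repair the soft error
    return [c[2], c[4], c[5], c[6]]
```

Flip any one of the seven stored bits and the decoder still recovers the original data, which is exactly the property that keeps a stray radiation-induced bit-flip from propagating into a computation.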

The true potential of NVIDIA's Fermi architecture has yet to be seen. Sure, we've already poked around at the inner workings for our NVIDIA GF100 GPU Fermi Graphics Architecture article, but there's so much more that goes untested. Well into 2010, only a beta version of the Folding@Home client is available. Work-unit performance on the GeForce GTX 400-series is going to surpass ATI's Radeon HD 5000-series equivalents without much struggle, but it's uncertain how much better the performance will be compared to previous generations.



 

Comments 

 
# Little mistake...BETA911 2010-06-21 23:33
At BattleForge, how can a non-DX11 card (9800GTX+) be in the charts when DX11 is tested? Same with the HD490.
Then, the HD5770 is not 256-bit but 128-bit!
 
 
# RE: Little mistake...Olin Coles 2010-06-22 06:07
Thanks for finding that typo - it's been fixed. I'll update the chart, too, since those products shouldn't be included. Even though the game allows them to benchmark with the same settings, they're not compliant and likely ignore the DX11 extensions.
 
 
# A Strange review pt1The Crouch 2010-06-22 11:50
I'm really sorry, but this review does not make much sense to me. Not compared to other reviews mind you, but in itself!

I count 5 clear wins for the 5850, 3 for the 465, and one wash (Resident Evil 5). From the 465's point of view, that's a staggering 67% more wins for the 5850!!
 
 
# A Strange review pt2The Crouch 2010-06-22 11:52
When it comes to the value numbers you provide I count 5 wins for the 5850 and 4 for the 465 (RE5 is clearly a 465 win).

And by the way, I don't count the two parts of 3D vantage as separate tests.

So not only is the 5850 the faster card with over half the tests won, more importantly, it also offers the most bang for your buck! All according to your own figures!

At least to me, this would count as a clear win for the 5850, but that is hardly what I see in the summary.

Also worth mentioning i think: Having been on Newegg on a few occasions, $305 seemed a bit steep for a 5850, and for aspiring customers for a graphics card, I can tell a 5850 can be found for $285. Only $5 more expensive than the price for the 465 you are quoting, and with that small difference I think the value numbers throughout the test would look a bit different.
 
 
# RE: A Strange review pt2Olin Coles 2010-06-22 16:03
Based on NewEgg prices today, nearly every single Radeon HD 5850 is priced above $305, with an average price of $325 (I did the math). Conversely, several models of the GTX-465 sell for as little as $250, with an average price of $260. That makes the Radeon HD 5850 22~25% more expensive... but does it perform 22~25% better? No, it doesn't. It doesn't even perform better than the GTX-465 all of the time; only 'some' of the time... slightly more than half (as you point out). So should a card that costs $55-75 more than the GTX-465 be considered the best value when it doesn't even offer a relative boost to performance? I don't think so.
You should also check your math on the cost per FPS, because the GTX-465 beats the Radeon 5850 in nearly all of them.
 
 
# Thank you !SiliconDoc 2010-06-27 17:10
I came here to see just how much red raging rooster ATI bias was here on the gtx465.
I thank you and congratulate you for your response to the commenter.
I sit here absolutely STUNNED. I can't believe that somebody didn't just "take it" and nearly agree with the ati fan fraud.
THANK YOU SO MUCH.
My faith in humanity has been renewed.
Believe me, I really, really appreciate it.
Sincerely sick of the rampant red bias,
SiliconDoc
 
 
# Is a 1~2 FPS lead really a win?Olin Coles 2010-06-22 17:54
Is a 1~2 FPS lead really a win? You might see it that way, but I don't. Especially when the Radeon HD 5850 costs $55 more.
 
 
# RE: ASUS GeForce GTX-465 Video CardStephen E 2010-06-22 16:48
About the VGA Power Comparison that you did, can you provide a sample calculation on how you came up with your data?

Did you just report the AC power difference between no graphics card in the system and with the graphics card installed? Did you try to take into account the PSU efficiency?
 
 
# RE: RE: ASUS GeForce GTX-465 Video CardOlin Coles 2010-06-22 16:53
From the power consumption section: "A baseline test is taken without a video card installed inside our test computer system, which is allowed to boot into Windows-7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (not system total) displayed in Watts for each specified test product."

Power supply efficiency is not taken into consideration for any of our reported results. Only the motherboard, processor, memory, SSD, and video card are drawing power. The math is simply idle/load result minus baseline.
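To make that subtraction concrete, here is a minimal sketch of the isolation method using hypothetical wattmeter readings; the numbers below are invented for illustration, not measured results from this review:

```python
# Hypothetical AC wattmeter readings, in watts; invented for illustration.
baseline = 84.0     # system idle at the login screen, no video card installed
card_idle = 101.0   # same system idle with the video card installed
card_load = 187.0   # video card running a FurMark stress test

# Isolated video card power: the idle/load reading minus the baseline.
isolated_idle = card_idle - baseline   # watts attributed to the card at idle
isolated_load = card_load - baseline   # watts attributed to the card under load
```

With these example readings the card would be charged 17 W at idle and 103 W under load; PSU efficiency losses are deliberately left in the wall-socket numbers, as described in the methodology.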
 
 
# Weird...xtremesv 2010-06-22 18:04
Why do reviewers still benchmark FarCry 2? Is it a requirement recommended (imposed) by Nvidia?

And I don't get your pricing figures. I found a 5850 for $285 and another for $305 on Newegg... the ones you mention beyond $325 include special cooling designs.
 
 
# nooneoverclockyourkeyboard 2012-02-11 03:10
hey, do you know that I got my Zotac GTX 465 at just 7250, which is $147.17 (converted to USD), while the 5850 costs 14950, which is $303.48. At this price I can SLI a GTX 465, and when you SLI a GTX 465 against a 5850, clearly the 465's the winner. I dunno why the prices aren't coming down for the 5850.
 

Comments have been disabled by the administrator.
