NVIDIA GeForce GTX 580 Video Card Performance
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Tuesday, 09 November 2010
Table of Contents: Page Index
NVIDIA GeForce GTX 580 Video Card Performance
GeForce GTX 580 Closer Look
GeForce GTX 580 Detailed
Features and Specifications
Video Card Testing Methodology
DX10: 3DMark Vantage
DX10: Crysis Warhead
DX11: Aliens vs Predator
DX11: Battlefield Bad Company 2
DX11: BattleForge
DX11: Lost Planet 2
DX9 SSAO: Mafia II
DX11: Metro 2033
DX11: Tom Clancy's HAWX2
DX11: Unigine Heaven 2.1
Overclocking and Temperatures
VGA Power Consumption
NVIDIA APEX PhysX Enhancements
NVIDIA 3D-Vision Effects
Editor's Opinion: NVIDIA Fermi
NVIDIA GeForce GTX 580 Conclusion

NVIDIA Fermi Features

In today's complex graphics, tessellation offers a means to store massive amounts of coarse geometry with expand-on-demand functionality. In the NVIDIA GF100-series GPU, tessellation also enables more complex animations. In terms of model scalability, dynamic Level of Detail (LOD) allows quality and performance to be traded off, delivering better picture quality wherever it can do so without a performance penalty. Built from three layers (original geometry, tessellation geometry, and displacement map), the final product carries far more genuine geometric and shading detail than anything constructed with bump-map technology. In plain terms, tessellation produces real peaks and valleys with shadow detail in between, while previous-generation bump-mapping only gives the illusion of detail.

[Image: the "Imp" character, showing the coarse quad mesh, two tessellated versions, and the final displacement-mapped result]

Using GPU-based tessellation, a game developer can send a compact geometric representation of an object or character, and the tessellation unit can produce the correct geometric complexity for the specific scene. Consider the "Imp" character illustrated above. On the far left we see the initial quad mesh used to model the general outline of the figure; this representation is quite compact, even when compared to typical game assets. The two middle images of the character are created by finely tessellating the description at the left. The result is a very smooth appearance, free of the faceting that results from limited geometry. Unfortunately this character, while smooth, is no more detailed than the coarse mesh. The final image on the right was created by applying a displacement map to the smoothly tessellated character shown third from the left.
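A minimal sketch of the idea in plain Python rather than any GPU API: a coarse polyline stands in for the quad mesh, uniform subdivision plays the role of the tessellator, and a sine function stands in for the displacement map. All names and values here are illustrative, not NVIDIA's implementation.

```python
import math

# Why a coarse mesh plus a displacement map beats shipping dense geometry:
# we subdivide a coarse outline, then offset each vertex by a sampled height.

def tessellate(points, levels):
    """Uniformly subdivide a polyline: each pass inserts midpoints."""
    for _ in range(levels):
        refined = []
        for a, b in zip(points, points[1:]):
            refined.append(a)
            refined.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
        refined.append(points[-1])
        points = refined
    return points

def displace(points, height_fn):
    """Offset each vertex vertically by a displacement-map sample."""
    return [(x, y + height_fn(x)) for x, y in points]

# Coarse "model": 3 segments. Two subdivision passes yield 13 vertices,
# smooth but featureless, like the middle Imp images.
coarse = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
smooth = tessellate(coarse, 2)

# The displacement map adds the real peaks and valleys (rightmost image).
detailed = displace(smooth, lambda x: 0.1 * math.sin(4 * x))
```

Only the four coarse vertices and the tiny height function need to be stored or transmitted; the detail is expanded on demand.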

Benchmark Reviews also offers more detail in our full-length NVIDIA GF100 GPU Fermi Graphics Architecture guide.

Tessellation in DirectX-11

Control hull shaders run DX11 pre-expansion routines and operate explicitly in parallel across all points. Domain shaders run post-expansion operations on maps (u/v or x/y/z/w) and are implicitly parallel. Fixed-function tessellation is configured by a Level of Detail (LOD) factor taken from the control hull shader's output, and can also produce triangles and lines if requested. Tessellation is new to NVIDIA GPUs; it was not part of GT200 because of geometry bandwidth bottlenecks arising from sequential rendering/execution semantics.
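The division of labor between the three stages described above can be sketched as a heavily simplified CPU model (plain Python, not HLSL or NVIDIA hardware; all function names and the LOD heuristic are illustrative assumptions):

```python
# Toy model of the DX11 tessellation stages on a single line patch:
# the hull shader picks an LOD factor per patch, the fixed-function
# tessellator turns that factor into parametric samples, and the domain
# shader maps each sample back onto the patch surface. Each step is
# data-parallel across points, which is what the hardware exploits.

def hull_shader(patch, camera_distance):
    # Pre-expansion: choose a tessellation factor from Level of Detail.
    # Closer patches get more geometry (clamped to the DX11 maximum of 64).
    return max(1, min(64, int(64 / max(camera_distance, 1.0))))

def fixed_function_tessellator(factor):
    # Expansion: emit evenly spaced parametric coordinates in [0, 1].
    return [i / factor for i in range(factor + 1)]

def domain_shader(patch, u):
    # Post-expansion: evaluate the patch at u (linear interpolation here;
    # real domain shaders evaluate curved surfaces and apply displacement).
    (x0, y0), (x1, y1) = patch
    return (x0 + u * (x1 - x0), y0 + u * (y1 - y0))

patch = ((0.0, 0.0), (4.0, 2.0))
factor = hull_shader(patch, camera_distance=8.0)
vertices = [domain_shader(patch, u) for u in fixed_function_tessellator(factor)]
```

The geometry amplification happens entirely between the hull and domain stages, which is why only the compact patch description has to cross the bus.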

In the GF110 graphics processor, NVIDIA has added new PolyMorph and Raster engines to handle world-space processing (PolyMorph) and screen-space processing (Raster). There are sixteen PolyMorph engines and four Raster engines on the GF110, which depend on an improved L2 cache to keep geometric data produced by the pipeline buffered on-die.

GF100 Compute for Gaming

As developers continue to search for novel ways to improve their graphics engines, the GPU will need to excel at a diverse and growing set of graphics algorithms. Since these algorithms are executed via general compute APIs, a robust compute architecture is fundamental to a GPU's graphical capabilities. In essence, one can think of compute as the new programmable shader. GF110's compute architecture is designed to address a wider range of algorithms and to facilitate more pervasive use of the GPU for solving parallel problems. Many algorithms, such as ray tracing, physics, and AI, cannot exploit shared memory, because program memory locality is only revealed at runtime. GF110's cache architecture was designed with these problems in mind. With up to 48 KB of L1 cache per Streaming Multiprocessor (SM) and a global L2 cache, threads that access the same memory locations at runtime automatically run faster, irrespective of the choice of algorithm.

NVIDIA's codename NEXUS brings CPU and GPU code development together in Microsoft Visual Studio 2008 on a shared process timeline, and introduces the first hardware-based shader debugger. NVIDIA's GF100-series is the first GPU ever to offer full C++ support, the programming language of choice among game developers. Together with new hardware features that provide better debugging support, developers will be able to enjoy CPU-class application development on the GPU. The end result is C++ and Visual Studio integration that brings HPC users onto the same development platform. NVIDIA offers several paths to deliver compute functionality on the GF110 GPU, such as CUDA C++ for video games.

Image processing, simulation, and hybrid rendering are three primary functions of GPU compute for gaming. Using NVIDIA's GF100-series GPU, interactive ray tracing becomes possible for the first time on a standard PC. Ray tracing performance on the NVIDIA GF100 is roughly 4x faster than on the GT200 GPU, according to NVIDIA's tests. AI path finding is a compute-intensive process well suited to GPUs: the NVIDIA GF110 handles obstacle avoidance approximately 3x faster than the GT200, which translates into quicker collision avoidance and shortest-path searches for higher-performance path finding.
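The shortest-path workload mentioned above can be illustrated with a minimal CPU sketch: a plain-Python breadth-first search across a grid with obstacles. This only shows the shape of the algorithm, not NVIDIA's GPU implementation; on the GPU, this kind of frontier expansion runs for many agents in parallel.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Return the number of steps from start to goal, avoiding '#' cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # goal unreachable

# A 4x4 map with a small wall of obstacles; the search routes around it.
grid = ["....",
        ".##.",
        ".#..",
        "...."]
steps = shortest_path_length(grid, (0, 0), (2, 3))
```

Breadth-first search guarantees the first time the goal is dequeued, the distance is minimal, which is exactly the "shortest path search" the compute hardware accelerates.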

GF110 Specifications

  • 512 CUDA Cores
  • 16 Geometry Units
  • 4 Raster Units
  • 64 Texture Units
  • 48 ROP Units
  • 384-bit GDDR5
  • DirectX-11 API Support

GeForce 400-Series Products

| Graphics Card | GeForce GTS 450 | GeForce GTX 460 | GeForce GTX 465 | GeForce GTX 470 | GeForce GTX 480 | GeForce GTX 580 |
|---|---|---|---|---|---|---|
| GPU Transistors | 1.17 Billion | 1.95 Billion | 3.2 Billion | 3.2 Billion | 3.2 Billion | 3.0 Billion |
| Graphics Processing Clusters | 1 | 2 | 4 | 4 | 4 | 4 |
| Streaming Multiprocessors | 4 | 7 | 11 | 14 | 15 | 16 |
| CUDA Cores | 192 | 336 | 352 | 448 | 480 | 512 |
| Texture Units | 32 | 56 | 44 | 56 | 60 | 64 |
| ROP Units | 16 | 24 (768MB) / 32 (1GB) | 32 | 40 | 48 | 48 |
| Graphics Clock (Fixed Function Units) | 783 MHz | 675 MHz | 607 MHz | 607 MHz | 700 MHz | 772 MHz |
| Processor Clock (CUDA Cores) | 1566 MHz | 1350 MHz | 1215 MHz | 1215 MHz | 1401 MHz | 1544 MHz |
| Memory Clock (Clock Rate/Data Rate) | 902/3608 MHz | 900/3600 MHz | 802/3206 MHz | 837/3348 MHz | 924/3696 MHz | 1002/4008 MHz |
| Total Video Memory | 1024MB GDDR5 | 768MB / 1024MB GDDR5 | 1024MB GDDR5 | 1280MB GDDR5 | 1536MB GDDR5 | 1536MB GDDR5 |
| Memory Interface | 128-Bit | 192-Bit (768MB) / 256-Bit (1GB) | 256-Bit | 320-Bit | 384-Bit | 384-Bit |
| Total Memory Bandwidth | 57.7 GB/s | 86.4 / 115.2 GB/s | 102.6 GB/s | 133.9 GB/s | 177.4 GB/s | 192.4 GB/s |
| Texture Filtering Rate (Bilinear) | 25.1 GigaTexels/s | 37.8 GigaTexels/s | 26.7 GigaTexels/s | 34.0 GigaTexels/s | 42.0 GigaTexels/s | 49.4 GigaTexels/s |
| GPU Fabrication Process | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm | 40 nm |
| Output Connections | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI | 2x Dual-Link DVI-I, 1x Mini HDMI |
| Form Factor | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot | Dual-Slot |
| Power Input | 6-Pin | 2x 6-Pin | 2x 6-Pin | 2x 6-Pin | 6-Pin + 8-Pin | 6-Pin + 8-Pin |
| Thermal Design Power (TDP) | 106 Watts | 150W (768MB) / 160W (1GB) | 200 Watts | 215 Watts | 250 Watts | 244 Watts |
| Recommended PSU | 400 Watts | 450 Watts | 550 Watts | 550 Watts | 600 Watts | 600 Watts |
| GPU Thermal Threshold | 95°C | 104°C | 105°C | 105°C | 105°C | 97°C |

GeForce Fermi Chart Courtesy of Benchmark Reviews
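The chart's derived rows follow from simple arithmetic. As a sanity check, using NVIDIA's official GeForce GTX 580 figures (384-bit bus, 1002 MHz GDDR5 command clock, 64 texture units, 772 MHz graphics clock), a short calculation reproduces the bandwidth and fill-rate entries:

```python
# Sanity-checking two derived rows of the chart above for the GTX 580.
# A 384-bit bus moves 384/8 = 48 bytes per transfer, and GDDR5 delivers
# four transfers per command clock (1002 MHz -> 4008 MT/s effective).

bus_width_bits = 384
effective_rate_mts = 1002 * 4                      # MT/s, quad-pumped GDDR5
bandwidth_gbs = bus_width_bits / 8 * effective_rate_mts / 1000
# Matches the "Total Memory Bandwidth" row: 192.4 GB/s.

# Bilinear texture fill rate: one texel per texture unit per graphics clock.
texture_units = 64
graphics_clock_mhz = 772
fillrate_gtexels = texture_units * graphics_clock_mhz / 1000
# Matches the "Texture Filtering Rate" row: 49.4 GigaTexels/s.
```

The same two formulas reproduce every other column in those rows, which is a quick way to catch transcription errors in spec charts.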



 

Comments 

 
# Your power consumption is wrong (DS, 2010-11-09 06:45)
Because the GTX485, sorry GTX580, throttles clocks in Furmark and OCCT. Please redo the tests, with this feature turned off, or with some other stressing program.

# RE: Your power consumption is wrong (Olin Coles, 2010-11-09 06:52)
I can always tell when a visitor comments without reading the article, because if you had then you'd know that it will do this with every application because it's controlled at the hardware level. Since there's circuitry limiting power consumption, how do you suggest we 'turn this off'?

I'm asking you, the empowered visitor, since you obviously know the 'right' way to do this after telling me that our way was wrong.

# GTX485 (ChrisW, 2010-11-13 21:45)
He may be wrong about Furmark and OCCT, but he hit the GTX580's more realistic name on the head!

If anything, at most a GTX490...

# RE: Your power consumption is wrong (Olin Coles, 2010-11-13 23:45)
If you rename the exe's to something else (like Crysis.exe) they reveal full power consumption statistics, which are reported in this review.

# No Can Do.... (BruceBruce, 2010-11-09 06:55)
The "throttling" of the card is based on input from thermal sensors and current sensors. It is not software-specific. It doesn't matter how you stress the card; if it gets too hot, or draws too much current it will take corrective action.

# Actually (Intratech, 2010-11-09 07:20)
Actually the limiter is only engaged when the driver detects Furmark or OCCT at present so you can test it with some other stress testing application.

# RE: Actually (Olin Coles, 2010-11-09 07:25)
This isn't a software function. You'll notice from the chart that was linked that the GPU isn't being throttled, and renaming furmark had no effect. Do you have evidence of it working otherwise?

# Power consumption (LED Guy, 2010-11-09 12:13)
Anandtech had this to say about the power consumption:

Quote:
Compared to the GTX 480 NVIDIA's power consumption is down 10%...


anandtech.com/show/4008/nvidias-geforce-gtx-580/17

Anand's numbers also seem to be more in line with other reviews I've looked at. So unfortunately it looks like you need to retest power consumption with another program, why not ask Anand what program they used?

Otherwise, from what I read, good review. Not planning on upgrading so I didn't read the whole article unfortunately. Been a reader for some time now, first comment so thanks for the great articles so far. BTW I have a suggestion/request for the graphics card articles: Add minimum frame rate numbers to the tests, as these are as important as average frame rates, if not more so.

Cheers

# TechPowerUp (Intratech, 2010-11-09 07:33)
Quote:
NVIDIA also explained that this is just a work in progress with more changes to come. From my own testing I can confirm that the limiter only engaged in Furmark and OCCT and not in other games I tested.

techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html

# Sorry, full quote here (Intratech, 2010-11-09 07:35)
Quote:
At this time the limiter is only engaged when the driver detects Furmark / OCCT, it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come. From my own testing I can confirm that the limiter only engaged in Furmark and OCCT and not in other games I tested.

Real-time clock monitoring does not show the changed clocks, so besides the loss in performance it could be difficult to detect that state without additional testing equipment or software support.


techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html

# Thanks (RealNeil, 2010-11-09 07:47)
This was the first GTX-580 review to hit my in-box today. I like the fact that this card uses such a small amount of power and still achieves very nice performance. While it's true that the top-end Radeon cards bested it in some of the tests, they don't offer the same compatibilities with CUDA and Phys-X processing and that shows up glaringly in some of your results. Not knowing for sure how game development will go as to what technologies each company will embrace, it makes more sense to me to buy the NVIDIA card.

# RE: Thanks (Banzai, 2010-11-09 07:57)
Well, it matches fairly well to a 5970, which is a dual gpu card. Honestly, it shows some good results in my opinion, although it may not beat crossfired 6870's, it can match/beat a 5970 for a similiar price.

# Mr. Nobody (Franck, 2010-11-09 08:23)
The only thing i find a quite deceiving is the choice of the competing cards.
For the sake of performance, it is obvious that a single 470 wouldn t do it. It would have been tem time more interesting to know how would perform a SLI with more popular 460 gtx 1g HAWK or FTW or even a 470 SLI since you put a 6870 Crossfire on the stake, and that such are in pair with the mentioned cards.
That would be relevant to know how this newby 580 perform agaisnt his own kind.

# RE: Mr. Nobody (Olin Coles, 2010-11-09 09:03)
I'm sorry to ruin your conspiracy theory, but I simply didn't have a second GTX 470 to combine and test in SLI. I don't want to deceive you, so I'll add that I might never have a second 470.

I'm also sorry that all of the work I put into this article didn't provide enough information for you to develop a decent idea of where this card fits.

# Que' (RealNeil, 2010-11-09 09:19)
So you did this review, and I'm not sure how long it takes to do one of these, but I really appreciate having them to look at. But I wonder if you ever just relax with some of these Wazoo cards in a PC and just game a little. Time is probably a factor.

# RE: RE: Mr. Nobody (Franck, 2010-11-09 17:37)
Not even 2 460 ?
Ususally those review are to make an opinion of what to buy according to what you use, and guide the pretenders to an upgrade.
Well it doesn t take rocket science to guess the 580 is to topple ATI best dog, this is the obvious part, everybody will do it.
But what of the average joe ? Since GTX 460 is the average card on the market and not the 470, what should he do, acquire a 580, or double his 460 ? That s the second more important question on the market, now.
If i take that the 6870 is equal to a hawk and less than a FTW,according to your own reviews, average joe could guess that 2 of those would let the 580 on the floor panting as did the double 6870.
But as other factors kick average joe can t be sure then the revue fails him, and the average joe drive the market bulk, and want the not so obvious answers.
If the review is to give an "en passant" review of the 580 its excellent, but its too obvious, everysingle review will do it.

# RE: RE: RE: Mr. Nobody (Franck, 2010-11-09 17:42)
You reviews usually fill my doubts and i hardly consult others sites as they tend to say all the same that you do, so your very thrustable. . SLI or buying a new card ? no answer for NVIDIA owners.
I hope you ll have 2 NVIDIA card when your start reviewing the non vanilla 580. I can assure you its all that matters for much people.
You solved the ATI side and its great for them but we Nvidia side are left hanging.

# RE: RE: RE: RE: Mr. Nobody (Wayne Manor, 2010-11-09 19:56)
I have 2 1GB GTX 460's and unless I really wanted the brute power of 2 580's, I'm sticking to the 460's as they're probably on par with 1 580. I'll probably wait till the 680's or 7900's till I upgrade next.

# Updated with SLI GTX 460's (Olin Coles, 2010-11-10 21:39)
I purchased a second overclocked 1GB EVGA GeForce GTX 460 FTW video card specifically for SLI testing, and have updated all of the charts with the new results. I may write a separate article discussing value and performance, but there are no plans to re-write this article at the moment.

# man these guys (sweatshopking, 2010-11-09 09:21)
Man, you guys always get a tough time on video cards reviews. that sucks guys. take it easy, quit whining about omg no 470, etc. look and see. buy or don't. up to you, but stop complaining.

# Mmm competition (Haters gonna hate, 2010-11-09 11:31)
I think this a great GPU I just think the price is perhaps a little high at the moment. Plus, you may want to see what the 6970 and 6950 yield from AMD. If nothing else, I bet it forces the price lower on the GTX 580. Olin, I liked the article and appreciate all the time I'm sure it took to get all this info.

# 580 (Pawnshock, 2010-11-09 12:04)
I just want to say thanks a million for all the effort and time you put into this benchmark review Olin Coles. It is detailed, precise and exactly what I was looking for and I am sure many more will find it helpful as well. Much appreciated.

# Oui... (Chris H, 2010-11-09 13:07)
Why couldn't they have just called it the GeForce 485 or 490?! Looking at these performance numbers, it would of been more appropriate. When I see a 480 and 580, I expect the 580 to be at-least 60% faster than that their last model...

Are they really going to have to start their next gen chip in the 600's?

Hey Olin, GREAT article BTW!

# RE: Oui... (Jack, 2010-11-09 14:26)
You mean like they did w/ the 5870 and 6870 ? Wait that's slower.

Same reason they slap 2011 on car models .... it works to sell things.

# RE: RE: Oui... (Franck, 2010-11-09 17:43)
Exactly MKT stuff only.

# RE: Oui... (David Ramsey, 2010-11-10 09:19)
Because the underlying chip has actually changed. Admittedly, the changes were minor: fine-tuning the types of transistors used in various parts of the chip, moving fan control onboard, et cetera so on and such forth. Still, a case could be made for a "4xx" designation. But although NVIDIA has a track record of confusing model numbers, they're still way better than AMD, whose 6xxx series cards are slower than the existing 5xxx series.

# GTX 580 3D (Robert17, 2010-11-09 15:04)
Olin, having trouble posting questions in the forums; thought you'd like to know.

So here you go, both on and off topic. Nice review, thanks for all your efforts.

With the preponderance of video hardware turning to 3D enabled products I'd like to know how I can un-chart the costs of 3D enabling via hardware if that's possible (I have no depth perception, just ask the US Navy). So I get no bang for the buck with this evolution that others are likely fascinated by. Any general rule of thumb that you may have already gotten your mind around?

Thanks again.

# RE: GTX 580 3D (Olin Coles, 2010-11-09 15:29)
Robert: please try to start a thread in the forum, or send me a message with the problems you're having. I want to keep the comments on-topic, or else I would discuss here.

# RE: NVIDIA GeForce GTX 580 Video Card Performance (ivor, 2010-11-09 21:49)
I'm inclined to think that Benchmarkreviews received a GTX580 with a lower vid core than some of the other reviewers, which explains the meager overclock but sipping power. But anyhow, it just represents the other end of the spectrum where some cards have lower vid. It may not be a good thing though, because Nvidia does not think the gpu can stand higher voltage.

# Outperformed by CrossFire Radeon HD 6870's (Tiago, 2010-11-10 06:34)
How can you add this as a con!?
You are comparing TWO 6870s to ONE gtx 580 and btw the 580 is a SINGLE gpu card while one 6970 is a DUAL gpu card!!

Ofcourse it'll always outperform a 580 but i bet if you guys at benchmarkreviews added another 580 in SLI you'd get very different results.

# RE: Outperformed by CrossFire Radeon HD 6870's (Olin Coles, 2010-11-10 06:42)
How can I compare two Radeon HD 6870's to one GeForce GTX 580? Well, two 6870's cost less and provide better FPS performance. That's how. People want to know how much performance they can get for their money, not just how one card stacks up against another single card.

# Rubbish (Joesph, 2010-11-10 15:15)
Ive seen tests of the 6870 and there worse than the 5870 as it is a less hungry card, yeh your tests show it isw better than gtx580
what a load of crap

# RE: Rubbish (Olin Coles, 2010-11-10 15:21)
Joseph, please tell me that beneath your tormented grammar that you're also illiterate and blind. These tests used two 6870's in CrossFireX. That means 6870 times two.

# RE: NVIDIA GeForce GTX 580 Video Card Performance (Chris, 2010-11-10 15:45)
A more substantial upgrade than I though it would be. Thanks for taking the many hours it takes to make an objective review. It seems that a pair of 6870s represents a better deal for the buyer today that this card. I don't think that much has changed ... yet. Nvidia still has the fastest in absolute per GPU performance while ATI is better for price:performance ratios. Of course, we'll have to see the high-end ATI cards before making a final judgment.

# Cost Analysis for GTX 460's in SLI (Don, 2010-11-10 22:50)
Great review, and just wanted to let you know that you are missing the cost analysis for the 460's in SLI, and just want to point out that there are much better 460 cards at a cheaper price like the MSI Cyclone 1GB, or the Hawk which are only $199 and $215 respectively, right now.

A pair of these in SLI would be $400 - $430, and would perform better than the EVGA cards.
newegg.com/Product/Product.aspx?Item=N82E16814127534&cm_re=gtx_460_1gb-_-14-127-534-_-Product

# RE: Cost Analysis for GTX 460's in SLI (Olin Coles, 2010-11-10 22:55)
Thanks Don, but I literally JUST added the GTX 460 SLI results a few minutes ago. I may or may not update the value analysis later this week.

# RE: Cost Analysis for GTX 460's in SLI (Olin Coles, 2010-11-10 23:02)
How would they perform better? I think you're mistaken, because the $230 EVGA GTX 460 FTW is clocked to 850/1700/1000 MHz, and the $215 MSI HAWK is 810/1620/975 MHz.

# They perform better when overclocked (Don, 2010-11-11 09:44)
Sorry I didn't clear that up.

The Hawk & the Cyclone overclock very well, especially the Hawk as it can easily reach and exceed 900 MHZ. But yeah, the EVGA has pretty good clocks as well as it comes nicely overclocked.

IMO the Hawk is the better value card for a GTX 460 right now @ $215.

Excellent job on the review!

Question - Can you confirm if NVIDIA did in fact get rid of most of their Hyper Compute Performance on the 580?

# No update on power consumption? (LED Guy, 2010-11-11 10:46)
You still haven't updated the article which says:

"The temperatures discussed below are absolute maximum values..."

But they are NOT absolute maximum values, nor is the power consumption. You state that loaded system power with the 580 is 191W and 315W with the 480. But you also say that you tested temperatures and power consumption with Furmark, which throttles the card, this can be checked if you run the benchmark test where you will get a LOT lower performance compared to the 480 for example. ALL other reviews out there state the 580 gets you 15-25% better performance than the 480 for the same or SLIGHTLY less power.

But not 124 watts less. And the difference isn't just down to a lower VID like ivor said. Test again with another program, Vantage, Crysis or, like Nordichardware did, try with Kombustor and using the "Post-FX" setting.

Please update the article as soon as possible as it is hugely misleading right now.

# RE: No update on power consumption? (Olin Coles, 2010-11-11 13:51)
The power section has been updated with details on how we arrived at our results. Essentially, GeForce GTX 580 hits 246W maximum power while using a modified FurMark tool.

# gtx460 sli (belzazar, 2010-11-14 18:30)
i didnt read the whole article but i read the fps testings and to me it seems that geforce 460 sli rocked all the other cards or am i mistaking ? i am about to buy my self a new card and my choise is between the 580 and 460 sli it seems that the 460sli is the best perfomance and is cheaper but my question is if it is possible that the 580 will perform way better after a newer driver perhaps ? plx help me make up my mind i reaally would like the newest card but why buy it when and older in sli beforms way better.
thanx for the great articles

# RE: gtx460 sli (Olin Coles, 2010-11-14 19:52)
Hopefully you read enough of the article to realize that the GTX 460's we used had the highest factory overclock available: EVGA FTW Edition. Two stock GTX 460's in SLI do not beat a GTX 580, and instead compete with the 480.

# TNX 1680x1050 + settings (Krusher, 2010-11-14 22:48)
Thanks for doing these tests in 1680x1050 AND showing your settings; most of the reviews I've seen elsewhere only show 1920x1200 and leave out the valuable setup info (so I could not attempt to reproduce it here.)

# RE: TNX 1680x1050 + settings (Olin Coles, 2010-11-14 23:17)
Back when I was a avid hardware review reader (a little over four years ago), it would always frustrate me to find very little detail on the settings and specifications for each video card tested. I always had to guess at the speeds of a particular product, and if it was an overclocked version (XFX 7900 GT doesn't say a lot). So when I started Benchmark Reviews, I made it a point to ensure our details made it possible to reproduce our results.

# RE: TNX 1680x1050 + settings (Krusher, 2010-11-14 23:58)
I have a single eVGA GTX 460 SC (OC'd to the FTW speeds) and halved your SLI numbers to give me a ball-park figure. With 3 days left on my Step-up option to the 580, your results make it really tempting...if eVGA ever catches up on the huge back order! :)

# Thank you (Jay Cee, 2010-11-15 08:57)
Thank you for the review and all your effort, it seems many people tend to post complaints, but those who are satisfied generally never bother to post a thanks.

Will you comment on the noise level? This seems to be a greater concern to me more and more as I seek to spend more time and money seeking out quieter components. The only reason I didn't purchase a GTX480 was due to the noise, so I have been having to do with a GTX460. The majority of reviews point out that the GTX580 is much quieter which seems to be a positive move on nVidia's part.

# RE: Thank you (Olin Coles, 2010-11-15 09:09)
The noise level is nearly silent at idle (as in no audible sound), and under full gaming load it becomes slightly more audible. If I turn the fan up to full power using Afterburner, it's loud enough to hear but still more quiet than a GTX 460. That's right, less noise than a GTX 460. The fan noise on a GTX 580 doesn't even compare to a GTX 480, which seemed to get better later into production.

# 3d Digital Artist (Keign, 2010-11-15 14:35)
This car fits my needs perfectly because not everything takes advantage of SLI, nearly 90% of the programs I use that are graphic card intensive do not support SLI configs. SO I will be upgrading my two 295's to a 580 this very day ;). Thanks for the review.

# yet (Keign, 2010-11-15 14:36)
And yet it will still probably outperform my 295's in gaming, which is versatile and usable on every medium I need it for.
 
 
# dsfgatygun 2010-11-16 08:46
Nice review, bad luck you got such horrible clueless reply's going on here.

A 580 = a package ( it is a stable solution for every game there is )
sli/crossfire/cf = raw power + cheaper but a more unstable way of dealing with older games even new top titles like aliens of predator.

While videocard do the same, the different versions of it are ment for different people.


Even if the gtx580 cost more and doesn't provide on every little aspect on the best way, the card = still a far better choise for most people
then any sli/crossfire/cf unstable solution.

The 580 = a perfect naming for this product. its 20-25% faster / lesser usage of watt / lesser heat / lesser noise.

2x 460's or 2x other budget cards or x2 solutions are great if the card actually produces 200% faster speeds then the 1 cored version. Which it clearly isn't doing. Its even below the 580 or with a minor fps above it. ( on only "newer games")

# Video Card Prices (Olin Coles, 2010-11-21 13:50)
After the GeForce GTX 580 launch, I felt a little irritated to see only one model sell for $500 while all of the others sold for $520 or more. This throws off my Cost Analysis math on the very first day. Making matters worse AMD's Radeon HD 5970 could be found in a few places for $500 before launch, but now costs no less than $550 or (much) more.

This game of sinking prices just ahead of launch is getting old, and it ruins my cost analysis every time. So visitors, keep this in mind when you read these reviews, and understand how prices change on a daily (and sometimes hourly) basis.

# Great review (12Xu, 2010-12-09 09:27)
Nice job, thanks for the thorough review. Glad to see Nvidia coming back around after such a long drought.
 
 
# meabla 2010-12-14 06:03
the only Nvidia card that has ever interested me is GTX460
in that can buy two of them for the price of one GTX480 or GTX580 and have more performance

other wise I have become disgruntled with the swift obsolesce of today's technology
