NVIDIA GeForce GTX 280 Video Card
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Monday, 16 June 2008
Table of Contents: Page Index
NVIDIA GeForce GTX 280 Video Card
GT200 GPU: Why Now and What's New?
GeForce GTX 280 Specifications
GTX 280 Features
NVIDIA Hybrid Technology
GeForce GTX 280 Closer Look
Video Card Testing Methodology
3DMark06 Benchmarks
Crysis Benchmark Results
Lightsmark Frame Rates
SupComm: Forged Alliance Results
World in Conflict Benchmarks
GTX 280 Temperatures
GTX 280 Power Consumption
GT200 GPU Final Thoughts
GeForce GTX 280 Conclusion

GT200 GPU: Why Now?

As my review of the GeForce 9800 GTX was being published for the April 1st launch, rumors were already circulating about a mystery "GeForce 9900" video card. At first I was a little irritated at the prospect of working on one major GeForce product launch while another was right around the corner. For most of early May there was a strong buzz around the coming product line, but it wasn't until I attended NVIDIA Editors Day 2008 that it was all laid out in front of me. Once I witnessed first-hand how the new GT200 GPU transcoded video at speeds I never imagined (and I transcode DVD publications often), it began to make sense. Further reinforcing my interest in NVIDIA's latest technology was information about CUDA, which would enable me to actually leverage GeForce products in commercial environments to increase productivity. Not only is the GT200 changing the way we perceive the video card, but it was also evident that the term "display adapter" may no longer apply.

[Image: GeForce GTX 200 GPU block diagram]

Before I share any more information on the new architecture and the advanced technology it utilizes, I will answer the fundamental question: why now? To understand the answer, you must first accept how the industry works, and that a development breakthrough can't always be scheduled on a calendar. Most people don't realize that it takes between one and two years (according to NVIDIA sources) to produce a stable graphics processor architecture. In fact, you might consider the development timeline a lot like a chess game, because of the constant turn-taking of trial and error. So when NVIDIA finalizes a newly engineered design and makes it retail-ready, the company goes from a year-long yellow light to a full-blown green. When one to two years of development concludes with amazing results, you can understand the urgency of getting that bleeding-edge technology to market.

GT200 GPU: So What's New?

GeForce GT200 GPUs (presently the backbone of both the GTX 260 and GTX 280 products) are massively multithreaded, many-core, visual computing processors that incorporate both a second-generation unified graphics architecture and an enhanced high-performance, parallel-computing architecture. Two over-arching themes drove GeForce GT200 architectural design and are represented by two key phrases: "Beyond Gaming" and "Gaming Beyond." You may have caught this emphasis when I gave my report on NVIDIA's Editors Day 2008.

"Beyond Gaming" means the GPU has finally evolved beyond being used primarily for 3D games and driving standard PC display capabilities. This is what I was referring to when I said that calling the GTX 280 a display adapter was now inappropriate. You're going to see this be commonplace more and more often, because GPUs are accelerating non-gaming, computationally-intensive applications for both professionals and consumers. "Gaming Beyond" means that the GeForce GT200 GPUs will also enable amazing new gaming effects and dynamic realism, delivering much higher levels of scene and character detail, more natural character motion, and very accurate and convincing physics effects. The GeForce GT200 GPUs are designed to be fully compliant with Microsoft DirectX 10 and Open GL 2.1.

[Image: NVIDIA's 3 Kings]

NVIDIA's second-generation unified visual computing architecture, as embodied in the new GeForce GTX 200 GPUs, is a significant evolution over the original unified architecture of the GeForce 8 and 9 series GPUs. Numerous extensions and functional enhancements to the architecture permit a performance increase averaging 1.5× that of the prior architecture. Improvements in sheer processing power, combined with improved architectural efficiency, allow amazing speedups in gaming, visual computing, and high-end computation.

NVIDIA engineers specified the following design goals for the GeForce GT200 GPUs:

  • Design a processor with up to twice the performance of GeForce 8800 GTX
  • Rebalance the architecture for future games that use more complex shaders and more memory
  • Improve architectural efficiency per watt and per square millimeter
  • Improve performance for DirectX 10 features such as geometry shading and stream out
  • Provide significantly enhanced computation ability for high-performance CUDA applications and GPU physics
  • Deliver improved power management capability, including a substantial reduction in idle power

Features         8800 GTX     GTX 280      % Increase
Cores            128          240          87.5%
TEX              64 t/clk     80 t/clk     25%
ROP Blend        12 p/clk     32 p/clk     167%
Precision        fp32         fp64         --
GFLOPs           518          933          80%
FB Bandwidth     86 GB/s      142 GB/s     65%
Texture Fill     37 GT/s      48 GT/s      29.7%
ROP Blend        7 GBL/s      19 GBL/s     171%
PCI Express      6.4 GB/s     12.8 GB/s    100%
Video            VP1          VP2          --

The new second-generation SPA architecture in the GeForce GTX 280 improves performance compared to the prior-generation G80 and G92 designs on two levels. First, it increases the number of streaming multiprocessors (SMs) per texture processing cluster (TPC) from two to three. Second, it increases the maximum number of TPCs per chip from eight to ten. The effect is multiplicative, resulting in 240 processor cores.
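
As a sanity check on that multiplicative effect and on a few entries in the table above, here is a short back-of-the-envelope sketch. It is my own arithmetic, not NVIDIA's: the 8 cores per SM, the 1,296 MHz shader clock, the dual-issue MAD+MUL figure of 3 FLOPs per clock, and the 1,107 MHz (2,214 MT/s effective) memory clock are published GTX 280 specifications rather than numbers quoted in this section.

    // Back-of-the-envelope arithmetic for the GT200 (GTX 280) core count,
    // peak shader throughput, and frame buffer bandwidth.
    // Host-side C++ only; compiles with any C++ compiler (or nvcc as a .cu file).
    #include <cstdio>

    int main()
    {
        // Core count: the multiplicative effect described above.
        const int tpcs        = 10;  // texture processing clusters per chip
        const int sms_per_tpc = 3;   // streaming multiprocessors per TPC (up from 2 on G80/G92)
        const int sps_per_sm  = 8;   // scalar processor cores per SM
        const int cores = tpcs * sms_per_tpc * sps_per_sm;            // 10 * 3 * 8 = 240

        // Peak single-precision throughput, assuming the published 1,296 MHz
        // shader clock and dual-issue MAD + MUL (3 FLOPs per clock per core).
        const double shader_ghz      = 1.296;
        const double flops_per_clock = 3.0;
        const double peak_gflops = cores * shader_ghz * flops_per_clock;   // ~933 GFLOPs

        // Frame buffer bandwidth, assuming the published 2,214 MT/s effective
        // GDDR3 data rate across the 512-bit (64-byte) memory interface.
        const double mem_mts   = 2214.0;        // million transfers per second
        const double bus_bytes = 512.0 / 8.0;   // 64 bytes per transfer
        const double bandwidth_gbs = mem_mts * bus_bytes / 1000.0;         // ~141.7 GB/s

        printf("cores:             %d\n", cores);                // 240
        printf("peak:              %.0f GFLOPs\n", peak_gflops); // ~933
        printf("bandwidth:         %.1f GB/s\n", bandwidth_gbs); // ~141.7
        printf("vs 8800 GTX cores: %.2fx\n", cores / 128.0);     // ~1.88x
        return 0;
    }

The ~933 GFLOPs and ~142 GB/s results line up with the table above, and the ~1.88× core ratio matches the comparison list that follows.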

Compared to earlier GPUs such as GeForce 8800 GTX, the GeForce GTX 280 provides:

  • 1.88× more processing cores
  • 2.5× more threads per chip
  • Doubled register file size
  • Double-precision floating-point support
  • Much faster geometry shading
  • 1 GB frame buffer with 512-bit memory interface
  • More efficient instruction scheduling and instruction issue
  • Higher clocked and more efficient frame buffer memory access
  • Improvements in on-chip communications between various units
  • Improved Z-cull and compression supporting higher performance at high resolutions
  • 10-bit color support

What makes the GeForce GT200 a great parallel processor?

[Image: GTX 280/260 encoding]

There are four key ingredients:

  • CUDA: The greatest obstacle to parallel computing has always been the software. The GeForce GTX 280 supports CUDA, the industry's first parallel computing language to achieve deep penetration (a 70 million user base) on the PC. CUDA is simple, powerful, and offers exceptional scaling on visual computing applications (a minimal kernel sketch follows this list).
  • GPU Computing Architecture: The GeForce GTX 280 is designed specifically for parallel computing, incorporating unique features like shared memory, atomic operations, and double-precision support.
  • Many-core architecture: With 240 cores running at 1.3 GHz, the GeForce GTX 280 is the most powerful floating-point processor ever created for the PC.
  • Torrential bandwidth: Due to their high data content, visual computing applications become bandwidth-starved on the CPU. With eight on-die memory controllers, the GeForce GTX 280 can access 141 GB of data per second, greatly accelerating HD video transcoding, physics, and image processing applications.
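
To put those ingredients in context, here is a minimal, hypothetical CUDA sketch of my own (not code from NVIDIA or from this article) that touches each of them: many threads spread across the cores, per-block shared memory, an atomic operation, and double-precision math. The kernel computes a block-wise dot product; the 256-thread block size, the 64-block grid, and the names dotProduct, partial, and processed are arbitrary choices for the example. On GT200 it would be built for compute capability 1.3 (nvcc -arch=sm_13), the first generation with double-precision support; note that GT200-class hardware offers integer atomics, so the atomic below works on an unsigned counter rather than a floating-point value.

    // Minimal CUDA sketch (illustration only): shared memory, an atomic
    // operation, and double-precision arithmetic on a GT200-class GPU.
    // Build (hypothetical): nvcc -arch=sm_13 gt200_sketch.cu
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Each block accumulates a partial dot product in on-chip shared memory,
    // then thread 0 stores it and bumps a global counter with an atomic add.
    __global__ void dotProduct(const double *a, const double *b, double *partial,
                               unsigned int *processed, int n)
    {
        __shared__ double cache[256];          // per-block shared memory
        int idx = blockIdx.x * blockDim.x + threadIdx.x;

        double sum = 0.0;
        for (int i = idx; i < n; i += blockDim.x * gridDim.x)
            sum += a[i] * b[i];                // double-precision multiply-add

        cache[threadIdx.x] = sum;
        __syncthreads();

        // Tree reduction within the block, entirely in shared memory.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                cache[threadIdx.x] += cache[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0) {
            partial[blockIdx.x] = cache[0];    // one partial sum per block
            atomicAdd(processed, blockDim.x);  // global atomic operation
        }
    }

    int main()
    {
        const int n = 1 << 20, threads = 256, blocks = 64;
        const size_t bytes = n * sizeof(double);

        double *a, *b, *partial;
        unsigned int *processed;
        cudaMalloc((void**)&a, bytes);
        cudaMalloc((void**)&b, bytes);
        cudaMalloc((void**)&partial, blocks * sizeof(double));
        cudaMalloc((void**)&processed, sizeof(unsigned int));
        cudaMemset(processed, 0, sizeof(unsigned int));

        // Simple test data: dot(1.0, 2.0) over n elements should equal 2*n.
        std::vector<double> ha(n, 1.0), hb(n, 2.0);
        cudaMemcpy(a, ha.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(b, hb.data(), bytes, cudaMemcpyHostToDevice);

        dotProduct<<<blocks, threads>>>(a, b, partial, processed, n);
        cudaDeviceSynchronize();

        // Fold the per-block partial sums on the host.
        std::vector<double> hp(blocks);
        cudaMemcpy(hp.data(), partial, blocks * sizeof(double), cudaMemcpyDeviceToHost);
        double dot = 0.0;
        for (int i = 0; i < blocks; ++i) dot += hp[i];

        unsigned int count = 0;
        cudaMemcpy(&count, processed, sizeof(count), cudaMemcpyDeviceToHost);
        printf("dot = %.1f (expected %.1f), threads counted = %u\n",
               dot, 2.0 * n, count);

        cudaFree(a); cudaFree(b); cudaFree(partial); cudaFree(processed);
        return 0;
    }

Shared memory keeps the per-block reduction on-chip, while the atomic add illustrates the global atomic operations the bullet list above refers to; the same pattern scales across however many multiprocessors the GPU provides.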

Please see our NVIDIA GPU Computing FAQ for additional information on this topic.
