|NVIDIA GeForce GTX 460 768MB Video Card|
|Reviews - Featured Reviews: Video Cards|
|Written by Olin Coles|
|Monday, 12 July 2010|
NVIDIA GF104 GPU Fermi Architecture
Based on the Fermi architecture, NVIDIA's latest GPU is codenamed GF104 and powers the GeForce GTX 460. In this article, Benchmark Reviews explains the technical architecture behind NVIDIA's GF104 graphics processor and offers insight into upcoming Fermi-based GeForce video cards. For those who are not familiar, NVIDIA's GF100 GPU was their first graphics processor to support DirectX-11 hardware features such as tessellation and DirectCompute, while also adding heavy particle and turbulence effects. The GF100 GPU is the successor to the GT200 graphics processor, which launched in the GeForce GTX 280 video card back in June 2008. NVIDIA has since redefined their focus, and GF100/GF104 demonstrates a dedication to next-generation gaming effects such as raytracing, order-independent transparency, and fluid simulations. The new GF104 GPU remains more powerful than GT200, and delivers DirectX-11 performance for NVIDIA's mid-range Fermi-based video card family.
GF100 was not another incremental GPU step-up like the move from G80 to GT200. Processor cores grew from 128 (G80) to 240 (GT200), and now reach 512 in GF100, where they earn the title of NVIDIA CUDA (Compute Unified Device Architecture) cores. GF104 features up to 336 CUDA cores. The key here is not only the name, but that the name now implies an emphasis on something more than just graphics. Each Fermi CUDA processor core has a fully pipelined integer arithmetic logic unit (ALU) and floating-point unit (FPU). GF104 implements the IEEE 754-2008 floating-point standard, providing the fused multiply-add (FMA) instruction for both single- and double-precision arithmetic. FMA improves over a multiply-add (MAD) instruction by performing the multiplication and addition with a single final rounding step, so no precision is lost in the addition. FMA minimizes rendering errors in closely overlapping triangles.
Based on Fermi's third-generation Streaming Multiprocessor (SM) architecture, GF104 could be mistaken for a GF100 cut in half. NVIDIA GeForce GF100-series Fermi GPUs are based on a scalable array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. NVIDIA's GF100 GPU implemented four GPCs, sixteen SMs, and six memory controllers. By comparison, GF104 implements two GPCs, eight SMs, and four memory controllers. Where each SM contained 32 CUDA cores in the GF100, NVIDIA now configures the GF104 to deliver 48 cores per SM. As expected, NVIDIA GF100-series products are launching with different configurations of GPCs, SMs, and memory controllers to address different price points.
CPU commands are read by the GPU via the Host Interface. The GigaThread Engine fetches the specified data from system memory and copies it into the frame buffer. GF104 implements four 64-bit GDDR5 memory controllers (256-bit total; the 768MB card enables three of them, for a 192-bit bus) to facilitate high-bandwidth access to the frame buffer. The GigaThread Engine then creates and dispatches thread blocks to various SMs. Individual SMs in turn schedule warps (groups of 32 threads) onto CUDA cores and other execution units. The GigaThread Engine also redistributes work to the SMs when work expansion occurs in the graphics pipeline, such as after the tessellation and rasterization stages.
GF104 implements 336 CUDA cores, organized as seven SMs of 48 cores each (one of the chip's eight SMs is disabled on the GTX 460). Each SM is a highly parallel multiprocessor supporting up to 48 warps at any given time; each SM's two warp schedulers and four Dispatch Units can issue two instructions from each of two warps, for up to four instructions per clock per SM. Each CUDA core is a unified processor core that executes vertex, pixel, geometry, and compute kernels. A unified L2 cache architecture (384KB on the 768MB version, 512KB on 1GB cards) services load, store, and texture operations. GF104 is designed to offer up to 32 ROP units (24 on the 768MB card, 32 on the 1GB card) for pixel blending, antialiasing, and atomic memory operations. The ROP units are organized in four groups of eight. Each group is serviced by a 64-bit memory controller. The memory controller, L2 cache, and ROP group are closely coupled: scaling one unit automatically scales the others.
GeForce GTX 400 Specifications