|AMD Phenom-II X6-1100T CPU HDE00ZFBRBOX|
|Written by David Ramsey|
|Tuesday, 07 December 2010|
Page 5 of 14
SPECviewperf 11 tests
The Standard Performance Evaluation Corporation is "...a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers." Their free SPECviewperf benchmark incorporates code and tests contributed by several other companies and is designed to stress computers in a reproducible way. SPECviewperf 11 was released in June 2010 and incorporates an expanded range of capabilities and tests. Note that results from previous versions of SPECviewperf cannot be compared with results from the latest version, as even benchmarks with the same name have been updated with new code and models.
SPECviewperf comprises test code from several vendors of professional graphics modeling, rendering, and visualization software. Most of the tests emphasize the CPU over the graphics card, and each has between 5 and 13 sub-sections. For this review I ran the Lightwave, Maya, and Siemens Teamcenter Visualization tests.
The lightwave-01 viewset was created from traces of the graphics workloads generated by the SPECapc for Lightwave 9.6 benchmark.
The models for this viewset range in size from 2.5 to 6 million vertices, with heavy use of vertex buffer objects (VBOs) mixed with immediate mode. GLSL shaders are used throughout the tests. Applications represented by the viewset include 3D character animation, architectural review, and industrial design.
The maya-03 viewset was created from traces of the graphics workload generated by the SPECapc for Maya 2009 benchmark. The models used in the tests range in size from 6 to 66 million vertices, and are tested with and without vertex and fragment shaders.
State changes such as those executed by the application, including matrix, material, light, and line-stipple changes, are included throughout the rendering of the models. All state changes are derived from a trace of the running application.
Siemens Teamcenter Visualization Mockup
The tcvis-02 viewset is based on traces of the Siemens Teamcenter Visualization Mockup application (also known as VisMockup) used for visual simulation. Models range from 10 to 22 million vertices and incorporate vertex arrays and fixed-function lighting.
State changes such as those executed by the application, including matrix, material, light, and line-stipple changes, are included throughout the rendering of the model. All state changes are derived from a trace of the running application.
The Lightwave results favor Intel. Performance scales well, with scores rising with both clock speed and core count. In the AMD arena, extra cores don't seem to buy you much: the 3.4GHz AMD Phenom-II 965 Black Edition posts a better score than the 3.0GHz Phenom-II X6-1075T, and even the 3.2GHz 1090T and 3.3GHz 1100T. Only the overclocked Phenom-II X6-1100T running at 4.1GHz beats it. Note that the Intel 980X's score here is only 2.7% better than the overclocked 1100T's.
The results flip for the Maya scores, with the AMD processors pulling strongly away from the Intel processors. The $159 AMD 965 even beats the $999.99 Intel 980X, as does the overclocked 1100T. The quad-core Intel processors come in far behind the rest of the pack.
The Siemens TCVIS scores are relatively even between the Intel and AMD camps. Surprisingly, the budget Core i5-750 edges ahead of the Hyper-Threading-equipped Core i7-930, while the 980X again turns in the best overall performance. The minor clock speed differences between the hexacore AMD CPUs are actually apparent in the scores here, although not to the degree one would expect: the overclocked 1100T is only 10% faster in this test than it is at its stock clock speed.
One thing these tests show is that some code favors multiple cores, while other code favors raw clock speed.