|Hehe, I just caught your mistake, Manual. It isn't x 10^9... that is gigahertz. It is x 10^6; mega is million!|
|If what you say is correct, then why don't memory manufacturers make DDR2-2000 kits with 10-10-15-40 timings? After all, the 1200MHz gain over today's standard is much more than a few timings (only 57 cycles of extra latency compared to 3-3-3-9). But they don't do this.|
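Before I reply point by point, look at what those cycle counts actually mean in wall-clock time. A minimal sketch, assuming DDR2 timings are counted against the I/O bus clock (half the effective transfer rate), and taking DDR2-667 at 3-3-3-9 as the low-latency reference (the exact "today's standard" speed is my assumption):

```python
# Convert memory timings (in bus-clock cycles) to nanoseconds.
# Assumption: DDR2 timings count I/O bus cycles, which run at
# half the effective (DDR) transfer rate.

def cycles_to_ns(cycles, effective_mhz):
    bus_clock_mhz = effective_mhz / 2      # DDR: two transfers per bus cycle
    return cycles / bus_clock_mhz * 1000   # cycles / MHz = microseconds -> ns

# Hypothetical DDR2-2000 kit from the quote vs a low-latency kit.
for name, rate, cl in [("DDR2-2000 CL10", 2000, 10),
                       ("DDR2-667  CL3 ", 667, 3)]:
    print(f"{name}: CAS latency = {cycles_to_ns(cl, rate):.1f} ns")

# DDR2-2000 CL10: ~10.0 ns; DDR2-667 CL3: ~9.0 ns.
# Nearly identical real-time latency, yet roughly three times the bandwidth.
```

The 57 "extra" cycles collapse to about one nanosecond of real extra CAS latency once the faster clock is taken into account, which is exactly why counting cycles alone is misleading.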
Why do you think graphics cards use GDDR3/GDDR4 at 2200MHz+ effective? Because that speed is required. Graphics memory runs at high latencies but high clock speeds, so the graphics core is not starved of data.
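To put a number on that, a rough sketch of peak bandwidth (the 2200MHz effective rate is from above; the 256-bit bus width is my assumption for a high-end card of this generation):

```python
# Peak memory bandwidth = effective transfer rate x bus width.
# Assumption: 256-bit memory bus, typical of high-end cards of this era.

effective_rate_hz = 2200e6   # 2200 MHz effective (GDDR3/4)
bus_width_bits = 256         # assumed bus width

bandwidth_gb_s = effective_rate_hz * bus_width_bits / 8 / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")   # ~70.4 GB/s
```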
Judging by what you are saying, you still have a long way to go in understanding how computers actually work. Your response was very poor in terms of computer science; even though it was grammatically correct, it lacked engineering substance (no offence).
You are stating that latency will overrule clock speed within 3D applications, without testing a variety of applications across a multitude of settings and hardware configurations. I myself have stated that latency matters less there than in direct computation; I did not state that it has absolutely no effect on performance within 3D applications.
F.E.A.R. was not a good example to use, as it is based on an unusual engine. Memory is accessed poorly by the application, so little stress is placed upon it. OpenGL applications usually address system memory more readily (back and forth), which in turn increases the need for memory speed to keep data flowing without the system stalling for lack of information.
A long time ago, when bus speeds were not very high, latency mattered: applications of that era were built around the abilities of the CPU and GPU, memory mattered only for storing data, and so the need for very fast memory did not arise.
With more sophisticated games, the need for higher-speed memory arises.
In the past you might have been able to counter my statements by asking: "What if a cache miss (or equivalent) occurs, and the CPU has to retrieve data from main memory?"*
Originally, lower latency was desirable to guard against this problem. However, with improved microcode pipelines, larger caches and updated branch prediction, that need has largely disappeared.
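A quick way to see why is the standard average-memory-access-time (AMAT) model. The miss rates and penalty below are illustrative figures I have chosen, not measurements:

```python
# Average Memory Access Time (AMAT) = hit time + miss rate x miss penalty.
# Illustrative numbers only: as caches grow and branch prediction improves,
# the miss rate falls and main-memory latency matters less overall.

def amat_ns(hit_ns, miss_rate, penalty_ns):
    return hit_ns + miss_rate * penalty_ns

old_cpu = amat_ns(hit_ns=2.0, miss_rate=0.10, penalty_ns=120.0)  # small cache
new_cpu = amat_ns(hit_ns=2.0, miss_rate=0.01, penalty_ns=120.0)  # large cache

print(f"Old CPU AMAT: {old_cpu:.1f} ns")   # 14.0 ns - memory latency dominates
print(f"New CPU AMAT: {new_cpu:.1f} ns")   #  3.2 ns - memory latency is marginal
```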
I myself want higher frame rates in the majority of games, which is why I raised my memory speed to 1066MHz (5-5-5-15) from 675MHz (3-2-2-8).
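On paper those two configurations compare like this (same bus-clock assumption as before, treating both figures as effective DDR rates):

```python
# Compare my two configurations: first-word (CAS) latency vs transfer rate.
# Assumption: quoted speeds are effective DDR rates; timings count bus cycles.

def cas_ns(cl, effective_mhz):
    return cl / (effective_mhz / 2) * 1000

configs = [("1066MHz 5-5-5-15", 1066, 5),
           ("675MHz  3-2-2-8 ", 675, 3)]

for name, rate, cl in configs:
    print(f"{name}: CAS ~{cas_ns(cl, rate):.1f} ns, "
          f"relative bandwidth {rate / 675:.2f}x")

# 1066MHz CL5: ~9.4 ns, 1.58x bandwidth; 675MHz CL3: ~8.9 ns, 1.00x.
# Half a nanosecond more latency for ~58% more bandwidth.
```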
I have noticed frame rate increases within:
Half Life 2 Lost Coast (Benchmark)
Counter-Strike: Source (Benchmark)
Elder Scrolls 4: Oblivion
Age of Empires 3
Medieval 2: Total War
(Screenshots available upon request)
Only F.E.A.R. and BFME2 showed no major FPS impact from changing the memory configuration.
All of the applications listed above showed frame rate increases of at least 1% from raising the memory clock frequency.
F.E.A.R., even at 16xSSAA, does not move around much and stays at roughly the same level, which is most likely down to how the application is coded. To show these frame rate increases, I suggest you pick a smaller resolution such as 1024x768, which will show moderate changes in frame rate.
The next point I will stress is one that many enthusiasts will themselves be used to.
Have you ever run your 3D application with other applications in the background?
Increasing the memory speed will dramatically help your 3D game if you have multiple applications open that are addressing the memory.
To put it simply for you: the greater the speed, the faster data can be transferred (in this manner). So if you have two applications addressing memory at, let us say, 2-4 timings (row-column; RAS/CAS apply), the data can be transferred to them faster.
Say applications one and two both want data at the same time. The memory has to send data to one first, then to the second. If the memory is faster it can do this in a shorter period, so the system speeds up. With lower latencies the memory can acknowledge a command slightly sooner, but it then takes longer to transfer the data, and transferring data takes far longer than waiting for the memory to acknowledge a command.
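As a toy model of that contention (the request size is made up, and the bandwidths are single-channel peak rates for the two configurations above; none of these are measurements):

```python
# Toy model: two applications request data and are served one after the other.
# Service time per request = command latency + transfer time.

def total_service_ns(latency_ns, bandwidth_gb_s, request_bytes, n_requests=2):
    transfer_ns = request_bytes / (bandwidth_gb_s * 1e9) * 1e9
    return n_requests * (latency_ns + transfer_ns)

request = 64 * 1024  # 64 KiB per request (illustrative)

fast = total_service_ns(latency_ns=9.4, bandwidth_gb_s=8.5, request_bytes=request)
slow = total_service_ns(latency_ns=8.9, bandwidth_gb_s=5.4, request_bytes=request)

print(f"Fast, high-latency kit: {fast/1000:.1f} us")   # ~15.4 us
print(f"Slow, low-latency kit:  {slow/1000:.1f} us")   # ~24.3 us
```

The transfer term dwarfs the command-latency term, so the faster-clocked kit finishes both requests far sooner despite its "worse" timings.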
As I often run Windows Media Player, keep downloads going and sometimes encode videos, memory speed wins again, regardless.
* Originally, CPUs had very small on-chip memory stores and poor branch prediction, which meant the central processing unit had to access memory repeatedly. At the time memory was slow, as was the bus, so there was no need to raise the memory clock; in truth, doing so at looser latency timings actually slowed performance. When the CPU misses a piece of data it goes "mental" and stalls, waiting for the data to arrive. The lower the memory latency, the sooner it could get hold of the data and send it to the CPU (as the bus speed was small).
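To see how expensive such a stall is in CPU terms, a one-line estimate (the 3GHz clock and ~70ns round trip are assumed example figures):

```python
# Cycles a CPU wastes per main-memory miss = memory latency x CPU clock.
# Assumed example: 3 GHz CPU, ~70 ns round trip to main memory.

cpu_ghz = 3.0
memory_latency_ns = 70.0

stall_cycles = memory_latency_ns * cpu_ghz
print(f"~{stall_cycles:.0f} CPU cycles stalled per miss")  # ~210 cycles
```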
Nowadays this problem rarely occurs, so latency matters less. It is therefore logical to raise the memory clock while relaxing the latencies. In case you haven't noticed, that is exactly what is currently happening; it is half marketing (having the "fastest" memory), but the manufacturers do see the logic.
Note: graphics cards technically have no built-in cache in the core, although G80 has some at stages down its pipeline. Latencies are therefore even less relevant for graphics cards.