Originally Posted by Lavent
Usually when you get into DDR3, MHz matters more than timings, but a difference of 2 in CAS latency is significant. The reason timings were so important in the DDR era was that the percentage change in speed due to latency was much higher than with DDR3 (i.e. 2 clocks to 3 clocks is a 33% change, while 9 clocks to 10 clocks is only a 10% change).
Looking just at CAS, according to this article http://www.thetechrepository.com/showthread.php?t=160
the theoretical access time for RAM at 1333 with 7-x-x-y is ~10.49 ns, while 1600 with 9-x-x-y is closer to 11.25 ns.
This also depends on your processor speed and a lot of other factors, but that article is something to start with.
Anyway, in theory the tighter timings at 1333 will net you better performance than the looser timings at 1600.
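Quick sanity check on those access times (my own sketch, not from the article): the first-word CAS latency is just CL divided by the effective I/O clock, which for DDR is half the transfer rate.

```python
def cas_latency_ns(transfer_rate_mt_s, cl):
    """First-word CAS access time in nanoseconds for DDR memory."""
    io_clock_mhz = transfer_rate_mt_s / 2   # DDR: two transfers per clock
    return cl / io_clock_mhz * 1000         # cycles / MHz -> ns

print(round(cas_latency_ns(1333, 7), 2))    # DDR3-1333 CL7: ~10.5 ns
print(round(cas_latency_ns(1600, 9), 2))    # DDR3-1600 CL9: 11.25 ns
```

Which matches the article's ~10.49 ns vs 11.25 ns figures (the small difference comes from rounding 1333 vs the exact 1333.33 MT/s).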
Thanks for already explaining the basic part, improving access time. I'll add my 2 cents:
Most memory accesses in software are random accesses: reading integer and double variables, doing some work with them, and so on. Control loops in programs and drivers use a lot of these.
Having said that, reduced latency helps these kinds of accesses more than raw frequency does. HOWEVER: for sequential access, like copying or moving larger blocks of memory (1 KB+), it's frequency that shines over tighter timings, because with burst operations the 'row' and 'column' that address the actual cells inside the DRAM need to be set only once. From there on, the DRAM auto-increments the column internally without paying that external access time again.
So what are you doing most? Moving large amounts of memory around, or intensive computing? For the first: frequency; for the second: tight timings.
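To put rough numbers on that trade-off, here's a toy model (my own assumptions, not from the article): scattered reads pay the full CAS latency every time, while a burst copy pays it once and then streams one transfer per data-rate tick.

```python
def random_reads_ns(n_reads, transfer_rate_mt_s, cl):
    """Total time for n scattered reads; each one pays the CAS latency."""
    io_clock_ghz = transfer_rate_mt_s / 2 / 1000
    return n_reads * cl / io_clock_ghz

def burst_copy_ns(n_words, transfer_rate_mt_s, cl):
    """Sequential burst: row/column set once, then words stream at the data rate."""
    io_clock_ghz = transfer_rate_mt_s / 2 / 1000
    setup = cl / io_clock_ghz                             # one-time access latency
    stream = (n_words - 1) / (transfer_rate_mt_s / 1000)  # ns per remaining transfer
    return setup + stream

# 1000 scattered reads: DDR3-1333 CL7 beats DDR3-1600 CL9
print(random_reads_ns(1000, 1333, 7) < random_reads_ns(1000, 1600, 9))  # True

# Bursting 128 sequential 64-bit words (1 KB): DDR3-1600 CL9 wins
print(burst_copy_ns(128, 1600, 9) < burst_copy_ns(128, 1333, 7))        # True
```

This ignores row hits/misses, precharge, and everything the memory controller does, but it shows the crossover: latency dominates scattered access, bandwidth dominates big sequential blocks.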