
A while ago I tested my RAM using the AIDA64 memory test and got these results: View attachment 2560277

Now, a few months later, I ran the same test and got this result: View attachment 2560278

What has changed since then: I updated the BIOS from F12K to F12, and one of my RAM sticks has lost its RGB lighting. My RAM is as follows: G.Skill F4-3600C17-16GTZR x4, 16GB Samsung B-die modules. I have been thinking of removing the dull stick and going 32GB instead of 64GB, which might even improve my RAM overclock, currently a measly 3466 MHz.

I am really stumped as to what happened to my RAM. Could it be a Windows update? I have Secure Boot enabled, TPM enabled via an external TPM module, and HVCI enabled. I am trying to give you guys as much information as possible. Is it time to invest in a new set of 4x16GB RAM? I would really rather not, because I've spent so much on this build already. Any suggestions are welcome. Thanks guys.
Hey there,

Your RAM sticks are fine. What is going on is something I discovered myself not too long ago: you have lowered your cache speed by 200 MHz, from 4.7 GHz back to 4.5 GHz.

The 'problem' is that on Intel, the L3 cache is a shared resource. It sits between the memory channels and the CPU cores, so all two or four memory channels have to pass through the L3 cache before traffic fans out to the individual cores. In other words, it's an L3 cache bottleneck.

I've noticed this with my 5960X, which has four channels. One stick gives an accurate bandwidth number. With two sticks, that bandwidth doubled, a near 100% scaling. When I added the 3rd stick, I no longer got another 100% scaling: in terms of multiplication, with three sticks the bandwidth increased to somewhere around 2.7x rather than 3x. Adding a 4th stick barely increased bandwidth at all.
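You can capture that scaling pattern with a toy model: per-stick bandwidth adds up linearly until a shared L3/uncore ceiling caps it. The two constants below are illustrative assumptions I picked to reproduce the rough 1x / 2x / ~2.7x / flat progression above, not measured values:

```python
# Toy model: channels add bandwidth linearly until the shared L3 ceiling caps it.
PER_CHANNEL_GBS = 13.0   # hypothetical single-stick bandwidth (assumption)
L3_CEILING_GBS = 36.0    # hypothetical L3/uncore throughput limit (assumption)

def effective_bandwidth(sticks: int) -> float:
    """Aggregate bandwidth visible to the cores, in GB/s."""
    return min(sticks * PER_CHANNEL_GBS, L3_CEILING_GBS)

for n in range(1, 5):
    scaling = effective_bandwidth(n) / PER_CHANNEL_GBS
    print(f"{n} stick(s): {effective_bandwidth(n):.1f} GB/s ({scaling:.2f}x)")
```

With these made-up numbers you get 1.0x, 2.0x, ~2.77x, ~2.77x, which is the same shape as what I measured: the 3rd stick scales partially and the 4th adds nothing. Raising the cache clock raises the ceiling, which is why cache frequency shows up in RAM bandwidth numbers.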

I had already noticed long before that increasing the cache frequency added to the RAM bandwidth, which I found somewhat odd at the time: the AIDA64 test measures each component of the memory path individually, so I didn't expect a cache clock to change the RAM bandwidth result much.

This leaves us with a question: what's the point of high-speed sticks in multichannel configurations when there is an L3 cache bottleneck? My guess is that for a single-threaded, memory-intensive workload, where only one channel's speed matters, fast sticks with tight timings are still worth it. But when multiple threads access memory at once (physics on the CPU to offload the GPU, etc.), you'll hit the L3 cache bottleneck.
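You can probe that single-thread vs multi-thread difference yourself with a rough sketch like this one. It's not AIDA64's method, just a simple streaming-copy probe; the buffer size and thread counts are arbitrary assumptions, and it relies on NumPy releasing the GIL during large contiguous copies so the threads genuinely stream memory in parallel:

```python
import threading
import time

import numpy as np

def copy_bandwidth(n_threads: int, mb_per_thread: int = 64, iters: int = 4) -> float:
    """Aggregate copy bandwidth in GB/s across n_threads parallel streams."""
    n = mb_per_thread * 1024 * 1024 // 8                 # float64 elements
    bufs = [(np.ones(n), np.empty(n)) for _ in range(n_threads)]

    def worker(src, dst):
        for _ in range(iters):
            np.copyto(dst, src)                          # streams a read + a write

    threads = [threading.Thread(target=worker, args=b) for b in bufs]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - t0
    # Count both read and write traffic for every copy.
    return n_threads * iters * mb_per_thread * 2 / 1024 / elapsed

if __name__ == "__main__":
    for nt in (1, 2, 4):
        print(f"{nt} thread(s): {copy_bandwidth(nt):.1f} GB/s")
```

If the shared-cache bottleneck theory holds, the per-thread number should drop noticeably as you add threads, well before you'd expect the DRAM channels themselves to be saturated.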

I'm surprised, though, that you run into cache congestion with only dual channel. Maybe that's a tribute to your sticks ;)


Edit: Can you run a memory benchmark with just one stick, once at its XMP settings and then again at your 3466 15-15-15 settings? So two benches for one stick. Intel's docs put the memory bandwidth for your CPU at 41.6 GB/s. With one stick you can check the per-stick bandwidth, which stays below the total bandwidth across the two channels.
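For reference, the theoretical peak is just the transfer rate times the 8-byte (64-bit) channel width, per channel. A quick sanity check, assuming Intel's 41.6 figure is based on the rated DDR4-2666 speed:

```python
def peak_gbs(mts: float, channels: int) -> float:
    """Theoretical peak DDR bandwidth: MT/s x 8 bytes per transfer x channels."""
    return mts * 8 * channels / 1000  # decimal GB/s

print(peak_gbs(2666, 2))  # dual-channel DDR4-2666
print(peak_gbs(3600, 1))  # one DDR4-3600 stick, single channel
```

Dual-channel DDR4-2666 works out to ~42.7 GB/s decimal, which is in the same ballpark as Intel's 41.6 number (presumably the same value in binary units). A single 3600 stick tops out at ~28.8 GB/s, so one stick can never saturate the dual-channel figure, which is exactly why the single-stick bench is informative.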