Overclock.net

1 - 17 of 17 Posts

·
Registered
Joined
·
1,455 Posts
Discussion Starter #1
Ahoi.

I own two 2x8GB TridentZ 3200-C14 SR kits (= 4x8GB), one older than the other, but their specs match. They run in a T-topology GB Aorus Master (9900K). As far as I know these kits use B-die.

When I run 2 DIMMs @ 3200-C14 I can lower tRFC from its default of 560 cycles (350 ns) to 280 cycles and lower, error-free for hours. But once I populate all 4 DIMMs I can quickly reproduce errors in RamTest at less than 150% coverage. That is with all other timings at stock/auto settings.

Why does the behavior change so clearly between 2 DIMMs and 4 DIMMs?
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #2
My PC just hard-rebooted (audible relay click) during normal web surfing. So it seems the new F4-3200C14-8GTZKW kit is not stable at tRFC 280 cycles even in a 2-DIMM configuration.

I will reset tRFC to stock and keep an eye on this, because this is a replacement kit; the original kit was clearly unstable at stock settings (my first GTZ, sans KW, kit is stable).
 

·
Iconoclast
Joined
·
30,435 Posts
tRFC is the time spent on a refresh cycle. The lowest you can set with stability largely depends on the ICs involved, the clock speed of the memory (more cycles are needed to cover the same refresh time, of course), and temperature.

Number of DIMMs shouldn't have much of an impact, unless there is a problem with the memory power delivery, one or more of the additional DIMMs has a weak IC, or insufficient airflow lets the higher DIMM count noticeably raise the temperature of the memory. However, it could be that overlapping refresh cycles increase the current draw of the refreshes, requiring them to take more time to complete.

I would test the kits separately to ensure the difference isn't just per-sample variance. If you have isolated it to the number of DIMMs used rather than the specific DIMMs used, and temperature isn't an issue, you might try increasing vDIMM or vPP.
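To put numbers on the cycles-vs-time point: tRFC is programmed in memory-clock cycles, and for DDR the memory clock is half the data rate, so the same nanosecond target costs more cycles as speed goes up. A quick sketch of the conversion (plain Python; the example values are the ones from this thread):

```python
# Convert tRFC between memory-clock cycles and nanoseconds.
# For DDR, the memory clock in MHz is the data rate (MT/s) divided by 2,
# so time in ns = cycles * 2000 / data_rate.

def trfc_ns(cycles: float, data_rate_mt: float) -> float:
    """tRFC in nanoseconds for a cycle count at a given data rate."""
    return cycles * 2000 / data_rate_mt

def trfc_cycles(ns: float, data_rate_mt: float) -> float:
    """Cycle count needed to hold a given tRFC time at a data rate."""
    return ns * data_rate_mt / 2000

print(trfc_ns(560, 3200))      # stock in this thread: 350.0 ns
print(trfc_ns(280, 3200))      # tightened: 175.0 ns
print(trfc_cycles(350, 3600))  # same 350 ns at DDR4-3600: 630.0 cycles
```

This is why a cycle count that is stable at one data rate has to be rescaled, not copied, when the memory clock changes.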
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #4
Well, right until that restart happened (cold boot) out of nowhere, I thought the new GTZKW kit was stable at tRFC 280 in a 2-DIMM configuration. These are Samsung B-die kits, which are said to run down to 110-160 ns tRFC, even though their stock rating is 350 ns. I would be quite surprised if the new kit really won't run below 200 ns.

That being said, with 4 DIMMs I can reproduce errors within a much shorter amount of time. This time the kit ran RamTest much longer without error, and the cold boot happened after the test had already been stopped. So for the time being it still seems as if 4 DIMMs are less stable than 2 DIMMs (T-topology) when tRFC (alone) is tightened.
 

·
Iconoclast
Joined
·
30,435 Posts
I can't run my 4x8GiB B-die config anywhere near as low as 160ns tRFC either, though I didn't see as dramatic a difference from two DIMMs. This is on a daisy-chain board, but I doubt topology has much bearing on the refresh cycle time required as the number of simultaneous refreshes should be the same and the power delivery isn't what's affected by the topology differences.

You could well be right that 2 vs. 4 DIMMs is the primary culprit, in and of itself. However, I'd still thoroughly test the kits, if not individual DIMMs, separately, to confirm it's not just a weak sample somewhere.
 

·
ٴٴٴ╲⎝⧹˙͜>˙⧸⎠╱
Joined
·
6,323 Posts
more cooling for the memory, or more volts to SA/IO/RAM

2x8 = single rank (one rank per channel)
4x8 = two ranks per channel, over 4 slots = more IMC strain and signal degradation/crosstalk

Likely you need to back off tRFC and tREFI until the other portions of the RAM can be stabilized.

I'm running ~135 ns with 1.23 V actual (1.2 V set) SA/IO and 1.47 V RAM

M12 Apex / 2x16GB B-die (3200C14, dual rank)

Up to 10 passes TM5 Extreme stable + 6 hours of randomx-stress (for testing cache stability)

[screenshot attachment]
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #7
There is little to "back off", because I tested this with stock settings for both CPU and RAM (XMP profile). More testing revealed that, given enough time, 4 DIMMs are unstable even at full stock settings with the high tRFC.

Memory temperature is a factor for sure, but stress tests (RamTest) push the DIMMs well over 50°C in both 2- and 4-DIMM configurations. Still, I would expect stock/XMP settings not to cause errors. I will have to do more tests on the 2 new KW DIMMs alone to see if they are really stable running 2 DIMMs at stock (my older non-KW ones are).
 

·
Avid Something-or-Other
Joined
·
160 Posts
There is little to "back off", because I tested this with stock settings for both CPU and RAM (XMP profile). More testing revealed that, given enough time, 4 DIMMs are unstable even at full stock settings with the high tRFC.

Memory temperature is a factor for sure, but stress tests (RamTest) push the DIMMs well over 50°C in both 2- and 4-DIMM configurations. Still, I would expect stock/XMP settings not to cause errors. I will have to do more tests on the 2 new KW DIMMs alone to see if they are really stable running 2 DIMMs at stock (my older non-KW ones are).
To be fair, you are mixing two kits of RAM together. The manufacturer set those XMP profiles with the expectation of only two sticks, and now the motherboard has to contend with four of them. Modern motherboards and training processes do a pretty good job of this, as you can attest, but the reality is that you are not really running the sticks "stock" as the manufacturer intended.

Assuming this memory is in fact Samsung B-die, putting a fan over the memory sticks is likely your best bet to stabilize them (at stock), since these ICs are notorious for temperature sensitivity. Just because the Samsung B-die process is capable of low tRFC doesn't mean that every IC is going to be happy running at that low tRFC. For example, some meh/low-tier 2x8 Samsung B-die sticks I have (F4-3400C16D-16GTZ) can't sustain lower than 300 tRFC @ 4000 16-18-18-48 regardless of how much airflow I chuck at them, and they get cranky above 32°C with those settings. I highly recommend starting at a higher-than-expected tRFC and working your way down until you find the temperature threshold + tRFC/tREFI values that work for you, rather than trying to stabilize a low tRFC right off the bat.
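The "start high and work down" advice can be framed as a simple search loop. The `is_stable` check below is only a hypothetical stand-in (in reality each step means setting tRFC in the BIOS and running TM5/RamTest, which can't be driven from a script on most boards), so treat this as the procedure, not a tool:

```python
# Sketch of stepping tRFC down from a known-good value until errors appear.
# is_stable() is a hypothetical stand-in for "set tRFC in BIOS, run a
# memory test, and report pass/fail"; here it just models a kit whose
# limit is 340 cycles so the loop has something to find.

def is_stable(trfc: int) -> bool:
    return trfc >= 340  # stand-in model, not a real measurement

def find_min_trfc(start: int = 560, step: int = 20, floor: int = 200) -> int:
    """Return the lowest tRFC (in cycles) that still passed testing."""
    last_good = start
    trfc = start - step
    while trfc >= floor and is_stable(trfc):
        last_good = trfc
        trfc -= step
    return last_good

print(find_min_trfc())  # with the stand-in model above: 340
```

Stepping in coarse increments first and then re-testing near the limit with longer runs mirrors what the posters here describe.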
 

·
Registered
Joined
·
1,144 Posts
Ahoi.

I own two 2x8GB TridentZ 3200-C14 SR kits (= 4x8GB), one older than the other, but their specs match. They run in a T-topology GB Aorus Master (9900K). As far as I know these kits use B-die.

When I run 2 DIMMs @ 3200-C14 I can lower tRFC from its default of 560 cycles (350 ns) to 280 cycles and lower, error-free for hours. But once I populate all 4 DIMMs I can quickly reproduce errors in RamTest at less than 150% coverage. That is with all other timings at stock/auto settings.

Why does the behavior change so clearly between 2 DIMMs and 4 DIMMs?
When you say one is older than the other, how much older?

I think of tRFC as the time to re-f'n charge, while the RAM is not active,
and tREFI as, well, the refresh time interval, during which the RAM is active if you subtract tRFC.
So if you look at the proportion of the overall cycle time that tRFC takes, it is quite small really: 65535 - 350 = 65185, a ratio of about 1:186 tRFC:tREFI.
If I use 340, it's 65535 - 340 = 65195, a ratio of about 1:192.
So if you have to raise it for 4 sticks, it's not that big a time difference really, and by not tightening it too much you get a little more room for ambient temperature changes etc. :)
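That arithmetic is easy to verify: a refresh of tRFC cycles fires roughly every tREFI cycles, so the overhead fraction is tiny either way. A quick check (plain Python, using the numbers from the post):

```python
# Rough share of time spent refreshing: tRFC out of every tREFI cycles.
TREFI = 65535  # common maximum tREFI setting, as used in the post

for trfc in (350, 340):
    active = TREFI - trfc                 # cycles left for normal accesses
    ratio = round(active / trfc)          # ~186 for 350, ~192 for 340
    overhead = trfc / TREFI               # fraction spent in refresh
    print(f"tRFC {trfc}: active {active}, ratio 1:{ratio}, "
          f"overhead {overhead:.3%}")
```

Either setting keeps refresh overhead near half a percent, which is the poster's point: the raw time cost of a looser tRFC on 4 sticks is small.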
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #10
The old set is labeled 2017/02. The new set had to be replaced because it was unstable even in a 2-DIMM configuration at stock settings; the replacement kit is labeled 2020/09.

I know from testing that lower tRFC improves 7-Zip throughput, so it's nice to have. I can live with higher settings, but after seeing my old set go down to 160 ns (256 cycles at 3200) I did not expect the combination of 4 DIMMs not even being stable at the stock 560 cycles and being so sensitive to tRFC tightening that it shows within minutes.

Once I get hold of a 5900X I will put everything in a new case with different airflow. It will stay a silent configuration, but there should still be more air moving around the DIMMs then. Currently there is a 5.25" drive cage in front of them, and the GPU's PWM section radiates its heat towards the DIMMs (one drawback of modern large GPUs).
 

·
Banned
Joined
·
1,957 Posts
Different PCB layouts, perhaps? One kit could be A0, the other A2. A0 will go tighter but won't clock as high.
 

·
Overclock the World
Joined
·
1,984 Posts
Can you confirm that neither tRAS nor tRP changes?
I think what happens is that tRC auto-adjusts on Intel systems between different DIMM amounts.
tRFC also depends on capacity, not only on the remaining timings and not only on the IC.

tRFC triggers at regular intervals; tREFI can stay high but doesn't have to stay high.
Whether a triggered tRFC completes in time depends on an integer value, what I call the "tRFC divider = tRC multiplier".
The same concept applies to DDR3 SO-DIMMs too.
But you don't seem to have a readout for tRC. Who knows what it used, but it will increase with DIMM count.
 

·
Registered
Joined
·
1,144 Posts
The old set is labeled 2017/02. The new set had to be replaced because it was unstable even in a 2-DIMM configuration at stock settings; the replacement kit is labeled 2020/09.

I know from testing that lower tRFC improves 7-Zip throughput, so it's nice to have. I can live with higher settings, but after seeing my old set go down to 160 ns (256 cycles at 3200) I did not expect the combination of 4 DIMMs not even being stable at the stock 560 cycles and being so sensitive to tRFC tightening that it shows within minutes.

Once I get hold of a 5900X I will put everything in a new case with different airflow. It will stay a silent configuration, but there should still be more air moving around the DIMMs then. Currently there is a 5.25" drive cage in front of them, and the GPU's PWM section radiates its heat towards the DIMMs (one drawback of modern large GPUs).
I think @rares495 is correct. I had 2 sets of 4400C19 G.Skill of similar ages to yours, and they were completely different.
One was an A0 PCB and the other was A2, and the chips on them were different sizes too.
If you look up under the heatsinks you should be able to see if there is a difference in the chip layout :)
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #14
Thanks for the hints so far. At least it seems that the new DIMMs are stable at stock settings in a 2-DIMM configuration. Not stable in a 4-DIMM configuration at stock settings, but I will check again whether all the timings really are the same, as suggested by Veii.
 

·
Registered
Joined
·
123 Posts
You generally want to leave tRFC for the very end, since it's a really sensitive timing. And you don't want to set it too tight, since the PC can freeze. Refer to the chart below for a rough estimate of where you should be:

[tRFC chart attachment]
 

·
Premium Member
Joined
·
10,116 Posts
You generally want to leave tRFC for the very end, since it's a really sensitive timing. And you don't want to set it too tight, since the PC can freeze. Refer to the chart below for a rough estimate of where you should be:

[tRFC chart attachment]
Thanks for that :)
 

·
Registered
Joined
·
1,455 Posts
Discussion Starter #17
Like I wrote, all timings were set to stock/XMP except tRFC, so it was the only timing being changed and thus also the last timing changed. :p
 