Once again about tRAS.

The whole point of tRAS is to keep the row open. That matters for non-sequential reads; during sequential reads the row won't close anyway, because of the tRTP rule.
DDR5 memory has 32 banks, so it makes sense for the controller to spread reads across different banks. tRAS is still theoretically advantageous if we will need data from our open row again within a window shorter than 2*tRCD + tRP + tRTP. But it would be very dumb of the memory controller to rely on that: it should either read and write everything it needs from the activated row immediately, or use other banks, and fortunately it has 31 more of them.

Therefore, the optimal thing is to close our row as quickly as possible.
tRAS = tRCD + tRTP or less -
it doesn't matter, because of the tRTP rule.
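To make the "or less doesn't matter" point concrete, here's a minimal sketch (my own illustration in Python, with made-up example clock counts, not values from this thread). The earliest legal precharge is gated by whichever expires later, tRAS after ACT or tRTP after the last READ, so any tRAS at or below tRCD + tRTP ends up hidden behind the tRTP rule:

Code:
def earliest_precharge(t_act, t_read, tRAS, tRTP):
    # earliest cycle a PRE may be issued after ACT at t_act and a READ at t_read
    return max(t_act + tRAS, t_read + tRTP)

tRCD, tRTP = 38, 12          # example values, not from this thread
t_act = 0
t_read = t_act + tRCD        # first READ can go out once tRCD has elapsed

for tRAS in (tRCD + tRTP - 4, tRCD + tRTP, tRCD + tRTP + 8):
    pre = earliest_precharge(t_act, t_read, tRAS, tRTP)
    print(f"tRAS={tRAS:2d} -> earliest PRE at cycle {pre}")

Any tRAS at or below tRCD + tRTP prints the same precharge point (tRCD + tRTP); only a larger tRAS actually delays the precharge.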
Got it. Thanks!
 
A row only needs to be open as long as charge is flowing toward the data buffer and sense amp.
Once the charge has moved and the data has been identified,
RAS doesn't matter anymore. Its length only matters for charge overflow - but that is RTP's job, to prevent parasitic issues.
// RTP has no other special properties; it stands alone, set by electrical design and IC density.
RAS needs to be as long as RCD takes for charge to flow toward the data buffer, plus X.

After RCD has elapsed, the Row Access Strobe is irrelevant.
It's left in a used state with a counter. Default DDR5 operation ~ PRAC.

Once the data has been copied to the data buffer, the Column Access Strobe happens.
CAS hits earlier or later within its allowed duration. But it hits.
The bank gets swapped in round-robin order.

The PHY identifies by page counter whether a bank-group swap or a column swap should happen.
The transfer happens, with background RP or RC upkeep and restore from the partially remaining charge.
The read is destructive, but into the data buffer. It never fully discharges the ACT'd row.

If a second column hit happens,
the duration would be CAS + BURST for a read.

If the memory is busy, the duration is RP + RCD + CAS.
RAS is a "don't care". It only becomes a "care" if it's too low.
Then it gets repeated by itself in full. Not really causing a row miss, but it can trigger one.
If a row miss happens, the full operation halts. RAS is not even triggered, and neither is RCD.
RP gets inserted up front, instead of after.
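A rough way to lay those costs out side by side (my own back-of-the-envelope sketch in Python; the clock counts are made-up examples, not numbers from this post, and I've added the burst length to every case):

Code:
tCL, tRCD, tRP, tBURST = 30, 38, 38, 8   # illustrative DDR5-ish values only

row_hit      = tCL + tBURST              # 2nd column hit on an already-open row
row_closed   = tRCD + tCL + tBURST       # bank precharged/idle: ACT, then READ
row_conflict = tRP + tRCD + tCL + tBURST # wrong row open: PRE, ACT, READ

print("row hit     :", row_hit, "clocks")
print("row closed  :", row_closed, "clocks")
print("row conflict:", row_conflict, "clocks")

tRAS never shows up in the hit path; it only bites when it is set so low that the controller has to stall before it is allowed to precharge.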
 
Wow ok, temperature sensitivity is definitely still a thing on A-Die lol.

At 44c they passed 5 hours of TM5. So I turned off the RAM fan over dinner and let TM5 run again to see at what temperature they'd get unstable. Well, that was 49c. As soon as they went over 49c, one hitting 51.5c, the errors started pouring in lol. 10 errors in the next like 12 minutes.

Which timings are the most sensitive to this? I mean, I can't even dream of keeping them under 49c in the summer with 35c ambients lol. No way.
This happens with me too; as soon as it gets close to 50°C it can error at any time. I'm interested in finding a way to make the profile more heat resistant, since I'm in a Lian Li A3 case and I can't get a fan over the RAM.

TechPowerUp showed this in an article, concluding that 8000 MT/s at the same voltages as 6000 MT/s is much more heat sensitive.

This is 100% in line with what I have found to be a primary source of memory test errors. It can vary a little but it is normally between 45°-50°C that stability begins to be impacted. This was true for generations before DDR5. I experience the same thing, at the same temperature range, with DDR4.
 
Anyone using the G.Skill 2x32GB CL26 kit by any chance ? (part number is F5-6000J2636G32GX2-TZ5NR)

I have one on order and would be interested to hear from anyone with experience tuning this kit for a stable 24/7 overclock.
 
Wow ok, temperature sensitivity is definitely still a thing on A-Die lol.

At 44c they passed 5 hours of TM5. So I turned off the RAM fan over dinner and let TM5 run again to see at what temperature they'd get unstable. Well, that was 49c. As soon as they went over 49c, one hitting 51.5c, the errors started pouring in lol. 10 errors in the next like 12 minutes.

Which timings are the most sensitive to this? I mean, I can't even dream of keeping them under 49c in the summer with 35c ambients lol. No way.
(screenshot)


;)
 
This is 100% in line with what I have found to be a primary source of memory test errors. It can vary a little but it is normally between 45°-50°C that stability begins to be impacted. This was true for generations before DDR5. I experience the same thing, at the same temperature range, with DDR4.
Yeah, my old DDR4 B-dies did the same around the high-47c range. 3800C15 was fine up to there but above that, nope. Same with 4400C17.

I made a setup now for the summertime: 8000 34-46-40-50-90-560 (140ns tRFC), GDM enabled, with 50k tREFI @ 1.50 VDD / 1.43 VDDQ, and it's been running for almost an hour at ~51.5c with the fan at minimum RPM (560-ish) and no issues yet. If it passes 1 hour I'll turn the fan off entirely. They'll probably go into the high 50's, maybe low 60's, but it's a good test. I used 2:1 because I can run vSOC and VDDIO much, much lower, saving some CPU power and thermals as well. FCLK still 2200.
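For anyone following the 140ns figure, a quick sanity check of the conversion (my own throwaway Python snippet; the 560, 50k and 8000 numbers are the ones quoted above):

Code:
def clocks_to_ns(clocks, mts):
    mclk_mhz = mts / 2            # DDR: memory clock is half the transfer rate
    return clocks / mclk_mhz * 1000

print(round(clocks_to_ns(560, 8000), 1), "ns tRFC")      # -> 140.0
print(round(clocks_to_ns(50000, 8000), 1), "ns tREFI")   # -> 12500.0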
 
K so this setup:
(screenshot)


It holds up to 60, maybe 61c. When it got to 63c it started erroring really badly. Maybe it'll do better still with 150ns tRFC and a lower tREFI.

Turning the fan off and seeing 63.5c within minutes reminded me how terrible the airflow is inside this case for the RAM: it has a glass front, so no front intake, and the Gen-Z module also blocks any and all airflow from the side intake.
 
A row only needs to be open as long as charge is flowing toward the data buffer and sense amp.
Once the charge has moved and the data has been identified,
RAS doesn't matter anymore. Its length only matters for charge overflow - but that is RTP's job, to prevent parasitic issues.
// RTP has no other special properties; it stands alone, set by electrical design and IC density.
RAS needs to be as long as RCD takes for charge to flow toward the data buffer, plus X.

After RCD has elapsed, the Row Access Strobe is irrelevant.
It's left in a used state with a counter. Default DDR5 operation ~ PRAC.

Once the data has been copied to the data buffer, the Column Access Strobe happens.
CAS hits earlier or later within its allowed duration. But it hits.
The bank gets swapped in round-robin order.

The PHY identifies by page counter whether a bank-group swap or a column swap should happen.
The transfer happens, with background RP or RC upkeep and restore from the partially remaining charge.
The read is destructive, but into the data buffer. It never fully discharges the ACT'd row.

If a second column hit happens,
the duration would be CAS + BURST for a read.

If the memory is busy, the duration is RP + RCD + CAS.
RAS is a "don't care". It only becomes a "care" if it's too low.
Then it gets repeated by itself in full. Not really causing a row miss, but it can trigger one.
If a row miss happens, the full operation halts. RAS is not even triggered, and neither is RCD.
RP gets inserted up front, instead of after.
Glad to see some tRP action :) Especially when the CPU tries to guess what to fetch and sometimes fails :)
 
Question about RCD and my kit/setup:

6000-RCD 34 = error
6000-RCD 35 = stable
6000-RCD 34 with 1500 UCLK = Error

6200-RCD 35 = error
6200-RCD 36 = stable

6400-RCD 37 = error
6400-RCD 38 = error
6400-RCD 39 = stable

Is the RCD limit from my kit or from the IMC?
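One way to read those results is to put them on a common time scale (a quick Python helper I threw together; the stable/error flags are just the results above, nothing new). If the wall sits at roughly the same number of nanoseconds at every data rate, that points more at the DRAM IC than at the IMC:

Code:
def trcd_ns(trcd_clocks, mts):
    return trcd_clocks / (mts / 2) * 1000   # clocks -> ns at the given data rate

results = [  # (MT/s, tRCD, stable?) as reported above
    (6000, 34, False), (6000, 35, True),
    (6200, 35, False), (6200, 36, True),
    (6400, 37, False), (6400, 38, False), (6400, 39, True),
]

for mts, trcd, ok in results:
    print(f"{mts} MT/s  tRCD {trcd} = {trcd_ns(trcd, mts):5.2f} ns ->",
          "stable" if ok else "error")

By that math the stable floor lands around 11.6-12.2 ns at all three data rates.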
 
Anyone using the G.Skill 2x32GB CL26 kit by any chance ? (part number is F5-6000J2636G32GX2-TZ5NR)

I have one on order and would be interested to hear from anyone with experience tuning this kit for a stable 24/7 overclock.


Quite a few people have been asking about 2x32 recently, so I've linked my settings again ^^... 2x32GB kits all seem to tune almost exactly the same, so yours will be the highest bin, but it's the same chips being used.

Depending on your CPU you might get 6200 or 6400 MT/s stable 1:1 with CL26. You'd need increased VSOC and maybe a minor bump to VDD beyond what I'm using for 6000 (and you might also need to increase the SCLs and SD/DDs by 1 or 2 for 6200 or 6400; actually, even for 6000, 1-2 higher for those SCL/SD/DD values can give better results, it seems to depend on the motherboard and maybe the memory kit used). 6400 also needs tRCDRD bumped to 38, I'm pretty sure.

Other than that, I think you'll find it hard to beat the performance of this setup, and it should be dead stable. Basically just plug in my values and change tCL to 26 and you should be good to go (oh, I think VDD 1.45v is also needed for CL26 on that kit you have on order).

Also, verify that the tPHYRDL value of each stick matches, for best stability and performance.

I run Nitro enabled with settings of disabled, disabled, 0, 8x, 8x... alternatively you can do 1-2-0 8x 8x (it's not stable for me, but it is for others with 2x32 kits... I can't say if it's better or worse).

Oh, one final note: there is some debate about the optimal tRC. Buildzoid showed a lower value having a tiny bit better performance in one benchmark; others here on the forum are using tRAS+tRCD=tRC. I haven't found instability with a "low" setting as in my ZenTimings screenshot, but it's up to you.

Edit: Just remembered someone told me tWRRD wasn't stable at 1 for them; you can use 2 instead, along with tRDWR 16. I honestly didn't test enough to find out whether 1/14 actually beats 2/16; I can only say the difference between the two is under 1%.
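On the tRC question: a tiny throwaway comparison of the two rules of thumb (Python; the example values are mine, not anyone's actual profile; the conventional JEDEC-style floor is tRAS + tRP, while the forum rule mentioned above uses tRAS + tRCD):

Code:
tRAS, tRP, tRCD = 56, 38, 38   # illustrative values only, not a real profile

print("tRC = tRAS + tRP  :", tRAS + tRP)    # conventional/JEDEC-style minimum
print("tRC = tRAS + tRCD :", tRAS + tRCD)   # rule of thumb mentioned above

With tRP and tRCD this close the two land near each other anyway; whether a lower tRC is actually stable seems to be kit and board dependent.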
 
@F77

I don't think it's the IMC.

RAM IC, maybe some extra voltage might help.

Mine is a Patriot Viper Venom DDR5 7000 C32 2x16GB kit, SK Hynix A-die.

MT/s  CL  tRCD  VDD      CL (ns)  tRCD (ns)

6000  28  35    1.35 V   9.33     11.67
6200  28  37    1.40 V   9.03     11.94
6400  28  38    1.45 V   8.75     11.88

All setups 1:1, GDM on. I tried the same kit on four 9000-series CPUs and two boards, same outcome.
 
RAM IC, maybe some extra voltage might help.
Higher VDD doesn't help. I've tested 6000 with CL30/28/26 and different VDDs. Everything's fine, but RCD under 35 = error.

That's why I run 6200-CL26 with RCD 35. Probably my sweet spot.

(screenshot)
 
K so this setup:
(screenshot)

It holds up to 60, maybe 61c. When it got to 63c it started erroring really badly. Maybe it'll do better still with 150ns tRFC and a lower tREFI.

Turning the fan off and seeing 63.5c within minutes reminded me how terrible the airflow is inside this case for the RAM: it has a glass front, so no front intake, and the Gen-Z module also blocks any and all airflow from the side intake.
Are you using the stock RAM heatsinks (heating blankets)? If so, you could easily knock 10-15°C off the load temperatures by replacing them with some inexpensive Bykski water-cooling jackets, even without a waterblock on top. Even removing the stock heatsinks and running the memory modules naked with air circulation is often cooler than they run stock. Two of my three desktops have water-cooled memory, and the third (SFF) desktop has aftermarket Bykski heatsinks with an aluminum finned block screwed on where the waterblock would normally attach. Memory thermals are no longer something to worry about, unlike when you have to use the stock garbage.
 


Are you using the stock RAM heatsinks (heating blankets)? If so, you could easily knock 10-15°C off the load temperatures by replacing them with some inexpensive Bykski water-cooling jackets, even without a waterblock on top. Even removing the stock heatsinks and running the memory modules naked with air circulation is often cooler than they run stock. Two of my three desktops have water-cooled memory, and the third (SFF) desktop has aftermarket Bykski heatsinks with an aluminum finned block screwed on where the waterblock would normally attach. Memory thermals are no longer something to worry about, unlike when you have to use the stock garbage.
Well, sorta. It's the stock Dominator Titanium heatsinks with the add-on Corsair copper heatsink bar. They're a first-edition numbered kit, so I really, really don't wanna break the heatsink or bust a chip off the PCB when pulling them off.

But still, temps are a problem, yes. Playing Indiana Jones right now at 6400C26 1.60v they're at 45.8c with the fan screaming down on them at 1500 RPM lol. I mean there's some heat in my case obviously due to the bottom rad, but yeah.

Let me have a look at how easily the heat spreader will come off...
 


Quite a few people have been asking about 2x32 recently, so I've linked my settings again ^^... 2x32GB kits all seem to tune almost exactly the same, so yours will be the highest bin, but it's the same chips being used.

Depending on your CPU you might get 6200 or 6400 MT/s stable 1:1 with CL26. You'd need increased VSOC and maybe a minor bump to VDD beyond what I'm using for 6000 (and you might also need to increase the SCLs and SD/DDs by 1 or 2 for 6200 or 6400; actually, even for 6000, 1-2 higher for those SCL/SD/DD values can give better results, it seems to depend on the motherboard and maybe the memory kit used). 6400 also needs tRCDRD bumped to 38, I'm pretty sure.

Other than that, I think you'll find it hard to beat the performance of this setup, and it should be dead stable. Basically just plug in my values and change tCL to 26 and you should be good to go (oh, I think VDD 1.45v is also needed for CL26 on that kit you have on order).

Also, verify that the tPHYRDL value of each stick matches, for best stability and performance.

I run Nitro enabled with settings of disabled, disabled, 0, 8x, 8x... alternatively you can do 1-2-0 8x 8x (it's not stable for me, but it is for others with 2x32 kits... I can't say if it's better or worse).

Oh, one final note: there is some debate about the optimal tRC. Buildzoid showed a lower value having a tiny bit better performance in one benchmark; others here on the forum are using tRAS+tRCD=tRC. I haven't found instability with a "low" setting as in my ZenTimings screenshot, but it's up to you.

Edit: Just remembered someone told me tWRRD wasn't stable at 1 for them; you can use 2 instead, along with tRDWR 16. I honestly didn't test enough to find out whether 1/14 actually beats 2/16; I can only say the difference between the two is under 1%.
Thanks so much for your time. I'll be using it on my X670E Gene along with a 7950X3D (currently waiting for both the memory kit and the motherboard, because mine died unexpectedly a couple of weeks ago and the replacement is somewhere within FedEx's network on its way, hopefully soon). I'll try your suggestions as soon as I have everything up and running and report back. Thanks again, much appreciated.
 