
*Official* Intel DDR4 24/7 Memory Stability Thread

#1 ·
Quote:
[UPDATE JAN 2018]

This thread now accepts RAM Test entries

Quote:
HOW TO USE

Get it from here, 9.99 euros / lifetime license:

https://www.karhusoftware.com/ramtest/

Configure the settings to your preference in the graphical user interface and
click the start button to begin testing. To stop testing, just click the same
button again.

It is recommended to run the test for at least 10 minutes before drawing any
conclusions about the stability of your system memory.


To detect intermittent memory errors you should let the test run for at least one hour.
Quote:
[UPDATE JAN 2017]

This thread will now also include results posted for the Z270 platform. However please try to remember the rules below to keep the discussion from becoming confusing, as what may work on one platform may not be viable on the other.

Quote:
[UPDATE JAN 2016]

This thread will now also include results posted for the X99 platform. However please try to remember the following to keep the discussion from becoming confusing, as what may work on one platform may not be viable on the other.

Please try to remember the following

Clarify what platform and CPU you are speaking about when asking a particular question or speaking about your experience.

Quote the user you are replying to when replying.

When posting stability results, be sure to include the CPU as described in the posting results instructions.


Happy posting!
Overview
This thread is dedicated to showing the various memory configurations of users with DDR4 on Z170/Z270 and X99 chipsets.
There are no strict criteria here; all things Z170/Z270/X99 memory overclocking are welcome. However, to enter the stability chart, certain criteria must be met, as the chart is dedicated to showing what is obtainable on these platforms at an operational level.

If using ASUS within North America you can post here:

ASUS North America Z170 Support / Q&A Thread
ASUS North America X99 Support / Q&A Thread
ASUS North America Z270 Support / Q&A Thread


Broadwell-E info:


Check out the thermal control tool especially; it will be a godsend for those who really want to push things.

How to get the best performance from Broadwell-E
http://edgeup.asus.com/2016/05/get-best-performance-broadwell-e-processors-asus-thermal-control-tool/

X99-Deluxe II build:
http://edgeup.asus.com/2016/05/x99-deluxe-ii-powers-prosumer-workstation-build/

X99-A II build:
http://edgeup.asus.com/2016/05/x99-ii-motherboard-sweet-spot-broadwell-e-vr-builds/

X99-Strix:
http://edgeup.asus.com/2016/05/the-rog-strix-x99-gaming-motherboard-illuminates-a-broadwell-e-gaming-build/

Rampage V Extreme Edition 10:
http://edgeup.asus.com/2016/05/introducing-rampage-v-edition-10/

Z270 info:
http://edgeup.asus.com/2017/01/03/z270-motherboard-guide/

ROG DRAM Timing Control Guide


Quote:
Memory Presets: This is the place to start when overclocking memory. Identify the ICs used on the memory modules and select the relevant profile. We've put a tremendous amount of time into configuring settings to get the most from each memory type. Once the profile is selected, various parameters in the DRAM timing section will be applied for you. From there, manual tweaking is possible as required.

Maximus Tweak: Leave on auto unless experiencing instability. Mode 1 may allow more compatibility, while Mode 2 is better for performance and some memory modules. Auto defaults to Mode 2.

Memory timings will automatically be offset according to memory module SPD and memory frequency. Should you wish to make manual adjustments, the primary settings and third timings are the most important for overall memory performance. Most timings are set in DRAM clock cycles, hence a lower value results in a more aggressive setting (unless otherwise stated).

As always, performance increases from memory tuning are marginal and are generally only noticeable during synthetic benchmarks. Either way, voltage adjustments to VDIMM, VCCIO-D, Cache Voltage and to a lesser extent CPU Core Voltage & VCCIO-A may be necessary to facilitate tighter timings.

Primary Timings

CAS: Column Address Strobe, defines the time it takes for data to be ready for burst after a read command is issued. As CAS factors in more transactions than other primary timings, it is considered to be the most important in relation to random memory read performance. (See third timing section for further info on important timings).

To calculate the actual time period denoted by the number of clock cycles set for CAS we can use the following formula:

tCAS in nanoseconds = (CAS * 2000) / Memory Frequency

This same formula can be applied to all memory timings that are set in DRAM clock cycles.
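As a quick sanity check of that conversion, here is a minimal Python sketch (the function name and example values are only illustrative, not part of the guide):

Code:
def timing_in_ns(clock_cycles: int, ddr_rate: int) -> float:
    """Convert a timing set in DRAM clock cycles to nanoseconds.

    ddr_rate is the effective data rate (e.g. 3200 for DDR4-3200);
    the real clock is half of that, hence the factor of 2000.
    """
    return clock_cycles * 2000 / ddr_rate

# CAS 16 at DDR4-3200 and CAS 14 at DDR4-2800 are the same absolute latency:
print(timing_in_ns(16, 3200))  # 10.0 ns
print(timing_in_ns(14, 2800))  # 10.0 ns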

DRAM RAS TO CAS Latency: Also known as tRCD. Defines the time it takes to complete a row access after an activate command is issued to a rank of memory. This timing is of secondary importance behind CAS, as memory is divided into rows and columns (each row contains 1024 column addresses). Once a row has been accessed, multiple CAS requests can be sent to the row to read or write data. While a row is "open" it is referred to as an open page. Up to eight pages can be open at any one time on a rank (a rank is one side of a memory module) of memory.

DRAM RAS# PRE Time: Also known as tRP. Defines the number of DRAM clock cycles it takes to precharge a row after a page close command is issued, in preparation for the next row access to the same physical bank. As multiple pages can be open on a rank before a page close command is issued, the impact of tRP on memory performance is not as prevalent as CAS or tRCD - although the impact does increase if multiple page open and close requests are sent to the same memory IC and, to a lesser extent, rank (there are 8 physical ICs per rank and only one page can be open per IC at a time, making up the total of 8 open pages per rank simultaneously).

DRAM RAS Active Time: Also known as tRAS. This setting defines the number of DRAM cycles that elapse before a precharge command can be issued. The minimum clock cycles tRAS should be set to is the sum of CAS+tRCD+tRTP.

DRAM Command Mode: Also known as Command Rate. Specifies the number of DRAM clock cycles that elapse between issuing commands to the DIMMs after a chip select. The impact of Command Rate on performance can vary. For example, if most of the data requested by the CPU is in the same row, the impact of Command Rate becomes negligible. If however the banks in a rank have no open pages, and multiple banks need to be opened on that rank or across ranks, the impact of Command Rate increases.

Most DRAM module densities will operate fine with a 1N Command Rate. Memory modules containing older DRAM IC types may however need a 2N Command Rate.

Latency Boundary A sets timings for the main set of Third timings, lower is faster and tighter.
Latency Boundary B sets timings for the secondary set of Third timings, lower is faster and tighter.

Manipulating Latency Boundary A and B negates the need for setting third timings manually, unless granular control of an individual setting is required. For most users, we recommend tuning via the Latency Boundary settings. Advanced users who are tuning for Super Pi 32M may wish to set timings manually instead.

Latency Compensator, when enabled, attempts opportunistic latency compensation that may increase performance or smooth out the memory training process. Try comparing overclocking and performance with it enabled and disabled. You can also try enabling it when the whole system hangs at '55' or '03' or '69' when pushing tight timings with high frequencies.

Secondary Timings

DRAM RAS to RAS Delay: Also known as tRRD (activate to activate delay). Specifies the number of DRAM clock cycles between consecutive Activate (ACT) commands to different banks of memory on the same physical rank. The minimum spacing allowed at the chipset level is 4 DRAM clocks.

DRAM Ref Cycle Time: Also known as tRFC. Specifies the number of DRAM clocks that must elapse before a command can be issued to the DIMMs after a DRAM cell refresh.

DRAM Refresh Interval: The charge stored in DRAM cells diminishes over time and must be refreshed to avoid losing data. tREFI specifies the maximum time that can elapse before all DRAM cells are refreshed. The value for tREFI is calculated according to module density. A higher number than default is more aggressive as the cells will be refreshed less frequently.

During a refresh, the memory is not available for read or write transactions. Setting the memory to refresh more often than required can impact scores negatively in memory sensitive benchmarks. It can be worth tweaking the refresh interval to a larger value for improved performance. For 24/7 use, this setting is best left at default, as real world applications do not benefit to a noticeable degree by increasing this value.
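For a feel of the numbers involved, here is a hedged Python sketch of how a refresh interval expressed in DRAM clocks relates to real time, assuming the common 7.8 µs base interval (the function and values are illustrative; the actual BIOS default depends on module density and platform):

Code:
def trefi_clocks(ddr_rate: int, interval_ns: float = 7800.0) -> int:
    # tCK in nanoseconds is 2000 / data rate (0.625 ns at DDR4-3200),
    # so the refresh interval in DRAM clocks is interval / tCK.
    tck_ns = 2000 / ddr_rate
    return int(interval_ns / tck_ns)

print(trefi_clocks(3200))           # ~12480 clocks at a 7.8 us interval
print(trefi_clocks(3200, 15600.0))  # ~24960 clocks - a larger (more aggressive) tREFI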

DRAM Write Recovery Time: Defines the number of clock cycles that must elapse between a memory write operation and a precharge command. Most DRAM configurations will operate with a setting of 9 clocks up to DDR3-2500. Change to 12~16 clocks if experiencing instability.

DRAM Read to Precharge Time: Also known as tRTP. Specifies the spacing between the issuing of a read command and tRP (Precharge) when a read is followed by a page close request. The minimum possible spacing is limited by DDR3 burst length which is 4 DRAM clocks.

Most 2GB memory modules will operate fine with a setting of 4~6 clocks up to speeds of DDR3-2000 (depending upon the number of DIMMs used in tandem). High performance 4GB DIMMs (DDR3-2000+) can handle a setting of 4 clocks provided you are running 8GB of memory in total and that the processor memory controller is capable.

If running 8GB DIMMs a setting below 6 clocks at speeds higher than DDR3-1600 may be unstable so increase as required.

DRAM Four Activate Window: Also known as tFAW. This timing specifies the number of DRAM clocks that must elapse before more than four Activate commands can be sent to the same rank. The minimum spacing is tRRD*4, and since we know that the minimum value of tRRD is 4 clocks, we know that the minimum internal value for tFAW at the chipset level is 16 DRAM clocks.

As the effects of tFAW spacing are only realised after four Activates to the same DIMM, the overall performance impact of tFAW is not large, however, benchmarks like Super Pi 32m can benefit by setting tFAW to the minimum possible value.

As with tRRD, setting tFAW below its lowest possible value will result in the memory controller reverting to the lowest possible value (16 DRAM clocks or tRRD * 4).
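As a rough illustration of that clamping rule (a sketch only, not how any specific memory controller is implemented), the effective values could be modelled like this in Python:

Code:
def effective_trrd(trrd: int) -> int:
    # The chipset-level minimum for tRRD is 4 DRAM clocks;
    # anything lower set in UEFI reverts to 4.
    return max(trrd, 4)

def effective_tfaw(tfaw: int, trrd: int) -> int:
    # tFAW can never be lower than tRRD * 4 (16 clocks at minimum).
    return max(tfaw, effective_trrd(trrd) * 4)

print(effective_tfaw(10, 4))  # 16 - reverts to the tRRD * 4 floor
print(effective_tfaw(20, 4))  # 20 - values above the floor are kept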

DRAM Write to Read Delay: Also known as tWTR. Sets the number of DRAM clocks to wait before issuing a read command after a write command. The minimum internal spacing is 4 clocks. As with tRTP this value may need to be increased according to memory density and memory frequency.

DRAM CKE Minimum Pulse width: This setting can be left on Auto for all overclocking. CKE defines the minimum number of clocks that must elapse before the system can transition from normal operating to low power state and vice versa.

CAS Write Latency: CWL is the column access time for write commands to the DIMMs. Typically, CWL needs to be set at, or +1 over, the read CAS value. High performance DIMMs can run CWL equal to or up to 3 clocks below read CAS for benchmarking (within the functional limits of the DIMMs and chipset).
Quote:
Third Timings

On modern architectures like Haswell, page access is optimized such that back to back read timings in the third timing section can have a bigger impact on performance than primary settings. Memory interleaving and addressing optimization leads to the possibility of lots of back to back reads and writes (page hits) rather than random access (page misses).

In layman's terms, the best way to describe this is to use the analogy of a hard drive. If data is fragmented, the head needs to move back and forth over the platter reading small bits of data. Similarly on memory, this would mean that CAS, wCL, tRCD, tRP and tRAS would factor more often - opening and closing memory pages across the DIMMs to read or write parts of data.

If data is not fragmented, the head can seek an area of the disc and read the data without needing to move back and forth. On a crude level, memory interleaving works in a similar way, ensuring that data is arranged into rows across ICs so that pages don't have to be opened and closed as often to access it - this saves on excessive primary timing command requirements. That's why some of the back to back read and write timings in the third timing section of UEFI have a bigger impact on performance than the primary timings, which were more important on older platforms.

If the required data is in sequence, CAS can be performed to access it and subsequent requests can be spaced by tRDRD (as low as 4 clocks). A lot of these requests can be sent before a page close request is required - which relies on the primary timing set (tRAS then tRP (tRC must elapse) followed by tRCD and then CAS). That's why the third timing spacing has more impact in memory sensitive benchmarks (memory frequency and other factors aside).

tRDRD: Sets the delay between consecutive read requests to the same page. From a performance perspective, this setting is best kept at 4 clocks. Relax only if the memory is not stable or the system will not POST. Very few memory modules can handle a setting of 4 clocks at speeds above DDR3-2400 so you may need to relax accordingly, although the performance hit may negate any gains in frequency.

tRDRD_dr: Sets the delay between consecutive read requests where the subsequent read is on a different rank. A setting of 6 clocks or higher is required for most DIMMs.

tRDRD (dd): Sets the delay between consecutive read requests where the subsequent read is on a different DIMM. A setting of 6 clocks or higher is required for most DIMMs.

tWRRD: Sets the delay between a write transaction and read command. The minimum value we recommend is tWCL+tWTR.

Auto is preferred from a stability perspective, while setting as close to the minimum value as possible is best from a performance perspective. For Super Pi 32m, try tWCL+tWTR+2 as a starting point. If that is stable, then try -1 clock; if not, add +1 and repeat until stable.

tWRRD_dr: Sets the delay between a write transaction and read command where the subsequent read is on a different rank. Keeping this setting as close to 4 clocks as possible is advised, although it will need to be relaxed to 6+ clocks at high operating frequency or when using high density memory configurations.

tWRRD_dd: Sets the delay between a write transaction and read command where the subsequent read is on a different DIMM. Keeping this setting as close to 4 clocks as possible is advised, although it will need to be relaxed to 6+ clocks at high operating frequency or when using high density memory configurations.

Dec_WRD: May give a small performance increase at speeds lower than DDR3-1600 with CAS 6. Can be left on Auto for all other use.

The following timings have a minimum spacing of Read CAS. The default rules space these settings well, so adjustment should not be required unless as a last resort. Setting equal to CAS is stressful on the DIMMs and IMC. Voltages may need to be increased to run the minimum value that POSTs.

tRDWR: Sets the delay from a read to a write transaction.

tRDWR_dr: Sets the delay from a read to a write transaction where the write is on a different rank.

tRDWR_dd: Sets the delay from a read to a write transaction where the write is on a different DIMM.

MISC

MRC Fast BOOT: When enabled, bypasses memory retraining on warm resets. When disabled, memory is retrained on every boot to counter any drift due to thermal changes. At higher memory frequencies the retraining process can interfere with system stability, hence this setting defaults to enabled on Auto. Should not need changing from Auto unless the system becomes unstable.

DRAM CLK Period: Allows the application of different memory timing settings than default for the operating frequency. Each number in the scale corresponds to a DRAM divider. The lowest setting being DDR3-800. Ordinarily, the timing set applied automatically tracks the DRAM ratio selected. This setting allows us to force timing sets from different dividers to be used with the selected DRAM ratio.

A setting of 14 is recommended for high DRAM operating frequencies. For all other use, leave on Auto.

Scrambler Setting: Alternates data patterns to minimize the impact of load transients and noise on the memory bus. A setting of optimized is recommended for most configurations.

DQ, DQS and CMD Sense Amplifier: Alters the bias on signal lines to avoid mis-reads. The Sense Amplifiers work well on Auto, which lets the BIOS decide the best value for each. Reducing is usually better: lowering DQ Sense and CMD Sense to -1 ~ -6 may stabilize things further when high VDIMM is used (2.2V+, for example).

DRAM Swizzling Bit 0, 1 ,2, 3:

Enable Bit 0 for best OC most times, but disabling may help uncommon DRAM setups.

Enable Bit 1 for best OC most times, but disabling may sometimes help some 4GB DRAM modules.

Disabling Bit 2 helps high frequency overclocking at the expense of performance. Enabling improves performance but may need several tries to boot when frequencies are high and timings are tight. You can retry training when the system hangs at '55' or '03' or '69' by pressing reset here and waiting for the rig to complete a full reset.
Enabling Bit 3 usually helps overclocking and stability unless the IMC is unstable at cold temperatures (Ln2 cooling) in which case try disabling.

RAW MHz Aid: May help to improve stability when using DRAM ratios above DDR3-3100 at the expense of performance.

IC Optimizer: IC Optimizer applies invisible background tweaks for the various DRAM ICs. Note that these were fine-tuned with specific DRAM and CPUs, so it may help or harm depending on how similar they are to the ones in your hands. Try Auto first, then try the profile for your ICs and compare. These will be updated over time in future BIOSes.

For stability results, the recommendations from Raja@ASUS found below and in the overview are the most appropriate on recent platforms:

Quote:
Google stressapptest via Linux Mint (or another compatible Linux distro) is the best memory
stress test available. Google uses this stress test to evaluate the memory stability of their servers
- nothing more needs to be said about how valid that makes it as a stress test tool.

To bring up system info within the Mint Terminal, type: sudo dmidecode -t 17 and scroll to the relevant info.

For those who do not wish to install Mint to run Stressapp test:

HCI Memtest can be run via DOS or Windows. http://hcidesign.com/memtest/

An instance needs to be opened for each individual thread, covering a total of 90-95% of memory, giving the OS a little breathing room.

As an example, an i5 6600K with 8GB RAM:

4 instances with 1750MB per instance.
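To work out a similar split for your own machine, here is a small Python sketch of the arithmetic behind the 90-95% guideline above (the function name and thread count are illustrative assumptions):

Code:
def hci_mb_per_instance(total_ram_mb: int, cpu_threads: int, coverage: float = 0.90) -> int:
    # One HCI Memtest instance per CPU thread, together covering roughly
    # 90-95% of total RAM so the OS keeps a little breathing room.
    return int(total_ram_mb * coverage / cpu_threads)

# The example above: i5 6600K (4 threads) with 8GB RAM
print(hci_mb_per_instance(8192, 4))  # ~1843 MB at 90%; the example rounds down to 1750 MB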

NOTE: Version 5.0 notes state that it's 30% faster than previous versions. For testing densities beyond 16GB - it's recommended you use 5.0 Pro.

http://hcidesign.com/memtest/

Stability Results

Please submit results with the following format.

GSAT Results
For the sake of simplicity, submitted results will only record primary timing sets, but feel free to show secondary and tertiary timings within the screenshot.
Linux Mint's Stressapp test needs to be run for a minimum of 1 hour by typing stressapptest -W -s 3600 in the Terminal.
To take a screenshot in Terminal type: gnome-screenshot

HCI
HCI considers 1000% coverage to be the 'gold standard'; however, for larger densities this can be time consuming. A minimum coverage of two laps (200%) is required to be added to the table for HCI at densities over 16GB. 16GB or less requires a minimum of four laps (400%).


Example:

Silent Scone--i56600K @4.6/4.3---3000Mhz-C15-16-16-39-2T----1.37v---SA 1.05v---Stressapptest----1 Hour
Or
Silent Scone--i56600K @4.6/4.3---3000Mhz-C15-16-16-39-2T----1.37v---SA 1.05v---HCI 1500%

NOTE: This is not a leaderboard, as it is not a benchmark. This thread's main purpose is to discuss information and various results, and to gauge what is possible between different configurations, DIMM capabilities and CPU samples. Results are welcome all the way up the frequency spectrum. If it's obtainable, it should be posted!

It should go without saying that general system and CPU stability should be gauged via the suggested means before attempting an outright memory stability test.

I will organise the results at some point this weekend (as well as post some of my own) and update whenever I get time.

https://docs.google.com/spreadsheets/d/1xwlVy-ZL1o_59Z7A8th6iHXtXEYnDgrseT-VxAapBWU/pubhtml?widget=true&headers=false

Have fun!
 
#17,601 · (Edited)
I know it's toasty, but I got these stable even with all subtimings lowered, and even at 57°C:
3600 14-15-15-32 - 1.36V
4000 16-16-16-34 - 1.41V
4200 16-16-16-34 - 1.48V
4266 16-16-16-34 - 1.52V
4400 17-17-17-36 - 1.49V
Except for tREFI; that one alone doesn't like high temps, and even 20000 is not stable.

Here is the proof:


So you're saying that if I cool these DIMMs to 30°C, they will become stable at 1.40V?
I think G.Skill knows very well that not everyone will have extreme cooling. I even have a 120mm fan pointed at them, so it's certainly not the worst possible cooling.
I always thought they add +0.05V just to avoid RMAs. I could RMA this kit no problem, as it doesn't do what it claims.
But I know very well that 4000 CL16 at 1.40V is hard to run. The best ones maybe do 1.38V, the worst ones like mine 1.41V with semi-stability at 1.40V, which I guess for G.Skill is good enough to pass inspection.
 
#17,602 ·
So your conclusion from buying 1 kit is that all 4000c16-16-16 kits are bad? Not sure of your point?

I have a 4800c18 kit that does not do XMP, so what. Life goes on. I get sharing your experience, but it seems like you are on a crusade against G.Skill 4000c16-16-16 kits. Maybe it is me. Who knows, really who cares ;) lol why am I posting, Oh I work for G.Skill ;)

I have a 4000c16-16-16 kit also. So if it runs XMP stable then what?

The post I was replying to is now deleted?
 
#17,604 ·
If yours also doesn't run XMP stable, that already proves G.Skill made a poor decision going with 1.40V. There are actually some people who don't bother to set RAM manually; XMP is intended for them, after all.
XMP is not guaranteed to work for all hardware configurations. I had some Viper Steel 4400C19 sticks (2x8GB), and they did not boot just by setting XMP on my XI Apex/9900K setup. I got them stable at 4200C16, though. The IMC plays a big role here, and so does the motherboard.
 
#17,605 ·
I had the same Viper, and it worked even on a 10600K; it just needed 1.45/1.30V SA/IO. I don't think it's the IMC or mobo that is preventing me from running 4000 CL16 at 1.40V.
Like I said, G.Skill doesn't test Karhu for 10h, they just run a quick RunMemtestPro 4.0 (with the same voltage as XMP, not any lower, mind you) and call it a day.
I'm downloading this RunMemtestPro 4.0 right now.
 
#17,606 · (Edited)
Also @Waspinator you are not using XMP if you want to get technical. You are changing the voltages on IO and SA. 1.15 is pretty low for 4000c16-16-16. When I enable XMP it sets them around 1.4-1.45 on z490 Apex. I was going to test XMP but it really doesn't matter if they work at XMP for me and not for you as each kit and system will vary some. And for the $$ I would recommend 3200c14 kits if $$ is an issue. These high end/bin kits are toys/luxury in my opinion.
 
#17,607 ·
Like I said, G.Skill doesn't test Karhu for 10h
Who wants to run a memtest for 10 hours? At that point heat may be the problem. Are you one of those guys who runs Prime95 for 24 hours too?
 
#17,608 ·
This is how stable 4000 16-16-16-36 1.40V XMP really is:

People need to know this is actually a 1.45V kit, otherwise you get expectations that are too high. 3600 14-15-15 at 1.45V, on the other hand, could easily be marketed as 1.40V; even this 4000 kit does it at 1.36V.
52°C DIMM temp...
There lies your problem.
XMP is working fine.
With B-Die, anything over 40°C gets random errors.
50°C+ is asking for errors.
Increasing voltage does increase temperature tolerance; that's why you got stable with higher voltage.
 
#17,609 · (Edited)
Who wants to run a memtest for 10 hours? At that point heat may be the problem. Are you one of those guys who runs Prime95 for 24 hours too?
It just came from my experience. I didn't think heat might actually cause errors, but I get your point now. During normal work/gaming they're at 40°C anyway.
These settings errored the latest, all at 4133 CL16:
tWR 10 - error after 8h 42min
tWTR_L 7 - error after 11h 9min
tREFI 20000 - error after 11h 59min
It might be that tWR 10 and tREFI would be stable outside stress testing, but what is the point in stress testing then? I could probably get away with just Prime95 Large stability where I get tWR 9, tCWL 12 and tREFI 65535. But with these settings I just can't be 100% sure, so I just run Karhu until it is error free, even up to 18-24h.

Yes, I actually did set my CPU to be Prime95 stable, 5.0/4.9 GHz 1.25V. But all I did was run 10-40min (depending on temps) and add +0.03V; there's no need to run 24h to nail it down to +/-0.01V. I still intend to test with a less power-hungry stress test to get to 5.1/5.1 GHz; I need to try ASUS RealBench.

Increasing voltage does increase temperature tolerance; that's why you got stable with higher voltage.
I'm happy with this answer.

The IMC plays a big role here, and so does the motherboard.
I was going to test XMP but it really doesn't matter if they work at XMP for me and not for you as each kit and system will vary some. And for the $$ I would recommend 3200c14 kits if $$ is an issue. These high end/bin kits are toys/luxury in my opinion.
Actually, now that you mention each system being different, I remembered I had to raise the voltage by 0.01-0.03V going from a Z490 Tomahawk to a Z590 Gaming Carbon. Specifically at 4000 16-16-16 I had to go from 1.37V to 1.40V.
I think by now we can agree, conclude, and finally end this debate: they should have just gone with 1.45V. I always thought they were smart enough to do that just to avoid RMAs and people like me complaining, but I guess not.
I agree that 3200 CL14 and 3600 CL16 are the best buys. But I already had experience with them in single rank, and 3600 14-15-15 was so much better: practically everything worked, even 4000 14-14-14 was almost stable. I just can't settle for anything but the top. It's also true there's not a kit out there that couldn't be better.
But now, with people posting 4400 CL17 OCs with these dual rank kits, it makes me think again:
 
#17,610 ·
XMP is 4000 16-16-16-36 1.4v. You tested 16-16-16-34, maybe have another try with tRAS=36?
 
#17,611 · (Edited)
4000 16-19-19-39 at 1.40V is so far 8h stable, and I already tested 4000 20-16-16-40 yesterday at 1.36V. So it might simply be tRFC, or it's still tCL and just variance due to the high heat.
Also, board variance does play a role in this; looking at my voltage table, exactly 4000 16-16-16 needed the most added voltage going from Z490 to Z590. It's the same for 4400 CL17, which I will run.
Z490/Z590:
4000 16-16-16-34 - 1.37/1.40V
4000 16-19-19-39 - 1.37/1.38V
4000 16-20-20-40 - 1.37/1.41V
4000 20-16-16-40 - 1.36/1.34V
Plus there's variance in which stress tester you are using, and the already mentioned temperature. That's why you don't make XMPs like this. I'm sure it was made just to attract customers like me, who thought that in the end they actually binned B-Die so well it would work at 4000 CL16 1.35V.

bscool: It would still be interesting if you tried at least 1.35V. If it BSODs or even doesn't boot, like for me (you have an ASUS board and lower temps?), then it proves all my points so far. In the best system it has to work at 1.35V to be stable in all systems at 1.40V.
Then we can end this; I'm sure you've all had enough of my whining.
 
#17,612 ·
@Waspinator
It's not a given that the bin is better than the XMP says, but 4000 CL16-16 at 1.4V is no bad bin.
I had a 3200 CL14 kit that doesn't work at CL16-16 1.5V over 3600MHz; it needs CL16-18 to raise the frequency. One kit does 4000 CL15-15 at 1.5V well but won't boot anything over 4400.
Kits which do 4600MHz or more truly stable (boot/training, GSAT and memtest stable over a bigger temperature range) are rare.
 
#17,613 ·
The big variation in 2x16GB B-die binning is the stable frequency they can achieve. Someone I know bought the 4000C14 2x16GB kit; it only boots up to 4700, unlike the 4000C17 2x16GB kit which boots 4800. In terms of voltage, the 4000C14 kit needs 1.44V for 4000 15-15 in Memtest. My personal kit only boots up to 4600; 4666 boots rarely. The max GSAT stable I can achieve is 4400 16-17 at 1.56V, and even though I can boot 4600 16-18 too, it does not boot cleanly most of the time. I would like to test the 4400C17 kit.
 
#17,614 ·
Of five 3200 CL14 kits, only two would boot 4600, and of those two only one does 4600 stable; the other is more like 4400-4500MHz stable.
My kit can do 4600 no matter how the sticks are installed; for 4666 it's important that the better stick is in slot 1.
4666 is possible on Auto, with a max boot of 4770MHz, but for real stability it's better with manual settings; with those I can also do 4700 CL17-17 at 1.54V,
but my kit is also "only" 4600 all-stable, the rest is won by settings.

I think that what the kit can do regardless of how the sticks are installed is close to its truly stable out-of-the-box frequency.
 
#17,615 · (Edited)
musician
menko2

Can you two test the lowest stable voltage at 4000 16-16-16-36?

EDIT:
It's fine now actually, Karhu 9h+ at XMP 1.40V. I even left SA/IO at Auto 1.25/1.20V as suggested. And that's at 53°C.
For now I don't know why it errored before: it could be just a temperature problem, a tRAS problem, or that higher SA/IO makes them more stable. I need to do at least 2 more runs to figure that out.
I also noticed the board sets 1.392-1.396V for 1.40V and 1.384-1.388V for 1.39V, so I'm not actually running 1.40V on XMP, but about 0.005V less.
On 1.39V it errored after 3min.

I would never run RAM at 1.40V if it errors at 1.39V, I would set it manually at 1.41V.
Even I, having only played with RAM for 6 months as a hobby, know that with voltages you need at least 0.05V of leeway. VCCIO too: in one case it needed 1.11V one day and 1.16V two days later, and in another 1.15V one day and 1.11V a month later. So minus points for G.Skill here; they need to do better.
I doubt that at the factory they only got them stable at 1.40V like me; I guess they had a better board and better cooling and got them stable at around 1.38V. It's all just guesses of course, I will never know what was going on here. But basically they screwed up, testing at, let's say, 1.38V and setting a 1.40V XMP.
And we all know no B-Die does 4000 16-16-16-36 at 1.35V, so this XMP was a fail from the start. The same goes for 4000 16-19-19-39 and probably also those 1.50V kits. Actually, I was too naive and thought they had binned B-Die to the absolute limits in the end. Even if they had achieved that, the kits would cost way more than 400€.
 
#17,617 ·
Currently trying to decide on either of these kits from G Skill:
F4-4266C16D-32GVK 16-19-19-39 1.50v
F4-4266C17D-32GVKB 17-18-18-38 1.50v
They look binned very similarly. Any suggestions?

Thanks
I bought F4-4266C17D-32GVKB, should be here tomorrow. I'd definitely go for that over the 16-19 kit.

Luumi is running the same kit at 4700 18-18-18 1.48v, hoping for equally good luck.

Was looking at F4-4400C17D-32GVKs but figured I'd save a few bucks.
 
#17,618 · (Edited)
16-19-19-39 1.50V is harder to run; on other similarly priced kits it needs around 1.51-1.52V. The equivalent 4266 17-18-18-38 should be 1.45V, but I got to 1.44V even with a 3600 kit, so even at 1.45V it would be worse than the 16-19.
Always go for lower tCL with B-Die, tRCD/tRP is not hard to make equal or at least +1.
This is all theory of course, I don't know if G.Skill does some weird binning and 17-18 is actually better.
But the 20-50€ lower price for 17-18 also suggests it's a worse bin.

And 4266 16-19-19-39 1.45V was already one of top bins for single rank, probably similar to 4400 16-19-19-39 1.50V. Surpassed maybe only with 4600-4800 18-22-22 kits.
 
#17,619 ·
It looks like the 4266 17-18-18-38 1.50V is on my motherboard QVL. I thought the 4266 16-19-19-39 1.50V could be higher binned but, like you said, harder to run. I'm leaning more towards the 4266 17-18-18-38 since it's more compatible.

Running on an EVGA Z490 Dark MB. Right now I can run the 16GB F4-4400C16D-16GVK at 4200 15-18-18-35 1T at 1.51V. I will run the 4400 16-19-19-39 XMP profile once I pump up VCCIO and VCCSA initially.

I'm thinking either of these 4266 kits will OC well, all binning aside. I thought tRCD was harder to hit since tCL scales with voltage, more or less. Is this not the case?
 
#17,620 · (Edited)
21 21 69 69 64 64 65 65 4 4 7 7 7 7

Manually input those values after a fresh boot with everything on Auto and Latency Mode set to Dynamic, then reboot. See if you can POST. If not, you'll likely just have to keep rebooting until you see the RTLs train to something similar to what I suggested.
I can't get exactly what you suggested, but close enough I think:
64/66/66/66
7/9/8/8

Thanks for the tips, I'm now getting 43ns instead of 44.5ns.
Still miles to go to get near 40ns.

I care about gaming, and latency seems to play a big role in games.
 