Read lots of hit & miss stuff regarding the "upgrade" from 13th to 14th gen.
Some say the microcode is a bit different / improved on 14th gen, and that benchmarks and the mins & lows in some games are definitely more stable on 14th gen despite "lower" bins or SP compared to, say, a 13900KS, with even lower temps overall.

Can anyone who actually switched from 13900KS to 14900K/KF confirm that the upgrade has distinguishable effects? Is it worth the trouble? :))
Yeah, I know... it's the lottery also...

That's my current 13th KS:

 
[screenshots of the 13900KS attached]
Totally not worth it unless you are chasing the last 100-200 MHz. For gaming, having a good memory controller that can run DDR5-8000+ is way more important.
 
Do 14th gen chips have higher core deltas than 13th gen? My 13900KS had an 8°C delta after delidding and my 14900K had 14°C after delidding. Even after cleaning the surfaces thoroughly, redoing the liquid metal, and very, very lightly sanding the die, it's still around 12°C. I used the Supercool direct-die block (14th gen version) on both setups and I have two 360mm rads.
 
The 14900 is literally a rebranded 13900 that's been binned.
What would happen if you only OC’d your ring ratio? Is it even worth it?
Sure, but if you don't overclock your P-cores and E-cores as well, you're wasting performance, since Vcore is shared across all three domains (P-cores, E-cores, and ring).
 
Ring clock up to 50x is virtually free if your Vcore is in the 1.30V+ range, so test it and take advantage of it. No reason not to capitalize on it.
You could probably even test up to 52x if your chip's got a good ring bin.

Just think of the PLL voltages as something like the decimal places of Vcore. You can try increasing them to see if they help with stability.
In my experience, only increasing the Core PLL actually showed an effect. The rest provided nothing.
Sorry to @ you again, but when it comes to Hogwarts Legacy for stability testing, I'm not seeing a specific "building shaders" screen at boot-up.

My system seems to be back to as stable as I can get it for now - days of varied gaming load and usage, Geekbench run 6x back to back with no problems and high scores, R23 completing correctly, Hogwarts working fine for 5-10 minutes of running around after fully clearing my shaders, and benchmark scores consistently high / as expected.

I've gone through the process of disabling & deleting the shader cache for NVIDIA + ProgramData/Hogwarts as per the instructions on their own website too - and while I can definitely tell shader compiling is happening IN GAME (i.e. I get into the world, run around, and things stutter like mad as I go through new areas before eventually becoming normal and smooth), I seem to be missing the actual menu screen saying "building shaders" before even getting into the game, which I know The Last of Us has. Or is the actual test just being in-game and running around after clearing shaders?

Also, I'm trying to find alternatives for stability testing the Core PLL / VCore changes, but I can't be sure Hogwarts is really hammering things, so I wanted to try TLOU... until, going back 50-something pages to where you and @tps3443 (pages 969-970) were talking about TLOU and Hogwarts, it seems TLOU can't really be used for stability testing anymore because the devs made shader compilation easier?

I'd love to avoid spamming R23 to test Core PLL / VCore tuning.. :X
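For reference, here's a minimal batch sketch of the NVIDIA shader-cache clearing step mentioned above. The folder paths below are just the typical NVIDIA cache locations on Windows, not something taken from the game's own guide, so treat them as assumptions; the ProgramData/Hogwarts part should still follow the developer's instructions.

@echo off
rem Hypothetical sketch: clear the usual NVIDIA shader caches so the game
rem has to recompile shaders on the next launch. Close the game and any
rem GPU-heavy apps first. Paths are the typical defaults, adjust if needed.
rd /s /q "%LOCALAPPDATA%\NVIDIA\DXCache"
rd /s /q "%LOCALAPPDATA%\NVIDIA\GLCache"
rd /s /q "%PROGRAMDATA%\NVIDIA Corporation\NV_Cache"
echo NVIDIA shader caches cleared.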
 
My current approach for stability testing is having Geekbench 6 running in a loop while compiling Chromium. This method has worked really well at finding all kinds of instability, even when the system appeared to be stable in OCCT / Prime95. It comes really close to shader compilation, I'd say.

If you are interested I can post the batch files I'm using for stability testing.
 
I don't have the paid version of Geekbench so I don't think I can use batch scripts to make it loop, but some insight into 'compiling chromium' could be good?

I'm trying to find things that stress the full-load voltage without, well, dealing with 280+ watts in R23 and the like (since we all know R23 passing doesn't really mean full stability). Just something that triggers instability if I drop my ACLL / VCore, so I can tune the Core PLL.
 
Try Linpack on the CLI, or LinX with full RAM; if you are stable, please tell me.
 
I don't have the paid version of Geekbench 6 either, otherwise I'd be looping the Clang test.

Set up the Chromium sources and the required tools following this guide. This should take around 30 minutes; most of the time is waiting for VS Community to install and git to pull the sources.

Afterwards rename the two text files I've attached to *.bat. You might need to adjust paths if you changed the installation directories. You can either run the scripts individually or - as I do - in parallel. Let them run and check WHEA for errors. They usually throw pretty quickly if your overclock is not stable:

[screenshot of WHEA errors]

I had very good success testing my overclocks this way, and it doesn't pull 400+ watts even when I'm at 5.9 GHz all-core on my 13900KS.
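The attached .bat files themselves aren't visible here, but a minimal sketch of what a compile-loop script like this could look like, assuming the standard depot_tools workflow from the linked guide (gn + autoninja) and a checkout in C:\chromium\src - both assumptions, so adjust to your own setup:

@echo off
rem Hypothetical Chromium compile loop for stability testing (a sketch,
rem not the author's attached script). Assumes depot_tools is on PATH and
rem the sources were checked out to C:\chromium\src per the build guide.
cd /d C:\chromium\src
:loop
rem Generate the ninja files (no-op if already up to date)
call gn gen out\Default
rem Build the main browser target
call autoninja -C out\Default chrome
if errorlevel 1 echo %date% %time% build failed - possible instability >> "%USERPROFILE%\compile_loop.log"
rem Wipe the build outputs so the next pass recompiles everything
call gn clean out\Default
goto loop

Running something like that alongside a Geekbench loop (or repeated manual GB6 runs if you don't have Pro) is presumably close to what the two attached scripts do when run in parallel.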
 


Seems good, I'll try to set it up next time I get to tweaking things. I really want to dial things in and make sure this current setup is a stable profile before touching anything else. Geekbench passed 6x in a row earlier at the current settings, but I know that on this +0.050 offset for VF point 10 (native 1.474) I had an error at around the 8th or 9th run a few days back (can't remember if it was a program error or a corrected WHEA). My base all-core VCore is up 1 notch since then though, so I'm curious whether it was actually GB6 requiring more all-core VCore than R23, rather than one of the OCTVB frequencies erroring out.

What's the highest watts/amps peak you remember seeing doing this, or GB6, at 5.9 all-core? I notice that during the Clang test (which seems to be the hardest one) my power peaks at around 240-250W for that short stretch, but the rest of the test before that point is more like 100-150W.
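If you want to check after the fact whether it was a corrected WHEA event, a quick query of the System event log does it; this is just the built-in Windows wevtutil tool, nothing specific from this thread:

rem List the 10 most recent WHEA-Logger entries (corrected hardware errors
rem are logged under this provider), newest first, as readable text.
wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]" /c:10 /rd:true /f:text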
 
I was about to test Linpack on the command line (I have it in OCCT and have used it there before) for you, but no way on earth am I running that thing on my 5.9 profile when it already pulls 260 watts at 5.7 with 8 E-cores and HT disabled. I'm not doing 450W+ burn tests and provoking degradation just to claim "I'm stable", only to crash later because of transients. Geekbench Clang / shader compilation / Chromium compiles have turned out way better for stability testing than these senseless max-load tests.

 
250W for the Clang test at 5.9 sounds about right. That's the most stressful test but also the best for testing stability. Compiling the Lua interpreter with Clang involves a lot of multi-core processing but also a lot of transients*. Again, this comes very close to what those UE5 games are doing when they compile shaders and trigger blue screens and crashes on 13th/14th gen Intel CPUs :poop:

* my assumption: compiling involves reading source files from disk before the compiler can continue, whereas Prime95 and the other tools just put everything in memory and then blast computations non-stop.
 
You don't need to exceed the limits; it's better to keep the power limits locked and not go beyond 256W, like on my 13900KF for example... just a few cycles are enough to gauge stability, and the same goes for y-cruncher, just run it a little longer.
 
But how does that test stability at 5.9 GHz if the CPU never reaches that frequency because of the power limits?
 
Admittedly, these tests are of limited real-world value; in daily use you will never see power draw like that, and both y-cruncher and Linpack are about as far from everyday workloads as it gets. But in terms of how quickly they detect problems on both the RAM and CPU side, they are faster; and if you tell me they're of no use, I can only agree with you. Still, when you test for hours and are stable, but just a few minutes of another, extremely stressful test breaks it, then you need to ask yourself a couple of questions about the overclock you've done.
 
Nice
Maybe no degradation then ;)
Nope, SP values are based on the V/F curve recorded on the CPU at manufacture. A degraded chip would still report the same SP.
 
The issue is, higher multipliers are far more fragile than lower ones.
E.g. 59x max might need 100 cumulative hours of 300W to drop down to 58x max, but 62x max might only need 10 cumulative hours. It really depends on the chip and how strong the silicon is.

So even if you think you did not push the chip much, sometimes, those few spikes from time to time were cumulatively enough to degrade those cores.
Switching current scales roughly linearly with frequency at a given voltage, I believe (and dynamic power with frequency times voltage squared). I.e., the current in a CPU oscillating at 6 GHz is higher than at a lower frequency and the same voltage.
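For reference, the first-order CMOS approximation behind that (a textbook simplification, nothing specific to these chips):

P_{dyn} \approx \alpha \, C_{eff} \, V^{2} f \quad\Rightarrow\quad I_{dyn} = P_{dyn}/V \approx \alpha \, C_{eff} \, V f

where \alpha is the activity factor and C_{eff} the effective switched capacitance. So at a fixed voltage the switching current grows roughly linearly with frequency, and degradation mechanisms like electromigration worsen with both current and temperature.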
 