
Has your Raptor Lake (13/14 Gen) degraded?

  • Yes my 13th gen stability has degraded (no overclock)

    Votes: 3 5.3%
  • Yes my 13th gen stability has degraded (overclock)

    Votes: 5 8.8%
  • No my 13th gen stability has not degraded (no overclock)

    Votes: 5 8.8%
  • No my 13th gen stability has not degraded (overclock)

    Votes: 16 28%
  • Yes my 14th gen stability has degraded (no overclock)

    Votes: 5 8.8%
  • Yes my 14th gen stability has degraded (overclock)

    Votes: 1 1.8%
  • No my 14th gen stability has not degraded (no overclock)

    Votes: 8 14%
  • No my 14th gen stability has not degraded (overclock)

    Votes: 14 25%
I will run R15 with my baseline PL1/2 = 265W, 420mm AIO, and undervolt to see if it's stable. I will capture HWiNFO to show the voltage during the run. Maybe that will be interesting? Will it crash with my R23 settings? Will it run hotter?

That's all I can do.

China has billions of people, so if a small group of overclockers is reporting something (through a language barrier), being denied an RMA by a Chinese retailer, etc., I don't know how that has any bearing on the actual debate.

It sounds like there is a problem with UE5 compatibility and, as we know, a bunch of junk-bin 14th gen chips were released.

But 13th gen was generally reliable. If problems are coming to light now, after the adoption of UE5, wouldn't you say it probably has something to do with that?
Those games run fine with a stable CPU. It's not the games. I have Robocop, which is UE5. It'll insta-crash with my 14900KS at defaults, but that game never crashes (along with R15) with the profile I run on it daily.
 
Discussion starter · #22 ·
My KS keeps crashing. I'm now at 1.39V at stock speeds. I kept power under 300W. Max load in R23 (for stability testing) was 270-280W. Temps never approached 90°C.

When the new AM5 CPUs release, I think I'm going back to AMD.
 
  • Rep+
Reactions: JCPUser


Oh, this looks fun. Not as fun as Chinese translations, but:

2. Gate oxide breakdown (destruction due to overvoltage)

When overvoltage is applied, the oxide film on the gate ruptures. This happens slowly, not immediately.

There is an opinion that it occurs because of the ~1.6V applied by TVB and preferred-core boost, but in testing with a Comet Lake G5900, oxide breakdown reportedly did not occur even at 1.7V. Deterioration occurred only at 1.75V, and death occurred at 1.8V. However, since that also took 132 hours, this explanation is not convincing. And Intel guarantees normal operation even at 1.72V!

When oxide breakdown occurs, more voltage must be applied to maintain the same clock. The chip deteriorates, as they say. In English this is called degradation.

This phenomenon is what's being called Intel 13th/14th gen CPU degradation.
 
My KS keeps crashing. I'm now at 1.39V at stock speeds. I kept power under 300W. Max load in R23 (for stability testing) was 270-280W. Temps never approached 90°C.

When the new AM5 CPUs release, I think I'm going back to AMD.
1.39v set in the BIOS?

Edit: and like I've said before, unless Arrow Lake is a home run, I'll be heading back to AMD too this time next year when the Zen 5 X3D releases.
 
So far no noticeable degradation yet (knock on wood). Then again, I have hardly benched compared to a lot of other folks here. I just found somewhat modest stable clocks and stayed there. Funny enough, my original "goal" for this 13900KS was 6GHz all-core. Hah. What did I know.

My only issue is temps, but I'm a bit anxious about doing another reapplication of LM because of my misadventures last time. Might just revert to stock and stick with that for now.

I'm also considering going back to AMD next gen, but I had strange issues with my 5950X in certain games, such as Cyberpunk having a 1-2 second period after exiting any menu where my FPS would drop to 10 or so before jumping back up. That, and my Task Manager was incredibly sluggish. Simply switching platforms to my current one without an OS reinstall actually fixed everything.

Might be worth not having these kinds of worries, though. I don't really want to go through another gen of having to delid, if I have a choice.
 
  • Rep+
Reactions: Falkentyne
So far no noticeable degradation yet (knock on wood). Then again, I have hardly benched compared to a lot of other folks here. I just found somewhat modest stable clocks and stayed there. Funny enough, my original "goal" for this 13900KS was 6GHz all-core. Hah. What did I know.

My only issue is temps, but I'm a bit anxious about doing another reapplication of LM because of my misadventures last time. Might just revert to stock and stick with that for now.

I'm also considering going back to AMD next gen, but I had strange issues with my 5950X in certain games, such as Cyberpunk having a 1-2 second period after exiting any menu where my FPS would drop to 10 or so before jumping back up. That, and my Task Manager was incredibly sluggish. Simply switching platforms to my current one without an OS reinstall actually fixed everything.

Might be worth not having these kinds of worries, though. I don't really want to go through another gen of having to delid, if I have a choice.
I actually get a brief stutter when I exit a menu in Cyberpunk on my Intel machine. I think it was caused by one of the more recent Cyberpunk updates though, as it didn't do it on this machine a few months ago.
 
Discussion starter · #27 ·
1.39v set in the BIOS?
Yes 😬

Edit: and like I've said before, unless Arrow Lake is a home run, I'll be heading back to AMD too this time next year when the Zen 5 X3D releases.
If the mobile Core Ultras are anything to go by, and Intel is being truthful when they say the E and P cores are fundamentally unchanged, the node shrink is the only source of a performance increase, and HT is removed, then Arrow Lake will not be competitive with Zen 5.

I loved Raptor Lake. It's such a clock monster, but I knew this small amount of silicon was not going to sustain that much wattage for very long. I tried to play it closer to the safe side, but I guess not safe enough?
 
Yes 😬



If the mobile Core Ultras are anything to go by, and Intel is being truthful when they say the E and P cores are fundamentally unchanged, the node shrink is the only source of a performance increase, and HT is removed, then Arrow Lake will not be competitive with Zen 5.

I loved Raptor Lake. It's such a clock monster, but I knew this small amount of silicon was not going to sustain that much wattage for very long. I tried to play it closer to the safe side, but I guess not safe enough?
There are some posts on the Korean forum where people suggested the same conclusion I came to a while back: that the single-core boost is cratering the boosting cores with 1.45v+.
I mentioned this on WeChat:

With his permission: (which I didn't ask to post)

Say a 14900K has an all-core-turbo 57x VID_native of 1380 mV and uses an AC loadline (ACLL) of 1.1 mΩ.
When playing multi-core games like Cyberpunk 2077, it will use a predicted current > 300 A, say 350 A,
because the 8P+16E cores are not allowed to enter the C6 state by the Windows thread scheduler.
That means VID_after_ACLL = 1380 + 1.1 * 350 = 1765 mV.
That exceeds the VMAX of 1720 mV.
During the light-load phase of a frame-time cycle (5-10 ms), most cores are idle in the C1 state and the actual current is relatively low, say 100 A.
That means instantaneous Vcore = 1765 - 1.1 * 100 = 1655 mV.

And on the other hand, there is the TBM 3.0 frequency and VID_native problem.

TBM 3.0 depends entirely on the active P-core count.
In other words, it doesn't care whether 0 E-cores or 16 E-cores are active.
Intel has shown a scenario showcasing its E-cores where a background render program (Blender or something?) was using 16 E-cores, while the P-cores were spared to deal with foreground stuff like user interaction and using Edge and Discord or something.
So what happens if the 16 active E-cores are running a heavy background workload and 1-2 active P-cores are running a light foreground workload?
The CPU will run at the TBM 3.0 frequency and use its ~1500 mV VID_native, with both the predicted and actual current pretty high and mainly generated by the 16 active E-cores!
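Side note (my addition, not part of the WeChat quote): the loadline arithmetic above is simple enough to sanity-check in a few lines of Python. The 1380 mV VID, 1.1 mΩ AC loadline, 1720 mV Vmax, and the 350 A / 100 A current figures come straight from that example; the helper names are made up.

Code:
# Toy model of the AC loadline (ACLL) arithmetic from the quote above.
# All numbers are the illustrative values used in that example.

VID_NATIVE_MV = 1380   # 57x all-core VID requested by the CPU (mV)
ACLL_MOHM = 1.1        # AC loadline slope (mOhm)
VMAX_MV = 1720         # Vmax limit cited in the quote (mV)

def vid_after_acll(vid_native_mv: float, predicted_current_a: float) -> float:
    """Voltage requested from the VRM: native VID plus ACLL * predicted current."""
    return vid_native_mv + ACLL_MOHM * predicted_current_a

def instantaneous_vcore(requested_mv: float, actual_current_a: float) -> float:
    """Voltage seen at the die once loadline droop from the real current is applied."""
    return requested_mv - ACLL_MOHM * actual_current_a

# Heavy phase of a frame: all 8P+16E cores awake, ~350 A predicted.
requested = vid_after_acll(VID_NATIVE_MV, 350)
print(f"Requested voltage after ACLL: {requested:.0f} mV (Vmax is {VMAX_MV} mV)")   # 1765 mV

# Light phase of the same frame (5-10 ms): cores sitting in C1, only ~100 A flowing.
print(f"Instantaneous Vcore at 100 A: {instantaneous_vcore(requested, 100):.0f} mV")  # 1655 mV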
 
There are some posts on the Korean forum where people suggested the same conclusion I came to a while back: that the single-core boost is cratering the boosting cores with 1.45v+.
I mentioned this on WeChat:

With his permission: (which I didn't ask to post)
My CPU tops out at 1.34v with a -0.080v undervolt. I suppose it would be 1.42v without it.

Low bins would be 1.45v+ without an undervolt.

Is the going hypothesis that single-core boosts above 1.45v are doing the damage? If so, that would be low-bin Ks at 6.0 and KSs at 6.2, or anyone who tried overclocking above 6.0 with a K.

If this is correct, the problem is low-bin Ks and KSs.

The question, though, is: are there any realistic workloads that need two fast cores boosted to 6.0? Don't all games now use 8+ cores? Does boosting two cores above 6.0 have any real-world benefit besides single-thread benchmarks?
 
My CPU tops out at 1.34v with a -0.080v undervolt. I suppose it would be 1.42v without it.

Low bins would be 1.45v+ without an undervolt.

Is the going hypothesis that single-core boosts above 1.45v are doing the damage? If so, that would be low-bin Ks at 6.0 and KSs at 6.2, or anyone who tried overclocking above 6.0 with a K.

If this is correct, the problem is low-bin Ks and KSs.

The question, though, is: are there any realistic workloads that need two fast cores boosted to 6.0? Don't all games now use 8+ cores? Does boosting two cores above 6.0 have any real-world benefit besides single-thread benchmarks?
From my experience, what I noticed with my 13900KS and 12700KF: when you are barely doing anything on the desktop, the CPU flicks around at its boost speeds; otherwise, under any kind of load it drops down to the all-core frequency. Only if you use TVB +1 or +2, and temps allow it, will it boost the all-core up one or two bins doing stuff like gaming. To me it's more of a marketing gimmick. The only time I noticed a full single-core load at 6.0 was the CB23 single-core test. Once temps hit 70°C it drops down to 5.6. I always just sync all cores to what I want, and that's as fast as it goes. No high voltage/frequency flicking. Not noticeable to me anyway.
 
My CPU tops out at 1.34v with a -0.080v undervolt. I suppose it would be 1.42v without it.

Low bins would be 1.45v+ without an undervolt.

Is the going hypothesis that single-core boosts above 1.45v are doing the damage? If so, that would be low-bin Ks at 6.0 and KSs at 6.2, or anyone who tried overclocking above 6.0 with a K.

If this is correct, the problem is low-bin Ks and KSs.

The question, though, is: are there any realistic workloads that need two fast cores boosted to 6.0? Don't all games now use 8+ cores? Does boosting two cores above 6.0 have any real-world benefit besides single-thread benchmarks?
I believe this was mentioned in my quote.
It's not just the P-cores.
It's the E-cores.
Because there is no DLVR, the E-cores generate a massive amount of predicted current.
So any time there is some load on the E-cores, the Vcore shoots sky high, just as if the P-cores were being used (if I'm understanding this correctly).
Remember, this can happen FAR faster than it can appear on sensors.
 
This is what I get after 10 minutes of R23. This is with just a -0.080v undervolt.

Here all the E-cores are running, and there does not appear to be a massive voltage spike.

I will be honest: I believe the majority of problems are low-bin parts combined with overzealous/unintentional overclocks. Some small percentage may have degraded (had weak traces from the factory); everyone else is just pissed off they got a low bin.

My thoughts on overclocking 14th gen:

*Single-core boost has no real-world benefit.

*Multi-core boost (for sustained workstation loads, which is what this computer was built for): there's no advantage in going above 5.6 just so I can finish jobs 3% faster. The important thing is keeping it relatively cool and quiet.

*For gaming, using TVB, HT off, and higher clocks might have a small benefit, but I haven't run into a game where I realistically need 3% more FPS. Memory overclocks offer bigger improvements. When I find a game that needs more performance, my first stop is the GPU, and my second stop is memory.

*When I was in my 20s I probably wouldn't have considered any of the above and would have tried to overclock with whatever cheap tower cooler I was using. I probably would have tried to find the "max" just a couple of times, then wondered if I had degraded it. In my old age I avoid that kind of temptation because I don't want to wonder if I damaged it for no real-world gains.



 
The question, though, is: are there any realistic workloads that need two fast cores boosted to 6.0? Don't all games now use 8+ cores? Does boosting two cores above 6.0 have any real-world benefit besides single-thread benchmarks?
Yes, there is. The Windows scheduler is aware of Turbo Boost Max 3.0 and Intel's scaled turbo levels, and it also knows about the separate per-core PLLs (P-cores can clock independently).

In a nutshell, the balanced power plan sorts the cores by max clock speed. It then schedules tasks primarily to the fastest P-cores; if HT is on, both logical cores get loaded heavily, and the lowest-priority tasks are offloaded at small intervals to other P-cores or E-cores. Someone running one of these CPUs stock in balanced will mostly have one of the favored cores handling the bulk of the system load at its maximum clock speed, with the rest lending support at small intervals to keep that one favored core at max clock most of the time.

What would be the point of having the CPU hit 6.0GHz single-core for best responsiveness if the OS then spread the load around, keeping most cores active, which prevents the CPU from reaching 6.0GHz?

Of course, multi-core loads will use all cores, but for everyday procrastination the Windows scheduler will try to take advantage of the single-core boost. This behavior can be changed by having all cores boost to the same clock speed (the scheduler will spread the load), by changing to the power saver plan (the scheduler spreads the load and tries to keep Vcore low but sacrifices responsiveness), or by using the high performance plan (the scheduler tries to get the best throughput: every task is handed to any available core ASAP, but that thrashes TBM 3.0).
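Purely as an illustration (my own sketch, not anything read from the real scheduler), the "sort cores by max clock, hand work to the favored cores first" behavior described above can be modeled in a few lines of Python. The core names, clock speeds, and task list are made up.

Code:
# Toy model of the described balanced-plan heuristic: sort cores by their max
# boost clock and assign tasks to the fastest (favored) cores first.
# Core names and clocks are illustrative only, not read from a real system.

cores = (
    [("P0*", 6000), ("P1*", 6000)]               # two favored cores (TBM 3.0)
    + [(f"P{i}", 5700) for i in range(2, 8)]     # remaining P-cores
    + [(f"E{i}", 4400) for i in range(16)]       # E-cores
)

def schedule(tasks):
    """Place tasks on the highest-clocked cores first, spilling down as needed."""
    by_speed = sorted(cores, key=lambda c: c[1], reverse=True)
    placement = {name: [] for name, _ in by_speed}
    for i, task in enumerate(tasks):
        name, _ = by_speed[i % len(by_speed)]
        placement[name].append(task)
    return placement

# A light desktop load lands almost entirely on the favored cores, which is
# what lets them hold the single-core boost clock.
light_load = schedule(["browser", "discord", "music"])
print({core: tasks for core, tasks in light_load.items() if tasks})

Under the all-core-sync or high performance behavior described above, the same tasks would instead be spread across the slower cores too, which is exactly what defeats TBM 3.0.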
 
Some clarification about testing for stability with Cinebench R15: the version must be 15.0.3.7. There are versions 15.0.3.8 and 15.0.3.8 Extreme, but they don't work for stability testing. Also, the bench should not be run through BenchMate. Watch for WHEA errors in HWiNFO. If there are crashes or errors, the spec PL limit of 253W should fix it. The spec IccMax limit doesn't help in Cinebench R15.
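HWiNFO is the usual way to catch WHEA errors during these runs; as a rough cross-check, you can also pull recent WHEA-Logger entries out of the Windows System event log. Below is a small Python wrapper around wevtutil as a sketch of that idea (the query string, count, and output handling are just an illustration, nothing specific to R15):

Code:
# Rough cross-check for WHEA errors after a stability run: query the Windows
# System event log for the newest WHEA-Logger entries via wevtutil.
import subprocess

WHEA_QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"

def recent_whea_events(count: int = 10) -> str:
    """Return the newest WHEA-Logger events from the System log as plain text."""
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{WHEA_QUERY}",
         f"/c:{count}", "/f:text", "/rd:true"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    events = recent_whea_events()
    print(events.strip() or "No WHEA-Logger events found.")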
 
Some clarification about testing for stability with Cinebench R15: the version must be 15.0.3.7. There are versions 15.0.3.8 and 15.0.3.8 Extreme, but they don't work for stability testing. Also, the bench should not be run through BenchMate. Watch for WHEA errors in HWiNFO. If there are crashes or errors, the spec PL limit of 253W should fix it. The spec IccMax limit doesn't help in Cinebench R15.
Got something more to back up your claims?
 
Because for some reason 15.0.3.7 catches more instabilities than 15.0.3.8, and for some reason BenchMate makes it run more stable than it really is. Everyone's free to test however they like; I'm just sharing what's easier for stability purposes.
 