Originally Posted by pion
I'd been testing on Linux with stress-ng and stress, and 49x at 1.28 V was very stable.
But when I boot into Windows and run OCCT (large data set), I can't even get through an hour at 48x 1.31 V.
Are those Linux tests really that bad, or am I missing something?
I wouldn't call them "bad," but in this case the Linux tests simply don't provide a workload heavy enough to expose the weakness in your overclock. From poring over CPU benchmarks and overclocking threads while stability testing my own CPU, I've found OCCT to be one of the "harder" tests, along with Prime95 with AVX and any form of LINPACK. Whatever you take away from your own results, I think the author of OCCT put it very simply:
If OCCT reports an error, something weird has occurred. For instance, OCCT asked your CPU for 2+2 and got 5 as an answer, which is obviously wrong. This indicates that something is wrong with your hardware.
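The idea behind that check can be sketched in a few lines of Python. This is a toy illustration of the principle, not OCCT's actual code: hammer the CPU with work whose correct answer is known ahead of time, and treat any mismatch as a sign of hardware instability.

```python
# Toy sketch of OCCT-style verification (not OCCT's real implementation):
# do CPU work whose correct result is known in advance, then compare.

def stress_iteration(n=500_000):
    """Sum 1..n the slow way, forcing the CPU to do real work."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def run_stress(rounds=10, n=500_000):
    expected = n * (n + 1) // 2   # closed-form reference value
    for r in range(rounds):
        got = stress_iteration(n)
        if got != expected:       # a stable CPU should never hit this branch
            return f"error in round {r}: expected {expected}, got {got}"
    return "no errors detected"

print(run_stress())
```

On a stable system this prints "no errors detected" every time; on an unstable overclock, a bit flip during the summation would trip the comparison, which is exactly the kind of "2+2 = 5" event the OCCT author describes. (For what it's worth, stress-ng has a `--verify` option that adds similar result checking to some of its stressors, which makes it a stricter test than a plain load loop.)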
Speaking of which, I've recently completed a 24-hour OCCT Large Data Set test.
Settings are as follows:
Core Multiplier: 51x
Uncore Multiplier 47x
AVX Offset: 0
vCore: 1.35 V average reported (I think 1.36 V in BIOS)
LLC: 1 (Asrock Z370 Taichi)
Even though it ran the full 24 hours without an OCCT error, Windows logged 5 WHEA entries, so it's not entirely stable. I've restarted the test at 1.37 V, and as of posting, six hours into an OCCT Large Data Set run, I have 0 WHEA entries.