

Registered · 61 Posts · Discussion Starter #1
This thread is to invite experts and the interested to share notes on the popular x264 v2 stress test. For those unfamiliar, it's the preferred test for the Haswell Overclocking Guide [With Statistics] because it has the distinct advantages of:
  • it's a real workload (actual video encoding / processing), not something artificial / synthetic,
  • it's highly stressful; it clearly causes more instability than, e.g., Prime95 for me and others,
  • at the same time it's a very cool stressor; it causes more instability, but at lower temps, which makes it more relevant to real-world work (many want to know the temps real work would push their CPU to, not artificially high ones),
  • it only needs a few EXEs from a zip; it's highly portable and doesn't install anything, and you can easily run it from an HDD instead of an SSD (to prevent SSD "wear"),
  • finally, it has its own little built-in benchmark for immediate feedback.
When you consider that it offers all this in one little package, it's easy to see why it's preferred. For more details, see the section of Darkwizzie's Guide under "Stressing" called "x264: The Cool Stresser". A link to the v2 test, nicely repackaged by Angelotti, is the second link there. Here's a screencap of it in use - magnify the graphic and it's in the DOS box.

Note: x264 was only recently repackaged as v2, which is much more focused than the original (thanks, angelotti!). It's a smaller, better stresser, and its results differ from the original's. So be careful when reading results - it's important to say that you're using v2.

Also if you didn't know: The i7-4770K does notably better than i5-4670K due to how the 4770K does AVX (video encoding) better. So don't feel bad if your 4670K's fps isn't so hot, at least if you don't edit video.


As I've been using x264, I've wondered about several aspects of it. That's what this thread is for:
  • How good a benchmark is it?
  • What exactly is it doing?
  • Modding the package
I'm pretty new to OC, and would appreciate input from the vets here. Perhaps this thread can help congeal some useful info on this nice test.

x264 v2 as a benchmark

The benchmark number is the fps (frames per second) figure, which is inversely proportional to the time for each loop. Elapsed time is shown to the hundredths of a second in the console, and fps is always 2121 (the number of frames) divided by the elapsed time. The fps readout itself is only good to three significant figures (sig figs, e.g., 3.18 fps), but you can get about five sig figs if you calculate it yourself from the elapsed time in hundredths of a second.
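For example, recomputing it yourself is trivial (the 667.13 s here is just an illustrative elapsed time, in line with my ~667-second loops mentioned below):

Code:
# Recompute fps from the elapsed time shown in the console (illustrative numbers).
FRAMES = 2121       # frames encoded per loop by the x264 v2 test
elapsed = 667.13    # example elapsed time in seconds, read to hundredths

fps = FRAMES / elapsed
print(f"{fps:.4f} fps")  # ~3.1793 fps, versus the console's rounded 3.18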

This number is clearly subject to influences like any other benchmark would be (it drops if much else is running). You can also see other subtle effects; e.g., on my i5-4670K at 4.4 GHz (cache and RAM at 1:1), a loop finishes about 6 seconds (1%) faster if I run it on an SSD instead of an HDD. (It's CPU-intensive, not disk-intensive, but still sensitive enough to show some effect.) (My rig is in my sig and you can see more details in the "screencap" link above.)

Also FWIW, the first loop seems to take slightly longer than subsequent ones. Maybe I'm just imagining it.

Anyway, your fps number is a quick benchmark you can use, and you can make it more precise by using the elapsed time in hundredths of a second. Also see below for how to capture this output to a file, so you still have it if you crash.

x264 also quotes another number (the kb/s figure) which, for me, only seems to depend on the number-of-threads option:
  • Auto threads always = 35905.40 kb/s
  • 8 threads always = 35912.18 kb/s
  • 16 threads always = 36016.71 kb/s
(My priority is always Normal)

Unlike fps, it's not influenced by anything else running that I've noticed. Even if I bring fps to its knees, these numbers stay exactly as shown. It must be calculated relative to the actual CPU time allocated or some such, not elapsed time per se.

x264 itself makes "ENCODE.MKV" each time it loops. This file is 398,103,597 bytes for me, but this doesn't make any sense relative to the kb/s rates versus how long it ran (e.g. 667 seconds).

Perhaps someone who knows what it's actually doing (unlike me lol) can step in here. Can the kb/s number be used as a benchmark? Perhaps at least for threads?

What's x264 doing under the hood?

Of course, it's encoding a video. But can someone say more? In particular, what parts of one's system does it use beside CPU?
  • When my RAM's at 1:1 (2200 MHz for the 4.4 GHz CPU), I get ~3.18 fps. Slowed to Auto (1333 MHz), I still get 3.07 fps, only about 3% less!
  • Having the cache at full 1:1 speed (44) or at its Auto speed (38) makes no discernible difference. It's actually 0.01 fps slower when the cache is faster, but that's well within the margin of variability (std dev roughly 0.03).
  • As previously stated, running on an SSD versus an HDD saves an average of 6 seconds (1%) per loop.
So... what's it doing such that RAM and even cache speed are almost irrelevant? (Or maybe they are critical, but just pipelined so well that you don't see it?)

Does anything at all besides CPU matter much?

"Modding" the x264 package

I say "modding" a little tongue in cheek, because I'm only talking about extremely minor changes to DOS code, or whatever.

Still, I have found changes like the following to its batch file to work better for me. New lines are marked with [NEW]; remove those markers before running it:

Code:
@echo off
setlocal enabledelayedexpansion

cls
pushd "%CD%\test"

[NEW] copy x264_log2.txt x264_log3.txt 
[NEW] copy x264_log1.txt x264_log2.txt 
[NEW] dir *.log /o-d > x264_log1.txt
del *.log >NUL 2>&1

cls
echo.
[NEW] copy x264_log1.txt con:

echo x264 Stability Test

set target=1
set /p numpass="- Number of loops:"
set threads=16
set prio=normal

[NEW] echo | TIME >run1pass2loop0.log
echo.
echo - Running Test with %threads% threads, and %prio% priority in %numpass% loops
... most of the code is the same from here.

With the [NEW] lines, a directory listing of the .log files is captured to a .txt, then displayed in the console when you start a new x264 session:

Code:
04/30/2014  02:00 AM            58,552 run1pass2loop3.log
04/30/2014  01:49 AM            60,281 run1pass2loop2.log
04/30/2014  01:38 AM            59,237 run1pass2loop1.log
04/30/2014  01:27 AM               131 run1pass2loop0.log
... which can be very helpful if your session crashed (in which case the usual batch file doesn't write out its .rtf). Note how the "copy 1 to 2" style of code keeps the last 3 directory-listing .txt files of the .logs around, just in case. Also, a dummy run1pass2loop0.log is written to capture the start time. I attached my revised batch file, if anyone wants to see it. I also removed some of the extra lines displayed to the console, so the output is a little tighter, and hardcoded my usual choices (16 threads, Normal priority).

There are sure to be other simple but useful modifications to the x264 batch file. For example, the fps itself (hopefully good to several more decimals), or the time in seconds to the hundredths, could be output to text files so they could be viewed later, like I showed above. It'd be really sweet if it could be summarized across however many loops have run so far; see the sketch below. If anyone actually wants to make an EXE handler for the x264 package, a lot could be done for logging. (Don't forget to output incrementally, since you might crash when OCing.)
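As a rough sketch of that idea - assuming each loop's .log ends with x264's usual summary line, something like "encoded 2121 frames, 3.18 fps, 35905.40 kb/s" (check your own logs first) - a little helper script could pull the fps out of every log and average it:

Code:
# Hedged sketch: scan the x264 test's .log files for the final
# "encoded NNNN frames, X.XX fps, Y.YY kb/s" line and summarize them.
import glob
import re

LOG_GLOB = "test/*.log"  # adjust to wherever the .log files end up on your setup
pattern = re.compile(r"encoded\s+(\d+)\s+frames,\s+([\d.]+)\s+fps", re.IGNORECASE)

fps_values = []
for path in sorted(glob.glob(LOG_GLOB)):
    with open(path, errors="ignore") as f:
        for line in f:
            m = pattern.search(line)
            if m:
                fps_values.append(float(m.group(2)))
                print(f"{path}: {m.group(2)} fps")

if fps_values:
    print(f"{len(fps_values)} loops, average {sum(fps_values)/len(fps_values):.4f} fps")
else:
    print("No 'encoded ... fps' lines found - check the log format/path.")

You'd want to run it before starting a new session, since the batch file deletes the old .log files when it starts.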

In case anyone doesn't know, you can use BlueScreenView from NirSoft to easily see when you crashed and what the code was. Together with the captured directory listings, you can still see how many loops you did, and how fast, even if it crashes sometime in the middle of the night.

Ok, that's enough typing for now. If you have more insight on x264 v2, please speak up!

Thanks for helping make this place great - RK7

Attachments: x264StabilityTest64bitlogthatalsocaptureslogsto.txt (2k .txt file)

Registered · 2,049 Posts
I'll try on a 3960X @ 4.8 GHz, which was verified with a 2.5-hour Prime95 Blend with 13 GB of RAM tested. I've had no issues encoding short videos and no BSODs since verification.
 

Premium Member · 10,755 Posts
Quote:
Also if you didn't know: The i7-4770K does notably better than i5-4670K due to how the 4770K does AVX (video encoding) better.
AVX is an instruction set; all Haswell i3/i5/i7 CPUs have AVX and AVX2. The i7 and i3 have Hyper-Threading, which is what helps them here (2 threads per core).

Good thread, can't read atm because I'm in a LoL lobby with some friends, just scanning OCN
 

Registered · 61 Posts · Discussion Starter #4
Quote:
Originally Posted by Cyro999 View Post

AVX is an instruction set; all Haswell i3/i5/i7 CPUs have AVX and AVX2. The i7 and i3 have Hyper-Threading, which is what helps them here (2 threads per core).
Thanks, Cyro. Right, I was just telling newbies that the 4770K will do better than the 4670K on x264 and other video processing. But I'm no expert on precisely why, shrug.

Does anyone know what the "kb/s" numbers are for?
 

Registered · 61 Posts · Discussion Starter #6
Hi bern,

I used Prime95 v27.9 build 1, which was current as of 3 weeks ago.

Have you actually compared P95 v28 to x264 v2?

Everyone said P95 was the worst, but x264 laughed at it, while running cooler to boot.

Can you see how something that's kind of stable but not really stable on P95 v28 (lasts 10-30 minutes) does on x264 v2? As always, don't use Adaptive voltage when stressing.

One of the beauties of x264 is that it's so disposable. It doesn't install anything anywhere. It's just EXEs that you run wherever you plop them. Keeps your Registry cleaner.
 

Registered · 2,049 Posts
Prime95 is also "disposable" and it doesn't write any data to the disk, so I don't see this as an advantage. Actually the thing was like 200 MB to download compared to Prime95's 5 MB or whatever.

I ran v2 at Normal priority a couple of times on my system, no issues; temps were about 7-10°C below my usual max of 75°C. Vcore fluctuated between 1.368 V and 1.376 V. Prime95's increased load pushes my Vcore up to 1.392 V.

All I see is that this program is doing less work. Less work = less power consumption. Work = Time (s) * Volts (V) * Current (A). Power = Volts (V) * Current (A). Hence we see lower temps and a lower probability of failure, sometimes due to lower temps, sometimes due to lower load.
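To put rough numbers on those relations (the current figure is made up, purely for illustration):

Code:
# Hypothetical numbers, just to illustrate the relations above.
vcore = 1.376     # volts (roughly what I saw under the x264 load)
current = 80.0    # amps drawn by the CPU - a made-up figure
seconds = 600.0   # ten minutes of load

power = vcore * current    # watts:  P = V * I
energy = power * seconds   # joules: W = V * I * t
print(f"~{power:.0f} W sustained, ~{energy/1000:.0f} kJ over {seconds:.0f} s")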
The only real way to know if this is "better" is to run Prime95 on a non-temperature-limited system, say controlled to 40°C, and do the same with the x264 test. And I'll predict that an x264 system that's deemed "stable" will fail when tested with Prime95, as it performs work at a much faster rate, so even a small probability of making an error will cause a failure in a shorter period of time.

The argument that Prime95 doesn't simulate a real-world scenario doesn't work. We are trying to run as many operations per second as possible to see if any one fails. You could run 100 operations a year and never see a failure, or you could run 100 operations per second and see a failure in 5 seconds. In both scenarios the CPU was unstable, but in the first scenario you are not running enough operations to have much chance of exposing the failure. So to speed this process up we use Prime95 or LinX. x264 is just a slower method and gives false hope, because running it for 24 hours completes only a fraction of the operations that Prime95 does in 24 hours.
 

Registered · 726 Posts
Quote:
Originally Posted by bern43 View Post

When you're talking about X264 being more stressful than P95, you need to make sure you reference the version. I'm doubtful that it's more stressful than the 28 version of P95.
This..... I could easily pass x264 v2 for hours, but my overclock failed and BSODed within minutes of P95 v28.5, if I recall right.

Edit : I ran prime with the 1344-1344 test.
 

Premium Member · 10,755 Posts
x264 won't show a lot of fails that the crazy intensive tests will, but that does not mean that it's useless for making a system extremely stable, and it's ~50°C cooler than Linpack. That's why it's extremely useful.

It's also important to note that some fails in those intensive tests happen BECAUSE they are doing that much work per second; it's not just a case of more work per second being more likely to fail. When you're drawing massively more power (as much as double, that kind of scale), it has implications for things like voltage droop on VRIN and maybe even the correct VRIN (input voltage) to use.
 

Iconoclast · 30,781 Posts
Quote:
Originally Posted by RedKnight7 View Post

x264 itself makes "ENCODE.MKV" each time it loops. This file is 398,103,597 bytes for me, but this doesn't make any sense relative to the kb/s rates versus how long it ran (e.g. 667 seconds).
Two-pass encoding.
Quote:
Originally Posted by Cyro999 View Post

x264 won't show a lot of fails that the crazy intensive tests will, but that does not mean that it's useless for making a system extremely stable, and it's ~50°C cooler than Linpack. That's why it's extremely useful.
It's a much better benchmark than a stability test.
 

Premium Member · 10,755 Posts
Quote:
Originally Posted by Blameless View Post

Two-pass encoding.
It's a much better benchmark than a stability test.
It tells you when you're stable enough to run the latest version of the x264 encoder for however long; that's good enough information, mostly. You can always add volts on top of that if you want.
 

Iconoclast · 30,781 Posts
Quote:
Originally Posted by Cyro999 View Post

It tells you when you're stable enough to run the latest version of the x264 encoder for however long; that's good enough information, mostly. You can always add volts on top of that if you want.
Running a stress test for however long does not imply with any certainty that it will pass for the same length of time again.

Even if you have a system unstable enough to fail half the time in a long run, it will still pass long runs often enough to give a false sense of stability.

I use x264 to transcode videos fairly often. Some of these transcodes, at the settings I use, will take 12-24 hours a run on my SB-E or Ivy-E systems. For me to say with any certainty that no 24 hour transcode I ever attempted was ever likely to fail in the future, I'd need to run a ~240 hour test, if I'm limited to testing with x264 itself. A 50/50 shot of success, which I'd call pretty piss poor stability, is still going to pass one run half the time, two runs a quarter of the time, and even a triple length run an eighth of the time. More demanding software generally doesn't need to run anywhere as long to prompt an error. There is more vdroop, there is higher ripple current, there is more heat, and there are more instructions being retired in a given period of time. All of these tend to make failure more likely, and the odds of a system failing an x264 test after passing Prime95 and LINPACK tests new enough to be using the same instructions, are pretty slim. There are always exceptions, and one should certainly check the actual programs one expects to use for stability, but only after the system has proven itself in the more stressful stuff.
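To put quick numbers on that, assuming each long run of a marginal system fails independently with some probability p:

Code:
# Chance that a marginal system still passes n consecutive long runs,
# if each run independently fails with probability p (illustrative values).
for p in (0.5, 0.1):
    for n in (1, 2, 3, 10):
        pass_rate = (1 - p) ** n
        print(f"p_fail={p:.1f}, {n:2d} clean run(s): slips through {100 * pass_rate:5.1f}% of the time")
# A 50/50 system passes one run half the time and a triple-length/3-run test
# about 12.5% of the time, which is why a single long pass is weak evidence.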

Good benchmarks are rarely good stress tests, and good stress tests are rarely good benchmarks. You must proof your firearm or cannon with a much heavier load than it will ever be asked to fire in combat, but you must use combat loads to sight them. To do it the other way around will result in sights that never match up to the target, while risking catastrophic failure with every shot.
 

Premium Member · 10,755 Posts
Quote:
Running a stress test for however long does not imply with any certainty that it will pass for the same length of time again.
That's why I don't use it in that way; I just use it to say that if you can pass once for 4 hours, your mean time between failures is probably large (more likely to be 2+ hours than 5 minutes).
Quote:
All of these tend to make failure more likely, and the odds of a system failing an x264 test after passing Prime95 and LINPACK tests new enough to be using the same instructions, are pretty slim.
I agree, but Linpack with up-to-date instructions and fast RAM means some ridiculous number like >300 W power draw at 1.45 Vcore. The Z87 boards are not designed to handle that and neither is the chip. You're limiting yourself a lot by running a test that, when not bottlenecked, reaches 50°C+ hotter than encoding at 100% CPU load.

The other stuff said is why I use x264 as a tool to find approximate stability, rather than running it, passing an hour, and then saying "everything is stable".
 

Registered · 61 Posts · Discussion Starter #14
Quote:
Originally Posted by jasjeet View Post

I ran v2 at Normal priority a couple of times on my system, no issues; temps were about 7-10°C below my usual max of 75°C. Vcore fluctuated between 1.368 V and 1.376 V. Prime95's increased load pushes my Vcore up to 1.392 V.

All I see is that this program is doing less work. Less work = less power consumption. Work = Time (s) * Volts (V) * Current (A). Power = Volts (V) * Current (A). Hence we see lower temps and a lower probability of failure, sometimes due to lower temps, sometimes due to lower load.
The only real way to know if this is "better" is to run Prime95 on a non-temperature-limited system, say controlled to 40°C, and do the same with the x264 test. And I'll predict that an x264 system that's deemed "stable" will fail when tested with Prime95, as it performs work at a much faster rate, so even a small probability of making an error will cause a failure in a shorter period of time.

The argument that Prime95 doesn't simulate a real-world scenario doesn't work. We are trying to run as many operations per second as possible to see if any one fails.
Ok, thanks for comparing. I accept that x264 isn't "the most stressful" test. But I still think it's a very good one, and the Haswell Guide is right to prefer it.

People OC for a lot of reasons, with different goals and risk levels. But hearing what you say, I can only think folks hurt themselves (settle for a lower multiplier) by testing with something far more stressful than they'll ever really encounter.

To extend your reasoning - which you yourself say is about pushing the CPU as hard as you possibly can - your dream test would instantly crash the CPU no matter how low you clock it. Do you see how your reasoning puts you on this path?

Isn't it only logical to test it on what one actually might do? See below for more.

Of course, if you actually WILL be hunting primes all the time, then sure, test with P95. People use their machines for a lot of different things. As for me, I know I'll only be doing unintensive stuff and games, plus maybe a couple of weeks a year total of video editing. So video processing is my high bar. I want to test my actual challenge, shrug.

Question: Why did your Vcore go up so much with P95? Do you have Adaptive voltage on? On my board, fixed voltage stays pretty much fixed.

Quote:
Originally Posted by shremi View Post

This..... I could easily pass x264 v2 for hours, but my overclock failed and BSODed within minutes of P95 v28.5, if I recall right.

Edit : I ran prime with the 1344-1344 test.
Thanks, Shremi. Ok fine, the new P95 is more stressful. Wouldn't you know it, apparently I made the OP the day they released a tough new P95, LOL.

It kind of raises the question of what the next version of P95 is going to be like... easier than x264 again?

More seriously, I also didn't test Linpack or IBT. But the point of this thread was not to proclaim x264 as THE most stressful test; it's that x264 v2 is all the things I mentioned rolled into one, which makes it wonderful for many folks.

Really, the more I think about it, the more I think folks cripple themselves with artificial tests of no relevance to their actual work. (But P95 is not artificial for people actually hunting primes!)

Quote:
Originally Posted by Blameless View Post

Two-pass encoding.
Thanks for tossing that out there, but do you mind doing the math using the examples I gave? How can you relate the kb/s rate to the final file size and the seconds needed? I still don't know what kb/s is showing, so I don't know if it's useful.

I've even done more testing and found that it does not change even if I change multipliers. So now I really don't know what it could be. I hope someone can step in and explain what this number really is showing mathematically. I don't even know if "kb/s" is bits or bytes (ALL the output is lowercase).

Quote:
Originally Posted by Blameless View Post

Running a stress test for however long does not imply with any certainty that it will pass for the same length of time again.

Even if you have a system unstable enough to fail half the time in a long run, it will still pass long runs often enough to give a false sense of stability.

I use x264 to transcode videos fairly often. Some of these transcodes, at the settings I use, will take 12-24 hours a run on my SB-E or Ivy-E systems. For me to say with any certainty that no 24 hour transcode I ever attempted was ever likely to fail in the future, I'd need to run a ~240 hour test, if I'm limited to testing with x264 itself. A 50/50 shot of success, which I'd call pretty piss poor stability, is still going to pass one run half the time, two runs a quarter of the time, and even a triple length run an eighth of the time. More demanding software generally doesn't need to run anywhere as long to prompt an error. There is more vdroop, there is higher ripple current, there is more heat, and there are more instructions being retired in a given period of time. All of these tend to make failure more likely, and the odds of a system failing an x264 test after passing Prime95 and LINPACK tests new enough to be using the same instructions, are pretty slim. There are always exceptions, and one should certainly check the actual programs one expects to use for stability, but only after the system has proven itself in the more stressful stuff.

Good benchmarks are rarely good stress tests, and good stress tests are rarely good benchmarks. You must proof your firearm or cannon with a much heavier load than it will ever be asked to fire in combat, but you must use combat loads to sight them. To do it the other way around will result in sights that never match up to the target, while risking catastrophic failure with every shot.
This is all very sound reasoning, Blameless, but I take a different approach. Test your final multiplier's VID pretty thoroughly. As a very rough approximation, I found that if it crashed in 1 minute, then +0.02 V of VID had it crash in ~10 minutes, and +0.02 more had it crash in about an hour. This can take time, because no one trial is a final statement, as you say, so you might have to average several runs. (Or just go with your gut... it's your own PC, you can do what you want.) I had a setting good for about 40 minutes, then added 0.05 V of VID to it, and have never had a crash since; it has yet to fail in several days of testing and work/play.
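Purely as a rough illustration of that pattern (these are my own crude data points, not a rule), the trend works out to roughly an order of magnitude more time-to-crash per +0.02 V:

Code:
# Illustrative only: extrapolating my rough observation that each +0.02 V of
# VID stretched time-to-crash by very roughly 10x. Not a law of physics.
def est_minutes_to_crash(extra_vid, base_minutes=1.0, step=0.02, factor=10.0):
    """Hypothetical curve: time-to-crash grows ~factor x for every 'step' volts added."""
    return base_minutes * factor ** (extra_vid / step)

for dv in (0.00, 0.02, 0.04, 0.05):
    print(f"+{dv:.2f} V -> ~{est_minutes_to_crash(dv):,.0f} min")
# +0.05 V over a setting that crashed around the 1-minute mark works out to a
# few hundred minutes here; adding the same margin on top of a 40-minute-stable
# voltage is the same idea, just starting from a much larger base.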

So, your observations are correct. But you don't have to approach it like that. Why not just skip over all the long testing? Unless someone has evidence that CPUs get weird with volts when they get near permanent stability, we'll use Occam's Razor and assume they don't.
 

Registered · 2,049 Posts
Intel doesn't put out CPUs that make mistakes. So my dream test would also not fail if the CPU was truly stable. You are not suffering by reducing the multiplier when using Prime95 compared to the x264 test; all you are doing is setting a speed which has a greater probability of being stable. And as Blameless has said, just because you're 24-hour stable doesn't mean you can survive 24-hour loads. In theory you could instantly crash. By running a more rigorous test you reduce this possibility; the harder the stress test, the further it's reduced.
 

Registered · 61 Posts · Discussion Starter #16
Quote:
Originally Posted by jasjeet View Post

Intel doesn't put out CPUs that make mistakes. So my dream test would also not fail if the CPU was truly stable. You are not suffering by reducing the multiplier when using Prime95 compared to the x264 test; all you are doing is setting a speed which has a greater probability of being stable. And as Blameless has said, just because you're 24-hour stable doesn't mean you can survive 24-hour loads. In theory you could instantly crash. By running a more rigorous test you reduce this possibility; the harder the stress test, the further it's reduced.
Good enough; that makes sense just fine. I am not saying it's wrong.

I'm saying that there's a better way.

One is the absolute approach, the other is relative to the work expected of the CPU.

Here's the absolute argument as I see it; please correct me if wrong:
  • Realistically, you can't test an OC forever to make sure it will never fail.
  • So instead, you make it pass a more stressful test. Indeed, the most stressful test you can find.
  • If it can pass the worst stress, it should be ok for everything else.
Good enough. It makes sense just fine. None of these arguments are rocket science.

But there is another way.

You can chart the CPU's stability versus volts, for work you will actually do. As part of making this line, find a voltage stable for a long time (e.g. overnight).

Then give the voltage one more nice bump up the stability line you've drawn.

So, while you can't test forever,

You don't have to.

You don't have to cripple your actual work, to be absolutely theoretically sure it can handle everything.

Of course, if you WANT to be absolutely theoretically sure, then go for it. It's your choice.
 

Registered · 2,049 Posts
Just because you can pass a more rigorous test doesn't mean you will pass a less rigorous test. It's all about reducing the probability of error, and the fastest way to do that is to run the most stressful program, to simulate operations as fast as possible. Like I said before, you can either run Prime95, which will do millions of operations per second, or run 1 operation per hour for a year. I'd rather test for a couple of hours, so it's quicker to use Prime95. The amount of work doesn't make a CPU more vulnerable to error; it's the rate of work which does.

Say we have a CPU that is 90% stable:
Prime95 does 100 operations in 1 second, so 1 operation takes 0.01 seconds.
On average you could expect a failure within 0.1 seconds, or after 10 operations, as you have a 1 in 10 rate of failure.

The other test does 100 operations in 2 seconds, so 1 operation takes 0.02 seconds.
On average you could expect a failure within 0.2 seconds, or after 10 operations, as you have a 1 in 10 rate of failure.

Now when you get to 99.99% stable:
Prime95 does 10,000 operations in 1 second, so 1 operation takes 0.0001 seconds.
On average you could expect a failure within 1 second, or after about 10,000 operations, as you have a 1 in 10,000 rate of failure.

The other test does 10,000 operations in 2 seconds, so 1 operation takes 0.0002 seconds.
On average you could expect a failure within 2 seconds, or after about 10,000 operations, as you have a 1 in 10,000 rate of failure.

So you can see it's getting quite likely you'll have stability in the 99.99% range; however, on average it would still take Prime95 half the time the "other" test needs to prove you are unstable. It's not because Prime95 is pushing the CPU harder and temperatures are higher, etc. The important thing here is that the "other" test is loading the CPU 50% less than Prime95, yet it will still fail at some point, on average taking twice as long as Prime95 to tell you that. It's because the rate of work is higher, so on average it'll take less time to uncover a failure. It means your 24-hour Prime95 test is much, much more reliable than the "other" test.

When you scale that up to millions of operations, there is a big difference. That is why you may see an instant BSOD in P95 on an x264-stable CPU. It's not necessarily the type of work that matters (although to some extent it does), but more the rate at which the work is done.
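Here's a rough sketch of that rate argument, assuming each operation fails independently with the same tiny probability (the numbers are made up):

Code:
# Simplified model: every operation fails independently with probability p, so
# the expected number of operations before the first error is 1/p, and the
# expected time to expose instability is (1/p) / ops_per_second.
def expected_seconds_to_first_error(p_fail_per_op, ops_per_second):
    return (1.0 / p_fail_per_op) / ops_per_second

p = 1e-4  # hypothetical per-operation failure probability ("99.99% stable")
for name, rate in [("faster test", 10_000.0), ("half-rate test", 5_000.0)]:
    t = expected_seconds_to_first_error(p, rate)
    print(f"{name}: ~{t:.1f} s to the expected first error")
# Doubling the operation rate halves the expected time needed to expose the
# same underlying instability; it doesn't change whether the CPU is stable.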
 

Optimal Pessimist · 2,998 Posts
Quote:
Originally Posted by jasjeet View Post

Intel doesn't put out CPUs that make mistakes. So my dream test would also not fail if the CPU was truly stable. You are not suffering by reducing the multiplier when using Prime95 compared to the x264 test; all you are doing is setting a speed which has a greater probability of being stable. And as Blameless has said, just because you're 24-hour stable doesn't mean you can survive 24-hour loads. In theory you could instantly crash. By running a more rigorous test you reduce this possibility; the harder the stress test, the further it's reduced.
I sort of agree - the more tests the better. x264 is repetitive on a cycle of about 10 minutes. It is a decent test, but it only tests transcoding and is mostly non-floating-point (non-AVX) work. Prime95 (v28.5) is heavily floating-point oriented and, depending on how you configure it, is non-repetitive and can uncover some instabilities quicker. IBT is another repetitive test, but it doesn't seem to uncover Haswell instabilities like it did on previous generations. RealBench is another non-repetitive stress test that can uncover instabilities over time.

If you are looking for something to cover your exact use cases, good luck. Barring that, I think running multiple types of stress tests is your best bet.
 

Registered · 516 Posts
Agreed, multiple stress tests are the way to go. But I've never had an overclock fail after running through at least a 12-hour P95 Custom Blend run. It is overkill, but I'd rather not have my rig crash while I'm working.
 