
Need some direction for overclocking a 5900X purely for gaming

66K views · 63 replies · 17 participants · last post by zzztopzzz
#1 ·
Moved one of my gaming PCs off a 9900K platform over to a 5900X/Asus Dark Hero board. Wanted to try something new, and I don't know much about AMD. This is with a 3080 Ti graphics card.

Seems there are multiple different ways to get started overclocking this chip on this motherboard: PBO, Dynamic OC, Asus DOS. Looking for a recommendation on which one to focus on for an overclock used purely for gaming. This is cooled by a 560mm Hardware Labs Nemesis GTX rad in push/pull.

Thanks
 
#5 ·
I have a 5900 and use PBO. Wish I had your video card.
 
#6 ·
See my post in the thread below; it links to an article and video on what I believe are the best BIOS settings for a PBO overclock. Also, the better the cooling you provide to the CPU, the larger your overclock will be, since it boosts higher at lower temps.

 
#7 · (Edited)
Increasing PBO limits hits diminishing returns fast and hard. The single best thing you can do is use "Advanced" PBO, set all limits to defaults, and start with Curve Optimizer.

- Multi-threaded performance is dictated by the slowest "active" cores, so applying negative CO to the "worst" cores increases it. Your slowest cores may or may not stay stable at -30, and even then they likely will not reach the peak clock-rates of the fastest cores (CO ±0).

- Single-threaded performance is dictated by the fastest cores, so applying negative CO to the "best" cores increases it. My fastest core is already unstable at -10.

Only once you have finished setting up CO does it make sense to try increasing some limits. Leave EDC at default (140 A on the 5900X), increase TDC slightly by 10-20 A (95 A stock on the 5900X), and increase PPT only by as much as you are willing to raise power consumption without getting much performance in return.
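To make the slowest-core/fastest-core point concrete, here is a toy model in Python. The per-core clocks and the "5 MHz per CO step" scaling are invented for illustration; real boost behavior depends on temperature, load, and FIT limits, not a fixed MHz-per-step value.

```python
# Toy model: multi-threaded perf tracks the slowest active core,
# single-threaded perf tracks the fastest core. All numbers are illustrative.

# Hypothetical per-core peak clocks (MHz) at CO 0, best to worst silicon:
base_clocks = [5000, 4975, 4950, 4900, 4850, 4800]

def apply_co(clocks, offsets, mhz_per_step=5):
    """Assume each negative CO step buys ~5 MHz of boost headroom (made up)."""
    return [c + (-o) * mhz_per_step for c, o in zip(clocks, offsets)]

# Negative CO on the worst cores lifts the all-core floor,
# while the best cores (left at 0) set the single-thread ceiling:
tuned = apply_co(base_clocks, [0, 0, 0, -20, -25, -30])
print("MT limit (slowest core):", min(base_clocks), "->", min(tuned))
print("ST limit (fastest core):", max(base_clocks), "->", max(tuned))
```

The multi-threaded floor rises while the single-threaded ceiling only moves if the best cores also take negative CO, which is exactly why they are tuned separately.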
 
#8 ·
Also, for gaming, getting the highest memory speed & fclk at a 1:1 ratio will probably net you the most performance. And once you get that, tweaking all the primary, secondary, and tertiary memory timings as tight as possible.

Memory overclocking is really the fun part with these chips imo, tons of tweaking to do.
 
#9 ·
What resolution are you playing, what types of games, and what IQ settings?

If you're playing any modern 3D game with high IQ @ 1440p or higher, you'd be hard pressed to find any noticeable performance gains.

PBO + Curve Optimizer is likely the best approach for tuning. CTR 2.1 (Hybrid) is another good approach to dial in performance efficiency. Beyond that, Zen 2/3 CPUs are very dynamic, and out-of-the-box performance is typically great. Since Zen 2/3 is very dependent on power draw and temperatures, it's very difficult to gain noticeable performance without really pushing the envelope in terms of power delivery and cooling. Alternatively, tuning your RAM may bring up your bottom line / minimum framerate.

For example, look at the performance results between generations of CPUs, and remember that there is a ~20% IPC lift between Zen 2 and 3. Pushing an additional 2-4% higher single-core clock on a 5900X likely won't yield you anything noticeable. Just flagging this now before you spend hours down the rabbit hole.
 
#11 ·
Just flagging this now before you spend hours down the rabbit hole.
Thanks for all the useful information!
My ram is 2x16gb 3800Cl14 Trident Z Neo that I will run on this 5900X
The RAM OC rabbit hole is very deep.

You mostly want to work with a 2x scalar, and don't touch the boost override at all; it can create clock stretching.
Once this thread is unlocked again, you can ask there for RAM OC help.
But be aware that recently there has been a bit of drama, so it's closed for cleaning :)
Just be friendly and honest ~ then everyone will help you.

This you might want to bookmark for Vermeer.
And also this post (bottom half mostly) ~ which was for Matisse (but Vermeer is quite similar, just with 40 mV stepping).
OC'ing T-Force 4133 CL18 (the whole thread is interesting).

You want to focus on getting GDM off with 2T to run with your RAM, and try not to use any negative offsets or boost overrides at the start.
You can slowly extend the EDC limit of the 5900X.
See post:
CoreCycler - tool for testing Curve Optimizer settings, and the other one for orientation.

Otherwise yes, CTR 2.1 is a very valuable thing,
but you should read the ~8 pages of its tutorial before using it.
Loadline needs a change, and no offset + no Curve Optimizer, for it.
 
#10 ·
Thanks for all the useful information! I have a lot of reading to do. I am gaming at 4K 120 Hz.

My RAM is 2x16 GB 3800 CL14 Trident Z Neo that I will run on this 5900X.

I have a 10900K/3090 EVGA that runs 10 cores at 5.3 GHz at 1.32 V; it was a lot more straightforward to get there on Intel. It's a lot more involved with AMD, that is for sure.
 
#22 ·
At 4K you're going to be GPU-limited pretty much all the time, and your 5900X will be more than fast enough to keep up.
So I would start by optimizing the GPU.
If you want to play with the CPU, start with FCLK and memory. Use y-cruncher and check for WHEA errors for FCLK stability, then TM5 for memory frequency and timings.
Once you have FCLK and memory stable, you can enable PBO. I would set this to motherboard limits and then try to reduce curve offsets per core, using CoreCycler to test for stability. This last part is not really going to gain you any fps, but it can be fun to do.
 
#12 · (Edited)
Here is my take: there is no clock stretching. Instead there are three different clock limiter conditions:

Frequency Limit - Global: This is the one you care about most. No single core can clock higher than this limit, which in turn is affected by various parameters: temps, PBO limits, currently active cores (number and specific core limits).

Per-CCD/CCX clock reduction: This mostly affects the "faster" CCD running into the PBO limits while the "slower" CCD does not. What happens then is that one CCD has its clock reduced so that both CCDs can run within the PBO limits.

With some LLC settings it can also happen that one CCD is reduced while the other CCD even has its clock increased, while the average of both CCDs stays the same. As a result both CCDs come closer in wattage.

All-CCD/CCX clock reduction (below the global frequency limit): This happens with too much voltage droop (low LLC) under heavy load relative to Vcore.
 
#13 · (Edited)
There is no clock stretching.
Clock stretching refers to internal package throttling at the same applied clock/P-state :)
It has only a tiny, tiny bit to do with clock reduction. Real stretching is package throttling, but the term can also be used as "refusal of a P-state to meet its target V/F curve".

Except for that secondary wording, it has nothing to do with lowering frequency ~ but with package throttle, because of a "stretched, fake P-state" caused by the internal package throttle.

EDIT:
All-CCD/CCX clock reduction (below the global frequency limit): This happens with too much voltage droop (low LLC) under heavy load relative to Vcore.
Frequency reduction can happen via one of these 7 sensors, plus a couple more internal sensor trees.
Marked are a couple of potential spots where "clock reducing" can occur. (Screenshot attached.)

But this also has nothing to do with "clock stretching" :)
Not really, at least ~ with one * exception.
EDIT 2:
Oh, I forgot to mark the programmed PROCHOT too, which, like on Matisse, can reduce the clock by up to 100 MHz for every 9.5-10 °C past a predefined temperature range.
 
#14 · (Edited)
I disagree. "Clock stretching" is a technical means of lowering a clock-rate below the rated clock source. The term has nothing to do with P-states at all and thus should not be mixed up with them, as that only adds to confusion.

Of course I know about the sensors, which is why I listed some of them when I explained the global frequency limit. Most users will only ever run into the global frequency limit, which is why they should mostly only care about that.

In the past AMD applied clock-stretching as a means to lower frequency when voltage dropped below what was considered crash-safe. Nowadays they seem to use another technical means, but clock-reduction still happens when voltage drops too low. This only protects against crashes within some limits, though, and most of the time you run into the global frequency limit first.
 
#15 · (Edited)
I disagree. "Clock stretching" is a technical means to lower a clock-rate below the rated clock-source.
I'm not sure if that's the official wording.
We are probably aiming at the same thing,
but there are two different types of reduction.

Actually 3:
  • "what we call stretching" ~ the V/F curve not meeting the target clock, yet the strap for it being applied (associated with internal package throttle, yet different)
  • package throttling ~ the V/F curve meeting its targets, but reducing because of external limits from FIT
  • frequency throttle ~ the V/F curve not meeting targets, not applying the set strap because of power, heat, or lack-of-voltage limits (which is what I read in your comment above about the LLC part *)
* and that is the reason I mentioned it has "nothing to do" with stretching, or however it's called internally :)
Sadly the stretching check remains under NDA; that leaves two you can see as a user with the correct tools. The 3rd/last option is what close to everyone of us can read out with HWiNFO.
 
#20 · (Edited)
The way I look at it, clock stretching describes the case where the CPU is not doing work every clock cycle due to some condition, so performance is lower than expected for that clock speed.

E.g. HWiNFO or other monitoring software reports a 4.85 GHz clock, but the CPU performs as if the clock were lower than that.
This can be detected by looking at effective clocks in HWiNFO, or by checking benchmark results. It seems to occur on Zen 2 and 3 as a result of the CPU not getting as much voltage as it expects for a given frequency, and is most commonly caused by setting a negative offset on Vcore, which is the traditional way of undervolting a CPU.

Setting a negative value in Curve Optimizer does not cause this effect, but it can lead to instability instead, as the CPU now requests and expects a lower voltage for a given frequency.
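The detection idea above can be sketched in a few lines of Python. This is a simplified model with made-up numbers and a threshold I chose myself; real tools derive effective clocks from hardware counters, not from these arguments.

```python
def stretching_suspected(reported_mhz, effective_mhz, load_pct, tolerance=0.03):
    """Flag likely clock stretching: effective clock well below the reported
    clock while the core is (near-)fully loaded. At partial load the gap is
    mostly idle time (C-states), not stretching, so we refuse to judge."""
    if load_pct < 95:
        return False  # can't separate stretching from idle at partial load
    return effective_mhz < reported_mhz * (1 - tolerance)

# Reported 4850 MHz under full load, but effective clock only 4600 MHz:
print(stretching_suspected(4850, 4600, load_pct=100))  # -> True
# Effective clock within tolerance of the reported clock:
print(stretching_suspected(4850, 4840, load_pct=100))  # -> False
```

The `tolerance` guard matters because reported and effective clocks never match exactly even on a healthy system.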
 
#23 · (Edited)
This can be detected by looking at effective clocks in HWiNFO, or by checking benchmark results. It seems to occur as a result of the CPU not getting as much voltage as it expects for a given frequency, and is most commonly caused on Zen 2 & 3 by setting a negative offset on Vcore, which is the traditional way of undervolting a CPU.
This is written in point 3:
  • frequency throttle ~ the V/F curve not meeting targets, not applying the set strap because of power, heat, or lack-of-voltage limits (which is what I read in your comment above about the LLC part *)
Yet "clock stretching", or however they call it internally (the 3rd stage, not the 2nd stage, which is package throttle),
is when the main P-state clock is set inside the V/F curve range and the effective clock is applied ~ yet performance is worse.
Package throttle, just pure package throttle, also shows up in the effective clock segment ~ visible by using CPU Snapshot polling.

Stretching rather applies to a "fake" but "set & held" frequency, which is hard to detect.
Technically, you are always stretching until you can prove you aren't :D

A slower effective clock with a met P-state target is throttling because of FIT conditions.
A slower effective clock without a met P-state target is also throttling, but because of first-stage sensor reasons (voltage, thermals, silicon quality factor [condition]).

EDIT:
At least the "clock stretching" I know is different from the package throttle I've experienced because of internal FIT (let's call them) "issues".
And the package throttle I can halfway read out via SMU & FIT is different from the "basic" SMU readouts HWiNFO gathers ~ which indicate "frequency throttle" only.
 
#27 · (Edited)
Anyone know what's going on in the other thread and why it's still locked 3 days later?

Anyway @ Veii

I have done some more testing in Windows 11 lately and it seems to be behaving normally. Have done some new Sandra runs and everything is looking like it should, I think (?)

Benchmark screenshots attached for:
  • Regular PBO without CO
  • PBO CO -30 all-core
  • CTR
  • Static 4800/4700

Have also done some other memory benches (screenshots attached).

And lastly some Geekbenches:
  • Geekbench 3
  • Geekbench 4 ~ my current GB4 high score from earlier this year (8215/74733 points)
  • Geekbench 5 ~ my current GB5 high score from earlier this year (1844/20054 points)

I have done even more benches, but I don't think I need to post more.

I can't find any problems with the thread scheduler or memory performance in Win11 compared to Win10. Synthetic AIDA64 has some problems reading L3 bandwidth and latency for Zen 3, but the real performance in programs/benches is at least as good as with Windows 10 in my testing.
 
#28 · (Edited)
I have done some more testing in Windows 11 lately and it seems to be behaving normally. Have done some new Sandra runs and everything is looking like it should, I think (?)
What did you change to get the right spike back?
Now it looks how it should.

Though I also had strangely "bad looking" results.
Once in a blue moon I got the L3 cache up to 612 GB/s, but only read.
That was once and never again ~ AIDA keeps sitting at 120-160 GB/s.
Still far away from the usual 660+, but at least it went up once.

That, plus the bad SiSandra result, made me question the thread scheduler and CPPC.
But all-core and CPPC off didn't change it at all ~ it remained "broken".
The only "other" time I saw it lift up was when applying thread load while testing it ~ then somehow it scored better (as if something were throttling it intentionally).
Which logically made me point to the thread scheduler ~ as other people have it too, not only me.
Generally it's strange that only Insider previews have this.

Yep, how did you get SiSandra to test it correctly? :D
Performance generally should be better on close to everything I tested too.
But SiSandra reported strangeness with the cache too ~ so I'm not sure what is going on.

EDIT:
If I have programs open, generally a lot of background load, the results appear even better this time (I'm on v22000.51). (Screenshot attached.)
Else it stays sub-150 GB/s.
Synthetics here and there: something randomly changes and keeps changing.
I should try to downflash to SMU 56.30 & test if I can at least hit a peak of 360 GB/s on all 3 @ 10.4 ns
~ only this is what I haven't tried.
 
#29 · (Edited)
First of all, measuring "Effective Clock" only works properly close to 100% load. Else it is a mess of real idle/C-states and HWiNFO putting even too much emphasis on C-states (compared to Ryzen Master's effective clocks). In the following screenshots positive Clock Reduction % means C-states, negative means clock reduction below the global frequency limit without C-states (but only relevant close to 100% load).

Curiously average (effective) clock-rates are always about 10-13 MHz below the global frequency limit when all logical cores are fully loaded by P95 (not so by CB23 or when P95 only loads a smaller number of cores).

Screenshots attached for the following scenarios:
  • Idle, C-states enabled, stock
  • Idle, C-states disabled, stock
  • P95, small FFT AVX, stock
  • P95, small FFT AVX, PBO "Motherboard limits" + CO, slightly temp limited at 90°C
  • CB23, PBO "Motherboard limits" + CO, not temp limited at 80°C
  • P95, small FFT AVX, PBO "Motherboard limits" + CO, LLC Mode 8 (lowest on MSI), not temp limited at 83°C
  • Statuscore, only EVEN cores, PBO "Motherboard limits" + CO, LLC Mode 8, not temp limited at 77°C

All clock-reductions are measurable as such in my example. As far as I can tell my CB23 tests results of various settings always corresponded close enough to the measured clocks that nothing hidden seems to have affected them outside of test variances. So either I need another testing method to reproduce performance drops without seeing measured global limit / effective clock-rate drops, or your performance drops should be measurable as global limit / effective clock-rate drops as well.

PS: "Motherboard limits" seems to mess with Scalar, too, else CB23 would not run at 4600 MHz and P95 would not pump over 230 W through my CPU. Even when I manually set it to x1 it hits that high, while I am pretty sure it did not do so before I specifically set it to x10 in former test-runs. Might have to recheck, but that's not really relevant for these screenshots.

PPS: "Core Power Average" should be labeled "Cores Power", as it is the sum of all specific cores' power consumption sensors (per CCD) put together in a custom sensor. I only noticed the wrong labels after taking the screenshots.
 
#32 ·
First of all, measuring "Effective Clock" only works properly close to 100% load. Else it is a mess of real idle/C-states and HWiNFO putting even too much emphasis on C-states (compared to Ryzen Master's effective clocks).
Yes.
I don't use Ryzen Master. It's too annoying to wipe away its traces from a read-only partition on ROM.
In the following screenshots positive Clock Reduction % means C-states, negative means clock reduction below the global frequency limit without C-states (but only relevant close to 100% load).
No idea, sorry.
Seems to be a new update ~ haven't checked what HWiNFO currently reports and how the "readout norm" changed.
But it's a good illustration, thank you.
So either I need another testing method to reproduce performance drops without seeing measured global limit / effective clock-rate drops, or your performance drops should be measurable as global limit / effective clock-rate drops as well.
I'm not sure about HWiNFO's current measurement methods. A new world for me, with changed readouts.
Thank you overall for illustrating how it currently logs values. Helpful to understand how it works now :)
It probably needs 2-3 weeks of playing with the new update to figure out what it logs and learn what it means.
But if I run the benchmark again after something like 15 seconds, then I start getting these bugged/strange numbers again.
As long as the real performance in real/other benchmarks is good, I don't care all that much about AIDA64's clearly bugged L3 numbers.
I can relate :)
Something remains off, and except the thread scheduler being "buggy", I cannot point to anything else.
Or AMD having multi-layer kernel issues with security encryption.
Though performance is where it has to be, so only Microsoft is at fault ~ from my perspective.
 
#31 ·
Same as the TS, I'm also switching from an Intel 9900KS to this; waiting for my board and CPU to arrive in a few hours and will build it today. Can't wait.
 
#33 · (Edited)
The "sensors" in my screenshots are not new HWiNFO sensors, but custom sensors based on already present ones. They use the formula:

"Core 1 T0 Effective Clock"*"Bus Clock"/"Core 1 C0 Residency"/"Core 1 Ratio"-100

For Core 0 I tried a slightly different formula to work around possible rounding, but in the relevant load range the results are the same:

"Core 0 T0 Effective Clock"*100/"Core 0 C0 Residency"/"Core 0 Ratio"*"Bus Clock"/100-100
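The same formula can be written as plain arithmetic. A minimal sketch in Python with made-up sample values (the sensor names and semantics follow the post; the example numbers are not real readings):

```python
def clock_reduction_pct(effective_clock_mhz, bus_clock_mhz, c0_residency_pct, ratio):
    """Custom-sensor formula from the post:
    EffectiveClock * BusClock / C0Residency / Ratio - 100.
    ~0 %  -> effective clock matches ratio*bus for the time spent in C0
    < 0 % -> clock reduced below the limit (meaningful only near 100% load)
    > 0 % -> C-state accounting skew (effective clock too high for residency)"""
    return effective_clock_mhz * bus_clock_mhz / c0_residency_pct / ratio - 100

# Fully loaded core running exactly at its 46x multiplier on a 100 MHz bus:
print(clock_reduction_pct(4600, 100, 100, 46))  # -> 0.0
# Fully loaded, but running 2% below the global frequency limit:
print(clock_reduction_pct(4508, 100, 100, 46))  # -> -2.0
```

As the later posts note, the output is only trustworthy when C0 residency is close to 100%.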

The main problems with these custom sensors are:

- The order in which HWiNFO processes its own sensors.

- The fact that "Effective Clock" is not fully correlated with C-states; or rather, the clock numbers don't match C-state numbers the way they should. EC can even report 0.00000 MHz when a core is not 100% sleeping. As mentioned, HWiNFO and RM don't always agree on effective clocks either.



On a side-note: CTR messes with both HWiNFO's measurements and RM running at all.
 
#34 ·
On a side-note: CTR messes with both HWiNFO's measurements and RM running at all.
Yes, and it does enforce Package C6-states/DF C-states to be enabled.
* Also, Diagnose does a CPPC/ACPI/FIT core-layout remake/correction.

Positive: cores are given the capability to idle down to the lowest P-state = 550 MHz.
Negative: hibernation is broken on AMD, and wakeup causes overboost.

I hope we share the same wording:
Hibernation is hard suspension = 100% hibernation.
Sleep = sleeping cores, still parked at 0 MHz, yet using 0.8-0.9 V for it.

My 2nd CCD goes into sleep states and parks cores fully because no ACPI value was set for them, even when they are active & utilise around 16-18 A for themselves.

I think people can learn a thing or two from you, and from which community you are coming from :)
We should move it, yes.
The discussion already reaches very broad terms 😀
 
#36 ·
One reason being that effective clocks are heavily influenced by C-states. Any load less than 100% means that you are mostly measuring idle time instead of clock reduction. This is why my custom sensors try to calculate C-states out of the results.

Here is what I wrote on the HWiNFO forum concerning my custom sensors / effective clocks vs. load:

Unfortunately they only work properly for core load close to (90-)100%, because of the way HWiNFO calculates HLT/sleep C-states effective clocks. Even though HWiNFO already is more sensitive to C-states than RyzenMaster the listed effective clocks are still too high in linear relation to C-states (Effective Clock / C0 Residency = too high). This means that numbers higher than maybe 10% are bogus values on my sensors, which then messes with averages. It still demonstrates that effective clocks are lowered by C-states, though, which is something many people on forums don't seem to know/understand.

On the other hand HWiNFO tends to hit a minimum effective clock of 0.0000 MHz even when C0 residency never hits a minimum of 0.00000% (unless core parking is enabled). This results in my sensors sometimes hitting down to -100% due to HWiNFO's effective clocks claiming to be 0 MHz.

This is still useful for testing PBO settings under high load. Tests like Cinebench 23 can already mess a bit with averages, though, due to lower-load pauses between each run while C-states are enabled.
 
#37 · (Edited)
because of the way HWiNFO calculates HLT/sleep C-states effective clocks.
Are you sure that HWiNFO calculates effective clocks itself?
I'm asking because Zen has built-in functionality to make it simple. Moreover, in snapshot polling mode the calculation itself is already done by the SMU.
However, there are some caveats to be considered to ensure that the results are as expected in certain scenarios.
 
#38 ·
At least I know that HWiNFO displays different results than Ryzen Master, despite the latter surely making use of similar built-in functionality. And I know (from simple calculations of EC vs. C-states) that MHz results are not correct in relation to C-states and load for anything not close to 100% load.

That being said, I just noticed that HWiNFO does not seem to report 0.0000 MHz on my cores anymore; as a result, my sensors don't show a -100% minimum anymore. Minimum clocks look closer to what you get when Snapshot is disabled. So one of the later updates may have fixed something in that department. Need to check on that more thoroughly.
 
#44 ·
That being said, I just noticed that HWiNFO does not seem to report 0.0000 MHz on my cores anymore
Just noted, HWiNFO occasionally reported my Core 2 eff. clock as 0.000 MHz )).
I think it's just an invalid reading, for example when an APERF or MPERF overflow happened, idk.

Anyway, whoever is interested in effective clock interface, should read this before:

The effective frequency interface allows software to discern the average, or effective, frequency of a given core over a configurable window of time. This provides software a measure of actual performance rather than forcing software to assume the current frequency of the core is the frequency of the last P-state requested. Core::X86::Msr::MPERF is incremented by hardware at the P0 frequency while the core is in C0. Core::X86::Msr::APERF increments in proportion to the actual number of core clocks cycles while the core is in C0.

The following procedure calculates effective frequency using Core::X86::Msr::MPERF and Core::X86::Msr::APERF:
  1. At some point in time, write 0 to both MSRs.
  2. At some later point in time, read both MSRs.
  3. Effective frequency = (value read from Core::X86::Msr::APERF / value read from Core::X86::Msr::MPERF) * P0 frequency.
Additional notes:
  • The amount of time that elapses between steps 1 and 2 is determined by software.
  • It is software's responsibility to disable interrupts or any other events that may occur in between the Write of Core::X86::Msr::MPERF and the Write of Core::X86::Msr::APERF in step 1 or between the Read of Core::X86::Msr::MPERF and the Read of Core::X86::Msr::APERF in step 2.
  • The behavior of Core::X86::Msr::MPERF and Core::X86::Msr::APERF may be modified by Core::X86::Msr::HWCR[EffFreqCntMwait].
  • The effective frequency interface provides +/- 50MHz accuracy if the following constraints are met:
  • Effective frequency is read at most one time per millisecond.
  • When reading or writing Core::X86::Msr::MPERF and Core::X86::Msr::APERF software executes only MOV instructions, and no more than 3 MOV instructions, between the two RDMSR or WRMSR instructions.
  • Core::X86::Msr::MPERF and Core::X86::Msr::APERF are invalid if an overflow occurs.
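The documented procedure boils down to one ratio. A sketch of step 3 in Python, with made-up counter deltas (real code would read the MSRs with the fencing constraints listed above):

```python
def effective_frequency_mhz(aperf_delta, mperf_delta, p0_mhz):
    """AMD effective frequency interface:
    Feff = (APERF delta / MPERF delta) * P0 frequency.
    MPERF ticks at the P0 frequency while the core is in C0;
    APERF ticks in proportion to the actual core clock while in C0."""
    if mperf_delta == 0:
        raise ValueError("counters invalid (overflow, or core never in C0)")
    return aperf_delta / mperf_delta * p0_mhz

# Core boosting 25% above P0 over the sampling window
# (e.g. P0 = 3700 MHz base, effectively ~4625 MHz):
print(effective_frequency_mhz(1_250_000, 1_000_000, 3700))  # -> 4625.0
```

The zero-delta guard mirrors the overflow note above: when either counter is invalid, the ratio is meaningless.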
 
#39 ·
@Timur Born Give AMD uProf a try.
It should also log sensors based on AMD's API.

HWiNFO's code is "borrowed" and reverse-engineered.
Martin is not to blame *, but the program uses "unofficial" SMU code.
Ryzen Master does so too ~ yet information on both tools is classified.

* I mention this because many people contributed to HWiNFO,
and some of those people also have access to the official AMD API, which should never belong to the public.
Again, Martin is not to be judged for that, but not everything he uses for sensoring is based on his own research :)

Some call it "stolen" code; others call it a gift from close AMD™ workers & friends.
This remains a subjective judgement, yet Martin is not to blame for "accepting" unofficial, non-public code in his tool ~ when the goal is bringing AMD forward.
The same goes for other closed-source public tools out there.
 
#40 ·
And some of the many people also have access to official AMD API , which should never belong to the public
Martin is again not to judge for such, but not everything he uses for sensoring, is based upon his own research :)
Can you please expand on these sentences? What do they mean?

Have you ever used ATITool, ATI TrayTool, or the RBE Tool, just to cite a couple of them?
 
#45 ·
I also had 0.00000 MHz again, so no improvements there. Bad luck. Since we are usually not interested in idle/C-state "effective clocks" anyway, this isn't too much of a problem.
 
#46 · (Edited)
Here is the real problem why "effective clock" needs to be measured close to 100% CPU load to get better clock reduction measurements (apart from most users not knowing about the C-state part to begin with):

(HWiNFO screenshots attached.)


130 MHz / 0.0171 = 7602 MHz ?! Obviously not.

On a side-note: disabling all C-states allows even more precise measurement, because then we are at 100% C0 state. This does change the load behavior for anything but all-core load, though, and might also change CPU temperatures compared to using C-states (at lower than 100% load).
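The failure mode from the 130 MHz / 0.0171 example suggests a simple guard; here is one possible sketch in Python (the 90% residency threshold is my own choice, not anything HWiNFO does):

```python
def corrected_clock_mhz(effective_clock_mhz, c0_residency, min_residency=0.90):
    """Scale the effective clock back up by C0 residency to estimate the
    in-C0 clock, but refuse at low residency, where the division explodes
    (e.g. 130 MHz / 0.0171 = 7602 MHz, which is obviously nonsense)."""
    if c0_residency < min_residency:
        return None  # not enough load for a meaningful estimate
    return effective_clock_mhz / c0_residency

print(corrected_clock_mhz(130, 0.0171))  # -> None (mostly-idle core, bogus)
print(corrected_clock_mhz(4500, 0.98))   # -> ~4591.8 (near-full load, usable)
```

With C-states fully disabled, residency sits at 100% and the guard never triggers, which is exactly why that configuration measures more precisely.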
 
#48 ·
Many people say PBO works really well with the 5000-series CPUs; I do not have one, so I cannot say for sure. What I can say is that on my 3000-series CPUs a static overclock is by far the best for my needs.

Depending on the workload, a static OC might still be better for you. The best way to find out is to try both. An all-core OC might net you the performance you want with less noise & heat at the same time; or maybe PBO will prove to perform much better. But setting a manual OC is very quick, at least for a rough idea, and PBO can take a long time to really dial in.

For any mesh-style uarch CPU, memory latency is a huge deal. This will net your largest gains and smoothest operation, provided you do not push your Infinity Fabric too hard. From what I have seen, leave your IF voltages at auto and just bump the SoC to around 1.1-1.125 V; that should be good. I think 3800 MHz on the RAM with CL 14-14-14 is what most people are able to get stable without much effort on the 5000 series, but even if you just did 3600 CL 14-14-14 with your own sub-timing tune, you would be very happy with it. This way you should not run into any IF stability issues, and you can just have a nice-running system with great performance regardless of static OC or PBO. (Both are good options.)
 
#53 ·
And everyone is feeling like a fighter pilot these days.

Back when we used CRT screens I could identify flickering up to about 80 Hz if I watched from the edges of my eyes (peripheral vision). Some (!) birds of prey are said to be able to see 140 fps, and cockatiels are said to see 120 fps. Birds need this to avoid flying into things and to hunt small bugs in the air. Humans do not.

One study suggests that some people are able to discern image information (content!) within 13 ms = 77 fps. But even the 16 ms of 60 Hz is a relatively small number compared to the time most humans need to react to visual stimuli (over 180 ms), while we are faster with auditory stimuli. Training improves said reaction time, which brings us back to fighter pilots.
 
#57 ·
Whatever makes people "feel" better. In the end I assume it's mostly input latency, and even then the differences are very small; science disagrees with most gamers on the visual reaction thing. Many people cannot even hit a drum beat in time, despite the auditory system being quicker to react.