Originally Posted by psychohawk
I have recently upgraded a couple of things: first the CPU from a 1700X to a 3700X, then from an RX 590 to an RTX 2070. The biggest limiting factors had previously been memory compatibility and the GPU bottleneck from the RX 590, and I hadn't realized just how much of a bottleneck the GPU really was. There was virtually no change in gaming benchmarks at all, but the G.Skill Trident Z dual-rank, double-sided Samsung D-die 3200 kits I'd been running at 2800 for the past 18 months would now run at and above the advertised 3200 CL16, though only as far as 3333 before they capped out on me.
Point is, I got curious to see how well the 1700X could keep the GPU fed, so I popped it back into my C6H with BIOS 7306. It ran the RAM at the advertised 3200 with the 1700X and scored only a few thousand points behind the 3700X I own in Fire Strike.
On 1st gen I notice two quite annoying things - I wish it were coincidence, but I don't like that it's indeed replicable on my side.
Comparing P-state OC (all-core) to a basic all-core OC:
P-state OC with the XFR override and Cool'n'Quiet active actually gives better L2 and L3 cache latency...
while a normal all-core OC seems to suffer from (magically inserted) extra cache latency, i.e. lower IPC on its own.
In both cases, L2 and L3 bandwidth behave completely differently (not a C6H thing), and ZenStates overriding the Perf Bias mode makes quite some IPC difference across all gen 1 boards x_x
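To put rough numbers on that bandwidth difference independently of AIDA64, something like the sketch below can be run under each OC mode and compared. It's only a sketch, assuming Linux and gcc; the buffer sizes are picked for Zen's 512 KiB per-core L2 and 8 MiB per-CCX L3, and you'd pin it to one core (e.g. taskset -c 0) for stable numbers.

[CODE]
/* bw.c - rough single-threaded read-bandwidth probe for L2/L3-sized buffers.
 * A sketch, not a calibrated benchmark: no prefetch control, no core
 * pinning (run under `taskset -c 0`). Build: gcc -O2 -o bw bw.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Stream-read `bytes` of data `reps` times and report GB/s. */
static void read_bw(size_t bytes, int reps) {
    size_t n = bytes / sizeof(uint64_t);
    uint64_t *buf = malloc(bytes);
    memset(buf, 1, bytes);              /* fault pages in, warm the caches */

    volatile uint64_t sink = 0;
    uint64_t sum = 0;
    double t0 = now_sec();
    for (int r = 0; r < reps; r++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    double dt = now_sec() - t0;
    sink = sum;                          /* keep the loop from being elided */
    (void)sink;

    printf("%8zu KiB: %6.2f GB/s\n", bytes / 1024,
           (double)bytes * reps / dt / 1e9);
    free(buf);
}

int main(void) {
    read_bw(256 * 1024, 4096);      /* fits in Zen's 512 KiB L2 */
    read_bw(4 * 1024 * 1024, 256);  /* spills into L3 (8 MiB per CCX) */
    return 0;
}
[/CODE]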
I noticed it by accident, since there was always something fishy about people's P-state OCs behaving better than normal all-core OCs.
After looking at this for a while, I think it may be related to silicon stability at the same clock with the same voltage, with the board behaving differently under ZenStates' forced P-states and voltage overrides.
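For reference on what ZenStates actually forces: on Zen 1/Zen+ the P-state definitions live in MSRs 0xC0010064-0xC001006B, and ZenStates-style tools just rewrite the FID/DID/VID fields there. A minimal sketch, assuming Linux with the msr module loaded (`modprobe msr`) and root, that decodes P0 the same way the public ZenStates tools do:

[CODE]
/* pstate.c - decode the P0 P-state MSR (0xC0010064) on Zen 1/Zen+.
 * Field layout: CpuFid[7:0], CpuDfsId[13:8], CpuVid[21:14].
 * Build: gcc -O2 -o pstate pstate.c    Run: sudo ./pstate
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define PSTATE0_MSR 0xC0010064

int main(void) {
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    uint64_t val;
    if (pread(fd, &val, sizeof(val), PSTATE0_MSR) != sizeof(val)) {
        perror("pread");
        return 1;
    }
    close(fd);

    unsigned fid = val & 0xFF;          /* core frequency ID */
    unsigned did = (val >> 8) & 0x3F;   /* frequency divisor ID */
    unsigned vid = (val >> 14) & 0xFF;  /* core voltage ID */

    /* CoreCOF = FID / DID * 200 MHz; VID steps down from 1.55 V in 6.25 mV */
    double mhz  = did ? 200.0 * fid / did : 0.0;
    double volt = 1.55 - 0.00625 * vid;

    printf("P0: FID=%02X DID=%02X VID=%02X -> %.0f MHz @ %.4f V\n",
           fid, did, vid, mhz, volt);
    return 0;
}
[/CODE]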
I certainly see some difference in access time and "usable bandwidth" when comparing a constant all-core OC against a P-state OC that lets the clock drop (from e.g. 3.8 GHz down to 1.6 GHz).
^ Both are replicable on every gen 1 and gen 2 chip, while the Determinism Slider with the (AIDA64/CB15) Perf Bias mode lowers both latencies (L2 & L3) on gen 1 by at least 1 ns overall; not fully tested to the end yet, but it barely changes anything on gen 2.
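To check that ~1 ns delta without trusting AIDA64's own measurement, the standard trick is a dependent pointer-chase: each load's address comes from the previous load, so the prefetchers can't hide the latency. A minimal sketch, again assuming Linux/gcc and a pinned core:

[CODE]
/* lat.c - pointer-chase latency probe for L2/L3-sized working sets.
 * A sketch: single run, no huge pages; the random cycle defeats the
 * prefetchers. Pin with `taskset -c 0`. Build: gcc -O2 -o lat lat.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void chase(size_t bytes, long hops) {
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));

    /* Build one random cycle (Sattolo's algorithm) so every load depends
     * on the previous one and the access pattern is unpredictable. */
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % i;
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n - 1; i++) buf[idx[i]] = &buf[idx[i + 1]];
    buf[idx[n - 1]] = &buf[idx[0]];

    void **p = &buf[idx[0]];
    for (long i = 0; i < hops; i++) p = *p;   /* warm up */
    double t0 = now_sec();
    for (long i = 0; i < hops; i++) p = *p;   /* timed chase */
    double dt = now_sec() - t0;

    /* printing p keeps the chase from being optimized away */
    printf("%8zu KiB: %.2f ns/load (%p)\n",
           bytes / 1024, dt / hops * 1e9, (void *)p);
    free(idx);
    free(buf);
}

int main(void) {
    chase(256 * 1024, 50 * 1000 * 1000);       /* L2-resident */
    chase(4 * 1024 * 1024, 50 * 1000 * 1000);  /* L3-resident */
    return 0;
}
[/CODE]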
- Could you shed some light into that dark rabbit hole: what does the "CB15/AIDA64" Perf Bias mode actually change, would it be possible to integrate it into BIOSes as a switchable mode (just what it changes, and where), and what is different between the Determinism Slider Perf/Power settings and the Perf Bias profiles?
Using it and comparing gen 1 & 2 clock for clock and latency for latency, they appear equal.
Meanwhile, with The Stilt's OC3 at 4.3 GHz via P-state/PBO compared to an all-core OC, the XFR/P-state OC is again faster than the same-clock all-core OC (in single-threaded L2 & L3 cache testing).
Ignoring the higher silicon stability for higher potential OC, and ignoring a possible LLC override between boards with this method:
> Why does XFR P-state OC (without ZenStates, just via AMD CBS) lead to better cache access times and higher bandwidth (margin of error?)
> What's up with these Performance Bias profiles in ZenStates? Why is there such a big difference in actual performance with and without them?