Key point: with AMD adjusting the BIOS settings there is no loss in performance, it just does not look as good for AMD with smaller numbers than Intel. People need to get over the fact that Intel has bigger numbers; AMD's performance with the smaller numbers is better than or matched with Intel in applications. In games Intel can be better, just because games have pretty much forever been made for Intel CPUs.
Anyway, I agree with Nighthog about AMD lowering BIOS settings. They probably need to for longevity. The CPUs are not hitting the same speeds as when first released with the early BIOS.
I was playing a game on my Ryzen 3600X with a beta BIOS and my CPU voltage was hitting 1.57 V with a boost of 4525 MHz; now with the newest BIOS I can only hit 4250 MHz and never go past 1.5 V at idle.
No loss in performance, just number changes. I am fine with that, but it does not look good for AMD.
I remember AMD's Robert was gung-ho earlier in saying the 1.55 V boost is intended and safe. Now they are lowering it silently, and we'd never have known if not for the BIOS wizards' investigation...
Always selling a dream..
I don't recall ever seeing higher voltages under 18.104.22.168 than 22.214.171.124
Which AGESA and motherboard gave you that 1.550 V boost on your 3600X? Do you run it like that daily, or was it just a short test?
I'm considering using an offset to get 1.550 V for max boost, even though it's not 100% optimal, for some testing of my own to see what it gives.
I've noted my 3800X will clock higher with more voltage under PBO, but Cinebench R20/R15 got worse performance. Other stuff improved, though.
I'm starting to strongly suspect CB R15/R20 have "performance profiles" with a power and clock target. Adding an offset just skews and messes that up. Other software doesn't seem to be limited like Cinebench with PB or PBO settings and boosts more freely.
I tested a +0.0500 V offset briefly, but performance just got worse compared to no offset at all. So we can't use PBO + offset to increase clocks; it looked like it clocks lower instead, and all the benchmarks got worse.
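One way to tell whether this is real clock loss or clock stretching is to compare the reported clock with the effective clock. On x86 the APERF/MPERF counter pair is what tools like HWiNFO base their "effective clock" on; a minimal sketch of the arithmetic (the counter values below are made up for illustration, not real readings):

```python
# Effective frequency from APERF/MPERF deltas:
#   effective_mhz = base_mhz * d_aperf / d_mperf
# MPERF ticks at the base (P0) clock while APERF tracks cycles at the
# core's actual speed, so the ratio exposes clock stretching that the
# reported multiplier hides.

def effective_mhz(base_mhz, aperf_start, aperf_end, mperf_start, mperf_end):
    d_aperf = aperf_end - aperf_start
    d_mperf = mperf_end - mperf_start
    return base_mhz * d_aperf / d_mperf

# Made-up sample: the sensor reports a 4400 MHz boost bin, but the
# counters say the core only averaged 4100 MHz over the interval.
base = 3800  # 3600X base clock in MHz
print(effective_mhz(base, aperf_start=0, aperf_end=4_100_000,
                    mperf_start=0, mperf_end=3_800_000))  # -> 4100.0
```

If the effective clock tracks well below the reported one under an offset, the chip is stretching rather than actually running faster.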
This is exactly what I am doing with my 3600X and it's working great for my setup:
CPU vcore in offset mode, +0.025 V.
I have tweaked LLC as follows:
CPU, CPU NB and vDIMM current set to Enhanced
CPU, CPU NB set to LLC level 6 (second weakest)
CPU, CPU NB switching frequency 1000 MHz, RAM switching frequency 625 MHz
PBO 10x and 'Performance Boost' enabled --> +75 MHz
I'm seeing the following in HWiNFO64 (see image):
Core 0 - 4450 MHz
Core 1 - 4450 MHz
Core 2 - 4425 MHz
Core 3 - 4475 MHz
Core 4 - 4425 MHz
Core 5 - 4475 MHz
Under a Prime95 AVX2 small-FFT load I'm holding 4100 - 4125 MHz all-core, 92 °C max temp @ 1.36 - 1.37 V.
R20 holds at 4150 - 4200 MHz all-core, 74 °C max temp @ 1.38 - 1.40 V.
+0.0250 V, which I had tried before, gave some improvements, but Cinebench was worse.
I was expecting more voltage to give more, but it wasn't so. It seems to behave the same as the negative offsets people have used to lower temperatures and boost higher, with the implied clock stretching.
The issue was I had no problem with temperature and was hoping the extra voltage would give higher clocks overall, but it barely budged and actually gave worse performance when pushed too far.
I think somewhere between +0.006 V and +0.0250 V might work for individual cases, but it's not a universal fix. The boost algorithm isn't made for you to alter the applied voltage, and it performs worse if you stray too far from the stock targets.
The solution would be an adjustable max voltage setting for PBO, but I don't see AMD giving us that potential to brick/fry more CPUs.
You can read through my testing here. https://community.amd.com/message/2927615
But to summarize: I ran my 3900X in Cinebench R20 and, like so many others, noticed a top clock around 4.5 GHz with Precision Boost and an applied voltage of about 1.48 V. The Cinebench R20 single-thread score was 509 in that setup. So I was curious whether it really was the silicon or the boost itself.
In UEFI, I disabled "Core Performance Boost" altogether so the boost algorithm was taken completely out of the equation. Using Ryzen Master, I then set just my fastest core to 4.6 GHz, leaving the others at 3.8 GHz. I upped the vcore to 1.48 V to simulate exactly the voltage and clock-speed thresholds used by PB. Using Process Lasso, I then set Cinebench to use only the fastest core. I did this partly to prevent core jumping, but also so that if I accidentally hit the all-core test, it wouldn't launch with a 1.48 V vcore.
The result, in Ryzen Master, was that 4.6 GHz was hit and maintained for the entire test. Not only that, but the score improved to 524 in Cinebench R20, indicating this isn't just a clock-stretching phenomenon. The silicon can do those clock speeds, and what's more, it can do them with the voltages PB is already applying. It's just that when PB is allowed to manage the boosting, it tries to use vastly more voltage than necessary.
In the first part of my test, I set my vcore to 1.3 V (the maximum I saw PB use on an all-core boost) and then overclocked the CCXs manually using Ryzen Master. I set CCX0 of CCD0 (which contains the fastest core) to 4.5 GHz and ran the Cinebench R20 single-threaded test, again using Process Lasso to tie the thread to core 02. I got a score of 508. This is virtually identical to the 509 I got with Precision Boost, which I would expect since the clock speed was 4.5 GHz in both cases. The difference was, I did it with 1.3 V manually, while PB felt the need to apply 1.48 V to do the identical amount of work.
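Process Lasso handles the pinning on Windows; for anyone wanting to reproduce this kind of single-core test on Linux, the standard-library equivalent is `os.sched_setaffinity`. A small sketch (the core index is just an example; pick your own fastest core):

```python
import os

# Pin the current process (pid 0 = self) to a single core before
# launching the benchmark. Child processes inherit the affinity mask,
# so the scheduler can't bounce the workload between cores mid-run.
target_core = 0  # substitute your fastest core's index here
os.sched_setaffinity(0, {target_core})

print(os.sched_getaffinity(0))  # -> {0}

# Anything started now, e.g. via subprocess.run([...]), inherits the
# single-core mask, mirroring what Process Lasso does on Windows.
```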
So it seems the algorithm itself is to blame here; it applies vastly more voltage than is needed for the amount of work being asked. Now, it is possible some other internal limit is at play. Voltage in and of itself does not generate heat; only when current flows to do actual work does heat come into play. Maybe at 7 nm there is a per-core power limit, and PB regulates the frequency at the boost voltage to ensure that limit is maintained, while the limit is ignored during manual overclocking. Hard to say for sure, but I can say that the voltage being applied today is good enough to hit the boost clocks on my chip; Precision Boost just won't go there.
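The voltage gap matters more than it looks, because dynamic power scales roughly with the square of voltage (P ≈ C·V²·f). A quick sanity check on the two operating points above, assuming capacitance and clock are held equal:

```python
# Dynamic CMOS power scales as P ~ C * V^2 * f. At the same 4.5 GHz
# clock, the only term that differs between the two runs is V^2.
v_pb = 1.48      # volts Precision Boost applied
v_manual = 1.30  # volts that were actually sufficient

ratio = (v_pb / v_manual) ** 2
print(round(ratio, 2))  # -> 1.3
```

In other words, under this simple model PB is burning roughly 30% more dynamic power per core for the same score, which is consistent with the idea that a per-core power or current limit, rather than the silicon, is what caps the boost.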
I'm sure you appreciate that different benchmarks tax the system in different ways, some more aggressively than others.
What is stable running Cinebench won't necessarily be stable running workloads that use AVX2/FMA3, at least not at the same clock speed and voltage.
There are many complexities involved in achieving a perfect boost algorithm for these CPUs.
When you consider the number of variables that have to be taken into account to implement an efficient power-management infrastructure for the CPU, I for one am quite satisfied that they are doing the best they can, and that we will continue to see incremental improvements to the boost behaviour.
I can empathize with the fact that creating UEFI additions that maintain correct boost across three Ryzen generations (which all boost differently) isn't easy, and that the kinks may still need to be worked out. I was merely posting my experiences to indicate that the algorithm is likely the problem, as opposed to the silicon.
Sorry, I should have been clearer; yes, I understood your point 100%.
I just should have said that the manner in which you were testing does not really prove that the algorithm is not working properly.
It only shows that it may not be working properly for a workload similar to Cinebench, as per your testing.