edit:
################################################
See the easy-to-use Linux script for RDNA3 in this post (ephemeral mod version; mods go away on full PC power off):
And this post for the RDNA3 permanent mod script version:
https://www.overclock.net/posts/29475543/
For the RDNA4 ephemeral/permanent mod scripts, check this post:
https://www.overclock.net/posts/29481571/
For the RDNA4 clock boost mod for Linux (brings back min/max clock settings), check this post:
################################################
I made this thread as a continuation of this post:
It's meant for discussing the procedure, findings, and questions around the subject.
What it does:
Fully removes the power limit from RDNA3 cards, provided your card has MP2856/7 controllers. I think I've seen only one RDNA3 card that uses different controllers.
edit: Should work to increase the RDNA4 GFX TDC limit (hasn't yet been tested/confirmed). VID offsets work for all rails on RDNA3 and have been confirmed on RDNA4.
How:
Via Linux at the moment. You can also use a Linux live USB boot, no need for a Linux install. You just need a live USB environment that has both the i2c-tools package and the i2c-dev kernel module loaded. I tried a CachyOS live USB and it fits the bill. Use your preferred distro.
The mod is persistent across reboots but goes away on power off. So you need to reapply it when first powering on your PC.
There is a theoretical way to make the change permanent, but it hasn't yet been tried.
Everything about this is risky, so don't do anything until some simpler/safer method comes along.
If you can live with bricking your GPU then read on.
How the RDNA3 power limit is enforced:
The GPU uses the current that the Vgfx voltage controller reports for the main Vgfx rail to apply the TDP/TDC limit. For my Hellhound 7800XT there's a 220A TDC limit.
When the voltage controller reports reaching this current, the GPU algorithm throttles the power.
The value you have to alter, mentioned in the above link, is a calibration value for the voltage controller's output current. Altering this value makes the controller report a lower (or higher, but that's not useful) current value to the GPU, so the GPU throttles at a higher real current, and thus higher power.
What value should you use instead of stock one?
There are two ways of figuring out what value to use instead of the stock one: the extreme overclocking value, which should only be used with a waterblock, and the stock cooling solution value, which should be safe(r) for a stock card (!).
Extreme overclocking:
For coming up with a value for extreme overclocking, you'd want to match GPU TDC limit to your main power rail current limit.
For example, my card has a stock 220A limit, and has 8 phases at 70A each for the main GFX rail. That means an absolute maximum of 560A total. The ratio between max VRM current and the GPU-enforced limit is x2.54. You'd have to lower the calibration value by this factor, such that when the controller reports 220A to the GPU and the GPU starts applying the TDC limit, the real current is actually 560A.
In my case, this stock calibration value is 0x219A. In binary that's 0b0010000110011010. The calibration value is in the low 11 bits (bit 0 to bit 10): 0b00110011010, which is 410 in decimal.
Applying the 2.54 conversion factor to 410 works out to 161, or 0b10100001 in binary. Ideally pad it with leading 0's so it's 11 bits long: 0b00010100001.
Now we rebuild the whole word: the top 5 bits 0b00100 followed by the new calibration value gives 0b0010000010100001. Converting this back to hex, 0x20A1 is the new value we need to write to the controller.
With this value, the GPU TDC limit will be applied when the controller reports 560A output on the GFX rail. This is the most extreme version. Maybe raise the value a bit so you don't hit your VRM limits, even with a waterblock.
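The bit manipulation above can be checked step by step in Python (the stock word 0x219A and the 2.54 factor are from my card; yours will differ):

```python
# Step-by-step check of the 0x219A -> 0x20A1 calculation described above.
stock = 0x219A                 # stock calibration word read from the controller
cal = stock & 0x7FF            # calibration value lives in bits 0-10
print(cal)                     # -> 410
upper = stock >> 11            # remaining top bits, 0b00100 on this card
print(bin(upper))              # -> 0b100
new_cal = int(cal / 2.54)      # scale so 560A of real current reads as 220A
print(new_cal)                 # -> 161
new_word = (upper << 11) | new_cal
print(hex(new_word))           # -> 0x20a1, the waterblock value to write back
```

The top 5 bits are preserved untouched; only the 11-bit calibration field is scaled down.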
For this scenario, you'd also want to adjust the current gain offset for all the other rails. The driver has a max of 263W for TDP, which is the TBP value in HWinfo. That is made up of all 5 rails' power added together.
The limits are TDC/TBP/temperature, whichever comes first. Usually TBP comes first, but it's averaged out, so higher TBP peaks are allowed as long as the average stays at 263W max. TDC is a hard limit, same as temperature.
Vsoc+Vddc_usr+Vmem+Vddci usually work out to around 80-100W total, maybe more in some scenarios. These eat into the 263W total, so you're left with ~150W into which you have to squeeze all the TDC current you want the GPU to draw. Without adjusting the other rails' current offsets, you'd have to scale the Vgfx current gain by a greater factor: 150W fits ~160A max, so you'd use 560/160 = 3.5 instead of 2.54 to max out your VRM current while keeping stock current gain values (and thus stock power reporting) for the other 4 rails.
This way, when the GPU draws 560A, that plus the normal power reporting for the other 4 rails works out to at most 263W as far as the GPU algorithm is concerned, so it doesn't TDP throttle until you hit 560A of current draw.
So altering the 4 small rails' current gain values won't make them draw more current; it frees up budget so that Vgfx can draw more current.
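The 3.5 factor can be sketched with the round numbers above. The ~0.94V effective Vgfx voltage is my assumption, implied by "150W fits ~160A max"; it isn't stated in the post:

```python
# Budget arithmetic behind the 3.5 factor, using the post's round numbers.
tbp_limit_w = 263            # driver TDP limit (the TBP value in HWinfo)
gfx_budget_w = 150           # roughly what's left for Vgfx after the other 4 rails
vgfx_v = 0.94                # ASSUMED effective Vgfx voltage under load
gfx_budget_a = gfx_budget_w / vgfx_v
print(round(gfx_budget_a))   # ~160A fits in the 150W slice

vrm_limit_a = 560            # 8 phases x 70A on the author's card
factor = vrm_limit_a / 160
print(factor)                # -> 3.5, the gain factor needed if the other rails stay stock
```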
Stock cooling:
For figuring out a value for stock cooling, you have to try a few values that are lower than stock but higher than the most extreme one (the VRM limit). Put the heaviest load on the GPU and watch your temps; if you want more, lower the value and test again.
For my GPU that worked out at a scaling factor of 1.404. So 410/1.404=292.
Converting 292 to binary gives 0b100100100; pad it to 11 bits with leading 0's: 0b00100100100, then build the whole 16-bit value with the leading 0b00100: 0b0010000100100100.
Converting to hex: 0x2124.
For a sanity check let's look at all three:
Stock - 0x219A
Max with stock cooling - 0x2124
Max with waterblock - 0x20A1
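The three values in the sanity check can be reproduced with a small helper; the 410 stock calibration and the two factors are the card-specific numbers worked out above:

```python
# Reproduce the three calibration words from the sanity-check list above.
def build_word(upper_bits: int, cal: int) -> int:
    """Assemble a 16-bit word: 5 top config bits + 11-bit calibration field."""
    return (upper_bits << 11) | (cal & 0x7FF)

stock_cal = 410   # low 11 bits of the stock word 0x219A
upper = 0b00100   # top 5 bits of the stock word, preserved unchanged

print(hex(build_word(upper, stock_cal)))              # stock -> 0x219a
print(hex(build_word(upper, int(stock_cal / 1.404)))) # stock cooling -> 0x2124
print(hex(build_word(upper, int(stock_cal / 2.54))))  # waterblock -> 0x20a1
```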
You may be able to do a bit more than x1.404 with better thermal paste on stock cooler.
Here's a max stress test with this new current gain value, with the HWinfo GFX current/power values adjusted by the x1.404 multiplier:
Stock 220A x 1.404 = ~309A new TDC limit. Under full load it sits just below 300A of current draw, with the hotspot at 100C. This is the max I'd go with stock thermal paste.
Graphics power is somewhere around 374W. Looking at the total board power value, there's around a 43W difference vs total graphics power; the total graphics power value is not correct in HWinfo, but the difference between TGP and TBP is. Interestingly, TGP reads 262W; the value is wrong, but it coincidentally sits right at the driver's 263W TDP limit, where it's throttled.
On Linux, the driver allows for 280W max power draw, so using the same 1.404 factor in this example the driver will allow more current and will probably max out at the new 309A TDC limit.
So the real TBP comes to 374+43 = 417W. Quite a bit more vs the stock config, and still manageable by stock cooling.
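The stress-test numbers above cross-check like this:

```python
# Cross-check the stress-test arithmetic above.
new_tdc = 220 * 1.404     # stock TDC limit times the chosen gain factor
print(round(new_tdc))     # -> 309A effective TDC limit

real_tbp = 374 + 43       # adjusted graphics power + TGP/TBP difference
print(real_tbp)           # -> 417W real board power
```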
And that's it: once you write this value to the controller, the new limit is active. The GPU will draw more power and deliver more performance.
There are 4 other rails (Vsoc/Vddc_usr/Vmem/Vddci) that also have current gain calibration values, but they never reach the per-rail max current limits set in the GPU, so there's no need to alter their values.
Altering this value will mess with the current (and thus power) reporting for the GFX rail, which creeps into the TBP and TGP power reporting values.
As a partial fix you can apply your offset multiplier (2.54 or 1.404 in the above examples, or whatever custom value you end up using) in HWinfo for the GFX current and power readings, which brings them back to real values. Total power reports will still be off though.
I ran two benchmarks under the same conditions: one with the stock controller value, the other with the 2.54 multiplier and the HWinfo GFX current and power values adjusted.
This is the stock current gain value:
And with the current gain value altered by the 2.54 multiplier, and current and power reporting adjusted by the same multiplier:
Pretty much the same thing: same scores, same averages, current/power for GFX works out the same, but TGP/TBP/GPU Power Maximum are off.
I had a low clock limit so the GPU doesn't TDP/TDC throttle during the benchmark, which is why the modification doesn't show more performance here.
As I mentioned in the beginning, there's a theoretical way of making the change permanent (and of "permanently" reverting back to the stock value) by issuing an I2C write command to the controller.
There's no official datasheet for the controllers used on RDNA3 cards, but I found the public datasheet for a similar controller, the MP2886A. A few of the addresses seem to match our controllers; I tried the "clear errors" command and it worked.
RDNA3 controllers have a few "RAM"-like pages: page 0, page 1 and page 2. Page 0 holds live data reporting and active settings for loop 1, page 1 the same for loop 2, and page 2 holds config data for both loops.
They also have an internal EEPROM where the defaults are saved. At startup, data from the EEPROM is copied to pages 0, 1 and 2.
The current gain offset modification alters the stock value on page 2 and goes away on power off; on the next startup the page is repopulated with the default value stored in the EEPROM.
This command would theoretically take the data on pages 0, 1 and 2 and write it to the EEPROM as new defaults, which would then be restored on the next startup.
This could be risky and may brick your GPU!
I'm mostly on Linux as my daily driver, so I just have a script that sets the value on startup. For Windows users it might be a pain to boot into a Linux live USB on every power-on.
Or wait for some Windows software to pop up that sets the current offset value, so there's no need for the permanent modification.
Next up, there's another way of tweaking your GPU's performance: VID offsets for each of the 5 rails. This can also be useful. And risky!
This is the list with addresses of interest for mods:
Current gain offset:
Vgfx: controller 0x22, page 0x02, address 0x08
Vsoc: controller 0x24, page 0x02, address 0x08
Vddc_usr: controller 0x24, page 0x02, address 0x18
Vmem: controller 0x26, page 0x02, address 0x08
Vddci: controller 0x26, page 0x02, address 0x18
VID offsets:
Vgfx: controller 0x22, page 0x00, address 0x23
Vsoc: controller 0x24, page 0x00, address 0x23
Vddc_usr: controller 0x24, page 0x01, address 0x23
Vmem: controller 0x26, page 0x00, address 0x23
Vddci: controller 0x26, page 0x01, address 0x23
VID values are also two bytes, with only bits 0 to 8 (so 9 bits) used, at 5mV/LSB.
@hellm explains negative offsets here:
www.igorslab.de ("AMD - RED BIOS EDITOR und MorePowerTool - BIOS-Einträge anpassen, optimieren und noch stabiler übertakten | Navi unlimited")
With the mention that, as far as I can tell, positive VID values translate to a positive voltage offset for all rails on RDNA3 controllers. Also note that the RDNA4 rails/addresses are different from RDNA3.
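For reference, here's the address table above as a small Python sketch, together with the i2c-tools commands a read would use and a positive-only VID offset encoder (negative encoding is covered in hellm's article). Two things are my assumptions, not stated above: the I2C bus number (find yours with `i2cdetect -l`), and that pages are selected through the standard PMBus PAGE register at 0x00. Double-check against the linked scripts before writing anything to your card.

```python
# Address map from the post: (controller address, page, register).
CURRENT_GAIN = {
    "Vgfx":     (0x22, 0x02, 0x08),
    "Vsoc":     (0x24, 0x02, 0x08),
    "Vddc_usr": (0x24, 0x02, 0x18),
    "Vmem":     (0x26, 0x02, 0x08),
    "Vddci":    (0x26, 0x02, 0x18),
}
VID_OFFSET = {
    "Vgfx":     (0x22, 0x00, 0x23),
    "Vsoc":     (0x24, 0x00, 0x23),
    "Vddc_usr": (0x24, 0x01, 0x23),
    "Vmem":     (0x26, 0x01, 0x23) if False else (0x26, 0x00, 0x23),  # Vmem: page 0
    "Vddci":    (0x26, 0x01, 0x23),
}

BUS = 1  # HYPOTHETICAL bus number; check `i2cdetect -l` on your system

def read_cmds(ctrl: int, page: int, reg: int) -> list[str]:
    """i2c-tools command lines to select a page and read a 16-bit word register."""
    return [
        f"i2cset -y {BUS} {ctrl:#04x} 0x00 {page:#04x}",  # page select (ASSUMED PMBus PAGE reg)
        f"i2cget -y {BUS} {ctrl:#04x} {reg:#04x} w",      # word read
    ]

def vid_code(offset_mv: int) -> int:
    """Encode a positive VID offset: 5mV/LSB, only bits 0-8 used."""
    return (offset_mv // 5) & 0x1FF

for line in read_cmds(*CURRENT_GAIN["Vgfx"]):
    print(line)
print(hex(vid_code(25)))   # +25mV -> 0x5
```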
LLC control might be possible on RDNA3; at the moment it's unknown. LLC and VID offsets should work on RDNA4 using this method, but not power limit removal; that is enforced in a different way on RDNA4 and needs a hardware modification to remove.
Discuss, ask questions, post results. Do mention if you spot any mistakes in any of the details.