Yeah, even though I lost performance overall versus the Waterforce, I love the integration of the RGB into my ASUS Dark Hero, and that the block is fully maintainable. The Waterforce block cannot be taken apart without ruining the cosmetics by prying off the aluminum + plastic cover that they for some reason put over the block screws.

Once the new pads and paste are on, I plan to do another high-wattage run with the new voltage adjustment options that Weleh shared. But after that I plan to set the daily driver nice and low, to reduce overall wattage and not turn my office into a sauna.
I think the Waterforce issue of covering the screws is not insurmountable. All of the screws are visible, so drilling in to gain access to them should be possible. If done cleanly it wouldn't look much worse than new. It's aggravating that they did it, but it could be worse: they could have used opaque materials and left us hunting for the screws.
 
Open question: how many of you guys undervolt your GPU/CPU combo here?

Given the environmental and monetary cost of electricity, I've come to the conclusion that watercooling combined with undervolting is the ultimate solution for me. Having recently undervolted my 5600X and 6900XT, I was able to shave off 160W, from 535W down to 375W at peak. The PC runs more efficiently overall thanks to the extremely low temps: the CPU and GPU never go past 50C now, and the GPU junction stays under 70C in stress tests. Both are roughly 4% faster than at stock. Surely there's room to gain another 6% of performance on both the CPU and GPU, but it would cost 160W, or in other words 30% more power. The little green man in me says I should think about the environment.
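
For anyone curious, the raw savings work out like this (a quick Python sketch of the arithmetic; the hours-per-day and electricity tariff are hypothetical placeholders, only the wattages come from my measurements):

```python
# Back-of-the-envelope math on the undervolt (wattages from above; the
# usage hours and tariff are made-up placeholders, adjust for your case).
stock_w, uv_w = 535, 375
saved_w = stock_w - uv_w                 # 160 W shaved off the peak draw
saved_pct = saved_w / stock_w * 100      # ~30% of the stock peak

hours_per_day = 4.0                      # hypothetical gaming load per day
eur_per_kwh = 0.20                       # hypothetical electricity tariff
kwh_per_year = saved_w / 1000 * hours_per_day * 365   # ~234 kWh
eur_per_year = kwh_per_year * eur_per_kwh             # ~47 EUR

print(f"{saved_w} W saved ({saved_pct:.0f}% of peak), "
      f"~{kwh_per_year:.0f} kWh and ~{eur_per_year:.0f} EUR per year")
```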

This is not really a monetary issue, but it makes me wonder whether I really need those 3-5 fps more for 160W. Quite frankly, I didn't notice any performance difference in the games I tried. I played them all as normal, and if there were stutters before, there were equally stutters in the same places; where the fps was high, it was pretty much exactly the same while using 160W less power. Additionally, I've recently dropped the internal resolution to 90% in games and upscaled that to 4K, just to reduce GPU load and save watts, because I can't see the difference on my 65" 4K screen between upscaled 1944p and native 2160p. The only differences I see are slightly smoother fps and 30-40W less power consumption. I get a strange satisfaction out of this!
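
The 90% render scale actually saves more than it sounds, because the scale applies to both axes; a quick sketch of the pixel math:

```python
# Pixel-count arithmetic for 90% render scale at 4K.
native_w, native_h = 3840, 2160
scale = 0.90
internal_w, internal_h = int(native_w * scale), int(native_h * scale)  # 3456 x 1944

ratio = (internal_w * internal_h) / (native_w * native_h)
print(f"Internal resolution: {internal_w}x{internal_h} ('1944p')")
print(f"Pixels shaded vs native 4K: {ratio:.0%}")  # 81%, i.e. ~19% fewer pixels
```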

My thinking is that I will surely keep buying the latest and greatest parts going forward, such as a 7900XT MCM GPU and a V-Cache 6900X CPU, but I'd like to put them under water and heavily undervolt them for maximum efficiency. My goal is higher-than-stock performance with at least 20% less wattage.

I hope undervolting combined with an overclock is still a suitable topic for overclockers - it is a form of OC, after all.
I think there's nothing wrong with having environmental awareness and still enjoying building, gaming on, OC'ing and benching a nice system. On the contrary, kudos for bringing it up - not least because power consumption is one side of the performance coin, and the related cooling setup the other... the more peak watts (ultimately a heat-energy parameter), the more cooling you have to throw at it.

As to undervolting, given the way newer CPUs and GPUs work with boost algorithms that may just claw back some of the 'savings' automatically, you may also want to consider limiting PL / max EDC/TDC. That is my primary tool for keeping things reasonable... that said, I'm no saint, though many of the dozen or so systems in my 'home office' are also used for work functions (including dev and backup servers, firewalls and the like). Electricity costs are low here (plus 97% of our power in BC comes from hydro), but still, it ain't free.
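
On the GPU side, if you happen to run Linux, amdgpu exposes the board power limit through hwmon, so the cap can be scripted. A minimal sketch only, assuming a single card at card0 and root privileges; on Windows the same knob is the Adrenalin power-limit slider or MorePowerTool, and CPU PL/EDC/TDC limits live in the BIOS PBO settings or Ryzen Master:

```python
# Minimal sketch: lower an AMD GPU's board power cap via the amdgpu hwmon
# interface on Linux. power1_cap is in microwatts. Assumes card0 and root.
from pathlib import Path

def set_gpu_power_cap(watts: int, card: str = "card0") -> None:
    for hwmon in Path(f"/sys/class/drm/{card}/device/hwmon").iterdir():
        cap_file = hwmon / "power1_cap"
        if cap_file.exists():
            max_uw = int((hwmon / "power1_cap_max").read_text())
            target_uw = min(watts * 1_000_000, max_uw)  # never exceed the board max
            cap_file.write_text(str(target_uw))
            print(f"power cap set to {target_uw / 1e6:.0f} W")
            return
    raise RuntimeError("no amdgpu hwmon power1_cap found")

set_gpu_power_cap(230)   # e.g. trim the daily-driver profile to 230 W
```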

I mentioned before that I used to do sub-zero benching back in the day, and there were times when the total power consumption of a bencher with Quad-SLI and hyped HEDT exceeded 4000W via four linked PSUs... once you get into exotic cooling, custom BIOS/vBIOS and so forth, power consumption skyrockets. The good thing is that I still have a host of hi-po PSUs which, in their new 'daily' setups, barely run at 50% these days... I haven't bought a new PSU for my systems since 2015...

My most power-hungry current system is a TR 2950X (OC'ed) with 2x 2080 Ti Waterforce Extreme GPUs, usually running in SLI-CFR. Even on the stock vBIOS those cards max out at 380W each, so 760W total peak for the GPUs alone. By the time all is said and done, with all peripherals included, it can hit as high as 1150W total. Reducing the PL on the GPUs a bit and moving the CPU OC down a notch saves about 200W without any noticeable reduction in perceived performance, i.e. smooth gameplay. It of course does show up in benching, but that is now a private fun thing rather than record chasing at HWBot.

The latest crop of CPUs and GPUs I run for my home office (3950X / 6900XT; 5950X / 3090) are all much more power-efficient, including via the built-in algorithms. It makes sense to exploit those features in daily settings, unless you take a break and reset things for some benching or demanding gaming.
 
I haven't read everything, but XTXH GPUs/dies, like on the LC Sapphire Toxic EE, can be flashed to the reference liquid RX 6900 XT for increased performance due to lesser VRAM timings but higher frequency?
Is 18 Gbps then possible? Is that memory physically the same as the 16 Gbps memory?
Too bad you have to hassle with Linux.
 
It seems to work with most, but it doesn't work correctly with the Red Devil Ultimate for some reason.
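
For reference, the flash itself is the usual backup-then-program sequence with amdvbflash on Linux. A hedged sketch only - the adapter index, ROM filename and exact flag spelling are assumptions (check `amdvbflash -h` on your build), and flashing the wrong image can brick the card:

```python
# Hedged sketch of the typical amdvbflash sequence (flags vary by build;
# verify with `amdvbflash -h` first). Run as root; a bad flash can brick
# the card, so the backup step is not optional.
import subprocess

ADAPTER = "0"  # index reported by `amdvbflash -i`
subprocess.run(["amdvbflash", "-i"], check=True)                               # list adapters
subprocess.run(["amdvbflash", "-s", ADAPTER, "stock_backup.rom"], check=True)  # save the stock vBIOS
subprocess.run(["amdvbflash", "-f", "-p", ADAPTER, "ref_lc_6900xt.rom"], check=True)  # force-program new image
```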
 
^ Thanks, I'm not afraid to experiment somewhat, but it would comfort me if someone with the same card had done it with success.
 
Yeah, even though I lost performance overall versus the Waterforce, I love the integration of the RGB into my ASUS Dark Hero, and that the block is fully maintainable. The Waterforce block cannot be taken apart without ruining the cosmetics by prying off the aluminum + plastic cover that they for some reason put over the block screws.

Once the new pads and paste are on, I plan to do another high-wattage run with the new voltage adjustment options that Weleh shared. But after that I plan to set the daily driver nice and low, to reduce overall wattage and not turn my office into a sauna.
Just don't forget to let the new pads "cure", even though it's not like how paste settles. You should break them in slowly, then let them rip. If you go 0-100, the way they are made chemically is the reason they get brittle and dry out. The only reason I know is that I've changed about 200 of my GPUs' pads throughout my adventures in mining... and DON'T WORRY, I only use Nvidia cards to mine, with the exception of a few 5700 XTs and Radeon VIIs. :LOL:
 
Interesting info on the break-in period. Gives me Formula 1 tire break-in vibes.

What do you recommend for break-in temps/period?
 
So quite a few posts back I was troubleshooting TDR/crashing in a few DirectX 11 titles... I picked up a few notes that others might be interested in if you are looking at sensor data on these cards:
  1. HWiNFO64 "GPU Memory Usage" reporting 17-70+ GB of memory usage: the HWiNFO author confirmed this is a bug in AMD ADL, and AMD has confirmed it and is working on a fix.
  2. Memory clocks reading nonsensical values (3000+ MHz): this is confirmed in AMD's release notes for the latest optional driver as "Radeon performance metrics and logging features may intermittently report extremely high and incorrect memory clock values."
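
In the meantime, if you log these sensors to CSV, a crude plausibility filter keeps the buggy samples out of your graphs. Just a sketch - the field names and thresholds below are my own invention, not any tool's API:

```python
# Sketch: drop implausible sensor samples caused by the two driver bugs above.
# Field names and cutoffs are invented for illustration.
def plausible(sample: dict) -> bool:
    # Bug 1: ADL sometimes reports 17-70+ GB of VRAM "usage" on a 16 GB card.
    if sample.get("vram_used_gb", 0.0) > 16.0:
        return False
    # Bug 2: bogus 3000+ MHz memory clocks; 16 Gbps GDDR6 really runs ~2000 MHz.
    if sample.get("mem_clock_mhz", 0.0) > 2500.0:
        return False
    return True

log = [
    {"vram_used_gb": 9.2,  "mem_clock_mhz": 2124},  # sane sample
    {"vram_used_gb": 47.0, "mem_clock_mhz": 2124},  # VRAM-usage bug
    {"vram_used_gb": 9.3,  "mem_clock_mhz": 3412},  # memory-clock bug
]
clean = [s for s in log if plausible(s)]
print(f"kept {len(clean)} of {len(log)} samples")  # kept 1 of 3
```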
 
This has been in the to-do list and notes for the AMD drivers for quite some time now. It rarely gets fixed, and when it does it tends to break again on another driver update. It's nothing crazy that could affect stability and make you crash in games like you mentioned; your issue might lie elsewhere.

Just don't forget to let the new pads "cure", even though it's not like how paste settles. You should break them in slowly, then let them rip. If you go 0-100, the way they are made chemically is the reason they get brittle and dry out. The only reason I know is that I've changed about 200 of my GPUs' pads throughout my adventures in mining... and DON'T WORRY, I only use Nvidia cards to mine, with the exception of a few 5700 XTs and Radeon VIIs. :LOL:
Quite subjective. For mining (torturing), those pads are pressed under pressure with a significant amount of heat, and that doesn't really involve "curing" or settling time (or whatever you want to call it); that process is more like "baking" them. The reason pads turn brittle/hard by the end of their lifespan is the loss of the moisture the pad holds onto (chemical, not water), which keeps/maintains the synthetic/chewable/gummy properties of the pad; once that dries out, it's toast. Pads don't require much "curing" time, unlike some thermal pastes (some pastes/putties don't even need anything like that); typically a few hours of gaming already lets them meld properly, unless you have mounting issues...
 
Blimey, I see the Time Spy scores are now nearing 27K in the Hall of Fame.

Excellent work, you have a better graphics score than me now, Jon. Well done. (y)

I am going to revisit Time Spy soon and have another crack at it.
 
Seems like the last few pages (and more) have been dedicated to the podium finish for the Hall of Fame, eh?

I hope nobody fries their card in the process... and good luck with the score hunting.
 
We all like to wax our carrots over Futuremark benchmark scores periodically. :ROFLMAO:

AMD cards have historically almost always lost to Nvidia in the Futuremark benchmarks; that is no longer the case with RDNA2. :)
 
Well, Futuremark will have to nerf this... lol.

Futuremark was never fair to begin with, favoring the Green Team's GPU scheduler (software/driver based) rather than being neutral...
 
What GPU clock frequency did you set? For some reason my average clock is a lot lower than what I have it set to.
The max GPU clock was set to 2910 MHz and the min GPU clock to 2780 MHz, if my memory is correct. This was achieved using driver 21.10.3.
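
For the Linux crowd, the same min/max clock pin can be done through amdgpu's pp_od_clk_voltage interface. A sketch only, assuming card0, root, and an unlocked ppfeaturemask (the values above were set through the Windows driver):

```python
# Sketch: pin min/max core clock via amdgpu's overdrive table on Linux.
# Requires amdgpu.ppfeaturemask=0xffffffff on the kernel command line and root.
from pathlib import Path

od = Path("/sys/class/drm/card0/device/pp_od_clk_voltage")
od.write_text("s 0 2780\n")  # set minimum core clock, MHz
od.write_text("s 1 2910\n")  # set maximum core clock, MHz
od.write_text("c\n")         # commit the modified table
print(od.read_text())        # read back the applied ranges
```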
 
Wow, 2910 is very high. How much was it before the extra voltage?
 