1 - 20 of 26 Posts

s1rrah

· Premium Member
Joined
·
9,881 Posts
Discussion starter · #1 · (Edited)
For those not in the know, reddit user u/CriticalQuartz is providing high quality, custom cut, complete thermal pad sets for EVGA 30 series GPUs from his website, Kritical Pads. He is also currently introducing custom sets for cards from other manufacturers (Asus Strix, etc.). He advertises the pads as rated at 20 W/mK thermal conductivity but, as with all such products, one's mileage may vary.

Here's a shot of my kit from Kritical; the overall packaging, shipping time (US to US), and especially the consistency and quality of the actual cuts are far superior to the set of thermal pads I ordered from EVGA for backup purposes. No slant on EVGA, as their stock pads do perform quite acceptably, but the extra set they sent me looked like it had been cut out by a child with a pair of bad scissors. The Kritical pads are obviously machine cut and I found the fitment to be perfect in all regards.

(Image: Kritical thermal pad kit - EVGA 3080ti XC3 Ultra Gaming)
So today I finally got around to installing the Kritical thermal pad set and the results are just about as I was expecting. My particular card already had quite decent VRAM temps with the stock EVGA thermal pads and compared to others I've seen posting online, I couldn't see how they could or would get dramatically better.

Long story short? The Kritical pads performed better during the NBMiner test (4C drop at load) and almost identically to the EVGA stock pads during the Timespy Extreme stress test. I also did a random-game, 4 hour test and have posted the results at the end of this feckin essay lol.

Another bit to note is that I had previously replaced the EVGA thermal putty that covers the VRM areas with T-Global TG-PP-10 putty a couple months ago when I first got the card, while leaving the stock EVGA thermal pads in place. This may or may not have had some influence on reported VRAM temps when using the stock EVGA thermal pads during these tests. But to ensure testing consistency, I removed and then re-applied the T-Global putty today (had an extra bottle) when installing the new Kritical pads (I did not use the special 1mm pads that Kritical supplies to replace the putty over the VRM areas).

A Few Notes:
  • GPU is an EVGA 3080 ti XC3 Ultra Gaming
  • All tests conducted at a constant ambient temperature of 22 Celsius.
  • All tests conducted in a closed case (Corsair 780T)
  • NBMiner ETH for 15 minutes was used as a pure VRAM max/avg temperature test.
  • 20 loops of the Timespy Extreme stress test was used as a general gaming max/avg temp test.
  • The NBMiner test used a static fan speed (details below)
  • The Timespy Extreme stress test used a fan curve (details below)
  • T-Global TG-PP-10 putty was used on VRMs in place of stock EVGA putty for all tests
  • Simple clock adjustments were made via MSI Afterburner (details below)
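For anyone wanting to reproduce the max/avg figures below from their own logs, here's a minimal sketch of pulling those stats out of a HWiNFO64 CSV export (the column name used is a hypothetical for illustration; check the header row of your own log):

```python
import csv

def vram_stats(log_path, column="GPU Memory Junction Temperature [°C]"):
    """Compute max and average of a temperature column from a HWiNFO64 CSV log.

    The column name above is an assumption; your log's header may differ.
    """
    temps = []
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            value = (row.get(column) or "").strip()
            try:
                temps.append(float(value))
            except ValueError:
                continue  # skip blank or footer rows HWiNFO appends
    if not temps:
        raise ValueError(f"no numeric samples found in column {column!r}")
    return max(temps), sum(temps) / len(temps)
```

Point it at the log file recorded during a test run and it returns the (max, average) pair for that sensor.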

And here are the results:


NBMiner ETH - 15 minutes
----------------------------------------------------------------------------
Static Fan Speed: 80% / 2,615 RPM
Power Limit: 85%
Core OC (Afterburner: +0)
Mem OC (Afterburner: +500)
Room Ambient: 22C


EVGA Stock Pads
--
Max VRAM temp: 84C
Avg VRAM temp: 73C
Max CORE temp: 57C

KRITICAL Pads
--
Max VRAM temp: 80C
Avg VRAM temp: 66C
Max CORE temp: 55C


Timespy Extreme Stress Test - 20 loops
-----------------------------------------------------------------------------------
Fan Curve Max: 72% / 2,376 RPM
Power Limit: 100%
Core OC (Afterburner: +150)
Mem OC (Afterburner: +500)
Room Ambient: 22C


EVGA Stock Pads
--
Max VRAM temp: 78C
Avg VRAM temp: 74C
Max CORE temp: 71C

KRITICAL Pads
--
Max VRAM temp: 78C
Avg VRAM temp: 72C
Max CORE temp: 68C


So definitely some gains in the mining scenario and pretty much identical VRAM temps in the Timespy Extreme 20 loop test. One unusual bit of data is that with the Kritical pads installed, the Timespy test showed a 3C drop in GPU core temps, which is interesting. Real world gaming thermals differ dramatically from game to game, however, and perhaps gains similar to those seen in the mining test will be more apparent in other games.
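For reference, the deltas are easy to tabulate; a quick sketch using only the numbers from the two result tables above:

```python
# Temperatures (C) copied from the two result tables above.
results = {
    "NBMiner ETH": {
        "stock":    {"max_vram": 84, "avg_vram": 73, "max_core": 57},
        "kritical": {"max_vram": 80, "avg_vram": 66, "max_core": 55},
    },
    "Timespy Extreme": {
        "stock":    {"max_vram": 78, "avg_vram": 74, "max_core": 71},
        "kritical": {"max_vram": 78, "avg_vram": 72, "max_core": 68},
    },
}

for test_name, pads in results.items():
    for metric, stock_temp in pads["stock"].items():
        delta = stock_temp - pads["kritical"][metric]
        print(f"{test_name}: {metric} dropped by {delta}C with Kritical pads")
```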


Kritical Pads: General Gaming Test

I followed the "controlled tests" above with about 4 hours of non-stop gaming at my card's max overclock and fans at +/- 73% ... Horizon Zero Dawn, Cyberpunk 2077, Metro Exodus, Rise of the Tomb Raider, Control and Hellblade: Senua's Sacrifice, all running at 3840x1600 resolution and ultra settings. These are the games that I've personally found push a GPU the hardest. During that real world gaming scenario, VRAM temps with the Kritical pads never exceeded 78C while core temps never went past 68C ... those two numbers were eerily consistent throughout the entire 4 hour session, with a near constant 10C delta between VRAM and core temps throughout.

Here's the data as reported by HWiNFO64 during the above mentioned gaming test...

(Image: HWiNFO64 data from the gaming test - EVGA 3080ti XC3 Ultra Gaming)


Others may see better or worse temps with their particular 3080 Ti, but I'm personally quite pleased with all aspects of the card's thermal performance at this point. It's a long term investment for me, so I'm not touching it again unless I see obvious thermal degradation over time.

Overall, I'm pleased with the results and it was fun to test. I do hope the new pads/putty last a good while. I honestly didn't want to install the Kritical pads because things were already working very well with the card and I thought I might jinx it LOL ... I'm very much of the opinion that if it works, don't try to fix it, but my curiosity generally wins out and such was the case today.


Final Thoughts?

It's safe to say the Kritical pads are a great option and well worth the money (I paid $29.95 for a complete 3080ti XC3 set). I definitely recommend them if you want to re-pad your EVGA 30 series GPU. The pads come very well packaged and the cut quality is precise and consistent. The owner has built a very clean and functional ordering system and the entire process was snag free from click to ship. The pads themselves are also very easy to work with; they do not deform too easily but are still highly compressible. Another plus is that the Kritical pads come in the precise EVGA-spec'd heights that are nearly impossible to find short of custom sourcing (i.e. 2.25mm pads over VRAM / 2.75mm pads over other areas, etc.).

So again, there's absolutely no reason to buy whole sheets and cut your own pads, which frankly, unless you're prone to self mutilation, is quite the PIA. I would much rather simply order demonstrably high quality pre-cut kits from Kritical. It should also be said that one can order complete pad sets direct from EVGA too if you request them, and as the above tests have shown, the EVGA pads are really not that bad but clearly fell behind the Kritical pads in extreme VRAM usage scenarios like crypto mining. Unlike Kritical sets, EVGA does not include pads to replace the thermal putty over the VRM areas, and you'd need to buy the EVGA putty separately from EVGA's website (or even better, use the far superior T-Global TG-PP-10 putty).

Oh yeah, I repasted the GPU core today (Noctua NT-H1, same as before) and max core temps went from 70C to 68C in the Timespy Extreme test, so I'll take that too, thank you.

~s1rrah
 
Discussion starter · #4 ·
Great review, thanks!

I’m tempted to order a set for my 3090 FE.
I've read that 3090 users have had far better results than those in my review. And the 3090 kits from Kritical Pads include pads for the back of the GPU as well as the regular front pads (same with any other GPU that has VRAM on the back of the card). For whatever reason, my 3080ti always had fairly decent VRAM temps straight out of the box, generally maxing out in the mid-80C range, but many users of various 30 series cards are seeing VRAM hitting 100C quite regularly, and for those users I think going through the pad replacement process would be a logical first step.
 
I had horrible luck with figuring out pad thickness on my 3090 so I ended up with a custom loop. If/when I put the card back to the stock cooler, I’ll definitely order a set of pads.

If I had known of this option a month ago, I would've definitely given it a shot before going with the custom loop option.
 
Damn, your 3080 Ti runs cool. I thought the XC3 was bad. My TUF 3080 Ti hit 103C memory.
 
Discussion starter · #7 ·
I wanted the TUF 3080 Ti, to be honest, because I had seen countless reviews showing it to have the best cooling, but maybe I was focusing just on GPU core temps. And I too had read that the XC3 was meh in regards to general cooling, sort of middle of the pack, but it was $1800 on Amazon and the TUF was like $2200, so I ended up with the XC3; the 5 year extended warranty for $35 was also a major selling point.

Hard to believe that some folks, no matter the card make/model, are seeing memory temps at 100C and higher with their 30 series cards; I can't get my memory past 85C and to get it that high I have to intentionally run a mining stress test. Hours of gaming in any title never see it past 80C and that's a very rare occasion. I have no clue why. All I've done is repaste the core, add new thermal pads, replace the stock EVGA putty with T-Global TG-PP-10 putty, and also add the same T-Global putty between the VRAM points on the back of the card so that it squishes into the backplate once everything goes back together. I also have really good air flow in my case and ambient temps are always between 22C-23C.

I will say this, though (and might have said it before): when I did the Kritical repad, I also re-pasted (for the second time) with the same Noctua compound and core temps are much better now, especially at idle. For whatever reason, my card was consistently idling on the core at around 45C, which was bugging the hell out of me ... now it sits at 30 to 35C all the time at idle, which pleases me as I dig seeing CPU/GPU idle temps all about the same in the task bar tray area LOL ... keeps my OCD in check lol. And load temps have dropped an easy 4C too, so there must have been something up with my previous paste application.
 
From what I understand, the Kritical pads are very squishy, which should allow the heatsink better contact with the GPU die; that would be my guess as to why your core temps dropped.
 
You probably need to check how much power the memory is drawing, especially during mining. EVGA might have limits.
 
Discussion starter · #10 ·
That is a good idea. I happen to have the HWiNFO64 data from the mining test I ran so will have a look at that...
 
Mine does about 130-135W MVDDC Power Draw.
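As a rough sanity check on those numbers (back-of-envelope arithmetic, assuming the power splits evenly across the 3080 Ti's 12 GDDR6X modules, which is an approximation):

```python
# Back-of-envelope: MVDDC power split evenly across the memory modules.
# All figures are assumptions for illustration, not measurements.
total_power_w = 132.5    # midpoint of the 130-135W reported above
modules = 12             # GDDR6X module count on a 3080 Ti
per_module_w = total_power_w / modules
print(f"~{per_module_w:.1f} W per module")
```

That's on the order of 11 W per module, which is a lot of heat to push through a small pad footprint.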
 
One of the shortfalls of air cooling is that the heatsink is generally shared between the GPU and memory modules, so your memory temps generally won't fall below your GPU temps.
 
On the ASUS TUF it's not shared, meaning GPU temps are very low but memory temps are high.
 
Yeah, it will all depend on the heatsink. The pads just facilitate heat transfer; they don't do much for dissipating heat and can only fix a poor cooler design so much.
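That point can be put in numbers: the temperature drop across a pad is ΔT = q·t/(k·A), so a higher-conductivity pad shrinks only that one term and does nothing for the heatsink's own resistance to ambient. A sketch with assumed, illustrative figures (10 W through one memory module footprint):

```python
# Temperature drop across a thermal pad: dT = q * t / (k * A)
# All numbers below are illustrative assumptions, not measurements.
def pad_delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

q = 10.0                 # W dissipated by one memory module (assumed)
t = 2.25e-3              # 2.25mm pad, the EVGA spec height over VRAM
area = 14e-3 * 12e-3     # ~14 x 12 mm package footprint (assumed)

for k in (6.0, 20.0):    # a generic pad vs the advertised 20 W/mK rating
    print(f"k = {k:>4} W/mK -> dT across pad = {pad_delta_t(q, t, k, area):.1f} C")
```

Under those assumptions the higher-rated pad cuts the drop across the pad itself from roughly 22C to roughly 7C, but everything downstream of the pad (heatsink, airflow) is unchanged.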
 
Yeah, it will all depend on the heatsink. The pads just facilitate heat transfer; they don't do much for dissipating heat and can only fix a poor cooler design so much.
I don't think anyone was expecting memory to pull 100W+.
 
I added thermal pads to the back of my EVGA 3080 XC3 Ultra to use the backplate as a heatsink. Now my backplate reaches 53.5°C under memory load such as mining, even hotter than my CPU's die temperature during gaming. This is just the heat conducting from the front memory dies through the PCB to the backplate. Can't imagine how hot a 3090 would get.
 
NVIDIA and their AIBs always knew the memory would do this. It's part of the spec of the memory ICs, and the power delivery that feeds them. Most of them chose to fit insufficient cooling anyway.
 
Discussion starter · #19 ·
Did some more tests with the new pads/paste this morning. Short capture here from about an hour into a Horizon Zero Dawn session... 3840x1600, Ultra everything ... ambient temp is about a degree cooler than my initial tests (20-21C or so). Going back through and doing all the errands and side quests LOL ... I've found this to be a really good game for GPU stressing.

I don't know too much about overclocking GPUs, so I just use MSI Afterburner: max voltage on the slider, power limit at 100%, temp limit at 83C ... then set the core to +150 and the memory to +500 ... for some reason, the only times I see max core boosts of 2085MHz+ that actually hold are during menus and loading screens. During gameplay, the GPU mostly sticks around 1850MHz to 1925MHz or so ... would using a voltage curve instead of the sliders let me maintain clock speeds in the 2000MHz+ range more often?

 
Method I've been using in Afterburner:

1. Apply an aggressive core offset; for gaming, I can get away with about +200, benchmarks around +250.

2. Figure out what core frequency gets you to your target frame rate; for me, I actually just used the 1965MHz stock boost limit as my target. Go into the F/V curve editor, select everything above that frequency, hit Ctrl+Down, and bring everything to the right below your peak desired frequency. Hit apply and you should end up with a smooth curve up to your peak frequency and then a horizontal flat line to the right of it.

Doing this prevents light loads from spiking up to high frequencies where the card is likely to crash. If you used a +200 core offset, the end result is the GPU uses the same power the card would at 1765MHz in a stock setup, but gets you 1965MHz. It also allows a higher core offset, as my card for example has to be down at +90MHz to be game stable if I don't cap the peak frequency.

You also want to look at peak power, though, as the 3080 Ti will start running into the power limit at around 850mV. I run +112% power but I rarely ever see over 300W. I think the cards do get power spikes that don't show up in the OSD, because it runs smoother with the power limit up despite GPU-Z never showing that it is hitting the limiter.
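Afterburner itself isn't scriptable like this, but the curve edit described in step 2 boils down to "offset every point, then clamp everything above the target." A sketch of that transformation on hypothetical curve points (not real 3080 Ti values):

```python
# Apply a core offset to a voltage/frequency curve, then flatten every point
# above a target frequency -- the same shape Afterburner's Ctrl+Down edit makes.
def clamp_vf_curve(curve, offset_mhz, target_mhz):
    """curve: list of (millivolts, mhz) points, ascending voltage."""
    return [(mv, min(mhz + offset_mhz, target_mhz)) for mv, mhz in curve]

# Hypothetical stock curve points, purely for illustration.
stock = [(700, 1500), (800, 1765), (900, 1950), (1000, 2100)]
tuned = clamp_vf_curve(stock, offset_mhz=200, target_mhz=1965)
print(tuned)  # points at or above 1965 MHz flatten into a horizontal line
```

Note how the 800mV point lands exactly at 1965MHz after the +200 offset, which matches the "same power as 1765MHz stock, but at 1965MHz" observation above.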
 