Originally Posted by Derp
"95C is the optimal temperature that allows the board to convert its power consumption into meaningful performance for the user. Every single component on the board is designed to run at that temperature throughout the lifetime of the product."
Running at 95°C should shorten that card's lifespan compared to, say, a card that never runs over 75°C. I'm sure they think the lifespan is the two-year warranty, then it's free to die. In AMD's mind you're back throwing money at them, so this is a good thing.
What if we want to use a card well after its warranty dies? One man's slow card that needs replacing is another man's significant upgrade.
Replying to your second paragraph: I read on a credible website (I don't recall which one right now) that electronics suffer more from thermal cycling than from sustained temperatures. These parts are rated for it and stay within their intended operating limits.
Which brings me to my other point: why is everyone sidestepping the main issue, which is how much power this card consumes?
Well, this is the one card you get for 4K resolution; other than that, pointless imo. Ironically, Nvidia does not have a "respectable" runner-up right now because SLI drivers are so far behind AMD's that a card half the price beats its opponent in dual-GPU benchmarks. Yes, I do not mind double the power consumption, hypocritical or not. If you need that kind of performance, well, this vendor's current silicon does not offer any better efficiency.
On that note, the sidestep of the whole plot is pretty evident. The card does not "actually" benefit from a higher temperature; the software control loop and the cooler do, since they are sufficient at 33 control updates per second. If we tested at lower temperatures, we would indeed have more overclocking headroom, since the capacitors, resistors, and inductors would waste less power as heat.
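To illustrate what I mean by the software control being the thing that "benefits": here is a toy sketch of a temperature-target loop that samples the die ~33 times per second and drives the fan to hold a fixed temperature instead of a fixed fan speed. All the numbers and the thermal model are invented for illustration; this is not AMD's actual PowerTune algorithm, just the general idea.

```python
# Toy sketch of a temperature-target fan control loop (all constants invented).
# The controller samples temperature SAMPLE_HZ times per second and holds a
# fixed target, letting fan speed float -- the opposite of a fixed-fan design.

SAMPLE_HZ = 33            # control updates per second, per the claim above
DT = 1.0 / SAMPLE_HZ
TARGET_C = 95.0           # temperature target the firmware tries to hold
AMBIENT_C = 25.0
HEAT_C_PER_S = 2.625      # made-up heat input from the GPU die

def cooling_rate(temp_c, fan_pct):
    """Toy physics: heat removed scales with fan speed and delta-T."""
    return 0.0005 * fan_pct * (temp_c - AMBIENT_C)

def run_loop(seconds=180.0):
    temp, integ = 60.0, 0.0
    fan = 20.0
    for _ in range(int(seconds * SAMPLE_HZ)):
        err = temp - TARGET_C
        # PI control with simple anti-windup clamping on the integral term
        integ = min(100.0, max(0.0, integ + 2.0 * err * DT))
        fan = min(100.0, max(20.0, 10.0 * err + integ))
        # step the toy thermal model forward by one sample period
        temp += (HEAT_C_PER_S - cooling_rate(temp, fan)) * DT
    return temp, fan

final_temp, final_fan = run_loop()
```

In this sketch the loop settles right at the target, with the fan finding whatever speed balances the heat input; a cooler case or a quieter fan just changes where the fan settles, not the temperature.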
I wish scientists would hurry up and invent room-temperature superconductors.