
[PCGH/PcPer] Nvidia on Pascal: "Maximum voltage shortens lifespan to one year"

21K views 177 replies 95 participants last post by Kokin
#1 ·
Nvidia does not allow a concrete voltage increase on its Pascal generation. In tools such as Afterburner and Precision X, only a relative slider can be set, which moves the GPU closer to its maximum stock voltage. According to Nvidia, there is a good reason for this: even at 100 percent, the lifespan of a GPU sinks to one year.
Quote:
Previously, OC tools such as MSI's Afterburner and EVGA's Precision X were able to increase the voltage of Nvidia GPUs. The 87 millivolts on offer were nothing special, but they increased the overclocking potential. Since the Pascal generation, Nvidia has given its board partners new limits: the value in brackets behind the voltage is no longer given in mV but as a percentage. The difference is significant: where users could previously set a concrete voltage, it is now only possible to determine how close the GPU gets to a predefined voltage limit. For Pascal GPUs this is typically 1.0 to 1.075 volts, depending on chip quality. If you set the slider to 100 percent, the GPU always stays at this value under load, as long as the power limit does not intervene first.

In an interview with pcper.com, Tom Petersen, Director of Technical Marketing at Nvidia, explains why they made the change. The interview is already two weeks old, but the statement was not known to us (via 3DCenter; the passage starts at about 16:00). According to it, Nvidia chooses a voltage curve with which the average graphics card will last about five years before electromigration sets in. With the percent slider, users effectively have the option to raise the voltage within limits predetermined by Nvidia. By Nvidia's own account, 100 percent goes far enough that the lifespan would be only one year.

Of course, marketing departments tend to dramatize such issues. However, it is also true that Pascal is Nvidia's first GPU generation built on a FinFET process. These chips vary less in voltage and react more sensitively to increases than their planar predecessors. With the voltage limit, Nvidia can gain experience with FinFET processes without investing too much in RMAs. And if the one-year figure should turn out to be roughly accurate (which we do not expect), corresponding reports should start piling up from this coming May: the GeForce GTX 1080 will then be one year old.
sources:

https://translate.google.com/translate?sl=de&tl=en&js=y&prev=_t&hl=de&ie=UTF-8&u=http%3A%2F%2Fwww.pcgameshardware.de%2FGeforce-GTX-1080-Ti-11G-Grafikkarte-265855%2FNews%2FPascal-Spannungserhoehung-Lebensdauer-1224184%2F&edit-text=
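For what it's worth, the difference between the old mV offset and the new percent slider can be sketched like this. All voltage values below are made-up examples for illustration, not Nvidia's actual tables:

```python
# Illustrative sketch only: how a percent-based slider (Pascal) differs from
# the old concrete mV offset (pre-Pascal), per the article's description.
# All voltages here are assumed example values.

def old_style_offset(v_default_mv, offset_mv, max_offset_mv=87):
    """Pre-Pascal style: add a concrete mV offset, capped (e.g. +87 mV)."""
    return v_default_mv + min(offset_mv, max_offset_mv)

def pascal_style_percent(v_default_mv, v_limit_mv, percent):
    """Pascal style: the slider only moves the GPU toward its predefined limit."""
    percent = max(0, min(percent, 100))
    return v_default_mv + (v_limit_mv - v_default_mv) * percent / 100

# Example: a chip with a 1.050 V default boost voltage and a 1.075 V limit.
print(pascal_style_percent(1050, 1075, 0))    # stays at the default
print(pascal_style_percent(1050, 1075, 100))  # pinned at the 1.075 V limit
```

The point of the design: no matter where the slider sits, the effective voltage can never exceed the chip-specific limit Nvidia baked in.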


nothin more to say...
#2 ·
Quote:
According to it, Nvidia chooses a voltage curve with which the average graphics card will last about five years before electromigration sets in.
I don't think this is true at all. More likely that's the time frame until the effects become noticeable, i.e. stability issues due to transistor deterioration (electromigration and more). You might see what pretty much everyone has had with Intel's tri-gate process (similar to FinFET): an inability to maintain a high overclock and the need to bump the voltage. The CPU just doesn't die right away...

NV's stance on overvolting & overclocking has been quite rigid in general, way before FinFET.

Edit: I think a bad cooler and low-quality VRMs are the prime causes of graphics card death. Overvolting strains the VRMs, and I wouldn't be surprised if that's really the reason Nvidia is so against it.
 
#3 ·
Quote:
However, it is also true that Pascal is Nvidia's first GPU generation built on a FinFET process. These chips vary less in voltage and react more sensitively to increases than their planar predecessors. With the voltage limit, Nvidia can gain experience with FinFET processes without investing too much in RMAs.
Removing even more features while, probably, improving profits. Not really new in business.
 
#4 ·
Better safe than sorry. In all my experience with GPUs, voltage control has mostly only helped me undervolt, not overvolt. In the cases where voltage made a difference, I could easily get away with 2-5% less performance and bring back a huge price/performance ratio. A good GPU will OC well on whatever stock volts it comes with.
 
#5 ·
While I have a GTX 1080 now (bought used) and love its performance and specs, I still feel that Nvidia is a shady corporation. Sure, all computer component manufacturers like to lie and tout stuff that might not be true, but Nvidia always seems to get caught with major issues and problems with their products and business practices.

I really feel it's one of the reasons AMD seems to be picking up in the sales figures. People are starting to realize that Nvidia takes a lot of liberties with its customers and isn't quite the gamer-friendly all-star company they were once seen as. So when I read stuff like this from them, I can't help but think, "Oh, not surprising. What's next?"

Nvidia really needs to pull it together and start being more open about their products and services, or they'll continue to fall. With AMD's CPU tech getting incredibly good, they'll start getting more and more money and more and more engineers, and before you know it their GPUs will be TRULY competitive again and Nvidia is gonna get caught with their pants down.

Not like it hasn't happened before, ya know?
 
#6 ·
Remember, before the voltage limits you had people doing this to cards. Nvidia put a cap on the voltage because too many 580s and 590s went up in smoke...



 
#11 ·
Quote:
Originally Posted by Mand12 View Post

So here's a question I've asked many an overclocker:

Does headroom over stock really matter to you? Why isn't it the final performance that is the appropriate metric?
Yes... because you pay for stock performance. If they do all the overclocking for you, they are selling you performance you could have gotten yourself.
 
#13 ·
Normally I would be upset, but realistically voltage hasn't done squat for clocks since Kepler. Maxwell only gained a couple percent with more voltage, IF the core was super cold. Even if we could raise the voltage, I doubt it would do anything. Still, I am mad at Nvidia for taking this stance in the first place... because, well, it is boring. Imagine how boring cars would be if we couldn't modify and tune them...

With that said, I wonder if the EVGA KPE card will find a way around this...
 
#14 ·
Quote:
Originally Posted by AndroidVageta View Post

While I have a GTX 1080 now (bought used) and love its performance and specs, I still feel that Nvidia is a shady corporation. Sure, all computer component manufacturers like to lie and tout stuff that might not be true, but Nvidia always seems to get caught with major issues and problems with their products and business practices.

I really feel it's one of the reasons AMD seems to be picking up in the sales figures. People are starting to realize that Nvidia takes a lot of liberties with its customers and isn't quite the gamer-friendly all-star company they were once seen as. So when I read stuff like this from them, I can't help but think, "Oh, not surprising. What's next?"

Nvidia really needs to pull it together and start being more open about their products and services, or they'll continue to fall. With AMD's CPU tech getting incredibly good, they'll start getting more and more money and more and more engineers, and before you know it their GPUs will be TRULY competitive again and Nvidia is gonna get caught with their pants down.

Not like it hasn't happened before, ya know?
This is hilarious. Nvidia basically owns the gaming GPU market. AMD has been irrelevant for so long that they will likely never recover. So let's be mad at Nvidia for not letting us extreme gamers crank up the voltage, fry our cards, and then try to RMA them.
 
#17 ·
Quote:
Originally Posted by Syan48306 View Post

Remember, before the voltage limits you had people doing this to cards. Nvidia put a cap on the voltage because too many 580s and 590s went up in smoke...



Quote:
As a first step, I increased the voltage from 0.938 V default to 1.000 V, maximum stable clock was 815 MHz - faster than GTX 580! Moving on, I tried 1.2 V to see how much could be gained here, at default clocks and with NVIDIA's power limiter enabled. I went to heat up the card and then *boom*, a sound like popcorn cracking, the system turned off and a burnt electronics smell started to fill up the room. Card dead! Even with NVIDIA power limiter enabled. Now the pretty looking, backlit GeForce logo was blinking helplessly and the fan did not spin, both indicate an error with the card's 12V supply.
After talking to several other reviewers, this does not seem to be an isolated case, and many of them have killed their cards with similar testing, which is far from being an extreme test.
https://www.techpowerup.com/reviews/ASUS/GeForce_GTX_590/26.html
 
#18 ·
Quote:
Originally Posted by rcfc89 View Post

This is hilarious. Nvidia basically owns the High-end gaming GPU market. AMD has been irrelevant for so long that they will likely never recover. So let's be mad at Nvidia for not letting us extreme gamers crank up the voltage, fry our cards, and then try to RMA them.
FTFY. The R7's have sold quite well from what I've seen, enough to pick AMD's GPU market share up a fair bit and take it from Nvidia.
Remember, OCN is an enthusiast space, where a Titan, 1080, etc. is what we mostly care about: the top end. But for the average Joe on the street, a 480/1060 is a big spend, and that's where the majority of the market is. AMD is far from irrelevant there at the moment.
 
#20 ·
Quote:
Originally Posted by Damn_Smooth View Post

Quote:
Originally Posted by Mand12 View Post

So here's a question I've asked many an overclocker:

Does headroom over stock really matter to you? Why isn't it the final performance that is the appropriate metric?
I ask you this in return. Do you know where you are?
He's probably in the wrong place, yeah. He's acting way too reasonable.

God forbid they take away our hobby of messing with some voltage knobs.
 
#21 ·
8760 hours in 1 year of continuous, uninterrupted max voltage, then death.
 
#22 ·
Quote:
Originally Posted by ZealotKi11er View Post

Better safe than sorry. In all my experience with GPUs, voltage control has mostly only helped me undervolt, not overvolt. In the cases where voltage made a difference, I could easily get away with 2-5% less performance and bring back a huge price/performance ratio. A good GPU will OC well on whatever stock volts it comes with.
This is what I say. Push a card to its limits 24/7 and, unless you keep it at an optimal operating temp, you're toast. Maybe not right away, but in the same scenario you could be shaving as much as half the lifespan off that card. I mean, I have a friend with a 980 pushed to 1500MHz core under water, and it's nice and chilly obviously, but I can't help thinking about how much voltage is being pushed through that GPU to maintain such an overclock. Makes me cringe when I think about it, really. :S
 
#23 ·
Quote:
Originally Posted by Imglidinhere View Post

This is what I say. Push a card to its limits 24/7 and, unless you keep it at an optimal operating temp, you're toast. Maybe not right away, but in the same scenario you could be shaving as much as half the lifespan off that card. I mean, I have a friend with a 980 pushed to 1500MHz core under water, and it's nice and chilly obviously, but I can't help thinking about how much voltage is being pushed through that GPU to maintain such an overclock. Makes me cringe when I think about it, really. :S
My 290X: 1100MHz at stock voltage, 1150MHz at +50mV, 1200MHz at +125mV. Really not worth it even at +50mV, but I had it set because the card was under water. +125mV was like an extra 150-200W. I understand people who benchmark, but for 24/7 use I don't like adding volts to my cards. CPUs are totally different.
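For a rough feel of why a +125mV bump costs so much power: dynamic power scales roughly with frequency times voltage squared (P ∝ f·V²). A back-of-the-envelope sketch, with the 290X's stock voltage (1.2V) and board power (250W) assumed purely for illustration:

```python
# Back-of-the-envelope dynamic power scaling: P ~ f * V^2.
# The stock voltage (1.2 V) and stock board power (250 W) are assumed
# example values for a 290X, not measured figures.

def scaled_power(p_stock_w, v_stock, v_oc, f_stock_mhz, f_oc_mhz):
    """Estimate overclocked power from the f*V^2 dynamic-power model."""
    return p_stock_w * (v_oc / v_stock) ** 2 * (f_oc_mhz / f_stock_mhz)

p_oc = scaled_power(250, 1.200, 1.325, 1100, 1200)  # +125 mV, +100 MHz
print(f"{p_oc:.0f} W total, +{p_oc - 250:.0f} W over stock")
```

This simple model predicts only around +80W; the 150-200W the poster saw is plausible once the leakage current, which grows with both voltage and temperature and which the f·V² model ignores, is factored in.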
 
#24 ·
Quote:
Originally Posted by rcfc89 View Post

This is hilarious. Nvidia basically owns the gpu gaming market. Amd has been irrelevant for so long that they will likely never recover. So lets be mad at Nvidia for not allowing us extreme gamers to not crank up the voltage and fry our cards to then try to RMA them.
Two of the three major console players use AMD. Sure, Nvidia holds the performance crown, but AMD is competitive in the lower segments. Meanwhile, Nvidia IS the shadier company: PhysX, G-Sync and GameWorks, to name the big proprietary gems. Sure, they might work well, but they're a blatant attempt at dividing the hardware market in a way that definitely harms consumers. Meanwhile, AMD introduced us to Mantle and made Vulkan possible.

Also, you're almost talking as if the 4870 never happened. I'm looking forward to a healthy competition between future Volta and Vega cards.
 
#25 ·
Quote:
Originally Posted by ZealotKi11er View Post

My 290X: 1100MHz at stock voltage, 1150MHz at +50mV, 1200MHz at +125mV. Really not worth it even at +50mV, but I had it set because the card was under water. +125mV was like an extra 150-200W. I understand people who benchmark, but for 24/7 use I don't like adding volts to my cards. CPUs are totally different.
Oh, if we go back far enough, to the 480 and 470 cards when Fermi was released: with adequate cooling and a beefy enough PSU, those cards could pull 80% overclocks and nearly double their out-of-the-box performance. Hell, the only limitation for my GTX 470 back then was heat. Damn furnaces...
 
#26 ·
Quote:
I mean, I have a friend with a 980 pushed to 1500mhz core under water, and it's nice and chilly obviously, but I can't help but think of how much voltage is being pushed through that GPU to maintain such an overclock.
My 970 shipped with 1.225V applied out of the box and OC'd to 1470MHz with no voltage change... many got higher.
 