
How loud is the noise on EVGA GTX670/680 "reference" cards? - Page 4

post #31 of 51
Thread Starter 
Quote:
Originally Posted by Rei86 View Post

That's why I said check the price ;) And if the 7950/7970 is cheaper, I would go with that.

Well, a lot of people have a theory, given how Nvidia named the Fermi chips, since Kepler actually kept the same kind of code naming. E.g. the Fermi GF100 was the GTX480 and the GF104 was the GTX460, yet the GTX680 is called GK104 :rolleyes: So a lot of people have theorized that the GTX680 is actually a GTX660 or 670, and that every other card in the lineup is named one tier higher than it really is.

Anyways, also remember that the GTX670 is a lobotomized 680, since it has an SMX unit disabled.


> Huh? What's an SMX unit? I wasn't aware of that. Please enlighten me on this one.

No, my theory isn't about the chip name, it's about the memory interface width. The 680's 256-bit bus is (on paper!) a big step down from the 580's 384-bit. Since they kept 256-bit for both the 670 and the 680, I assume this generation was meant to be a midrange GPU.

As for the Radeons, I'm a little shy about jumping aboard the AMD ship, but it's not only that:

1 - I can't find any 7970 model that really pleases me apart from the Lightning, and it doesn't have dual-link DVI: deal-breaker. (What was MSI thinking?)

2 - Since they're cheaper, they hold much less resale value, so I'll have to spend more to upgrade to 780s when I sell my 7970s.


post #32 of 51
Thread Starter 
Quote:
Originally Posted by Rei86 View Post

EVGA has the GTX680 4GB, which is pretty much their own shroud/fan design on top of Nvidia's reference cooler + PCB

Like you said, the FTW is a step below EVGA's Classified line and is supposed to use "binned" chips. The FTW 680 4GB model is also their own design:

1) PCB is .5 inches longer vs. the reference GTX680 design
2) A vapor cooler is used instead of the standard heatsink - the fan is located differently to accommodate the vapor cooler design
3) Card is equipped with 8-phase VRMs
4) The extra GDDR5 is located on the backside of the card, making the backplate actually worth something more than just looking nice, since it passively cools that VRAM

Wow, okay. Thanks, +Rep for all this useful info; I've been researching it for so long now.

Just to clarify:
The four points you described (8-phase VRM, vapor cooler, etc.) describe the EVGA 680 FTW+ card, not the Classified? I'll assume the binning process is the same as the Classified's, though (the Classified just uses even more phases, etc.)
post #33 of 51
We had this exact discussion on the EVGA forums and a respected member posted this

Link to the thread
Quote:
Originally Posted by lehpron 

(interface width in bits) × (effective clock speed in GHz) ÷ (8 bits per byte) = memory bandwidth in GB/s.

Only the bandwidth number matters; don't look at interface width or frequency independently when determining which is better.

For example, the GTX680's 256-bit bus at 6GHz = 192GB/s; if we lowered the interface to 64-bit, we'd have to raise the frequency to 24GHz just to keep the same 192GB/s bandwidth, and thus the same performance, since the bandwidths match. The HD7970 GHz Edition has a 384-bit bus at 6GHz = 288GB/s. That looks like a clear advantage, but the GPU is what renders the scene; the memory bandwidth is just a conduit. Many reviews show the GTX680 and HD7970 running head-to-head, so the games tested didn't use the advantage of the HD7970's specs and ended up rendering about the same.

Ultimately, the memory bandwidth can determine the maximum rate your scenes can refresh.

(memory bandwidth in GB/s) ÷ (VRAM usage in GB) = max buffer refreshes per second.


For the GTX680: a full 2GB buffer can be refreshed at most 192GB/s ÷ 2GB = 96 times per second.
For the HD7970 GHz Edition: a full 3GB buffer can be refreshed at most 288GB/s ÷ 3GB = 96 times per second, but a 2GB buffer could refresh 144 times per second.

What does this mean? If the GPU can render more scenes than the memory bandwidth can handle, you only get to see as many frames as the memory bandwidth allows. If the GPU rendered 160 frames at 2GB each on a GTX680, you would only see 96 of them, because the rest are bottlenecked by the memory bandwidth. But almost no game in existence today pushes the full VRAM at a rate higher than that limit without more cards in multi-GPU, so essentially you need not worry about hitting limits due to bandwidth - which is helped further by simply overclocking the VRAM as far as it stays stable. Most folks get theirs to 7GHz, for reference.
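
Plugging the quoted numbers into lehpron's two formulas, here's a minimal Python sketch (the figures below are just the GTX680 and HD7970 GHz Edition examples from the quote):

```python
# Formulas quoted above:
#   bandwidth (GB/s) = interface width (bits) * effective clock (GHz) / 8
#   max buffer refreshes per second = bandwidth (GB/s) / VRAM usage (GB)

def memory_bandwidth_gbps(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * effective_clock_ghz / 8

def max_buffer_refreshes(bandwidth_gbps: float, buffer_gb: float) -> float:
    """Upper bound on how many times per second a buffer of that size can be rewritten."""
    return bandwidth_gbps / buffer_gb

gtx680_bw = memory_bandwidth_gbps(256, 6.0)   # 192 GB/s
hd7970_bw = memory_bandwidth_gbps(384, 6.0)   # 288 GB/s (GHz Edition)

print(max_buffer_refreshes(gtx680_bw, 2.0))   # 96.0  (full 2GB buffer)
print(max_buffer_refreshes(hd7970_bw, 3.0))   # 96.0  (full 3GB buffer)
print(max_buffer_refreshes(hd7970_bw, 2.0))   # 144.0 (only 2GB of the 3GB in use)
```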

BTW, the GTX580's 384-bit bus at 4GHz works out to roughly the same memory bandwidth as the GTX680's 256-bit at 6GHz (about 192GB/s either way).
Quick rundown of what an SMX unit is:
http://www.fudzilla.com/home/item/26393-nvidia-gk104-gpu-explained

The Classified is supposed to be "binned" also, with a different PCB design, power phases, etc. The Classified is also a monster in size; it's bigger than a GTX690.
Edited by Rei86 - 1/13/13 at 8:56pm
post #34 of 51
Thread Starter 
Quote:
Originally Posted by Rei86 View Post

We had this exact discussion on the EVGA forums and a respected member posted this

Link to the thread
BTW, the GTX580's 384-bit bus at 4GHz works out to roughly the same memory bandwidth as the GTX680's 256-bit at 6GHz (about 192GB/s either way).
Quick rundown of what an SMX unit is:
http://www.fudzilla.com/home/item/26393-nvidia-gk104-gpu-explained

The Classified is supposed to be "binned" also, with a different PCB design, power phases, etc. The Classified is also a monster in size; it's bigger than a GTX690.

OK, very informative, Rep+ for you.

Now, I imagine that since:
"For GTX680: The 2GB buffer can refresh as low as 192GB/s ÷2GB = 96 times per second."

Then:
> At 4GB: a full 4GB buffer can be refreshed at most 192GB/s ÷ 4GB = 48 times per second.
Am I right?

Which is why I've been told that 4GB models were mostly aimed at SLI users, as I imagine you'd then have twice the effective bandwidth and therefore twice the refresh rate (assuming perfect scaling for simplicity's sake): 384GB/s ÷ 4GB = 96 times per second.

Am I right here?

Also, you said the 670 is a "lobotomized" :eek: GPU with one SMX less, is that right? So 7 SMX units instead of 8?
post #35 of 51
Yes, if you follow the formula.

You can call it whatever you want; the GTX670 is a GTX680 with one "lazy" SMX unit disabled.
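
Running the numbers DaGoat asked about through the same formula (a quick Python sketch; the perfect 2x SLI scaling is his simplifying assumption from above, not how SLI actually behaves, as the next post points out):

```python
# Max full-buffer refreshes per second = bandwidth (GB/s) / buffer size (GB)
GTX680_BANDWIDTH_GBPS = 192.0   # 256-bit bus at 6 GHz effective

for buffer_gb in (2.0, 4.0):
    print(f"{buffer_gb:.0f}GB buffer: {GTX680_BANDWIDTH_GBPS / buffer_gb:.0f} refreshes/s")
# 2GB buffer: 96 refreshes/s
# 4GB buffer: 48 refreshes/s

# DaGoat's idealized assumption: perfect 2x bandwidth scaling in SLI (hypothetical)
sli_bandwidth_gbps = 2 * GTX680_BANDWIDTH_GBPS        # 384 GB/s
print(f"4GB buffer with ideal 2x scaling: {sli_bandwidth_gbps / 4.0:.0f} refreshes/s")  # 96
```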
post #36 of 51
Quote:
Originally Posted by DaGoat View Post


OK, very informative, Rep+ for you.

Now, I imagine that since:
"For GTX680: The 2GB buffer can refresh as low as 192GB/s ÷2GB = 96 times per second."

Then:
> At 4GB: a full 4GB buffer can be refreshed at most 192GB/s ÷ 4GB = 48 times per second.
Am I right?

Which is why I've been told that 4GB models were mostly aimed at SLI users, as I imagine you'd then have twice the effective bandwidth and therefore twice the refresh rate (assuming perfect scaling for simplicity's sake): 384GB/s ÷ 4GB = 96 times per second.

Am I right here?

I don't think it works that way - each card has to do its own processing in SLI, so it would be the same as a single card. But in practical use it doesn't get limited that way anyway, because the card isn't pushing its full VRAM every frame. Otherwise you'd never see anything above 48 FPS on a 4GB card, which obviously doesn't happen. And 4GB cards are pushed at SLI users because a single card runs out of GPU horsepower before it runs out of VRAM.
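
Put another way, the bandwidth ceiling scales with how much memory a frame actually touches, not with how much VRAM is installed. A small sketch, with a purely hypothetical per-frame figure:

```python
BANDWIDTH_GBPS = 192.0   # GTX 680-class memory bandwidth

def fps_ceiling_from_bandwidth(gb_touched_per_frame: float) -> float:
    """Frame-rate ceiling imposed by memory bandwidth alone."""
    return BANDWIDTH_GBPS / gb_touched_per_frame

print(fps_ceiling_from_bandwidth(4.0))  # 48  -> only if every frame rewrote the full 4GB
print(fps_ceiling_from_bandwidth(1.0))  # 192 -> a hypothetical, smaller per-frame working set
```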
post #37 of 51
Thread Starter 
Makes sense.
post #38 of 51
I think he's generalizing the memory buffer formula, or we're not understanding it right.

I just got done playing BF3 on my machine, and I know I'm not running below 60FPS.
post #39 of 51
Thread Starter 
OK. Please, Rei86, since you seem to know a lot about these EVGA cards, I need advice (other members' opinions are also welcome, of course). What's the better deal?

The EVGA 680 4GB for 519 euros,
or
the EVGA 680 4GB FTW+ for 569 euros? That's 100 euros of difference in total for the SLI setup (two cards). Is it really worth it?

Also:
Quote:
Originally Posted by Rei86 View Post

EVGA has the GTX680 4GB, which is pretty much their own shroud/fan design on top of Nvidia's reference cooler + PCB

Like you said, the FTW is a step below EVGA's Classified line and is supposed to use "binned" chips. The FTW 680 4GB model is also their own design:

1) PCB is .5 inches longer vs. the reference GTX680 design
2) A vapor cooler is used instead of the standard heatsink - the fan is located differently to accommodate the vapor cooler design
3) Card is equipped with 8-phase VRMs
4) The extra GDDR5 is located on the backside of the card, making the backplate actually worth something more than just looking nice, since it passively cools that VRAM

What's your source on this? I can't find any review online, or anything on the EVGA website, detailing the specific differences of the FTW+ version.
Edited by DaGoat - 1/14/13 at 4:01pm
post #40 of 51
My opinion is that the FTW card is not going to be 10% faster than the normal card (it's not going to do 1320MHz where a normal card does 1200MHz, for instance), so it's not worth the extra money. Unless a card offers some tangible difference, basically a better cooler or more VRAM, it's not worth paying more than a nominal premium.