Overclock.net › Forums › Industry News › Hardware News › [HH] 7950 OC vs 7970 OC vs 580 OC vs 6970 OC

[HH] 7950 OC vs 7970 OC vs 580 OC vs 6970 OC - Page 8  

post #71 of 86
Quote:
Originally Posted by Shiveron View Post

Wait, so you're saying these cards are out of your price range but you want a 6x DisplayPort model?
That's a bit ironic, dontcha think? Six DP monitors are gonna cost twice what a 7950 does, and that's assuming you're buying monitors as cheap as $150.

As far as monitors go, I'm willing to shell out some extra for these, as a good display will last me considerably longer than a good graphics card. A graphics card is "a bit old" a few years down the road, while a good display I expect to use for about 5 years. Plus you can buy the monitors in "chunks" when the budget allows it. A graphics card is a lot harder to buy in "chunks", unless you do CrossFire of course.

For the ones I have, I bought a few at first and then slowly bought more of the same until I had the 5 displays I wanted. Took me about a year.

But you are of course correct that the 5x and 6x DP models usually cost more than reference ones. For example, the 6770 with 5x DisplayPort was 120 euros while the reference models of the card were going for around 90 to 100 euros at the time. But one can dream, right? ;)

I'm more of a bang-for-buck guy than need-the-latest-at-any-price guy.
Edited by Carniflex - 2/3/12 at 12:50am
post #72 of 86
Quote:
Originally Posted by brettjv View Post

  • Do I care to bench real games? Sure, as long as we're playing 'you name a game, then I'll name a game'. That way you can pick an ATI-favoring game, and you'll win.
  • The fact is that 3dMark11 wasn't developed *for* either company, and both GPU makers do EVERYTHING in their power to get their cards as fast as possible in it.
  • I take it you're aware that a 6990 won't even score much better than my 470s in 3dMark11 (let alone SLI 580s), and that's why you're taking issue with it as a measurement?
  • I just have to ask, PS ... at what point do you acknowledge that your claim that 6990 > 580 SLI is not borne out by the facts, but rather that the opposite appears to be the case?
.
  • You seem to have completely missed the point. That is exactly the opposite of what I said is relevant when comparing performance... You don't pick a game that is clearly developed for a particular brand and use that as an example. Why would you even think that kind of nonsense would fly here? Games such as Dragon Age 2 are prime examples of driver developers not being a part of the game dev process, or neglecting to create proper profiles even months after a game is released. You don't fault Nvidia cards for performing badly there; you fault the driver developers. Or games such as BC2, with extremely poor GPU usage on Fermi cards in C2Q and Phenom systems. Is the card bad? No, it's the driver devs, who even admitted that they don't care about fixing the drivers for those older systems.
    Oh, and I hate to break it to you, but BF3 is not an AMD-only developed game. It's an example of a game that both brands were heavily involved with during the development process.
  • You're mistaken. Nvidia has a history of cheating to skew results in 3DMark 2001 and 2003. There was a major issue with 2001 heavily favoring Nvidia cards; changes were later made to allow the test to bench ATI cards more fairly.
    Even still, 3DMark is just ONE game engine. The issue will always be: how do these cards perform in a variety of games using many different proprietary game engines? 3DMark cannot tell you this.
  • Honestly, I didn't even know, or really care, that 470s perform better in 3DMark. I don't pay attention to synthetic benchmarks, because I buy cards for the games I actually play, which run on completely different engines than 3DMark.
  • When the average gameplay results show a 6990 at its stock frequency of 880MHz losing to 580s in SLI. The words to pay attention to here are "average gameplay": the part that actually matters to end users.


Quote:
Originally Posted by brettjv View Post

  • I said exactly what I meant re: the 590, and what I said was accurate.
.
  • You're completely missing the point. A 590 could never be used as an example of two 580s in SLI, but a 6990 is exactly two stock 6970s combined into one card. The 590 is horribly downclocked and limited due to having far fewer VRMs than regular 580s. Only something like the MARS II could represent somewhat similar performance, and even then the power draw of a single PCB limits the OC headroom on that card, as Fermi simply draws too much power to share one PCB. The people trying to argue that two 6970s beating 580s weren't indicative of the performance of a 6990 are what this discussion was about, not whether or not the graphics core is the same on a 590...



Quote:
Originally Posted by brettjv View Post

  • even if it's 'unknown' what the exact effect of the post-rendering 'metering system' is with nV cards, the graphs themselves (in the above quoted article), using the data that is KNOWN, CLEARLY show that nV cards are doing a much better job of producing evenly distributed frame-rendering timings.
  • You can't admit that you have no clue what is going on one moment and then say you know which one performs better the next... FRAPS does not measure the image being output to the display; it can only gather data from the DirectX API. Did you not comprehend what was written about the "frame-metering" device adding "tens of milliseconds" of delay to the frame after FRAPS had taken its measurement? Tens of milliseconds means what? 10-30 milliseconds. That's actually MORE than the average frame delay we are concerned with: the average frame delay is LESS than 10 milliseconds on the Tahiti cards.
    This would actually make input lag "tens of milliseconds" worse on Nvidia, in addition to the true frame delays being unreadable. You have no clue whether the frames are being evenly distributed or not; you can't read them beyond the DirectX API. Speculating on what you think is happening is irrelevant. There is zero evidence that frames are more evenly distributed on Nvidia, because you can't even measure them on the monitor side with FRAPS. Trying to talk about render time when you're measuring at the DirectX API level is extremely ignorant, as the time that passes from rendering until showing up at the DirectX API is completely different. The information FRAPS is getting is not indicative of what you see on screen, so why are you trying to argue about which one has less micro-stutter?
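As a rough illustration of what a FRAPS-style frametimes log can and cannot tell you (a sketch, not either poster's methodology; the sample numbers are made up), the per-frame deltas and a simple pacing-jitter figure can be computed like this:

```python
# Hypothetical sketch: a FRAPS "frametimes" log records one timestamp (ms)
# per frame, captured at the DirectX Present() call -- i.e. BEFORE any
# frame metering the driver/GPU may apply afterwards, which is the caveat
# raised above. The timestamps below are invented sample data.
frame_timestamps_ms = [0.0, 16.5, 33.1, 41.0, 66.3, 82.9, 91.2, 116.4]

# Per-frame deltas (frame times), in milliseconds.
deltas = [b - a for a, b in zip(frame_timestamps_ms, frame_timestamps_ms[1:])]

# A crude micro-stutter indicator: average absolute difference between
# consecutive frame times. Perfectly even pacing would score 0.
jitter = sum(abs(b - a) for a, b in zip(deltas, deltas[1:])) / (len(deltas) - 1)

avg_frame_time = sum(deltas) / len(deltas)
print(f"avg frame time: {avg_frame_time:.1f} ms, pacing jitter: {jitter:.1f} ms")
```

Note that, per the argument above, this only characterizes pacing as seen by the API; it says nothing about when frames actually reach the display.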


Quote:
Originally Posted by brettjv View Post

  • I believe that AMD/ATI achieves its higher 'scaling' by throwing caution to the wind, as it were, when it comes to microstutter, and nVidia does not (except in common benchies like 3dMark, where they both do it).
  • I believe that all available evidence at this point in time suggests that AMD gets its superior scaling through driver programming that entirely favors 'scaling' over 'evenly distributed frametimes' ...
  • AMD's lack of a mechanism of 'waiting' (and I'm talking waiting to RENDER, not waiting to DISPLAY ... which is the focus of the 'end of the article' that you're talking about) .. is why the AMD cards 'scale better',
.
  • What evidence would even remotely support this? Both AMD and Nvidia have methods that are focused on smoothing out frame delays. There is nothing in the article that discusses drivers ignoring frame-time rendering in AMD any less than Nvidia...
  • Again, you're not comprehending what is written but instead are trying to inject your own personal opinions into this discussion. You need to point out what specific evidence shows this to be true before making statements like this.
  • Nowhere did it state that AMD lacks the ability to wait for rendering completion. AMD has its own method of averaging out frame times. Just because they don't use a frame-metering device on the GPU itself doesn't mean they don't have a mechanism to address frame-time delays... Did you even read the article? They discussed what AMD uses.






Now, the biggest problem with your entire set of statements, besides them individually being incorrect in and of themselves, is that they are all based around the idea of multi-GPU scenarios. Unfortunately, micro-stutter still happens with only a single card! All your opinions as to what you think AMD is doing to purposely ignore frame-times from the other cards go right out the window. We have no clue what Nvidia's real frame-time delays are, and until Nvidia releases an API for us to measure with, trying to discuss which one you think is better is completely and utterly pointless.
Edited by PoopaScoopa - 2/3/12 at 5:24am
post #73 of 86
Amazed by the in-depth knowledge displayed by the forum members, as I am not really a gamer or video junkie. Each time a new breed comes out we get these subtle flame threads citing every review known to man, and they all have different variables and blah blah blah. We have more than enough content right here on our forum to conclude this hypothesized battle of the giants. I'm quite sure, from the collage of threads started in the benchmarking section, we can find a playing field to crown a definitive "king of the hill", or take it to HWBOT in a personal challenge. 1 vs 1, two vs two, 3 blind mice vs 3 blind pieces of cheese is fair game.

All I can think about with the ranting and raving is simple driver support and thermal output; FPS only matters to me for a few minutes at a time anyway, so let's put it on "e-paper" and real-time this. Somebody get their SLI 580s and someone get their 6990 and ring the bell... no controls allowed. Basically, in a real-world race between cars you are not going to have the same motor, identical displacement, torque, or HP at the flywheel... yeah, you know where I'm going with this. You cannot base the performance of the cards that everyone owns on a friggin review, or any hardware for that matter.

And on a real note, not to get off topic: Poopa, send me the link so I can unlock my 6950s when they get here, sir... I'll put them up against two SLI'd 580s. :rolleyes:
Edited by PROBN4LYFE - 2/3/12 at 4:10am
post #74 of 86
Poopa, if you're so sure HD6990/HD6970 in CF is faster than GTX 580 SLI, go ahead and bench them and we can compare the results.

The sheer amount of ignorance is amazing. GTX 580 SLI > HD6990/HD6970 CF is a fact.

http://www.xbitlabs.com/articles/graphics/display/geforce-gtx-580-sli_7.html#sect0
Quote:
Finally, we’ve reached the Ultra HD display mode which is the main battlefield for premium-class graphics subsystems. The GeForce GTX 580 SLI subsystem shows its best, beating the Radeon HD 6990 by 26% on average and by 60-80% in individual tests. The only exception is Dragon Age II which, incidentally, is used by AMD for promoting its graphics cards. The increased speed of the GeForce GTX 580 SLI can make a difference in terms of playability in such games as Aliens vs. Predator and Total War: Shogun 2. Otherwise, the GeForce GTX 590 should be quite enough.

Edited by Clairvoyant129 - 2/3/12 at 5:52am
 
post #75 of 86
Quote:
Originally Posted by Clairvoyant129 View Post

Poopa, if you're so sure HD6990/HD6970 in CF is faster than GTX 580 SLI, go ahead and bench them and we can compare the results. [snip: the rest of post #74, quoted in full]
Look at the review date of the article you posted... :doh:

There is no "think"
[benchmark chart images]
Edited by PoopaScoopa - 2/3/12 at 6:19am
post #76 of 86
This thread... :rolleyes:
post #77 of 86
Quote:
  • You seem to have completely missed the point. That is exactly the opposite of what I said is relevant when comparing performance... [snip]

He's saying that if two people were to bench their cards against each other, each would choose a game that had a higher chance of them winning. But you keep coming back to BF3 as though that's this great example of the 6990 being better than SLI 580s. I happen to have BF3 and would be happy to compare FPS with someone with a 6990. I hope they average over 100 FPS... ;)

(Btw, it would probably be best if you stopped posting that ridiculous [H] BF3 comparison, as [H] is a known AMD-biased site and, more crucially, multiplayer should never be used in serious benchmarking due to the many unknown variables involved.)
Quote:
You can't admit that you have no clue what is going on one moment and then say you know which one performs better the next... [snip]

Umm, I don't know about you, but I don't need FRAPS to confirm what I see with my own eyes. Just ask Vega or some other former 6990 owners about microstutter and they will tell you all you need to know (and apparently it's as bad with the 7xxx cards as well).
Quote:
What evidence would even remotely support this? [snip]

Dude, for whatever reason, noticeable microstutter IS occurring on AMD cards. Brett is just postulating a cause, and it seems more than reasonable to me considering that Nvidia does not suffer such issues nearly as badly. It really doesn't matter that you can't prove what the cause is; IT IS HAPPENING, so obviously whatever AMD is doing (or not doing) in the driver department isn't working.
Quote:
Now, the biggest problem with your entire set of statements... is that they are all based around the idea of multi-GPU scenarios. [snip]

Again, all I can offer here is my personal experience with both GTX 560 Tis and GTX 580s. I've run both in single and dual-card configs and have noticed microstutter only on very rare occasions (typically during 3DMark and Heaven benchmark runs). While I don't own any AMD cards, there are plenty of AMD owners here on OCN who will testify that AMD cards tend to display an inordinate amount of microstutter. Some have even gotten rid of their cards because of it.

I think it's perfectly reasonable to assume, given the disparity between AMD and Nvidia cards in both CF/SLI performance and the presence of microstutter, that Nvidia is sacrificing scaling performance in order to combat microstutter while AMD is not....
post #78 of 86
"GTX 580 SLI > HD6990/HD6970 CF is a fact"

No, it's an internet myth. In some games 580 SLI is faster, and in others 6970 CF is faster.

6970 CF scaling is better than SLI scaling; that's the reason why.
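As an aside on the arithmetic: "scaling" in this context is usually the dual-card frame rate expressed as a percentage gain over a single card. A minimal sketch with made-up numbers, not figures from any review in this thread:

```python
def scaling_percent(single_fps: float, dual_fps: float) -> float:
    """Multi-GPU scaling: percentage gain of two cards over one card."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Hypothetical numbers for illustration only:
# one card at 60 FPS, two cards at 108 FPS -> 80% scaling (100% would be
# perfect doubling).
print(scaling_percent(60.0, 108.0))
```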
    
post #79 of 86
Quote:
Originally Posted by Majin SSJ Eric View Post

He's saying that if two people were to bench their cards against each other, each would choose a game that had a higher chance of them winning. [snip: the rest of post #77, quoted in full]

I don't know why you'd call [H] an AMD-biased site, especially when they constantly bash AMD; they've been called an Nvidia-biased site plenty of times as well. If that's your excuse for dismissing the results, we can post the same performance numbers from other sites. They weren't all from multiplayer, and it wasn't just BF3... I would purposely exclude games that favor AMD or Nvidia, to be fair. Some people also seem not to realize that driver versions make a huge impact on performance. Drivers need time to properly support games, especially ones that weren't sponsored by a particular brand. The difference between a 920 @ 3.7 and a Sandy Bridge @ 4.8 makes a huge difference in SLI too.


I've had a 7950 GX2, 8800 GTX SLI, 280 SLI, 460 SLI and 570 SLI, and I can tell you that I've noticed micro-stutter with Nvidia as well. The problem is, we're unable to actually measure it like we somewhat can with AMD cards. Finding out that Nvidia cards have "tens of milliseconds" more input lag than AMD isn't all that exciting either. Brett is trying to use invalid measurements as evidence that AMD has worse rendering time-delays, when we have nothing to compare them to. He likes to mention "all the evidence" when there isn't any other than personal opinions. I'm not interested in debating opinions. It's perfectly fine if that's what he believes; just don't try to pass it off as fact.



Here's another example with a 6990 downclocked to 800MHz (compare to the 880MHz 6970s):
[benchmark charts at 1600 resolution: AVP, COD, F1, JC2, MET]
Edited by PoopaScoopa - 2/3/12 at 7:47am
post #80 of 86
Quote:
Originally Posted by Majin SSJ Eric View Post

(Btw, it would probably be best if you stopped posting that ridiculous [H] BF3 comparison as [H] is a known AMD-biased site and, more crucially, Multiplayer should never be used in serious benchmarking due to the many unkown variables involved)


Which source do you trust, and why? Thanks for the answer.
This thread is locked  