Overclock.net › Forums › Graphics Cards › NVIDIA › NVIDIA GTX 590 Owners Club

NVIDIA GTX 590 Owners Club - Page 308

post #3071 of 5154
Thanks for the tip. Went from 13xxx to 14xxx PPD, but it's taking a lot longer to render each unit. I guess it's similar to bigadv on CPU.

I'm not OCing, so it's not that. Today I launched F@H and now it's back to normal. Gotta love PCs; never boot the same way twice. Probably had something loaded that interfered with folding. All is good.


BTW, jealous of your (and others') dual water-cooled 590s. I was going to do that recently when one online retailer got 12+ in stock, but they sold out in no time, so I missed out on that. Maybe I'll just wait for the 600 series.
X79 (18 items)
CPU: Core i7 3930K | Motherboard: Rampage IV Formula | Graphics: GeForce GTX 980 Ti Strix | RAM: Dominator GT 16GB
Storage: 850 EVO 1TB, 3x 840 EVO 1TB | Optical: BH10 | Cooling: NH-U12S + 2x NF-F12 | OS: Windows 10 Pro
Monitor: Predator X34 | Keyboard: Black Widow | PSU: AX1200 | Case: Obsidian 450D | Mouse: Taipan | Mouse Pad: Goliathus | Audio: Sound Blaster Zx
post #3072 of 5154
I feel I have to chip in and say that, while some comments about the 590's VRAM are accurate and constructive, there are a hell of a lot of people on these forums jumping on the bandwagon and whining about the 590's VRAM. It's particularly bad when a poster may have some entirely different issue with their card, but all they hear back is "that will never work, not enough VRAM". That's far too brash a statement and doesn't help people solve their problems. This isn't in relation to any immediate posts; it's something I've wanted to say for a fair while reading these forums. The last couple of investigations show the effect of running out of VRAM fairly well (and that effect is not subtle).

I'm a quad SLI 590 user and they handle large resolutions just fine. You just have to be aware of their limitations (all things have limits). I run MSI Afterburner and keep a keen eye on VRAM usage. Playing in Surround at 5760x1080 takes a hell of a lot of grunt, and while one 590 does the job it can get a bit pushed at times; in BF3, for example, you have to turn the settings down to keep good frame rates. Having two really makes the experience more pleasurable. It also makes 3D Vision at good frames and settings plausible across three screens. All you have to do is steer clear of stupidly high AA settings and you're laughing.

At 5760 I can assure you the cards aren't so powerful that you'd have AA pinned super high anyway. I prefer to keep maximum settings and run at 80-90 fps on my 120Hz screens, which is a noticeably better experience than 50-60 fps on medium. I've never been a massive fan of AA anyway, so I'm quite happy using FXAA or some minimal setting when the game permits. You just have to keep MSI Afterburner open and monitor your usage, which shouldn't be a problem if you have such a big desktop. Hell, you don't even need to do that, because you'll soon know when you run out of VRAM: it stutters to ****. This happens in Crysis 2 with the DX11 texture pack on maximum settings, for example; all you need to do is turn a couple of things down one click and the problem is gone. Not that I found that game any fun anyway. BF3 I run on high textures and it uses 1.2-1.3 GB of VRAM. I have run it on ultra textures, and while that caps out the VRAM I haven't noticed any negative effect; to avoid stutter in multiplayer I keep it on high and all is good. BF3 is a bad example really, because after the beta they quite badly broke triple-screen support. Can you honestly say the difference between high and ultra textures is that big visually?

Other games commonly don't even come close to using up all the VRAM. Skyrim, for example, runs at around 900 MB, maybe 1100. It's also worth reminding people that going from a single screen to triple does not triple the amount of VRAM used; it adds 250-350 MB to the usage most of the time, if I had to eyeball it.
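That eyeball estimate squares with simple framebuffer arithmetic. A back-of-envelope sketch, assuming 32-bit color and a hypothetical four full-resolution render targets (real engines vary widely, and the texture pool, which dominates VRAM, doesn't grow with resolution at all, which is why triple screen adds a few hundred MB rather than tripling usage):

```python
# Rough render-target cost of going from one 1920x1080 screen to three.
# Assumes 4 bytes/pixel (32-bit color) and 4 full-resolution targets
# (front/back buffer, depth, one post-process target). Both numbers are
# illustrative guesses; engines differ.

BYTES_PER_PIXEL = 4
RENDER_TARGETS = 4  # hypothetical count; engine-dependent

def framebuffer_mb(width, height):
    """MB of VRAM spent on full-resolution render targets."""
    return width * height * BYTES_PER_PIXEL * RENDER_TARGETS / 1024**2

single = framebuffer_mb(1920, 1080)
triple = framebuffer_mb(5760, 1080)

print(f"single screen: ~{single:.0f} MB")
print(f"triple screen: ~{triple:.0f} MB")
print(f"delta:         ~{triple - single:.0f} MB")
```

With a fatter G-buffer (deferred renderers) the delta lands in the 100-200 MB range, and with slightly larger streaming pools on top, that lines up with the 250-350 MB eyeballed above.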

If you want a quad SLI solution with lots of grunt and not too much noise, quad 590s come recommended from me, particularly if you want to use that extra grunt to play 3D Surround. Skyrim in 3D on triple screens really is a sight to behold.

So there you have it; my rant is over. Yes, the 1.5 GB of VRAM is something to know about when you're getting into it, and you might need to fiddle with your settings, but no, it is not the end of the world, and yes, you can still have a fantastic setup with these cards. I'd rate the VRAM issue as less of a problem than simply having to wait for decent quad SLI profiles to arrive, as quad seems a bit more fussy than a single 590.

That's one of the reasons I moved away from ATI and their dreadfully slow-to-arrive CrossFire profiles.
Edited by timd78 - 11/20/11 at 12:25pm
post #3073 of 5154
Quote:
You're reading into that statement too much, replication does occur.

http://developer.download.nvidia.com...s_2011_Feb.pdf

I stand corrected. Frame buffer memory replication *does* occur in SLI.

I can only think of two reasons for this: Either it's more efficient to replicate in order to output frames from one frame buffer memory set as opposed to routing it all from different un-replicated sets, or the four threads are sharing the same references. If it's the latter reason, I'm curious as to why they would? For frame sync'ing?
Quote:
A note on GPU Memory in SLI
In all SLI-rendering modes all the graphics API resources (such as buffers or textures) that would normally be expected to be placed in GPU memory are automatically replicated in the memory of all the GPUs in the SLI configuration. This means that on an SLI system with two 512MB video cards, there is still only 512MB of onboard video memory available to the application. Any data update performed from the CPU on a resource placed in GPU memory (for example, dynamic texture updates) will usually require the update to be broadcast to the other GPUs. This can introduce a performance penalty depending on the size and characteristics of the data. Other performance considerations are covered in the section on SLI performance.
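The quoted note boils down to this: SLI mirrors resources rather than pooling them, so the VRAM available to an application is bounded by the smallest card in the group, not the sum. A trivial sketch:

```python
# SLI mirrors resources across GPUs, so effective VRAM is bounded by
# the smallest framebuffer in the group; it does not add up.

def effective_sli_vram_mb(cards_mb):
    """Usable VRAM (MB) for an application across an SLI group."""
    return min(cards_mb)

# A GTX 590 is two GPUs with 1536 MB each on one board:
print(effective_sli_vram_mb([1536, 1536]))        # 1536, not 3072

# Quad SLI (two 590s) is still bounded the same way:
print(effective_sli_vram_mb([1536] * 4))          # still 1536
```

Which is why adding more 590s adds shader grunt but never adds usable VRAM.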
THE RIG (20 items)
CPU: Intel Core i7 2600K @ 4.6GHz, 1.38V | Motherboard: Asus P8P67 Pro | Graphics: EVGA GTX 780 Hydro Copper 2-way SLI | RAM: Corsair Dominator 8GB PC3-12800 1600MHz
Storage: OCZ RevoDrive 3, OCZ Vertex 4 | Cooling: EK + Swiftech blocks, Durelene clear tubing, Mayhem's Pastel Sunset Yellow coolant, 2x triple-120mm EK rads, 7x 120mm Gentle Typhoon | OS: Windows 7 Pro
Monitor: Dell 30" | Keyboard: Logitech DiNovo Edge | PSU: Corsair AX1200 | Case: MountainMods U2-UFO | Mouse: Logitech Performance MX | Audio: Beyerdynamic DT 880 Pro 250 ohm, Schiit Magni/Modi amp + DAC stack, Giant Squid omnidirectional lapel mic
post #3074 of 5154
Quote:
Originally Posted by jpongin View Post

Quote:
You're reading into that statement too much, replication does occur.
http://developer.download.nvidia.com...s_2011_Feb.pdf
I stand corrected. Frame buffer memory replication *does* occur in SLI.
I can only think of two reasons for this: Either it's more efficient to replicate in order to output frames from one frame buffer memory set as opposed to routing it all from different un-replicated sets, or the four threads are sharing the same references. If it's the latter reason, I'm curious as to why they would? For frame sync'ing?
Quote:
A note on GPU Memory in SLI
In all SLI-rendering modes all the graphics API resources (such as buffers or textures) that would normally be expected to be placed in GPU memory are automatically replicated in the memory of all the GPUs in the SLI configuration. This means that on an SLI system with two 512MB video cards, there is still only 512MB of onboard video memory available to the application. Any data update performed from the CPU on a resource placed in GPU memory (for example, dynamic texture updates) will usually require the update to be broadcast to the other GPUs. This can introduce a performance penalty depending on the size and characteristics of the data. Other performance considerations are covered in the section on SLI performance.

I genuinely don't know how to dumb this down, so follow me.

In a standard SLI situation, slot to slot, the cards are primary/secondary; in a dual-GPU card, there is no true master/slave.

SLI used to be basically split-screen: each card would literally take the top/bottom (two cards) or top/middle/bottom (three). That doesn't happen anymore, because the world figured out AFR (frame by frame) beats SFR (half and half).

So right now, basically 90% of the world's SLI runs AFR.

AFR in a dual-GPU card is truly random, and that's what I was discussing.

I have two cards at work where core 2 generally does the brunt of the work, and at home I have a card where core 1 does 20-30% more work. It genuinely is random.

In every dual-GPU card there's an inherent bridge; always has been, always will be. It typically runs at around 400MHz. But as I said, the frame buffer process is truly random; it has to be for the cores to work in true tandem. As we all know, not all cores are equal.

I'm extremely familiar with the 7900GX2, because it was Alienware's first custom card back when I was a lowly tech here. That's another infamous card that never worked with Vsync; it always had to be AFR.

RAM is split; it has always been split, because it has to be for the RAM to be mirrored. If it weren't, the SLI configuration would take the lower memory amount as its maximum: if you had 1500 on one card and 1200 on the other, 1200 becomes the max/primary, regardless of which core it is.

You're genuinely over-thinking how SLI works...

At any resolution wider than 1900, you very simply need more VRAM. There's no real discussion to be had; it's a fact.

With 1.5GB you'll choke the VRAM and won't get the results you're looking for, regardless of how many cards you throw at the project.

If you're running a single monitor you're fine, until the standards change with Kepler just down the road, because I have a very strong feeling 1.5GB isn't going to cut it anymore.

Now, again, the genuine max VRAM usage for most games/apps at the moment is around 1750MB, just over the 1500, which is why it actually performs "well", but not as well as it should...

In my opinion, you're better off getting two 3GB 580s, overclocking them, and fragging to the max.
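The SFR/AFR distinction described above can be sketched with a toy scheduler. Round-robin is the simplest driver model for AFR; whether the real scheduler is deterministic is exactly what's being debated in this thread, so treat this purely as an illustration of how work is divided:

```python
# Toy illustration of the two classic SLI modes. Real drivers are far
# more involved; this only shows how work divides in each scheme.

def sfr_assignment(frame, num_gpus, screen_height=1080):
    """Split Frame Rendering: every GPU renders a horizontal slice
    of every frame. Returns {gpu: (frame, y_start, y_end)}."""
    slice_h = screen_height // num_gpus
    return {gpu: (frame, gpu * slice_h, (gpu + 1) * slice_h)
            for gpu in range(num_gpus)}

def afr_assignment(frame, num_gpus):
    """Alternate Frame Rendering: whole frames rotate among GPUs
    (simplest round-robin model)."""
    return frame % num_gpus

# With 4 GPUs (quad SLI), round-robin AFR gives frame N to GPU N mod 4:
print([afr_assignment(f, 4) for f in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```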
post #3075 of 5154
Good news everyone!
Quote:
Originally Posted by EK CS 
Thank you for your interest.

We are familiar with the PCB redesign Nvidia did, and all our EK-FC590 blocks released within the last 6 months already have the necessary changes made, and the block fits.

There is a 0.64mm gap between the inductors and the block according to my 3D model, which is more than enough.
Teatime (14 items)
CPU: 6700K | Motherboard: Z170 Classified | Graphics: 2x OG Titan | RAM: Trident V 3000 16GB
Storage: SM951 512 + 840 Pro 256 + others | Cooling: Raystorm, PMP-500, 2x EK-FC Titan, 10 fans of rad | OS: Win10 Pro x64 | Monitor: Multi ASUS PB278Q / 85Hz
Keyboard: Maxkeyboards x8 Purple | PSU: Corsair AX1200i | Case: Enthoo Primo SE (Red) | Mouse: Logitech G502 | Mouse Pad: Desk | Audio: TiHD + DT770 Pro/250 + LDMkII

Epic $620 Build (11 items)
CPU: Phenom II X4 960T Zosma @ X6 | Motherboard: ASUS M4A88T-M | Graphics: Sapphire HD 6870 | RAM: G.SKILL Sniper LV 1600
Storage: WD Caviar Green 1TB | Optical: some LG thing | Cooling: Hyper 212+ | OS: Win 7 Pro x64
Monitor: Samsung 173v | PSU: Antec 430D | Case: Rosewill mATX cheapie
post #3076 of 5154
Thanks for the explanation, Masked. I completely agree with you about VRAM and how it's used in games at large resolutions. I dig *deep* and I'm all about learning new stuff, so two more questions for the experts:

1) On VRAM - I guess I'm still confused as to why the VRAM needs to be mirrored in the first place. Do the threads share the same variables? Or is it faster to output frames from a single replicated frame buffer memory set? Or is it some other reason?

2) On AFR randomness - I understand the split and alternate methods, and that AFR is the leading and more popular one. So my next question is: if frames are being displayed randomly, how come in a quad SLI configuration all my frames appear to be rendered in order? Is it the result of first-in-first-out timing between the four GPUs? And if frames render more out of order than in order, is that what causes the "micro stuttering" effect? If it's truly random, with no controller logic to order frames, am I right to conclude this wouldn't scale horizontally well, since increasing threads should increase the likelihood of randomness?
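For what it's worth on question 2, the usual explanation of micro-stutter isn't out-of-order frames but uneven frame *pacing*: AFR GPUs present in sequence, yet if one GPU's frames consistently land early, the on-screen intervals alternate short/long even though average FPS looks fine. A toy model (all numbers invented for illustration):

```python
# Toy model of AFR micro-stutter: two GPUs alternate presenting frames,
# but GPU B's frames arrive slightly early, so the displayed intervals
# alternate short/long even though the average frame rate is unchanged.

def present_times(num_frames, frame_ms=20.0, skew_ms=6.0):
    """Timestamps (ms) of displayed frames; odd frames (GPU B) land early."""
    times = []
    for f in range(num_frames):
        t = f * frame_ms
        if f % 2 == 1:          # GPU B's frames arrive skewed
            t -= skew_ms
        times.append(t)
    return times

times = present_times(6)
intervals = [round(b - a, 1) for a, b in zip(times, times[1:])]
print(intervals)  # [14.0, 26.0, 14.0, 26.0, 14.0] -- a 50 fps counter hides this
```

The frame counter averages it away, but the eye sees the alternating 14/26 ms cadence as stutter.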
post #3077 of 5154
Quote:
Originally Posted by Masked View Post

I genuinely don't know how to dumb this down, so follow me.
In a standard SLI situation, slot to slot, the cards are primary/secondary; in a dual-GPU card, there is no true master/slave.
SLI used to be basically split-screen: each card would literally take the top/bottom (two cards) or top/middle/bottom (three). That doesn't happen anymore, because the world figured out AFR (frame by frame) beats SFR (half and half).
So right now, basically 90% of the world's SLI runs AFR.
AFR in a dual-GPU card is truly random, and that's what I was discussing.
I have two cards at work where core 2 generally does the brunt of the work, and at home I have a card where core 1 does 20-30% more work. It genuinely is random.
In every dual-GPU card there's an inherent bridge; always has been, always will be. It typically runs at around 400MHz. But as I said, the frame buffer process is truly random; it has to be for the cores to work in true tandem. As we all know, not all cores are equal.
I'm extremely familiar with the 7900GX2, because it was Alienware's first custom card back when I was a lowly tech here. That's another infamous card that never worked with Vsync; it always had to be AFR.
RAM is split; it has always been split, because it has to be for the RAM to be mirrored. If it weren't, the SLI configuration would take the lower memory amount as its maximum: if you had 1500 on one card and 1200 on the other, 1200 becomes the max/primary, regardless of which core it is.
You're genuinely over-thinking how SLI works...
At any resolution wider than 1900, you very simply need more VRAM. There's no real discussion to be had; it's a fact.
With 1.5GB you'll choke the VRAM and won't get the results you're looking for, regardless of how many cards you throw at the project.
If you're running a single monitor you're fine, until the standards change with Kepler just down the road, because I have a very strong feeling 1.5GB isn't going to cut it anymore.
Now, again, the genuine max VRAM usage for most games/apps at the moment is around 1750MB, just over the 1500, which is why it actually performs "well", but not as well as it should...
In my opinion, you're better off getting two 3GB 580s, overclocking them, and fragging to the max.

Hey Masked,

Will 2x 580 3GB be able to rock three 2560x1600 monitors at ultra settings and stay above 60fps? (Or even 120fps, when the day comes that 120Hz displays can do that resolution?)

If not two cards, how about three?

Not that I plan on switching over. My next upgrade will be Kepler or later.

I'm just curious...

Also -

I don't know about anyone else, but I've been in a bit of a gaming funk since I got the chance to be in the last beta weekend of SW:TOR two weeks ago.

Played some BF3 last night, which was great and helped, but I can't get SW:TOR off my mind. I'm not really an MMO player, but as an OG Star Wars geek and a burgeoning screenwriter/games writer, I realized this game is going to take away my whole life, because I'm probably going to roll every class simply to see how each story plays out. I've already warned the missus that the marriage is in jeopardy.

I'm just going to get her into the game with me.

Lastly, I upgraded my Internet to the top-tier speed. It was an extra $20 a month on top of what I'm already paying, so I said, furk it. (Time Warner Cable RR Extreme, for those in SoCal or other TW hoods.)

And damn! That speed increase pretty much doubled or tripled my average kill score in BF3! Highly recommend it, or similar, if you have the Robert De Niros for it.
Edited by Shinobi Jedi - 11/21/11 at 8:47am
The Omni (13 items)
CPU: i7 920 @ 4.2GHz | Motherboard: EVGA X58 SLI | Graphics: 2x EVGA GTX 590 Classified LE, quad SLI | RAM: 12GB Corsair Vengeance DDR3 1600
Storage: WD VelociRaptor 300GB | Optical: LG Super Multi Blu BD Rewriter | OS: Windows 7 64 | Monitor: BenQ XL2410T 120Hz Full HD/3D ready
Keyboard: Logitech G19 | PSU: Corsair AX1200 | Case: Cooler Master HAF-X | Mouse: Razer Naga | Mouse Pad: Razer Kabuto
post #3078 of 5154
Quote:
Originally Posted by Shinobi Jedi View Post

Hey Masked,
Will 2x 580 3GB be able to rock three 2560x1600 monitors at ultra settings and stay above 60fps? (Or even 120fps, when the day comes that 120Hz displays can do that resolution?)
If not two cards, how about three?
Not that I plan on switching over. My next upgrade will be Kepler or later.
I'm just curious...

Yes.

The VRAM is a massive help in that situation...

Three cards is where it's hit or miss but, with two, definitely.
Quote:
Originally Posted by jpongin View Post

Thanks for the explanation Masked. I completely agree with you about VRAM and how it's used in games at large resolutions. I dig *deep* and I'm all about learning new stuff so two more questions to the experts:
1) On VRAM - I guess I'm still confused as to why the VRAM needs to be mirrored in the first place. Do the threads share the same variables? Or is it faster to output frames from a single replicated frame buffer memory set? Or is it some other reason?
2) On AFR randomness - I understand the split and alternate methods, and how AFR is the leading and more popular one. So I guess my next question is if these frames are being displayed randomly, how come in a Quad SLI configuration, it appears to me that all my frames are being rendered in order? Is it the result of First-In-First-Out timing between the four GPUs? And if the frames are rendered more out of order than in order, is this what causes the "micro stuttering" effect? If it's truly random with no controller logic to order frames, am I right to conclude that this wouldn't horizontally scale well? Because increasing threads should increase the likelihood of randomness right?

I honestly don't have any answers beyond what I've already said.

My understanding of SLI is that it's random... I know it used to be; if that's changed, then I'm incorrect. I genuinely don't even think about it to the extent we're discussing... Apologies.
post #3079 of 5154
But the VRAM concern is only about resolution, not screen size, correct?

So if I get a 27" or larger (if and when they're made) 120Hz 1080p monitor, I shouldn't have a VRAM issue, yes?

Batman: AC is unlocked on Steam! Ahh Yeah, Sucka!
post #3080 of 5154
Yes
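That "yes" follows from pixel-count arithmetic: VRAM pressure tracks resolution, not panel inches, so a 27" 1080p screen costs the same as a 22" one. A quick comparison at a nominal 4 bytes per pixel per buffer (illustrative only; games keep many buffers plus textures):

```python
# VRAM pressure tracks pixel count, not inches. One buffer at 4 B/px:

def pixels(width, height):
    return width * height

def buffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

for name, (w, h) in {
    "27in 1080p":      (1920, 1080),
    "30in 2560x1600":  (2560, 1600),
    "3x 2560x1600":    (7680, 1600),
}.items():
    print(f"{name:16s} {pixels(w, h):>10,} px  ~{buffer_mb(w, h):.1f} MB/buffer")
```

Triple 2560x1600 pushes roughly six times the pixels of 1080p, which is where a 1.5 GB ceiling starts to matter.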
My computer (9 items)
CPU: 3930K | Motherboard: Asus Sabertooth X79 | Graphics: 2x Gigabyte G1 1080 | RAM: 16GB DDR3
Storage: OCZ Vertex 3 | Cooling: Noctua NH-D14 | Monitor: ASUS MG279Q | PSU: Corsair TX850