Overclock.net › Forums › Graphics Cards › NVIDIA › SLI 9600GT 1GB with another 1GB or 512MB?

SLI 9600GT 1GB with another 1GB or 512MB? - Page 3

post #21 of 49
Quote:
Originally Posted by Liability View Post
Disregard everything Palit_Guy said. Anything over 512 is a complete and utter waste of money. You're limited by the 256 bit bus.
So agreed
post #22 of 49
Quote:
Originally Posted by Palit_Guy View Post
The 1GB card currently outperforms the 512MB version in AoC in this circumstance. If this isn't important to you, that's fine, go with the 512MB.
Buddy, it's been proven over and over by many reviews that 256-bit buses can't utilize 1 gb
post #23 of 49
Quote:
Originally Posted by sLowEnd View Post
Buddy, it's been proven over and over by many reviews that 256-bit buses can't utilize 1 gb
And this is proven how? That the bus is "X" bits wide so therefore it cannot handle enough memory reads and writes? I am not sure what threads you are referring to but let's do the math here so we can all see.

9600 GT
256-bit wide bus @ 900 MHz (1,800 DDR)

I will stick to the 9600 GT ASIC with the Samsung K4J5234QE BJ1A memory Palit uses on the 512 MB and 1 GB versions on the market and work through this. (Samsung Memory Data Sheet - http://www.samsung.com/global/system...24qe_rev12.pdf)

MHz x Bus Width x Data Rate per clock x bytes (8 bits = 1 byte **therefore divide by 8**) = theoretical maximum bandwidth

9600 GT:
900 million cycles per second x 256 bits x 2 transfers per clock / 8 bits per 1 byte = 57.6 GB/s

If you take the entire memory and ask the memory controller to access every piece of data each clock over time, you would need "X" time to do it. Based on the bandwidth calculated above, you could access the entire 1 GB (or 1000^3 Bytes or 8,000,000,000 bits) in 0.017361 seconds.


9600 GT:

57,600,000,000 bytes per second = Bandwidth
1,000,000,000 bytes or 1 GB = Total Memory Density

1,000,000,000 bytes / 57,600,000,000 bytes/s = 0.017361~ seconds

1 / 0.017361 = ~57.6 full density reads per second (without latency penalties)


A 9600 GT with 1 GB of memory could do a full write followed by a read of that data roughly 28 times each second. That is theoretical, before the performance penalty for latencies, as the Samsung GDDR3 has 4:4:4 timings (tRCD : CAS : tRP), or 12 cycles of latency per read (CL12) and 4 cycles per write (WL4).

That is PLENTY OF TIME to access all of the memory every second. Granted this is theoretical, but even if you could only get this performance 1/3 of the time, you could still write to and then read all of the memory once every second. So saying that 1 GB cannot be accessed in a given "X" time frame is an incorrect statement, as the bandwidth is clearly there. This is a simplified version of what can happen, since latencies can be hidden, but I am doing it this way to prove the point that even a game that is coded horrendously, with a terrible driver, can do it.
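The arithmetic above can be sketched in a few lines, using the figures quoted for the 9600 GT (`theoretical_bandwidth` is a hypothetical helper for illustration, not any real API):

```python
# Peak bandwidth and full-memory sweep time for the quoted 9600 GT config:
# 900 MHz memory clock, 256-bit bus, DDR (2 transfers per clock), 1 GB = 1000**3 bytes.

def theoretical_bandwidth(mem_clock_hz, bus_width_bits, transfers_per_clock=2):
    """Theoretical peak memory bandwidth in bytes per second."""
    return mem_clock_hz * bus_width_bits * transfers_per_clock / 8  # 8 bits per byte

bw = theoretical_bandwidth(900e6, 256)   # 57.6e9 bytes/s = 57.6 GB/s
full_sweep = 1000**3 / bw                # time to touch every byte of 1 GB once
print(f"{bw / 1e9:.1f} GB/s")                           # 57.6 GB/s
print(f"{full_sweep * 1e3:.2f} ms per full 1 GB pass")  # ~17.36 ms
print(f"{1 / full_sweep:.1f} full passes per second")   # ~57.6
```

This ignores latency and refresh penalties entirely, as the post says, so it is an upper bound rather than a real-world figure.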

The question that really matters is "Will the extra memory make a difference?" That is where Palit_Guy is going.

Yes FunCom has stated that the more memory a card has the better the performance per "X" draw distance but you don't even need to use Age of Conan to test this out. Use any MMORPG or any other game that periodically loads textures into memory. Graphics memory used to be broken into two general types, Texture Memory and Frame Buffer Memory. This doesn't happen any more as everything is virtualized in graphics memory. So unlike the 3DFX Voodoo2 days which had separate buses and memory modules, all of this data goes onto the same modules.

In a normal frame cycle you have vertices which have textures applied, then post processing. So in this cycle you will see memory fill with texture data before frame data. This is why you will see some cards at higher resolutions and with effects like AA crap out as the memory on the card fills up. A stock 8800 GT with 512 MB of memory will start to fail at 1600x1200 with detail level high and 4xAA in Crysis, while a 1 GB version at the same clock frequencies will be 10 fps faster. In fact, at 2560x1600 the 512 MB reference card cannot even render... you get a game crash and sometimes a BSoD. While it is not a "playable" frame rate, the 1 GB version will still be doing 10-11 fps. This is all because of memory density.

You will see a difference in a game that is texture intensive. MMOs are normally good to show this. As you walk from point A to point B in the virtual world, the scenery changes as you move around the world. You need to load the textures into memory. GDDR3 is fine as it can Read and Write at the same time (however not to the same addresses). If you need to keep fetching textures you will start slowing the system, as you need to clear out older data at some point. With more memory this will happen less frequently, and in some cases not much at all if there is sufficient addressing space to hold the majority of a location's textures. In most cases there will be some form of texture loading needed.

Taking the previous example a bit further, say you walk from point A to point B and then turn around and go back to A; you may have to load textures for point A twice. The 1 GB card may not have to. This has been verified in games by people with 1 GB cards. You will have to fetch data at some point, but if you have more addressing space this balance of fetch and compute can be handled more easily, as the controller has more places to read and write data.
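That walk-from-A-to-B-and-back scenario can be modeled with a toy fixed-size VRAM pool that evicts the least recently used texture when full (a hypothetical sketch; no real driver manages memory this simply):

```python
from collections import OrderedDict

class VramPool:
    """Toy texture pool with least-recently-used eviction."""
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.textures = OrderedDict()  # name -> size_mb, in LRU order
        self.uploads = 0               # counts fetches over the bus

    def bind(self, name, size_mb):
        if name in self.textures:      # already resident: no fetch needed
            self.textures.move_to_end(name)
            return
        while self.used + size_mb > self.capacity:   # evict oldest until it fits
            _, evicted_size = self.textures.popitem(last=False)
            self.used -= evicted_size
        self.textures[name] = size_mb
        self.used += size_mb
        self.uploads += 1

# Walk A -> B -> back to A, with 300 MB of textures per location:
small, big = VramPool(512), VramPool(1024)
for pool in (small, big):
    for location in ("A", "B", "A"):
        for i in range(3):
            pool.bind(f"{location}{i}", 100)
print(small.uploads, big.uploads)  # 9 6 -- the larger pool never reloads A
```

The 512 MB pool has evicted A's textures by the time you turn around, so it pays for them twice; the 1 GB pool holds both locations at once.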

That being said, you will not see a huge advantage in ALL GAMES right this second. Most game developers have moved to a 256 MB minimum for current generation games. There is already a move to 512 MB for future titles. To say that having 1 GB is a stupid idea is unfounded. If you go back about 10 years, the consumer card with the largest memory density was the Voodoo2 with 16 MB. 5 years ago it was the Radeon 9700 and GeForce Ti 4800 with 256 MB. The Radeon X1000 series was the first to 512 MB, with GeForce 7800 after that. At each step of the way people have argued the same points as in this thread, that more memory now is stupid. The scary thing is that people still use these cards... including the 9600, 9700 and 9800 with 256 MB of memory. 3% of respondents in the Valve Software Steam Survey are using Radeon 9600 cards... that is almost 55,000 people. 101,000 are using GeForce 7600 256 MB cards with a 128-bit bus. (http://www.steampowered.com/status/survey.html)

The point is that having a larger memory density may not protect you from the next major game title's appetite for rendering horsepower, but you may be able to satisfy its texture demands. This means you might be able to play a game 3 years from now (as it will most likely have a Direct3D 9 code path, even though we may be at D3D/DX 12 by then). It may only play at lower resolutions or with certain features disabled (AA and other blending effects), but you could still play it, where some people with a 512 MB card may have a harder time of it. Hopefully you will buy a new card (as all companies want consumers to buy new products), but if you are, or know, the "Average Joe Consumer", the upgrade cycle is every 2-3 years. Palit is just helping to maximize the life of a purchase by going to 1 GB NOW.
Edited by Voice-Of-Palit - 5/30/08 at 2:48am
post #24 of 49
Quote:
Originally Posted by Voice-Of-Palit View Post
And this is proven how? That the bus is "X" bits wide so therefore it cannot handle enough memory reads and writes? I am not sure what threads you are referring to but let's do the math here so we can all see. [...]

Palit is just helping to maximize the life of a purchase by going to 1 GB NOW.

tl;dr
don't you palit people have anything better to do? maybe if you spent this much effort improving your products then you'd be up there with the likes of EVGA and XFX. sheesh...get a life palit
post #25 of 49
Before this thread got sent sideways... the original question was:

"I currently have the 9600GT from PALIT thats 1GB, 700/1000MHz, and i want to SLI it with another 9600GT, but my question is, should i get another 1GB card or go with the 512MB one? what would be more better and effective?"

Palit_Guy went into an answer and someone took it sideways. Then sLowEnd jumped in with a comment about 1GB of addressing space.

The best answer to the original question is to get another 1GB card. Other than that all I did was answer the bandwidth comment with facts. I guess I should apologize for actually taking the time to be helpful.
post #26 of 49
http://www.yougamers.com/articles/13...ly_need-page7/
VRAM usage is above 512 MB for high settings and growing. As for the 256-bit bus, I don't think that limits a card's usable VRAM to 512 MB, though I still don't get why Nvidia uses 768 MB of RAM on the GTX and will use 896 MB on the GT260.
post #27 of 49
I think the 512MB version should work fine with the 1GB version, but be prepared to lose half of the VRAM on the 1GB card.

Those who said he should return it...shush. Do you think you'd do that? I'm sure he made an honest mistake when he bought it.
post #28 of 49
Quote:
Originally Posted by gamervivek View Post
http://www.yougamers.com/articles/13...ly_need-page7/
VRAM usage is above 512 MB for high settings and growing. As for the 256-bit bus, I don't think that limits a card's usable VRAM to 512 MB, though I still don't get why Nvidia uses 768 MB of RAM on the GTX and will use 896 MB on the GT260.
That is a good link; in fact the article is a nice synopsis of memory utilization. It points to the reason why Nvidia used 768 MB of memory with a 384-bit wide bus. It was done for optimal situations with the most advanced games (of its time) at high resolution. Most if not all current generation graphics cards can support resolutions up to 2560x1600. Those 3 graphs show the memory density needed per resolution and the post processing needed. World in Conflict, CoD4 and Crysis are proof that 256 MB isn't enough anymore, as developers are moving to 512 MB. It is kind of that "Field of Dreams" idea, "if you build it he will come," but for memory: if you have more memory, programmers will figure out a way to use it. The same is true for most hardware features that are useful and easy to take advantage of. It also shows that World in Conflict does not have an intense memory footprint when adding post processing effects.
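As a very rough sanity check on those graphs, the frame-buffer portion of that usage can be estimated from resolution and AA sample count alone. This is a hypothetical back-of-the-envelope helper, assuming 4 bytes of color plus 4 bytes of depth per sample and ignoring textures, geometry, compression, and driver overhead:

```python
# Rough frame-buffer cost: 4 bytes color + 4 bytes depth per AA sample.
# Textures, geometry, and driver overhead come on top of this.
def framebuffer_mb(width, height, aa_samples=1):
    return width * height * aa_samples * (4 + 4) / 2**20

for w, h in [(1600, 1200), (2560, 1600)]:
    print(f"{w}x{h} 4xAA: {framebuffer_mb(w, h, 4):.1f} MB")
# 1600x1200 4xAA: 58.6 MB
# 2560x1600 4xAA: 125.0 MB
```

Even this crude estimate shows why high resolution plus AA pushes a 512 MB card toward its limit once a few hundred MB of textures are resident as well.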

Current AA is a blending method where data A is added to data B and then divided by 2 ([A+B]/2=C type averaging). The second graph shows almost the same thing. However, as you add resolution there is a definite impact on memory addressing. The best way I can describe the difference between WiC and CoD4 is twofold: the first is the programming and the game engine used, as well as when each was developed relative to its release date. The second is how much extra stuff needs to be rendered and then worked on. I like the graph (http://www.yougamers.com/articles/13...livion_preset/) showing the post processing demands based on presets in Oblivion. That game was a card killer in its day, as the draw distances coupled with post processing effects could cripple most GPU ASICs regardless of memory. But when you look at the memory needed, in its day it was killing cards on that front as well. Going back to WiC and CoD4, you see that there is a difference in how much needs to be done after the frame is rendered, when some form of blending (AA) has to be done on data stored in memory before it hits the output buffer and is put on the screen. Some of the difference is the AA method as well as the scene drawn. A 10,000 ft level view of a map in WiC is very different from an FPS, which has massive differentiation in a frame (lighting and color) and LoD (Level of Detail). It makes sense that there is less need for blending in WiC, where AA is less noticeable at 10,000 ft, than at 100 ft in an FPS with a lot of long straight lines in objects and shadows.
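The "(A+B)/2" averaging described above can be shown with a toy two-sample resolve. This is purely illustrative; real AA resolves run in dedicated hardware, not per-pixel Python:

```python
# Average two coverage samples per pixel, the "[A+B]/2=C" blend from the post.
def resolve_2x(sample_a, sample_b):
    return [(a + b) / 2 for a, b in zip(sample_a, sample_b)]

edge_a = [255, 255, 0, 0]   # one row of samples along a polygon edge
edge_b = [255, 0, 0, 0]     # the second sample set, offset slightly
print(resolve_2x(edge_a, edge_b))  # [255.0, 127.5, 0.0, 0.0]
```

The smoothed 127.5 value at the edge is the whole point: it costs extra memory, since both sample sets must sit in VRAM until the resolve.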

Crysis is another animal unto itself. In my other post I mentioned that 512 MB cards will crap out around 1600x1200 in Crysis at "High" detail with a lot of post processing, and that gets exponentially worse as resolution increases and more post processing is done. HDR is a post processing effect, as are AA and motion blur, etc. You can see that at 1600x1200 usage is almost at 580-600 MB. The 8800 GTX and Ultra with their 768 MB of memory have no problem addressing all of the memory needs. A 512 MB card (regardless of what GPU ASIC is on the card) is out of luck at this point. While the rendering needs are extremely high, even an 8800 GT or 9600 GT with 1 GB of memory will at least meet the memory needs. Granted, this article was done using the demo; while the gold RTM version has better memory utilization optimizations, the scenario is the same for the most part with both Crysis versions.

At the end of that article the author states, "Don't be naive in thinking that games won't be using more than 512MB any time soon, because they already are in certain cases. We're still some way off before developers start making games that require half a gigabyte of video RAM to play properly, but if you want the best possible visuals, get the most RAM that you can." That is a good point. At lower resolutions with few effects enabled, a 512 MB version will beat a 1 GB version in some games. (The answer why that happens is simple: more overhead in addressing, but this is normally a nominal amount.) However, losing a frame or two per second because you have too much addressing space is nothing compared to not being able to play at a particular resolution with a specific level of detail and effects. Not having enough memory, as described in the article you linked to, is far worse.

Thanks for posting that link. It was a good read with a lot of good visuals.
post #29 of 49
I just got off the phone with palit_guy and I think this is funny.

If you're going to keep your cards going into next year, late 2008 and 2009, I would go with the 1GB cards. Developers are starting to use the 1GB and not setting up their applications for 512 and lower. The 1GB cards are becoming more popular, and because of that developers will start to design games that take advantage of that.


Of course by that time there will be FASTER 512 cards that will be able to do it faster. That's how this stuff works, we all know that. The longer you want to wait before the next upgrade, the more goodies you want on the card to make it last longer.

If I happened to buy an x1950 256 card last year because I didn't see the advantage of a 512 card, would I be running slower on newer games today than if I had bought the 512 card then?

Yes or no
post #30 of 49
I really fail to see why this absurd myth regarding 1GiB of memory being too much for a 256-bit bus continues to persist. It's not even a useful rule of thumb anymore.

The reason why 1GiB of memory is usually senseless on a 9600GT or HD38xx is not the 256-bit bus. The GPUs on these cards are simply too slow to play most games at the kind of settings that would require much more than 512MiB of video memory. The 256-bit bus is not the cause of this.

Get a game like Oblivion, where some of the texture mods will not fit in 512MiB of VRAM, and you will see a noticeable improvement in performance from going from 512MiB to 1GiB, even on a card with a 128-bit bus.

All that matters is which facet of a card's performance bottlenecks it the most in a given situation.

Quote:
Originally Posted by gamervivek View Post
vram usage is above 512mb for high settings and growing,as for 256bit bus i don't think that limits a card's usable vram to 512mb though i still don't get why nvidia use 768mb of ram on the gtx and will use 896mb on the gt260
With the number of memory chips needed to make a 384-bit bus, 768MB of RAM was the best option. 384MiB would have been too little, 1536MiB would have been a waste.

It's the same deal with the GTX 260. It has a 448-bit bus, which means they removed two of the memory chips compared to the 280. This makes 896MiB of memory.
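That chip-count arithmetic can be sketched as follows, assuming 32-bit-wide GDDR3 chips of 512 Mbit (64 MB) each, which matches the boards under discussion (the helper name is hypothetical):

```python
# Bus width determines the chip count, and the chip count determines the
# natural memory size: each GDDR3 chip here is 32 bits wide and 64 MB.
CHIP_INTERFACE_BITS = 32
CHIP_CAPACITY_MB = 64  # 512 Mbit per chip

def memory_config(bus_width_bits):
    chips = bus_width_bits // CHIP_INTERFACE_BITS
    return chips, chips * CHIP_CAPACITY_MB

for name, bus in [("8800 GTX", 384), ("GTX 280", 512), ("GTX 260", 448)]:
    chips, mb = memory_config(bus)
    print(f"{name}: {bus}-bit bus -> {chips} chips -> {mb} MB")
# 8800 GTX: 384-bit bus -> 12 chips -> 768 MB
# GTX 280: 512-bit bus -> 16 chips -> 1024 MB
# GTX 260: 448-bit bus -> 14 chips -> 896 MB
```

So the "odd" sizes like 768 MB and 896 MB fall straight out of the bus width; the only alternatives would be halving or doubling the chip capacity.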

Quote:
Originally Posted by damulta View Post
If I happened to buy an x1950 256 card last year because I didn't see the advantage of a 512 card, would I be running slower on newer games today than if I had bought the 512 card then?

Yes or no
On some games, absolutely yes, they would be much slower than with the same card with 512MiB of memory.
Edited by Blameless - 5/30/08 at 9:04am