Overclock.net › Forums › Industry News › Hardware News › [VC] AMD Radeon R9 290X with Hawaii GPU pictured, has 512-bit 4GB Memory

[VC] AMD Radeon R9 290X with Hawaii GPU pictured, has 512-bit 4GB Memory - Page 101

post #1001 of 1055
Quote:
Originally Posted by KyadCK View Post

That one motherboard had a PLX chip in it that enabled full 3.0x16 bandwidth between the cards, while using 2.0 x32 back to the NorthBridge.

I recall NVidia saying that Kepler PCI-E 3.0 cards are able to communicate directly between themselves when they are on the same PCI-E lanes, i.e. they are able to initiate and control communication with another PCI-E 3.0 compatible device. Perhaps AMD is doing it too.
post #1002 of 1055
Quote:
Originally Posted by Moragg View Post

A quick google shows there are no AMD CPUs which actually support PCI-E 3.0, so a 990FX mobo with PCI-E 3.0 is kinda useless. Oh, AMD.

the northbridge on AMD boards supplies the PCI-e lanes (except FMx boards) and the CPU has a link to the northbridge... nice troll though.
post #1003 of 1055
Quote:
Originally Posted by Raghar View Post

Quote:
Originally Posted by KyadCK View Post

That one motherboard had a PLX chip in it that enabled full 3.0x16 bandwidth between the cards, while using 2.0 x32 back to the NorthBridge.

I recall NVidia saying that Kepler PCI-E 3.0 cards are able to communicate directly between themselves when they are on the same PCI-E lanes, i.e. they are able to initiate and control communication with another PCI-E 3.0 compatible device. Perhaps AMD is doing it too.

Technically speaking, there will always need to be a chip in the way to route traffic if there are more than two devices on a given bus. They could bounce off the DMI or Northbridge without bothering the CPU no problem, but unless we're getting into another stage of IOMMU that allows for every card to understand every other chip on the same bus and route accordingly (which would be awesome), nVidia would need to add address-aware chips to their GPUs, and the motherboard would have to understand that the information is not for it. That last bit is the most important.

This does not prevent them from doing it 7990 style though where the GPUs can talk to one another through the PLX chip directly without bothering the Northbridge/CPU, but that is not the same as direct-connect since the PLX chip is still there.

If anything, this is exactly why SLI/XFire bridges came to be in the first place, to avoid the inherent latency in being forced to go back to the PCI/AGP/PCI-e controller. We just now have a crapload of bandwidth that we aren't using, so why not.
post #1004 of 1055
Quote:
Originally Posted by KyadCK View Post

You... Don't understand how AMD's FX boards work, do you?

There is no such thing as "CPU Support" because the NorthBridge is on the motherboard. The CPU connects to the motherboard in only 3 ways.

1: Power
2: RAM
3: HyperTransport

That's it. All PCI-e interaction is handled by the NorthBridge and any chips under it. That one motherboard had a PLX chip in it that enabled full 3.0x16 bandwidth between the cards, while using 2.0 x32 back to the NorthBridge.

Basically it made two 7970s work exactly like a 7990.


Oh, and for future reference, "raw data" transfers at my resolution (3510x1920 32-bit color) are 25MB per frame. 1.5GB/s for 60fps. About 2GB/s per 4k monitor.

If using two GPUs with AFR, the second GPU only needs to send ~1GB/s of information to the other (30fps at 4k 32-bit) over PCI-e. That is the equivalent of a PCI-e 3.0 x1, or a PCI-e 2.0 x2.

Keep in mind this is already far higher than the resolution most people will play at. A 2560x1440 user will have to send half that. Same for a 1080p 120-fps user.

Even PCI-e 2.0 x8 slots will be able to deal with this just fine.

In other words, please stop crying about it. It will not affect you in any way unless you're trying to run quad-fire on PCI-e 2.0 x4's. Every Z68, Z77, Z87, and 990FX motherboard will be able to handle this.

My bad, I'd never even looked into this before. I just searched "8350 PCI-E 3.0" and saw a few places which said it didn't support PCI-e 3.0. If I understand your post correctly, the GPUs communicate through the NB without using the CPU, so the CPU can send it the data using PCI-e 2.0 and the GPUs can communicate using 3.0?

But I also read that because of latency reasons you needed the full 3.0 x16 to CFX Hawaii, or was that wrong too?
Quote:
Originally Posted by Master__Shake View Post

the northbridge on AMD boards supplies the PCI-e lanes (except FMx boards) and the CPU has a link to the northbridge... nice troll though.

It wasn't a troll, just pure ignorance. I've edited the original post so others don't get the wrong idea.
post #1005 of 1055
Quote:
Originally Posted by Moragg View Post

Quote:
Originally Posted by KyadCK View Post

You... Don't understand how AMD's FX boards work, do you?

There is no such thing as "CPU Support" because the NorthBridge is on the motherboard. The CPU connects to the motherboard in only 3 ways.

1: Power
2: RAM
3: HyperTransport

That's it. All PCI-e interaction is handled by the NorthBridge and any chips under it. That one motherboard had a PLX chip in it that enabled full 3.0x16 bandwidth between the cards, while using 2.0 x32 back to the NorthBridge.

Basically it made two 7970s work exactly like a 7990.


Oh, and for future reference, "raw data" transfers at my resolution (3510x1920 32-bit color) are 25MB per frame. 1.5GB/s for 60fps. About 2GB/s per 4k monitor.

If using two GPUs with AFR, the second GPU only needs to send ~1GB/s of information to the other (30fps at 4k 32-bit) over PCI-e. That is the equivalent of a PCI-e 3.0 x1, or a PCI-e 2.0 x2.

Keep in mind this is already far higher than the resolution most people will play at. A 2560x1440 user will have to send half that. Same for a 1080p 120-fps user.

Even PCI-e 2.0 x8 slots will be able to deal with this just fine.

In other words, please stop crying about it. It will not affect you in any way unless you're trying to run quad-fire on PCI-e 2.0 x4's. Every Z68, Z77, Z87, and 990FX motherboard will be able to handle this.

My bad, I'd never even looked into this before. I just searched "8350 PCI-E 3.0" and saw a few places which said it didn't support PCI-e 3.0. If I understand your post correctly, the GPUs communicate through the NB without using the CPU, so the CPU can send it the data using PCI-e 2.0 and the GPUs can communicate using 3.0?

But I also read that because of latency reasons you needed the full 3.0 x16 to CFX Hawaii, or was that wrong too?
Quote:
Originally Posted by Master__Shake View Post

the northbridge on AMD boards supplies the PCI-e lanes (except FMx boards) and the CPU has a link to the northbridge... nice troll though.

It wasn't a troll, just pure ignorance. I've edited the original post so others don't get the wrong idea.

Why would you need it? It needs to be able to transfer that much data in that much time. While more speed and lanes will get it there faster, it only needs to be there by the deadline, which for a 60Hz monitor is 30 times every second for 2-card AFR.

For a 4k 60hz screen, it needs to be able to transfer a full 34MB of data every 33 milliseconds to get there by the deadline.

PCI-e 2.0 x8 (4GB/s) can transfer that in 8 milliseconds.
PCI-e 2.0 x8 can transfer one 1080p frame in 2 milliseconds.

Theoretically speaking, a PCI-e 2.0 x8 has enough speed to handle 4k eyefinity at 60hz. It would be like running crossfire in a PCI-e x2 slot, but you could. Practically speaking, you want a 2.0 x16 or 3.0 x8. Even at this most extreme of resolutions you still don't need a PCI-e 3.0 x16.

oh, and because math:

PCI-e 3.0 x16 (16GB/s) can transfer 4k 60hz in 2 milliseconds.
PCI-e 3.0 x16 can transfer one 1080p frame in 0.5 milliseconds.

Pretty simple math to work out how much other resolutions and refresh rates will need.
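The deadline arithmetic above can be sanity-checked in a few lines of Python. This is just a sketch; the helper names are mine, and it assumes 32-bit color (4 bytes per pixel) and binary megabytes, matching the post's figures.

```python
# Back-of-the-envelope check of the AFR deadline math (helper names are
# hypothetical, not from any real API). Assumes 32-bit color = 4 bytes
# per pixel and binary MiB/GiB, as the post does.

def frame_mb(width, height, bytes_per_pixel=4):
    """Raw framebuffer size in MiB."""
    return width * height * bytes_per_pixel / 2**20

def transfer_ms(size_mb, link_gb_s):
    """Milliseconds to move size_mb over a link rated at link_gb_s GiB/s."""
    return size_mb / (link_gb_s * 1024) * 1000

uhd = frame_mb(3840, 2160)   # ~31.6 MiB per 4k frame
pcie20_x8 = 4                # GiB/s for a PCI-e 2.0 x8 link

print(round(transfer_ms(uhd, pcie20_x8), 1))                    # ~7.7 ms, well inside the 33 ms AFR budget
print(round(transfer_ms(frame_mb(1920, 1080), pcie20_x8), 1))   # ~1.9 ms for a 1080p frame
```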
post #1006 of 1055
Quote:
Originally Posted by KyadCK View Post

Why would you need it? It needs to be able to transfer that much data in that much time. While more speed and lanes will get it there faster, it only needs to be there by the deadline, which for a 60Hz monitor is 30 times every second for 2-card AFR.

For a 4k 60hz screen, it needs to be able to transfer a full 34MB of data every 33 milliseconds to get there by the deadline.

PCI-e 2.0 x8 (4GB/s) can transfer that in 8 milliseconds.
PCI-e 2.0 x8 can transfer one 1080p frame in 2 milliseconds.

Theoretically speaking, a PCI-e 2.0 x8 has enough speed to handle 4k eyefinity at 60hz. It would be like running crossfire in a PCI-e x2 slot, but you could. Practically speaking, you want a 2.0 x16 or 3.0 x8. Even at this most extreme of resolutions you still don't need a PCI-e 3.0 x16.

oh, and because math:

PCI-e 3.0 x16 (16GB/s) can transfer 4k 60hz in 2 milliseconds.
PCI-e 3.0 x16 can transfer one 1080p frame in 0.5 milliseconds.

Pretty simple math to work out how much other resolutions and refresh rates will need.
Maybe I am reading you wrong, but since the PCI-e 2.0 x8 is already saturated, adding enough data to fill the other 2.0 x8 (= PCI-e 2.0 x16) leaves no room for any overhead, so it would need a PCI-e 3.0 x16 connection.


btw, thanks for the very informative post
post #1007 of 1055
Quote:
Originally Posted by looniam View Post

Maybe I am reading you wrong, but since the PCI-e 2.0 x8 is already saturated, adding enough data to fill the other 2.0 x8 (= PCI-e 2.0 x16) leaves no room for any overhead, so it would need a PCI-e 3.0 x16 connection.


btw, thanks for the very informative post
Wait, how do you get x8 = x16?!
If both are on a 2.0 x8 bus they won't cause any problems. I figure most will be running either 2.0 x16 or 3.0 x8, which are effectively the same.
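The "2.0 x16 and 3.0 x8 are effectively the same" point checks out against the published per-lane rates. The values below come from the PCI-e spec, not this thread: 2.0 runs 5 GT/s with 8b/10b encoding (~500 MB/s per lane), 3.0 runs 8 GT/s with 128b/130b (~985 MB/s per lane); the helper is a sketch of my own.

```python
# Per-direction PCI-e bandwidth from known spec rates (spec values, not
# from this thread): 2.0 = 5 GT/s with 8b/10b encoding (~500 MB/s/lane),
# 3.0 = 8 GT/s with 128b/130b encoding (~985 MB/s/lane).

PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def link_mb_s(gen, lanes):
    """Approximate one-way bandwidth of a PCI-e link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

print(link_mb_s("2.0", 16))  # 8000
print(link_mb_s("3.0", 8))   # 7880, within ~2% of 2.0 x16
```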
post #1008 of 1055
Quote:
Originally Posted by looniam View Post

Maybe I am reading you wrong, but since the PCI-e 2.0 x8 is already saturated, adding enough data to fill the other 2.0 x8 (= PCI-e 2.0 x16) leaves no room for any overhead, so it would need a PCI-e 3.0 x16 connection.

I think looniam has hit the nail on the head: PCI-e 2.0 x16 is saturated already. And (maybe?) if Mantle allows us to utilise the cards better, that could require greater bandwidth anyway, hence PCI-E 3.0 x16 may be required.
post #1009 of 1055
Quote:
Originally Posted by Moragg View Post

I think looniam has hit the nail on the head: PCI-e 2.0 x16 is saturated already. And (maybe?) if Mantle allows us to utilise the cards better, that could require greater bandwidth anyway, hence PCI-E 3.0 x16 may be required.
All speculation. I figure it won't need more than 2.0 x16, and even that is an overestimate.

Not that it is any of my concern: with 40 PCI-e 3.0 lanes per processor I think I have plenty; 320 PCI-e lanes actually, just lacking the physical slots.
(Well, I don't know until I see whatever platform Supermicro makes of it.)
post #1010 of 1055
Quote:
Originally Posted by Moragg View Post

Quote:
Originally Posted by looniam View Post

Maybe I am reading you wrong, but since the PCI-e 2.0 x8 is already saturated, adding enough data to fill the other 2.0 x8 (= PCI-e 2.0 x16) leaves no room for any overhead, so it would need a PCI-e 3.0 x16 connection.

I think looniam has hit the nail on the head: PCI-e 2.0 x16 is saturated already. And (maybe?) if Mantle allows us to utilise the cards better, that could require greater bandwidth anyway, hence PCI-E 3.0 x16 may be required.

You can crossfire on PCI-e 2.0 x4 with "only" a minimal impact of maybe 10%.

PCI-e 2.0 x8 has zero impact.

So my ass it's saturated. We aren't even close.
Quote:
Originally Posted by maarten12100 View Post

Quote:
Originally Posted by looniam View Post

Maybe I am reading you wrong, but since the PCI-e 2.0 x8 is already saturated, adding enough data to fill the other 2.0 x8 (= PCI-e 2.0 x16) leaves no room for any overhead, so it would need a PCI-e 3.0 x16 connection.


btw, thanks for the very informative post
Wait, how do you get x8 = x16?!
If both are on a 2.0 x8 bus they won't cause any problems. I figure most will be running either 2.0 x16 or 3.0 x8, which are effectively the same.

This is actually where more cards hurts more.

As you offload more and more data, you start to need more and more bandwidth. It never gets as bad as 100% more or anything, but with 3-card crossfire, you need enough bandwidth to handle 40hz out of the 60. With Quadfire you'll need 45hz out of the 60.

For 1080 60hz you need:
1 Card: no bandwidth.
2 Card: 237MB/s (per card)
3 Card: 316MB/s (for the first card, 158MB/s for the others)
4 Card: 356MB/s (for the first card, 118MB/s for the others)

For 1440 60hz or 1080 120hz you need:
1 Card: no bandwidth.
2 Card: 474MB/s (per card)
3 Card: 632MB/s (for the first card, 316MB/s for the others)
4 Card: 712MB/s (for the first card, 237MB/s for the others)

For 1080 60hz eyefinity you need:
1 Card: no bandwidth.
2 Card: 711MB/s (per card)
3 Card: 948MB/s (for the first card, 474MB/s for the others)
4 Card: 1068MB/s (for the first card, 356MB/s for the others)

For 4k 60hz eyefinity you need:
1 Card: no bandwidth.
2 Card: 948MB/s (per card)
3 Card: 1264MB/s (for the first card, 632MB/s for the others)
4 Card: 1424MB/s (for the first card, 474MB/s for the others)

For 1440 60hz eyefinity you need:
1 Card: no bandwidth.
2 Card: 1422MB/s (per card)
3 Card: 1896MB/s (for the first card, 948MB/s for the others)
4 Card: 2136MB/s (for the first card, 712MB/s for the others)

Note how even with Quadfire 1440 Eyefinity at 60hz it's -still- only PCI-e 2.0 x4 worth of bandwidth. That's x2 of PCI-e 3.0. And this isn't for every card, this is only for the first card displaying the image. That leaves a full x6 worth of bandwidth with PCI-e 3.0 x8. More than enough.

You could easily get away with 3.0 x8 (or 2.0 x16) for the first card and 3.0 x4 (or 2.0 x8) on the others with extreme resolutions and be fine. SB-e/Ivy-e users have nothing to fear at all. SB/Ivy/Haswell users don't have enough lanes to really Quadfire without a PLX chip (which fixes the problem anyway), and AMD users probably shouldn't be doing more than Tri-fire anyway.
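All of the per-card figures in the tables above follow one formula. Here is a minimal Python sketch of it (the helper name is mine); it assumes 32-bit color, binary MB, and that in N-card AFR the display card receives (N-1)/N of the frames while each secondary card sends 1/N of them.

```python
# Sketch of the AFR bandwidth tables above (helper name is mine).
# Assumes 4 bytes per pixel and binary MB; in N-card AFR the display
# card receives (N-1)/N of the frames, each secondary sends 1/N of them.

def afr_bandwidth(width, height, hz, cards):
    """Return (MB/s into the display card, MB/s out of each secondary)."""
    frame = width * height * 4 / 2**20      # MiB per frame
    per_second = frame * hz                 # MiB/s for the full frame stream
    return per_second * (cards - 1) / cards, per_second / cards

first, others = afr_bandwidth(1920, 1080, 60, 4)
print(round(first), round(others))   # ~356 and ~119 MB/s: the 1080p quad-fire row
```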
Quote:
Originally Posted by maarten12100 View Post

Quote:
Originally Posted by Moragg View Post

I think looniam has hit the nail on the head: PCI-e 2.0 x16 is saturated already. And (maybe?) if Mantle allows us to utilise the cards better, that could require greater bandwidth anyway, hence PCI-E 3.0 x16 may be required.
All speculation. I figure it won't need more than 2.0 x16, and even that is an overestimate.

Not that it is any of my concern: with 40 PCI-e 3.0 lanes per processor I think I have plenty; 320 PCI-e lanes actually, just lacking the physical slots.
(Well, I don't know until I see whatever platform Supermicro makes of it.)

No Intel Extreme/Xeon user will have any problems at all with this, even in Quadfire.
Edited by KyadCK - 9/29/13 at 2:43pm