Overclock.net › Forums › Industry News › Hardware News › [Fudzilla] Micron's answer to HBM - the company to double the speed of GDDR5 in 2016 (possibly named GDDR6)

post #91 of 119
Thread Starter 
Well, all I'm curious about is that AMD supporters now think HBM is so much better because it brings down power consumption on VRAM, which is a ridiculous argument since we're only talking a few watts, maybe a few dozen. When the GTX 970 launched and beat the R9 290 while drawing 150W less, they didn't give a damn. They didn't give a damn when certain "power improvements" on the 390X made it draw 50W more than the 290X. If you want power savings, HBM enthusiasts, look at the 980 SLI vs. 390X CF power draw and performance charts.
Edited by Klocek001 - 12/14/15 at 8:29am
post #92 of 119
Quote:
Originally Posted by mtcn77 View Post

Can I quote you on that? I need a source in order to bring about some disappointment in another forum.

Don't quote me on it. Quote Hynix.

Remember that Q2 2016 is when they begin HBM2 production. Depending on yields, nobody knows how long it will take before it's available in quantities that can supply Nvidia or AMD for their graphics cards.
Most likely we won't see HBM2 cards until after summer, if I were to guess. Late Q3 perhaps. Maybe Q4.
Another thing to note is that HBM1 production began in Q2 2014, but a product using it wasn't available until a year later. Of course that also depends on how far along AMD was with Fiji engineering, but there is no doubt it takes a while before production can feed a product's assembly line, yields meet quality criteria, and the chips are cheap enough to keep prices down.


Edited by iLeakStuff - 12/14/15 at 8:40am
post #93 of 119
Thread Starter 
Closing out the price-per-kWh part of the discussion: I found a nice kWh/price calculator online, and in Poland the difference between running 980 SLI vs. 390X CrossFire, with 3 hours a day of gaming, would amount to 240 PLN a year, which is the cost of a GeForce GT 730.
post #94 of 119
Quote:
Originally Posted by Klocek001 View Post

Closing out the price-per-kWh part of the discussion: I found a nice kWh/price calculator online, and in Poland the difference between running 980 SLI vs. 390X CrossFire, with 3 hours a day of gaming, would amount to 240 PLN a year, which is the cost of a GeForce GT 730.

Someone who runs XFire or SLI cares about power consumption? That's new.

post #95 of 119
Quote:
Originally Posted by Asmodian View Post

That quote is for total card power, isn't it? I don't think AMD improved the efficiency of GCN much, if at all, and the Fury has 27% more shaders than the 290X.
I assume the power requirements will go up for GDDR5X vs. GDDR5; twice the bandwidth never comes for free using basically the same technology.
To me it looks like HBM2 and GDDR5X will have a relationship similar to HBM1 and GDDR5, though at 16GB HBM2 has 4x the max capacity of HBM1, compared to only double when using GDDR5X instead of current GDDR5.

HBM2 offers ~1000GB/s for ~60W while GDDR5X offers ~680GB/s for ~80W (maybe more). HBM2 will max out at 16GB while GDDR5X will max out at 24GB, with HBM2 also being quite a bit more expensive per GB.

That said, I would be happy to buy a GP102 with 12GB of GDDR5X next year. Of course it depends on how the entire package works, but I think GDDR5X would be good enough given a well-balanced architecture. Double the current bandwidth sounds fine to me, especially if it keeps the price from increasing; 16GB of HBM2 would still be better if I could afford it. tongue.gif

HBM2 is better than GDDR5X in performance, no doubt. It will replace GDDR5 eventually. But I think it might take longer than some people hope.

What if the bandwidth advantage doesn't matter in games, ~700GB/s GDDR5X performs just as well as ~1000GB/s HBM2, and GDDR5X costs less than HBM2? What does Nvidia see here? Profit?
Why rush to buy HBM2 stacks? Not only do you need to retool to make a new type of package, with HBM2 stacks and logic dies on the same silicon interposer. With GDDR5X you can just get the chips from Micron and continue to use current designs, with the core in the middle surrounded by GDDR5X chips.

I think Nvidia might go with GDDR5X for GP108, GP106, GP104 and GP102 for all of 2016, if I had to guess, then start using HBM2 in 2017, or late 2016 perhaps.
Nvidia has GDDR5X coming up for Pascal, without doubt. The signs are there.


Edited by iLeakStuff - 12/14/15 at 8:58am
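Asmodian's estimates above reduce to a simple bandwidth-per-watt comparison. A minimal sketch, keeping in mind these are the thread's rough guesses, not spec values:

```python
# Bandwidth per watt using the rough figures quoted in this thread
# (~1000 GB/s @ ~60 W for HBM2, ~680 GB/s @ ~80 W for GDDR5X).
# These are forum estimates, not official specifications.
hbm2_bw, hbm2_w = 1000, 60      # GB/s, watts (assumed)
gddr5x_bw, gddr5x_w = 680, 80   # GB/s, watts (assumed)

print(f"HBM2:   {hbm2_bw / hbm2_w:.1f} GB/s per watt")
print(f"GDDR5X: {gddr5x_bw / gddr5x_w:.1f} GB/s per watt")
# -> HBM2:   16.7 GB/s per watt
# -> GDDR5X: 8.5 GB/s per watt
```

On those numbers HBM2 is roughly twice as bandwidth-efficient, which is the trade the cost argument below is weighing.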
post #96 of 119
A lot of talk about HBM using less power and producing less heat. Keep in mind it also has much less surface area to dissipate that heat, and since it sits right on the package next to the GPU die, the GPU itself is going to be forced to eat some of the heat the HBM produces, unlike with GDDR5.

Quite the mixed bag for overclocking. Sure, it frees up more of the power budget for the GPU core, increasing possible overclocks, but the core will run hotter overall, decreasing them.

I almost feel like air-cooled cards would be better off with GDDR5X while watercooled cards will be better off with HBM2 when it comes to overclocking.
post #97 of 119
Quote:
Originally Posted by iLeakStuff View Post

HBM2 is better than GDDR5X in performance, no doubt. It will replace GDDR5 eventually. But I think it might take longer than some people hope.

What if the bandwidth advantage doesn't matter in games, ~700GB/s GDDR5X performs just as well as ~1000GB/s HBM2, and GDDR5X costs less than HBM2? What does Nvidia see here? Profit?
Why rush to buy HBM2 stacks? Not only do you need to retool to make a new type of package, with HBM2 stacks and logic dies on the same silicon interposer. With GDDR5X you can just get the chips from Micron and continue to use current designs, with the core in the middle surrounded by GDDR5X chips.

I think Nvidia might go with GDDR5X for GP108, GP106, GP104 and GP102 for all of 2016, if I had to guess, then start using HBM2 in 2017, or late 2016 perhaps.
Nvidia has GDDR5X coming up for Pascal, without doubt. The signs are there.


Yes, if HBM loses its power advantage then GDDR5X might even allow a higher-performance design. I had forgotten the lower voltage of GDDR5X. Wasting power budget on unnecessary bandwidth is counterproductive.
post #98 of 119
Quote:
Originally Posted by Asmodian View Post

Yes, if HBM loses its power advantage then GDDR5X might even allow a higher-performance design. I had forgotten the lower voltage of GDDR5X. Wasting power budget on unnecessary bandwidth is counterproductive.

We saw very little gain from the 512GB/s Fury X against the 330GB/s 980 Ti. Everyone was expecting the Fury X to be in a different league because of that. Turns out the 980 Ti beat it at 1440p and they were equal at 4K, which is the limit of what gamers will ever need in terms of bandwidth.

One could argue that faster cards with 2x the transistor count will need more bandwidth. That's most likely true. But will they need 1000GB/s? Or will it be just another 980 Ti vs. Fury X situation if Nvidia goes for 800GB/s GDDR5X against AMD's 1000GB/s R9 490X?
One could think Nvidia would want to make GeForce (gamer) cards with GDDR5X while leaving HBM2, with its greater bandwidth, for the bandwidth-hungry users, aka Tesla/Quadro. Which is why I'm curious why GP102 is in the mix.

Yes, GDDR5X uses less power than GDDR5. It runs at 1.35V while GDDR5 runs at 1.5V. Not sure how much less it will be, but considering HBM2 increases power consumption while GDDR5X goes down, and HBM1 used ~25W less power than GDDR5, I think the difference will be moot.

It will probably all come down to availability (yields) and cost. Business as usual.
I also think mobile cards, MXM cards, will require GDDR5X, because the MXM specification may need to be completely reworked to accommodate the bigger silicon that comes with HBM. That could take a year or more. The MXM spec is the OEMs' guideline for designing mobile cards, and mobile is very important for both Nvidia and AMD. GDDR5X works with existing specifications; no change needed.
And mobile always gets Gx104/204 chips from Nvidia. I doubt Nvidia will do GDDR5X for mobile and HBM for desktop.
Edited by iLeakStuff - 12/14/15 at 10:16am
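The "will faster cards need 1000GB/s" question in the post above can be sanity-checked with a deliberately crude scaling estimate. A sketch, assuming bandwidth demand scales linearly with shader throughput (a simplification; real workloads are cache- and compression-dependent), using the thread's figures:

```python
# Back-of-the-envelope: if a next-gen GPU delivers ~2x the throughput of
# a 980 Ti and bandwidth demand scaled linearly with throughput, the
# 980 Ti's 330 GB/s would imply ~660 GB/s needed.  Linear scaling is a
# deliberately crude assumption; the speed grades are thread estimates.
gtx980ti_bw = 330        # GB/s
throughput_scale = 2.0   # assumed generational throughput increase
needed = gtx980ti_bw * throughput_scale

print(f"Estimated requirement: {needed:.0f} GB/s")
print(f"Headroom vs. ~800 GB/s GDDR5X: {800 - needed:.0f} GB/s")
print(f"Headroom vs. ~1000 GB/s HBM2:  {1000 - needed:.0f} GB/s")
```

On that naive estimate, ~660 GB/s sits under both options, which is consistent with the post's suspicion that GDDR5X could be "enough" for gaming parts.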
post #99 of 119
Quote:
Originally Posted by Asmodian View Post

Quote:
Originally Posted by iLeakStuff View Post

HBM2 is better than GDDR5X in performance, no doubt. It will replace GDDR5 eventually. But I think it might take longer than some people hope.

What if the bandwidth advantage doesn't matter in games, ~700GB/s GDDR5X performs just as well as ~1000GB/s HBM2, and GDDR5X costs less than HBM2? What does Nvidia see here? Profit?
Why rush to buy HBM2 stacks? Not only do you need to retool to make a new type of package, with HBM2 stacks and logic dies on the same silicon interposer. With GDDR5X you can just get the chips from Micron and continue to use current designs, with the core in the middle surrounded by GDDR5X chips.

I think Nvidia might go with GDDR5X for GP108, GP106, GP104 and GP102 for all of 2016, if I had to guess, then start using HBM2 in 2017, or late 2016 perhaps.
Nvidia has GDDR5X coming up for Pascal, without doubt. The signs are there.


Yes, if HBM loses its power advantage then GDDR5X might even allow a higher-performance design. I had forgotten the lower voltage of GDDR5X. Wasting power budget on unnecessary bandwidth is counterproductive.

Except, again, the memory controller matters more than the chips. Guess which memory type adds complexity to the controller.
Quote:
Originally Posted by DNMock View Post

A lot of talk about HBM using less power and producing less heat. Keep in mind it also has much less surface area to dissipate that heat, and since it sits right on the package next to the GPU die, the GPU itself is going to be forced to eat some of the heat the HBM produces, unlike with GDDR5.

Quite the mixed bag for overclocking. Sure, it frees up more of the power budget for the GPU core, increasing possible overclocks, but the core will run hotter overall, decreasing them.

I almost feel like air-cooled cards would be better off with GDDR5X while watercooled cards will be better off with HBM2 when it comes to overclocking.

Still irrelevant. HBM uses nowhere near the power the GPU does, and if the card is even remotely active, the GPU will be heating the HBM, not the other way around. If anything, it makes the card easier to cool, since they can use a single vapor chamber for the entire thing.
Quote:
Originally Posted by Klocek001 View Post

Well, all I'm curious about is that AMD supporters now think HBM is so much better because it brings down power consumption on VRAM, which is a ridiculous argument since we're only talking a few watts, maybe a few dozen. When the GTX 970 launched and beat the R9 290 while drawing 150W less, they didn't give a damn. They didn't give a damn when certain "power improvements" on the 390X made it draw 50W more than the 290X. If you want power savings, HBM enthusiasts, look at the 980 SLI vs. 390X CF power draw and performance charts.

Very few people care that power consumption for the card as a whole goes down.

Everyone cares that if power does not need to be assigned to VRAM, it can be assigned to the GPU instead, allowing for a stronger design inside the GPU's 300W cap. That is the entire premise of efficiency.

What I personally find curious is why you are being such a fanboy for GDDR5X when both AMD and Nvidia will be using HBM2 for their best parts. There is no argument: GDDR of any kind, at least for the foreseeable future, is less efficient and less powerful than HBM as a whole. Its only benefit is being cheaper, and that benefit does not apply to flagship cards that are sold as $4000 Teslas and FirePros.
post #100 of 119
Who cares what they use, as long as the performance is there. If GDDR5X doesn't bottleneck the processor and is cheaper to implement, then I'm all for it. The power difference is most likely not going to be enough to make much of a real difference... we will unlock voltage and blow through the power ceiling anyway.