Originally Posted by Asmodian
Yes, if HBM loses its power advantage then GDDR5X might even allow a higher-performance design. I had forgotten the lower voltage of GDDR5X. Wasting power budget on unnecessary bandwidth is counterproductive.
We saw very little gain from the Fury X's 512GB/s against the 980Ti's 330GB/s. Everyone was expecting the Fury X to be in a different league because of that. It turned out the 980Ti beat it at 1440p and they were equal at 4K, which is the limit of what gamers will ever need in terms of bandwidth.
One could argue that faster cards with 2x the transistor count will need more bandwidth. That's most likely true. But will they need 1000GB/s? Or will it be just another 980Ti vs Fury X situation if Nvidia goes with 800GB/s of GDDR5X against AMD's 1000GB/s R9 490X?
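To put those numbers in context, peak memory bandwidth is just bus width times per-pin data rate. Here's a rough sketch; the bus widths and data rates below are my own illustrative assumptions for these cards, not confirmed specs:

```python
# Rough sketch: peak bandwidth (GB/s) = (bus width in bits / 8) * per-pin rate (Gbps).
# Figures below are illustrative assumptions, not confirmed product specs.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# 980 Ti-style setup: 384-bit GDDR5 at 7 Gbps
print(bandwidth_gbs(384, 7.0))    # 336.0

# Fury X-style setup: 4096-bit HBM1 at 1 Gbps
print(bandwidth_gbs(4096, 1.0))   # 512.0

# Hypothetical 384-bit GDDR5X at 10 Gbps
print(bandwidth_gbs(384, 10.0))   # 480.0
```

So GDDR5X on a conventional 384-bit bus already lands in Fury X territory, which is why the 800GB/s vs 1000GB/s question may not matter much for gaming.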
One could think Nvidia would want to make GeForce (gamer) cards with GDDR5X while reserving the higher bandwidth of HBM2 for the bandwidth-hungry users, aka Tesla/Quadro. Which is why I'm curious as to why GP102 is in the mix.
Yes, GDDR5X uses less power than GDDR5. It runs at 1.35V while GDDR5 runs at 1.5V. I'm not sure how much less it will be, but considering HBM2 increases power consumption while GDDR5X goes down, and HBM1 used ~25W less power than GDDR5, I think the difference between them will be moot.
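As a back-of-envelope check on that voltage difference: dynamic power scales roughly with V², so the 1.5V-to-1.35V drop alone suggests on the order of a 19% reduction. This ignores I/O termination, frequency differences, and everything else, so treat it as a rough bound, not a measured figure:

```python
# Back-of-envelope estimate only: dynamic power scales roughly with V^2
# at fixed frequency and capacitance. Real memory power also depends on
# termination, data rate, and activity, which this ignores.

def relative_power(v_new, v_old):
    """Dynamic power of the new voltage relative to the old (V^2 scaling)."""
    return (v_new / v_old) ** 2

ratio = relative_power(1.35, 1.5)
print(f"GDDR5X vs GDDR5 dynamic power: {ratio:.2f}x (~{1 - ratio:.0%} less)")
```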
It will probably all come down to availability (yields) and cost. Business as usual.
I also think mobile cards (MXM cards) will require GDDR5X, because the MXM specification may need to be completely reworked to accommodate the bigger silicon that comes with HBM. That could take a year or more. The MXM spec is the OEMs' guideline for designing mobile cards, and mobile is very important for both Nvidia and AMD. GDDR5X works with the existing specification. No change needed.
And mobile always gets Gx104/204 chips from Nvidia. I doubt Nvidia will do GDDR5X for mobile and HBM for desktop.

Edited by iLeakStuff - 12/14/15 at 10:16am