Originally Posted by Yvese
I hope nobody annihilates anyone. The last thing we need is for the next Titan to be $1500 and the next Ti to be $1000. If you mean annihilating AMD in terms of how much higher it's priced, then sure
Given AMD's finances, it would be best for us consumers if AMD "annihilated" Nvidia next round. If Nvidia annihilates AMD, we could be seeing the end of AMD, especially if Zen also fails.
Originally Posted by Blameless
Not much point in designing a part with a large and complex GDDR5 memory controller, only to have to redesign it with an HBM controller/PHY later, and a massive waste of die area to build a part with both.
This. It'd be a huge waste of precious resources. Die space is a limited and incredibly valuable thing.
They'll want to go with HBM2 from day one and have the enterprise variant support ECC HBM2.
Originally Posted by 47 Knucklehead
Actually the reason why "Big Pascal" will launch first is simple ... IBM and nVidia have a HUGE government contract to build the world's fastest supercomputer, and it is based on Big Pascal.
Also, that is why I have to laugh at stupid conspiracy theories about nVidia faking things by using HBM1 with just 8 chips on the interposer and lying about the bandwidth. People who think that have obviously never worked on a government contract; anyone who has knows that lying about specs WILL land you 20 years in Federal Prison ... unlike lying in the civilian market. Speaking of which, any word on that stupid lawsuit where people were trying to sue nVidia over 3.5GB (when it actually does have 4GB)?
Even if it launches, it will not be for us consumers until likely 2017. It will be enterprise exclusive.
I'm not even sure that they'll launch a giant 600mm^2 die first, even for enterprise. More likely a smaller consumer die comes first, then the big die for enterprise a few months later, followed by a consumer version of that big die. I've been discussing this with a few people who are a lot more credible than your sources - engineers in some cases and some pretty knowledgeable tech journalists. Ever since the Fermi disaster, Nvidia has quietly adopted a small-die-first strategy. You can see this with the past two generations:
- Kepler: first the ~300mm^2 die, with the big die arriving only months later even for enterprise, and the following year for consumers as the original Titan
- Maxwell: 750Ti > GTX 980 > Titan X
There's a pretty clear progression there, and the risk of something going wrong is much higher with a big die on a fresh process. Fermi was so bad because Nvidia did not work around the process limitations. AMD back then did, with a small die - I think it was the 4770, their 40nm pipe-cleaner, where they learned the limitations of the 40nm process. That proved invaluable and is why Cypress had good power consumption for what it offered.
So we might expect a 250-400mm^2 die first (maybe even a 750 Ti-sized die), then enterprise gets the big die, and only then do consumers get the rest.
Compute is also becoming more competitive at that point: Knights Landing will be debuting around then, and by the time Volta comes out, Knights Hill.
Die yields would be awful for a 600mm^2 chip on a relatively new process, and the risk of a Fermi repeat would be dangerously high. Neither side is going to do a big die until they understand the limitations of the process.
Consider what you are asking Nvidia to do:
- Pascal itself (which might entail changes comparable to the Kepler-to-Maxwell transition - SMX to SMM - along with what is likely a heavily reworked front end)
- HBM2 Memory Controller (unlike AMD, they don't have the previous experience here)
- New node
- New technologies such as NVLink
And you're asking for a giant chip right away? That's a recipe for disaster.
Originally Posted by 47 Knucklehead
Hopefully it will be the GTX 990, and they steal the thunder from AMD's Fury X2 announcement and crush them even further in market share.
Be careful what you wish for. If a $1k Titan is bad, watch what happens if AMD goes under. Oh, and it's unlikely that anyone will buy AMD as a whole; I've discussed that in the other thread.
Originally Posted by iLeakStuff
They don't need to beat the Fury X2, just make one with similar performance.
Since the Fury X2 has 2x8-pin power, it will probably perform like 2x Nano. A Nano performs like a Fury (non-X), and a GTX 980 is very close to a Fury.
So a GTX 990 with 2x GTX 980 at higher clocks would perform like a Fury X2.
AMD will probably have to sell it above $999, while Nvidia could easily sell theirs for $999 thanks to much smaller silicon and GDDR5, whereas AMD has two huge dies plus 8 stacks of expensive HBM.
The big advantage of HBM is not so much the bandwidth as the power efficiency: you get a lot more bandwidth per watt by making the bus wide and slow, which is exactly what HBM does. Fury X would run much hotter with a GDDR5 controller and would have lower bandwidth, and its small form factor wouldn't be possible either.
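To put the "wide and slow" point in numbers, here's a quick back-of-the-envelope sketch. It uses the public specs of HBM1 on Fury X (4096-bit bus, ~1 Gbps per pin) versus a 384-bit GDDR5 card at 7 Gbps per pin; the function and variable names are just for illustration:

```python
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate, divided by 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

# GDDR5: narrow and fast (384-bit bus at 7 Gbps per pin)
gddr5 = peak_bandwidth_gbs(384, 7.0)   # -> 336 GB/s
# HBM1: wide and slow (4096-bit bus at only 1 Gbps per pin, as on Fury X)
hbm1 = peak_bandwidth_gbs(4096, 1.0)   # -> 512 GB/s
print(gddr5, hbm1)
```

Despite running its pins at a fraction of GDDR5's data rate, the much wider bus gives HBM1 more total bandwidth, and the slow, low-voltage signaling is where the power savings come from.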
Actually, I'm not sure that a Pascal GPU tied to GDDR5 wouldn't prove bandwidth-starved. It's possible, probable even. Remember, you're talking about a GPU with a lot more horsepower than Maxwell, which is already pushing the limits here. HBM removes that problem, and HBM2 will address the 4GB issue.
Edited by CrazyElf - 9/27/15 at 8:18am