I don't think it's very likely that we'll see a 600mm^2 die on the consumer market any time soon. Maybe for the pros, but not for consumers. Yields are probably too low for that to happen within the next few months. We may see a large GPU in 2017, I think, but not before then. I'd love to be proven wrong though.
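To put the yield worry in rough numbers: here's a quick sketch with the classic Poisson yield model, where the defect density D_0 is purely an assumed figure for illustration (foundries don't publish early-node defect rates):

Y = e^{-A \cdot D_0}

For a 600mm^2 die (A = 6 cm^2) with an assumed D_0 = 0.2 defects/cm^2:

Y \approx e^{-1.2} \approx 0.30

versus Y \approx e^{-0.6} \approx 0.55 for a 300mm^2 die on the same line. Even a modest defect density roughly halves the usable output when you double the die area, which is why the huge dies tend to wait until a node matures.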
Originally Posted by Serandur
Intel are special. The rest of the industry is struggling with costs; remember a few years back when Nvidia got mad at TSMC and said so publicly?
The show must go on otherwise Intel will blindside Nvidia in HPC with Knights Landing and AMD might as well for both HPC and consumer products. Nvidia need to release new stuff; it's pretty mandatory and they need a new process node to do it at this point. This is largely where the increasing prices on GPUs, belated generations/nodes, and advertising of power efficiency in place of more power are coming from. We're fast approaching silicon's wall. Actually, excluding Intel, we've slammed right into it costwise.
Intel's margins are very high. That's the big reason why they're "special": they can afford the higher cost per transistor - up to a point. Beyond transistors, all we have are architectural improvements, and those have historically scaled much more slowly than transistor scaling did. I suppose the Titan and similar GPUs also carry high margins. AMD, however, is not as fortunate and its margins are not as good (they're in the red right now, although RTG remains a bright spot).
Granted, in the case of GPUs it is a lot easier to scale up performance, since the workloads are parallel, but there are still likely to be limits. Frankly, I think we may hit them at 14nm, perhaps 10nm. Barring a major change elsewhere (ex: a new super material), we may truly be at the end of Moore's Law, with 20-28nm being the cheapest per gate and performance applications using smaller nodes (up to a point).
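To show what "cheapest per gate" means in practice, here's a back-of-envelope sketch; the ratios are made up for illustration, since real wafer prices are under NDA:

\text{cost per gate} \propto \frac{\text{wafer cost}}{\text{density} \times \text{yield}}

If a FinFET wafer costs 1.6x a 28nm wafer, delivers 1.9x the transistor density, but starts at 0.85x relative yield:

\frac{1.6}{1.9 \times 0.85} \approx 0.99

In other words, essentially zero per-gate savings, versus the roughly 0.5x per node that Moore's Law used to deliver. That's the wall.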
Originally Posted by ZealotKi11er
I think Nvidia did not get the same sales numbers with the Titan X. I feel like the GTX 980 Ti was a fast answer to the Fury X, because if the GTX 980 Ti had not come out when it did, the Fury X would have sold a lot more even with the Titan X out. Now they can release the Titan, wait for AMD, and act accordingly. If AMD has a Titan-class card, Nvidia will release a GTX 1080 Ti like it did with the GTX 780 Ti and GTX 980 Ti. They want the Titan in the market for as long as possible, or at least they would like to let people know that the Ti will come much later. Yes, $1000 is a lot of money, but I know a lot of people who would pay a $350 premium for a GPU just to have it 1 year early.
The problem is that the $350 premium doesn't buy you 1 year of lead; it's more like a couple of months.
You pay $350 extra basically for a few more shaders, double the VRAM, and I guess "prestige". For far less money you get a GPU that is within 5% on most benchmarks, has enough VRAM that you'll run out of core before you run out of VRAM, and the possibility of better quality PCBs.
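The arithmetic is brutal. Using the launch MSRPs ($999 for the Titan X, $649 for the 980 Ti) and the ~5% performance gap:

\frac{999}{649} \approx 1.54 \quad \text{(price ratio)}

\frac{1.54}{1.05} \approx 1.47 \quad \text{(relative cost per unit of performance)}

You're paying roughly 47% more per frame for those few months of exclusivity.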
Originally Posted by magnek
Pretty much. Which is why a GP100-based Titan coming first and the 1080 Ti a year later makes sense - otherwise who would buy the overpriced Titan?
And by making it a Titan card, they can charge an arbitrary premium, which offsets the "lost sales" from not being able to double dip - i.e. releasing GP104-based x80 "fake flagship" cards first, then GP100/102-based big-die real flagships later, and getting people to upgrade twice.
This. We need a strong AMD unless we want to see really expensive GPUs with marginal upgrades.
Personally, I hope Mahigan is right and that Polaris will be amazing - addressing all of the bottlenecks of the Fury X and beating everything Nvidia has to offer. That would force Nvidia to lower its prices and give AMD some much needed market share, money, and mindshare, hopefully leveling the playing field a bit.
Originally Posted by epic1337
TDP is an issue - not because power supplies are unable to power these chips, but because the die itself will get fried by such high power density on a 14nm node.
No matter how good the cooling is, once you put too much current through such a thin trace you end up with a fried die.
I am actually rather worried about these chips with their oh-so-small nodes; at these nanoscopic scales, electromigration becomes even deadlier.
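To put rough numbers on that power density point - the wattages here are assumed for illustration, not tied to any specific product:

\frac{250\,\text{W}}{600\,\text{mm}^2} \approx 0.42\,\text{W/mm}^2 \qquad \frac{250\,\text{W}}{300\,\text{mm}^2} \approx 0.83\,\text{W/mm}^2

A shrink that packs the same power budget into half the area doubles the heat flux through the same materials, while the interconnects carrying that current get thinner - exactly the conditions where electromigration bites.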
At this point, we're at war with the laws of physics in trying to reach these smaller nodes. We seriously need EUV and 450mm wafers to bring down transistor costs - sooner rather than later. Sadly, neither of those technologies is working out; ASML has hit some pretty big bottlenecks.
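The 450mm math itself is simple:

\left(\frac{450}{300}\right)^2 = 2.25

so a 450mm wafer has 2.25x the area of a 300mm one, and slightly more than 2.25x the usable die candidates, since edge losses shrink relative to the total. That's the cost lever that keeps slipping.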
I have wondered about the viability of overclocking on these very small nodes as well. It may be that a medium-sized node is the sweet spot for overclocking headroom.