Originally Posted by Mahigan @Cyro999
We could be looking at between a 1.3x and 1.6x improvement. That being said, we could be looking at more than a 2.0x improvement from Fiji to Greenland. Imagine, for a moment, Fiji with twice the front-end components...
That alone would cause a twofold performance improvement from the current Fiji to this imagined architecture. Why? Because neither Fiji nor Hawaii is compute limited, especially not under DX12 with the upcoming engines. They're front-end limited and memory-bandwidth limited.
Now factor in that Greenland has twice as many transistors as Fiji... An overhaul of the front end, some added compute units, and HBM2 would lead to a near 2x performance improvement in DX12 titles.
Greenland will be more than competitive with Pascal. Assuming AMD doesn't jump the shark again.
I don't think AMD's next chip is going to be as big as Nvidia's.
For the last five generations, aside from Fiji, AMD/ATI has never made a die as big as Nvidia's. Fiji was the first time, and it happened for a couple of reasons: first, Fiji was AMD's attempt to stuff a design made for 20nm onto 28nm, and second, to take advantage of HBM1's bandwidth, AMD needed to increase its compute power substantially over Hawaii. Add in the maturity of the 28nm process at that point in time, and the door was open for AMD to build such a big chip.
A 16-18 billion transistor chip is something AMD cannot afford at this point, because there is a very good chance they won't make their money back, and the volume in that market isn't there.
I think whatever AMD comes up with is going to be 25%-40% smaller than Nvidia's big chip, which has been typical in the past.
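As a rough sanity check on that 25%-40% figure, here's some back-of-envelope arithmetic. The ~600mm2 flagship-die size is my own assumption (roughly GM200-class), not a number from this post:

```python
# Hypothetical: ~600 mm^2 assumed for Nvidia's big die (roughly GM200-class).
nvidia_big_die_mm2 = 600

# A chip 25%-40% smaller than that lands in this range:
amd_die_range_mm2 = [round(nvidia_big_die_mm2 * (1 - f)) for f in (0.40, 0.25)]
print(amd_die_range_mm2)  # [360, 450]
```

Under that assumption, AMD's chip would fall somewhere around 360-450mm2, which is in the same ballpark as the ~350mm2 design discussed further down.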
What allows Nvidia to make such big chips in the first place is that they command 80% of the professional markets (and charge 33% more than the AMD equivalent), and on average they can charge substantially more than AMD for the same gaming performance. That adds up to hundreds of millions more per quarter, which can support the big monolithic chip design.
Considering the staggering (roughly triple) increase in the cost of designing a chip at 16nm, the cost of making it, the yield risk on such big chips, and AMD's resources, it just doesn't make sense for them, and the CEO's current vision agrees with this. Dumping hundreds of millions of dollars in costs and opportunity costs into such a high-risk, high-failure venture doesn't add up, not when they have about 300 million dollars of usable liquid assets left (they need to keep a minimum of 500 million).

Turning their GPU division around means tens of millions, and at most 100 million, in net profit (which Nvidia generated even without its Intel payout). Turning their CPU division around means billions, and it's why the company has spent most of its resources over the last few years on the CPU department. The GPU industry is a shrinking market and will implode if the cost of development exceeds the potential revenue in the future. Nvidia at 10nm is going to run into this wall, and it's going to mean trouble for them.
Nvidia typically generates 200+ million from the professional market alone and can afford to take these risks. They have the money in the bank and the professional market and supercomputing contracts to pay for these big chips, plus the extra margins from the Nvidia brand's luxury tax.
AMD, on the other hand, would struggle to turn a profit on such big chips (as even Nvidia sometimes does), and if the chip comes out a dud like the GTX 480 or the 2900 XT, there go hundreds of millions of dollars, plus the revenue lost by allocating limited 16nm wafer capacity to something unprofitable. It would be far less risky for them to design a chip around 350 mm2.
Chips that size can sell at price points with drastically more volume, and the yields are substantially better. They can also make decent pro cards. That makes the midrange the safest spot for AMD to play its 16nm chips.
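The yield point can be illustrated with the classic Poisson defect-density yield model. The defect density used here (0.2 defects/cm2) is an illustrative assumption, not a real 16nm figure:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2=0.2):
    """Fraction of defect-free dies under the simple Poisson yield model:
    Y = exp(-D0 * A), with die area A converted from mm^2 to cm^2."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# A ~350 mm^2 midrange die vs a ~600 mm^2 big die:
print(f"350mm2: {poisson_yield(350):.2f}, 600mm2: {poisson_yield(600):.2f}")
# 350mm2: 0.50, 600mm2: 0.30
```

Under this toy model, the smaller die yields roughly 50% good dies versus roughly 30% for the big one, which is the shape of the advantage being described, whatever the real numbers are.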
Heck, I doubt Nvidia is making a 17-billion-transistor design, but they can afford to make such a chip and have it fail (which is far less likely given their professional market presence). AMD, on the other hand, has to pick its fights carefully at 16nm: they have limited resources, wafer availability is limited, and their brand is coming out of the 28nm generation a dog. The professional market simply isn't there for AMD at the moment because of AMD's weaker driver and product support for professional customers.
That doesn't mean they can't outperform Nvidia, but it is far less likely when they typically work with a chip-size deficit.

Edited by tajoh111 - 10/4/15 at 8:14pm