Originally Posted by Lee Patekar
Prices are segmented into tiers: budget, mainstream, performance, enthusiast. Those are price points.
I'll clarify: if the price of a tier increases by 10%, that's most likely inflation, R&D, and production costs. When the price of a tier increases by 100%, or products are shifted between tiers with a new tier added on top, that's lack of competition. Look at Intel's lineup around the release of the Core architecture, which is when AMD ceased to be a viable competitor at the mid and high end.
In GPUs, Nvidia shifted the Gx104-class die from being a mainstream part (e.g. the GTX 560) to a performance part (e.g. the GTX 680) at (or near) the top of its product stack. In the same trend we now have Founders Edition cards, which were first to market and carried a $100 premium on top! They've also offset production and R&D costs, or perhaps just tested the boundaries of the market, by raising the price point of the x80 cards from $549 for the GTX 980 to $599/$699 for the GTX 1080.
None of this is possible if your competitor is fighting for market share...
So to sum it all up in simple terms: I do expect hardware prices at each price point to rise over time to offset production and R&D costs. However, that is not what we're seeing. We're seeing a shift of products across price points (like selling a medium pizza as a large, for the price of a large, instead of raising the price of the large) as well as a price increase at every price point. We're seeing a monopoly, and that's what I don't like.
During the 40nm era, Nvidia had an unusually favorable contract in place where they only had to pay for working dies.
And wafer cost moves up faster than inflation: it went from $3,500-$4,000 for 40nm initially, to around $5,000 for 28nm initially, to about $8,000 for FinFET. Since wafer cost represents the largest portion of the cost of production, any increase in wafer prices is bound to have a profound impact on chip prices. Cost of production runs around 43% of revenue, so if Nvidia sells something for $100, about $43 of it goes to cost of production, which is why Nvidia has ~57% gross margins.
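To put rough numbers on how a wafer price jump feeds through to per-chip cost, here's a back-of-envelope sketch. The dies-per-wafer count and yield rate are made-up illustrative assumptions, not Nvidia's or TSMC's actual figures; only the wafer prices come from the post above.

```python
# Back-of-envelope: spread the wafer price over the dies that actually work.
# 200 die candidates per wafer and 80% yield are illustrative assumptions.

def cost_per_good_die(wafer_price, dies_per_wafer, yield_rate):
    """Wafer price divided across sellable (working) dies."""
    return wafer_price / (dies_per_wafer * yield_rate)

# The same hypothetical die on each node's initial wafer price:
for node, price in [("40nm", 4000), ("28nm", 5000), ("FinFET", 8000)]:
    print(f"{node}: ${cost_per_good_die(price, 200, 0.80):.2f} per good die")
```

At a fixed die size and yield, the $4,000 → $8,000 wafer jump simply doubles the silicon cost of every chip, which is the core of the argument.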
What happened between 40nm and 28nm is that Nvidia, who used to be a big deal to TSMC, became a second- or third-rate customer compared to Qualcomm and Apple. Those companies sent foundry demand skyrocketing, and Nvidia and AMD became low-priority customers. As a result their terms became less favorable, making high-risk big dies impossible to sell at low cost, which is why you don't see big flagship dies at the start of a node. It also limited their supply, which in turn makes turning a profit through pure volume more difficult.
Also, Nvidia's R&D expenditure is greatly exceeding inflation. In 2011 they spent around $940 million; this year they are going to spend about $1.36 billion. That's roughly a 45% increase in R&D.
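Quick sanity check on that growth figure, using only the two R&D numbers quoted above:

```python
# Percent growth from $940M (2011) to ~$1.36B in R&D spend.
rd_2011 = 940e6
rd_now = 1.36e9
growth = (rd_now - rd_2011) / rd_2011
print(f"R&D growth: {growth:.1%}")  # → R&D growth: 44.7%
```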
2011, 2010, 2009, and 2008 were bad years for Nvidia.
Their net incomes were $580 million ($300 million of which came from the Intel lawsuit), $251 million, a $67 million loss, and a $30 million loss, respectively. Considering the revenue they were doing, this was simply terrible. The Tesla and Fermi generations were bad years for Nvidia, and pricing like that is simply impossible in today's market, particularly with Nvidia's overhead and R&D.
The big-die strategy was not a profitable one, and with less favorable wafer contract terms they couldn't sell chips as big as they used to for the same cost today. E.g., slap 40nm-era prices onto 28nm chips and Nvidia would be losing money: a 30-40% price cut translates into roughly a $350-450 million drop in quarterly revenue. They can't make this up with pure volume because they are supply-limited; fabs work at near full capacity because overhead is so high. Annualized, that drop is far greater than their net income (and remember Intel has been adding $50 million on top of their typical $200 million net-profit quarters).
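The arithmetic behind that claim can be sketched as follows. The quarterly revenue figure is my own assumption for illustration (roughly in the ballpark of Nvidia's results in that era), not a number from the post.

```python
# Supply-limited means volume can't grow to compensate, so a price cut
# shrinks revenue roughly in proportion.
quarterly_revenue = 1.15e9  # hypothetical ~$1.15B quarter (assumption)

for cut in (0.30, 0.40):
    drop = quarterly_revenue * cut
    print(f"{cut:.0%} price cut -> ${drop / 1e6:.0f}M less per quarter")
```

With that assumed base, a 30-40% price cut lands at $345-460M per quarter, matching the $350-450 million ballpark in the post.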
Chips simply cannot be as cheap as they were at 40nm.
The numbers don't lie: decrease Nvidia's revenue proportionally by the price