AMD and Nvidia would be happier pushing low power consumption (i.e., less hardware, cheaper to manufacture, and a bigger profit margin spun as though it were a win for the customer). Maxwell's market dominance suggests that approach works, and AMD's current PR for Polaris suggests power consumption is their primary objective. As for VR, given current headset prices, I'd expect Nvidia and AMD to push VR on their $1000 Titan-class SKUs instead; a GTX 1080 only slightly faster than a 980 Ti would still be enough for VR at first.
As to why it might be right to expect more conservative performance increases:
Process Node Shrink: For the first time, the new process node isn't decreasing individual transistor cost. This means a GTX 1080 with a ~300mm^2 die at 16nm density, holding the same transistor count as GM200 (980 Ti/Titan X), will be just as expensive to manufacture as GM200 itself (a 601mm^2 die, the largest GPU ever). In fact, it might even cost a little more at first, given the maturity of 28nm versus the immaturity of the 16/14nm nodes.
Given yield difficulties and initially sky-high costs, it's not even reasonable to expect a ~400mm^2 die for GP104. More likely, GP104 will be ~300mm^2 to ~350mm^2 at most.
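To make that concrete, here's a back-of-the-envelope sketch (the dollar figures are made up purely for illustration; only the ratios matter): if cost per transistor stays flat while density doubles, cost per mm^2 roughly doubles, so a ~300mm^2 16nm die lands at about the same cost as a 601mm^2 28nm die.

```python
# Hypothetical die cost comparison, assuming cost per transistor is flat
# across 28nm -> 16nm. Dollar figures below are invented; only ratios matter.

COST_PER_MM2_28NM = 0.10   # assumed wafer-amortized cost, $/mm^2
DENSITY_SCALING   = 2.0    # 16nm packs ~2x the transistors per mm^2

# If cost per transistor stays constant, cost per mm^2 scales with density.
cost_per_mm2_16nm = COST_PER_MM2_28NM * DENSITY_SCALING

gm200_cost = 601 * COST_PER_MM2_28NM   # 28nm GM200 (980 Ti / Titan X)
gp104_cost = 300 * cost_per_mm2_16nm   # hypothetical ~300mm^2 16nm GP104

print(f"GM200 (601mm^2 @ 28nm): ${gm200_cost:.2f}")
print(f"GP104 (300mm^2 @ 16nm): ${gp104_cost:.2f}")
# Both come out the same: the same transistor budget costs the same to
# build, even though the new die is half the size.
```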
Meanwhile, looking back at the Fermi-to-Kepler transition, Fermi's big die (GF110) was significantly smaller than GM200: a relatively modest 520mm^2 versus GM200's massive 601mm^2. The area scaling going from 40nm to 28nm was also about 2x, just as TSMC's 16nm is about 2x as dense as 28nm (see the sketch after these links):
http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm 16nm is 2x 28nm
http://www.tsmc.com/english/dedicatedFoundry/technology/28nm.htm 28nm is 2x 40nm
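A quick sanity check of those numbers (the die sizes are public figures; the 2x scaling factor is TSMC's claim from the pages above):

```python
# Applying the ~2x area scaling to the big dies of each generation.

DENSITY_SCALING = 2.0  # both 40nm->28nm and 28nm->16nm, per TSMC

# Previous transition: Fermi's big die vs the first consumer Kepler.
gf110 = 520                     # GTX 580, 40nm, mm^2
gf110_shrunk = gf110 / DENSITY_SCALING
print(f"GF110 shrunk to 28nm: ~{gf110_shrunk:.0f}mm^2 "
      f"(GK104 actually shipped at 294mm^2)")

# Upcoming transition: Maxwell's big die vs a plausible GP104.
gm200 = 601                     # 980 Ti / Titan X, 28nm, mm^2
gm200_shrunk = gm200 / DENSITY_SCALING
print(f"GM200 shrunk to 16nm: ~{gm200_shrunk:.0f}mm^2")
# A ~300-350mm^2 GP104 therefore only has roughly GM200's transistor
# budget to spend -- far less headroom than Kepler had over Fermi.
```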
I'll discuss architecture below as well.
New Architecture: There are limits to how efficient you can make an arrangement of transistors for a given task. The thing is, moving from Fermi to Kepler (especially consumer Kepler) meant moving from a more HPC-oriented compute design, with significant resources dedicated to hardware scheduling, to a more efficient, more gaming-oriented design (one that still kept its HPC-oriented parts in GK110). That freed up quite a few transistor and power costs that previously didn't benefit games and allowed them to be spent on gaming-oriented improvements.
Furthermore, moving from Kepler to Maxwell all but eliminated FP64 compute capability and reallocated those costs into raw shader frequency and efficiency (and therefore FP32 compute improvements, which may actually be useful for games these days), an extremely efficient and beefy cache design, a massive increase in ROP throughput, and various other changes targeted purely at gaming and the consumer market. Unlike Fermi, and likely the upcoming Pascal, Maxwell is not an HPC chip; it gained a lot of its improvements because Nvidia deliberately designed the architecture from the ground up for consumer workloads (e.g. games and some non-professional compute applications) and pretty much nothing else.
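As a rough illustration of that FP64 trade-off (approximate base clocks, FMA counted as 2 FLOPs; my back-of-the-envelope numbers, not Nvidia's):

```python
# Rough theoretical throughput comparison showing where Maxwell's FP64
# budget went.

def tflops(cores, clock_ghz, ratio=1.0):
    """Peak TFLOPS: cores * 2 FLOPs/cycle (FMA) * clock, scaled by the
    FP64:FP32 rate for double precision."""
    return cores * 2 * clock_ghz * ratio / 1000

# GK110 (Titan Black, Kepler): 2880 cores @ ~0.89 GHz, FP64 at 1/3 rate
print(f"GK110 FP32: {tflops(2880, 0.889):.1f} TFLOPS, "
      f"FP64: {tflops(2880, 0.889, 1/3):.2f} TFLOPS")

# GM200 (Titan X, Maxwell): 3072 cores @ ~1.0 GHz, FP64 cut to 1/32 rate
print(f"GM200 FP32: {tflops(3072, 1.0):.1f} TFLOPS, "
      f"FP64: {tflops(3072, 1.0, 1/32):.2f} TFLOPS")
# Similar die sizes, but Maxwell trades nearly all FP64 throughput for
# higher clocks, more cache, and more ROPs aimed squarely at games.
```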
This means that Maxwell -> Pascal (a design that will, no doubt, go back to catering to HPC as well) is not analogous to Fermi -> Kepler, and especially isn't analogous to Kepler -> Maxwell. That's not to say Pascal won't also be a big step up from Maxwell in gaming efficiency; the demands of newer games could shift to favor Pascal's allocation of resources. But considering that the process shrink offers about the same area scaling as 40nm to 28nm while GM200 is quite a bit bigger than GF110 was, that transistor costs aren't decreasing, and that Maxwell is a lean-and-mean gaming-oriented architecture in ways Fermi and even Kepler weren't... a little skepticism about GP104's dominance might be warranted. It could happen; it's just not as likely to, in my humble opinion.
I think it's even possible Nvidia deliberately kept the 980 Ti's GM200 somewhat cut down, and doesn't allow non-reference Titan Xs, specifically so the GTX 1080 (or whatever they call it) isn't quite as underwhelming as it might be against a fully unlocked, overclocked GM200.
good informative post +1 REP
Though at the end of the day, I am still a customer. Only a few things count for me when deciding to purchase a GPU, and the most important are relative performance compared to previous/competitor products, and the price point.
If the GPU is not good enough in my eyes, then I'll just buy something else, like a Nintendo NX. It is probably best to wait until both Nvidia and AMD have products out this year before buying anything. I hope their products don't conveniently complement each other so well this time around, but actually compete.