Posts by Serandur

edit: accidental post
This monster is incredibly impressive and a literal record-breaker in terms of processing unit die size (in general, not just GPUs). TSMC's reticle limit increase and Nvidia's continued willingness to push their upper-end designs to that limit are highly commendable. Coupled with the raw efficiency and custom tailoring of designs for specific markets which Nvidia have been achieving lately, this thing is just... wow.
...And in literally every other relevant metric of GPU engineering. You know, like performance per transistor and per unit of die area, scalability, and the resulting raw performance and cost-effectiveness alike. Pascal is so thoroughly superior in all of those respects that it succeeds across a far broader range of products than anything GCN-based, including Tegra/autonomous cars, laptops, the consumer high-end, and HPC... all of which GCN is practically worthless in at the moment (potential...
You're cherry-picking two very specific, deliberately GCN-favoring scenarios where Polaris gets the slightest leg up and pretending that they somehow reverse the efficiency gap (they don't), let alone merely bridge it (still don't; ~8% more performance for ~40% more power consumption in that 480 vs 1060 case), let alone in general. It doesn't even in those GCN-favoring examples. That 480, even in those best-case scenarios, is barely edging out the...
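For clarity, here's the perf-per-watt arithmetic behind that 480 vs 1060 claim, sketched in a few lines of Python (the ~8% and ~40% figures are the ones quoted above; everything else is just the ratio worked out):

```python
# Relative efficiency in the quoted RX 480 vs GTX 1060 case.
perf_ratio = 1.08    # 480 delivers ~8% more performance...
power_ratio = 1.40   # ...while drawing ~40% more power

# perf/W of the 480 relative to the 1060
efficiency_ratio = perf_ratio / power_ratio
print(f"480 perf/W relative to 1060: {efficiency_ratio:.2f}")
```

Even in this GCN-favoring case the ratio comes out around 0.77, i.e. the 1060 still holds roughly a 30% perf-per-watt lead.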
I really hope those 6-core Coffee Lakes work with existing Z170 boards. It's even the same socket; don't pull any artificial-obsolescence tricks on us, Intel.
There is absolutely no good reason to believe the problem lies with the process node instead of the architecture. Polaris is more-or-less the same as the GCN3 microarchitecture we saw on TSMC's 28nm node, the same node on which Maxwell 2 (architecturally nearly identical to Pascal) similarly pummeled GCN in both power and die space efficiency. Push two competing chips of roughly comparable size on similar nodes from Maxwell/Pascal and GCN to equivalent points along...
And that's purely their fault. Once upon a time, ATi were a separate and very competitive company whose lifeblood was GPUs. Due to poor management, AMD (once a larger company than Nvidia) overpaid a fortune to acquire ATi, messed up with Bulldozer, and sucked ATi dry to correct their CPU-side failures. Hence the mess we see before us now. Their own management is responsible for the current state of both ATi/RTG and AMD's limited resources, and I'm personally not cutting...
AMD/ATi, Nvidia, and Intel do not and never would restrict further product development in a highly competitive HPC and PC space primarily to serve a console manufacturer's desires (as if 1. console manufacturers ever swore by backwards compatibility or had any option other than AMD for it, or 2. didn't want more efficient designs anyway for cost-effectiveness). No, if GCN as it is continues to live (obviously in the HPC/PC space, not talking about...
It gets worse looking at the average power draw: GCN needs to die already. It's needed to die for years at this point, ever since Maxwell first arrived. There's no excusing this nonsense; it's a woefully noncompetitive architecture at this point.
Pascal's GPU boost is really annoying. My Strix 1080 Ti at stock started at 2000 MHz, got me all excited. Then it dropped as it warmed up... 1987, then 1974, then 1962, then 1949... in some scenarios as low as 1860 MHz. Maxwell never throttled so wildly. It seems stable OC'd to 2038-2063 MHz at stock voltage though, which is like 14.5+ TFLOPs of peak shader throughput. Very monstrous. If only GPU boost could be tamed.
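For anyone wondering where that 14.5+ TFLOPs figure comes from: peak FP32 throughput is just shader count × 2 FLOPs per cycle (one fused multiply-add) × clock. A quick sanity check in Python, using the 1080 Ti's 3584 CUDA cores and the lower end of the OC range quoted above:

```python
# Peak FP32 throughput = cores × 2 FLOPs/cycle (FMA counts as two) × clock
cuda_cores = 3584     # GTX 1080 Ti shader count
clock_ghz = 2.038     # lower end of the stable OC range

tflops = cuda_cores * 2 * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPs")
```

That lands at about 14.6 TFLOPs at 2038 MHz, and a bit more at 2063 MHz, consistent with the 14.5+ figure.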