Originally Posted by sepiashimmer
50% die space for iGPU!!! Which almost no one uses! They should dedicate 70% die space for CPU and the rest 30% for iGPU.
Once you move away from OCN, you'll find that the vast majority of computers use only
the iGPU. They make such large iGPUs because the market demands them (they could save a lot of manufacturing cost by eliminating them if they weren't needed). The vast majority of computer users aren't willing to pay for dedicated GPUs and this is way better than old chipset GPUs.
Originally Posted by epic1337
I think it's more of a user-scenario compatibility concern.
I wonder about that, though: do we even need a gigantic iGPU just to support OpenCL?
Plus, it could be said that the iGPU could be stripped down to function only as a compute accelerator, which would be simpler and smaller.
Or, on the other hand, there's OpenCL on FPGAs; it's already available on Intel-Altera chips, so it's not impossible, and with ASIC slave units it could be made cheaper and faster.
Edit: this reminded me of structured ASICs; I wonder what happened to those.
On a side note, shouldn't they start making dedicated GPUs?
They've been scaling their iGPUs up so much these days, to the point that the TDP shared with the CPU is affecting their potential.
You can technically run OpenCL on the CPU, but the performance cost makes it unusable (even a small GPU has far greater throughput while using much less power). A dedicated OpenCL compute unit would be a bit smaller than a full GPU, but it couldn't be used as a GPU, effectively making it unmarketable outside of servers.
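To make the CPU-vs-GPU point concrete, here's a rough sketch (not from the original discussion, and it assumes you have an OpenCL SDK/ICD installed) that just enumerates every OpenCL device and reports whether it's exposed as a CPU or GPU device. On a typical desktop you'll see the iGPU listed alongside the much slower CPU fallback:

Code:
/* Sketch: list OpenCL devices and whether each is a CPU or GPU device.
 * Build (assumption: OpenCL headers/lib installed): gcc list_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256];
            cl_device_type type;
            cl_uint units = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);
            /* A CPU "device" shows up here too, but with far fewer compute units
             * and far lower throughput than even a small iGPU. */
            printf("%s: %s (%u compute units)\n",
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other", name, units);
        }
    }
    return 0;
}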
There's another aspect of the iGPU, though: getting rid of it wouldn't increase maximum clockspeed in any meaningful way. The limiting factor today is almost entirely the fabrication process, so removing the GPU wouldn't meaningfully increase the max clock headroom. The only time the iGPU significantly impacts CPU speed is when it's actually in use. If you only have an iGPU, that's expected, and it's a far better experience than the really crappy chipset graphics of old. If you run a dedicated GPU, the iGPU power gates, so there's still not much of a TDP difference. If you have a large cooler and are already pushing the TDP, the difference isn't that much either.
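If you want to see that power gating for yourself on Linux, here's a tiny sketch (again, not from the original post; the PCI address 0000:00:02.0 is an assumption, since that's where Intel iGPUs usually sit) that reads the kernel's runtime power-management status for the device. With a discrete card doing the rendering it will typically report "suspended":

Code:
/* Sketch: read the runtime-PM state of the (assumed) Intel iGPU at 00:02.0. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/bus/pci/devices/0000:00:02.0/power/runtime_status";
    char state[32] = {0};
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");   /* path is an assumption; adjust to your iGPU's PCI address */
        return 1;
    }
    if (fgets(state, sizeof(state), f))
        printf("iGPU runtime status: %s", state);  /* "active" or "suspended" */
    fclose(f);
    return 0;
}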
This leaves die area. If the chip stayed the same size, you'd see some benefit from lots of extra cache, but that cache would always be powered (which would affect clockspeeds). You could make the die smaller and make more chips, but that may not decrease cost due to supply/demand (e.g., ARM SoCs are as big as or bigger than Intel chips and do more stuff, but cost less). Some server buyers do care about dropping the iGPU, because the ever-so-slight decrease in TDP (and the elimination of GPU concerns in the OS/drivers) saves them money over the decade they plan to use the chip (there may also be a slight reliability boost, but I don't know). I'd actually bet those chips cost a little less because they're parts whose iGPUs failed inspection, so Intel lasers out the iGPU and sells them anyway to increase yields.
EDIT: As for Intel making dedicated GPUs, they would have a hard time there. The competition in the GPU market is strong: ARM, Qualcomm, Imagination Technologies, AMD, Nvidia, Vivante, Broadcom, etc. Intel couldn't bundle their GPUs without anti-trust action being taken, and they couldn't compete on marketing. In the low end, they'd be eaten by literally everyone else; at the top end, they may not even be permitted to compete, depending on their cross-licensing agreements with AMD and Nvidia.
More specifically, Nvidia bought Transmeta IP that emulated the x86 ISA on its own architecture in an attempt to get around Intel's patents. Project Denver was probably the real reason Intel sued Nvidia. They struck a cross-license agreement in which Nvidia agreed not to make x86 CPUs and to drop its chipset/chipset-GPU business, and Intel agreed to license Nvidia's patents. That's very much in Intel's favor unless they inked in somewhere that Intel can't compete in non-integrated graphics processors. I'd imagine the direct results were larger iGPUs from Intel, Nvidia switching the Denver microcode to interpret ARMv7/v8, and Intel skipping the discrete graphics market to go straight for GPGPU by pushing Larrabee even harder, culminating in the Xeon Phi.
Edited by hajile - 8/30/16 at 10:30am