Originally Posted by white owl
I'd expect it's similar to how CPU chiplets differ from dual-socket boards, but it's not a great comparison because CPUs are very different from GPUs.
mGPU is two complete dies, each with its own resources. For gaming, both run almost exactly the same workload, but in two phases: ideally one is exactly out of sync with the other.
Chiplets are two pieces of silicon linked together to form one effective die that shares resources. Similar to a dual-core CPU, the instructions on one die (or core) will differ from the other's. The caches will also hold different data, but both feed from the same pool of RAM.
mGPU actually works really well for compute, since you can split the problem up among the cards. For gaming that's not what happens: both cards run the same instructions almost in parallel, with the goal of solving the same problem twice as fast. One is running the game at 60 fps, and so is the other. One shows you a single frame, and as soon as it's done, the other shows you its frame.
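That alternate-frame pattern can be sketched in a few lines. This is just a toy model of the scheduling idea, not real driver code; the `Gpu` class and `render_frame` name are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str

    def render_frame(self, n: int) -> str:
        # Each GPU runs the *same* workload; only the frame index differs.
        return f"frame {n} rendered by {self.name}"

def afr(gpus: list[Gpu], total_frames: int) -> list[str]:
    # Frames are handed out round-robin: GPU 0 gets the even frames,
    # GPU 1 the odd frames, ideally half a frame-time out of phase.
    return [gpus[n % len(gpus)].render_frame(n) for n in range(total_frames)]

frames = afr([Gpu("GPU0"), Gpu("GPU1")], 4)
# GPU0 renders frames 0 and 2, GPU1 renders frames 1 and 3
```

Notice that neither GPU ever touches the other's frame, which is exactly why frame-to-frame dependencies (like TAA reading the previous frame) break this scheme.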
The ultimate goal of mGPU gaming, in my mind, would be to stop running the same thing twice and learn to work together: each GPU working on its own piece of the problem and combining the results into finished frames. Applications had to be written to take advantage of multiple cores, and in the same way I think games would have to be written to take advantage of this future mGPU.
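The simplest version of "each GPU gets its own piece" is splitting one frame's scanlines between the cards (what SLI historically called split-frame rendering). A toy sketch of the partitioning, with a made-up function name:

```python
def split_scanlines(height: int, num_gpus: int) -> list[range]:
    # Divide a frame's scanlines into contiguous slices, one per GPU.
    base, extra = divmod(height, num_gpus)
    slices, start = [], 0
    for i in range(num_gpus):
        rows = base + (1 if i < extra else 0)  # spread any remainder
        slices.append(range(start, start + rows))
        start += rows
    return slices

split_scanlines(1080, 2)
# → [range(0, 540), range(540, 1080)]
```

The hard part isn't the split, it's that a naive half-and-half split ignores where the expensive pixels are, and any effect that reads pixels from the other half forces the GPUs to talk to each other, which is why games have to be built with this in mind.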
The use of the word "cores" for a GPU is misleading when you compare it to a CPU. Cores in a GPU are more like instructions per cycle in a CPU. If a game is made in a way that its frames can't be split up and run well in pieces, it can't be used in a "dual-core CPU" fashion; it then needs a powerful single-threaded GPU with more instructions per cycle.
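A toy timing model makes the point: extra "cores" (width per cycle) only help when the work is independent, not when each step depends on the previous one. These functions are illustrative only, not a real hardware model:

```python
def time_parallel(work_items: int, width: int) -> int:
    # Independent items: `width` lanes finish them in ceil(items/width) batches.
    return -(-work_items // width)  # ceiling division

def time_serial(chain_length: int, width: int) -> int:
    # A dependent chain can't be split; extra width is simply wasted.
    return chain_length

time_parallel(1024, 64)  # → 16 "cycles": wide hardware helps
time_serial(1024, 64)    # → 1024 "cycles": width doesn't help at all
```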
Just like some poorly made games are single-threaded on the CPU while the good ones are multithreaded, many newer games are intentionally built with post-processing effects like TAA that only work in a single-threaded GPU scenario.
Running Skyrim on a Threadripper won't make it a multithreaded game, any more than running AC Odyssey on SLI 1080 Tis will make it mGPU-threaded. And changing the physical arrangement of the GPU parts won't change that. Any "smart" chiplet would more or less act like an independent GPU. You could make each chiplet responsible for less, but what if a game doesn't work with that segmentation?
Unless you're thinking of "dumb" cores sitting out on separate dies, completely controlled by a main scheduler. But the latency that would add is big; I imagine a 2 GHz GPU needs instructions pretty frequently. A much smaller latency change was made from Kepler to Maxwell, where the L2 cache was used more and VRAM less, and per-core, per-clock performance went up by 40%. A similar scale of latency change is what you see when the GPU runs out of VRAM and has to use system RAM.
Like you said, the games have to be made compatible with the chiplet method, just like with the mGPU method or the multicore CPU method, not the other way around.