75W can be delivered using the 12V lines you see here, at 15W per pin. They would need 20-40 of these pins, which could be done by expanding the slot toward the "back" without breaking physical compatibility, by adding an extension similar to PCI-X, or simply by re-purposing pins and negotiating what each is used for at startup, the way most cards already do.
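The pin math above is easy to sanity-check. A tiny sketch, assuming the post's figure of 15W per 12V pin (an assumption for illustration; real connector ratings vary):

```python
# Rough slot-power arithmetic using the post's assumed 15W-per-pin figure.
WATTS_PER_PIN = 15  # assumed per-pin limit on the 12V lines

def pins_needed(target_watts, watts_per_pin=WATTS_PER_PIN):
    """Number of 12V pins needed to carry target_watts (rounded up)."""
    return -(-target_watts // watts_per_pin)  # ceiling division

print(pins_needed(75))   # today's 75W slot budget -> 5 pins
print(pins_needed(300))  # 300W -> 20 pins
print(pins_needed(600))  # 600W -> 40 pins
```

That is where the 20-40 pin range comes from: 300-600W divided by 15W per pin.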
1) AMD has no such solution.
2) The entire point of using PCI-e is that it is universal, like USB.
3) Absolutely no one but IBM supports NVLink, and even then only on custom chips.
4) AMD will never support NVLink.
5) Intel will never support NVLink.
6) PCI-e and NVLink are not compatible with one another, and cannot be used in each other's slots unless the card supports both.
7) Even if two cards support NVLink, it will not be usable in a PCI-e slot as the PCI-e controller will have no idea what to do with it, forcing them to run in PCI-e mode anyway.
8) Adding NVLink takes die space that could otherwise be used for other things, and outside of custom orders they will not sacrifice PCI-e lanes for NVLink, since doing so would break compatibility with every other add-in card ever made.
9) Bandwidth doesn't mean anything on its own.
NVLink is not a standard; it is a custom order. Only PowerPC is currently even capable of supporting it, and even then you need a special-order chip and motherboard to use it.
If AMD wants an internal solution for their APUs and SuperComputers, they will run into similar problems.
1) No one uses AMD for real SuperComputers; they use Intel/IBM. Zen may help change that, but AMD's server share is nonexistent.
2) Intel will never support it.
3) IBM probably will support it... on custom chips.
4) Same compatibility concerns with PCI-e as NVLink.
AMD has an advantage in that it can actually make CPUs, GPUs, APUs, Chipsets, and does custom chips all the time as part of their day to day.
AMD has a disadvantage in that they don't actually exist as a brand in Datacenters.
FX 5200 128MB, 9800 Pro 128MB, 6800GS 256MB, 7600GT 256MB, HD 4850 512MB, HD 4870 1GB, 2x HD 4870 1GB, GTX 280 1GB, HD 5850 1GB, GTX 470 1.25GB, GTX 580 1.5GB, HD 6950 2GB, HD 6950 2GB + HD 6970 2GB, HD 6990 2GB, HD 7970 3GB, 2x HD 7970 3GB, R9 290 4GB, R9 290X 4GB, R9 Fury X 4GB, GTX 1080 8GB, GTX 1080 Ti 11GB
Yeah, and imagine 4 slots of PCI-e 4.0: 1200 watts from where? And through the motherboard, no less.
No current-gen consumer setup today will draw more than 700W from the wall. There's simply no way you can feasibly do that unless you're benchmarking 4-way SLI or CrossFire with really power-hungry cards, and no one does that anyway.
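A quick back-of-the-envelope check of that 700W claim. All component wattages and the 90% PSU efficiency here are assumptions for illustration, not measured figures:

```python
# Rough system power draw at the wall, using assumed component loads.
components = {
    "CPU (high-end, loaded)": 150,
    "GPU (power-hungry card)": 300,
    "motherboard + RAM + drives": 75,
}

def wall_draw(parts, psu_efficiency=0.90):
    """Estimated wall draw: total DC load divided by PSU efficiency."""
    return sum(parts.values()) / psu_efficiency

# A single-GPU build stays well under 700W.
print(round(wall_draw(components)))  # -> 583

# Only an extreme 4-way setup with power-hungry cards blows past it.
quad = dict(components, **{"GPU (power-hungry card)": 4 * 300})
print(round(wall_draw(quad)))  # -> 1583
```

So 1200W through the slots only matters in exactly the 4-way benchmarking scenario almost nobody runs.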
This is merely providing the means to transfer power through a slot. Hell, laptop MXM cards can already handle up to around 150W from the board. o_o This isn't anything super shocking; it's more along the lines of: "About time. You're late."