Originally Posted by DNMock
I'm pretty sure GPUs make up a big majority of the A.I. market, although Intel did make some pretty big strides a few months back with a prototype setup using specialized instruction sets that would bring their CPUs up to near cost parity with Volta Teslas. Problem is that Ampere Teslas are gonna savage that.
Even if CPUs and GPUs reached cost parity on compute alone, that ignores the RAM, motherboards, storage, and PSUs. A many-GPU system packs several accelerators per node, so it needs far fewer of those components than a many-CPU system; that overhead ends up being a fraction of the cost.
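Quick back-of-envelope sketch of that point. All numbers are hypothetical placeholders, just to show how per-node overhead scales with accelerator density, not real hardware quotes:

```python
# Hypothetical per-node overhead: motherboard + RAM + storage + PSU.
# Purely illustrative, not a real price.
PER_NODE_OVERHEAD = 2000  # dollars

def cluster_overhead(accelerators_needed, accelerators_per_node):
    """Cost of the non-compute parts of a cluster.

    One CPU per node forces one full set of supporting components per
    compute unit; packing 8 GPUs per node amortizes that same set
    across 8 compute units.
    """
    nodes = -(-accelerators_needed // accelerators_per_node)  # ceil division
    return nodes * PER_NODE_OVERHEAD

# 64 compute units either way:
cpu_overhead = cluster_overhead(64, 1)  # 64 nodes of 1 CPU each
gpu_overhead = cluster_overhead(64, 8)  # 8 nodes of 8 GPUs each
print(cpu_overhead, gpu_overhead)  # 128000 16000
```

Same compute-unit count, 8x less spent on supporting hardware for the dense-GPU layout under these toy numbers.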
Which is probably why the DGX A100 uses EPYC, besides the obvious I/O needs.