Results for the new MLPerf 0.7 Artificial Intelligence (AI) training benchmark suite were released today (7/29/20), and once again Nvidia takes the performance crown. Eight companies submitted numbers for systems based on both AMD and Intel CPUs, paired with a variety of AI accelerators from Google, Huawei, and Nvidia. On each MLPerf benchmark, the leading platform's peak performance increased by 2.5x or more over the previous round. The new suite also added tests for emerging AI workloads.
All of the systems were based on AMD and Intel CPUs paired with one of the following accelerators: the Google TPU v3, the Google TPU v4, the Huawei Ascend 910, the Nvidia Tesla V100 (in various configurations), or the Nvidia Ampere A100. Noticeably absent were chip startups such as Cerebras, Esperanto, Groq, Graphcore, Habana (an Intel company), and SambaNova. This is especially surprising because all of these companies are listed as contributors or supporters of MLPerf. A long list of other AI chip startups is also not represented. Intel submitted performance numbers, but only in the preview category for its upcoming Xeon Platinum processors, not for its recently acquired Habana AI accelerators. With only Intel submitting processor-only numbers, there is nothing to compare them to, and the performance is well below that of the systems using accelerators. It is also worth noting that Google and Nvidia were the only companies that submitted performance numbers for all the benchmark categories, though Google only submitted complete benchmark numbers for the TPU v4, which is in the preview category.
Ampere benchmarks look very impressive.