IT BEGINS. Nvidia just announced an HPC data-center module based on the Pascal GPU architecture. The Tesla P100 is a daughter add-in board that holds 150B transistors in total; the GPU itself, however, holds 15B transistors.
The Tesla P100 is based on the Pascal architecture, fabricated on a 16nm FinFET process, and comes with stacked HBM2 (16GB, likely in four stacks). The Pascal GPU driving the unit holds 15 billion transistors, roughly double that of the biggest current Maxwell chip. If I heard it right in the keynote, the primary Pascal GPU is huge at roughly 600mm^2.
Tesla P100 Specifications
Specifications of the Tesla P100 GPU accelerator include:
-5.3 teraflops double-precision, 10.6 teraflops single-precision and 21.2 teraflops half-precision performance with NVIDIA GPU BOOST™ technology
-160GB/sec bi-directional interconnect bandwidth with NVIDIA NVLink
-16GB of CoWoS HBM2 stacked memory
-720GB/sec memory bandwidth with CoWoS HBM2 stacked memory
-Enhanced programmability with page migration engine and unified memory
-ECC protection for increased reliability
-Server-optimized for highest data center throughput and reliability
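Those three peak figures line up with Pascal GP100's rate ratios: FP64 runs at half the FP32 rate, and packed FP16 at double it. A quick sanity check of the arithmetic, assuming the publicly quoted 3584 FP32 CUDA cores and ~1480 MHz boost clock (neither number is stated in the keynote specs above, so treat them as assumptions):

```python
# Back-of-envelope check of the Tesla P100 peak flops numbers.
# Assumed (not from this article): 3584 FP32 CUDA cores, ~1480 MHz
# boost clock, 2 FLOPs per core per cycle (fused multiply-add),
# and GP100's 1:2:4 FP64:FP32:FP16 throughput ratio.
cores_fp32 = 3584
boost_hz = 1480e6

fp32_tflops = cores_fp32 * 2 * boost_hz / 1e12  # FMA counts as 2 FLOPs
fp64_tflops = fp32_tflops / 2   # FP64 units run at half the FP32 rate
fp16_tflops = fp32_tflops * 2   # packed half2 doubles FP16 throughput

print(f"FP64 ~{fp64_tflops:.1f} TF, FP32 ~{fp32_tflops:.1f} TF, "
      f"FP16 ~{fp16_tflops:.1f} TF")
```

Under those assumptions the math lands almost exactly on the quoted 5.3 / 10.6 / 21.2 teraflops.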
In parallel with the P100 announcement, Nvidia unveiled the DGX-1, a deep-learning supercomputer. It holds two Xeon processors and a lovely eight Tesla P100 units, each with 16GB of HBM2 memory. It is priced at $129,000, but then, it is considered a supercomputer.