Nvidia launched its first-generation 'Saturn V' system, built on DGX-1 servers with the previous-generation 'Pascal' Tesla P100 GPU accelerators, at the SC16 conference in 2016, but that machine did not deliver industry-stunning performance. Although first-tier chipmakers are generally secretive about the details of the giant supercomputers they design and test, Nvidia still unveiled its next-generation 'Saturn V' hybrid CPU-GPU cluster at the SC17 conference in 2017. The new cluster benefits from the new-generation 'Volta' Tesla V100 GPU accelerators in Nvidia's own DGX-1 server platform, which substantially boost the performance and versatility of the next-generation Saturn V; whether it will earn a good position in a future Top500 ranking of the world's supercomputers remains to be seen.

According to The Next Platform website, the new-generation Saturn V has 660 nodes, each carrying the same eight GPU accelerators per node as the first generation but connected by the faster NVLink 2.0 bus. With a total of 5,280 Volta GPU accelerators sharing storage and work, the system can deliver a peak of roughly 80 petaflops of single-precision and 40 petaflops of double-precision floating-point performance. On peak performance alone, that would theoretically place it among the world's top ten supercomputer systems, even counting only double-precision floating-point performance. Thanks to the Tensor Core dot-product engines, the next-generation Saturn V can reach as much as 660 petaflops on machine learning (ML) workloads.

Nvidia compute server architect Phil Rogers also described the structure of the new-generation Saturn V system at SC17. In the small cluster configuration, heat dissipation limits how many DGX-1 servers can be stacked in the same rack, so Nvidia places only six DGX-1 systems per rack, for a maximum of two racks and 12 DGX-1 nodes. In the medium cluster configuration, Nvidia combines three of the small configurations, for a total of 36 nodes spread across six racks of six nodes each. Nvidia calls this cluster a 'pod' and says it can be replicated to expand the configuration further, scaling up to the large cluster size: the large cluster combines four DGX-1 'pods' of 36 DGX-1 nodes each, for a total of 144 DGX-1 nodes. Nvidia says a training job is ideally run within a single 'pod' to minimize traffic between 'pods'.

On pricing, Nvidia did not quote a figure in its next-generation Saturn V upgrade plan, but the DGX-1V already carries a US$149,000 price tag, and the InfiniBand network adds some complexity, so the next-generation Saturn V system is reportedly expected to cost roughly US$100 million to $110 million. That estimate assumes a Saturn V system with full artificial intelligence (AI) tiered support, no external storage, and a powerful EDR InfiniBand network.
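As a rough sanity check on the figures above, the sketch below reproduces the peak-performance, pod, and cost arithmetic from per-GPU numbers. The per-V100 peaks used here (about 15 TFLOPS single precision, 7.5 TFLOPS double precision, 125 TFLOPS on the Tensor Cores) are Nvidia's data-sheet figures for the SXM2 part and are assumptions, not values quoted in the article:

```python
# Back-of-the-envelope sketch of the Saturn V figures, derived from
# per-GPU peaks. The V100 numbers below are approximate data-sheet
# values (assumptions), not figures stated in the article.

NODES = 660                 # DGX-1 nodes in the next-generation Saturn V
GPUS_PER_NODE = 8           # Tesla V100 accelerators per DGX-1
V100_FP32_TFLOPS = 15.0     # single-precision peak per GPU (approx.)
V100_FP64_TFLOPS = 7.5      # double-precision peak per GPU (approx.)
V100_TENSOR_TFLOPS = 125.0  # Tensor Core (mixed-precision) peak per GPU

total_gpus = NODES * GPUS_PER_NODE                  # 5,280 GPUs
fp32_pf = total_gpus * V100_FP32_TFLOPS / 1000      # ~79 petaflops
fp64_pf = total_gpus * V100_FP64_TFLOPS / 1000      # ~40 petaflops
tensor_pf = total_gpus * V100_TENSOR_TFLOPS / 1000  # 660 petaflops
print(f"GPUs: {total_gpus}")
print(f"Single-precision peak: ~{fp32_pf:.0f} PF")
print(f"Double-precision peak: ~{fp64_pf:.0f} PF")
print(f"Tensor Core (ML) peak:  {tensor_pf:.0f} PF")

# Pod topology as described at SC17: 6 DGX-1 per rack, 6 racks per pod,
# 4 pods in the large cluster.
nodes_per_pod = 6 * 6                               # 36 nodes per pod
large_cluster_nodes = 4 * nodes_per_pod             # 144 nodes
print(f"Nodes per pod: {nodes_per_pod}, large cluster: {large_cluster_nodes}")

# Cost estimate from the quoted US$149,000 DGX-1V list price.
est_cost = NODES * 149_000                          # ~US$98 million before networking
print(f"Estimated server cost: ~US${est_cost / 1e6:.0f} million")
```

The roughly US$98 million server total is consistent with the reported US$100 million to $110 million estimate once the EDR InfiniBand network is added.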
The report predicts that if Nvidia's next-generation Saturn V system ran the Linpack benchmark, it should score about 22.3 petaflops in 2018, a result that would be expected to place the next-generation Saturn V at roughly third place among the Top500 global supercomputers and put it on the list.
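For context, the 22.3-petaflop prediction corresponds to delivering a bit over half of the system's 40-petaflop double-precision peak on Linpack; a minimal sketch of that ratio, using only numbers quoted above:

```python
# Implied Linpack efficiency behind the 22.3-petaflop prediction,
# computed only from figures quoted in the article.
peak_fp64_pf = 40.0          # double-precision peak of the next-gen Saturn V
predicted_linpack_pf = 22.3  # predicted Linpack (Rmax) score for 2018

efficiency = predicted_linpack_pf / peak_fp64_pf
print(f"Implied Linpack efficiency: {efficiency:.0%}")  # about 56% of peak
```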