Stanford University recently updated the April results of its DAWNBench benchmark.
DAWNBench is a benchmark suite for end-to-end deep learning training and inference. It provides a common set of metrics for quantifying training time, training cost, inference latency, and inference cost across different optimization strategies, model architectures, software frameworks, clouds, and hardware.
Running a ResNet model on the Caffe framework, Intel's Amazon EC2 platform built entirely on Xeon processors took first place in both inference latency and inference cost.
Specifically, the Intel platform posted an inference latency of 9.96 ms at a cost of $0.02 per 10,000 images. On cost, the closest competitor was an NVIDIA K80 GPU plus 4-CPU platform running the MXNet framework, at $0.07 with a latency of 29.4 ms.
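For readers who want the comparison spelled out, here is a minimal arithmetic sketch using only the figures quoted above; the per-image cost breakdown is our own illustration, on the assumption that latency is reported per image and cost per 10,000 images.

```python
# Illustrative arithmetic based on the DAWNBench figures quoted above.
# Assumption: latency is per image, cost is per 10,000 images.

INTEL_LATENCY_MS = 9.96    # Intel Xeon (Caffe, ResNet) inference latency per image
INTEL_COST_USD = 0.02      # Intel cost to classify 10,000 images
NVIDIA_LATENCY_MS = 29.4   # NVIDIA K80 + 4 CPUs (MXNet) inference latency per image
NVIDIA_COST_USD = 0.07     # NVIDIA cost to classify 10,000 images

def per_image_cost(cost_per_10k: float) -> float:
    """Break a per-10,000-image cost down to a single image."""
    return cost_per_10k / 10_000

print(f"Intel cost per image:  ${per_image_cost(INTEL_COST_USD):.6f}")
print(f"NVIDIA cost per image: ${per_image_cost(NVIDIA_COST_USD):.6f}")
print(f"Cost ratio (NVIDIA / Intel):    {NVIDIA_COST_USD / INTEL_COST_USD:.1f}x")
print(f"Latency ratio (NVIDIA / Intel): {NVIDIA_LATENCY_MS / INTEL_LATENCY_MS:.1f}x")
```

By these reported numbers, the Xeon submission is roughly 3.5 times cheaper and about 3 times faster than the nearest GPU-based entry.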
In these tests, Intel Xeon's main computing rivals are Google's in-house TPU v2 (tensor processing unit) and NVIDIA's GPU arrays (including the Tesla V100).
Of course, for total training time on image classification (to 93% accuracy or higher), the ResNet-50 model trained with the TensorFlow framework on Google's TPU v2 was the fastest, at just 30 minutes, 477 times faster than the first generation.
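As a rough sanity check on that 477x figure, assuming it compares training time against an earlier baseline (our reading of the claim, not something stated explicitly in the results), the implied baseline works out to roughly ten days:

```python
# Back-of-the-envelope calculation; assumes the 477x figure compares the
# 30-minute TPU v2 result to an earlier training-time baseline.

TPU_V2_MINUTES = 30    # TPU v2 + TensorFlow ResNet-50 training time (93% accuracy)
SPEEDUP_FACTOR = 477   # speedup claimed in the results

baseline_minutes = TPU_V2_MINUTES * SPEEDUP_FACTOR
print(f"Implied baseline: {baseline_minutes} minutes "
      f"≈ {baseline_minutes / 60:.1f} hours "
      f"≈ {baseline_minutes / (60 * 24):.1f} days")
```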
One way to understand this set of tests: the different hardware platforms are like exam candidates who all start memorizing the same set of questions and answers at the same time. Google's candidate finished memorizing first, while Intel's answered the questions in the exam room with the best speed and accuracy.
Considering that Intel is also going all in on developing graphics processors, it faces a hard battle against NVIDIA and Google in deep learning.