Graphics processors

Growth in Nvidia's data center business has slowed markedly in recent quarters. Whether Nvidia can hold its position in artificial intelligence (AI) is an open question: the emergence of custom chips such as Google's Tensor Processing Unit (TPU) threatens Nvidia's dominance in deep-learning training, and whether Intel, AMD, and the many startups in the field can overtake Nvidia is worth examining.

According to Forbes, Karl Freund, an analyst at market research firm Moor Insights & Strategy, recently published a study noting that Nvidia's astonishing growth in AI has attracted a great deal of attention and drawn many would-be rivals. Many of these companies claim to have developed chips that are 10 times faster than Nvidia's while consuming less power. With the exception of AMD's GPUs, all of them are betting that chips designed specifically for neural networks are a viable route.

Intel

Intel acquired Nervana in 2016 to build out its accelerator portfolio. The original Nervana engine was due to be released in 2017, but there has been no news so far. Nervana may have decided to revise its original design after Volta's Tensor Cores, roughly six times more efficient than Pascal, shocked the community. Freund believes the first production Nervana chips may appear in late 2018.

The discussion above concerns training deep neural networks (DNNs), the area where Nvidia has had its great AI success. However, Intel says that by pairing good software design with its Xeon data-center processors, customers can achieve outstanding performance on inference workloads; the company claims to hold more than 80% of the inference processing market.

Google TPU and other in-house ASICs

Google has two types of ASIC for AI: one for inference and one for training. Google presents the TPU as a single accelerator, but it is in fact made up of four identical ASICs, each delivering about 45 TOPS.
By contrast, each Nvidia Volta chip delivers up to 125 TOPS. Over time, Google may shift most of its in-house GPU workloads to the TPU.

AMD

Although AMD is readying its software stack to compete with Nvidia for machine-learning workloads, its current Vega chip trails Nvidia's Volta by a generation in peak performance.

Startups

More than 10 startups worldwide are planning to compete for machine-learning workloads, and some are about to launch chips. China's Cambricon appears well funded and is backed by the Chinese government; it focuses on processing neural networks rather than building them. Silicon Valley's Wave Computing has launched a chip that can build training models. Wave uses a novel design called a dataflow architecture, which it says eliminates the bottlenecks of traditional accelerators: its dataflow processor can train and process neural networks directly without a CPU. Unlike Google's TPU, Wave supports deep-learning frameworks including Microsoft CNTK, Amazon MXNet, and TensorFlow. Other notable companies such as Cerebras, Graphcore, and Groq remain in stealth mode but have raised large sums to build custom AI accelerators, which are not expected to launch until 2019.

Freund believes Nvidia's biggest threat could be Google's TPU, although Google may continue to buy and use many GPUs for workloads that are not a good fit for the TPU, such as recurrent neural networks for language processing. Wave is a good option for companies that want neither to use the public cloud for AI development and deployment nor to build their own GPU infrastructure. Finally, if Intel can use Nervana to enter the market and is willing to back it fully, Nervana could pose a threat in 2019, but it would need at least three years and a solid roadmap to develop a viable ecosystem.
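The TPU-versus-Volta comparison can be checked with quick arithmetic, using the peak figures quoted in the article (four ASICs at about 45 TOPS each per TPU board, up to 125 TOPS per Volta chip); this is only a rough peak-throughput sketch, not a real benchmark:

```python
# Peak-throughput figures as quoted in the article (TOPS = tera-operations/second).
tpu_asics_per_board = 4
tops_per_tpu_asic = 45
tpu_board_tops = tpu_asics_per_board * tops_per_tpu_asic  # aggregate per TPU board

volta_chip_tops = 125  # Nvidia Volta, Tensor Core peak

print(tpu_board_tops)                          # 180
print(tpu_board_tops / volta_chip_tops)        # 1.44
```

On these quoted peak numbers, a four-ASIC TPU board edges out a single Volta chip by roughly 1.4x, while each individual TPU ASIC remains well below one Volta.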
One more factor to consider is that as Nvidia moves to 7-nanometer process technology, it will be able to devote significant additional die area to AI features. The share of the chip dedicated to AI may therefore grow until that portion effectively becomes an ASIC that can also render graphics. Freund does not see Nvidia as a GPU company but as a platform company with unlimited growth ambitions. No other company has Nvidia's depth and breadth of expertise in AI hardware and software. If Nvidia foresees a threat from AMD, Intel, or custom ASICs, it can design a better AI chip, as it has already done with its Deep Learning Accelerator (DLA); and if the GPU loses ground, Nvidia can pivot to whatever comes next. Meanwhile, it enjoys clear growth and a market-leading position in AI training chips. In inference, Nvidia focuses on data-center workloads and on vision-guidance systems for applications such as self-driving cars.