Google, Intel, and Nvidia's latest chips for artificial intelligence applications boast extremely high computing speed and accuracy. With ARM, AMD, Amazon, and Facebook also pitching new products, it is difficult for customers to quickly grasp the many hardware and software options on the market and to work out which combination of models and chips is optimal.

According to The Register, the Pixel Visual Core, the co-processor in Google's Pixel 2, is Google's first smartphone chip and is designed to run the Pixel 2's machine learning image-processing software. The Pixel Visual Core contains eight image processing unit (IPU) cores, each with 512 simple arithmetic logic units (ALUs), and delivers 3 trillion operations per second. Google says that getting the most out of the IPU requires close cooperation between hardware and software: handing most of the low-level control over to software improves hardware efficiency, but it also makes the IPU harder to program in traditional programming languages. Beyond Halide and TensorFlow, Google has therefore also built a customized compiler for software optimization. The Pixel Visual Core is produced by a contract manufacturer and will be formally enabled later through a Pixel 2 software update.

Intel, for its part, has built its Nervana neural network processor specifically for training and deploying deep learning models. The ASIC, formerly code-named "Lake Crest", is said to handle the matrix multiplication and convolution operations found in neural networks. The chip uses a low-precision Flexpoint format, so the density of individual operations is lower, but memory bandwidth is relatively high. The Nervana chip will ship before the end of 2017, and Facebook will be the first to adopt it.

Terah Lyons, a former technology policy advisor to the Obama administration, will lead the Partnership on AI, the group formed by Amazon, Google, Facebook, Microsoft, DeepMind, Apple, and other major machine learning providers. The organization's aim is to promote the well-being of society through artificial intelligence.

Comma.ai, a new company targeting car enthusiasts, has launched the EON, which combines a dashcam with a real-time display and can upload vehicle footage to the cloud, where it is analyzed by the deep learning app Chffr. The analyzed imagery is sent back to the driver's smartphone in real time, with icon tags that provide a head-up display (HUD)-like feature.

NVIDIA's Pegasus chip performs 320 trillion operations per second, and the company claims it is the world's first computing platform capable of powering Level 5 self-driving. The earlier Drive PX series platforms, built around a system-on-chip (SoC), have been able to reach Level 1 to Level 3 self-driving standards.

Because each development team has its own preferred software, there is demand for smoothly transferring models between different AI frameworks. ARM, AMD, Huawei, IBM, Qualcomm, and Intel have announced support for the Open Neural Network Exchange (ONNX) format, an effort led by Facebook and Microsoft. With ONNX, a neural network can be trained in one framework and then transferred to another to perform inference. ONNX is no doubt a great boon for manufacturers without customized chips and with limited software capability of their own.

Amazon, meanwhile, uses Gluon, an interface developed for the Apache MXNet framework, to make it easier to build and train prototype deep learning models through predefined layers, optimizers, and initializers.
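To make the cross-framework handoff that ONNX promises concrete, here is a minimal sketch of exporting a trained network to the interchange format. It assumes PyTorch and its built-in ONNX exporter are installed; the toy model and the file name "classifier.onnx" are illustrative choices, not details from the article.

    import torch
    import torch.nn as nn

    # Train (or load) a model in one framework -- here a toy classifier in PyTorch.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    model.eval()

    # Export the trained network to the ONNX interchange format. The example
    # input fixes the shapes recorded in the exported graph.
    dummy_input = torch.randn(1, 784)
    torch.onnx.export(model, dummy_input, "classifier.onnx")

    # Any ONNX-capable framework or runtime can now load classifier.onnx and
    # run inference, independent of the framework used for training.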
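Gluon's appeal is that those predefined layers, optimizers, and initializers can be wired together imperatively. Below is a minimal sketch of building a small network and taking one training step, assuming Apache MXNet with the Gluon API is installed; the layer sizes and the random toy batch are made up for illustration.

    import mxnet as mx
    from mxnet import gluon, autograd, nd

    # Build a prototype network from predefined layers.
    net = gluon.nn.Sequential()
    net.add(gluon.nn.Dense(128, activation='relu'),
            gluon.nn.Dense(10))
    net.initialize(mx.init.Xavier())  # predefined initializer

    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

    # One training step on a random toy batch (32 samples, 784 features, 10 classes).
    data = nd.random.normal(shape=(32, 784))
    label = nd.array([i % 10 for i in range(32)])  # dummy class labels
    with autograd.record():
        loss = loss_fn(net(data), label)
    loss.backward()
    trainer.step(batch_size=32)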