An ASIC (application-specific integrated circuit) is a chip designed for a single, specific purpose, which allows its power consumption to be optimized for that function. Unlike flexible GPUs and FPGAs, a custom ASIC cannot be changed once it is manufactured. Its initial cost is therefore high, and the long development cycle raises the barrier to entry. At present, ASIC development is dominated by giants that have both AI algorithms and chip-design ambitions, such as Google with its TPU.
The main chip technologies currently available for accelerating machine learning training and deep neural networks are ASICs, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and central processing units (CPUs). Each of the four has its own strengths and weaknesses for AI and machine learning. Technically, the GPU is itself a form of ASIC specialized for graphics algorithms; the difference is that the GPU exposes an instruction set and software libraries, and this programmability lets it process locally stored data and act as an accelerator for many parallel algorithms.
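To make "accelerator for many parallel algorithms" concrete, here is a minimal, purely illustrative sketch (not vendor code) of the kind of data-parallel workload that GPU and ASIC hardware is built for: the same operation applied independently to every element, so the work can be split across thousands of hardware lanes.

```python
def saxpy(a, xs, ys):
    """Compute a*x + y elementwise -- a classic parallel kernel.

    Each output element depends only on the inputs at the same index,
    so every iteration could in principle run simultaneously on
    separate hardware lanes; that independence is what GPUs exploit.
    """
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # -> [12.0, 24.0, 36.0]
```

On a CPU this loop runs element by element; a GPU would dispatch all elements at once, which is why such embarrassingly parallel kernels see the largest speedups.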
ASIC chip
Basically, GPUs are very fast and relatively flexible. ASICs also offer fast processing, but they lack that flexibility. Developing an ASIC demands substantial design resources and effort, potentially tens or even hundreds of millions of dollars, plus a costly team of engineers, so the required investment is enormous. ASIC chips must also be upgraded continually to keep pace with new techniques and process nodes. Because ASIC designers fix the chip's logic early in the development process, an ASIC cannot respond quickly to new ideas in a fast-evolving field such as AI. An FPGA, by contrast, can simply be reprogrammed to perform a new function.
Another future direction for ASICs is the brain-like (neuromorphic) chip. Based on neuromorphic engineering, it mimics the human brain's way of processing information in order to learn; it is well suited to real-time processing of unstructured information, offers ultra-low power consumption with learning capability, and comes closer to the goal of artificial intelligence. Because an ASIC can be tailored precisely to neural-network algorithms, it outperforms GPUs and FPGAs in both performance and power consumption: Google's first-generation TPU is reportedly 14-16 times faster than contemporary GPUs, and an NPU can reach 118 times GPU performance. Cambricon has also released its instruction set for external use. ASICs are therefore expected to be the core of future AI chips.
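The reason ASICs can be tailored so well to neural networks is that a dense layer reduces to one fixed pattern: multiply-accumulate followed by a nonlinearity. A minimal sketch (illustrative only; function names and values are made up, and real TPUs implement this as a hard-wired systolic array rather than software loops):

```python
def dense_layer(x, weights, bias):
    """y[j] = relu(sum_i x[i] * weights[i][j] + bias[j])

    This multiply-accumulate pattern is the fixed operation that an
    AI ASIC hard-wires in silicon, instead of executing it as a
    stream of general-purpose instructions the way a CPU would.
    """
    out = []
    for j in range(len(bias)):
        acc = bias[j]
        for i, xi in enumerate(x):
            acc += xi * weights[i][j]   # multiply-accumulate step
        out.append(max(0.0, acc))       # ReLU nonlinearity
    return out

print(dense_layer([1.0, 2.0],
                  [[1.0, -1.0],
                   [0.5,  2.0]],
                  [0.0, 0.0]))  # -> [2.0, 3.0]
```

Because the dataflow never changes, an ASIC can dedicate every transistor to this one computation, which is where the performance-per-watt advantage over programmable GPUs and FPGAs comes from.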
In theory this is true, but in practice the question is: who will manufacture these chips?
Looking at the foundry side, there are many foundries around the world, but because of the difficulty involved, few can produce AI system-in-package designs. TSMC, Samsung, and GlobalFoundries are among those that can.
So, which vendors are designing these AI system-in-package products?
Look for vendors that are truly proficient at 2.5D integration and hold the key IP the design requires, such as the HBM2 physical-layer (PHY) interface and high-speed SerDes. Within a single-package system, the HBM2 PHY and high-speed SerDes blocks carry mission-critical communications among the components, and they pose very demanding analog-design challenges. Buying this IP from an established ASIC vendor minimizes the risk.
Not many ASIC vendors specialize in these areas, but those that do stand to benefit from the explosive growth of the artificial intelligence market.