Are AI chips vassals of the FPGA? On the contrary, the ASIC is the mainstream market.

The history of chip development is a history of process evolution driven by Moore's Law, and equally a history of architectures being defined by their applications. Asking whether the ASIC will be replaced by the FPGA is therefore a layman's question.

The relationship between AI chips (ASICs) and FPGAs has recently been a topic of industry debate, so we invited investors and entrepreneurs from the industry to weigh in.

Guest speaker: Zhao Gu, Investment Manager at Aurora Ventures

As Moore's Law slows, application-defined chip architectures, and even the hardware and software systems built around them, will matter more and more. Just as GPUs, DSPs, and video processing chips each emerged from a wave of new applications demanding a new architecture, AI is now driving such a wave: as algorithms evolve and converge, more efficient architectures gradually take shape, and these architectures are tightly integrated with the application software of their scenarios to balance power consumption, performance, and cost.

A computing architecture consists of three core elements: compute, storage, and networking. Chip types can therefore be divided into the same three categories, which makes them easy to reason about.

First, the compute chips. Intel and ARM CPUs, NVIDIA GPUs, and CEVA DSPs all belong to this category of chips or IP. Their main job is logic and arithmetic, and they underpin cloud computing, mobile applications, and signal processing across the IT world, and now AI as well. The FPGA is one small sub-category within it, accounting for less than 5% of Intel's revenue. FPGAs can accelerate workloads the CPU handles poorly, such as signal processing and AI inference, but their shortcomings are equally clear. The FPGA emphasizes general-purpose logic that can be rewritten and reconfigured in software; this caps its computational density, and the general logic carries heavy redundancy, which means cost and power consumption. In the era of the mobile Internet and the Internet of Things, user counts and application complexity have risen sharply, and computational density (the computing power delivered per unit of power) is the core competitive metric. Here the FPGA clearly falls short: in accelerated scenarios it can beat the CPU by an order of magnitude, but it trails a dedicated AI engine by at least an order of magnitude.

Some will object that the ASIC is not general-purpose. The answer is simple: generality and computational density are a trade-off. In theory a CPU can perform any computation, but its general-purpose architecture sacrifices computational density; the best server CPUs deliver only about 1 Tflops of AI inference. A GPU easily reaches 10 Tflops, but it cannot handle complex control logic, so it will never replace the CPU. The FPGA sits between the CPU and the ASIC: it retains some flexibility, but its cost-performance is too low to serve mainstream demand. Consider the mobile phone industry, where designs are continually optimized to shave off fractions of a cent, because at that industry's scale every tiny saving compounds into enormous profit. The FPGA's fate has therefore always been that of a transitional product in a market's early stage, or a tool for small-batch niche segments.
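The trade-off above can be made concrete by ranking chip classes on computational density, i.e. throughput per watt. The sketch below is purely illustrative: the TOPS figures follow the order-of-magnitude claims in this article (CPU ~1, GPU ~10, FPGA ~10x CPU, ASIC ~10x FPGA), while the wattages are hypothetical placeholders, not measured data.

```python
# Illustrative only: chip classes ranked by computational density (TOPS/W).
# Throughput figures echo the article's order-of-magnitude claims;
# power figures are assumed placeholders.

chips = {
    # name: (AI inference throughput in TOPS, assumed power in watts)
    "server CPU": (1, 150),
    "GPU": (10, 250),
    "FPGA accelerator": (10, 50),
    "dedicated AI ASIC": (100, 30),
}

def density(tops, watts):
    """Computational density: TOPS delivered per watt of power."""
    return tops / watts

# Sort from highest to lowest density.
ranked = sorted(chips.items(), key=lambda kv: density(*kv[1]), reverse=True)
for name, (tops, watts) in ranked:
    print(f"{name}: {density(tops, watts):.3f} TOPS/W")
```

Whatever the exact wattages, the ordering (ASIC, then FPGA, then GPU, then CPU) holds as long as the article's order-of-magnitude ratios do, which is the point being argued.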

Recently we noticed an interesting development: Intel acquired a company doing structured-ASIC design, which trims out part of the redundant logic to accelerate the path from an FPGA logic design to an ASIC derived from it. This too shows that the ASIC is the ultimate answer for the mainstream market.

Aurora Ventures has invested in four AI chip companies:

▪ one focused on cloud computing,

▪ Black Sesame for autonomous driving,

▪ Yizhi for consumer electronics and security,

▪ Ours Technology for ultra-low-power sensor fusion.

Each of these companies optimizes an AI engine for a different application scenario. A future chip company cannot merely produce hardware; it must understand user needs and draw the right boundary of flexibility in order to define the best product. The cloud computing market, for example, must support many AI network models, so its architecture is more general and closer to a GPGPU design. Black Sesame and Yizhi, by contrast, deeply understand the performance requirements of their application scenarios: they need to support only the handful of algorithms their users actually run, and instead pursue the balance between power consumption and performance. What the customer really cares about is not generality (otherwise a CPU would suffice) but cost at a computational density that meets the scenario's requirements.

Others question whether emerging companies can secure fab capacity. But the whole point of the ASIC is to use the most mainstream, relatively inexpensive processes to do what the FPGA needs the most advanced processes to do, so there is no capacity problem. Yizhi, for example, needs only 40nm and 28nm processes to deliver more than 1 TOPS of compute, at a tenth or less of the FPGA's cost. The most advanced processes suit general-purpose chip design, but as Moore's Law slows they become a huge burden. On the DeePhi acquisition, my personal view is this: FPGA developers are scarce and the chips are hard to use, so automation tools are valuable to FPGA vendors, and DeePhi's software tools can accelerate FPGA-based AI development; whether Xilinx will keep investing in dedicated AI chips remains to be seen. Intel, the industry leader, has dedicated AI chip plans for autonomous driving, consumer, security, and cloud computing, and even BAT are developing their own AI chips. There is still considerable consensus on this direction.

To summarize: scenario-defined dedicated AI chips and heterogeneous computing are the main theme of the next cycle of computing-architecture change.

In fact, China's investment in AI chip companies is not too much but too little. Mature teams with real industry experience are the targets the investment community should pursue and support; they are also a strategic resource for the country's future.

2016 GoodChinaBrand | ICP: 12011751 | China Exports