The Cambricon-1A, the company's first-generation terminal intelligent processor IP and the first such IP to be commercialized, has already shipped in tens of millions of smart terminals, including the Huawei Mate 10, P20, and Honor 10. Besides the 1A, Cambricon has also introduced the 1H terminal intelligent processor IP. At today's press conference, Cambricon founder and CEO Chen Tianshi first unveiled the third-generation terminal IP product, the Cambricon 1M.
According to the announcement, the 1M is built on TSMC's 7nm process and delivers 8-bit compute efficiency of 5 Tops/watt (5 trillion operations per watt). It is offered in three processor-core sizes (2 Tops, 4 Tops, and 8 Tops) to match the intelligent-processing requirements of different scenarios, and processing capacity can be scaled further through multi-core interconnection.
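To put the efficiency figure in perspective, here is a back-of-the-envelope power estimate per core size, assuming the quoted 5 Tops/watt applies uniformly to all three variants (an assumption for illustration; the announcement did not break efficiency out per variant):

```python
# Rough power estimate for the three Cambricon 1M core sizes, assuming the
# quoted 5 Tops/watt figure applies uniformly to each variant (an assumption
# made here for illustration only).
EFFICIENCY_TOPS_PER_WATT = 5.0

for core_tops in (2, 4, 8):
    watts = core_tops / EFFICIENCY_TOPS_PER_WATT
    print(f"{core_tops} Tops core -> roughly {watts:.1f} W at 8-bit precision")
# 2 Tops -> ~0.4 W, 4 Tops -> ~0.8 W, 8 Tops -> ~1.6 W
```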
In addition, the 1M carries forward the completeness of the first two generations of IP (the 1H and 1A): a single processor core can support CNN, RNN, SOM, and other deep learning models, and the 1M also supports classic machine learning algorithms such as SVM, k-NN, k-Means, and decision trees.
It is also worth noting that the 1M delivers more than ten times the performance of the widely deployed 1A. This performance gain allows terminal devices to perform local training, providing an efficient computing platform for vision, speech, and natural language processing tasks in smartphones, smart speakers, cameras, autonomous driving, and other areas. Chen Tianshi said: 'The 1M is the world's first intelligent processor product to support local machine learning training, which means devices using the 1M can be personalized based on user behavior.'
As for when the products will reach the market, Chen Tianshi said that terminals carrying the 1H will be released this year; no further details are available for disclosure at present.
In addition, iFLYTEK, one of Cambricon's earliest investors, presented its deep-cooperation projects with Cambricon at the press conference. According to the presentation, processing one hour of speech data for intelligent applications on a traditional processor takes some 10,000 hours to complete, which is why iFLYTEK has been tracking progress in dedicated artificial intelligence chips.
In 2014, iFLYTEK began discussing with the early Cambricon research team how to implement its speech algorithms on the processor, and after Cambricon was established in 2016, iFLYTEK began testing with Cambricon chips in 2017.
The test results iFLYTEK disclosed at the conference show that the energy efficiency of the Cambricon processor in intelligent speech processing is more than five times that of competitors' cloud GPU solutions, and its on-device speech recognition accuracy is 9.8% higher than that of traditional processors.
Cambricon's first cloud intelligent chip: the MLU100
The first cloud intelligent chip, the MLU100, is unveiled.
Compared with the launch of the 1M, the Cambricon MLU100, the company's first cloud intelligent chip, jointly unveiled by Chen Tianshi and his mentor Chen Guoliang, was the focus of this conference. Chen Tianshi said the company began developing the two chips three years ago, in preparation for bringing Cambricon products into the cloud.
Leifeng.com understands that the MLU100 uses Cambricon's latest MLUv01 architecture and TSMC's 16nm process. It can operate in a balanced mode (1 GHz clock) or a high-performance mode (1.3 GHz clock): the equivalent theoretical peak rate is 128 trillion fixed-point operations per second in balanced mode and 166.4 trillion fixed-point operations per second in high-performance mode, while typical board-level power consumption is 80 watts and peak power consumption does not exceed 110 watts.
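As a quick sanity check on these figures, the high-performance peak is simply the balanced-mode peak scaled by the clock ratio, and dividing peak throughput by board power gives a rough efficiency estimate (the efficiency numbers below are computed here, not disclosed by Cambricon):

```python
# Consistency check on the quoted MLU100 figures: the high-performance peak
# follows from scaling the balanced-mode peak by the clock ratio, and dividing
# peak throughput by board power gives a rough efficiency figure.
balanced_tops = 128.0                     # trillion fixed-point ops/s at 1.0 GHz
high_perf_tops = balanced_tops * (1.3 / 1.0)
print(high_perf_tops)                     # 166.4, matching the announced number

typical_power_w = 80.0
peak_power_w = 110.0
print(high_perf_tops / typical_power_w)   # ~2.1 Tops/W at typical board power (rough estimate)
print(high_perf_tops / peak_power_w)      # ~1.5 Tops/W at peak board power (rough estimate)
```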
Chen Tianshi also said that, like the Cambricon terminal processor series, the MLU100 cloud chip carries forward the versatility of Cambricon's products: it supports a wide range of deep learning and classic machine learning algorithms and meets cloud intelligent processing needs in vision, speech, natural language processing, and classic data mining, including complex scenarios involving large data volumes, multi-tasking, multi-modality, low latency, and high throughput.
A board built around the MLU100 was also unveiled at today's press conference. It uses a PCIe interface, and its exterior design was inspired by the trilobites of the Cambrian geological period, with black and blue as the main colors. Based on the MLU100 intelligent processing card, Lenovo introduced the ThinkSystem SR650 cloud intelligent server, which will support Lenovo customers across machine learning, VDI, virtualization, cloud, database, analytics, SAP, and other workloads; Sugon (Dawning Information) simultaneously launched an upgraded 'PHANERON' server with stronger performance that supports 2 to 10 Cambricon MLU processing cards and can flexibly handle different intelligent application loads.
As for how the first cloud intelligent chip actually performs, Chen Tianshi disclosed on site the inference latency of the MLU100, the Tesla V100, and the Tesla P4 under the R-CNN algorithm. According to the figures, the MLU100's latency is 125 ms, versus 174 ms for the Tesla V100 and 1069 ms for the Tesla P4; the comparison speaks for itself.
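For context, the quoted latencies translate into the following ratios (computed here from the on-stage numbers, which are a single vendor-supplied data point rather than an independent benchmark):

```python
# Relative latency ratios implied by the R-CNN figures quoted on stage
# (vendor-supplied numbers; not an independent benchmark).
latencies_ms = {"MLU100": 125, "Tesla V100": 174, "Tesla P4": 1069}

baseline = latencies_ms["MLU100"]
for chip, latency in latencies_ms.items():
    print(f"{chip}: {latency} ms, {latency / baseline:.2f}x the MLU100 latency")
# Tesla V100 is ~1.39x and Tesla P4 is ~8.55x the MLU100's latency on this test
```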
That said, Leifeng.com believes AI hardware is only one side of the equation; the interplay between software and hardware is just as critical. Cambricon has been gradually rolling out its NeuWare software toolchain since 2016. Both its terminal and cloud products offer API compatibility with TensorFlow, Caffe, and MXNet, and Cambricon also provides its own specialized libraries, making it easier to develop, migrate, and tune intelligent applications. The toolchain has already passed large-scale commercial validation with more than ten million users.
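To illustrate what framework API compatibility means in practice, here is a minimal TensorFlow 1.x inference sketch of the kind of workload such a toolchain would need to run unchanged; the model file and tensor names are placeholders, and the NeuWare-specific device selection is deliberately omitted because its API is not described in this article:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x style API, as in the 2018 time frame

# Load a frozen inference graph exported by the framework; "model.pb" is a
# placeholder file name for illustration, not something shipped with NeuWare.
graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Tensor names are hypothetical; they depend on how the model was exported.
    inputs = graph.get_tensor_by_name("input:0")
    outputs = graph.get_tensor_by_name("output:0")

with tf.Session(graph=graph) as sess:
    # An API-compatible backend would dispatch this same session.run() call to
    # the accelerator; on stock TensorFlow it simply runs on the CPU/GPU.
    result = sess.run(outputs,
                      feed_dict={inputs: np.zeros((1, 224, 224, 3), np.float32)})
    print(result.shape)
```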
However, NVIDIA's GPUs currently hold the advantage in cloud servers. On the one hand, NVIDIA is an established GPU maker with a hardware edge; on the other, developers can program the CUDA architecture in C, and the combination of powerful hardware and easy-to-use development software makes it more attractive to developers. NVIDIA has also invested far more time and money in CUDA, which is correspondingly more complete and mature. So is it realistic for Cambricon to pit its cloud AI chip against NVIDIA?
Yang Lei, managing director of Northern Light Venture Capital, told Leifeng.com: 'I haven't seen a chip startup hold a launch event and release two products at the same time. Only when you reach the scale of NVIDIA might a single event such as GTC see several products released at once. For a chip startup to release several products at the same time is, I think, quite a challenging thing.'
'The companies we invest in usually go very deep in one vertical area and concentrate on doing that well, to become an alternative to NVIDIA,' added Zhao Gu, an investment manager at Northern Light Venture Capital and former head of Intel's artificial intelligence business in China. 'Intel positions Movidius for the consumer market, Mobileye for automotive, and Nervana for cloud and edge computing. In fact, even a company of Intel's size cannot go deep in all of these different markets at the same time, and I don't think that will be competitive in the future.'
'Our strategy is to go deeper in individual vertical markets and really deliver the full stack to users so that it is genuinely usable. The biggest problem for companies like Movidius in the Chinese market is that they do not provide a complete solution, so there is no way to scale,' Zhao Gu added.
Cambricon achieves end-to-cloud coverage and plans to release its own programming language
We will not debate further here whether Cambricon's products can currently be benchmarked against NVIDIA's. What is clear is that, with the launch of its cloud intelligent chip, Cambricon now covers both the terminal and the cloud. Chen Tianshi said that chip vendors in the past have mostly focused on either the terminal or the cloud, rarely both, because the tasks and ecosystems on the two sides differ so much; Cambricon believes this will change in the intelligent era, with AI tasks on the terminal and in the cloud becoming unified and the programming and usage ecosystem becoming consistent as well.
As a maker of general-purpose machine learning chips, Cambricon is pairing its push into the cloud with the promotion of its ecosystem. The company also plans to release its own programming language in the future, and hopes partners will release products built on this software system.