Nandan Nayampally stated that it is not only mobile processors that have such capabilities. Even very small microcontrollers (MCUs), such as the Arm Cortex-M0 and Cortex-M3 series, already have basic AI execution capabilities, and it is now possible to run machine learning models for video or speech on them.
As more mobile processors with AI features come to market, he observes a growing number of application processors (APs) that combine CPU and GPU functions to handle the AI workloads of different applications, such as object detection, speech recognition, or face recognition. Not only do these application processors use the Arm architecture; many AI-capable smart devices on the market, such as smart speakers, smart cameras, and smart TVs, are also built on Arm. Even in the cloud, he said, the architecture is now used in supercomputers in China and Japan for a variety of applications.
Nandan Nayampally emphasized that AI will become increasingly common on mobile devices: "Almost all devices can implement it." Which AI applications a device can run, he said, depends on its hardware. He further explained that machine learning (ML) and AI are both software that runs on hardware. A simple AI application that does not require heavy computation can run on the CPU or GPU alone; more advanced AI applications with highly complex operations require a neural network accelerator to speed up the calculations.
Nandan Nayampally said that the company will continue to strengthen the machine learning and AI processing power of its CPUs and GPUs, not only by adding instructions, functions, and tools to the design architecture to support machine learning and even deep learning, but also by optimizing scheduling in software for more efficient use. For example, the recently introduced Cortex-A76 CPU architecture delivers a 4x performance improvement on machine learning tasks over the previous-generation A75, and the Mali-G76 GPU is 3 times more efficient than the G72.
Beyond combining AI with chip development and design, Arm also began focusing on neural network accelerators this year, launching two new IP lines under Project Trillium, the artificial intelligence (AI) chip-design family it announced earlier this year: the Machine Learning Processor (MLP) and the Object Detection Processor (ODP).
The former accelerates machine learning: each processor delivers up to 4.6 trillion operations per second (TOPS) while remaining power-efficient, achieving 3 TOPS per watt. The latter accelerates computer vision: compared with a traditional DSP (digital signal processor), it offers up to 80 times the performance and can process Full HD video at up to 60 frames per second.
Moreover, Nandan Nayampally is quite optimistic about object detection in AR and VR applications. He noted that two years ago the company acquired Apical, a technology company focused on embedded computer vision and imaging, to help accelerate its entry into the AR and VR device market. In addition to existing Arm products that already support vision computing, he said next-generation vision solutions are on the way.