The tech industry's giants appear to have fully embraced the AI revolution: Apple, Qualcomm, and Huawei have each built a mobile chip designed to provide a better platform for machine learning, though each company has taken a slightly different approach. Huawei unveiled the Kirin 970 at this year's IFA, calling it the first chipset with a dedicated neural processing unit (NPU). Apple released the A11 Bionic chip, which powers the iPhone 8, 8 Plus, and X; the A11 Bionic features a Neural Engine, a processor designed specifically for machine learning.
Last week, Qualcomm announced the Snapdragon 845, which routes artificial intelligence tasks to whichever of its cores is best suited to them. The three companies' design approaches are not all that different; the distinctions ultimately come down to how much access each chip offers developers and how much power each configuration consumes.
Before we get into that, let's first explain how an AI chip differs from an existing CPU. In the industry, you will often hear the term "heterogeneous computing" in connection with AI. It refers to a system that uses multiple types of processors, each with specialized functions, to gain performance or save energy. The idea is not new; many existing chipsets already use it, and these three new products apply the concept to varying degrees.
For the past three years, smartphone CPUs have used ARM's big.LITTLE architecture, which pairs relatively slow, energy-saving cores with faster, more power-hungry ones. The main goal is to make the chip consume less power and so deliver better battery life. Early handsets to adopt the architecture include the Samsung Galaxy S4 (in the variant with Samsung's own Exynos 5 chip) as well as Huawei's Mate 8 and Honor 6.
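The core-selection idea behind big.LITTLE can be sketched in a few lines. This is an illustrative toy, not ARM's actual scheduler; the cores, numbers, and deadline rule are invented for the example.

```python
# Toy big.LITTLE scheduler sketch: prefer the energy-saving LITTLE
# core, and wake the faster, hungrier big core only when a task
# cannot meet its deadline otherwise. All figures are made up.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    perf: float   # relative performance (work units per second)
    power: float  # relative power draw while active

LITTLE = Core("little", perf=1.0, power=0.5)
BIG = Core("big", perf=3.0, power=3.0)

def pick_core(work_units: float, deadline_s: float) -> Core:
    """Use the LITTLE core if it can finish in time; else the big core."""
    if work_units / LITTLE.perf <= deadline_s:
        return LITTLE
    return BIG

# A background sync with a loose deadline stays on the LITTLE core;
# a tight interactive deadline forces the big core awake.
print(pick_core(work_units=2.0, deadline_s=5.0).name)  # little
print(pick_core(work_units=2.0, deadline_s=1.0).name)  # big
```

The real scheduler lives in the operating system kernel and weighs far more signals, but the tradeoff it makes is the same one sketched here.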
This year's "AI chips" take the concept a step further, either adding a new dedicated component for machine learning tasks or offloading those tasks to other low-power cores. The Snapdragon 845, for example, can lean on its digital signal processor (DSP) to handle long-running tasks that involve a lot of repeated computation, such as listening for a hotword throughout a long conversation, Gary Brotman, Qualcomm's director of product management, told Engadget. Workloads like image recognition, on the other hand, are better managed by the GPU, said Brotman, who heads AI and machine learning development for the Snapdragon platform.
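The routing described above can be modeled as a simple dispatch on workload characteristics. This is a conceptual sketch, not Qualcomm's scheduler; the rules and task fields are invented for illustration.

```python
# Toy task router: map a workload's characteristics to the processor
# that handles it most efficiently, in the spirit of the Snapdragon
# approach described above. The rules here are invented.

def route_task(task: dict) -> str:
    """Return the name of the processor best suited to the task."""
    if task.get("long_running") and task.get("repetitive"):
        return "DSP"   # e.g. always-on hotword detection
    if task.get("kind") == "image":
        return "GPU"   # e.g. image recognition
    return "CPU"       # everything else falls back to the CPU

hotword = {"kind": "audio", "long_running": True, "repetitive": True}
photo_search = {"kind": "image"}
ui_event = {"kind": "input"}

for t in (hotword, photo_search, ui_event):
    print(t["kind"], "->", route_task(t))
# audio -> DSP
# image -> GPU
# input -> CPU
```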
Meanwhile, Apple's A11 Bionic adds a Neural Engine alongside its GPU to speed up Face ID, Animoji, and some third-party applications. When you start those processes on the iPhone X, the A11 fires up the Neural Engine to perform the calculations needed to verify your identity or map your facial expressions onto an animated emoji.
On the Kirin 970, the NPU handles tasks such as scanning and translating pictures in Microsoft Translator, the only third-party app optimized for the chip so far. Huawei says its "HiAI" heterogeneous computing architecture maximizes the performance of most of the chipset's components, so AI tasks may be allocated to more than just the NPU.
Despite these differences, the new architectures mean that machine learning computations, which used to be handled only in the cloud, can now run more efficiently on the device itself. By using non-CPU parts to run AI tasks, a phone can do more things at once, so you won't face delays while waiting for an app to translate text or find a picture of your dog, for example.
In addition, running these models on the phone itself eliminates the need to send usage data to the cloud, which gives users greater privacy because it reduces hackers' opportunities to get at the data.
Another big advantage of these AI chips is energy savings. Because much of this work is repetitive, the phone's power budget needs to be allocated sensibly across those repeated processes. GPUs tend to draw more power, so if a more energy-efficient DSP can achieve similar results, it is better to choose the DSP.
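The arithmetic behind that GPU-versus-DSP choice is just energy = power × time, which means a slower but lower-power processor can still finish a job more cheaply. The figures below are made up purely for illustration.

```python
# Back-of-the-envelope energy comparison (hypothetical numbers):
# a DSP that takes twice as long at a fraction of the power can
# still use far less total energy than the GPU.

def energy_joules(power_watts: float, seconds: float) -> float:
    return power_watts * seconds

gpu_energy = energy_joules(4.0, 1.0)  # GPU: 1 s at 4 W  -> 4.0 J
dsp_energy = energy_joules(0.5, 2.0)  # DSP: 2 s at 0.5 W -> 1.0 J

print(f"GPU: {gpu_energy} J, DSP: {dsp_energy} J")
# Despite taking twice as long, the DSP spends a quarter of the
# energy, which is why "similar results at lower power" favors it.
```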
To be clear, the chip itself doesn't decide which core to use when performing a given task. "Today, it's up to developers and OEMs," said Brotman. Developers use supported libraries such as Google's TensorFlow (or, more precisely, its mobile-oriented TensorFlow Lite variant) to choose which core their models run on. Qualcomm, Huawei, and Apple all support the popular TensorFlow Lite as well as Facebook's Caffe2. Qualcomm also supports the newer Open Neural Network Exchange (ONNX) format, while Apple adds compatibility for more machine learning models through its Core ML framework.
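A minimal sketch of the "delegate" pattern that lets a framework hand parts of a model to specialized hardware is shown below. The class names and interfaces here are invented for illustration; the real TensorFlow Lite delegate API works differently in detail, but the partitioning idea is the same.

```python
# Illustrative delegate-style partitioning: each operation in a model
# is assigned to the first registered accelerator that supports it,
# with the CPU as the universal fallback. Names are hypothetical.

class Delegate:
    def __init__(self, name, supported_ops):
        self.name = name
        self.supported_ops = set(supported_ops)

def partition(model_ops, delegates):
    """Assign each op to the first delegate that supports it."""
    plan = {}
    for op in model_ops:
        plan[op] = next(
            (d.name for d in delegates if op in d.supported_ops), "CPU"
        )
    return plan

npu = Delegate("NPU", {"conv2d", "matmul"})
gpu = Delegate("GPU", {"conv2d", "resize"})

# Delegates are tried in the order the developer registers them.
print(partition(["conv2d", "matmul", "resize", "topk"], [npu, gpu]))
# {'conv2d': 'NPU', 'matmul': 'NPU', 'resize': 'GPU', 'topk': 'CPU'}
```

Registration order matters: because the NPU delegate is listed first, it claims `conv2d` even though the GPU also supports it, and unsupported ops like `topk` fall back to the CPU.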
So far, none of these chips has had much impact in the real world. Chipmakers are touting their own test results and benchmarks, but those numbers mean little until AI processes become an important part of our daily lives. We are in the early stages of on-device machine learning, and very few developers have their hands on the new hardware.
What is clear, though, is that the race is on, and the competitors are focused on making machine learning tasks run faster and more efficiently on users' devices. We will just have to wait a while to see what help the shift from ordinary chips to AI chips brings to our lives.