The new iPhone X integrates a neural engine for face recognition, but this is only the beginning. Embedded neural engines and dedicated AI processors bring artificial intelligence (AI) to edge devices, breaking the dependency on the cloud. The benefits of edge processing include reduced latency, operation independent of network coverage, increased privacy and security, and less communication with the cloud, which lowers costs. Thanks to these advantages, mobile devices can use AI to achieve feats that until recently appeared only in science fiction.
Machines of the past are now real-time data processing centers
I recently attended our annual seminar, where I had the opportunity to see AI technology in the embedded world. Machines that were once purely mechanical, such as cars, drones, and robots, are now becoming smart, gaining vision, perception, tracking, classification, detection, identification, and more. These devices use computer vision and sensor fusion to collect and process data and make real-time decisions. In some cases, such as driverless cars and drones, decision-making is safety-critical, and round trips to the cloud can lead to unacceptable response times. With on-chip intelligence, these machines are more accurately described as data centers.
A driverless vehicle is a good example. It requires a large number of visual and other sensors, as well as satellite positioning information and various connectivity solutions. It must also have a 'brain' to fuse and analyze all of this data. Cloud-based processing and information will also play a role in autonomous driving, but there must be an on-board processor that can make decisions quickly. Even when rare corner cases occur, it is critical that the vehicle operate safely. A processor that can handle intensive deep-learning computation is therefore a necessity, not an optional feature.
Neural network edge processing is becoming mainstream
In the smartphone field, Apple is usually the touchstone that decides whether a new feature becomes mainstream or stays a niche accessory. Apple's new flagship iPhone X builds a dedicated neural engine into the phone, a major endorsement of AI edge processing. As my colleagues predicted before the latest iPhone's release, this means that every camera-equipped device will come to include a vision DSP or other dedicated neural-network processor. The neural engine in the iPhone X implements the Face ID technology that lets users unlock their iPhone just by looking at it. The required ultra-fast response time, plus privacy and security considerations, mean that all recognition processing must be done on the phone. Now that AI capability is available on the device, more exciting AI features are sure to follow.
Google also added similar functionality to its latest flagship phone, the Pixel 2, through a processor called the Pixel Visual Core. In the highly competitive smartphone world, Google must differentiate itself. Its first move was to ship the Pixel with a camera backed by superior software. However, the intensive computation required for image enhancement, single-lens bokeh, and extended dynamic range cannot run efficiently on the standard processing hardware found in most of today's leading smartphones, so Google decided to add a second chip for these features. Adding AI functionality may be the next major differentiator. Huawei also recently announced the integration of a neural engine in its Kirin 970, and many other companies have joined the competition.
How do visual DSP-based engines achieve on-chip intelligence?
Although the benefits of edge processing are obvious, they come with challenges. The first is how to squeeze computations that normally run on huge servers into a small handheld device whose power budget is already consumed by many other processing tasks. This is why vision DSPs are critical to successful edge AI processing: they are streamlined and efficient, yet their powerful vectorization capabilities make them the best choice for running neural-engine workloads.
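Why vectorization matters can be sketched in plain Python. The inner loop of a convolution is a multiply-accumulate (MAC); a DSP with a wide vector unit performs many MACs per cycle, so the same dot product completes in a fraction of the iterations. This is an illustrative cost model only, not a description of any particular CEVA instruction set; the `width` parameter is a stand-in for the hardware's vector width.

```python
def scalar_dot(a, b):
    # One multiply-accumulate per "cycle": len(a) iterations in total.
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def simd_dot(a, b, width=8):
    # Model of a vector MAC unit: `width` multiplies per "cycle",
    # so the loop body runs only len(a) / width times.
    acc = 0
    for i in range(0, len(a), width):
        acc += sum(x * y for x, y in zip(a[i:i + width], b[i:i + width]))
    return acc

a = list(range(16))
b = [1] * 16
assert scalar_dot(a, b) == simd_dot(a, b) == 120
```

Both versions compute the same result; the point is that the vectorized loop touches the data in wide chunks, which is what lets a DSP finish the workload within a handheld power budget.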
Another challenge is how to migrate an existing neural network into the embedded DSP environment. Done by hand, this can consume a great deal of development time and become very expensive. An automated toolchain, however, can perform the key operations as a one-stop service: analyzing the network, optimizing it, and converting it for the embedded environment. It is important that such tools cover a large number of state-of-the-art networks, so that any network can easily be optimized to run on embedded devices.
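One core optimization such toolchains typically perform is quantizing the network's 32-bit floating-point weights down to 8-bit integers, which suits a DSP's fixed-point datapath and shrinks memory traffic. Below is a minimal sketch of symmetric linear quantization; real conversion tools do much more (batch-norm folding, layer fusion, per-channel scales), and the function names here are illustrative, not from any CEVA tool.

```python
def quantize_int8(weights):
    # Symmetric linear quantization: map [-max|w|, +max|w|] onto [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))
```

The half-step error bound is why well-quantized networks usually lose little accuracy: each weight moves by at most `scale / 2`.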
After the migration and optimization process is complete, the input data is usually down-sampled to allow faster processing with minimal information loss. In the Faster R-CNN process, for example, there are two processing stages: proposing candidate regions, and then classifying those regions.
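The down-sampling step can be illustrated with 2x2 average pooling, which quarters the number of pixels fed into the first stage while preserving coarse structure. This is a sketch of the idea only; a real pipeline would typically do this with strided hardware resizing rather than Python loops.

```python
def avg_pool_2x2(img):
    # Halve each dimension by averaging non-overlapping 2x2 blocks.
    # Assumes height and width are even.
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

img = [[0, 2, 4, 6],
       [2, 4, 6, 8],
       [8, 10, 12, 14],
       [10, 12, 14, 16]]
small = avg_pool_2x2(img)
assert small == [[2.0, 6.0], [10.0, 14.0]]  # 4x4 image reduced to 2x2
```

Each output pixel is the mean of a 2x2 neighborhood, so the region-proposal stage sees a quarter of the data while the gross layout of the scene survives.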
The CEVA-XM family of ultra-low-power vision DSPs is ideal for this type of work, and adding the CEVA-CNN hardware accelerator (HWA) further improves performance and speeds up the processing of neural networks such as Faster R-CNN. As the figure shows, our fifth-generation CEVA-XM6 vision processor delivers a significant improvement over the previous generation's award-winning CEVA-XM4, and the CEVA-CNN hardware accelerator takes this another big step forward.
Artificial intelligence based on deep learning offers endless possibilities for handheld devices: DSLR-quality photos, augmented- and virtual-reality applications, environment awareness, obstacle avoidance and navigation, detection, tracking, recognition, classification, segmentation, mapping, localization, video enhancement, and more. With such power in our palms, the phone-call function of a smartphone starts to look almost insignificant.