Artificial intelligence (AI) is advancing rapidly and has begun to move into terminal devices, shifting computation and analysis from the cloud toward terminal nodes. Edge computing has become a hot topic in the semiconductor industry, and the 2018 Taipei International Computer Show (Computex 2018) became the perfect stage for players across the supply chain (IP, chips, storage and more) to show their strength, with new solutions and market roadmaps announced throughout the exhibition.
Arm moves aggressively to grab the edge computing market
Rene Haas (Figure 1), president of Arm's IP Products Group, said that with the boom in the Internet of Things, Arm predicts there will be more than one trillion connected devices worldwide by 2035, serving applications such as medical, automotive, lighting, and roads. The rapid growth in the number of connected devices will drive continued development of both terminal and cloud computing, and intelligent computing will keep pushing the Internet of Things into a new era, lead the AI revolution, and make intelligent IoT computing ubiquitous.
Figure 1: Rene Haas, president of Arm's IP Products Group, said the Internet of Things boom will rapidly increase the number of connected devices, and intelligent computing will become ubiquitous.
In response to this trend, and in order to integrate the ecosystem's AI/machine learning (ML) applications, algorithms, and frameworks, and to pair software optimization with its hardware IP so that all devices and platforms can support the most commonly used machine learning frameworks, Arm recently announced three new IP products: the Cortex-A76 CPU, the Mali-G76 GPU, and the Mali-V76 VPU, which enhance gaming and AR/VR experiences as well as AI and machine learning capability. With these three new products Arm intends to strengthen its competitive advantage in mobile and once again raise the computing performance of mobile terminal devices such as smartphones, tablets, and PCs.
Nandan Nayampally (Figure 2), vice president and general manager of Arm's client business unit, said that 5G will drive innovation across the entire mobile industry; upcoming 5G applications, including VR, AI, and mobile gaming, will drive further growth in computing and create a wider range of computing requirements.
Figure 2: Nandan Nayampally, vice president and general manager of Arm's client business unit, pointed out that 5G plus AI will drive innovation across the mobile industry, and that Arm's new IP products were launched to meet that market demand.
Nayampally further pointed out that gaming is another key factor driving the continued rise of mobile computing. The game industry has become one of the largest revenue generators in the world, with output expected to reach US$137.9 billion in 2018, which in turn drives consumer demand for computing performance.
The Cortex-A76 is reportedly based on Arm's DynamIQ technology and delivers 35% higher performance and 40% better efficiency than last year's Cortex-A75, along with four times the AI/ML compute performance on terminal devices, enabling a fast and secure experience on PCs and smartphones.
The Mali-G76 delivers 30% more performance and 30% higher performance density than the previous-generation Mali-G72 GPU. This not only satisfies consumers who want to play high-end games anywhere, but also gives developers more performance headroom to write new applications, bring more high-end games to mobile, or integrate AR/VR into everyday life.
Finally, as UHD 8K demand continues to climb, Arm has introduced the Mali-V76 VPU to handle video encoding and decoding in smartphones and other devices. It can stream four 4K-resolution movies at the same time, record a multi-party video conference, or display four games in 4K; at Full HD resolution it can support up to 16 streams, enough to build a 4×4 video wall.
Project Trillium debuts to accelerate building the ML ecosystem
Meanwhile, to improve the machine learning performance of terminal devices, Arm released the Project Trillium platform in early 2018, which includes a new ML processor, an object detection processor, and Arm neural network software (Arm NN). Compared with standalone CPUs, GPUs, and accelerators, the Project Trillium platform offers programmability far beyond that of traditional DSPs.
Jem Davies (Figure 3), Arm vice president, fellow, and general manager of the Machine Learning Group, pointed out that edge computing has enormous development potential. There are many standalone solutions on the market, such as ASIC accelerators and CPUs/GPUs, and terminal device makers are of course free to choose whichever solution they want, but the drawback is that they then have to spend time integrating the hardware with software frameworks (TensorFlow, Caffe).
Figure 3: Jem Davies, Arm vice president, fellow, and general manager of the Machine Learning Group, believes Project Trillium can create a complete machine learning ecosystem for terminal devices.
Davies explained that the advantage of Project Trillium is that it is delivered as a platform. On the hardware side it provides the ML processor and the object detection processor, while the Arm NN software helps users bridge neural network frameworks such as TensorFlow, Caffe, and Android NN to Arm Cortex CPUs, Arm Mali GPUs, and the machine learning processor.
Davies further pointed out that software integration is a key element of machine learning development. Many accelerator vendors can supply the related hardware processors (CPUs, GPUs), but few can provide a complete platform architecture that helps customers integrate hardware and software or optimize the execution of ML models. Project Trillium includes both the new Arm IP processors and the neural network software, addressing today's market needs from the hardware and software sides at once. This approach also helps Arm build a complete edge computing ecosystem.
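To make this integration step more concrete, below is a minimal sketch (not Arm's official sample code) of how a trained network could be loaded through the Arm NN SDK's C++ API and dispatched to a Cortex CPU or Mali GPU backend. The model file name (model.pb), the tensor names and shapes, and the backend preference list are illustrative assumptions, and the calls reflect the Arm NN API as published around this time, so treat it as a sketch rather than a definitive recipe.

```cpp
// Minimal Arm NN sketch: parse a frozen TensorFlow graph and run it on
// Arm backends (GpuAcc = Mali GPU, CpuAcc = Cortex CPU with NEON).
// Model path, tensor names, and shapes are hypothetical placeholders.
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>
#include <vector>

int main()
{
    // Parse the frozen model into an Arm NN network.
    auto parser = armnnTfParser::ITfParser::Create();
    unsigned int dims[] = { 1, 224, 224, 3 };                // assumed input shape
    armnn::TensorShape inputShape(4, dims);
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input", inputShape } },                          // assumed input tensor name
        { "output" });                                        // assumed output tensor name

    // Create a runtime and optimize the network for the preferred backends.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    std::vector<armnn::BackendId> backends =
        { armnn::Compute::GpuAcc, armnn::Compute::CpuAcc };   // try Mali first, fall back to CPU
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Bind input/output buffers and run one inference.
    auto inputBinding  = parser->GetNetworkInputBindingInfo("input");
    auto outputBinding = parser->GetNetworkOutputBindingInfo("output");
    std::vector<float> inputData(inputBinding.second.GetNumElements(), 0.0f);  // placeholder image
    std::vector<float> outputData(outputBinding.second.GetNumElements());

    armnn::InputTensors inputs{
        { inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data()) } };
    armnn::OutputTensors outputs{
        { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };

    runtime->EnqueueWorkload(netId, inputs, outputs);
    return 0;
}
```

The point Davies makes shows up in the optimize step: the same parsed network can be retargeted across Arm backends by changing the preference list, without rewriting the application code.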
In addition, Davies observed that demand for machine learning on MCUs is also very strong. He revealed that on the first day of Project Trillium's launch, when the Arm NN software development kit was opened for download, more than 5,000 users started using CMSIS-NN to try implementing machine learning algorithms on Cortex-M.
Davies said this result actually exceeded Arm's expectations and shows that the MCU user community's needs and interest in machine learning cannot be ignored. It has also prompted Arm to decide to further improve how efficiently future Cortex-M cores execute ML algorithms.
CMSIS-NN is a compute library offered under the Arm NN SDK, Arm's neural network software development kit, that improves the efficiency with which Cortex-M cores execute machine learning algorithms. Even existing Cortex-M cores, with the help of CMSIS-NN, can perform some very simple machine learning inference, such as interpreting the meaning of sensor output data. Of course, because an MCU's computing performance and memory are limited, it cannot perform very complex inference; but simple interpretation of the data output by a single sensor node is still within reach.
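As a rough illustration of the "simple inference on a single sensor node" scenario described above, here is a small sketch that uses the legacy q7 CMSIS-NN kernels to classify a 32-sample sensor window with one fully connected layer. The layer dimensions, weights, biases, and fixed-point shift values are hypothetical placeholders; in practice they would come from a model trained and quantized offline.

```cpp
// Tiny CMSIS-NN sketch: classify a 32-sample accelerometer window into
// 4 activity classes with one fully connected layer + ReLU + softmax.
// Weights, biases, and shift values are hypothetical; real values come
// from an offline-trained, quantized (q7) model.
#include "arm_nnfunctions.h"   // CMSIS-NN kernels (arm_fully_connected_q7, ...)

#define IN_DIM   32            // input vector length (sensor window)
#define OUT_DIM  4             // number of activity classes

static const q7_t weights[IN_DIM * OUT_DIM] = { 0 };  // placeholder weights
static const q7_t biases[OUT_DIM]           = { 0 };  // placeholder biases

// Scratch buffer required by the q7 fully connected kernel.
static q15_t scratch[IN_DIM];

// Returns the index of the most likely class for one sensor window.
int classify_window(const q7_t input[IN_DIM])
{
    q7_t fc_out[OUT_DIM];
    q7_t probs[OUT_DIM];

    // Fully connected layer; bias_shift/out_shift depend on the chosen
    // fixed-point format (the values here are placeholders).
    arm_fully_connected_q7(input, weights, IN_DIM, OUT_DIM,
                           0 /* bias_shift */, 7 /* out_shift */,
                           biases, fc_out, scratch);

    arm_relu_q7(fc_out, OUT_DIM);           // in-place ReLU activation
    arm_softmax_q7(fc_out, OUT_DIM, probs); // normalize to class scores

    // Pick the class with the highest score.
    int best = 0;
    for (int i = 1; i < OUT_DIM; i++) {
        if (probs[i] > probs[best]) { best = i; }
    }
    return best;
}
```

A model of this size occupies only a few hundred bytes of weights, which is why even existing Cortex-M parts can handle this class of workload.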
Davies pointed out that if MCUs cannot support some basic ML algorithms, the ubiquitous future of AI applications will be hard to achieve. The AI services provided by cloud data centers today have obvious limitations, and pushing intelligence to the edge can make AI applications far more widespread. To let MCUs execute ML algorithms more efficiently, Arm's future product roadmap will further improve the ML performance of Cortex-M.
Edge computing moves into autonomous driving, high-performance processors are indispensable
On the other hand, the automotive industry will also be one of the key application areas for edge computing. Arm predicts that by 2020 the average car will carry more than 200 embedded sensors, with the data handled by more than 100 electronic control units (ECUs) or microcontrollers (MCUs). How to process such huge volumes of data quickly, respond in real time, maintain system stability and security, and build autonomous vehicles that meet user needs will be a major challenge for the automotive electronics market.
In this regard, John Ronco (Figure 4), Arm vice president and general manager of the embedded and automotive business unit, pointed out that the rise of edge computing means terminal devices no longer have to send large amounts of data back to the cloud, but it also means general-purpose CPUs and machine learning chips need higher processing performance. That is why Arm introduced Project Trillium and the Cortex-A76, and these products are also well suited to automotive electronics.
Figure 4: John Ronco, Arm vice president and general manager of the embedded and automotive business unit, said CPUs, GPUs, and other processors must become more efficient to meet the safety requirements of autonomous driving.
In addition, to achieve autonomous driving, a car is often equipped with vision sensors alongside radar and LiDAR, and therefore needs a more capable GPU to handle the enormous image-processing workload.
According to Ronco, the difference between the vision computing requirements of autonomous driving and those of ordinary IP network cameras is that IP cameras mostly use a single lens and rarely move; they are usually mounted in a corner of a house or outdoors. A car, by contrast, needs several camera lenses to detect road conditions and the environment, so the volume of incoming image data is huge; and because the car is always moving, the surrounding scenery keeps changing, which makes the computation more complicated and calls for a higher-performance solution.
Ronco noted that object detection processors such as the one in Project Trillium are mainly aimed at IP network cameras; meeting the vision computing needs of automobiles requires efficient GPUs such as the Mali-G76 to deliver higher computing performance, so that the vehicle can respond to rapid environmental changes while driving and avoid accidents.
All in all, the AI era has brought new business opportunities to many application fields, and edge computing will also enter the automotive industry. However, building edge computing into automobiles requires embedding more advanced technologies to achieve better performance, making cars smarter, safer, and more efficient.
Edge computing drives storage demand, WDC touts one-stop production advantage
The rise of edge computing has not only pushed processor performance higher; storage requirements have also risen, and storage companies are accelerating their product plans. Christopher Bergey (Figure 5), vice president of Western Digital's embedded application solutions group, pointed out that edge computing technologies such as machine learning make storage and computation quite complicated.
Figure 5: Christopher Bergey, vice president of Western Digital's embedded application solutions group, said that in serving the edge computing market, the company's one-stop production model is a competitive advantage.
Bergey further stated that edge computing places different requirements on storage products in different application scenarios. In automobiles, for example, particular attention is paid to temperature and reliability, and in recent years cost and a stable five-year supply have become additional considerations. In mobile devices, taking smartphones as an example, consumers are increasingly demanding about photos, and as photo resolution improves, phone storage capacity must grow and demand for edge storage becomes larger; as a result, the performance of the related embedded flash drive (EFD) products has also had to increase.
In response to this trend, Western Digital has launched a new iNAND product line, the iNAND 8521 and iNAND 7550, which use the company's 64-layer 3D NAND technology and advanced UFS and e.MMC interfaces to provide better data performance and large storage capacity for smartphones and thin-and-light computing devices. The two products accelerate data-centric applications, including augmented reality (AR), high-resolution video capture, social media experiences, and the recently rising AI and IoT edge experiences.
Bergey said mobile devices will undoubtedly move toward higher performance. Once 5G arrives, data will be transmitted ever faster and more innovative applications will appear; combined with the rise of AI, workloads will grow and so will storage capacity. The company will continue to work closely with mobile device makers to provide products suited to their needs.
Bergey also pointed out that the company is strategically well positioned for the development of edge computing. WDC has a complete product line, from entry-level to high-performance products, and it follows a one-stop production strategy: it handles everything from wafers and controllers to firmware and software, so it can quickly launch products in response to market changes or meet the needs of device commercialization. This, he said, is WDC's advantage in competing for the edge computing market.
NXP joins hands with partners to accelerate development of secure edge solutions
As for NXP, it is starting from security, working with ecosystem partners such as NEXCOM, IMAGO, Accton Technology, and Shenzhun Technology to jointly build out a secure edge computing infrastructure that supports emerging AI and machine learning connected at the edge, as well as secure edge processing deployed from the cloud.
The participating system vendors will develop products based on NXP's Layerscape and i.MX application processor families to meet the needs of a wide range of applications that require local processing power and cloud connectivity, striking the right balance between computing capability, connectivity, and storage capacity, and suiting both enterprise and industrial environments.
Through NXP's EdgeScale technology and open-source software such as Docker and Kubernetes, a variety of edge applications can be run against common cloud architectures, including Amazon Web Services (AWS) Greengrass, Google Cloud IoT, Microsoft Azure IoT, Alibaba Cloud, and private cloud architectures.
NXP points out that EdgeScale is a suite of device- and cloud-based services that simplify the secure deployment of computing resources at the network edge. NXP will work with these partners to give IoT and enterprise on-premises computing platforms the scalability, security, and ease of deployment needed for secure deployment and management.
Tareq Bustami, senior vice president and general manager of NXP's digital networking group, said that establishing secure edge solutions is critical to the successful development of the Internet of Things and Industry 4.0. The company is therefore committed to working with a wide range of equipment manufacturers to provide easy-to-adopt, cloud-connected secure edge computing solutions. Through this cooperation, the company will help bring smarter, more capable edge solutions to market, adding strong security features for large-scale deployment and management.
In summary, IP vendors, storage vendors, and chip suppliers alike are actively positioning themselves in the edge computing market, each developing open platforms and hardware architectures that let AI move into all kinds of terminal devices and help build a sound ecosystem.