NXP Extends Embedded AI Environment to Edge Processing Applications

NXP Semiconductors today announced an easy-to-use, generalized machine learning development environment for building innovative applications with advanced functionality. Customers can now easily implement machine learning capabilities across NXP's portfolio, from low-cost microcontrollers (MCUs) to crossover i.MX RT processors and high-performance application processors. The development environment provides a full suite of ready-to-use solutions that select the best execution engine from the available compute resources, from Arm Cortex cores to high-performance GPU/DSP (graphics processing unit/digital signal processor) complexes, and it supplies tools for deploying machine learning models, including neural networks, on those engines.

Embedded artificial intelligence (AI) is rapidly becoming a foundational capability of edge processing, enabling 'smart' devices to recognize their surroundings and make decisions based on the information they receive, with little or no human intervention. NXP's machine learning development environment supports the rapid growth of machine learning in vision, speech, and anomaly-detection applications. Vision-based machine learning applications feed camera input into various types of machine learning algorithms, of which neural networks are the most popular. These applications span most vertical markets and can perform functions such as object recognition, authentication, and people counting. Voice-activated devices (VADs) are driving the need for edge machine learning to implement wake-word detection, natural language processing, and voice user interfaces. Machine-learning-based anomaly detection (based on vibration/sound patterns) can identify impending failures, drastically reducing equipment downtime and accelerating the shift to Industry 4.0. NXP's machine learning development environment provides free software that lets customers import their own trained TensorFlow or Caffe models, convert them into an optimized AI inference engine, and deploy them across NXP's broad, scalable processing solutions, from MCUs to highly integrated i.MX and Layerscape processors.
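A key step in converting a trained model for MCU deployment is post-training quantization, which shrinks float weights to 8-bit integers so the model fits in constrained memory. The sketch below is a generic illustration of symmetric per-tensor int8 quantization, not NXP's actual converter; all function names are illustrative assumptions.

```python
# Generic illustration of post-training int8 quantization, one step a
# model-conversion tool performs when targeting an MCU inference engine.
# This is NOT NXP's tooling; names and structure are illustrative only.

def quantize_symmetric_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round to the nearest integer step and clamp to the int8 range.
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize_symmetric_int8(weights)
restored = dequantize(q, scale)
# Each quantized weight now fits in one byte (a 4x size reduction versus
# float32), and the per-weight reconstruction error is bounded by scale/2.
```

The same idea extends per-channel and to activations; real toolchains also fold in calibration data to pick scales, but the memory trade-off shown here is the reason MCU-class inference is practical at all.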

'When using machine learning in embedded applications, both cost and the end-user experience must be taken into account,' said Markus Levy, head of NXP's artificial intelligence technologies. 'For example, it still surprises many people that AI inference engines can be deployed on our cost-effective MCUs with sufficient performance. On the other hand, our high-performance crossover and application processors have the processing power for fast AI inference, and in many customer applications, training as well. As AI applications continue to expand, we will keep driving growth in this area with next-generation processors designed to accelerate machine learning.'

Another key requirement for bringing AI/machine learning to edge computing applications is the ability to deploy and upgrade embedded devices from the cloud easily and securely. NXP's EdgeScale platform enables secure configuration and management of IoT and edge devices. EdgeScale integrates AI/machine learning development and inference engines in the cloud and automatically deploys the integrated modules to edge devices, providing an end-to-end, continuous development and delivery experience.

To meet a wide range of customer needs, NXP has also created a machine learning partner ecosystem that connects customers with technology suppliers, accelerating product development and time-to-market through proven machine learning tools, inference engines, solutions, and design services. Ecosystem members include Au-Zone Technologies and Pilot.AI. Au-Zone Technologies provides DeepView, the industry's first end-to-end embedded machine learning toolkit and run-time inference engine, which lets developers deploy and tune CNNs across NXP's entire SoC portfolio, including heterogeneous mixes of Arm Cortex-A cores, Cortex-M cores, and GPUs. Pilot.AI has built a framework that performs a variety of perception tasks, including detection, classification, tracking, and identification, on client platforms ranging from microcontrollers to GPUs, and provides data collection/labeling tools and pre-trained models to enable direct model deployment.
