MathWorks Announces MATLAB Integration with NVIDIA TensorRT to Accelerate Artificial Intelligence Applications

Beijing, China – April 11, 2018 – MathWorks today announced that MATLAB now integrates with NVIDIA TensorRT through GPU Coder. The integration helps engineers and scientists develop new artificial intelligence and deep learning models in MATLAB and deliver the performance and efficiency demanded by growing data center, embedded, and automotive applications.

MATLAB provides a complete workflow to rapidly train, validate, and deploy deep learning models. Engineers can use GPU resources without additional programming, so they can focus on their applications rather than on performance tuning. The new integration of NVIDIA TensorRT with GPU Coder makes it possible to develop deep learning models in MATLAB and then deploy them on NVIDIA GPUs with high throughput and low latency. Internal benchmarks show that MATLAB-generated CUDA code combined with TensorRT runs deep learning inference 5 times faster than TensorFlow when deploying an AlexNet model, and 1.25 times faster than TensorFlow when deploying a VGG-16 model.
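As an illustration of this workflow, the following is a minimal sketch of how GPU Coder can target TensorRT from MATLAB, using the documented coder.gpuConfig, coder.DeepLearningConfig, and codegen interfaces; the entry-point function name alexnet_predict and the MEX build target are illustrative assumptions, not details from this announcement.

    % alexnet_predict.m -- entry-point function for code generation
    % (the function name is hypothetical; any supported network could be used)
    function out = alexnet_predict(in)
        persistent net;                    % load the pretrained network only once
        if isempty(net)
            net = coder.loadDeepLearningNetwork('alexnet');
        end
        out = predict(net, in);            % run inference on the input image
    end

    % Generation script: configure GPU Coder to emit CUDA that calls TensorRT
    cfg = coder.gpuConfig('mex');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

    % Generate code for a single 227x227 RGB image (AlexNet's input size)
    codegen -config cfg alexnet_predict -args {ones(227,227,3,'single')}

The generated MEX function can then be called from MATLAB like the original function, with inference executing through TensorRT-optimized CUDA on the GPU.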

'The rapid evolution of image, speech, sensor, and Internet of Things (IoT) technologies is driving teams to explore artificial intelligence solutions with better performance and efficiency. At the same time, deep learning models are becoming ever more complex. All of this puts tremendous pressure on engineers,' said David Rich, director at MathWorks. 'Now, teams can train deep learning models with MATLAB and deploy them on NVIDIA GPUs for real-time inference in environments ranging from the cloud to the data center to embedded edge devices.'
