Each year at the International Consumer Electronics Show (CES) we see amazing technologies: automotive advances, intelligent robots, drones, augmented reality/virtual reality (AR/VR), smart home appliance innovations, and much more. The evolution from expensive futuristic toys to practically useful devices is exciting, and this year showed significant progress in that direction. Of course, there is also some exaggeration and mere gadgetry. This article explores which consumer devices using artificial intelligence (AI) and computer vision are poised to become mainstream.
Cameras as eyes, with built-in AI
Since the Amazon Echo was introduced in 2014, voice interfaces have been widely adopted. This year it became very clear that reaching the next level requires vision and artificial intelligence in edge devices. This year's CES featured countless camera-equipped robots, and a few products stood out.
The robotics company Omron demonstrated its technology in a lively and entertaining way with Forpheus, a table-tennis-playing robot. The robot uses two cameras to track the position and speed of the ball, and a patented prediction model to calculate the ball's trajectory so it can keep a rally going with a human opponent. An additional camera tracks the human player's facial expressions to judge whether they are enjoying themselves, keeping the match fun. Although Forpheus is not intended as a commercial product, it shows how artificial intelligence, sensing and advanced robotics can be applied to a wide range of industrial and consumer functions.
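Omron has not published the details of its prediction model, so the following is only a minimal illustrative sketch of the general idea: estimating a ball's velocity from two stereo-tracked 3D positions and extrapolating a simple ballistic path (ignoring spin and air drag).

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in m/s^2, z-axis pointing up

def predict_trajectory(p_prev, p_curr, dt, horizon=0.5, steps=25):
    """Predict future ball positions from two tracked 3D points.

    p_prev, p_curr: consecutive 3D positions from stereo tracking (metres)
    dt: time between the two observations (seconds)
    Uses a constant-velocity-plus-gravity model; a real system would also
    model spin (Magnus effect) and drag, and fuse many observations.
    """
    p_curr = np.asarray(p_curr, dtype=float)
    v = (p_curr - np.asarray(p_prev, dtype=float)) / dt  # velocity estimate
    ts = np.linspace(0.0, horizon, steps)
    # p(t) = p0 + v*t + 0.5*g*t^2
    return [p_curr + v * t + 0.5 * G * t * t for t in ts]

# Example: two observations 10 ms apart
path = predict_trajectory([0.0, 2.00, 0.30], [0.0, 1.90, 0.31], dt=0.01)
print(path[-1])  # estimated position 0.5 s ahead
```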
Not every demo went as smoothly as Forpheus's table tennis. LG's launch of the smart home robot CLOi had some awkward moments, such as the robot failing to respond to voice commands. The similar-looking Jibo showed off its social skills, including facial recognition; the device launched last October and takes a different approach from mainstream smart speakers, aiming to be more social and personable. SLAMtec also presented several robots built around its SLAM localization and navigation solution, such as the general-purpose robotic platform Zeus. UbTech Robotics, which released the Alexa-powered humanoid robot Lynx last year, this year introduced a bipedal robot that can climb stairs and play football.
Sony's robotic dog Aibo, first introduced in the late 1990s, is back in the spotlight with a new, more advanced version. It contains two cameras and multiple sensors so that it can recognize its owner and react to touch and sound.
Another pet-related innovation is Petcube, an interactive Wi-Fi pet camera that lets users remotely check on their pet. One model even lets you serve your pet a meal with a single tap.
When will virtual reality take off?
The virtual reality market has seen steady growth, but not yet the explosion that was expected. This is mainly due to some difficult challenges, such as limited computing resources, power consumption, inside-out tracking, and content quality.
At CES 2018, HTC released the HTC Vive Pro, which offers higher resolution and low latency. More importantly, it can stream content directly to the headset without requiring cables. Compared with the HTC Vive, the Vive Pro is noticeably bulkier, and its high price targets it mainly at high-end professional users.
One new application of virtual reality technology that is expected to become mainstream is Google's VR180 format. It takes an innovative approach, using binocular stereo cameras to capture 3D images and replacing the inconvenient 360-degree view with a more natural 180-degree field of view. Two products dedicated to this new format are Lenovo's Mirage Camera and the Yi Horizon VR180 camera. Users can view the 3D photos with a Google Daydream VR head-mounted display (HMD), or view them in 2D on any screen.
Driverless cars steal the show
Driverless cars have been among the most attention-grabbing exhibits at past CES shows. This year, automotive experts treated the driverless car as an established reality and instead began looking for the services and applications needed to meet the new demands of people who no longer drive. For example, Ford CEO Jim Hackett described the entire autonomous-vehicle-driven ecosystem as "the living street" in his keynote speech. Toyota's e-Palette concept vehicle conveyed a similar message, depicting a multi-purpose, modular vehicle configuration ranging from mobile casinos and restaurants to ride-sharing services and driverless cargo transport.
In the field of autonomous aviation, Bell Helicopter demonstrated how unmanned flight could be implemented in a taxi-like electric helicopter.
These examples show that everyone clearly understands the driverless revolution is happening. The only question is: once it arrives, what will our cities look like?
Smart Edge Development
The explosive development of artificial intelligence over the past few years is arguably a direct result of the Internet. In the past, personal computers (PCs) and handheld devices were not powerful enough to support deep learning, so large companies such as Google and Amazon processed data in the cloud on huge server farms. The advantage of this approach is that it provides almost unlimited computing power without having to worry about which processor a particular device uses. But there are also many drawbacks. The first is data-transfer latency, which varies with network coverage, to say nothing of situations with no coverage at all. More important are the other disadvantages of cloud processing: privacy and security. When dealing with sensitive information, it is best kept on the device rather than sent to the outside world, where security is weaker.
These reasons make it clear that using the cloud to handle deep learning is only a temporary solution. Once embedded platforms can deliver enough performance to support AI processing, it will start to run on edge devices. You may wonder when embedded platforms will be powerful enough to realize this vision; the answer is that they already are. The latest flagship phones, such as the iPhone X with its embedded neural engine, can recognize the owner's face to unlock the phone without sending any information to the cloud.
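The article does not tie edge inference to any particular framework, but as a minimal sketch, running a pre-converted model entirely on-device with TensorFlow Lite might look like the following; the model file name and its role as a face-embedding network are placeholder assumptions for illustration.

```python
import numpy as np
import tensorflow as tf  # the TensorFlow Lite interpreter ships with TensorFlow

# Hypothetical pre-converted model file; any small on-device classifier would do.
interpreter = tf.lite.Interpreter(model_path="face_embedding.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy camera frame shaped to match the model's expected input tensor.
frame = np.zeros(input_details[0]["shape"], dtype=np.float32)

# All computation happens locally; no pixels are sent to the cloud.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
embedding = interpreter.get_tensor(output_details[0]["index"])
print(embedding.shape)
```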
Many other AI features can also be implemented on end devices, particularly through powerful, efficient digital signal processors (DSPs) and dedicated deep learning engines based on vector processors. Advanced processing and power-saving techniques let these systems consume far less power than the graphics processors (GPUs) and other processors used in remote servers, so even small, battery-powered devices can run AI locally instead of relying on the cloud. The NeuPro family of AI processors, together with its software and hardware tools, enables embedded intelligence and a smoother development cycle.