AI robots can learn new skills after watching a human demonstration just once.

Tianhe Yu and Chelsea Finn, researchers at the Berkeley Artificial Intelligence Research (BAIR) lab at the University of California, Berkeley, have presented an AI system that lets a robot watch a human demonstration just once, then learn to recognize the new object and place it in the right location.

Researchers at the Berkeley Artificial Intelligence Research (BAIR) lab at the University of California, Berkeley recently built on the Model-Agnostic Meta-Learning (MAML) algorithm to let robots learn new skills after watching a single human demonstration. The approach combines two learning concepts. The first is meta-learning, which helps the robot acquire new skills by building on previously learned behaviors rather than learning each skill from scratch. The second is imitation learning, which lets the robot pick up new skills by observing demonstrated movements. To verify the approach, the researchers ran experiments on a PR2 robotic arm and a Sawyer arm, and the results showed that the robots could learn to recognize new objects from a single human demonstration video and perform tasks such as placing, pushing, and picking up objects. (more details)
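The BAIR system itself is not reproduced here, but the meta-learning half of the idea can be conveyed with a toy first-order MAML (FOMAML) loop: learn an initialization that adapts to a new task after a single gradient step. The 1-D regression tasks, hyperparameters, and function names below are illustrative assumptions, not the authors' code.

```python
# A minimal first-order MAML (FOMAML) sketch on toy 1-D regression tasks
# y = a * x, each task with its own slope "a". Everything here is an
# illustrative assumption, not the BAIR implementation.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is a small regression problem y = a * x with its own slope a."""
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def loss_grad(w, x, y):
    """Gradient of the mean squared error of the model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

w_meta = 0.0                      # meta-initialization shared across tasks
inner_lr, outer_lr = 0.1, 0.01

for step in range(2000):
    x_s, y_s = sample_task()      # support set: data used to adapt
    x_q, y_q = x_s, y_s           # query set: reused here for simplicity
    # Inner loop: adapt to this task with one gradient step from w_meta.
    w_task = w_meta - inner_lr * loss_grad(w_meta, x_s, y_s)
    # Outer loop (first-order MAML): nudge the meta-initialization toward
    # parameters that perform well *after* adaptation.
    w_meta -= outer_lr * loss_grad(w_task, x_q, y_q)

print("learned meta-initialization:", round(w_meta, 3))  # ends up near the task mean
```

In the robot setting the "tasks" are manipulation skills and the inner-loop data comes from the single human demonstration, but the adapt-then-meta-update structure is the same.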

Wimbledon Tennis Open AI

AI appears at Wimbledon, not only compiling highlight reels but also serving as a fan assistant

The Wimbledon tennis championships got underway with artificial intelligence woven into the event. Wimbledon chose IBM Watson to capture the best moments of each match and compile highlight reels for fans to review, using image and sound analysis of cues such as a player raising their arms or the crowd applauding. Beyond the highlight reels, Wimbledon also uses AI as an assistant for fans: last year it launched the voice assistant Fred, which plans itineraries according to fans' needs and can also answer questions asked by voice. On top of that, for this year's poster commemorating Wimbledon's 150th anniversary, IBM Watson analyzed and matched photos spanning 150 years and selected 8,400 of them to create a mosaic poster of the stadium. (more details)

Wayve self-driving

British startup Wayve teaches its AI system to drive in just 20 minutes

British AI startup Wayve, founded by researchers from the University of Cambridge, has used reinforcement learning to teach an AI system to drive along a country road within 20 minutes. The researchers note that today's self-driving cars are mostly equipped with a wide array of sensors and long lists of hand-written rules, but they believe an AI system should learn to drive the way people learn things: through trial and error. They therefore used a deep reinforcement learning algorithm (DDPG) that takes a single image from a monocular camera as input, with the whole AI system iterating through three processes: exploring, optimizing, and evaluating. The Wayve team first trained the AI system in a simulated world, then moved to on-road testing, with a human intervening to correct the car whenever it made a mistake. Within 20 minutes the system learned to follow the country road without deviating from it. (more details)
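Wayve's exact networks and reward are not published here, so the sketch below only illustrates the kind of DDPG (Deep Deterministic Policy Gradient) update the article refers to: a critic regressed toward a Bellman target and an actor pushed to maximize the critic, with slowly tracking target networks. The feature dimension standing in for the encoded camera frame, the layer sizes, and the hyperparameters are illustrative assumptions.

```python
# A minimal DDPG update step in PyTorch; all sizes and hyperparameters are
# illustrative assumptions, not Wayve's implementation.
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 64, 2      # e.g. encoded camera frame -> [steering, speed]
GAMMA, TAU = 0.99, 0.005

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)  # target networks
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(state, action, reward, next_state, done):
    """One gradient step on a batch sampled from a replay buffer."""
    # Critic: regress Q(s, a) toward the one-step Bellman target.
    with torch.no_grad():
        next_q = critic_tgt(torch.cat([next_state, actor_tgt(next_state)], dim=1))
        target_q = reward + GAMMA * (1 - done) * next_q
    q = critic(torch.cat([state, action], dim=1))
    critic_loss = nn.functional.mse_loss(q, target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: maximize the critic's value of the actor's own actions.
    actor_loss = -critic(torch.cat([state, actor(state)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_tgt, p in zip(tgt.parameters(), src.parameters()):
            p_tgt.data.mul_(1 - TAU).add_(TAU * p.data)

# Dummy batch standing in for transitions collected while driving.
B = 32
ddpg_update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))
```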

California Institute of Technology DNA

A breakthrough for synthetic biomolecular circuits: a DNA artificial neural network can recognize molecular handwriting

A research team at the California Institute of Technology announced that it has built an artificial neural network out of synthetic DNA that can recognize "molecular handwriting". The researchers say this is an important step toward integrating AI into synthetic biomolecular circuits. The team, led by assistant professor of bioengineering Lulu Qian and her graduate student Kevin Cherry, developed a sophisticated model based on a DNA neural network that can identify the digits 1 through 9. Given an unknown digit, the DNA artificial neural network in the test tube carries out a series of reactions to compute its answer, then reports it with a pair of fluorescent signals, for example green and yellow for 5, or green and red for 9. Kevin Cherry said that one day biomolecular circuits like these could identify hundreds of biomolecules and analyze their molecular environment directly. (more details)
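The Caltech circuit realizes a winner-take-all classifier in DNA chemistry; the NumPy sketch below shows the same computational idea on made-up data: compare an input "molecular handwriting" pattern against one stored pattern per digit and report the best match. The pattern size, the random stored patterns, and the noise level are illustrative assumptions, not the paper's parameters.

```python
# Winner-take-all classification sketch (illustrative only; the real system
# performs these comparisons with DNA strand-displacement reactions).
import numpy as np

N_CLASSES, PATTERN_SIZE = 9, 100          # digits 1..9, 100-bit molecular patterns
rng = np.random.default_rng(1)
memories = rng.integers(0, 2, size=(N_CLASSES, PATTERN_SIZE))  # one stored pattern per digit

def classify(pattern):
    """Winner-take-all: the digit whose stored pattern overlaps the input most wins."""
    scores = memories @ pattern            # weighted sums, done by chemical reactions in the DNA version
    return int(np.argmax(scores)) + 1      # digits are labelled 1..9

# Present a noisy copy of the pattern stored for digit 5 (flip a few bits).
test = memories[4].copy()
flipped = rng.choice(PATTERN_SIZE, size=10, replace=False)
test[flipped] = 1 - test[flipped]
print("predicted digit:", classify(test))  # the DNA network reports this as a pair of fluorescent colours
```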

OpenAI reinforcement learning

With just one human demonstration of the game, OpenAI uses reinforcement learning to make the AI surpass its human teacher

According to OpenAI's latest research, an AI agent can now learn game skills from a single human demonstration of Montezuma's Revenge and reach a high score of 74,500. Unlike other work, OpenAI no longer asks the AI agent to imitate human behavior; instead, it uses reinforcement learning to optimize for behavior that earns high scores. First, OpenAI uses a simple algorithm to choose a starting point within the human game demonstration, then has the AI agent play from that state, training it with the reinforcement learning algorithm Proximal Policy Optimization (PPO) until it reaches human level. OpenAI emphasizes that the value of this research is that the agent is allowed to deviate from the demonstrated behavior, so it has the chance to find solutions the human demonstrator never thought of. (more details)
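The demonstration is used only to choose starting states, so sparse rewards stay within reach while the agent learns. The toy below captures that curriculum on a simple chain environment; PPO itself is not reproduced, a tabular Q-learner stands in, and the environment, sizes, and thresholds are illustrative assumptions rather than anything from OpenAI's code.

```python
# Toy "start from the demonstration" curriculum: episodes begin from states
# taken ever earlier in a single demo as the agent masters each stage.
import numpy as np

N = 30                          # chain of states 0..N-1; the only reward is at N-1
demo = list(range(N))           # the "human demo": walking straight to the goal
Q = np.zeros((N, 2))            # Q[s, a] with actions 0 = left, 1 = right
rng = np.random.default_rng(0)

def run_episode(start, eps=0.1, max_steps=60):
    """Play one episode from `start`, updating Q; return 1.0 if the goal was reached."""
    s = start
    for _ in range(max_steps):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0          # sparse reward only at the goal
        Q[s, a] += 0.5 * (r + 0.95 * Q[s2].max() - Q[s, a])
        s = s2
        if r > 0:
            return 1.0
    return 0.0

start_idx = N - 2               # begin training right next to the goal
while start_idx >= 0:
    successes = sum(run_episode(demo[start_idx]) for _ in range(50))
    if successes >= 45:         # the agent reliably reaches the goal from here,
        start_idx -= 1          # so move the starting state earlier in the demo
print("the agent now reaches the goal from the very start of the demo")
```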

GraphQL GraphCMS

Combining headless architecture with GraphQL, GraphCMS is officially released

Users now have a new content management system (CMS) to choose from: GraphCMS has announced its official release. In addition to a redesigned user interface, it improves and unifies the GraphQL API and strengthens security features. The company also promises new features within the next three months, including revision history, custom workflows, custom roles and permissions, content views, and support for geographic locations. GraphCMS developer ambassador Jesse Martin said that as its features have grown, GraphCMS has evolved from a small side project into a headless CMS popular with independent developers and businesses, and it has received a million-dollar seed fund. (more details)
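As a point of reference for the headless model, client code typically pulls content from a CMS like this over a standard GraphQL HTTP endpoint rather than having the CMS render pages. The snippet below only illustrates that general pattern; the endpoint URL and the "posts" content model are hypothetical placeholders, not GraphCMS's actual project schema or API details.

```python
# Generic GraphQL-over-HTTP fetch from a headless CMS (illustrative only).
import json
import urllib.request

ENDPOINT = "https://api.example.com/my-project"   # hypothetical placeholder endpoint
QUERY = """
{
  posts {        # hypothetical content model
    title
    slug
  }
}
"""

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps({"query": QUERY}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:  # needs a real endpoint to run
    print(json.load(response)["data"]["posts"])
```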

Sydney Airport Face Recognition

Sydney Airport in Australia begins testing face recognition for check-in and boarding

Sydney Airport and Australian carrier Qantas announced a trial that replaces passport checks with face recognition. The trial is part of the five-year automated clearance program Sydney Airport announced last year, which aims to do away with the inconvenience of paper passport procedures, and Qantas is the first partner. Passengers on selected Qantas international flights departing Sydney Airport will begin testing face scans for check-in, baggage drop, access to airport lounges, and boarding. Some critics, however, point out that while biometrics are powerful, the data ends up in the hands of governments and large enterprises, raising privacy concerns. Sydney Airport said it will actively seek passengers' consent to join the trial, promised to provide "the most stringent privacy protection", and will comply with all relevant laws and regulations. (more details)

Supercomputer

After five years, the United States reclaims the title of global supercomputing leader, and Taiwan's "Taiwania" ranks 148th

The semi-annual TOP500 ranking of the world's supercomputers was recently released. Summit, the supercomputer built by the US Department of Energy and IBM, delivered 122.3 petaflops and pushed China's Sunway TaihuLight (93 petaflops) out of first place, marking the first time since November 2012, more than five years, that the United States has led the world's supercomputers. Besides winning first place with Summit, the United States also took third place with Sierra (71.6 petaflops). Second place went to China's Sunway TaihuLight, fourth place to another Chinese system, Tianhe-2A (61.4 petaflops), an upgrade of Tianhe-2, and fifth place to Japan's AI Bridging Cloud Infrastructure (19.9 petaflops). It is also worth mentioning that Taiwania, the system Taiwan's National Center for High-performance Computing commissioned from Fujitsu, made the list as well, ranking 148th with a performance of 1.33 petaflops. (more details)

MIT sound recognition

Without human intervention, an AI learns to recognize more than 20 instruments by watching 60 hours of video

MIT recently unveiled an AI system called PixelPlayer that uses self-supervised deep learning to find patterns in video with three neural networks: one that performs visual analysis of the video, replacing human-made labels with visual cues; one that analyzes the video's audio; and a synthesizer network that associates specific pixels with specific sound components and separates them. The algorithm trained itself by watching 60 hours of musical performance videos, with no human intervention or labeling, learning the sounds of more than 20 instruments and the correspondence between sounds and the instruments visible in the frame. The system lets users independently adjust each instrument's sound, which could be very helpful for remastering old music recordings. (more details)
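MIT's published work describes these three components being trained with a self-supervised "mix-and-separate" objective: mix the audio of two videos and train the networks to recover each source using its own video's pixels. The structural sketch below mirrors that division of labor in miniature; the tiny CNNs, the pooling of pixel features to one vector, and the mask loss are illustrative assumptions, not the released model.

```python
# Structural sketch of a PixelPlayer-style mix-and-separate setup (illustrative).
import torch
import torch.nn as nn

class VideoNet(nn.Module):
    """Maps video frames to per-pixel visual features (visual analysis)."""
    def __init__(self, feat=16):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, feat, 3, padding=1))
    def forward(self, frames):                 # (B, 3, H, W)
        return self.cnn(frames)                # (B, feat, H, W)

class AudioNet(nn.Module):
    """Splits a mixture spectrogram into feature channels (audio analysis)."""
    def __init__(self, feat=16):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, feat, 3, padding=1))
    def forward(self, spec):                   # (B, 1, F, T)
        return self.cnn(spec)                  # (B, feat, F, T)

class Synthesizer(nn.Module):
    """Combines a visual feature with audio channels into a separation mask."""
    def forward(self, pixel_feat, audio_feat):
        mask = torch.einsum("bc,bcft->bft", pixel_feat, audio_feat)
        return torch.sigmoid(mask)             # (B, F, T)

video_net, audio_net, synth = VideoNet(), AudioNet(), Synthesizer()

# Mix the audio of two clips and learn to recover clip A from its own pixels.
B, H, W, F, T = 2, 16, 16, 64, 32
frames_a = torch.rand(B, 3, H, W)
spec_a, spec_b = torch.rand(B, 1, F, T), torch.rand(B, 1, F, T)
mix = spec_a + spec_b

audio_feat = audio_net(mix)
pix_a = video_net(frames_a).mean(dim=(2, 3))   # pool to one feature per clip
mask_a = synth(pix_a, audio_feat)
target_a = (spec_a.squeeze(1) / mix.squeeze(1).clamp(min=1e-6)).clamp(max=1.0)
loss = nn.functional.binary_cross_entropy(mask_a, target_a)
loss.backward()
```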

Image Source / Wayve, BAIR, IBM, MIT

AI Trends: Recent News

1. Nvidia's new AI technology removes photo noise, including text and watermarks

2. Apple consolidates its Core ML and Siri teams into a new AI department led by former Google AI chief John Giannandrea

3. Several Silicon Valley AI startups revealed to be using real people posing as AI

4. "Beijing Artificial Intelligence Industry Development White Paper (2018)" is released! Statistics China currently has 4,040 AI companies

5. 26 universities in mainland China jointly propose adding artificial intelligence degree programs

Data source: compiled by iThome, July 2018
