Behind the frenzy: how AI technology has developed

The basic framework of artificial intelligence existed in the 1940s, and organizations have been innovating in the field ever since. In recent years, big data and advanced deep learning models have pushed artificial intelligence to an unprecedented level of sophistication. Will these new technology components ultimately produce the intelligent machines envisioned in science fiction, or will they continue the current trend of simply putting the same wine in a more upscale bottle?

'It's actually new wine, but it comes in a variety of bottles and vintages,' said James Kobielus, Wikibon's lead analyst for data science, deep learning and application development.

Kobielus adds that most of the old wine is in fact still quite palatable: the new generation of AI takes earlier approaches and builds on them, such as the technology used by Apache's big data framework Hadoop.

However, today's fanaticism about artificial intelligence stems from concrete developments that earlier generations of AI lacked, developments which, according to Kobielus, bring us closer to machines that can perceive and think like humans. 'What counts is big data,' he said at theCUBE's studio in Marlborough, Massachusetts. Big data has sparked interest in artificial intelligence because it is a huge help in training deep learning models, enabling them to make more human-like inferences. Kobielus discussed the breakthroughs in artificial intelligence and machine intelligence with Dave Vellante, Wikibon's lead analyst and co-host of SiliconANGLE's live studio theCUBE.

The artificial intelligence revolution will be algorithmic

Artificial intelligence's long-term progress shows up not only in smarter conversation but also in rapid revenue growth. A survey by research firm Tractica LLC shows that the artificial intelligence software market, worth $1.4 billion in 2016, will grow to $59.8 billion by 2025. 'Artificial intelligence has applications and use cases in almost every industry vertical and is considered the next major technology shift, similar to past shifts such as the industrial revolution, the computer age and the smartphone revolution,' said Aditya Kaul, research director at Tractica. Some of these verticals include finance, advertising, healthcare, aerospace and consumer products.

The idea that the next industrial revolution will be driven by artificial intelligence software may sound like a nerdy fantasy, but even outside Silicon Valley the feeling is spreading. Time magazine recently ran a feature article entitled 'Artificial Intelligence: The Future of Humanity.' Yet this vision of artificial intelligence has existed for decades in the fever swamps of science fiction. Has the technology really evolved that fast in the past few years? And what can we realistically expect from artificial intelligence today and in the foreseeable future?

First of all, AI is a broad label: in practice a buzzword rather than a precise technical term. Kobielus says AI refers to 'anything that helps a machine think like a human.' But isn't machine 'thinking,' in the strictest sense, something entirely different from the human mind? Whether a machine really thinks depends on your definitions. If 'thinking' is taken as a synonym for 'inferring,' then the machine might be regarded as doing something equivalent to what the brain does.

When people talk about artificial intelligence, they usually mean its most popular discipline, machine learning: a mathematical practice in which patterns are inferred from a set of data. 'For a long time, people have used software to infer patterns from their data,' Kobielus said. Long-standing inference methods include support vector machines, Bayesian logic and decision trees; these have not disappeared and are still used in the growing field of artificial intelligence. Machine learning models, algorithms trained on data, make their own inferences, often referred to as AI outputs or insights. These inferences do not need to be preprogrammed into a machine; only the model itself requires programming.
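To make that concrete, here is a minimal sketch of pattern inference with one of those classic methods, a decision tree. The toy maintenance dataset and its column meanings are hypothetical, invented purely for illustration; this is not code from any system mentioned here.

```python
# A minimal sketch of inferring a pattern from data with a decision tree,
# one of the classic methods mentioned above. The toy dataset is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours_of_use, error_count]; label: 1 = needs maintenance
X = [[100, 1], [250, 3], [400, 8], [500, 12], [120, 0], [480, 9]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the training step: the pattern is inferred, not preprogrammed

# The trained model now makes its own inference on unseen data
print(model.predict([[300, 7]]))
```

Note that nothing about the decision rule was written by hand; only the model type and its settings were programmed, and the rule itself was inferred from the data.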

A machine learning model's inferences rest on statistical probabilities, a process somewhat similar to human understanding. Inferences drawn from data can take the form of predictions, correlations, classifications, recognized anomalies or trends. In machine learning, the pattern is hierarchical: the basic data classifier is called a 'perceptron,' and layering perceptrons forms an artificial neural network. The connections between perceptrons activate their functions, including non-linear ones such as the hyperbolic tangent; they let the answer, or output, of one layer become the input of the next, and the output of the final layer is the end result.
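A single perceptron of the kind just described can be sketched in a few lines: a weighted sum of the inputs passed through a tanh activation. The weights and bias below are hypothetical fixed values; a real network would learn them from data.

```python
# A minimal sketch of one perceptron with a non-linear (tanh) activation.
import numpy as np

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, then a non-linear activation
    return np.tanh(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.2, 3.0])       # one input example
w = np.array([0.4, 0.1, -0.6])       # hypothetical learned weights
output = perceptron(x, w, bias=0.2)  # this output would feed the next layer
print(output)
```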

Deep learning: layer upon layer of neurons

Deep learning networks are artificial neural networks with many perceptron layers; the more layers a network has, the greater its depth. These additional layers pose more questions, handle more inputs and produce more outputs, abstracting the data at higher levels.
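In code, 'depth' just means that each layer's output becomes the next layer's input, and adding entries to the layer list makes the network deeper. The layer sizes and random weights in this sketch are hypothetical and untrained.

```python
# A minimal sketch of depth: stacked layers, each feeding the next.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 8, 2]  # more inner entries = a deeper network

# One (weights, bias) pair per transition between consecutive layers
params = [(rng.standard_normal((m, n)), rng.standard_normal(n))
          for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, params):
    for W, b in params:
        x = np.tanh(x @ W + b)  # each layer re-abstracts the previous one
    return x

print(forward(rng.standard_normal(4), params))
```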

Facebook's automatic face recognition technology is driven by a deep learning network that builds richer descriptions of images by combining more layers. 'You might ask, isn't that a face? But if it is a scene-aware deep learning network, it might recognize that the face corresponds to a person named Dave, who happens to be the father in the family scene,' Kobielus said.

There are now neural networks with 1,000 layers, and software developers are still exploring what deeper networks can do. Apple's latest iPhone face detection software relies on a 20-layer convolutional neural network. In 2015, Microsoft researchers won the ImageNet computer vision competition with a 152-layer deep residual network. Thanks to a design that prevents data from being diluted as it passes through the network, said Peter Lee, director of research at Microsoft, the network could gather information from images well beyond the typical depth of 20 or 30 layers. 'We can learn a lot of subtle things,' he said.
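The design Lee describes, a skip connection that carries each block's input forward so the signal is not diluted as depth grows, can be sketched as a generic residual block. This is not Microsoft's actual code; the channel counts and block count are hypothetical.

```python
# A minimal sketch of a residual block: the input is added back to the
# block's output, which is what lets very deep networks keep training.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: input added back

# Stacking many such blocks is how 100+ layer networks are built
net = nn.Sequential(*[ResidualBlock(16) for _ in range(10)])
print(net(torch.randn(1, 16, 32, 32)).shape)
```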

Beyond image processing, new use cases for artificial intelligence and deep learning keep emerging, from law enforcement to genomics. In a study last year, researchers used artificial intelligence to predict the judgments of hundreds of European Court of Human Rights cases; their predictions matched the human judges' final rulings with 79 percent accuracy.

With the ability to reason and plenty of resources, machines can even reach conclusions more accurately than people. Recently, a deep learning algorithm from Stanford researchers proved more adept at diagnosing pneumonia than human radiologists. The CheXNet algorithm uses a 121-layer convolutional neural network trained on a set of over 100,000 chest X-ray images.
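For a sense of what a 121-layer network looks like in practice, here is a sketch of setting one up for a single pneumonia probability, assuming a DenseNet-121 backbone as described in the published CheXNet work. This is not the Stanford team's code; the final-layer swap and sigmoid output are conventional choices assumed here.

```python
# A minimal sketch: a 121-layer convolutional network adapted to output
# one probability (pneumonia vs. not), in the spirit of CheXNet.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121()            # 121-layer convolutional network
model.classifier = nn.Linear(1024, 1)   # replace 1000-class head with 1 output

x = torch.randn(1, 3, 224, 224)         # a dummy chest X-ray-sized input
prob = torch.sigmoid(model(x))          # probability of the pneumonia label
print(prob.item())
```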

Artificial intelligence models keep improving as they learn

This underscores a key issue in deep learning: an algorithm is only as good as the data it is trained on, and the accuracy of its predictions is essentially proportional to the size of the training dataset. The training process also requires expert supervision. Kobielus said: 'You need a team of data scientists and other developers who specialize in statistical modeling, who specialize in acquiring and labeling training data (labeling plays a very important role), and who are good at iteratively developing and deploying a model through DevOps.'
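The dependence on training set size shows up even in a toy experiment. The sketch below trains the same model on growing slices of a synthetic, entirely hypothetical dataset and reports held-out accuracy, which typically rises as more training data is used.

```python
# A minimal sketch: held-out accuracy generally improves with training size.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 3750):  # growing slices of the training data
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))
```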

Labeled data really is crucial to machine learning models, and the human eye is still the best tool for the job: IBM said last year that it had been recruiting large numbers of people just to label data for AI. University of Toronto researchers Parham Aarabi and Wenzhi Guo have explored a way to combine how human brains and neural networks work, developing an algorithm that learns from explicit human instructions rather than only from a series of examples. During training, the trainer might tell the algorithm that the sky is usually blue and sits at the top of the picture; this worked better than traditional neural network training. Kobielus noted that without labeled data, you cannot tell whether an algorithm is working. He also concluded that much of the training will happen in the cloud or other centralized environments, while distributed connected devices such as autonomous vehicles will make decisions on site.
