Reviewing the development history of AI interpreters, and when the Internet of Things meets AI

1. When the Internet of Things meets AI; 2. The development history of AI interpreters; 3. AI: the battle over big data and personal privacy has only just begun; 4. AI inference moves to edge nodes: video and voice applications each hold half the sky; 5. MIT graduates will receive a blockchain diploma this month

1. When the Internet of Things meets AI

According to a recent SADA Systems survey of IT managers at large enterprises, artificial intelligence (AI) and the Internet of Things (IoT) are becoming the most important new-technology investment areas for companies in 2018.

Of the 500 IT professionals surveyed, 38% said AI is the main focus of corporate investment, while 31% named the Internet of Things and 10% named blockchain. IoT-connected devices typically generate huge volumes of data that can be used to train machine learning (ML) models.

Among the companies surveyed, more have actually deployed IoT than AI, because the IoT and edge computing industries are relatively mature at this stage; such a solid data foundation improves the accuracy of machine learning and is a necessary prerequisite for AI.

Figure: Machine image-recognition capability (source: Cambridge University)

Companies also want AI investment to ultimately pay off, rather than remain the kind of open-ended research done by academic organizations. The technology outlet CIO likewise noted that steering AI investment toward proper commercialization is an important task for business leaders.

In addition, the cost and difficulty of operating AI have fallen sharply. Platform-as-a-service (PaaS) offerings not only give companies more data for training models but also improve device interoperability. Moreover, a large number of mature machine learning libraries and APIs have lowered the barrier to entry for AI.
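
To illustrate how low that barrier has become, here is a minimal sketch using scikit-learn; the library choice, dataset, and model are illustrative placeholders, not anything from the survey:

```python
# Minimal sketch: modern ML libraries lower the barrier to entry.
# scikit-learn and its bundled digits dataset are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                   # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)             # a simple baseline classifier
model.fit(X_train, y_train)                           # training is one call
print("test accuracy:", model.score(X_test, y_test))  # evaluation is one call
```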

As new technologies supporting AI and the IoT arrive, organizations must stay focused in their development efforts and keep timelines from sprawling into indefinite development cycles. Retaining talent is increasingly necessary to reach milestones.


SADA also cautions that news coverage of emerging technologies makes companies more willing to invest in them, but security and privacy tend to lag behind technical development, and investment in new technologies inevitably carries hidden security risks. SADA believes that a company that truly wants to be a pioneer in emerging technologies cannot ignore security testing of those technologies.

In a 2018 report from Oxford University, researchers pointed out that vulnerabilities in AI technology can undermine the efficiency of machine learning workflows and even endanger business operations; if the importance of security testing is neglected, the expected benefits may never materialize.

Blockchain has attracted heavy attention from the tech media since last year, but it lags behind the IoT and AI in corporate investment. As more companies share successes with blockchain technology and its practical applications, future investment in this field is expected to rival that in the IoT and AI.

2. The development history of AI interpreters

At the 2018 Boao Forum for Asia, the most striking item apart from the main agenda was that artificial intelligence was used, for the first time, to provide real-time spoken interpretation at the meeting. But instead of confronting the interpretation industry with the threat of imminent unemployment, the AI's seriously flawed translations actually reassured the human interpreters: it seems this profession can still put food on the table for a long time to come.

Chapter 11 of Genesis in the Old Testament records that after the Great Flood receded, the people of the world, all descendants of Noah, spoke the same language. Mankind began to cooperate to build a tower reaching to heaven, the Tower of Babel. This alarmed God, who made the people of the world speak different languages; humans could no longer work together, the plan to build the tower ended in failure, and language differences became the greatest obstacle to human cooperation. Perhaps the dream of rebuilding the Tower of Babel still runs in our blood, for translation has been a key cultural undertaking in mankind's continuous evolution over the past several thousand years.

The language barrier is not so easy to break, in particular because the same concept must be understood across languages. The first cross-language parallel corpus in human history is the Rosetta Stone, made in 196 BC, which records a decree of King Ptolemy of ancient Egypt in hieroglyphs, ancient Greek, and the local vernacular script. It is also a major milestone in translation.

Rule-based machine translation

The origin of machine translation dates to 1949, when information-theory researcher Warren Weaver formally proposed the concept. Five years later, in 1954, IBM and Georgetown University announced the world's first translation machine, the IBM 701, which could translate Russian into English. Despite its enormous size, it had only six grammar rules and 250 words built in. Even so, it was a major technological breakthrough, and at the time humanity began to feel that the wall of language would soon be broken.

Perhaps God noticed something and poured a bucket of cold water on humanity's plan to rebuild the Tower of Babel. In 1964, the U.S. National Academy of Sciences established the Automatic Language Processing Advisory Committee (ALPAC). Two years later, in its report, the committee concluded that machine translation was not worth continued investment; that report caused machine translation research in the United States to come almost to a complete halt for the next decade.

From the birth of IBM's first translation machine through the 1980s, the technical mainstream was rule-based machine translation. The most common approach was to translate word by word according to a dictionary; later, some proposed adding syntax rules to correct the output. But frankly, the results were frustrating, because they read as hopelessly clumsy. By the end of the 1980s, such approaches had largely died out.

Why can't rules capture language? Because language is an extremely complex and ambiguous system; from word-sense ambiguity to rhetoric, it is impossible to enumerate every rule. Interestingly, many recent natural-language startups are still trying to crack Chinese semantics with exhaustive rules, an approach that is bound to end in failure.

Let me give an example of why rules are not feasible. Never mind the complexity of translating between two languages; consider Chinese alone. How many ways can you think of to express the idea that a delivery is fast? Ten? A hundred? According to natural-language statistics we compiled earlier, there are roughly 3,600 ways in total, and the number keeps growing over time. If one simple concept requires so complex a rule system, the number of rules needed for full translation would be an astronomical figure. That is why rule-based machine translation faded into history.

Example-based machine translation

While the rest of the world had fallen into a machine-translation slump, one country remained obsessed with it: Japan. With generally poor English proficiency, the Japanese had a strong, inelastic demand for machine translation.

Professor Makoto Nagao of Kyoto University proposed example-based machine translation: stop trying to make the machine translate from scratch, and instead store enough example sentences. Even when a new sentence does not match an example perfectly, we can compare it against the examples and simply substitute the translations of the differing words. This naive idea was not much better than rule-based machine translation, so it did not make waves. But soon, humanity's hope of rebuilding the Tower of Babel seemed to glimpse the dawn again.

Statistical machine translation

It was IBM, again, that detonated the statistical machine translation boom. A 1993 paper, "The Mathematics of Statistical Machine Translation", proposed five word-based statistical models, known as IBM Model 1 through IBM Model 5.

The idea behind the statistical models is to treat translation as a probability problem: take a parallel corpus and compile word-by-word statistics. For example, although the machine does not know the English for the Chinese word "知識", it will discover, after processing enough of the corpus, that whenever a sentence contains "知識", the word "knowledge" appears in the corresponding English sentence. In this way, even without manually maintained dictionaries and grammar rules, the machine can learn what a word means.
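
To make that word-by-word statistical idea concrete, here is a minimal sketch of co-occurrence counting over a toy parallel corpus; the corpus is invented for illustration, and the real IBM models estimate alignment probabilities far more carefully:

```python
from collections import Counter, defaultdict

# Toy parallel corpus (invented for illustration).
parallel = [
    ("知識 就是 力量", "knowledge is power"),
    ("力量 來自 知識", "power comes from knowledge"),
    ("知識 無 國界", "knowledge has no borders"),
]

co_counts = defaultdict(Counter)   # source word -> counts of co-occurring target words
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            co_counts[s][t] += 1

# "知識" co-occurs with "knowledge" in every sentence pair that contains it,
# so "knowledge" tops its candidate list without any dictionary or grammar rule.
print(co_counts["知識"].most_common(3))
```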

The concept was not new; Warren Weaver had proposed something similar long before, but at the time there were not enough parallel corpora and computing power was too limited, so it was never put into practice. Where does modern statistical machine translation find its "modern Rosetta Stone"? The main source is the United Nations, whose resolutions and announcements are published in the languages of all member states. Beyond that, parallel corpora must be produced by hand, and anyone who knows today's cost of human translation knows how astonishingly expensive that is.

The Google Translate everyone used over the past decade was based on statistical machine translation. Knowing this, it should be clear that statistical translation models cannot accomplish the grand project of rebuilding the tower: in most people's impression, machine translation of that era stayed at "barely usable" rather than "good".

Neural network machine translation

In 2014, machine translation ushered in the most revolutionary change in its history: deep learning!

Neural networks are nothing new; in fact, they have been around for decades. But deep learning took off after Geoffrey Hinton, one of the three giants of the field, fixed fatal shortcomings in neural network optimization in 2006, and miracle-like results have appeared in our lives with increasing frequency ever since: in 2015, machines surpassed humans at image recognition for the first time; in 2016, AlphaGo defeated the world Go champion; in 2017, speech recognition surpassed human stenographers; and in 2018, machine reading comprehension of English surpassed humans for the first time. Machine translation, too, has flourished thanks to the super-fertilizer of deep learning.

In a 2014 paper, deep learning giant Yoshua Bengio laid down the basic architecture of deep learning for machine translation, chiefly using a sequence-based recurrent neural network (RNN) so that the machine could automatically capture the word features of a sentence and render it into another language. Google saw the treasure in this work: soon after, armed with ample ammunition and the blessing of the field's leading minds, Google officially announced in 2016 that all statistical machine translation was being retired, and neural machine translation became the undisputed mainstream of modern machine translation.

The biggest feature of Google's neural machine translation is the addition of an attention mechanism, which imitates how a human translator works: first skim the sentence, then pick out a few key words to pin down the meaning (Figure 2). Sure enough, with attention the system's power grew greatly; Google claims that for English-French, English-Chinese, and English-Spanish, the error rate fell by 60% compared with its statistical machine translation system.
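
For readers who want the mechanics, below is a hedged sketch of generic scaled dot-product attention in NumPy. It is a textbook formulation, not Google's production architecture, and all array shapes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilize before exponentiating
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention: weight source words by relevance to the query."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)      # one relevance score per source position
    weights = softmax(scores)               # turn scores into a probability distribution
    return weights @ values, weights        # context vector = weighted sum of values

# Toy example: a decoder state attending over 5 encoded source words of dimension 8.
rng = np.random.default_rng(0)
keys = values = rng.standard_normal((5, 8))   # encoder outputs double as keys and values
query = rng.standard_normal(8)                # current decoder state
context, weights = attention(query, keys, values)
print("attention weights:", np.round(weights, 3))  # which source words the model "looks at"
```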

Although a neural network can learn the subtle linguistic features of sentences from existing parallel corpora, it is not perfect. Its biggest problems are the large amount of data it requires and its black-box incomprehensibility: when it makes a mistake there is no way to debug it directly, only to feed it more correct corpus in the hope of "correcting" the deep learning. As a result, the same sentence pattern can yield very different translation results.

In March 2018, Microsoft made a new move toward machine language understanding beyond human level: on March 14, researchers at Microsoft Research Asia and the Redmond lab announced that their machine translation system had reached a level comparable to human translation on the Chinese-English portion of the news test set Newstest2017. This was naturally a major victory for neural machine translation, and the system also contained many architectural innovations, most notably the addition of Dual Learning and Deliberation Networks.

Dual learning addresses the problem of limited parallel corpora. Ordinarily, deep learning requires giving the machine the question and the answer together, so that it can keep adjusting itself based on the difference between its translation and the reference; with dual learning, a sentence translated into the other language is translated back again, and the round-trip difference itself provides the feedback signal. The Deliberation Network likewise imitates the human translation process: a human translator usually produces a rough draft first, then polishes it into a precise second version. You will notice that however clever the neural network is, it still has to take its cues from the smartest creatures on the planet: us humans.
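
A minimal sketch of the round-trip idea behind dual learning follows; the two `translate_*` callbacks are hypothetical placeholders standing in for trainable models, not Microsoft's actual system:

```python
# Hedged sketch of dual learning's round-trip signal; translate_zh_en and
# translate_en_zh stand in for two trainable translation models (hypothetical).
def round_trip_loss(sentence_zh, translate_zh_en, translate_en_zh, distance):
    """Translate zh -> en -> zh and score how much meaning survived the round trip.

    No English reference is needed, so monolingual Chinese text becomes a
    training signal, easing the shortage of parallel corpora.
    """
    english = translate_zh_en(sentence_zh)        # forward model: Chinese to English
    back = translate_en_zh(english)               # backward model: English to Chinese
    return distance(sentence_zh, back)            # low loss = consistent round trip

# Toy usage with identity "models" and a crude character-overlap distance.
dist = lambda a, b: 1 - len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
print(round_trip_loss("知識就是力量", lambda s: s, lambda s: s, dist))  # 0.0
```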

Language cannot exist apart from context

The development of machine translation does not mean translators will soon be out of work. Note that Microsoft's announcement carefully emphasized "the Chinese-English portion of the general news test set Newstest2017": strong performance there does not equal general competence. This also explains why Tencent's translation engine Fanyijun, which normally enjoys a good reputation, stumbled so badly with its real-time interpretation at Boao.

Real-time interpretation can be called the pinnacle of translation tasks: besides correctly comprehending the source speech, the interpreter must convert it into another language within a strictly limited time. And since the speaker pauses for no one, speech recognition and machine translation must run simultaneously, while on-site noise, the speaker's delivery, filler words, and the like can all cause the machine to misjudge.

In my view, Tencent's Fanyijun can probably be faulted for insufficient preparation: the key proper nouns were not entered in advance, which produced the now-classic mistake of rendering the "Belt and Road" as "a highway and a belt".

Figure 3 shows an interesting difference: why does the Western machine translation miss the mark while the domestic one stays broadly on target? Because language cannot exist apart from the scenarios in which humans use it; what we call context comes from our shared culture and collective memory. Google, which has never read Tang poetry, naturally cannot grasp the essence of such a poem. Language may be humanity's last barrier in the age of artificial intelligence, because it constantly changes with human use, and that is very hard for a machine to replicate.

With advancing technology, machine translation will one day move from "barely usable" to "usable" and then evolve to "good". But as I have always argued, machines will not rob people of their work; only humans put humans out of work. Learning to use artificial intelligence well as one's own tool, and freeing oneself from boring, tedious work, is the right posture for facing the future.

3. AI: the battle over big data and personal privacy has only just begun

The hottest topic of late is none other than big data, as seen at the 2018 big-data expo held in China. Undeniably, what matters most to big data is data collection, yet the European Union has just enacted the General Data Protection Regulation (GDPR), billed as the most stringent personal data protection law in history, igniting disputes between big data development and personal data protection.

Big data becomes effective only through collection and analysis: the more data there is, the more relevant the correlations that can be extracted, and the more useful the data becomes.

The provenance of the data, however, has become the focus of discussion. Under nationalism, individuals seem obliged to sacrifice their own rights so the state can pursue its policies, and this is most evident in China. China's big data development is rapid, with the government planning many policies, from targeted poverty alleviation on down, as if big data made every problem tractable; yet much of that data is in fact obtained by sacrificing the parties' personal data rights, something impossible in countries that emphasize personal privacy.

Taiwanese businesses, by contrast, are developing around industrial big data. The fastest mover is Hon Hai, whose 40 years of production data is its biggest advantage and one of the areas where Taiwan has a real opportunity in big data.

However, if Taiwan tried to develop big data the way mainland China does, touching on personal privacy, the obstacles would be substantial: even a simple ETC travel record can trigger privacy controversies, to say nothing of big data directly tied to personal privacy. Whether big data succeeds will, I am afraid, depend on a continuing tug-of-war between the public interest and personal privacy.

4. AI inference moves to edge nodes: video and voice applications each hold half the sky

In 2018, the AIoT (AI + IoT) market grew tremendously, driving the development of all kinds of devices and pushing deep learning functionality to shift gradually from the cloud to edge computing, in pursuit of low-latency, low-bandwidth, high-privacy, and high-efficiency AI application experiences.

With the rapid development of artificial intelligence (AI) and edge computing in recent years, the consumer electronics and home appliances that make up the smart home are set for a revolutionary change: an artificial intelligence network composed of home devices may become an invisible additional family member. The concept of a local cloud, and the equipment around it, will be an indispensable element in realizing such a home AI network.

Smart speakers and surveillance will become the two main axes of consumer AI

Ronan de Renesse, who tracks consumer technology at the research firm Ovum (Figure 1), notes that AI in consumer electronics has often been a media focus over the past two years, but the real convergence of consumer electronics and AI is only beginning. In the next three to five years, many consumer electronics products will carry AI functions and link with one another to form an artificial intelligence network in the home.

Figure 1: Ronan de Renesse, consumer technology researcher at Ovum, believes the various electronic devices in the future home will together become an invisible family member.

For the hardware supply chain, this trend will certainly bring many new business opportunities; at a higher level, the AI network quietly moving into the home will become another family member you cannot see.

On the hardware side, the smart speakers everyone knows are basically mature products. Sales will grow significantly over the next five years, though growth will gradually slow; global smart speaker sales are estimated to approach US$9.5 billion by 2022. In fact, Renesse believes Amazon and Google may not keep launching own-brand smart speakers, since the product itself leaves little room for profit: for these home-network giants, as long as hardware vendors adopt their platform services, they can collect the user data they need.

Over the same period, home smart surveillance systems will change more visibly than smart speakers. Today's so-called smart home surveillance products do not actually contain AI; they are cameras, alarms, door locks, sensors, and other hardware linked into an event-triggered security system. But as the related hardware and software mature, a growing share of home surveillance cameras will carry AI and enable more applications, such as working with voice assistants to serve individual users more accurately in multi-user environments.

Consumer privacy protection for AI apps

For the hardware industry, however, the most noteworthy point is that as home devices generally gain AI support, the local cloud concept and related products will take off. Renesse points out that AI-equipped electronics generate large amounts of user data, much of it touching personal privacy; if these home AI devices depended entirely on an external cloud to operate, privacy concerns would be obvious.

On the other hand, many consumer IoT devices with relatively simple functions are constrained by power, computing capability, and production cost, and may be unable to run sophisticated AI algorithms themselves. A local cloud device can then play the role of the brain, directing these devices in a unified way.

Renesse concedes, however, that it is still hard to say which device will become the local cloud hub; it might be a higher-end smart speaker, a smart TV, or some other product.

Ian Smythe, senior marketing director at Arm (Figure 2), also believes more and more computing and inference will move to end devices, driven chiefly by privacy protection. Processing and analyzing data on the device makes it easy to anonymize, ensuring sensitive data is not leaked over the network. In a home setting, for instance, consumers do not want anyone on the internet to be able to learn when no one is home and burgle the house at leisure.

Figure 2: Ian Smythe, senior marketing director at Arm, says that for consumer AI applications, whether the privacy protection mechanism is trustworthy will determine whether the application can gain wide adoption.

For vision applications, Smythe believes cameras that support visual recognition must treat privacy as a first-order design issue: the devices must protect private and sensitive information whether it is stored locally or transmitted to the cloud. Since connectivity is usually wireless, particular attention must be paid to securing the wireless link, and engineers must ensure networked devices cannot be hacked and snooped on.

Battery life remains the main technical challenge

The biggest technical challenge in pushing AI to edge nodes is still system power consumption. Take consumer surveillance cameras: consumers may expect such a product to be completely wireless, preferably without even a power cable. That means battery power plus wireless networking; on top of that, the device is expected to recognize everything it sees and to offer effectively unlimited storage.

These requirements pose a major design challenge: running machine learning (ML) workloads continuously for months on battery power while constantly uploading footage to cloud storage. Such extreme scenarios place demanding requirements on chip design and system components, and the crux is orchestrating exactly when each function is enabled so as to prolong battery life.

For a home surveillance camera, there is no need to stream the room 24 hours a day; it is only reasonable to upload footage when an unrecognized person appears. Likewise, when a scene such as an empty room is unchanged, running the ML algorithm is pointless. Careful scheduling of where and when these features are enabled lets a consumer device run for a long time in its expected mode on just two AA batteries.
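
A hedged sketch of that duty-cycling idea follows; the frame-difference threshold and the three callbacks are illustrative assumptions, not any vendor's actual firmware:

```python
import numpy as np

MOTION_THRESHOLD = 12.0  # mean absolute pixel change that counts as "something moved" (assumed)

def frame_changed(prev, curr, threshold=MOTION_THRESHOLD):
    """Cheap motion gate: only wake the expensive ML path when pixels actually change."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean() > threshold

def camera_loop(capture_frame, run_person_detector, upload_clip):
    """Event-triggered pipeline: idle cheaply, detect rarely, upload only on events.

    capture_frame / run_person_detector / upload_clip are hypothetical callbacks
    standing in for the sensor driver, the on-device NN, and the network stack.
    """
    prev = capture_frame()
    while True:
        curr = capture_frame()
        if frame_changed(prev, curr):          # stage 1: almost-free pixel diff
            if run_person_detector(curr):      # stage 2: costly NN, run only on motion
                upload_clip(curr)              # stage 3: radio on only for real events
        prev = curr
```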

Because power consumption is one of the main obstacles to putting AI into end devices, many startups see an opportunity in low-power neural network (NN) accelerator silicon intellectual property (IP), which helps chip developers cut power consumption while meeting the performance that algorithm inference requires. Kneron has officially released its NPU series, dedicated AI processor IP designed for end devices. The series comprises three products, the ultra-low-power KDP 300, the standard KDP 500, and the high-performance KDP 700, covering the needs of smartphones, smart homes, smart security, and a wide variety of IoT devices. The whole line combines low power consumption and small size with strong computing capability: unlike typical AI processors on the market, the Kneron NPU IP's power consumption is in the 100-milliwatt (mW) class, and the KDP 300, built for smartphone face recognition, consumes less than 5 mW.

Shi Yalun, marketing and applications manager at Kneron (Figure 3, left), points out that running AI on the end device while meeting power and performance requirements is the top priority, so optimized solutions for individual applications are critical. Today's AI applications fall broadly into two categories, voice and vision, and they use different neural network structures: speech applications focus on natural-language analysis, where the mainstream architecture is the recurrent neural network (RNN), while image analysis mainly uses the convolutional neural network (CNN). Kneron therefore provides different solutions optimized for each network structure.
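
To make the structural difference concrete, here is a hedged sketch of the two model families in Keras; the layer sizes and input shapes are arbitrary illustrations, not Kneron's designs:

```python
import tensorflow as tf

# Sequence model for voice/NLP: a recurrent layer consumes tokens in order,
# carrying state across time steps. Vocabulary and sizes are illustrative.
speech_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.SimpleRNN(64),                             # state threads through time
    tf.keras.layers.Dense(10, activation="softmax"),           # e.g. 10 voice commands
])

# Vision model: convolutions slide small filters over the image, sharing
# weights across positions, which suits spatially local patterns.
vision_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),                            # downsample feature maps
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),           # e.g. 10 object classes
])
```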

Shen Mingfeng, software design manager at Kneron (Figure 3, right), adds that although natural-language analysis demands less raw compute from the chip, differences in accent and speaking habits are so large that the datasets needed for model training far exceed those for image recognition. On the other hand, since consumers are already used to cloud-based voice assistants such as Apple's Siri and Google Assistant, offline semantic analysis will win them over only if it delivers a similar experience with limited computing resources, which is a challenge for chip vendors and system developers alike.

Figure 3: Shi Yalun (left), marketing and applications manager at Kneron, believes speech and image recognition differ fundamentally in nature and call for different solutions. At right is Shen Mingfeng, Kneron's software design manager.

In fact, the vast majority of smart speakers are still not edge computing products. Shi Yalun points out that whether it is Amazon's Echo, Apple's HomePod, or the smart speakers on Baidu's and Alibaba's platforms, the data must still be sent back to the cloud for processing and semantic analysis before the device can respond. The voice operations that can run directly on the end product are mostly rule-based rather than true machine-learned natural-language understanding.

Since introducing its first AI processor IP dedicated to end devices in 2016, Kneron has continuously refined its design and specifications and optimized them for different industry applications. Among the IP currently available to customers, the KDP 500 has already been adopted by a system customer and will reach mask tape-out in the second quarter, and a speech-recognition collaboration with Sogou has achieved offline semantic analysis, so the end device can understand the user's voice commands even without a network connection.

Kneron NPU IP is dedicated AI processor IP designed for end devices, allowing them to run deep learning networks such as ResNet and YOLO in an offline environment. It is a complete edge AI hardware solution comprising hardware IP, a compiler, and model compression, and it supports mainstream neural network models such as ResNet-18, ResNet-34, VGG16, GoogLeNet, and LeNet, as well as mainstream deep learning frameworks including Caffe, Keras, and TensorFlow.

Kneron NPU IP's power consumption is in the 100 mW class, with the ultra-low-power KDP 300 drawing less than 5 mW, and the full product line delivers energy efficiency of 1.5 TOPS/W or better. Thanks to a number of proprietary technologies, it meets chip and system vendors' needs for low power consumption and high computing capability.

Targeting basic building blocks, hardware accelerators need not fear technology iteration

Using hardwired circuits to speed up specific computing tasks and cut power consumption has a long history in chip design, but it comes at the price of low flexibility: if the market's functional requirements change significantly, or the software algorithms change drastically, chip designers must develop a new chip.

Where the market's requirements for a chip are largely settled, this approach is unproblematic. But in emerging fields where technology iterates quickly, it carries considerable commercial risk, and artificial intelligence iterates very quickly indeed, with new algorithms and models appearing almost every year. The research institute OpenAI has pointed out that over the past six years, the compute demanded by AI model training has doubled roughly every 3.43 months.

Shen Mingfeng counters that hardware accelerators are not necessarily inflexible. In its architecture, Kneron uses a filter decomposition technique that splits a large convolution block into several smaller convolution blocks computed separately, then applies reconfigurable convolutional acceleration to merge the partial results, speeding up the overall operation.

To use an accessible metaphor, it is like LEGO bricks: all sorts of objects can be assembled, yet each is still a stack of a few basic blocks. Kneron's solution accelerates the basic elements that no AI algorithm can do without, thereby improving the execution performance of the whole algorithm; even as AI algorithms change at high speed, the solution retains its acceleration effect.
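
A hedged numerical sketch of the underlying idea follows. It relies only on the linearity of convolution; the 2x2 split is an illustration, not Kneron's proprietary scheme:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((4, 4))       # one "large" convolution kernel

# Convolution is linear in the kernel, so splitting the kernel into zero-padded
# sub-blocks and summing the partial convolutions reproduces the full result.
full = convolve2d(image, kernel, mode="valid")

partial_sum = np.zeros_like(full)
for i in (0, 2):
    for j in (0, 2):
        sub = np.zeros_like(kernel)
        sub[i:i+2, j:j+2] = kernel[i:i+2, j:j+2]   # one 2x2 piece of the 4x4 kernel
        partial_sum += convolve2d(image, sub, mode="valid")

print(np.allclose(full, partial_sum))  # True: the decomposition is exact
```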

Beyond an accelerator design that targets basic elements rather than whole specific algorithms, Kneron offers other techniques for accelerating or deploying AI applications. Its model compression technology shrinks unoptimized models by dozens of times, and multi-level caching reduces CPU usage and data movement, further improving overall efficiency. In addition, Kneron NPU IP can be combined with Kneron's image-recognition software to provide real-time recognition analysis with more stable response while meeting security and privacy requirements; because the hardware and software are tightly integrated, the overall solution is smaller and consumes less power, helping products reach market quickly.
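
As a generic illustration of what model compression means in practice, here is a toy sketch combining magnitude pruning with 8-bit quantization; Kneron's actual method is proprietary and not described here:

```python
import numpy as np

def compress(weights, prune_ratio=0.9):
    """Toy compression: zero out the smallest weights, then quantize to int8."""
    cutoff = np.quantile(np.abs(weights), prune_ratio)   # keep only the largest 10%
    pruned = np.where(np.abs(weights) >= cutoff, weights, 0.0)

    max_abs = np.abs(pruned).max()
    scale = max_abs / 127 if max_abs > 0 else 1.0        # map range onto signed 8-bit
    q = np.round(pruned / scale).astype(np.int8)         # 4x smaller than float32, mostly zeros
    return q, scale

w = np.random.default_rng(2).standard_normal((256, 256)).astype(np.float32)
q, scale = compress(w)
print("nonzero weights kept:", np.count_nonzero(q) / q.size)
print("mean reconstruction error:", np.abs(w - q.astype(np.float32) * scale).mean())
```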

Image-recognition AI is moving to the edge more urgently

Overall, market demand is currently more urgent for image recognition at the edge. Although offline semantic analysis for smart speakers is a potentially huge market, fewer industry players are betting resources on it. The key reason is that video transmission consumes large amounts of bandwidth, which drives up total system cost of ownership, whereas voice does not have this problem.

Lin Zhiming, general manager of Jingxin Technology (Figure 4), explains that the convergence of artificial intelligence and the Internet of Things will drive adoption of edge computing across a variety of emerging applications, and in this trend flexibility and speed are Taiwanese manufacturers' biggest advantages. For most Taiwanese companies and IC design houses, the edge is the easier entry point into the artificial intelligence market.

Figure 4: Lin Zhiming, general manager of Jingxin Technology, estimates that the IP Cam will be one of the main applications performing AI inference on edge devices.

At the same time, edge computing raises hardware requirements such as memory and data transmission, which increases manufacturing cost. Since image-oriented systems-on-chip (SoCs) are already more complex than those for other applications and have greater cost tolerance, edge computing is expected to be adopted first in image-related applications such as the IP Cam.

Artificial intelligence applications divide into training and inference. The massive computation of deep learning training will, for the foreseeable future, remain in the cloud; the edge's job is to pre-process the collected information, filter out the unimportant parts, and upload only what matters, saving transmission costs. Conversely, deep learning completed in the cloud makes the terminal's recognition smarter: for example, the deep learning of images can first be carried out in the cloud, and once the machine has learned to recognize pedestrians, the IP Cam at the edge need only perform the recognition itself.
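
A hedged sketch of that division of labor, using TensorFlow Lite as one possible deployment path; the tiny model and training data are placeholders, and any edge-conversion toolchain could stand in:

```python
import numpy as np
import tensorflow as tf

# Cloud side: train a (placeholder) pedestrian classifier on ample compute.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # pedestrian / no pedestrian
])
model.compile(optimizer="adam", loss="binary_crossentropy")
x = np.random.rand(64, 32, 32, 1).astype(np.float32)   # stand-in training images
y = np.random.randint(0, 2, (64, 1))                   # stand-in labels
model.fit(x, y, epochs=1, verbose=0)

# Edge side: ship a compact converted model so the IP Cam runs inference alone.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization for the edge
open("pedestrian.tflite", "wb").write(converter.convert())
```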

Moreover, because the IP Cam is widely used in security and community safety, governments and enterprises are relatively willing to invest in it, which is another reason IP Cam applications will develop quickly.

Lin Zhiming shares that many manufacturers are now exploring how to bring artificial intelligence into their own chips and systems; the situation resembles the early days of the Internet of Things, with everyone still figuring out how to use the technology. He estimates manufacturers will launch more actual products around 2020.

Real-time applications must use edge computing architecture

Artificial intelligence is today's hottest topic, and the gradual shift from a cloud computing architecture to an edge computing architecture will significantly affect supply chain vendors. Although AI development in the short term will remain dominated by cloud computing, many AI functions for vision applications are beginning to move to the edge.

Dale K. Hitt, director of market development for Xilinx's visual intelligence strategy (Figure 5), points out that for the foreseeable future the training component of AI development may remain dominated by cloud computing, but the inference and deployment component is already using edge computing to support applications that demand low latency and network efficiency.

Figure 5: Dale K. Hitt, director of market development for Xilinx's visual intelligence strategy, believes edge computing will be the best solution for applications that require extremely low latency.

Machine learning for vision-related applications will be one of the key, far-reaching trends in edge computing, with strong growth potential in industrial machine vision, smart cities, visual analytics, and the self-driving market. For industrial vision and consumer applications, because the edge must execute machine learning algorithms, performance requirements are far higher than for previous-generation solutions. Moreover, edge ML algorithms and functions are evolving rapidly, so self-adaptive hardware is needed to stay optimized for future ML inference architectures.

Hitt takes self-driving cars as an example: behind every sensor in the car sits a sophisticated algorithm that turns raw sensor data into perceptual interpretations. The latest trend is to generate those interpretations with deep learning algorithms, but the algorithms must be trained on an enormous number of potential situations to learn to read every kind of sensor data they might encounter.

After training, the deep learning algorithms demand extremely high computational efficiency and ultra-low latency in order to control the vehicle safely, and electric vehicles additionally require low power consumption to limit operating temperature and preserve battery range. The goal is therefore high-efficiency, low-power, adaptable solutions that meet the various needs of self-driving edge AI.

The biggest challenge in edge computing development is that market demand changes too quickly, so technology that can adapt rapidly to change is vital for a company to stay competitive.

Hitt further explains that deep learning algorithms advance so rapidly that many of 2017's leading solutions already face obsolescence. Even hardware that outperforms its rivals today must keep being optimized as computing demands climb; it must be refreshed faster to avoid elimination, in some cases even updated mid-production, and some alternative technologies would require a product recall to update the chip.

Hitt adds that FPGAs' unique advantage lies in deep hardware optimization spanning compute, memory architecture, and interconnect; after optimization they can deliver higher performance at lower power than CPUs and GPUs, whose fixed hardware architectures cannot be quickly optimized for newly derived requirements.

Edge computing is the overwhelming trend

AI applications running in cloud data centers enjoy enormous computing power, and their recognition accuracy is generally higher than inference with simplified models on edge devices. But once privacy concerns, real-time response, and connectivity costs are factored in, inferring directly on the edge device remains an attractive option. Moreover, the end-device market is far larger than the cloud data center market, creating strong economic incentives; this is why the AIoT slogan has been shouted sky-high over the past year and the major semiconductor companies are actively positioning themselves.

Looking ahead, AI applications fully supported by the cloud will remain on the market, but their share will shrink year by year, replaced by a new architecture combining cloud and edge computing. For AI application developers, the cloud's irreplaceable value lies in model training, not inference. For the same reason, whether a solution provider can give developers seamless integration between cloud and edge will be the most important consideration when developers evaluate suppliers. (New Electronics)

5. MIT graduates will receive a blockchain diploma this month

Sina Technology News, Beijing time, June 3 afternoon: blockchain technology now lets MIT graduates manage their academic credentials digitally.

Learning Machine, a Cambridge, Massachusetts software company, worked with the MIT Media Lab and the Registrar's Office so that students can opt to download a blockchain wallet and securely store and share their diplomas.

According to the MIT Technology Review, after the pilot project's initial success, MIT decided to start offering blockchain wallet services to all new graduates this month.

"I don't believe that a central authority should digitally control everyone's learning records," said Philipp Schmidt, director of learning innovation at the Media Lab.

The purpose of the blockchain diploma is to let students obtain their academic credentials promptly and reliably, so that potential employers need not phone the school to confirm a diploma's authenticity.
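
A heavily simplified sketch of how such verification can work in principle follows; this is generic hash anchoring, not the actual Blockcerts protocol or MIT's implementation, and `fetch_anchored_hash` is a hypothetical lookup:

```python
import hashlib, json

def certificate_digest(cert: dict) -> str:
    """Canonicalize the credential and hash it; the issuer anchors this digest on-chain."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(cert: dict, fetch_anchored_hash) -> bool:
    """Employer-side check: recompute the digest and compare with the on-chain anchor.

    fetch_anchored_hash is a hypothetical function that reads the digest the
    school committed to the blockchain; no phone call to the registrar needed.
    """
    return certificate_digest(cert) == fetch_anchored_hash(cert["id"])

# Toy usage with an in-memory "chain".
diploma = {"id": "cert-001", "name": "A. Student", "degree": "SB", "issuer": "MIT"}
chain = {"cert-001": certificate_digest(diploma)}      # what issuance would record
print(verify(diploma, chain.get))                      # True
print(verify({**diploma, "degree": "PhD"}, chain.get)) # False: tampering detected
```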

New graduates who want a digital diploma need only download an app.

"Before graduation, MIT sends the student an invitation email that says: hey, download the Blockcerts Wallet, accept the password, and add MIT as an issuer," says Chris Jagers, CEO of Learning Machine. "When MIT issues a diploma, students receive an email with a digital file that they can import directly into the application."
