'Trend' Next-generation memory technology roundup: Four technologies with the greatest potential

1. Next-generation memory technology roundup: four technologies with the greatest potential; 2. New demand fuels strong memory market momentum; 3. Immervision and Socionext collaborate to provide the panomorphEYE development kit; 4. Personalized AI with lower power consumption: IBM develops new neural network chips

1. Next-generation memory technology roundup: Four technologies with the greatest potential

With the rise of mobile devices and Internet of Things applications, demand for energy-efficient data storage and memory technology keeps growing. DRAM and NAND flash are the dominant memory technologies today, but DRAM cannot retain data without power, while NAND flash can retain data but reads and writes far more slowly.

At the same time, next-generation memories that combine computation and storage, such as magnetoresistive RAM (MRAM), resistive RAM (RRAM), 3D XPoint, and the highly promising spin-transfer torque MRAM (STT-MRAM), have become the new darlings of memory technology.

MRAM theoretically allows access speeds beyond DRAM, approaching SRAM, and it does not lose data when power is cut. Developed early on by Everspin, it has long been regarded as an important contender for next-generation memory. 2017 was a breakout year for MRAM: at the international symposium on VLSI technology, systems and applications held in Japan, GlobalFoundries, together with Everspin, presented an embedded MRAM (eMRAM) technology on a 22-nanometer process that resists thermal demagnetization, retaining data at 150 degrees Celsius for as long as several years; it was expected to enter production at the end of 2017 or in 2018.

TSMC, which once invested in memory R&D but withdrew from the memory market because of high costs, revealed at its 2017 technology forum that it has 22nm embedded magnetoresistive memory (eMRAM) technology in trial production.

RRAM consumes less power than NAND and writes data up to 10,000 times faster than NAND flash. The main players researching it are Micron, Sony, and Samsung.

TSMC has also announced that it has 22nm eRRAM technology in production. The main developers of 3D XPoint technology are Intel and Micron; it uses a three-dimensional structure of stacked circuit layers and represents 0 and 1 with the resistance of gridded wires, a principle similar to RRAM.

3D XPoint is a strong replacement for storage devices, running nearly 1,000 times faster than NAND flash, and it can also serve computational applications with modest instruction-cycle requirements.

STT-MRAM applies spin-transfer torque (quantum spin angular momentum) technology, combining high performance with power consumption that rivals DRAM and SRAM, and it is compatible with existing CMOS manufacturing technologies and processes.

At present, the main vendors investing in it are IBM, Samsung, SK Hynix, and Toshiba; among them, IBM and Samsung have published an IEEE research paper demonstrating 10-nanosecond switching speeds and an ultra-low-power architecture.

Although next-generation memory is expected to take over part of the DRAM and NAND flash market, and may eventually displace the older technologies altogether, I believe that with artificial intelligence, IoT devices, and ever-growing data collection and sensing requirements, next-generation memory will first focus on the needs of new applications, such as the embedded memory targeted by TSMC, making full use of its combined computing and storage advantages and shrinking further in size to achieve higher market penetration.

Judging from vendor activity, however, 22nm eMRAM technology will gradually mature after 2018 and begin to see large-scale market adoption.

(The author is a researcher at the Science and Technology Policy Research and Information Center, National Applied Research Laboratories.)

2. New demand fuels strong memory market momentum

Memory plays an indispensable role in modern electronics. In 2017 the semiconductor industry's output value exceeded US$400 billion for the first time, in part because surging memory demand allowed manufacturers to raise prices; memory revenue grew roughly 50 percent in 2017, and South Korea's Samsung, the largest memory supplier, took the largest profit. This memory boom is expected to continue on the back of emerging needs: the Internet of Things, wearable devices, cloud storage, and large-scale data computing will all drive memory market momentum.

By storage characteristics, memory can be divided into volatile and non-volatile types. Volatile memory cannot retain data once power is cut; it costs more but is fast, and is usually used for temporary data storage. Non-volatile memory is slower to access but can retain data for long periods.

Static random access memory (SRAM) and dynamic random access memory (DRAM) are volatile memories widely used for temporary data storage in computer systems and electronic products. DRAM is currently used mainly in PCs/notebooks and mobile applications, and its use in virtualization, graphics, and other complex real-time workloads will also grow year by year. Since the 1980s, DRAM has been manufactured by more than 20 companies worldwide; today only Samsung, SK Hynix, and Micron remain, forming an oligopoly. Its applications have expanded from PCs and consumer electronics (such as the iPod) to mobile phones, tablets, wearable devices, smart cars, and driverless cars, all of which continue to drive DRAM demand.

Read-only memory (ROM) and rewritable storage such as traditional mechanical hard disks (HDDs), solid-state drives (SSDs), and flash memory differ in read/write characteristics, but all can retain data for long periods after power is cut. Because flash operates faster than a conventional hard disk, it has gradually become mainstream.

Flash memory, like ROM, can be divided into parallel (NOR) and serial (NAND) architectures. NOR flash is common in motherboard BIOS chips, while NAND flash is common in consumer electronics such as mobile phones, flash drives, and SSDs. As process technology has advanced, NAND flash's cost per unit of capacity has kept falling, and it is widely used in smartphones, embedded devices, and industrial control. In recent years, big-data storage and the growing number of notebook computers have pushed up demand for SSDs, and NAND-based SSDs are gradually replacing conventional hard drives. The main manufacturers are Samsung, Toshiba, and SK Hynix.

DRAM and NAND flash are complementary in characteristics and cost: the former offers high transmission bandwidth per second but higher unit cost and power consumption; the latter is slower but cheaper per unit and consumes less power. The two therefore serve separate markets and functions, and together constitute the two major camps of today's memory products. In response to the explosive growth of the Internet of Things, big data, and cloud data, memory, whether standalone or embedded, will be a key component of system architectures.

Looking ahead to 2020, the global memory market is forecast to reach US$79.51 billion, with DRAM accounting for 38.9%, NAND flash 55.1%, and next-generation memories climbing to 2.0%.

However, as mainstream DRAM and NAND run into scaling bottlenecks, finding alternative solutions or new circuit designs to meet future data storage needs is the memory industry's most pressing issue.

The three major metrics for developing next-generation memory are cost, device performance, and scalability/density. Cost covers memory dies, modules, and control circuitry; device performance covers latency, reliability, and data retention.

Next-generation memories generally move away from the charge-storage approach of the past, revising the storage-state mechanism to overcome process limitations; low power consumption is a common goal for next-generation memories and components alike. Taiwan has the world's most advanced processes and excellent device and circuit R&D talent, putting it in a very advantageous position in memory R&D. It should seize these advantages, strengthen the electronics industry ecosystem, avoid being held hostage to foreign monopolies in the memory market, and maintain the competitiveness of Taiwan's industries in the global market. (The author is a researcher at the Science and Technology Policy Research and Information Center, National Applied Research Laboratories.)

STPI Introduction

The Science and Technology Policy Research and Information Center (STPI) of the National Applied Research Laboratories was established in 1974. It has long been responsible for collecting, building, analyzing, processing, and providing the data needed for Taiwan's science and technology development. Building on these resources, it strengthens trend analysis, key-issue studies, patent intelligence analysis, and innovation and entrepreneurship promotion, assists the government in mapping the vision and strategy for science and technology development, and is working toward becoming a professional science and technology policy think tank.

3. Immervision and Socionext collaborate to provide the panomorphEYE development kit

MONTREAL--(BUSINESS WIRE)--Immervision and Socionext today announced a strategic partnership to jointly develop the first comprehensive intelligent vision sensor system for artificial intelligence (AI) and machine learning (ML) applications in robots, automobiles, drones, and other smart devices. The panomorphEYE development kit is scheduled for release in July 2018.

The panomorphEYE incorporates stereoscopic 3D, 360-degree surround, time-of-flight (TOF), gyroscope, compass, and other sensors that are invaluable for rapid prototyping, reducing time-to-market, and enhancing product capabilities.

Immervision brings human-like perception to devices worldwide, enabling intelligent vision through its unique panoramic wide-angle image capture, Data-In-Picture, and image processing capabilities, enhancing the ability to perceive the surrounding environment.

Socionext provides unparalleled system-on-chip (SoC) expertise, delivering imaging, networking, and computing capabilities for today's leading devices. Together, the two companies pledge to provide superior visual capability, autonomy, and intelligence to help today's devices see more and see more intelligently.

Alessandro Gasparini, Immervision's Executive Vice President and Chief Commercial Officer, explained: 'Through the cooperation between our two companies, everyone will be able to use intelligent vision systems developed for smart devices and machines. The unique combination of intelligent vision, advanced SoC design, and onboard sensors allows anyone to imagine smarter products.'

Mitsugu Naito, Socionext's senior vice president, said: 'The successful cooperation between Immervision and Socionext has long provided cutting-edge solutions for a variety of camera and image processing applications. Today I am pleased to announce that the strategic partnership between our two companies has entered a new phase. We will further enhance human and machine vision capabilities and help people live more efficient and safer lives.' (Business Wire)

4. Personalized AI with lower power consumption: IBM develops new neural network chips

(NetEase Smart News, June 16) Neural networks running on GPUs have driven some astonishing progress in artificial intelligence, but the pairing is not a perfect one. IBM researchers hope to design a new kind of chip purpose-built to run neural networks, providing a faster and more efficient alternative.

It was not until the beginning of this century that researchers realized that GPUs (Graphics Processing Units) designed for video games could be used as hardware accelerators to run larger neural networks than before.

This is because these chips can perform a huge number of calculations in parallel rather than processing them sequentially like a traditional CPU, which is especially useful for simultaneously computing the weighted sums of the hundreds of neurons that make up a deep learning network.
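
As a rough sketch (not IBM's code, just NumPy with made-up layer sizes), the forward pass of one layer is a single matrix-vector product that parallel hardware can evaluate for all neurons at once, whereas a CPU-style loop would handle the neurons one by one:

```python
import numpy as np

# A small illustration of why parallel hardware suits neural networks: the
# forward pass of a layer with hundreds of neurons is one big matrix
# operation that can be computed all at once instead of neuron by neuron.

rng = np.random.default_rng(1)
inputs = rng.standard_normal(512)          # activations from the previous layer
weights = rng.standard_normal((512, 300))  # 300 neurons, 512 weights each

# Sequential view: compute each neuron's weighted sum one at a time.
sequential = np.array([inputs @ weights[:, j] for j in range(weights.shape[1])])

# Parallel view: one matrix-vector product covers every neuron simultaneously,
# the kind of operation a GPU spreads across thousands of cores.
parallel = inputs @ weights

print(np.allclose(sequential, parallel))  # True: same result, very different cost
```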

The introduction of GPUs allowed the field to advance, but these chips still separate processing from storage, which means a great deal of time and energy is spent shuttling data between the two. This has prompted research into new storage technologies that can store and process weight data in the same location, increasing speed and energy efficiency.

The new memory devices store data in analog form by adjusting their resistance levels; that is, data is stored over a continuous range rather than as the binary 1s and 0s of conventional memory. Because the information is stored in the conductance of the memory cells, computation can be carried out physically, simply by applying voltages across the memory cells and letting the system do the work.
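
A minimal sketch of that idea, assuming an idealized crossbar with no device non-idealities: weights sit in a conductance matrix, inputs arrive as voltages, and Ohm's and Kirchhoff's laws make the summed column currents equal a matrix-vector product.

```python
import numpy as np

# Idealized analog in-memory computation (illustrative only).
# Weights are encoded as conductances G (siemens), inputs as voltages V (volts).
# By Ohm's law each cell passes current I = G * V, and Kirchhoff's current law
# sums the currents along each column, so the column currents equal V @ G --
# a matrix-vector multiply performed by the physics of the array itself.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_neurons))  # conductance matrix (weights)
V = rng.uniform(0.0, 0.2, size=n_inputs)                 # input voltages (activations)

# Digital reference: ordinary matrix-vector product.
reference = V @ G

# "Analog" computation: per-cell currents, then summed per column (bit line).
cell_currents = G * V[:, None]          # I_ij = G_ij * V_i for every cell
column_currents = cell_currents.sum(0)  # Kirchhoff sum along each column

print(np.allclose(reference, column_currents))  # True: physics == linear algebra
```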

However, the inherent physical imperfections of these devices make their behavior inconsistent, which is why neural networks trained on them have so far achieved significantly lower classification accuracy than networks trained on GPUs.

"We can train on a faster system than GPU, but if the training is not so accurate, that would be useless," said Stefano Ambrogio, a postdoctoral researcher at IBM Research who led the project, in an interview with Singularity Hub. So far, there is no evidence that using these new devices can be as accurate as using a GPU.

But the research has now progressed. In a paper published last week in Nature, Ambrogio and his colleagues describe how they combined new analog memories with more traditional electronic components to create a chip that matches GPU accuracy while running faster and consuming less power.

The reason these new storage technologies struggle to train deep neural networks is that training requires nudging each weight up and down thousands of times until the network converges. Changing the resistance of these devices means rearranging their atomic structure, and the process differs slightly each time, Ambrogio said. The stimuli are not always identical, which leads to imprecise adjustment of the neurons' weights.
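
As a toy illustration of that problem, using assumed (not measured) noise and asymmetry values: applying the same stream of small updates both ideally and through an imperfect device write shows how quickly the device weight drifts from where it should be.

```python
import numpy as np

# Toy model of imprecise device writes: the same sequence of small weight
# updates is applied ideally and through a noisy, slightly asymmetric write.
# The noise and asymmetry numbers are assumptions chosen only to show the effect.

rng = np.random.default_rng(7)
updates = rng.normal(0.0, 0.01, size=5000)   # thousands of tiny up/down nudges

ideal, device = 0.0, 0.0
for du in updates:
    ideal += du
    # Device write: each programming pulse lands a bit off target, and upward
    # changes are assumed slightly stronger than downward ones (asymmetry).
    gain = 1.1 if du > 0 else 0.9
    device += gain * du + rng.normal(0.0, 0.002)

print(round(ideal, 3), round(device, 3))  # the device weight drifts noticeably
```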

The researchers solved this problem by creating 'synaptic units', each corresponding to a single synaptic weight in the network and combining long-term and short-term memory. Each unit consists of a pair of phase-change memory (PCM) cells plus a combination of three transistors and a capacitor: the PCM stores the weight as a resistance, while the capacitor stores the weight as a charge.

PCM is a 'non-volatile memory', meaning it retains stored information even without external power, while the capacitor is 'volatile' and can hold its charge for only a few milliseconds. But the capacitor does not suffer from the variability of the PCM devices, so it can be programmed quickly and accurately.

When the network is trained on images for a classification task, only the capacitor weights are updated. After a few thousand images have been seen, the weights are transferred to the PCM units for long-term storage. PCM variability means the transfer may still introduce errors, but because the units are updated only occasionally, the conductance can be checked and corrected without adding much complexity to the system; doing this while training directly on the PCM units would not be feasible, Ambrogio said.
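
The hybrid update scheme can be sketched in a few lines; this is a simplified toy model with assumed noise levels, not the circuit from the Nature paper. Frequent, precise updates accumulate on the volatile 'capacitor' part of the weight, and only occasionally is that accumulated value folded into the noisy, non-volatile 'PCM' part, so the imprecise write happens rarely rather than on every training step.

```python
import numpy as np

rng = np.random.default_rng(42)

class SynapticUnit:
    """Toy model of one hybrid weight: a precise, volatile 'capacitor' part
    plus a noisy, non-volatile 'PCM' part. Noise levels are assumed values
    chosen only to illustrate the idea."""

    PCM_WRITE_NOISE = 0.05   # assumed: large error when (re)programming the PCM
    CAP_WRITE_NOISE = 0.001  # assumed: small error for capacitor updates

    def __init__(self):
        self.pcm = 0.0   # long-term component (non-volatile, imprecise writes)
        self.cap = 0.0   # short-term component (volatile, precise writes)

    def weight(self):
        return self.pcm + self.cap

    def apply_gradient(self, delta):
        # Frequent training updates go to the capacitor only.
        self.cap += delta + rng.normal(0.0, self.CAP_WRITE_NOISE)

    def transfer(self):
        # Occasional transfer: fold the accumulated capacitor charge into the
        # PCM conductance. The write is noisy, but it happens rarely, and later
        # gradient steps (or a read-back check) can correct the residual error.
        self.pcm += self.cap + rng.normal(0.0, self.PCM_WRITE_NOISE)
        self.cap = 0.0   # capacitor is volatile; start accumulating again

unit = SynapticUnit()
target = 1.0  # weight value the training "wants"

for step in range(1, 5001):
    grad = 0.01 * (target - unit.weight())  # toy gradient pulling toward the target
    unit.apply_gradient(grad)
    if step % 1000 == 0:                    # transfer every few thousand updates
        unit.transfer()

print(round(unit.weight(), 3))  # ends up close to 1.0 despite noisy PCM writes
```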

To test their design, the researchers ran a series of popular image-recognition benchmarks on their network and achieved accuracy comparable to Google's leading neural-network software, TensorFlow. More importantly, they predict that the chip they eventually build will be 280 times more energy-efficient than a GPU and deliver 100 times the computing power per square millimeter of a CPU. It is worth noting that the researchers have not yet fully built the chip.

Although real PCM units were used in the tests, the other components were simulated on a computer. Ambrogio said they wanted to check whether the approach was feasible before investing the time and effort in building a complete chip. They decided to use real PCM devices because simulations of those devices are not yet reliable, whereas simulation techniques for the other components are mature, and they are confident a complete chip can be built on this design.

So far the design can only compete with GPUs on fully connected neural networks, in which every neuron is connected to every neuron in the layer above, Ambrogio said. In practice, many neural networks are not fully connected, or only some of their layers are.

However, Ambrogio said the final chip would be designed to work alongside GPUs, handling the fully connected layers while other kinds of connections are processed elsewhere. He also believes this more efficient way of handling fully connected layers could see much wider use.

What would such a dedicated chip make possible?

Ambrogio said there are two main applications: first, bringing artificial intelligence to personal devices, and second, making data centers more efficient. The latter is a major concern for large technology companies, whose servers consume enormous amounts of electricity.

Running artificial intelligence directly on personal devices would protect users' privacy, since their data would not have to be shared in the cloud, but Ambrogio said the more exciting prospect is the personalization of artificial intelligence.

He said: 'With this neural network applied to your car or smartphone, they can continue to learn from your experience.'

'Your mobile phone will be personalized to your voice, and your car will develop a driving style shaped by your habits.'
