1. VAR and smart chips debut at the World Cup; the French team is the biggest winner
Beijing, June 16 – At this FIFA World Cup, FIFA has introduced an important reform: the first use of VAR (the video assistant referee) and a smart chip embedded in the match ball. On the 16th, the third day of the tournament, the French team became a beneficiary of the new technology in its duel with Australia, winning the first penalty awarded with its help.
After consulting the video assistant referee, referee Cunha awarded France a penalty. Griezmann converted it with ease, putting France ahead thanks to the VAR reversal. It was also the first penalty kick to result from a VAR intervention.
The penalty triggered plenty of controversy. At the post-match press conference, France coach Deschamps said that, like the penalty Portugal was awarded against Spain the previous day, the call was the referee's own decision. Australia's Dutch coach Van Marwijk considered it a misjudgment: 'Although I haven't watched the replay, I was in a good position on the touchline and saw it clearly. It shouldn't have been a penalty. The referee was even closer, and he initially ruled no penalty and let play continue. We are the victims of VAR, but we will not challenge this decision.'
France's gains from VAR and the smart chip at this World Cup continue its past good fortune with new rules. In 1998, FIFA officially introduced the golden goal ('sudden death') rule at the World Cup, later supplemented by the silver goal. In that tournament's round-of-16 match between France and Paraguay, French defender Blanc scored the first golden goal in World Cup history to send France through. Carried by that precious goal, France went all the way and eventually lifted the World Cup trophy on home soil. Because the rule was too dependent on chance and too cruel, FIFA abolished the golden goal and silver goal systems in 2004, after two World Cups under the rule.
To promote the development of football, protect players and keep officiating consistent, FIFA has kept amending its rules, though every change pleases some and upsets others. In 1970, red and yellow cards were used for the first time at the 9th World Cup in Mexico, where the first yellow card went to a player from the Soviet Union. In 1974, at the 10th World Cup in West Germany, Turkish referee Babacan showed the first red card in World Cup history to Chilean player Caszely in the match between Chile and West Germany. (End)
2. GPUs and FPGAs have become the 'right-hand men' powering machine learning
(NetEase Smart News, June 17) In commercial software, the computer chip has largely faded from view; for business applications it is a commodity. Robotics and personal hardware devices are tied much more closely to their chips, so manufacturing applications still pay more attention to the hardware.
In artificial intelligence (AI) as a whole, and in deep learning (DL) in particular, the relationship between hardware and software has become more tightly linked than at any time since the 1970s. My recent articles on managing AI have dealt with overfitting and bias, two of the main risks in machine learning (ML) systems. This column looks more deeply at the hardware behind two acronyms that many managers, especially line-of-business managers, keep hearing in connection with machine learning systems: graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).
It helps to understand the value of the GPU, which accelerates the tensor processing that deep learning applications require. The interest in FPGAs lies in finding ways to take research on new AI algorithms, train those systems, and begin deploying the many low-volume custom systems now being explored in industrial AI applications. Although there is discussion of FPGAs' capabilities for training, I believe their early use comes down to the 'F': the field.
For example, training an inference engine (the core of a machine learning 'machine') may require gigabytes or even terabytes of data. When running inference in the data center, the computer must handle a potentially growing number of concurrent user requests. In edge applications, whether in drones used to inspect pipelines or in smartphones, the device must be small yet still effective and adaptable. Put simply, a CPU and a GPU are two separate devices, whereas a single FPGA can have different blocks doing different things, potentially providing a robust system on one chip. Given all these different requirements, it is worth understanding the current state of the system architectures that can support them.
Two main types of chip design drive today's ML systems: GPUs and FPGAs. In the medium-term future (at least a few years out), there are also hints of new technologies that could be game changers. Let's take a look.
Graphics Processing Unit (GPU)
The biggest chip story in the machine learning world is the graphics processing unit (GPU). This is a chip aimed mainly at computer games, so how did something meant to make images look better on a monitor become important to machine learning? To understand that, we have to go back to the software layer.
The current champion of machine learning is the deep learning (DL) system. DL systems are built on a variety of algorithms, including deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs) and many other variants. The key word in those three terms is 'network'. The algorithms are variations on a theme, and the theme is a number of layers of nodes with different kinds of communication between nodes and between layers.
What you are working with is a collection of arrays, or matrices. A more general term for these is the tensor, which is why the word shows up throughout the machine learning industry, as in TensorFlow.
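As a rough sketch (using NumPy purely for illustration; the article names no library beyond TensorFlow), a single 'layer' of such a network boils down to one tensor operation, a matrix multiply plus a bias and an elementwise non-linearity:

```python
import numpy as np

# Toy dense layer: outputs = activation(inputs @ weights + bias).
# The shapes below are arbitrary illustrative choices, not from the article.
rng = np.random.default_rng(0)

batch = rng.standard_normal((32, 64))    # 32 samples, 64 features each
weights = rng.standard_normal((64, 16))  # a layer with 16 output nodes
bias = np.zeros(16)

# The whole layer is one tensor operation, followed by a ReLU activation.
outputs = np.maximum(batch @ weights + bias, 0.0)
print(outputs.shape)  # (32, 16)
```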
Now go back to your computer screen. You can think of it as a matrix of pixels arranged in rows and columns: a two-dimensional matrix, or tensor. Add color by giving each pixel more bits, then think about producing rapidly changing, consistent images, and the calculations quickly become complex enough to eat up cycles on a step-by-step CPU. A GPU has its own memory and can hold an entire graphic image as a matrix, then manipulate it with tensor math. A change in the image then alters only the affected pixels on the screen, which is much faster than redrawing the whole screen every time the image changes.
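To make that concrete, here is a minimal sketch (again NumPy, chosen only for exposition) of holding a frame as a pixel matrix and touching only the region that actually changed:

```python
import numpy as np

# Treat the screen as a rows-by-columns matrix of RGB pixels (a 3-D tensor).
height, width = 1080, 1920
screen = np.zeros((height, width, 3), dtype=np.uint8)  # an all-black frame

# Instead of redrawing the whole frame, update only the affected region:
# here a 100x100 patch is turned red.
screen[200:300, 400:500] = (255, 0, 0)

# Only 100*100 of the roughly 2 million pixels were touched.
print(screen[250, 450], screen[0, 0])  # [255 0 0] [0 0 0]
```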
In 1993, NVIDIA set out to create a chip to solve the matrix problems that general-purpose processors such as CPUs could not handle well. That was the birth of the GPU.
Matrix operations do not care what the end product is; they just process the elements. That is a slight oversimplification, because different operations work differently on sparse matrices (those with many zeros) than on dense ones, but the fact remains that the content does not change the operations. When deep learning theorists saw how GPUs were developing, they quickly adopted them to speed up tensor operations.
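For concreteness, a small sketch of the sparse-versus-dense point (SciPy and NumPy are my choice here, not the article's): the operation is identical either way, but the sparse format stores and works on only the non-zero elements.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)

# A mostly-zero ("sparse") matrix: roughly 95% of entries are zeroed out.
dense = rng.standard_normal((1000, 1000))
dense[rng.random((1000, 1000)) < 0.95] = 0.0
vec = rng.standard_normal(1000)

# CSR storage keeps only the non-zero elements and their positions.
csr = sparse.csr_matrix(dense)

# The matrix-vector product is the same operation on the same content;
# only the work done under the hood differs.
assert np.allclose(dense @ vec, csr @ vec)
print(f"stored non-zeros: {csr.nnz} of {dense.size} entries")
```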
GPUs have been critical to the growth of machine learning, driving both training and inference in the data center. For example, NVIDIA's Volta V100, with its Tensor Cores, keeps getting faster through both its base architecture and its ability to run inference at lower precision (a topic for another time; fewer bits mean faster processing). However, there are other issues to consider when it comes to the Internet of Things.
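To show what 'fewer bits means faster processing' looks like in practice, here is a minimal NumPy sketch of my own (not NVIDIA's API) comparing one inference step at 32-bit and 16-bit precision; hardware such as the V100's Tensor Cores exploits exactly this kind of reduced precision.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.standard_normal((256, 256)).astype(np.float32)
inputs = rng.standard_normal((1, 256)).astype(np.float32)

# The same inference step at full (32-bit) and half (16-bit) precision.
full = inputs @ weights
half = (inputs.astype(np.float16) @ weights.astype(np.float16)).astype(np.float32)

# Halving the bits halves memory traffic; the results stay close enough
# for many inference workloads.
print("max abs difference:", np.max(np.abs(full - half)))
print("bytes per weight:", weights.dtype.itemsize, "->", np.float16().itemsize)
```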
Field Programmable Gate Array (FPGA)
Out in the field, applications of all types have different requirements, and there are many different application areas: vehicles, pipelines, robots and so on. Industry could design a different chip for each type of application, but that can be very expensive and can hurt a company's return on investment. It can also delay time to market and cause important business opportunities to be missed. This is especially true for highly customized needs that do not offer enough economies of scale.
The FPGA is the chip that helps companies and researchers get around that problem. An FPGA is an integrated circuit that can be programmed for many different purposes. It contains an array of 'programmable logic blocks' and a way to program the connections between the blocks, making it a general-purpose tool that can be customized for a variety of uses. Major suppliers include Xilinx and National Instruments.
It is worth noting that the lower cost of chip design does not make the FPGA itself a low-cost option. FPGAs are usually best suited to research or industrial applications: the circuit and design complexity that makes them programmable also makes them unsuitable for low-cost consumer applications.
Because an FPGA can be reprogrammed, it is valuable for the still-emerging field of machine learning: new algorithms can be added, and different algorithms fine-tuned, by reprogramming the blocks. In addition, low-precision, low-power inference on FPGAs is a good match for remote sensors. Although the inventors meant 'field' more in the sense of 'at the customer', the real advantage of FPGAs for AI applications is out in the real world. Whether for factories, for infrastructure such as roads and pipelines, or for remote inspection by drones, FPGAs give system designers the flexibility to use a single piece of hardware for multiple purposes, allowing simpler physical designs that can be deployed in the field more easily.
New architectures are coming soon
GPUs and FPGAs are the technologies currently helping to meet the challenge of expanding the impact of machine learning across many markets. In doing so, they are also drawing more people's attention to the industry and spurring attempts to create new architectures in time for them to be adopted.
On one side, many companies are trying to apply the lessons of tensor computing learned from the GPU. Hewlett Packard, IBM and Intel all have projects to develop next-generation tensor-computing devices dedicated to deep learning. Meanwhile, startups such as Cambricon, Graphcore and Wave Computing are striving to do the same thing.
On the other side, Arm, Intel and others are designing architectures that make full use of both the CPU and the GPU. These are also aimed at devices in the machine learning market and are claimed to do more than just tensor calculations, being more powerful at the other processing that surrounds the core AI workload.
Although some of these organizations are focused on the data center and others on the Internet of Things, it is too early to say much about any of them.
From the global companies down to the startups, the one caveat is that nothing beyond the earliest information has appeared. It would be a surprise to see even the earliest device samples before 2020, so these chips are likely at least five years away from reaching the market.
(Source: Forbes; author: David A. Teich; compiled by NetEase Intelligence; contributor: nariiy)
3. Intel 10nm processor pictured; shipping only in small volumes
Because yields on Intel's 10nm process are not up to standard, mass production has been pushed back to 2019 and only small volumes are shipping. The only known product is a low-power Core i3-8121U with a 15W thermal design power (codenamed Cannon Lake), and only Lenovo is using it.
The i3-8121U has two cores and four threads, clock speeds of 2.2-3.2GHz, 4MB of L3 cache, support for up to 32GB of dual-channel DDR4/LPDDR4-2400 memory, and a 15W thermal design power.
No details of the integrated graphics have been published; it has presumably been disabled because of the yield problems, which is why Lenovo pairs the chip with a discrete AMD graphics card.
ComputerBase, a German hardware site, has published the first photo of the i3-8121U. The package layout is basically the same as on previous products: a processor die and a chipset die packaged together and soldered to the motherboard as an integrated BGA package.
Comparing it with an official Intel photo of an 8th-generation low-voltage Core part, the processor and chipset dies have become smaller, and the package's solder pads and capacitor components have also changed considerably, so the two should no longer be interchangeable.
ComputerBase measured the overall package of the i3-8121U at 45 x 24 mm (consistent with the official specification), with a processor die of roughly 71 mm² and a chipset die of roughly 47 mm².
Although studies put the transistor density of Intel's 10nm process at more than 100 million transistors per square millimeter, equal to or even higher than the 7nm processes of Samsung, TSMC and GlobalFoundries, the change does not look large compared with Intel's own 14nm products.
Bear in mind that the processor die of Intel's first-generation 14nm Broadwell-U was only about 82 square millimeters, so the 10nm die is only about a 13% shrink. Both are dual-core, four-thread parts with 4MB of L3 cache; the differences are that the integrated graphics' execution units have grown from 24 to 40 and the new chip supports the AVX-512 instruction set.
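As a quick check on those figures (simple arithmetic on the die areas quoted above):

```python
# Die areas in square millimeters, as reported above.
broadwell_u_14nm = 82.0   # first-generation 14nm Broadwell-U processor die
cannon_lake_10nm = 71.0   # i3-8121U processor die as measured by ComputerBase

shrink = (broadwell_u_14nm - cannon_lake_10nm) / broadwell_u_14nm
print(f"die shrink: {shrink:.1%}")  # ~13.4%, i.e. the "only about 13%" above
```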
In addition, the 45 x 24 mm overall package is slightly larger than the 42 x 24 mm of the current 14nm low-voltage parts.