Intel Pushes Self-Driving Safety Algorithm | Conceptual Questions Remain to Be Solved

Mobileye, Intel's self-driving technology arm, recently released a technical report asserting that the industry needs a mathematical model that can ensure a self-driving car is never responsible for causing an accident. Experts have tentatively welcomed the move: the fact that Mobileye has introduced such a model shows the company is thinking seriously about driving safety. But questions remain, such as how "provably safe" should be defined, and whether the model leaves open problems, including whether a machine could learn to exploit loopholes in traffic law the way human drivers do.

According to EE Times, the report was written by Amnon Shashua, Intel senior vice president and Mobileye CEO, and Shai Shalev-Shwartz, Mobileye vice president of technology. The two authors explain that their driving policies are provably safe in the sense that they will never cause an accident for which the self-driving car is responsible (a sketch of the kind of safe-distance rule such a model involves appears at the end of this article).

Missy Cummings, a professor at Duke University, points out that provable safety is not entirely new and that the trickiest part of the problem remains unchanged: computer scientists treat "provably safe" as a mathematical property, which is not the same as a system that a test engineer would deem safe. Cummings and Phil Koopman, a professor at Carnegie Mellon University, add that Mobileye's assumptions should not be taken for granted, such as the implicit assumption that the potential problems caused by software bugs are negligible.

Koopman's major concern is whether real LiDAR and radar faults behave the same way as the hypothetical cases Mobileye discusses; the assumptions must be validated in the real application, not merely asserted. Even so, Koopman gives Intel credit for publishing its assumptions openly where they can be tested.

He is also concerned that a self-driving system could learn to exploit legal loopholes in the future. Human drivers who spend long enough on real roads come to know which road rules have holes in them, and over the years they learn how to use those holes. If humans behave this way, why wouldn't a self-driving car be able to do the same at its own discretion? A self-driving car could find and exploit legal loopholes while never technically violating road-safety rules. Indeed, Koopman argues we should expect machine learning (ML) to be particularly adept at learning to exploit such loopholes.

Koopman basically agrees with the Mobileye report, but warns that equating "safe" with "not my fault" may be a trap: this kind of formal mathematical validation can prove a claim true in principle, yet the underlying assumptions may not hold in the real world. In conclusion, the report is still a reasonable starting point for the future of driving safety, but the recommendation is to build a safety system that actually works in practice, so that no loophole becomes a problem.
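The article does not reproduce the report's mathematics, but its core construct can be sketched. Below is a minimal illustration, in Python, of a safe-longitudinal-distance rule of the kind the Shashua/Shalev-Shwartz report builds on: the rear car is deemed not at fault as long as it keeps a gap large enough that it can always stop, even if the car ahead brakes as hard as physically possible. The function name and the numeric defaults are illustrative assumptions for this sketch, not values taken from the report.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,              # rear vehicle speed (m/s)
    v_front: float,             # front vehicle speed (m/s)
    rho: float = 1.0,           # rear vehicle response time (s) -- assumed value
    a_max_accel: float = 3.5,   # worst-case acceleration during response time (m/s^2) -- assumed
    a_min_brake: float = 4.0,   # minimum braking the rear vehicle guarantees (m/s^2) -- assumed
    a_max_brake: float = 8.0,   # maximum braking the front vehicle may apply (m/s^2) -- assumed
) -> float:
    """Minimum following distance such that the rear vehicle can always stop
    without collision, even under worst-case braking by the front vehicle.
    The rear vehicle is assumed to accelerate at a_max_accel for the response
    time rho, then brake at a_min_brake."""
    # Distance the rear vehicle covers: response phase plus braking phase.
    v_after_response = v_rear + rho * a_max_accel
    d_rear = (v_rear * rho
              + 0.5 * a_max_accel * rho**2
              + v_after_response**2 / (2 * a_min_brake))
    # Distance the front vehicle covers while braking maximally hard.
    d_front = v_front**2 / (2 * a_max_brake)
    # Clamp at zero: a negative value means any positive gap is already safe.
    return max(0.0, d_rear - d_front)

# Example: both cars travelling at 25 m/s (90 km/h).
print(f"{rss_safe_longitudinal_distance(25.0, 25.0):.1f} m")  # roughly 89 m
```

The experts' objections map directly onto this sketch: the formula is only as good as its parameters, so if a software bug or a faulty LiDAR reading feeds it the wrong v_front, the "proof" of safety no longer describes the real car.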
