The report, titled "Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems", is organized into eight sections addressing new AI developments: General Principles; Embedding Values into Autonomous Intelligent Systems; Methodologies to Guide Ethical Research and Design; Safety and Beneficence of Artificial General Intelligence and Artificial Superintelligence; Personal Data and Individual Access Control; Reframing Autonomous Weapons Systems; Economics/Humanitarian Issues; and Law.
The general principles cover high-level ethical concerns that apply to all types of AI and autonomous systems. In formulating them, three main factors are considered: embodying human rights; prioritizing the maximum benefit to humanity and the natural environment; and mitigating the risks and negative effects of artificial intelligence.
The principle of human benefit requires considering how to ensure that AI does not infringe on human rights. The principle of responsibility concerns how to ensure that AI remains accountable: to resolve faults and avoid public confusion, an AI system must be able to justify its specific decisions at the procedural level. The principle of transparency means that the operation of autonomous systems must be open to inspection, so that people can find out how and why a system made a particular decision.
On how to embed human norms and ethical values into AI systems, the report states that as AI systems become more autonomous in making decisions and manipulating their environment, it is of paramount importance that they adopt, learn, and comply with the norms and values of the societies and communities they serve. The goal of embedding values into an AI system can be achieved in three steps:
First, identify the norms and values of a particular society or group;
Second, implement those norms and values in the AI system;
Third, evaluate the validity of the norms and values written into the AI system, i.e., whether they are consistent and compatible with the actual human norms and values.
Although related research has long been ongoing under names such as machine morality, machine ethics, moral machines, value alignment, artificial morality, safe AI, and friendly AI, it remains a persistent challenge to build computer systems that recognize and understand human norms and values and take them into account when making decisions. Currently there are two main approaches: top-down paths and bottom-up paths. Research in this area needs to be strengthened.
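As a rough illustration of the top-down path (a hypothetical sketch, not taken from the report), explicit norms can be written down as rules that filter an agent's candidate actions before any utility comparison. All names here (`Norm`, `choose_action`, the example norms) are illustrative assumptions:

```python
# Top-down value embedding, sketched: norms are explicit, hand-written rules,
# and the agent may only pick among actions that violate none of them.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Norm:
    name: str
    forbids: Callable[[dict], bool]  # True if the action violates this norm

def choose_action(candidates: List[dict], norms: List[Norm]) -> Optional[dict]:
    """Return the highest-utility action that violates no norm."""
    permitted = [a for a in candidates if not any(n.forbids(a) for n in norms)]
    if not permitted:
        return None  # no compliant action exists: defer to a human operator
    return max(permitted, key=lambda a: a["utility"])

norms = [
    Norm("no-harm", lambda a: a.get("harms_human", False)),
    Norm("privacy", lambda a: a.get("shares_personal_data", False)),
]
candidates = [
    {"name": "fast_route", "utility": 0.9, "harms_human": True},
    {"name": "safe_route", "utility": 0.7},
]
print(choose_action(candidates, norms)["name"])  # safe_route
```

A bottom-up path would instead learn such constraints from observed human behavior; the hard evaluation step the report mentions (step three above) applies to both.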
The report also points to the need for guidance in the remaining areas: methodologies for ethical research and design, the safety and beneficence of general and super-artificial intelligence, the reframing of autonomous weapons systems, and economic/humanitarian issues. More than 100 thought leaders from academia, science, and government-related sectors in the field of artificial intelligence, combining expertise in areas such as AI, ethics, philosophy, and policy, contributed to the report; a revised version of Ethically Aligned Design (EAD) is expected by the end of 2017, expanded to thirteen chapters with input from over 250 global leaders.
Satoshi Tadokoro, president of the IEEE Robotics and Automation Society, explained why they wanted to set the standards: "Robots and automated systems will bring major innovations to society. The public is increasingly attentive both to the social problems that may arise and to the huge potential benefits. Unfortunately, some of the false messages in these discussions come from fiction and imagination."
Tadokoro continued: "The IEEE will introduce knowledge and wisdom based on well-recognized scientific and technological facts to help shape public decision-making and maximize overall human benefit." Alongside the ethics report, three new AI standards are being introduced, each project led by domain experts.
The first is the "Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems." It explores "nudges", which in the AI world refer to the subtle actions an AI can take to influence human behavior.
The second is the "Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems." It covers automated technologies that could endanger humans if they fail; the most obvious current example is the self-driving car.
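The core fail-safe idea can be sketched in a few lines (a minimal illustration, assuming a hypothetical controller and sensor; none of these names come from the standard): when a critical input is missing or stale, the system falls back to a known-safe action rather than acting on bad data.

```python
# Hypothetical fail-safe pattern: if a critical sensor stops reporting within
# its deadline, fall back to a known-safe action instead of continuing on
# stale data. All names and thresholds are illustrative assumptions.

SENSOR_TIMEOUT_S = 0.5  # assumed freshness deadline for sensor data

def control_step(sensor_reading, reading_age_s: float) -> str:
    """Return the chosen action; default to a safe stop on any failure."""
    if sensor_reading is None or reading_age_s > SENSOR_TIMEOUT_S:
        return "safe_stop"  # fail-safe default: never act on missing/stale data
    if sensor_reading["obstacle_distance_m"] < 5.0:
        return "brake"
    return "cruise"

print(control_step({"obstacle_distance_m": 12.0}, reading_age_s=0.1))  # cruise
print(control_step(None, reading_age_s=0.1))                           # safe_stop
print(control_step({"obstacle_distance_m": 12.0}, reading_age_s=2.0))  # safe_stop
```

The design choice is that the safe state is the default branch: every failure mode that the checks do not explicitly permit degrades into "safe_stop" rather than into continued operation.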
The third is the "Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems." It addresses how to measure the ways in which advanced AI technologies benefit human well-being.
These standards may be needed sooner than we think, as companies like OpenAI and DeepMind push AI forward faster and faster, even creating AI systems that teach themselves and expand their own "intelligence." Some experts believe that such artificial intelligence could destabilize the world, lead to large-scale unemployment and war, and even be turned toward the creation of "killer weapons." Recently, important United Nations discussions have prompted people to think seriously about the need for stronger regulation of artificial intelligence used as a weapon.