Google cooperates with the military again. Where is the boundary of AI ethics?

The story begins in early March this year.

In March of this year, it came to light that Google had reached an agreement with the U.S. Department of Defense to help the latter develop an artificial intelligence system for drones, code-named Project Maven.

As soon as the news broke, Google employees voiced strong dissatisfaction with the project. Some felt that Google was opening its resources to the Pentagon to help build drone surveillance technology; others questioned whether Google's use of machine learning met ethical standards, arguing that the Pentagon would turn the technology toward anti-personnel weapons and thereby cause exactly the harm that technology optimists never wanted to see.

Subsequently, more than 3,100 Google employees signed a joint letter of protest to CEO Sundar Pichai.

The matter escalated further at the end of April, when media outlets noticed that Google had removed the three mentions of its eighteen-year-old motto 'Don't be evil' from the preface of the company's code of conduct. One sentence at the end was left untouched: 'Remember, don't be evil, and if you see something that you think isn't right, speak up!'

On Friday, at the weekly 'weather forecast' all-hands meeting, Google Cloud CEO Diane Greene announced that Google would end Project Maven with the U.S. Department of Defense once the current contract expires.

This was undoubtedly a major event, and the 'news of victory' spread like wildfire inside Google. As articles rushed out, it seemed the matter would come to a close with Google compromising with its employees and declining to renew the contract with the Department of Defense.

But just yesterday, Google CEO Sundar Pichai published an article titled "AI at Google: our principles", which laid out seven guiding principles and made clear that Google would not terminate its cooperation with the U.S. military. Although the article spells out 'AI applications we will not pursue', technology doing evil is still, at root, a matter of humans doing evil, and AI ethics is once again thought-provoking.

Where are the boundaries of AI ethics?

If Google has had a rough time lately, Amazon's Alexa has not had it easy either.

Amazon's Echo device was accused of recording a private conversation without permission and sending the audio to a random person in the user's contact list. And this came not long after the previous unsettling incident in which Alexa let out a creepy laugh at its human users.

The Alexa incident is far from an isolated case. As early as 2016, a chatbot named Tay, given the persona of a nineteen-year-old girl, went live on Twitter. This Microsoft-developed artificial intelligence used natural language learning technology: by capturing data from its interactions with users, it could process and imitate human conversation, chatting like a person, jokes, memes, and emoticons included. But less than a day after launch, Tay had been 'taught' into a foul-mouthed advocate of racial cleansing, rude and extremist, and Microsoft had to pull it offline for a 'system upgrade'.
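To make the failure mode concrete, here is a toy 'learn from whatever users say' bot (a hypothetical sketch, not Microsoft's actual system): because it trains on raw, unmoderated input, a handful of coordinated users can steer what it says, which is essentially what happened to Tay.

```python
import random
from collections import defaultdict

class EchoBot:
    """Learns bigrams from user messages and parrots them back."""

    def __init__(self):
        self.next_words = defaultdict(list)  # word -> observed following words

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)     # no filtering, no moderation

    def reply(self, seed: str, length: int = 6) -> str:
        out = [seed.lower()]
        for _ in range(length):
            options = self.next_words.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

bot = EchoBot()
for _ in range(100):                          # a coordinated flood of hostile input
    bot.learn("cats are terrible and cats must go")
bot.learn("cats are lovely pets")             # one benign message, drowned out
print(bot.reply("cats"))                      # almost always echoes the flood
```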

The extremist remarks of Microsoft's robot Tay give real pause for thought. According to Keming, an analyst at aixdlun, as AI's ills come into sharper relief, AI ethics will attract more and more attention. Where is the boundary of AI ethics? First, several issues need to be clarified.

1. Is a robot a civil subject?

With the rapid development of artificial intelligence technology, robots possess ever more powerful intelligence, and the gap between machines and humans is gradually narrowing. Robots of the future, it is argued, will have biological brains that could even rival the human brain in neuron count. American futurists have even predicted that by the middle of this century, non-biological intelligence will be a billion times greater than the combined wisdom of everyone alive today.

Citizenship no longer seems an insurmountable problem for robots. In October last year, Sophia became the world's first robot to hold citizenship, meaning that a human creation acquired the same identity as a human being, along with the rights, obligations, and social status behind that identity.

Qualification as a civil legal subject is still the dividing line of AI ethics. Over the past few years, philosophers, scientists, and lawmakers in the United States, the United Kingdom, and elsewhere have argued heatedly over the question. In 2016, the European Parliament's Committee on Legal Affairs submitted a motion to the European Commission calling for the most advanced autonomous robots to be given the status of 'electronic persons'. Besides granting them 'specific rights and obligations', it also recommended registering intelligent robots so that they could pay taxes, make contributions, and draw pensions from their own funds. If this motion passes into law, it will undoubtedly shake up the traditional system of civil legal subjects.

Strictly speaking, a robot is not a natural person possessed of life, yet it is also distinct from a legal person, which has its own independent will and is a collective of natural persons. Trying to hold the AI itself culpable for a robot's negligent behavior is, for now at least, premature.

2. Algorithmic discrimination is unfair

When artificial intelligence errs in judgment, one frequent accusation is that it is 'discriminatory'. Google, which commands some of the most advanced image recognition technology, was accused of 'racial discrimination' when its service labeled photos of black people as 'gorillas', and image searches for 'unprofessional hairstyles' returned mostly pictures of black women. Latanya Sweeney, the Harvard professor who runs the university's Data Privacy Lab, found that searches for stereotypically black names were more likely to trigger ads related to criminal records, a result served up by Google's ad system, AdSense.

And the danger is not in the labeling itself; after all, tagging a photo of a black man as a 'gorilla' is merely offensive. The real problem is that AI decision-making is moving into areas that genuinely shape individual fates, materially affecting employment, welfare, and personal credit. We can hardly turn a blind eye to 'unfairness' in those areas.

Likewise, as AI spreads into recruitment, finance, smart search, and beyond, can the 'algorithm machines' we train really be foolproof? In a society thirsty for talent, whether an algorithm can reliably help a company pick the one-in-a-thousand candidate is very much an open question.

So where does the discrimination originate: in the ulterior motives of whoever labeled the data, in skewed data fitting, or in a bug in the program's design? And can results computed by a machine ever serve as grounds for discrimination, inequality, and cruelty? All of these questions are worth debating.
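A minimal sketch of the 'skewed data fitting' case (synthetic data and a hypothetical hiring scenario, not any system named above): if historical labels were produced under a biased rule, a model fitted to them reproduces the bias even though the learning algorithm itself is neutral.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # sensitive attribute (0 or 1), hypothetical
skill = rng.normal(0.0, 1.0, n)      # the feature that *should* decide the outcome

# Biased historical labels: group 1 had to clear a higher bar to be 'hired'.
bar = np.where(group == 1, 0.5, -0.5)
hired = (skill > bar).astype(int)

# A perfectly ordinary fit on those labels...
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# ...scores two equally skilled candidates differently by group alone.
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 1 gets a lower score
```

Nothing in the training step is malicious; the unfairness rides in on the labels, which is why auditing the data matters as much as auditing the code.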

3. Data protection is the bottom line of AI ethics

Cyberspace is a virtual existence that is nonetheless real, an independent world without physical space. Here, humans achieve a 'digital existence' separated from the flesh and take on a 'digital personality'. A so-called digital personality is 'a personal image outlined in cyberspace through the collection and processing of personal information', that is, a personality established through digitized information.

In the AI environment, backed by the Internet and big data, vast stores of user habits and data are at hand. If the accumulation of 'past data' is the basis on which machines do evil, then the driving force of capital is the deeper reason.

In the Facebook data leak scandal, a company called Cambridge Analytica used artificial intelligence to target paid political advertisements at potential voters based on their 'psychological profile'; which ads a person saw depended on their political leanings, emotional traits, and degree of susceptibility. Fake news could thus spread rapidly within specific groups, gain exposure, and subtly sway people's value judgments. Christopher Wylie, formerly of Cambridge Analytica, recently exposed to the media the 'raw material' feeding this intelligent technology: data on more than 50 million users, captured under the guise of academic research.

In other words, even without any data leak, the so-called 'smart mining' of user data can easily swim along the edge of 'compliance' while going 'beyond fairness'. As for the boundary of AI ethics, information security is the basic bottom line for every 'information person' in the Internet age.

Reflection

In a video on AI ethics that recently went viral, artist Alexander Reben took no action himself, merely issuing a command through a voice assistant: 'OK Google, shoot.'

In under a second, Google Assistant pulled the trigger of a pistol and knocked down a red apple. Immediately, the buzzer gave off a harsh buzz.

The buzz resounded in everyone's ears.

Who shot the apple? Was it the AI, or a human?

In the video, Reben tells the AI to shoot. Engadget wrote in its report that if AI is smart enough to anticipate our needs, it may someday take the initiative to get rid of the people who make us unhappy. Reben said that discussing such a device is more important than asking whether it actually exists.

Artificial intelligence is not a predictable, perfectly rational machine; its ethical flaws are the product of its algorithms and of the goals and judgments of the people who use it. But for now at least, machines remain a reflection of the real human world, not guides and pioneers of the world as it 'ought to be'.

Clearly, only by holding fast to the ethical bottom line of AI can humanity avoid ever arriving at a day of 'machine tyranny'.

Appendix: Google's 'Seven Principles'

1. Be socially beneficial

2. Avoid creating or reinforcing unfair bias

3. Be built and tested for safety

4. Be accountable to people

5. Incorporate privacy design principles

6. Uphold high standards of scientific excellence

7. Be made available for uses that accord with these principles
