How can we rid AI of bias?

The author discusses the various biases, both intentional and unintentional, that can find their way into artificial intelligence and machine learning applications...

We have all seen the movie scenes in which machines take control of the world and destroy humanity. Fortunately, those films are just entertainment, and such contrived scenarios will not happen in real life. There is, however, a more practical issue that deserves our attention: algorithmic bias.

So-called 'algorithmic bias' arises when a designer or developer writes prejudice into a program, whether maliciously or not, or when the data used to train it is itself biased. The results naturally bring a variety of problems: Google searches returning misleading results, qualified candidates failing to get into medical school, chatbots spreading racist and sexist messages on Twitter.

Online trolls proved too powerful: Microsoft's chatbot Tay was taught to be racist within a single day, and Microsoft had to 'silence' her in a hurry...

One of the thorniest problems with algorithmic bias is that the engineers writing the program can create prejudice even when they harbor no racial, gender, or age discrimination themselves. Artificial intelligence (AI) is, by design, meant to learn on its own, and sometimes it does make mistakes. We can, of course, make adjustments after the fact, but the best solution is to prevent bias from arising in the first place. So how can we make AI unbiased?

Ironically, one of the most exciting possibilities of artificial intelligence is a world without human prejudice: algorithms that treat men and women equally when they apply for the same job, for example, or policing free of racism.

Whether or not people realize it, machines created by humans truly reflect how their creators view the world, and they inherit similar stereotypes and worldviews. As artificial intelligence reaches deeper and deeper into everyday life, we must pay attention to this issue.

Another challenge AI faces is that bias does not take a single form. It comes in many varieties, including interaction bias, latent (subconscious) bias, selection bias, data-driven bias, and confirmation bias.

Types of AI bias

'Interaction bias' refers to bias introduced into the algorithm through users' interaction with it. When machines are set up to learn from their surroundings, they cannot decide which data to keep or discard, or which is right or wrong. They can only use the data provided to them, good, bad, or ugly, and base their judgments on it. Microsoft's chatbot Tay, mentioned above, is an example of this kind of bias: it became racist under the influence of an online chat community.

'Latent bias' refers to the algorithm wrongly linking concepts to factors such as race and gender. For example, when searching for photos of a doctor, an AI may present pictures of male doctors before female ones; the reverse happens when searching for nurses.
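One way such associations arise is from simple co-occurrence statistics in training text. A minimal sketch, using a tiny hypothetical corpus (all sentences invented for illustration): the counts reflect the skew of the corpus, not medical reality.

```python
# Toy biased corpus (hypothetical): pronouns co-occur unevenly with job titles.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

def cooccurrence(word_a, word_b):
    """Count sentences in which both words appear."""
    return sum(1 for s in corpus if word_a in s.split() and word_b in s.split())

# A model trained on these counts would 'learn' that doctors are usually male.
print(cooccurrence("doctor", "he"))   # 2
print(cooccurrence("doctor", "she"))  # 1
```

Real systems learn the same kind of association from word embeddings at far larger scale, but the mechanism is the same: frequency in the data becomes 'knowledge' in the model.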

'Selection bias' refers to an algorithm being influenced by its data in a way that over-represents a certain group, so that the algorithm favors that group at the expense of others. Take recruiting as an example: if an AI is trained to recognize only men's résumés, women applicants will hardly ever succeed in the application process.
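The recruiting example can be sketched in a few lines. All numbers below are hypothetical: a training sample of past hires drawn almost entirely from one group, and a naive model that scores applicants by their group's historical hire rate, which reproduces the sampling skew as if it were signal.

```python
# Hypothetical training set: (gender, hired) records sampled mostly from men.
training = [("M", True)] * 90 + [("M", False)] * 10 + [("F", False)] * 5

def hire_rate(records, gender):
    """Historical hire rate for a group; a naive model would score with this."""
    group = [hired for g, hired in records if g == gender]
    return sum(group) / len(group) if group else 0.0

# The skew of the sample, not the applicants' merit, drives the scores.
print(hire_rate(training, "M"))  # 0.9
print(hire_rate(training, "F"))  # 0.0
```

Nothing in this 'model' refers to qualifications at all, yet it confidently ranks one group above the other, which is exactly the failure mode described above.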

'Data-driven bias' means that the raw data used to train the algorithm is already prejudiced. Machines are like children: they do not question the data they are given, they simply look for patterns in it. If the data is distorted from the start, the output will reflect that distortion.

The last kind, 'confirmation bias', is similar to data-driven bias: it favors preconceived ideas and affects how people gather and interpret information. For example, if you believe that people born in August are more creative than people born in other months, you will tend to search for data that reinforces this idea.

Knowing how many kinds of prejudice can infiltrate an artificial intelligence system may seem worrying. But it is important to recognize that the world itself is biased, so in some cases the results AI gives us are not surprising. That should not be acceptable, however. We need a process for testing and validating AI algorithms and systems, so that prejudice is detected early in development and before deployment.

One difference between an algorithm and a human being is that the algorithm does not lie. If its result is biased, there must be a reason, namely the data fed to the algorithm. A human can lie about why they did not hire someone; an AI cannot. With algorithms, we may be able to identify when bias occurs and adjust for it, so that these problems can be overcome in the future.

Artificial intelligence learns, and it makes mistakes. Often it is only once an algorithm is used in the real world that its internal prejudices are discovered, because that is where they get amplified. Rather than treating the algorithm as a threat, we should regard it as a good opportunity to surface all of these questions of prejudice and correct them where necessary.

We can develop systems that detect biased decisions and act on them in time. Compared with humans, artificial intelligence is particularly well suited to Bayesian methods for determining the probability of a hypothesis while eliminating as much human prejudice as possible. This is complicated, but it is feasible, especially given how important artificial intelligence already is and how much more important it will become in the coming years; leaving it unaddressed would be irresponsible.
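A minimal sketch of the Bayesian idea (all numbers hypothetical): update the probability of a hypothesis, say 'this candidate is qualified', from explicit evidence rather than from a human gut feeling.

```python
def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood * prior
    evidence = numerator + likelihood_given_not * (1 - prior)
    return numerator / evidence

# Hypothetical numbers: 30% of applicants are qualified (prior); qualified
# applicants pass a skills test 80% of the time, unqualified ones 10%.
p = posterior(prior=0.30, likelihood=0.80, likelihood_given_not=0.10)
print(round(p, 3))  # 0.774
```

Because every factor in the update is written down, an auditor can inspect exactly which assumptions drive the decision, which is much harder with an opaque human judgment.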

As artificial intelligence systems develop, it is important to understand how they work, so that bias can be addressed consciously through design and potential prejudices avoided in the future. Do not forget that although AI is developing very rapidly, it is still at an early stage, with many areas left to study and improve. This adjustment will continue for some time; meanwhile, AI will become smarter, and there will be more and more ways to overcome prejudice and other problems.

For the technology industry, it is very important to keep questioning how and why machines operate as they do. Most artificial intelligence is like a black box, its decision-making process hidden, but openness and transparency in AI are the key to establishing trust and avoiding misunderstanding.

At this stage there are already many studies helping to identify prejudice, such as the work at the Fraunhofer Heinrich Hertz Institute. That research focuses on identifying different types of prejudice, including the biases mentioned above as well as more 'low-level' biases, and on problems that may arise during the training and development of artificial intelligence.

Another approach to consider is unsupervised training. Today, most artificial intelligence models are developed through supervised training, that is, trained only on data that humans have labeled. In unsupervised training on unlabeled data, the algorithm must sort, identify, and cluster the data by itself. This method is usually several orders of magnitude slower than supervised learning, but because it limits human intervention, it can eliminate conscious or unconscious human bias from affecting the data.
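To make 'the algorithm must cluster the data by itself' concrete, here is a minimal one-dimensional k-means sketch (the toy data and parameters are invented for illustration): no human labels anything, and the groups emerge from the numbers alone.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: group unlabeled numbers with no human labels."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups in the data; the algorithm finds both without labels.
print([round(c, 2) for c in kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.7])])
```

Of course, clustering only removes the bias introduced by human labeling; if the underlying data itself is skewed, the clusters will still reflect that skew.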

There is also much room for improvement in infrastructure. When developing new products, websites, or features, technology companies need talented people from all fields. Diversity provides the algorithm with a variety of data, but that data can still be inadvertently biased; when a diverse group of people analyzes the output, the likelihood of spotting prejudice is much higher.

Algorithmic auditing also has a role to play. In 2016, a research team at Carnegie Mellon University discovered algorithmic bias in online job advertisements: among Google ads shown to job seekers, listings for high-paying jobs were shown to men nearly six times as often as to women. The team concluded that conducting internal algorithmic audits first would help reduce such prejudice.
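The core measurement in such an audit is simple to sketch. The counts below are hypothetical (loosely patterned on the roughly six-to-one finding, not the study's actual data): compare how often each group was shown the ad and report the disparity.

```python
# Hypothetical audit counts: high-paying job ad impressions per 1,000 profiles.
impressions = {"men": 1852, "women": 318}

def disparity_ratio(counts, group_a, group_b):
    """Ratio of exposure rates between two groups; near 1.0 suggests parity."""
    return counts[group_a] / counts[group_b]

ratio = disparity_ratio(impressions, "men", "women")
print(round(ratio, 1))  # 5.8
```

A real audit would add statistical significance testing and control for confounders, but even this crude ratio is enough to flag that something deserves investigation before deployment.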

In short, machine prejudice is human prejudice. Artificial intelligence bias comes in many kinds, but in the end it has only one source: humans.

The key is for technology companies, engineers, and developers to take effective measures to avoid inadvertently producing biased algorithms. By auditing algorithms and keeping them open and transparent at all times, we can be confident of ridding AI algorithms of prejudice.
