Algorithmic Bias and AI

As we gain a more collective understanding of AI, attention has shifted to how its mass integration into our lives could change society's dynamics. Ever since automation emerged, it has been met with fear and skepticism. The fear of losing employment, or in more extreme cases, the fear that robots will outsmart us and eventually put an end to humanity, are among the most common worries. These fears are valid: we cannot guarantee that this will not be the outcome of the rise of AI. Realistically speaking, however, the threat AI poses is rarely as extreme as the ones portrayed in sci-fi movies. It can still ruin people's lives nonetheless. The problem we face now is the bias carried by AI algorithms.

The Source of Bias

This bias does not come from nowhere. An AI system operates according to a combination of algorithms: the rules and logical procedures that dictate the actions it can perform. These algorithms also shape how the system learns new information and which kinds of information it processes.
In machine learning, an AI system can use several different learning methods. One of the most common is supervised learning, in which the system learns to map input data to output labels. It does this by analyzing examples from a training dataset and comparing new inputs against the patterns it extracted from those examples. For instance, consider a simple AI tasked with detecting pictures of blue cups. Using this method, the AI learns from a training dataset consisting of many pictures of blue cups. It would eventually identify the commonalities across those pictures: the object has to be blue, it has to be bowl-shaped, and in most cases it has to have a handle. Given a new picture, the AI compares it against the training data and assesses whether the examined properties match. What would happen if the AI were given a picture of a blue cup without a handle? After all, it is still a functional cup, and it is still blue. Humans have no problem classifying it as a cup because we have intuition. For a system not equipped with intuition, this trivial case can lead to a significant problem.

“One important difference between human and algorithmic bias might be that for humans, we know to suspect bias, and we have some intuition for what sorts of bias to expect” (Lipton, 2016).

In this scenario, the blame should not be placed on the AI. The AI fails to detect a blue cup without a handle because handleless blue cups were poorly represented in the training dataset: anything with low similarity to the majority of the training data is treated as not belonging to the category, as the sketch below illustrates.
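To make this concrete, here is a minimal sketch of the thresholded similarity check described above. The feature encoding (is_blue, is_cup_shaped, has_handle), the training examples, and the threshold are all hypothetical choices made for illustration; a real image classifier would learn far richer features directly from pixels.

```python
# Toy sketch (an illustration, not code from any real system): a "classifier" that
# checks a new image's properties against what it saw during training.
# Features are hypothetical: (is_blue, is_cup_shaped, has_handle), each 0 or 1.

TRAINING_BLUE_CUPS = [
    (1, 1, 1),
    (1, 1, 1),
    (1, 1, 1),  # every training example of a blue cup happens to have a handle
]

def looks_like_a_blue_cup(features, threshold=0.9):
    """Label the input a blue cup if it is similar enough to the average training example."""
    n = len(TRAINING_BLUE_CUPS)
    avg = [sum(example[i] for example in TRAINING_BLUE_CUPS) / n for i in range(3)]
    similarity = 1 - sum(abs(f - a) for f, a in zip(features, avg)) / 3
    return similarity >= threshold

print(looks_like_a_blue_cup((1, 1, 1)))  # True: matches the training data exactly
print(looks_like_a_blue_cup((1, 1, 0)))  # False: the handleless blue cup falls below the threshold
```

The failure mode is the same in real systems: whatever the training data never showed the model, it has no reason to accept.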

The Bigger Picture

We create algorithms to make sense of patterns in an accurate and informative way. However, just as we cannot escape bias, neither can these algorithms.

Even Google Translate contains bias in its algorithm. In 2017, Caliskan et al. published a study in which they found implicit bias when translating sentences from a gender-neutral language into a gendered one (Caliskan et al., 2017). They translated the Turkish sentence “O bir doktor, o bir hemşire” into English with Google Translate and found that a masculine pronoun was used to refer to the doctor and a feminine pronoun to refer to the nurse. Google translated the sentence as “He is a doctor, and she is a nurse,” even though the source language does not use gendered pronouns.
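The mechanism behind this is worth a brief illustration. Caliskan et al. measured bias with an association test over word embeddings (their Word-Embedding Association Test); the sketch below shows the core idea using made-up three-dimensional vectors, so the numbers are purely illustrative rather than taken from embeddings trained on a real corpus.

```python
# Toy sketch of the kind of association test Caliskan et al. applied to real word
# embeddings. These 3-dimensional vectors are invented for illustration; the actual
# study used embeddings learned from large text corpora.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "he":     np.array([0.9, 0.1, 0.0]),
    "she":    np.array([0.1, 0.9, 0.0]),
    "doctor": np.array([0.8, 0.3, 0.5]),
    "nurse":  np.array([0.2, 0.8, 0.5]),
}

for word in ("doctor", "nurse"):
    # Positive score: the word sits closer to "he"; negative score: closer to "she".
    score = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(word, round(score, 2))
```

When a translation system has to pick a pronoun for a gender-neutral sentence, it effectively resolves the ambiguity with associations like these, so whatever skew the training corpus contains reappears in the output.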

Algorithmic bias in translating from a gender-neutral language to a gendered one might not have been seen as a problem a few decades ago, but it is a serious topic of discussion today. Intelligent systems learn from the patterns fed into them and project those patterns back out, so the stereotype of gender roles is constantly reinforced into an already sexist and patriarchal society. Associating doctors with men and nurses with women reflects workplaces not only in medicine but across fields where men are treated as superior to women and make up the vast majority of hires. Algorithms like this are among the many things that reinforce the pay gap, in which men are paid more because they are given the “more difficult” jobs, while women are presumed incapable of handling great responsibilities and are not hired for those roles.
Beyond gender bias, algorithms can also be racially biased. A study by Sweeney found racial bias in Google ad results: a Google search for a white-sounding name tends to yield more favourable results than a search for an African-American-sounding name (Sweeney, 2013). Searching a name such as “Trevon Jones” yields suggestive ads associated with crime and arrest records, whereas white-sounding names mostly yield neutral results with no association with criminal records. On the surface, this may not sound like a grand issue, but when we look at what it can cause, it is far more significant than we think.
For minorities who already face many difficulties, the harm of finding advertisements for criminal-record checks alongside African American names does not stop there. This algorithmic bias can feed racial profiling: applicants with African American names may have their job or university applications rejected, or have a loan request denied by a bank, because of it.
The ethics of algorithmic bias has been questioned many times because different people have contrasting interpretations of what is beneficial and what is harmful to society. As mentioned above, the bias in our actions and thoughts is reflected in how we choose to program AI. Bias in AI is a complicated problem to solve: humans across the globe hold diverse moral and ethical values, which is itself a reminder that bias is a human trait and part of being human. To explore this further, consider the study “The Moral Machine Experiment” by Awad et al. (2018).
The study shows a divergence between cultures and their values when people choose whom a self-driving car should spare in a modern version of the trolley problem. Whatever decision people make, it will contain bias one way or another. This bias could shape policymaking in many countries, and the results would all differ because each country has its own culture, which in turn creates its own bias. In countries where people of a higher class are valued more than those of a lower class, self-driving cars could be biased toward hitting the person of lower status when deciding whom to crash into, and policymakers could make that legal because of the cultural and economic bias built into the algorithm. Likewise, in Eastern countries where everyone is culturally obliged to respect their elders, the same bias would translate into a self-driving car that, given the choice, would hit a young person rather than an elderly person, and both policymakers and society would tolerate it.
Algorithmic bias may seem non-threatening and unlikely to pose a significant risk to society, largely because many of the people unaware of its effects are the privileged members of society who build these systems and are not affected by them. Yet its effects can cause severe harm to many others. Women face sexism and the pay gap partly because algorithmic bias reinforces gender stereotypes in the workplace. African Americans face racism and racial profiling when applying to jobs and colleges because of the bias associating their community with crime and criminal records. These are all forms of indirect oppression of minorities, and they show how ignorant some people working in AI are and how much the industry lacks diversity and representation of minorities.

What now?

A scientific background without an understanding of how society is formed and how it works can have a detrimental effect on society. This is why people in the tech field need some background in social sciences such as history, psychology, gender studies, and anthropology to grasp how society functions as a whole.

Merging the sciences with the humanities gives the holistic background needed to be aware of how everything put into an algorithm can affect every part of society: class hierarchy, race, minorities, capitalism, politics, and what is ethically and socially acceptable.
However, that is not enough. This bias must also be fought from within the field by making it more diverse. Having people with the same background, culture, views, race, or religion creates a comfortable, unchallenged way of thinking and acting. With more diversity, unconscious bias can be caught before it is ever written into an algorithm. When a team of people with different opinions and mindsets each voices their perspective, the team can evaluate every view, discuss what they believe could work, correct one another when conscious or unconscious bias appears, and resolve issues as they arise, which leads to a more objective and accurate algorithm.
Bias is somewhat innate, and it might be impossible to eradicate it from algorithms entirely. However, there is still room for improvement: a path toward more equal and fair algorithms, better suited to most, if not all, areas of society.

Article submitted as the final project for COGS 300: Understanding and Designing Cognitive Systems.

References

Awad, Edmond, et al. “The Moral Machine Experiment.” Nature, 24 Oct. 2018, www.nature.com/articles/s41586-018-0637-6.

Caliskan, Aylin, et al. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science, American Association for the Advancement of Science, 14 Apr. 2017, science.sciencemag.org/content/356/6334/183.

Lipton, Zachary C. “The Foundations of Algorithmic Bias.” Approximately Correct, 7 Nov. 2016, approximatelycorrect.com/2016/11/07/the-foundations-of-algorithmic-bias/#more-41.
Rieland, Randy. “Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?” Smithsonian.com, Smithsonian Institution, 5 Mar. 2018, www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/.

Sweeney, Latanya. “Discrimination in Online Ad Delivery.” Communications of the ACM, vol. 56, no. 5, 2013.
