HAW: Lesson - Big Idea 5: Societal Impacts

Think About It...

Can computers make errors, especially systems like generative AI that appear to think? If so, how do these errors happen, and what are the real-world consequences?

AI Bias Iceberg

Image note: Look at the image to the right. Only a small part of an iceberg is visible above the water, which makes it a fitting metaphor: the visible tip represents computational biases, while the much larger mass below the surface represents the deeper causes, human biases and systemic biases.

Societal Impacts

As noted earlier, AI innovations are affecting every area of our lives. With each new idea and advancement, it is wise to consider the societal and ethical implications for everyone who may be affected. Facial recognition (used across many sectors), autonomous vehicles (transportation sector), and autonomous robots (used across many sectors) are just three examples.

This portion of the lesson will focus on Big Idea 5: Societal Impacts.

AI Bias

When examining the societal and ethical impacts of AI, it is important to understand the what, why, and how of bias. The American Heritage Dictionary (5th Edition) defines bias as "a preference or an inclination, especially one that inhibits impartial judgment; an unfair act or policy stemming from prejudice."

One may wonder: how in the world does bias get into AI? Well, there is no one-size-fits-all answer, and there are many viewpoints on this challenge. Let's explore three key stages at which AI bias can enter: framing the problem, collecting the data, and preparing the data.

How AI Bias Happens

Now that you've read the article (see the citation at the end of this lesson), let's dive into some important details about how AI bias happens.

  • Framing the problem: When computer scientists create deep-learning models, the first step is to figure out what they want the model to do. For example, a credit card company might want a model to predict whether a customer is likely to pay back a loan. The company has to decide what matters most: maximizing profit or maximizing the number of loans repaid. These decisions may value profit over fairness. If the deep-learning model determines that giving out risky loans generates the most profit, it might do so, even when that is unfair to customers.
  • Collecting the data: When collecting data to train a model, bias can enter in two main ways. First, the data collected may not accurately represent real life. For example, if a facial recognition system is trained mostly on photos of people with lighter skin, it will struggle to recognize people with darker skin (and the reverse would hold if the training photos skewed the other way). Second, bias can creep in when the data reflects existing prejudices. For example, one company's hiring tool learned to favor men over women because it was trained on past hiring decisions that had favored men; the sketch after this list makes this concrete.
  • Preparing the data: During the data preparation stage, bias can enter when choosing which attributes the algorithm should focus on. This step is different from deciding what the model's goal should be. For example, when deciding whether to approve a loan, attributes such as age, income, and the number of loans previously paid off might be considered. Similarly, for a company deciding whom to hire, attributes like gender, education level, and experience might be considered. However, measuring how each chosen attribute affects the model's bias is not straightforward.
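To make the data-collection stage concrete, here is a minimal sketch in Python. The hiring records and the `learned_hire_rate` helper are hypothetical, invented for illustration; the point is that a model fit to prejudiced historical decisions reproduces that prejudice even when all candidates are equally qualified.

```python
# Hypothetical historical hiring records: (gender, hired).
# Every candidate below is assumed equally qualified, but past
# decisions favored men, so "hired" correlates with gender anyway.
history = (
    [("M", True)] * 80 + [("M", False)] * 20 +   # 80% of men hired
    [("F", True)] * 40 + [("F", False)] * 60     # 40% of women hired
)

def learned_hire_rate(gender):
    """'Train' by estimating P(hired | gender) from the records."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# A model that imitates these rates scores equally qualified
# candidates differently by gender: the past bias is baked in.
for gender in ("M", "F"):
    print(f"Learned hire rate for {gender}: {learned_hire_rate(gender):.0%}")
```

The same reasoning applies to the facial recognition example: when one group dominates the training records, whatever the model learns is dominated by that group.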

Preventing AI Bias - Why is it so difficult?

Many may wonder why AI bias is so hard to address. There are several reasons; we will review three: unknown unknowns, imperfect processes, and definitions of fairness.

  • Unknown unknowns: Bias might not be obvious during model creation, making it hard to spot and correct later. For example, in the hiring-tool example above, the system initially penalized candidates whose applications contained explicitly gendered terms. In response, engineers removed those terms, but the system still showed implicit gender bias, such as favoring verbs more commonly found on men's applications than on women's.
  • Imperfect processes: Standard practices in deep learning often lack bias-detection methods. Models are typically tested for performance using data drawn from the same source as the training data, so the test data can carry the same biases, making prejudiced outcomes hard to detect.
  • Definitions of fairness: Defining fairness in AI systems is complex. Mathematical definitions of fairness can conflict and be mutually exclusive. For example, should fairness mean equal proportions of favorable outcomes across racial groups, or identical treatment of individuals regardless of race? Choosing one definition often means sacrificing another, and fixing on a single answer may not align with society's evolving understanding of fairness. The sketch after this list makes this trade-off concrete.
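To see how fairness definitions can collide, here is a small numeric sketch in Python. The repayment rates are invented for illustration; under these assumptions, a model that is perfectly accurate for each group cannot also approve equal proportions of each group, so satisfying one definition of fairness means giving up the other.

```python
# Hypothetical repayment base rates for two groups of loan applicants.
base_rate = {"Group 1": 0.70, "Group 2": 0.40}

# Definition A: approve exactly the applicants who would repay.
# Every group is judged with perfect accuracy, but approval rates
# then mirror the base rates and differ between groups.
approval_rate = dict(base_rate)

# Definition B: demographic parity - approve the same fraction (50%)
# of each group, ranking applicants by a perfect risk score.
parity = 0.50
accuracy = {}
for group, p in base_rate.items():
    approved_repayers = min(parity, p)          # repayers who get loans
    denied_defaulters = min(1 - parity, 1 - p)  # defaulters who do not
    accuracy[group] = round(approved_repayers + denied_defaulters, 2)

print("Definition A approval rates:", approval_rate)  # unequal rates
print("Definition B per-group accuracy:", accuracy)   # unequal accuracy
```

Running the sketch shows the trade-off: Definition A yields unequal approval rates (70% vs. 40%), while Definition B equalizes approval rates but yields unequal per-group accuracy (0.8 vs. 0.9). Neither choice is "fair" by the other definition.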

Reflection on Societal Impacts of AI

Thankfully, numerous AI researchers are working diligently on these problems. They are using a range of methods, such as developing algorithms that help detect and reduce hidden biases, an example of which appears below. Additionally, procedures are being established to hold companies accountable for more ethical outcomes.
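One such mitigation technique is reweighing: give each training record a weight so that, under the weighted data, the protected attribute and the outcome are statistically independent. Here is a minimal sketch in Python, reusing the hypothetical hiring records from the earlier example; the weight formula shown (expected probability divided by observed probability) is one standard construction, not the only way to reduce bias.

```python
from collections import Counter

# Hypothetical records from the earlier sketch: (gender, hired).
history = (
    [("M", True)] * 80 + [("M", False)] * 20 +
    [("F", True)] * 40 + [("F", False)] * 60
)
n = len(history)

# Observed counts of each (gender, hired) combination, plus the
# marginal counts of gender and outcome alone.
joint = Counter(history)
gender_count = Counter(g for g, _ in history)
hired_count = Counter(h for _, h in history)

# Reweighing: weight = P(gender) * P(hired) / P(gender, hired).
# Under-represented combinations (e.g., hired women) get weight > 1.
weights = {
    (g, h): (gender_count[g] / n) * (hired_count[h] / n) / (joint[(g, h)] / n)
    for (g, h) in joint
}

for combo, w in sorted(weights.items()):
    print(f"{combo}: weight {w:.2f}")
```

A model trained on the weighted records no longer sees a correlation between gender and being hired, which removes this particular bias; it does not, by itself, fix bias introduced at the framing or data-preparation stages.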

[CC BY-NC-SA 4.0] UNLESS OTHERWISE NOTED | IMAGES: LICENSED AND USED ACCORDING TO TERMS OF SUBSCRIPTION - INTENDED ONLY FOR USE WITHIN LESSON.
Whale Design/Shutterstock.com. Image used under license from Shutterstock.com and may not be repurposed.
Citation: AI Bias source: Karen Hao, "This is how AI bias really happens—and why it's so hard to fix," MIT Technology Review.