Module 6: Ethics & Responsibility

Bias and Fairness

Lesson 6.1 · 25–35 minutes · 1 activity

How Bias Enters AI Systems

You learned in Module 1 that AI learns from training data. That means if the data contains biases, the AI inherits them. But bias in AI goes deeper than just bad data. It enters at every stage:

  • Data bias: Training data overrepresents some groups and underrepresents others. If most photos of CEOs in training data are men, AI associates "CEO" with men.
  • Design bias: The choices builders make — what to measure, what to optimize for, who to test with — embed assumptions. A recommendation algorithm optimized for "engagement" might promote controversial content because it gets more clicks.
  • Deployment bias: Even a well-designed system can produce biased outcomes if deployed in a context it wasn't designed for, or if certain users can't access it equally.
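To make the data-bias bullet concrete, here is a minimal sketch (with made-up numbers, not from any real dataset) of how a naive frequency-based "model" inherits a skew in its training data. The `photos` list and `most_likely_gender` function are hypothetical illustrations, not a real AI system:

```python
from collections import Counter

# Hypothetical toy dataset of captioned photos, deliberately skewed
# (illustrative numbers only -- not drawn from any real dataset).
photos = [("CEO", "man")] * 90 + [("CEO", "woman")] * 10 \
       + [("nurse", "woman")] * 85 + [("nurse", "man")] * 15

def most_likely_gender(role, data):
    """A naive frequency 'model': predict whichever gender co-occurs
    with the role most often in the training data."""
    counts = Counter(g for r, g in data if r == role)
    return counts.most_common(1)[0][0]

print(most_likely_gender("CEO", photos))    # reflects the skew in the data
print(most_likely_gender("nurse", photos))  # not anything true about reality
```

The "model" is doing exactly what it was trained to do; the problem is upstream, in what the data overrepresents.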

As a builder, you can't eliminate all bias. But you can be aware of it, look for it in your project, and design with fairness in mind.

Quick Check: Is training data the only place bias can enter an AI system?

Answer: No. Bias can enter through data, design choices, and deployment context. A builder's decisions at every stage influence whether the final product treats people fairly.

Real-World Consequences of Biased AI

Bias in AI isn't abstract. It affects real people in real ways:

  • Hiring tools that scored women lower than men for technical roles, because historical hiring data reflected existing gender imbalances
  • Facial recognition systems that worked well on lighter skin but poorly on darker skin, because training data overrepresented light-skinned faces
  • Healthcare algorithms that gave less attention to certain patients, because they used healthcare spending as a proxy for health needs — people who spend less on healthcare aren't necessarily healthier, but the AI treated them as if they were
  • Content recommendation systems that created "filter bubbles," showing people only content that confirmed their existing beliefs

None of these were built with harmful intent. They were built by people who didn't check for bias carefully enough. The lesson: good intentions aren't enough. You have to actively look for bias.
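One practical way to "actively look" is to compare outcomes across groups. A common first screen in employment settings is the four-fifths rule: flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below uses made-up numbers and hypothetical group names; it is a simple illustration, not a complete fairness analysis:

```python
# Hypothetical screening results from a hiring tool (made-up numbers).
results = {
    "group_a": {"applied": 200, "selected": 60},   # 30% selection rate
    "group_b": {"applied": 180, "selected": 27},   # 15% selection rate
}

def selection_rates(results):
    """Fraction of applicants selected, per group."""
    return {g: r["selected"] / r["applied"] for g, r in results.items()}

def four_fifths_check(results):
    """Flag any group whose selection rate is below 80% of the
    highest group's rate -- a common first screen for disparate impact."""
    rates = selection_rates(results)
    top = max(rates.values())
    return {g: rate >= 0.8 * top for g, rate in rates.items()}

print(four_fifths_check(results))  # → {'group_a': True, 'group_b': False}
```

A failed check doesn't prove bias on its own, but it tells you exactly where to look more closely.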

Checking Your Project for Bias

Here's a practical framework for examining bias in your own project:

  1. Who benefits most from my project? Could it unintentionally exclude or disadvantage anyone?
  2. If my project makes recommendations or decisions, what assumptions are built into those? Are they fair to all users?
  3. If my project uses AI-generated content, have I checked that content for stereotypes or one-sided perspectives?
  4. Have I tested my project with people who are different from me? Different backgrounds, abilities, perspectives?
  5. If my project deals with categories of people (names, roles, descriptions), are those categories accurate and fair?

For most projects in this course, the bias risks are relatively small. A personal habit tracker doesn't make decisions about people. But building the habit of asking these questions prepares you for projects where the stakes are higher.
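If it helps to make the five questions part of your workflow rather than a one-time read, here is one lightweight way to do it: keep the questions as data, record an answer and note for each, and surface anything flagged as a concern. The structure and example answers are hypothetical:

```python
# The five bias questions as data, so an audit can be re-run as the project changes.
BIAS_QUESTIONS = [
    "Who benefits most? Could anyone be excluded or disadvantaged?",
    "What assumptions are built into recommendations or decisions?",
    "Has AI-generated content been checked for stereotypes?",
    "Has the project been tested with people different from me?",
    "Are categories of people accurate and fair?",
]

def audit(answers):
    """answers: one (ok, note) pair per question, in order.
    Returns the notes for every question flagged as a concern."""
    return [f"Q{i + 1}: {note}" for i, (ok, note) in enumerate(answers) if not ok]

# Example run for a hypothetical habit-tracker project:
answers = [
    (True, ""),
    (True, ""),
    (False, "AI-written example habits assume a 9-to-5 schedule"),
    (False, "only tested with classmates so far"),
    (True, ""),
]
for concern in audit(answers):
    print(concern)
```

Writing the concerns down, even informally like this, is what turns awareness into something you can act on.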

From Awareness to Action

Knowing about bias is step one. Here's how to act on it:

  • Get outside feedback: Show your project to different people — friends, family, people who aren't like you. Their feedback reveals blind spots you can't see from your own viewpoint.
  • Question AI defaults: When AI generates content that makes assumptions about people, push back. Ask for more accurate alternatives that reflect reality, not just training data patterns.
  • Design for edge cases: If your project works for the people who are hardest to serve (slow internet, older devices, different skill levels), it will work well for everyone.

Key Concepts

  • Bias enters AI through data, design choices, and deployment context — not just training data
  • Biased AI has caused real harm: unfair hiring, inaccurate recognition, unequal healthcare, filter bubbles
  • Good intentions aren't enough. Actively check for bias using structured questions
  • Get feedback from different users, question AI defaults, and design for edge cases
  • Building this habit now prepares you for higher-stakes projects in the future

Try It: Bias Case Study Analysis

Read and analyze a real-world example of AI bias.

  1. Choose one of the examples from the "Real-World Consequences" section above (hiring, facial recognition, healthcare, or recommendations).
  2. Ask AI: "Explain the [example] AI bias case in detail. What went wrong? When could it have been caught? What would have prevented it?"
  3. Write a 3–4 sentence analysis: What was the root cause? Who was harmed? What lesson applies to your own project?
  4. Apply the lesson: Run through the 5 bias questions from "Checking Your Project for Bias" for your own project. Document any concerns.

Check Your Understanding

1. Where can bias enter an AI system?

Explanation: Bias is multifaceted. Biased data is the most discussed source, but design decisions (what to optimize, who to test with) and deployment context also introduce bias.

2. Why is getting feedback from different people important for catching bias?

Explanation: Your perspective is just one viewpoint. Other people interact with your project differently based on their experience, abilities, and context. Their feedback reveals issues you might never notice on your own.

3. A hiring tool recommends only male candidates for engineering roles. What type of bias is this?

Explanation: If historical hiring data overrepresented men, the AI learns that pattern and perpetuates it. This is data bias. The tool isn't being malicious — it's doing what it was trained to do.

4. How should you approach bias in your own project?

Explanation: Bias is easy to miss from your own perspective. Use the questions in this lesson and get feedback from others. Even small projects can build bad habits or miss important issues.

Reflect & Write

Write 2–3 sentences: Is there any way your project could unintentionally treat some people differently than others? What's one step you could take to make sure it treats all users fairly?

Project Checkpoint

Run the 5 bias questions on your project and document your findings.

Level Up: Coming Next

Lesson 6.2 — Privacy, Consent, and Intellectual Property. We go deeper into responsible building: using others' data and content responsibly, and understanding what your project owes its users.

Continue to Lesson 6.2 →