Context and Implications

Key questions

Key objectives

These courses will present students with opportunities to explore a variety of types of broader contexts and implications of AI. Students will generally choose two specific areas of depth. Areas include:

At a minimum, I should be able to:

Specific topics may include:

Contents

Fairness and Bias

Discussion Prompt

Initial post: find and analyze one AI fairness/bias case, ideally one that your peers have not yet posted about.

In your post, please:

  1. Describe the specific issue (what system has a problem? for whom? etc.), including a link to a reputable source.
  2. Explain why it’s biased/unfair using clear criteria (e.g., disparate performance across groups, or disparate standards used between groups), including quantitative evidence if you can find it. Note that there are different definitions of what constitutes bias or fairness. A classic example is “affirmative action”: some people see it as a way to correct for past discrimination, while others see it as a form of discrimination itself. So you’ll need to be clear about what you mean by “fair” or “unfair.”
  3. Discuss why this matters (e.g., what are the potential real-world impacts on affected groups?)
  4. Consider key stakeholders in this situation:
    • Who’s affected directly and indirectly?
    • Who has power to make changes?
    • Who benefits from the current system?
  5. Acknowledge real-world constraints:
    • What technical limitations affect potential solutions?
    • What business or economic factors are relevant?
    • What tradeoffs might be necessary?

Then, respond to some peers’ posts. In your responses, you might:

Sources for Cases

Ironically, you can ask an AI for examples of AI bias! (But dig in to make sure it's not making things up, which is another problem with AI that we'll study later in the course.)

A few sources you might consider:

Old Discussion Prompt

This was our discussion prompt last year. If you’ve already started thinking about it, you’re welcome to make your post with this prompt instead.

We read an article on challenges with fairness in machine learning. Choose one of the following prompts and post a brief (about 150-250 words) substantive response.

  • What sorts of decisions might AI systems make more fairly than humans – or vice versa? Give specific examples of situations, explain why your choice could be more fair, and be specific about what you mean by “fair” in each situation.
  • Do you think that social media algorithms are biased? Why or why not? Cite evidence where possible.
  • Suppose you’re hired to develop an AI system that might help identify people at risk of mental illness. What issues of fairness or bias might you be concerned about, and what might you do about them?
  • The article cited mathematical proofs about the impossibility of fair decision-making by anyone, whether machine or human. Do you believe those results, or are they missing something?
  • Suppose a friend was denied a car loan by an algorithm and thinks they were unfairly discriminated against. What would you tell your friend to help them understand their situation? What evidence might you want to collect to help your friend make a strong discrimination case against the loan company?
  • or a similar sort of question of your own (send it to the instructor to check)

Your post should:

  • Start with the prompt that you’re responding to.
  • Cite sources where possible.
  • Be written clearly, for an educated but non-technical audience.

Then, post substantive, thoughtful replies to two of your peers’ posts. You might, for example, raise a counterpoint to their argument, suggest a different way of thinking about the situation, or identify a connection between what they wrote about and what someone else wrote about.

Your Choice of Context/Implications Topic

We have discussed several issues about the broader context and implications of AI, but there is far more than we have time to discuss, especially if you’re not continuing with us to CS 376. So we will teach each other!

  1. Go to the Syllabus section on context and implications and choose a topic that you find interesting or important (focusing especially on areas where you haven’t yet demonstrated fulfillment).
  2. Do a bit of research on the topic:
    • Have a discussion with an LLM about the topic to identify key issues and keywords to search for. In your conversation, include a bit about why you think the topic is interesting or important.
    • Find at least one (ideally more) reputable source that discusses the topic. Send a Teams message to the instructor if you need any help here or aren’t sure if a source is reputable. Our course Perusall has a lot of resources in the Library and there are a lot more that I haven’t yet loaded in; just ask.
  3. Write a post where you:
    • Very briefly introduce the topic and why you think it’s important or interesting.
    • Summarize the key issues and evidence from your source(s). Focus on one of the areas of depth that are listed as key objectives on the syllabus. (Note which area you’re focusing on.)
    • Raise at least one question that your colleagues might discuss in response.

Then, respond to some peers’ posts. In your responses, you might:

etc.

AI-Human Collaboration and the Future of Work

Background

God Made Work Good

“God blessed [the humans] and said to them, ‘Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.’” (Genesis 1:28 NIV)

“The Lord God took the man and put him in the Garden of Eden to work it and take care of it” (Genesis 2:15 NIV)

Additional reading on theology of work:

Brief History of Automation

Since the Industrial Revolution in the 1800s, people have feared machines taking over human labor. Admittedly, in the short term, these fears have sometimes been realized. In the long term, however, automation has transformed fields like manufacturing (factories), construction (bulldozers, cranes, and excavators), and even research (search engines). Moreover, human roles in these transformed fields have remained, albeit changed and often more specialized.

What is AI-Human Collaboration?

AI-Human Collaboration is the idea that this trend can continue as AI enters the workforce: rather than being outright replaced and pushed out of a field, humans can take on more specialized roles. That isn’t to say, however, that there are no concerns for both the short and the long term.

Uses

AI and humans can take complementary roles in the workforce: AI and robots carry out the more routine tasks, while humans fill roles that require them to expect the unexpected.

Proper implementation of AI, like other instances of automation, could expand human roles rather than eliminate them.

Ironies of Automation

“We draw on this extensive research alongside recent GenAI user studies to outline four key reasons for productivity loss with GenAI systems: a shift in users’ roles from production to evaluation, unhelpful restructuring of workflows, interruptions, and a tendency for automation to make easy tasks easier and hard tasks harder.”

Ironies of Generative AI: Understanding and mitigating productivity loss in human-AI interactions | Abstract

The irony is not new; see, for example, this 1983 article arguing that automation doesn’t necessarily remove the difficulties in human work.

Who

Potential Concerns

Provocative Questions

Pick one of the following areas to explore in your discussion post. You may address the questions listed here or come up with your own.

Historical

Theological

Social Impact

Dispositions

Further Reading

Acknowledgments

This page was originally written by Calvin CS 344 student Caleb Vredevoogd in Spring 2022. It was revised by Ken Arnold in Spring 2025.