These courses give students opportunities to explore a variety of broader contexts and implications of AI. Students will generally choose two specific areas of depth. Areas include:
At a minimum, I should be able to:
Specific topics may include:
Initial post: find and analyze one AI fairness/bias case, ideally one that your peers have not yet posted about.
In your post, please:
Then, respond to some peers’ posts. In your responses, you might:
Ironically, you can ask an AI for examples of AI bias! (But dig in to make sure it isn't making things up, which is another problem with AI that we'll study later in the course.)
A few sources you might consider:
This was our discussion prompt last year. If you’ve already started thinking about it, you’re welcome to make your post with this prompt instead.
We read an article on challenges with fairness in machine learning. Choose one of the following prompts and post a brief (about 150-250 words) substantive response.
Your post should:
Then, post substantive, thoughtful replies to two of your peers’ posts. You might, for example, raise a counterpoint to their argument, suggest a different way of thinking about the situation, or identify a connection between what they wrote about and what someone else wrote about.
We have discussed several issues about the broader context and implications of AI, but there is far more than we have time to discuss, especially if you’re not continuing with us to CS 376. So we will teach each other!
Then, respond to some peers’ posts. In your responses, you might:
etc.
“God blessed [the humans] and said to them, ‘Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.’” (Genesis 1:28 NIV)
…
“The Lord God took the man and put him in the Garden of Eden to work it and take care of it” (Genesis 2:15 NIV)
Additional reading on theology of work:
Since the Industrial Revolution, people have feared machines taking over human labor. Admittedly, in the short term, some of these fears have been realized. In the long term, however, automation has transformed fields like manufacturing (factories), construction (bulldozers, cranes, and excavators), and even research (search engines). Moreover, human roles in these transformed fields have remained, albeit changed and often more specialized.
AI-Human Collaboration is the idea that this trend can continue with the rise of AI in the workforce. Instead of being outright replaced and pushed out of a given field, humans can take on more specialized roles. That isn't to say, however, that there aren't concerns in both the short and long term.
AI and humans taking complementary roles in the workforce: AI/robots carry out the more menial tasks, while humans fill roles that require them to expect the unexpected.
Properly implemented, AI, like other forms of automation, could expand human roles rather than eliminate them.
“We draw on this extensive research alongside recent GenAI user studies to outline four key reasons for productivity loss with GenAI systems: a shift in users’ roles from production to evaluation, unhelpful restructuring of workflows, interruptions, and a tendency for automation to make easy tasks easier and hard tasks harder.”
The irony is not new; see, for example, this 1983 article arguing that automation doesn’t necessarily remove the difficulties in human work.
Pick one of the following areas to explore in your discussion post. You may address the questions listed here or come up with your own.
This page was originally written by Calvin CS 344 student Caleb Vredevoogd in Spring 2022. It was revised by Ken Arnold in Spring 2025.