You’ve spent several weeks learning how generative AI systems actually work — tokenization, attention, training pipelines, tool use, failure modes. You’re now much more qualified than average to answer the questions people will ask you: Where is AI going? Is it good or bad?
But even experts disagree. Some are impressed by what AI can do (“fans”). Others doubt the claims (“skeptics”). Some are optimistic about societal benefits (“optimists”). Others worry about serious harm (“concerned”). Many thoughtful people hold several of these views at once. To be wise, we need to engage honestly with perspectives we don’t naturally hold.
This Discussion addresses the course objectives Overall-PhilNarrative and Overall-Impact.
We’ll share our findings in class on the last day and compare with the results of a national survey.
Step 1: Take the survey. The Moodle forum includes a link to a brief survey about your current views on AI. Fill it out first.
Step 2: Find two articles that represent genuinely different perspectives on the future of AI. Your two articles should pull in different directions — not two versions of the same take. Read with hospitality: you’ll need to articulate the other side’s view convincingly.
For each article:
Tag it as [skeptical, optimistic] or [fan, concerned].

Step 3: Articulate your own position (~150-250 words), drawing substantively on both articles. Where do you land, and why?
Ground your position in something beyond personal preference. You might draw on:
The best posts will show that you’ve genuinely wrestled with a view you don’t naturally hold.
The landscape changes fast, so find your own sources rather than relying on a fixed list.
Where to search:
Kinds of voices to look for:
Read several classmates’ posts. Reply to at least one (~75-150 words):
This covers both the RL unit and Human-Centered AI part 1.
The main difference is which functions we learn:
That’s what makes it hard! E.g., in Q-learning we try to minimize the temporal-difference error: how much the reward we actually receive, plus the (discounted) value we predict from the next state, differs from the value we predicted for the current state. But that’s a difference between two predictions; if we were wrong, which of those two predictions was wrong?
In general, we’re hoping to learn something about all possible things that could happen and things we could do, given data about only a fraction of what happened and things we did.
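The temporal-difference idea above can be sketched concretely. This is a minimal tabular Q-learning update; the environment details (3 states, 2 actions, the sample transition, and the learning-rate/discount settings) are invented for illustration:

```python
import numpy as np

# Hypothetical tiny MDP: 3 states, 2 actions (sizes are illustrative).
n_states, n_actions = 3, 2
alpha, gamma = 0.5, 0.9          # learning rate and discount (assumed values)
Q = np.zeros((n_states, n_actions))

def td_update(s, a, r, s_next):
    """One Q-learning step: move Q[s, a] toward the TD target."""
    target = r + gamma * Q[s_next].max()  # prediction built from the next state
    td_error = target - Q[s, a]           # a difference of two predictions
    Q[s, a] += alpha * td_error
    return td_error

# One illustrative transition: from state 0, action 1 yields reward 1.0,
# landing in state 2.
err = td_update(0, 1, 1.0, 2)
```

Note that `td_error` really is a difference between two estimates (the bootstrapped target and the current value), which is exactly why credit assignment is ambiguous when it is large.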
They’re good at different things:
So, unsurprisingly, the state of the art often combines both! See, e.g., MuZero.
Hm. Pro:
Con:
Maybe somewhat, but not really:
We routinely trust human doctors even though, despite decades of effort by cognitive scientists, we know very little about the process by which people actually make their decisions.
No.
So why are we only learning about this now? Good question…
The classic algorithm for learning decision trees.
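The heart of that classic (ID3-style) algorithm is greedily choosing the split with the highest information gain. A minimal sketch of the entropy and gain computations, using an invented toy dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature_index):
    """Entropy reduction from splitting rows on one feature's values."""
    base = entropy(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[feature_index], []).append(label)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return base - remainder

# Toy data (invented): one binary feature that perfectly predicts the label,
# so splitting on it removes all uncertainty.
rows = [(0,), (0,), (1,), (1,)]
labels = ["yes", "yes", "no", "no"]
gain = information_gain(rows, labels, 0)
```

The full algorithm just applies this choice recursively: pick the best feature, partition the data by its values, and repeat on each partition until the labels are pure.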