Review my blog post on Mapping to Mimicry. I wrote it in one short sprint; feedback welcome!
Rather than study theory, let’s look at two recent advances described in blog posts:
What happens when AI meets people? How can we ensure that AI results are:
- fair?
- accountable?
- usable?
The first two are the subject of a subfield called Fairness, Accountability, and Transparency; the last is the subject of much research in human-computer interaction (HCI) and computer-supported cooperative work (CSCW). We’ll explore all three in these last two weeks of class.
Read one or more of these:
Many covid-detection models unwittingly used a data set in which the examples of what non-covid cases looked like were chest scans of children who did not have covid. As a result, the AIs learned to identify kids, not covid.
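Here’s a toy sketch of that failure mode, with purely synthetic data (the feature names and the simple threshold “model” are illustrative assumptions, not the real scan datasets or models). Because every covid example is an adult and every non-covid example is a child, an age proxy separates the training set perfectly, and the model latches onto it instead of the disease:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the flawed covid datasets: every "covid" example
# is an adult, every "non-covid" example is a child. Feature 0 is a proxy
# for patient age (hypothetical, e.g. bone structure); feature 1 is the
# actual disease signal, which is much noisier.
n = 1000
y = rng.integers(0, 2, n)                        # label: 1 = covid
age = y + rng.normal(0, 0.1, n)                  # confound: adult iff covid
disease = y + rng.normal(0, 2.0, n)              # weak true signal
X = np.column_stack([age, disease])

# A deliberately simple "model": threshold whichever feature separates best.
def fit(X, y):
    thresholds = X.mean(axis=0)
    accs = [((X[:, j] > thresholds[j]) == y).mean() for j in range(X.shape[1])]
    j = int(np.argmax(accs))
    return j, thresholds[j]

feat, thr = fit(X, y)          # the age proxy wins: it separates almost perfectly

# A fair test set where everyone is an adult: the shortcut collapses to chance.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([1 + rng.normal(0, 0.1, n),      # all adults now
                          y_test + rng.normal(0, 2.0, n)])
train_acc = ((X[:, feat] > thr) == y).mean()
test_acc = ((X_test[:, feat] > thr) == y_test).mean()
print(f"feature used: {feat} (0 = age proxy), "
      f"train acc {train_acc:.2f}, all-adult test acc {test_acc:.2f}")
```

Near-perfect training accuracy, roughly coin-flip accuracy once the age confound is removed: the model identified kids, not covid.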
Watch:
Supplemental: The Effects of Regularization and Data Augmentation are Class Dependent (abstract)
Read or watch something from Human-Centered Artificial Intelligence.
Recommended but not essential:
Reinforcement Learning (learning from feedback)
Choose one of the following notebooks, or do the Reinforcement Learning activities at the bottom of this page.
- u13n1-count-params.ipynb (show preview, open in Colab)
- u13n2-seq-models.ipynb (show preview, open in Colab)
- u13n3-self-attention.ipynb (show preview, open in Colab)

Reinforcement Learning activities:

- Go to the “Playground” at the bottom of this article. This environment isn’t rich enough for exploration to help much.
- So: open up the Observable RL Playground, a different playground where we can actually edit the environment and see what the agent learns. Edit the `maze =` definition. What does it take to get the agent to tolerate a short-term negative reward to achieve a higher long-term reward?
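To see what that question is getting at, here is a small self-contained sketch (not the playground’s actual code; the corridor, rewards, and hyperparameters are all made up for illustration). Tabular Q-learning runs on a six-state corridor where reaching the goal pays +10 but the cell just before it costs −1. A far-sighted agent (high discount γ) tolerates the short-term penalty; a myopic one (low γ) never reaches the goal:

```python
import numpy as np

# Hypothetical corridor: states 0..5, start at state 0. Reaching state 5
# (the goal) pays +10, but entering state 3 ("mud") costs -1.
# Actions: 0 = left, 1 = right.
N, GOAL, MUD = 6, 5, 3

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    if s2 == GOAL:
        return s2, 10.0, True
    return s2, (-1.0 if s2 == MUD else 0.0), False

def train(gamma, episodes=500, alpha=0.5, seed=0):
    """Off-policy tabular Q-learning with a uniformly random behavior policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, 2))
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 100:
            a = int(rng.integers(2))                      # explore randomly
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])         # Q-learning update
            s, t = s2, t + 1
    return Q

def greedy_reaches_goal(Q, max_steps=20):
    """Follow the learned greedy policy from the start state."""
    s = 0
    for _ in range(max_steps):
        s, _, done = step(s, int(Q[s].argmax()))
        if done:
            return True
    return False

# A far-sighted agent accepts the -1 cell to collect the +10 goal;
# a myopic one is scared off by the short-term penalty and never arrives.
print("gamma=0.9 reaches goal:", greedy_reaches_goal(train(0.9)))
print("gamma=0.1 reaches goal:", greedy_reaches_goal(train(0.1)))
```

The key quantity is the value of stepping toward the mud: with γ = 0.9 it is −1 + 0.9 · 9 ≈ +7.1, so the penalty is worth it; with γ = 0.1 it is −1 + 0.1 · 1 = −0.9, so the greedy agent turns back. The same trade-off is what you should see when you make the playground maze’s short path costly.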