Forum Posts 22SP

Warning: This content has not yet been fully revised for this year.

Wrap-Up

Have you done your course evals?

Projects

Several projects might benefit from using captum (for interpreting model predictions) and streamlit (for an extremely easy way to make AI-powered web apps).

Optional Homework

As mentioned in class, I’ve posted an optional homework. It’s pretty practical and hands-on, and may actually help some people with their projects. You can choose to use it to replace any of your other mastery grades, including the project grade. (You don’t have to specify which one; I’ll just configure Moodle to drop the lowest.)

If you started it already, I made a minor revision: the example Transformer model now implements multi-head attention. (I couldn’t stop myself.)

Summer Opportunities

I’m running an open summer research team on applying language models to help people communicate. At least 8 people have expressed interest in participating in some way. I’d like to invite you to join us. There are two parts to consider:

  1. I’m going to be teaching an unofficial “May intensive” (aka bootcamp), starting May 9, for the summer team. The goal is to bring everyone up to speed on language modeling with Transformers and some basic web development. We’ll get our hands quite dirty (replicating a research paper from scratch and building a web UI for it, for example). We’ll have some students who haven’t done much CS and some who are coming from this class, so it should be a diverse crowd!
  2. After the intensive, we’ll break out into small groups to do projects. If you’ve wanted to do an AI project beyond the constraints of this class, here’s an opportunity. Participating in the intensive is highly recommended but not strictly required for this part.

If you’re interested in either part, please let me know!

Grading

Sorry I’m still behind on grading. Since this means that you haven’t had the revision opportunities for a few assignments, I’ll be extra lenient when assigning final grades.

Feedback

The course evals give only a narrow view of your experience with this course. More feedback will help us continue to improve. Simple way to leave anonymous feedback: reply to this Piazza post.

Thanks for a great semester, and see you Thursday!

Week 13: Finishing RL, Starting Human-Centered AI

What happens when AI meets people? We’ve been discussing this informally throughout the semester (though not lately, since nobody signed up to lead a topic!). This week we’ll start asking: how can we ensure that AI results are trustworthy and actually serve people?

This week we’ll look at how we might convince ourselves that model outputs are (or aren’t) correct. But first we have some important things to wrap up from our brief intro to Reinforcement Learning, so we’ll tackle those on Monday.

Notes:

Week 12: Review and Reinforcement Learning

Happy Saturday everyone!

This coming week we’ll be reviewing some things and then moving on to Reinforcement Learning, the engine behind models that have learned to play games like Go and StarCraft, and also behind some kinds of robotics (self-driving cars?). The plan after this week is to discuss human-centered topics like fairness, transparency, and HCI. (If there’s a topic you’ve been wanting to learn about, now is the time to let me know!)

Notes:

Week 10: Transformers (self-attention)

This coming week is advising week, which means we take a break on Wednesday. I also reduced the intensity of other things we’re doing, so hopefully you can take a good break to celebrate figuring out Workday Student!

You’ve now seen that Transformers-based models can do pretty neat things with text, and you’ve probably heard about how they’re taking over in image processing, audio, reinforcement learning, and more. What’s the secret to their success? Many researchers think it’s an element called “attention”. It’s actually a pretty simple idea: instead of one block’s input always coming from the same other block, each block gets to choose which data to use as its input, via a classifier-like weighting. But there’s a lot to unpack in that idea, so we’ll take our two class days this week to do it.
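For the curious, that “choosing” step can be sketched in a few lines of NumPy. This is a hypothetical single-head version; real models use several heads inside a much larger network, and the matrices `Wq`, `Wk`, `Wv` below are random stand-ins for parameters that would actually be learned:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model). Returns one output vector per position."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant each position is to each other
    weights = softmax(scores, axis=-1)       # each row sums to 1: a soft "choice" of inputs
    return weights @ V                       # mix the values according to that choice

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8 dimensions each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The softmax is exactly the “classifier-like” part: each position scores every other position and turns the scores into a probability-like weighting over which inputs to use.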

A few things:

See you Monday!

Week 9: NLP Modeling

Welcome to week 9! This is the week we get to dig into Transformers and see how they work. I’ve found these models really useful and cool to think about because you can be so creative with how you use them. I hope this week you get a bit of a taste of that—and maybe a desire to play with them for your final project (see the Projects page for the research projects I’m offering for this class).

A few things:

Week 7: Embeddings

Welcome to unit 7, the week before Spring Break!

In this unit we introduce one of the most powerful concepts in machine learning: the embedding. It’s the idea that instead of being explicitly given a representation for something, we learn it from data based on the properties it should have. We’ve actually already seen embeddings of various kinds (the inputs to the last layer of an image classifier for one), but we’ll look at two examples where the embedding aspect is even clearer: movies and words.
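As a toy illustration of “learn the representation from the properties it should have”, here’s a hypothetical sketch: each movie starts as a random vector, and we nudge pairs that were co-liked (a made-up training signal) toward each other. Real systems learn embeddings by gradient descent on a prediction loss, but the spirit is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n_movies, dim = 5, 2
emb = rng.normal(size=(n_movies, dim))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # keep vectors on the unit circle

# Hypothetical training signal: pairs of movies liked by the same users.
co_liked = [(0, 1), (0, 2), (3, 4)]

lr = 0.1
for _ in range(200):
    for i, j in co_liked:
        # Pull each co-liked pair's vectors toward each other, then renormalize,
        # so "should be similar" becomes "nearby in embedding space".
        emb[i], emb[j] = emb[i] + lr * emb[j], emb[j] + lr * emb[i]
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def sim(i, j):
    return float(emb[i] @ emb[j])  # cosine similarity (vectors are unit length)

print(round(sim(0, 1), 2), round(sim(3, 4), 2))
```

Nobody hand-designed features here; the geometry (movies 0, 1, 2 clustering together, 3 and 4 clustering separately) emerges from the data alone.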

Some logistics:

  1. I made some late updates to Preparation 7, and posted the quiz late, so it’s okay if you don’t get to it until Wednesday.
Homework 6 is due on Thursday. It’s not the mini-project I’d hoped for, because I thought that would be too much, but it gets you pretty close. (If you’re interested in trying that mini-project anyway, I wrote up some instructions for it.)
  3. I recorded a video demo of how backprop works; see our Walkthroughs folder in Teams.
  4. Reflection 3 is due on Friday. It’s the same structure as last week: write your own quick summary of all the learning objectives from the past two weeks.
  5. In lieu of a normal Discussion this week, we get to start thinking about project ideas! I’ve put up a Forum on Moodle for this week if you’d like to pitch an idea and are looking for partners. We’ll talk more about projects in class this week; here’s the overall description.
  6. I’ve fixed up the gradebook, so now would be a good time to check where you stand. (I haven’t gotten to all of the revisions yet.) Things I haven’t graded yet should show up as “dropped”, but if there’s anything that’s missing or too-low and not “dropped”, please let me know.

See you Monday!

Week 6: Recap and Regularization

In this unit, after reviewing where we’ve been, we push towards state-of-the-art models (still focusing on computer vision). We’ll introduce or revisit tools that allow our models to achieve high performance, such as data augmentation and regularization. Finally, we’ll get more practice with how neural networks work from the ground up as we implement our own simple neural net image classifier from scratch (definitely something to mention in an interview!).
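As a taste of what data augmentation looks like in code, here’s a minimal sketch (not the library version we’ll use in class): a random horizontal flip plus a small random shift, which together manufacture a “new” training image from an existing one:

```python
import numpy as np

def augment(img, rng):
    """Toy augmentation: random horizontal flip, then a small random
    crop (pad the image, re-crop at a jittered position)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                       # horizontal flip
    pad = 2
    padded = np.pad(img, pad, mode="reflect")    # pad 2 pixels on every side
    top = rng.integers(0, 2 * pad + 1)           # random crop offsets in [0, 4]
    left = rng.integers(0, 2 * pad + 1)
    h, w = img.shape
    return padded[top:top + h, left:left + w]    # same size as the input

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a grayscale image
aug = augment(img, rng)
print(aug.shape)  # (8, 8)
```

Each call produces a slightly different image with the same label, which is why augmentation acts like extra training data and helps regularize the model.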

Logistics are basically as usual:

I had gotten behind on answering your questions; here’s some Q&A from week 5. And I’m still behind on feedback too; my apologies.

Week 5: Learning

Happy Saturday! We’re making good progress getting under the hood of how neural nets work. Last week we tackled regression; we introduced linear layers as the basic building blocks of neural nets, gradient descent as a general way to find good values of parameters, and backpropagation as a general tool to compute gradients efficiently and without numerical issues. This week we extend to classification, where we’ll learn about some non-linear layers, which are where neural nets get their power.
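To make last week’s recap concrete, here’s a minimal gradient-descent example in plain NumPy: fitting a single linear layer (one weight and one bias) to noisy data by repeatedly stepping against the gradient of the mean squared error. The data and learning rate are made up for illustration:

```python
import numpy as np

# Fake data from a known line, y = 3x + 0.5, plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.01, size=100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of MSE = mean(err**2): d/dw = 2*mean(err*x), d/db = 2*mean(err)
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))  # should land near the true 3 and 0.5
```

The same loop, with backprop supplying the gradients automatically, is what trains the multi-layer networks we’ll build.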

Logistics!

See you Monday!

PS - Here’s an example of a very non-neural AI to solve Wordle. Neural nets are great tools for handling perception tasks, but not all tasks are perception tasks.

Week 3: Data, Ethics Intro

Happy Saturday everyone!

We were trying to go too fast, so I delayed starting on Chapter 4 by a week. If you’ve already started on Chapter 4, that’s great; it’s a dense chapter, so more time on it will help. Instead, this week we’ll look more into data and also how we evaluate AI systems. See this week’s page for more details.

At this point, everything from Week 1 should be done, plus Preparation 2 and Lab 2. Reminder: Preparation exercises and the quizzes are due the Monday night of the corresponding week. They close on Moodle a week later than that only to avoid having to track late submissions.

This week:

I look forward to seeing everyone on Monday!

Week 2: Data

Congratulations on making it through the first week of classes in yet another unusual semester. We covered a lot of ground in the first week! Just look at the objectives to jog your memory.

Wrapping up Week 1

Preparing for Week 2

In week 2 we will start considering what data you need to train models. You’ll collect some image data yourself and train and evaluate a model. We’ll also start looking at some of the ethics of collecting data for AI; we will return to this discussion throughout the semester.

I’m here for you. We’re all here for each other. May our Lord give us all strength to persevere, joy in the journey, and love for each other along the way.
