Schedule - CS375

See also: CS 376 Schedule

Any content in the future should be considered tentative and subject to change.

Week 1: Introduction

Getting started with ML: impacts of AI, running Python in notebooks, training an image classifier using off-the-shelf code.

Key Questions
  • What is the essence of modern approaches to AI?
  • What optimization games are AI systems playing?
  • Can AI systems be smarter than humans?
Objectives
  • Describe the goals of artificial intelligence and machine learning
  • Describe how learning-based AI learns from data, in contrast with rule-based (symbolic) AI
  • [OG-ProblemFraming-Paradigms]: Contrast supervised learning, self-supervised learning, and reinforcement learning
  • Write and execute basic Python code using Jupyter Notebooks

Wednesday

  • Welcome discussion: hopes and concerns
  • Course logistics
    • Assessments: skills, effort, and community
    • Weekly journals, quizzes every other Friday
    • Perusall
  • Slides: Welcome to CS 375
    • My story and stance:
      • how God brought me to learn about ML/AI
      • how it’s a gift that will surely be part of the new creation, even though we abuse it
    • We need to work to discern AI together.
      • Importance
        • Divisiveness
        • Economic impacts
        • Existential angst
        • Identity, desires, and relationships
        • We need to be able to discern it at a fundamental level, not just from its external behavior
      • This class:
        • This class will focus on how AI works at a fundamental level, and on what that understanding reveals about how AI fits into God’s story
    • Tweakable Machines playing Optimization Games
      • board games
      • hook-the-human games
      • predict protein folding, guess the weather, design a molecule, …
      • imitation games: mimicking decisions, conversations, images, …
      • exploration games: control a robot, …
    • Problem framing
      • programmed vs learned
      • supervised learning: mimicry
      • self-supervised learning: reducing surprise
      • reinforcement learning: learning by trial and error

Friday

Recording

  • Tech update: Qwen-TTS

Week 2: Array Programming & Regression

Introduction to numerical computing with NumPy/PyTorch: element-wise operations, reductions, dot products, MSE. First taste of sklearn regression.

Key Questions
  • How do we represent data as arrays/tensors?
  • What is a dot product and how is it used in ML?
  • What does it mean to “fit” a model?
Objectives

This week we’ll make progress towards the following objectives:

  • [TM-TensorOps]: Implement basic array-computing operations (element-wise operations, reductions, dot products)
  • [OG-LossFunctions]: Compute MSE loss
  • [OG-ProblemFraming-Paradigms]: Contrast different types of learning machines (supervised learning, self-supervised learning, and reinforcement learning)
  • If you didn’t take DATA 202: use the sklearn API for basic regression tasks
Resources

You may find these interactive articles helpful (by Amazon’s Machine Learning team):

Monday

  • Assumptions of AI: Experience (“IID” amnesia vs continual life; our mistakes matter but Jesus gives us grace)
  • Handout: Lab 1 review, intro to dot product
  • Lab 1 review
  • Intro to dot product
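The dot-product intro can be grounded in a few lines of NumPy. A minimal sketch (the vectors here are made up for illustration):

```python
import numpy as np

# Made-up "feature" and "weight" vectors.
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# Element-wise operations act position by position.
products = x * w            # [0.5, -2.0, 6.0]

# Reductions collapse an array down to a single number.
total = products.sum()      # 4.5

# A dot product is exactly an element-wise multiply followed by a sum.
assert total == x @ w

# MSE loss is built from the same pieces: subtract, square, reduce.
y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 1.0])
mse = ((y_pred - y_true) ** 2).mean()   # (0.25 + 1.0) / 2 = 0.625
```

The same code runs nearly unchanged in PyTorch by swapping `np.array` for `torch.tensor`.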

Wednesday

Friday

Week 3: Linear Models for Regression and Classification

Linear regression and classification from the ground up. Introduction to classification models and metrics.

Key Questions
  • How is linear regression an optimization game played by a tweakable machine?
  • How do we evaluate a classification model?
Objectives

Monday

  • Handout: PyTorch, dot products, regression metrics
  • Assumptions of AI: What’s the objective?
    • ML: optimize single numbers at huge scale
    • Reality:
      • " The thief comes only to steal and kill and destroy; I have come that they may have life, and have it to the full." (John 10:10)
      • the objective is life
        • Many wise paths
        • passing on good to children (unbounded richness)
  • Logistics:
    • Homework 1
    • Journals
    • Quiz opportunity on Wednesday
  • Slides: CS 375 Week 3
  • Lab recap: PyTorch (and sklearn notebooks)
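To make “how do we evaluate a classification model?” concrete, the standard metrics can be computed by hand from confusion-matrix counts. A minimal NumPy sketch with made-up binary labels and predictions:

```python
import numpy as np

# Made-up binary labels and model predictions.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# Confusion-matrix counts.
tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

accuracy  = (tp + tn) / len(y_true)   # fraction correct overall
precision = tp / (tp + fp)            # of predicted positives, how many are right
recall    = tp / (tp + fn)            # of actual positives, how many we caught
```

`sklearn.metrics` provides the same quantities ready-made; computing them once by hand makes the library output easier to trust.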

Wednesday

Friday

Week 4: Multi-input Models & Softmax

Extending linear models to multiple inputs. Understanding softmax and cross-entropy loss.

Key Questions
  • How does linear regression extend to multiple input features?
  • What is softmax and why do we use it for classification?
  • What is cross-entropy loss?
Objectives
  • [TM-TensorOps]: Work with multi-dimensional tensors, predict shapes of matrix operations
  • [TM-DataFlow]: Trace data shapes through a multi-input linear model
  • [TM-Softmax]: Implement softmax and explain why it produces a valid probability distribution
  • [OG-LossFunctions]: Describe and compute cross-entropy loss
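The [TM-Softmax] and [OG-LossFunctions] objectives can be sketched in a few lines of NumPy (an illustrative sketch, not the course’s reference implementation):

```python
import numpy as np

def softmax(logits):
    """Turn a vector of scores into a valid probability distribution."""
    # Subtracting the max doesn't change the result but avoids overflow.
    shifted = logits - logits.max()
    exps = np.exp(shifted)
    return exps / exps.sum()

def cross_entropy(probs, true_class):
    """Negative log of the probability assigned to the correct class."""
    return -np.log(probs[true_class])

p = softmax(np.array([2.0, 1.0, 0.0]))
assert np.isclose(p.sum(), 1.0) and (p > 0).all()  # a valid distribution
loss = cross_entropy(p, true_class=0)  # small when the model is confident and right
```

Note that cross-entropy only looks at the probability given to the true class; it is not a “difference from the true class probabilities.”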

Monday

Wednesday

Friday

Week 5: Feature Extraction / Embeddings; MLP Architecture

Understanding feature extraction with ReLU. Introduction to classifier heads and bodies. The multi-layer perceptron (MLP) architecture.

Key Questions
  • Why are good features important for neural networks?
  • What is a classifier “head” vs “body”?
  • How does ReLU create useful features?
Objectives

Monday

Wednesday

  • Assumptions of AI: perception
    • Internal representations collapse “irrelevant” distinctions (“noise reduction”)
    • But in God’s world, nothing is “noise”. Every detail can show God’s glory, and we can learn from even the smallest things (“Go to the ant, you sluggard; consider its ways and be wise!” Proverbs 6:6).
    • Other examples:
      • meditating on texts
      • faith looks at what is unseen
      • learning to look again, to change our perception
      • “The eye is the lamp of the body. If your eyes are healthy, your whole body will be full of light. But if your eyes are unhealthy, your whole body will be full of darkness. If then the light within you is darkness, how great is that darkness!” (Matthew 6:22-23)
  • How objective grading works; course objectives
  • Feature extractors intro, also reviewing logistic regression / softmax / cross-entropy
  • Possible resources:

Friday

  • Quiz 2 return and walkthrough. Most common mistakes:
    • Q1: remember matmul shapes: to compute X @ W, you need X.shape[-1] == W.shape[0].
    • Q2: softmax takes vectors and returns vectors.
    • Q3: cross-entropy is not “differences from true class probabilities”.
  • Intro to ReLU features:
  • Mini-lecture on MLP architecture:
    • tax brackets example as an MLP (with ReLU activations) we can do by hand
  • Handout: ReLU features, MLP architecture, feature extraction intuition
  • MLP shapes practice
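The tax-brackets example can be written out explicitly as a one-hidden-layer MLP: the ReLU units compute “income above each bracket” features, and a linear head weights them by the extra marginal rate. The brackets and rates below are invented for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical brackets: 10% on all income, plus an extra 10% above
# $10k and another 10% above $40k (so marginal rates of 10/20/30%).
W1 = np.array([1.0, 1.0, 1.0])              # hidden layer: 3 copies of income...
b1 = np.array([0.0, -10_000.0, -40_000.0])  # ...shifted down by each threshold
w2 = np.array([0.10, 0.10, 0.10])           # head: each feature adds 10%

def tax(income):
    hidden = relu(W1 * income + b1)   # features: "income above each bracket"
    return w2 @ hidden                # linear head: weighted sum of features

tax(50_000.0)  # 0.1*50k + 0.1*40k + 0.1*10k ≈ $10,000
```

Without the ReLU, the two layers would collapse into a single linear function, and no bracket “kinks” would be possible.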

Week 6: Gradient Descent, Generalization, and LLM APIs

MLP mastery and quiz. Learning by gradient descent. Understanding why generalization matters. First look at LLM APIs.

Key Questions
  • How does gradient descent work?
  • What is overfitting vs underfitting?
  • What can we build with LLM APIs?
Objectives

Monday

First half: MLP review and practice

Second half: Quiz 3

Wednesday

  • 10 min opportunity to finish Quiz 3
  • Slides: CS 375 Week 6
  • udlbook figure
  • Mini-lecture: “How does the machine learn?” SGD intuition
    • Gradient = direction of steepest increase; we go opposite to reduce loss
    • Learning rate: too big overshoots, too small is slow
    • Why batches (stochastic): noise helps escape local minima, plus efficiency
  • No new handout today; review last time.
    • What would happen if we didn’t have ReLU?
  • Gradient game activity
  • Gradient intuition: suppose a · b = 0. How can we change each element of b (in isolation) to make the dot product 0.1 instead?
  • Live coding / notebook: training an MNIST classifier
    • Walk through: forward pass → loss → loss.backward() → optimizer.step() → zero_grad()
    • Notebook: MNIST with PyTorch (name: u06n1-mnist-torch.ipynb; show preview, open in Colab)
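The walk-through sequence above can be sketched end to end on toy data (random tensors standing in for MNIST; the layer sizes and hyperparameters here are arbitrary choices, not the notebook’s):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 10)          # toy batch standing in for flattened images
y = torch.randint(0, 3, (64,))   # toy labels for 3 classes

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for step in range(100):
    logits = model(X)            # forward pass
    loss = loss_fn(logits, y)    # scalar loss
    loss.backward()              # compute gradients of loss w.r.t. weights
    optimizer.step()             # nudge weights opposite the gradient
    optimizer.zero_grad()        # clear gradients before the next step
    losses.append(loss.item())
```

On this toy problem the recorded losses trend downward; shuffling the data into mini-batches each epoch would turn this full-batch loop into true SGD.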

Friday

  • Review Quiz 3
  • Handout: SGD Lingo, Generalization, Data Augmentation
  • Generalization, based on MNIST notebook results
    • Show learning curves: identify overfitting vs underfitting
    • Adversarial examples as a dramatic illustration
    • Brief: data augmentation (Notebook: u06s2-mnist-torch-augmentation.ipynb)
  • Slides: CS 375 Week 6
  • Also a brief review of why ReLU works (regions)
  • Review SGD concepts
    • gradients, learning rates, batches
    • Interactive demo
    • Enable “show components”. What are the shapes of each component?
    • What happens if you increase the learning rate? What happens if you decrease it?
    • What happens if you increase the batch size? What happens if you decrease it?
  • Intro Kaggle Competition homework
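Overfitting vs. underfitting shows up even without neural networks, e.g. when fitting polynomials of different degrees to noisy data. A sketch on synthetic data (the target function and degrees are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 15)
y_train = np.sin(3 * x_train) + 0.2 * rng.standard_normal(15)  # noisy targets

def train_mse(degree):
    """Fit a polynomial of the given degree; return its training error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    y_hat = np.polyval(coeffs, x_train)
    return np.mean((y_hat - y_train) ** 2)

mse_underfit = train_mse(1)    # a line can't capture the wiggles: high error
mse_memorize = train_mse(14)   # 15 coefficients for 15 points: near-zero error

# The flexible model "wins" on training data, but it is fitting the noise;
# on fresh test points its error would typically be far worse than a
# moderate-degree fit. That gap is overfitting.
```

Plotting both fits against held-out points makes a quick visual companion to the MNIST learning curves.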

Week 7: Embeddings & RL

Embeddings as the data structures of neural computation. Introduction to reinforcement learning.

Key Questions
  • What are embeddings and how are they used in ML?
  • How does reinforcement learning differ from supervised learning?
  • What is the difference between learning to mimic vs learning by exploring?
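Mechanically, an embedding layer is just a learned lookup table: one vector per item, retrieved by row indexing. A NumPy sketch with a made-up three-word vocabulary:

```python
import numpy as np

vocab = ["cat", "dog", "car"]                          # toy vocabulary
E = np.random.default_rng(0).standard_normal((3, 4))   # one 4-dim row per word

ids = np.array([vocab.index("dog"), vocab.index("cat")])
vectors = E[ids]                # embedding "lookup" is just row indexing
assert vectors.shape == (2, 4)

# A dot product between rows gives a rough similarity score; training
# shapes the table so that related words end up with similar rows.
similarity = E[vocab.index("cat")] @ E[vocab.index("dog")]
```

PyTorch’s `nn.Embedding` does the same lookup, with the table rows updated by gradient descent like any other weights.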
Objectives

Monday

Wednesday

Friday

  • Slides: CS 375: Wrap-Up
  • Learning to Mimic vs Learning by Exploring
  • Course wrap-up