See also: CS 376 Schedule
Any content in the future should be considered tentative and subject to change.
Week 1: Introduction
Getting started with ML: impacts of AI, running Python in notebooks, training an image classifier using off-the-shelf code.
Key Questions
- What is the essence of modern approaches to AI?
- What optimization games are AI systems playing?
- Can AI systems be smarter than humans?
Objectives
- Describe the goals of artificial intelligence and machine learning
- Describe how learning-based AI learns from data, in contrast with rule-based (symbolic) AI
- [OG-ProblemFraming-Paradigms]: Contrast supervised learning, self-supervised learning, and reinforcement learning
- Write and execute basic Python code using Jupyter Notebooks
Wednesday
- Welcome discussion: hopes and concerns
- Course logistics
- Assessments: skills, effort, and community
- Weekly journals, quizzes every other Friday
- Perusall
- Slides: Welcome to CS 375
- My story and stance:
- how God brought me to learn about ML/AI
- how it’s a gift that will surely be part of the new creation, yet one we abuse
- We need to work to discern AI together.
- Importance
- Divisiveness
- Economic impacts
- Existential angst
- Identity, desires, and relationships
- You need to be able to discern it at a fundamental level, not just from its external behavior
- This class:
- how AI works at a fundamental level, and what that fundamental understanding reveals about how it fits into God’s story
- Tweakable Machines playing Optimization Games
- board games
- hook-the-human games
- predict protein folding, guess the weather, design a molecule, …
- imitation games: mimicking decisions, conversations, images, …
- exploration games: control a robot, …
- Problem framing
- programmed vs learned
- supervised learning: mimicry
- self-supervised learning: reducing surprise
- reinforcement learning: learning by trial and error
Week 2: Array Programming & Regression
Introduction to numerical computing with NumPy/PyTorch: element-wise operations, reductions, dot products, MSE. First taste of sklearn regression.
Key Questions
- How do we represent data as arrays/tensors?
- What is a dot product and how is it used in ML?
- What does it mean to “fit” a model?
Objectives
This week we’ll make progress towards the following objectives:
- [TM-TensorOps]: Implement basic array-computing operations (element-wise operations, reductions, dot products)
- [OG-LossFunctions]: Compute MSE loss
- [OG-ProblemFraming-Paradigms]: Contrast different types of learning machines (supervised learning, unsupervised learning, RL)
- If you didn’t take DATA 202: use the sklearn API for basic regression tasks
Resources
You may find this interactive article helpful:
- Linear Regression (originally by Amazon Web Services, some edits by Prof Arnold)
Monday
- Assumptions of AI: Experience (“IID” amnesia vs continual life; our mistakes matter but Jesus gives us grace)
- Handout: Lab 1 review, intro to dot product
- Lab 1 review
- Intro to dot product
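Since the dot product is the week's central primitive, here is a minimal PyTorch sketch of the ideas above (the vectors are made-up examples, not the lab's data):

```python
import torch

# Two example vectors (made-up numbers)
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

# Element-wise multiply, then a sum reduction...
print(a * b)          # tensor([ 4., 10., 18.])
print((a * b).sum())  # tensor(32.)

# ...which is exactly what a dot product is
print(a @ b)          # tensor(32.)

# Mean squared error between predictions and targets
y_pred = torch.tensor([2.5, 0.0, 2.0])
y_true = torch.tensor([3.0, -0.5, 2.0])
mse = ((y_pred - y_true) ** 2).mean()
print(mse)            # tensor(0.1667)
```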
Wednesday
- Handout: Supervised Learning
- Slides: CS 375 Week 2
- Landscape of AI/ML (supervised, unsupervised, RL)
Friday
- Handout: Problem framing, dot products review, Lab notes
- Lab 2
- Notebook: PyTorch Warmup (u02n1-pytorch.ipynb)
- Intro to array programming, regression losses
- If time:
- Notebook: Regression in scikit-learn (u02n2-sklearn-regression.ipynb)
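As a preview of the notebook, here is a minimal sketch of the sklearn fit/predict API on synthetic data (the notebook's own dataset and models may differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y is roughly 3x + 1 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 1))        # 100 examples, 1 feature
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

model = LinearRegression()
model.fit(X, y)                       # "fitting": choosing coefficients from data
print(model.coef_, model.intercept_)  # roughly [3.] and 1.
print(model.predict(X[:5]))           # predictions for the first 5 rows
```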
Week 3: Linear Models for Regression and Classification (and LLM APIs)
Linear regression and classification from the ground up. Introduction to classification models and metrics. If time: Using LLM APIs to build AI-powered applications.
Key Questions
- How is linear regression an optimization game played by a tweakable machine?
- How do we call an LLM API?
- How do we evaluate a classification model?
Objectives
- [TM-LinearLayers]: Fit a linear regression model “by hand” using numerical computing primitives
- [OG-ProblemFraming-Supervised]: Identify regression vs classification tasks and select appropriate loss functions
- [OG-LossFunctions]: Compute and interpret cross-entropy loss
- [OG-LLM-APIs]: Use an LLM API to build an AI-powered application
Monday
- Handout: PyTorch, dot products, regression metrics
- Assumptions of AI: What’s the objective?
- ML: optimize single numbers at huge scale
- Reality:
- " The thief comes only to steal and kill and destroy; I have come that they may have life, and have it to the full." (John 10:10)
- the objective is life
- Many wise paths
- passing on good to children (unbounded richness)
- Logistics:
- Homework 1
- Journals
- Quiz opportunity on Wednesday
- Slides: CS 375 Week 3
- Lab recap: PyTorch (and sklearn notebooks)
Wednesday
- First quiz opportunity [OG-ProblemFraming-Paradigms], [OG-ProblemFraming-Supervised], [TM-DotProduct], [OG-LossFunctions], [TM-TensorOps]
- Starting “Linear Regression the Hard Way”
- Building intuition for linear regression using UDL figure or linreg explainer
- Notebook: Linear Regression the Hard Way (u03n1-linreg-manual.ipynb)
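For reference, here is one way the "hard way" can look: fitting a one-variable linear model with hand-written gradients and only tensor primitives (synthetic data; the notebook walks through its own version):

```python
import torch

# Synthetic data: y is roughly 2x + 1
x = torch.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * torch.randn(50)

w, b = torch.tensor(0.0), torch.tensor(0.0)
lr = 0.5
for step in range(200):
    err = (w * x + b) - y          # prediction error
    grad_w = 2 * (err * x).mean()  # d(MSE)/dw, written out by hand
    grad_b = 2 * err.mean()        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b
print(w.item(), b.item())  # should land near 2.0 and 1.0
```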
Friday
- Tech update: Opus 4.6 release
- Handout: Matrix product, Elo intuition (matrix-shape sketch at the end of this section)
- Slides: CS 375 Week 3
- Reviewing notebooks:
- Notebook: Train a simple image classifier (u01n1-train-clf.ipynb)
- (we didn’t get to…)
  - Notebook: PyTorch Warmup (u02n1-pytorch.ipynb)
  - Notebook: Linear Regression the Hard Way (u03n1-linreg-manual.ipynb)
  - Notebook: Regression in scikit-learn (u02n2-sklearn-regression.ipynb)
  - Notebook: Classification in scikit-learn (u03n2-sklearn-classification.ipynb)
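A quick companion to the matrix-product handout: predict output shapes before running. A minimal PyTorch sketch (all tensors made up):

```python
import torch

# Shape rule for a matrix product: (n, d) @ (d, m) -> (n, m)
X = torch.randn(8, 3)   # 8 examples, 3 features
W = torch.randn(3, 4)   # maps 3 features to 4 outputs
print((X @ W).shape)    # torch.Size([8, 4])

# A batch of dot products is a matrix-vector product: (n, d) @ (d,) -> (n,)
w = torch.randn(3)
print((X @ w).shape)    # torch.Size([8])
```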
Week 4: Multi-input Models & Softmax
Extending linear models to multiple inputs. Understanding softmax and cross-entropy loss.
Key Questions
- How does linear regression extend to multiple input features?
- What is softmax and why do we use it for classification?
- What is cross-entropy loss?
Objectives
- [TM-TensorOps]: Work with multi-dimensional tensors, predict shapes of matrix operations
- [TM-DataFlow]: Trace data shapes through a multi-input linear model
- [TM-Softmax]: Implement softmax and explain why it produces a valid probability distribution
- [OG-LossFunctions]: Describe and compute cross-entropy loss
Monday
- Classification metrics (accuracy, cross-entropy)
- Context discussion: AI fairness and bias
- LLM API intro: “use an AI to make an AI” (hedged sketch at the end of this list)
- Notebook: Multiple Linear Regression, the Hard Way (u04n1-multi-linreg-manual.ipynb)
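A hedged sketch of what an LLM API call looks like. This assumes the OpenAI Python client with an API key in the environment; the course may use a different provider, and the model name here is only an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whatever the course uses
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain a dot product in one sentence."},
    ],
)
print(response.choices[0].message.content)
```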
Wednesday
- Slides: Computing
- Notebook: Softmax, part 1 (u04n2-softmax.ipynb)
- Interactive softmax demo
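A minimal sketch of the two ideas the notebook covers, softmax and cross-entropy (the notebook's own implementation may differ in details):

```python
import torch

def softmax(logits):
    z = logits - logits.max()  # subtract the max for numerical stability
    exp_z = torch.exp(z)
    return exp_z / exp_z.sum()

logits = torch.tensor([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # every entry is in (0, 1)...
print(probs.sum())  # ...and they sum to 1: a valid probability distribution

# Cross-entropy: negative log of the probability given to the true class
true_class = 0
print(-torch.log(probs[true_class]))  # small when the model is confident and right
```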
Friday
- Handout
- gradient intuition: suppose a · b is 0. How can we change each element of b (in isolation) to make the dot product 0.1 instead? (worked example below)
- Notebook: From Linear Regression in NumPy to Logistic Regression in PyTorch (u04n3-logreg-pytorch.ipynb)
- Homework 2 soft-due: demo an AI-powered application
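A worked version of the handout question. Changing b[j] by delta changes a · b by a[j] * delta, so to move the dot product from 0 to 0.1 using only b[j], set delta = 0.1 / a[j]. This is gradient reasoning: the partial derivative of a · b with respect to b[j] is a[j]. The vectors below are made up:

```python
import torch

a = torch.tensor([2.0, -1.0, 4.0])
b = torch.tensor([1.0, 2.0, 0.0])
print((a @ b).item())  # 0.0

for j in range(3):
    b_new = b.clone()
    b_new[j] += 0.1 / a[j]        # nudge one element, leaving the rest alone
    print(j, (a @ b_new).item())  # 0.1 each time
```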
Week 5: Features & MLP Architecture
Understanding feature extraction with ReLU. Introduction to classifier heads and bodies. The multi-layer perceptron (MLP) architecture.
Key Questions
- Why are good features important for neural networks?
- What is a classifier “head” vs “body”?
- How does ReLU create useful features?
Objectives
- [TM-RepresentationLearning]: Explain why good features make classification easier
- [TM-ActivationFunctions]: Implement ReLU and explain what it does
- [TM-DataFlow]: Trace the data flow through an MLP, labeling shapes at each layer
- [TM-MLPParts]: Identify and explain the components of an MLP (linear layers, activations, output layer)
Monday
- Feature extractors intro
- ReLU features intro (sketch after this list)
- Classifier head and body intro
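One way to see ReLU as a feature-maker (a sketch, not the week's notebook code): each shifted ReLU "turns on" at a different input value, giving later layers piecewise-linear features to combine:

```python
import torch

x = torch.linspace(-2, 2, 9)
print(torch.relu(x))      # negatives become 0; positives pass through

h1 = torch.relu(x - 0.5)  # feature active only where x > 0.5
h2 = torch.relu(-x - 0.5) # feature active only where x < -0.5
print(h1)
print(h2)
```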
Wednesday
- Notebook: ReLU Regression Interactive (u05n00-relu.ipynb)
- Notebook: Logistic Regression and MLP (u05n2-logreg-mlp.ipynb)
- MLP shapes practice
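For the shapes practice, here is a minimal MLP with the shape of the data printed after each layer (layer sizes are made up for illustration):

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(784, 128),  # body: (batch, 784) -> (batch, 128)
    nn.ReLU(),            # activation: shapes unchanged
    nn.Linear(128, 10),   # head: (batch, 128) -> (batch, 10) class scores
)

x = torch.randn(32, 784)  # a batch of 32 flattened 28x28 images
for layer in model:
    x = layer(x)
    print(type(layer).__name__, tuple(x.shape))
# Linear (32, 128) / ReLU (32, 128) / Linear (32, 10)
```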
Friday
- Preview of learning by gradient descent
- Review day: gradient game
- Tech presentation
Week 6: Gradient Descent & Generalization
Learning by gradient descent. Understanding why generalization matters and how to measure/improve it.
Key Questions
- How does gradient descent work?
- What is overfitting vs underfitting?
- How can data augmentation help generalization?
Objectives
- [TM-Implement-TrainingLoop]: Train an MLP classifier by gradient descent and understand each step
- [OG-Theory-SGD]: Describe how SGD uses gradients and batches to improve performance
- [TM-Autograd]: Explain what loss.backward() and optimizer.step() do
- [OG-Generalization]: Diagnose overfitting and underfitting from learning curves
- [OG-DataDistribution]: Explain how data augmentation expands the effective training distribution
- [OG-Implement-Validate]: Explain the importance of evaluating models on unseen data
Monday
- Gradient game activity
- Notebook: MNIST with PyTorch (u06n1-mnist-torch.ipynb)
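The skeleton every PyTorch training loop shares, sketched on synthetic data (the notebook trains on MNIST; details like batch size and learning rate will differ):

```python
import torch
from torch import nn

# Made-up stand-in data: 256 "images" with 10 possible labels
X = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for i in range(0, len(X), 32):     # mini-batches of 32
        xb, yb = X[i:i + 32], y[i:i + 32]
        loss = loss_fn(model(xb), yb)  # forward pass + loss
        opt.zero_grad()                # clear gradients from the last step
        loss.backward()                # autograd computes new gradients
        opt.step()                     # nudge each weight downhill
    print(epoch, loss.item())
```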
Wednesday
- Notebook: Compute gradients using PyTorch (u06n2-compute-grad-pytorch.ipynb)
- Review training loops, SGD, MLP model
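What loss.backward() actually does, on the smallest possible example:

```python
import torch

w = torch.tensor(3.0, requires_grad=True)
loss = (w - 1.0) ** 2  # loss = (w - 1)^2, so d(loss)/dw = 2(w - 1) = 4
loss.backward()        # autograd fills in w.grad
print(w.grad)          # tensor(4.)
# optimizer.step() would then move w against this gradient (w -= lr * w.grad)
```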
Friday
- Will It Generalize? slides
- Data augmentation notebook (augmentation sketch below)
- Tech presentation
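A plausible augmentation pipeline using torchvision transforms (the notebook's exact transforms may differ). Each epoch sees a slightly different version of every training image, which expands the effective training distribution:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),  # mirror half the images
    transforms.RandomRotation(10),      # rotate up to +/- 10 degrees
    transforms.ToTensor(),
])
# Usage: pass transform=augment when constructing the training Dataset.
```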
Week 7: Embeddings & RL
Embeddings as the data structures of neural computation. Introduction to reinforcement learning.
Key Questions
- What are embeddings and how are they used in ML?
- How does reinforcement learning differ from supervised learning?
- What is the difference between learning to mimic vs learning by exploring?
Objectives
- [TM-Embeddings]: Explain what embeddings are and how they represent similarity
- [OG-Pretrained]: Explain how a pretrained model can be repurposed using the body + head pattern
- [OG-ProblemFraming-Paradigms]: Contrast supervised learning and reinforcement learning
- [OG-DataDistribution]: Contrast how data distribution is given (supervised) vs shaped by exploration (RL)
Monday
- Embeddings Day: words, sentences, images
- Slides: Computing
- Notebook: Probe an Image Classifier (u07n1-image-embeddings.ipynb)
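The core idea in miniature: embeddings are vectors, and similar items get similar vectors. The 4-dimensional embeddings below are made up; real ones come from a trained model, like the classifier probed in the notebook:

```python
import torch
import torch.nn.functional as F

cat = torch.tensor([0.9, 0.1, 0.8, 0.0])
dog = torch.tensor([0.8, 0.2, 0.7, 0.1])
car = torch.tensor([0.0, 0.9, 0.1, 0.8])

# Cosine similarity: near 1 for similar items, lower for dissimilar ones
print(F.cosine_similarity(cat, dog, dim=0))  # high (about 0.99 here)
print(F.cosine_similarity(cat, car, dim=0))  # much lower (about 0.12 here)
```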
Wednesday
- Reinforcement Learning intro
- Notebook: A Reinforcement Learning Example (u07n2-rl.ipynb)
- Optional: Notebook: u07n1-image-ops.ipynb
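Trial-and-error learning in miniature: an epsilon-greedy agent on a 3-armed bandit (not the notebook's environment, just the core RL loop with made-up payoffs):

```python
import random

true_means = [0.2, 0.5, 0.8]  # hidden average payoff of each arm
estimates, counts = [0.0] * 3, [0] * 3

for t in range(1000):
    if random.random() < 0.1:  # explore: try a random arm 10% of the time
        arm = random.randrange(3)
    else:                      # exploit: pull the best-looking arm
        arm = max(range(3), key=lambda a: estimates[a])
    reward = true_means[arm] + random.gauss(0, 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates)  # approaches [0.2, 0.5, 0.8]; the agent mostly pulls arm 2
```

Note the contrast with supervised learning: here the data the agent sees depends on its own choices, which is the point of the week's data-distribution objective.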
Friday
- Slides: CS 375: Wrap-Up
- Learning to Mimic vs Learning by Exploring
- Course wrap-up