Course Objectives

This page lists all course objectives with their assessment criteria.

Coverage Matrix

[Table: rows are the objectives listed under Detailed Objectives below; columns are Q1–Q4; entries mark where each objective is assessed (activity, handout, notebook, or quiz).]

Detailed Objectives

Tuneable Machines

[TM-MLPParts] (375)

I can compute the forward pass through a two-layer classification neural network by hand (or in simple code) and explain the purpose and operation of each part.
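As a sketch of what this involves, here is a two-layer forward pass with made-up sizes and weights chosen so the arithmetic is easy to check by hand:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Layer 1: linear transform followed by the ReLU nonlinearity
    h = np.maximum(0, x @ W1 + b1)
    # Layer 2: linear transform producing class scores (logits)
    logits = h @ W2 + b2
    # Softmax turns the scores into a probability distribution
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Tiny hand-checkable example: 2 inputs, 3 hidden units, 2 classes
x = np.array([1.0, 2.0])
W1 = np.ones((2, 3)); b1 = np.zeros(3)
W2 = np.ones((3, 2)); b2 = np.zeros(2)
probs = forward(x, W1, b1, W2, b2)
print(probs)  # [0.5 0.5] -- both classes get the same score here
```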

Criteria

Assessed in

Notes

I'll consider this Met when a student has computed all of the pieces by hand, even if they did so in isolation rather than as one complete forward pass, as long as they also clearly understand why nonlinear activations are needed.

[TM-LinearLayers] (375)

I can implement linear (fully-connected) layers using efficient parallel code.
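For illustration, "efficient parallel code" here amounts to one batched matrix multiply plus a bias, rather than looping over examples and output units (the sizes below are arbitrary):

```python
import numpy as np

def linear(X, W, b):
    # X is (batch, in_features), W is (in_features, out_features),
    # b is (out_features,); one matmul handles the whole batch at once.
    return X @ W + b

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
b = np.zeros(2)
out = linear(X, W, b)

# Equivalent slow version: one dot product per (example, unit) pair
slow = np.array([[X[i] @ W[:, j] + b[j] for j in range(2)]
                 for i in range(4)])
assert np.allclose(out, slow)
print(out.shape)  # (4, 2)
```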

Criteria

Assessed in

[TM-ActivationFunctions] (375)

I can implement and explain elementwise nonlinear activation functions.
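A minimal sketch using ReLU, the course's running example; "elementwise" means each entry is mapped independently of the others:

```python
import numpy as np

def relu(x):
    # Elementwise: negative entries become 0, others pass through unchanged
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # [0.  0.  0.  1.5]
```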

Criteria

Assessed in

Notes

We're focusing on ReLU for CS 375. It would be nice if students also had some familiarity with other activations (GELU, Swish, etc.), but I'm not explicitly teaching that.

[TM-Softmax] (375)

I can implement softmax and explain its role in classification networks.
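A minimal implementation sketch; subtracting the max is a standard numerical-stability trick, and it doesn't change the result because softmax is shift-invariant:

```python
import numpy as np

def softmax(z):
    # Exponentiate (after shifting for stability), then normalize to sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
p = softmax(scores)
print(p, p.sum())  # probabilities summing to 1, largest for the largest score
```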

Criteria

Assessed in

[TM-DataFlow] (375)

I can draw clear diagrams of the data flow through a neural network, labeling each layer and the tensor shapes at each step.

Criteria

Assessed in

Notes

This overlaps with TM-MLPParts and TM-LinearLayers, so I'll be flexible about what Met means here.

[TM-DotProduct] (375)

I can compute and reason about dot products of vectors.
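For example, computed both by hand and with NumPy (values are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# Dot product: multiply matching entries, then sum
by_hand = sum(ai * bi for ai, bi in zip(a, b))  # 1*4 + 2*(-5) + 3*6 = 12
print(by_hand, a @ b)  # 12.0 both ways
```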

Criteria

Assessed in

[TM-TensorOps] (375)

I can reason about matrix multiplication and multi-dimensional tensor shapes.

Criteria

Assessed in

[TM-Embeddings] (375)

I can explain how neural networks represent data as vectors (embeddings) where geometric relationships encode meaning.
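As an illustration of "geometric relationships encode meaning," cosine similarity compares the directions of two embedding vectors. The 2-D vectors below are made up for the example, not learned:

```python
import numpy as np

# Toy "embeddings" (invented values; real embeddings are learned and much longer)
emb = {
    "cat": np.array([0.9, 0.1]),
    "dog": np.array([0.8, 0.2]),
    "car": np.array([0.1, 0.9]),
}

def cosine(u, v):
    # 1 means pointing the same direction; near 0 means unrelated
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(emb["cat"], emb["dog"]))  # high: similar meanings, similar directions
print(cosine(emb["cat"], emb["car"]))  # lower: different meanings
```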

Criteria

Assessed in

[TM-RepresentationLearning] (375)

I can explain how a neural network learns useful internal representations through training.

Criteria

Assessed in

[TM-Autograd] (375)

I can explain the purpose of automatic differentiation and identify how it is used in PyTorch code.
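A minimal sketch of what to look for in PyTorch code: `requires_grad=True` asks autograd to track operations, and `.backward()` computes the derivative:

```python
import torch

# requires_grad=True tells PyTorch to record operations on w
w = torch.tensor(3.0, requires_grad=True)
loss = (w - 1.0) ** 2   # a tiny "loss" whose minimum is at w = 1
loss.backward()          # autograd computes d(loss)/dw
print(w.grad)            # 2 * (w - 1) = 4.0
```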

Criteria

Assessed in

[TM-Implement-TrainingLoop] (375)

I can implement a basic training loop in PyTorch.
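A sketch of the standard loop on a made-up toy regression task (learning y = 2x); the four steps inside the loop are the ones to know:

```python
import torch

torch.manual_seed(0)
X = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for step in range(500):
    opt.zero_grad()                  # 1. clear old gradients
    loss = loss_fn(model(X), y)      # 2. forward pass + loss
    loss.backward()                  # 3. backward pass: compute gradients
    opt.step()                       # 4. update the parameters
print(loss.item())  # close to 0 after training
```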

Criteria

Assessed in

[TM-Convolution] optional (375)

I can explain the purpose of convolution layers for image processing.

Criteria

Optimization Games

[OG-ProblemFraming-Supervised] (375)

I can frame a problem as a supervised learning task with appropriate inputs, targets, and loss function.

Criteria

Assessed in

[OG-ProblemFraming-Paradigms] (375)

I can distinguish between supervised learning, self-supervised learning, and reinforcement learning.

Criteria

Assessed in

[OG-LossFunctions] (375)

I can select and compute appropriate loss functions for regression and classification tasks.
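For instance, mean squared error for regression and cross-entropy for classification, computed on made-up numbers:

```python
import numpy as np

# Regression: mean squared error
preds = np.array([2.5, 0.0, 2.0])
targets = np.array([3.0, -0.5, 2.0])
mse = np.mean((preds - targets) ** 2)
print(mse)  # (0.25 + 0.25 + 0) / 3 = 0.1666...

# Classification: cross-entropy = negative log probability of the true class
probs = np.array([0.7, 0.2, 0.1])  # model's softmax output
true_class = 0
xent = -np.log(probs[true_class])
print(xent)  # -ln(0.7), about 0.357
```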

Criteria

Assessed in

[OG-DataDistribution] optional (375)

I can reason about how the distribution of training data shapes what a model learns.

Criteria

Assessed in

[OG-Eval-Experiment] (both)

I can design and execute valid experiments to evaluate model performance.

Criteria

Assessed in

[OG-Generalization] (375)

I can diagnose and address generalization problems in trained models.

Criteria

Assessed in

[OG-Implement-Validate] (375)

I apply validation techniques correctly and proactively.

Criteria

Assessed in

[OG-LLM-APIs] optional (375)

I can apply LLM APIs (such as the Chat Completions API) to build AI-powered applications.

Criteria

Assessed in

[OG-Pretrained] (375)

I can explain the benefits and risks of using pretrained models.

Criteria

Assessed in

[OG-Theory-SGD] (375)

I can explain how stochastic gradient descent uses gradients to improve model performance.
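A sketch on a one-parameter toy function (this uses the full "dataset" every step, so it's plain gradient descent; SGD would estimate the gradient from a random minibatch instead):

```python
def grad(w):
    # Derivative of f(w) = (w - 4)^2, which is minimized at w = 4
    return 2 * (w - 4)

w = 0.0
lr = 0.1
for _ in range(100):
    w = w - lr * grad(w)  # step in the direction that decreases the loss
print(w)  # converges toward 4
```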

Criteria

Assessed in

Overall

[Overall-Explain] optional (375)

I can explain basic AI concepts to a non-technical audience without major errors.

Criteria

[Overall-Faith] optional (375)

I can articulate connections between Christian concepts and AI development, demonstrating genuine engagement.

Criteria

[Overall-Impact] (both)

I can analyze real-world situations to identify potential negative impacts of AI systems.

Criteria

Assessed in

[Overall-Dispositions] optional (both)

I demonstrate growth mindset and integrity in my AI learning and practice.

Criteria

[Overall-History] optional (375)

I can trace current AI technologies back to historical developments.

Criteria

[Overall-PhilNarrative] optional (both)

I can engage with philosophical questions raised by AI systems.

Criteria
