Course Objectives

This page lists all course objectives with their assessment criteria.

Tuneable Machines

[TM-MLPParts] (375)

I can compute the forward pass through a two-layer classification neural network by hand (or in simple code) and explain the purpose and operation of each part.

Notes

I'll consider this Met when a student has computed all of the pieces by hand, even in isolation rather than as one complete forward pass, as long as they also have a clear understanding of why nonlinear activations are needed.
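As an illustration of the computation this objective covers, here is a minimal two-layer classifier forward pass in NumPy. The sizes and weights are made up for the example; a real network would have learned weights.

```python
import numpy as np

# Toy sizes: 4 input features, 3 hidden units, 2 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))          # one input example
W1 = rng.normal(size=(4, 3))       # first linear layer weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))       # second linear layer weights
b2 = np.zeros(2)

h = np.maximum(x @ W1 + b1, 0.0)   # linear layer, then ReLU nonlinearity
logits = h @ W2 + b2               # second linear layer gives class scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # softmax turns scores into probabilities
```

Without the ReLU in the middle, the two linear layers would collapse into a single linear map, which is the usual answer to "why nonlinear activations?"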

[TM-LinearLayers] (375)

I can implement linear (fully-connected) layers using efficient parallel code.
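"Efficient parallel code" here means one batched matrix multiplication rather than Python loops over examples. A sketch in NumPy, with arbitrary sizes:

```python
import numpy as np

batch, d_in, d_out = 32, 8, 5
rng = np.random.default_rng(1)
X = rng.normal(size=(batch, d_in))   # a whole batch of inputs at once
W = rng.normal(size=(d_in, d_out))
b = np.zeros(d_out)

Y = X @ W + b   # one matmul handles every example in the batch in parallel
```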

[TM-ActivationFunctions] (375)

I can implement and explain elementwise nonlinear activation functions.

Notes

We're focusing on ReLU for CS 375. It would be nice if students also had some familiarity with other activations (GELU, Swish, etc.), but I'm not explicitly teaching that.
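For reference, ReLU applied elementwise is a one-liner; a minimal NumPy sketch:

```python
import numpy as np

def relu(x):
    """ReLU: zero out negative entries, pass positive entries through."""
    return np.maximum(x, 0.0)

out = relu(np.array([-2.0, -0.5, 0.0, 3.0]))   # -> [0., 0., 0., 3.]
```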

[TM-Softmax] (375)

I can implement softmax and explain its role in classification networks.
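A minimal NumPy sketch of softmax, including the standard subtract-the-max stability trick:

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1.
    Subtracting the max first avoids overflow in exp (a standard trick)."""
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / exps.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
```

In a classification network, softmax turns the final layer's logits into a probability distribution over classes.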

[TM-DataFlow] (375)

I can draw clear diagrams of the data flow through a neural network, labeling each layer and the tensor shapes at each step.

Notes

This overlaps with TM-MLPParts and TM-LinearLayers, so I'll be flexible about what Met means here.

[TM-DotProduct] (375)

I can compute and reason about dot products of vectors.
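A small worked example of the two sides of this objective, computing a dot product and reasoning about what it measures (the vectors are made up):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

# Dot product: multiply matching components, then sum.
dot = float(a @ b)   # 1*4 + 2*(-1) + 3*2 = 8

# The dot product also measures alignment: cosine similarity rescales it
# by the vectors' lengths, giving a value between -1 and 1.
cos = dot / (np.linalg.norm(a) * np.linalg.norm(b))
```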

[TM-TensorOps] (375)

I can reason about matrix multiplication and multi-dimensional tensor shapes.
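The shape reasoning this objective asks for can be summarized in two rules, sketched here in NumPy:

```python
import numpy as np

# Matmul shape rule: (m, k) @ (k, n) -> (m, n); the inner dims must match.
A = np.zeros((2, 3))
B = np.zeros((3, 4))
C = A @ B            # shape (2, 4)

# With higher-dimensional tensors, leading "batch" dims carry through:
X = np.zeros((10, 2, 3))
Y = np.zeros((10, 3, 4))
Z = X @ Y            # shape (10, 2, 4): one matmul per batch element
```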

[TM-Embeddings] (375)

I can explain how neural networks represent data as vectors (embeddings) where geometric relationships encode meaning.

[TM-RepresentationLearning] (375)

I can explain how a neural network learns useful internal representations through training.

[TM-Autograd] (375)

I can explain the purpose of automatic differentiation and identify how it is used in PyTorch code.
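A minimal PyTorch sketch of what autograd does (the function and values are illustrative):

```python
import torch

# requires_grad tells PyTorch to record operations on x for differentiation.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x          # the forward pass builds a computation graph
y.backward()                # autograd walks the graph to compute dy/dx

grad = x.grad.item()        # dy/dx = 2x + 2, which is 8 at x = 3
```

The key things to identify in real PyTorch code are `requires_grad`, the forward computation, and the `backward()` call that populates `.grad`.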

[TM-Implement-TrainingLoop] (375)

I can implement a basic training loop in PyTorch.
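A minimal sketch of such a loop on a toy regression problem (the data, model size, and hyperparameters are made up for the example):

```python
import torch

# Toy data: learn y = 2x + 1 with a single linear unit.
X = torch.linspace(-1, 1, 32).unsqueeze(1)
y = 2 * X + 1

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for step in range(200):
    opt.zero_grad()               # clear gradients from the previous step
    loss = loss_fn(model(X), y)   # forward pass + loss
    loss.backward()               # backprop: compute gradients
    opt.step()                    # update parameters

final_loss = loss.item()
```

The four-step rhythm (zero gradients, forward + loss, backward, step) is the same in essentially every PyTorch training loop.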

[TM-Convolution] optional (375)

I can explain the purpose of convolution layers for image processing.

[TM-LLM-Embeddings] (376)

I can identify various types of embeddings (tokens, hidden states, output, key, and query) in a language model and explain their purpose.

[TM-SelfAttention] (376)

I can explain the purpose and components of a self-attention layer (key, query, value; multi-head attention; positional encodings).
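A minimal single-head attention computation in NumPy; sizes are made up, and multi-head attention and positional encodings are omitted for brevity:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d_model, d_head = 4, 8, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d_model))     # one embedding per token

# Learned projections give each token a query, key, and value vector.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Each query scores every key; softmax turns scores into attention weights.
scores = Q @ K.T / np.sqrt(d_head)
weights = softmax(scores)                   # each row sums to 1
out = weights @ V                           # weighted mix of value vectors
```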

[TM-Architectures] (376-bonus)

I can compare and contrast the following neural architectures: CNN, RNN, and Transformer. (Bonus topics: U-Nets, LSTMs, Vision Transformers, state-space models.)

[TM-TransformerDataFlow] (376)

I can identify the shapes of data flowing through a Transformer-style language model.

[TM-Scaling] (376)

I can analyze how the computational requirements of a model scale with number of parameters and context size.
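As a sketch of this kind of analysis, here is the common rule of thumb that a forward pass costs roughly 2 FLOPs per parameter per token (linear in parameter count), while self-attention compares every token with every other token (quadratic in context length). The numbers below are illustrative, not measurements:

```python
# Rough rule of thumb: forward pass ~ 2 FLOPs per parameter per token.
params = 7e9                      # e.g. a 7-billion-parameter model
tokens = 1000
forward_flops = 2 * params * tokens

# Self-attention scores every (query, key) pair, so that part scales
# quadratically with context length:
def attention_pairs(context_len):
    return context_len ** 2

ratio = attention_pairs(2000) / attention_pairs(1000)  # doubling context -> 4x
```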

[TM-LLM-Generation] (376)

I can extract and interpret model outputs (token logits) and use them to generate text.
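A toy sketch of greedy decoding from logits. `fake_logits` is a hypothetical stand-in for a real model, hard-coded so the example is self-contained:

```python
import numpy as np

vocab = ["the", "cat", "sat", "."]

def fake_logits(tokens):
    """Stand-in for a real model: one score per vocabulary word,
    keyed on the most recent token."""
    scores = {"the": [0.1, 2.0, 0.0, -1.0],   # after "the", favor "cat"
              "cat": [0.0, 0.0, 2.0, -1.0],   # after "cat", favor "sat"
              "sat": [-1.0, -1.0, -1.0, 2.0]} # after "sat", favor "."
    return np.array(scores[tokens[-1]])

# Greedy decoding: repeatedly append the highest-logit token.
tokens = ["the"]
while tokens[-1] != "." and len(tokens) < 10:
    next_id = int(np.argmax(fake_logits(tokens)))
    tokens.append(vocab[next_id])
# tokens is now ["the", "cat", "sat", "."]
```

Real generation replaces `argmax` with sampling (temperature, top-k, etc.), but the logits-to-token loop is the same.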

[TM-LLM-Compute] (376)

I can analyze the computational requirements of training and inference of generative AI systems.

Optimization Games

[OG-ProblemFraming-Supervised] (375)

I can frame a problem as a supervised learning task with appropriate inputs, targets, and loss function.

[OG-ProblemFraming-Paradigms] (375)

I can distinguish between supervised learning, self-supervised learning, and reinforcement learning.

[OG-LossFunctions] (375)

I can select and compute appropriate loss functions for regression and classification tasks.
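A worked example of the two standard choices, with made-up predictions and targets:

```python
import numpy as np

# Regression: mean squared error penalizes squared distance to the target.
pred = np.array([2.5, 0.0])
target = np.array([3.0, -1.0])
mse = float(np.mean((pred - target) ** 2))   # (0.25 + 1.0) / 2 = 0.625

# Classification: cross-entropy is the negative log of the probability
# the model assigned to the correct class.
probs = np.array([0.7, 0.2, 0.1])   # the model's softmax output
correct = 0
cross_entropy = float(-np.log(probs[correct]))
```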

[OG-DataDistribution] (375)

I can reason about how the distribution of training data shapes what a model learns.

[OG-Eval-Experiment] (both)

I can design and execute valid experiments to evaluate model performance.

[OG-Generalization] (375)

I can diagnose and address generalization problems in trained models.

[OG-Implement-Validate] (375)

I apply validation techniques correctly and proactively.

[OG-LLM-APIs] (375)

I can apply LLM APIs (such as the Chat Completions API) to build AI-powered applications.

[OG-Pretrained] (375)

I can explain the benefits and risks of using pretrained models.

[OG-Theory-SGD] (375)

I can explain how stochastic gradient descent uses gradients to improve model performance.
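The core idea can be shown on a one-variable toy function (real SGD estimates the gradient from a random mini-batch of data, which is the "stochastic" part omitted here):

```python
# Gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# Each step moves w a small amount in the direction that decreases f.
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad
# w has converged close to the minimum at w = 3
```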

[OG-LLM-Prompting] (376)

I can critique and refine prompts to improve the quality of responses from an LLM.

[OG-LLM-Tokenization] (376)

I can explain the purpose, inputs, and outputs of tokenization.

[OG-LLM-Advanced] (376)

I can apply techniques such as Retrieval-Augmented Generation, in-context learning, tool use, and multi-modal input to solve complex tasks with an LLM.

[OG-LLM-Eval] (376)

I can apply and critically analyze evaluation strategies for generative models.

[OG-LLM-Train] (376)

I can describe the overall process of training a state-of-the-art dialogue LLM such as Llama or OLMo.

[OG-SelfSupervised] (376)

I can explain how self-supervised learning can be used to train foundation models on massive datasets without labeled data.

[OG-Theory-Feedback] (376)

I can explain how feedback tuning can improve the performance and reliability of a model / agent.

[OG-ICL] (376-bonus)

I can explain how in-context learning can be used to improve test-time performance of a model.

Overall

[Overall-Explain] (375)

I can explain basic AI concepts to a non-technical audience without major errors.

[Overall-Faith] (375)

I can articulate connections between Christian concepts and AI development, demonstrating genuine engagement.

[Overall-Impact] (both)

I can analyze real-world situations to identify potential negative impacts of AI systems.

[Overall-Dispositions] (both)

I demonstrate growth mindset and integrity in my AI learning and practice.

[Overall-History] optional (375)

I can trace current AI technologies back to historical developments.

[Overall-PhilNarrative] (both)

I can engage with philosophical questions raised by AI systems.

[Overall-LLM-Failures] (376)

I can identify common types of failures in LLMs, such as hallucination (confabulation) and bias.
