Course Objectives

This page lists all course objectives with their assessment criteria.

Coverage Matrix

[Table omitted: the interactive matrix maps each objective below to the assessments that cover it (activity, handout, notebook, quiz — e.g. Q1).]

Detailed Objectives

Tuneable Machines

[TM-LLM-Embeddings] (376)

I can identify various types of embeddings (tokens, hidden states, output, key, and query) in a language model and explain their purpose.
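To make the first of these concrete, here is a minimal sketch of a token-embedding lookup: each token id indexes a row of the embedding matrix, producing the model's initial hidden states. All sizes and numbers are made up for illustration.

```python
# Toy token-embedding lookup (illustrative sizes and values only).
vocab_size, d_model = 8, 4

# Embedding matrix: one d_model-dimensional vector per vocabulary entry.
embedding_table = [[0.1 * (i + j) for j in range(d_model)]
                   for i in range(vocab_size)]

token_ids = [3, 1, 5]                       # an input sequence of token ids
hidden = [embedding_table[t] for t in token_ids]

print(len(hidden), len(hidden[0]))          # shape (seq_len, d_model): 3 4
```

Key, query, and output embeddings arise later in the model as learned linear projections of these hidden states.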

[TM-SelfAttention] (376)

I can explain the purpose and components of a self-attention layer (key, query, value; multi-head attention; positional encodings).
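A single attention head can be sketched in a few lines of stdlib Python: queries are scored against keys, the scaled scores become a softmax distribution, and each output is a weighted mix of the value vectors. The matrices below are toy, made-up numbers.

```python
import math

def softmax(xs):
    m = max(xs)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for one head.
    Q, K have shape (seq_len, d_k); V has shape (seq_len, d_v),
    all as lists of lists."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Score this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)            # attention distribution over positions
        # Output = weights-weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

Multi-head attention runs several such heads in parallel on different projections; positional encodings are added to the inputs so the dot products can depend on position.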

[TM-TransformerDataFlow] (376)

I can identify the shapes of data flowing through a Transformer-style language model.
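The shapes in question can be tracked symbolically, as in this sketch of one decoder-only Transformer block (toy sizes; the 4x feed-forward expansion is a common convention, not universal):

```python
# Shape-tracking through a decoder-only Transformer block (toy sizes).
seq, d, h, vocab = 16, 64, 4, 100    # context length, width, heads, vocab size
d_head = d // h                      # per-head dimension

shapes = {
    "token_ids":      (seq,),
    "embeddings":     (seq, d),
    "Q/K/V per head": (h, seq, d_head),
    "attn weights":   (h, seq, seq),
    "attn output":    (seq, d),
    "ffn hidden":     (seq, 4 * d),  # common 4x expansion
    "block output":   (seq, d),
    "logits":         (seq, vocab),
}
for name, shape in shapes.items():
    print(f"{name:16s} {shape}")
```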

[TM-Scaling] (376)

I can analyze how the computational requirements of a model scale with the number of parameters and the context size.
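As a rule-of-thumb sketch (not an exact count): the weight multiplies cost about 2 FLOPs per parameter per token, while the attention score and value products grow linearly in context length per token, i.e. quadratically over the full context.

```python
def flops_per_token(n_params, seq_len, d_model, n_layers):
    """Rough forward-pass FLOPs to process one token in a decoder-only
    Transformer: ~2 FLOPs per parameter for the weight multiplies, plus
    the QK^T and weights*V products in attention. Rule of thumb only."""
    return 2 * n_params + 4 * n_layers * seq_len * d_model

# Doubling the context leaves the parameter term unchanged and doubles
# only the attention term (toy sizes for illustration).
base    = flops_per_token(n_params=1_000_000, seq_len=512,  d_model=64, n_layers=8)
doubled = flops_per_token(n_params=1_000_000, seq_len=1024, d_model=64, n_layers=8)
print(doubled - base)   # 4 * 8 * 512 * 64 = 1048576
```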

[TM-LLM-Generation] (376)

I can extract and interpret model outputs (token logits) and use them to generate text.
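The logits-to-text step can be sketched in a few lines: softmax turns raw logits into a probability distribution, and a decoding rule picks the next token. The logits below are made-up numbers over a toy vocabulary.

```python
import math

# Made-up logits over a 4-token vocabulary.
logits = [2.0, 0.5, -1.0, 1.0]

# Softmax converts logits into a probability distribution.
m = max(logits)                        # subtract max for numerical stability
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the highest-probability token. Sampling
# instead draws from probs (often after temperature scaling).
next_id = max(range(len(probs)), key=probs.__getitem__)
print(next_id)   # 0
```

Generation repeats this loop: append the chosen token to the context and run the model again for the next set of logits.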

[TM-LLM-Compute] (376)

I can analyze the computational requirements of training and inference of generative AI systems.
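A widely used heuristic for dense Transformers (an approximation, not an exact count) puts total training cost at about 6 FLOPs per parameter per training token — roughly 2 for the forward pass and 4 for the backward pass:

```python
# Rule-of-thumb training cost: total FLOPs ~ 6 * N * D,
# where N = parameter count and D = number of training tokens.
def training_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

# Example: a 1B-parameter model trained on 20B tokens (illustrative).
print(f"{training_flops(1e9, 20e9):.1e}")   # 1.2e+20
```

Inference, by contrast, costs roughly 2 FLOPs per parameter per generated token under the same style of estimate.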

Optimization Games

[OG-Eval-Experiment] (both)

I can design and execute valid experiments to evaluate model performance.

[OG-LLM-Prompting] (376)

I can critique and refine prompts to improve the quality of responses from an LLM.

[OG-LLM-Tokenization] (376)

I can explain how inputs get chunked into tokens, how outputs are generated token by token, and how this affects usage of the model.
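The chunking idea can be illustrated with a toy greedy longest-match tokenizer (real tokenizers use BPE or similar learned merges, and the vocabulary below is invented):

```python
# Toy greedy longest-match tokenizer over a made-up vocabulary.
vocab = {"un": 0, "break": 1, "able": 2,
         "b": 3, "r": 4, "e": 5, "a": 6, "k": 7}

def tokenize(text):
    ids, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("unbreakable"))   # [0, 1, 2]
```

Because models read and emit these chunks rather than characters, token boundaries affect context limits, pricing, and tasks like counting letters.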

[OG-LLM-ConversationAsDocument] (376)

I can explain how a conversation with an LLM can be represented as a carefully structured document, including system messages, tool calls, and multimodal inputs and outputs.
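Here is a sketch of that serialization: the message list becomes one flat document with role markers, ending with a cue for the model to continue as the assistant. The marker strings below are invented for illustration; each model family defines its own template (and extends it for tool calls, images, and other modalities).

```python
# Illustrative chat-to-document serialization (marker strings invented).
def render(messages):
    parts = [f"<|{m['role']}|>\n{m['content']}\n<|end|>" for m in messages]
    parts.append("<|assistant|>")   # cue the model to reply as the assistant
    return "\n".join(parts)

doc = render([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(doc)
```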

[OG-LLM-Advanced] (376)

I can apply techniques such as Retrieval-Augmented Generation, in-context learning, tool use, and multimodal input to solve complex tasks with an LLM.
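As one example of these techniques, a minimal Retrieval-Augmented Generation sketch: score stored documents against the question, then paste the best match into the prompt. A real system would use embedding similarity and an actual LLM call; the documents, scoring, and prompt format here are invented for illustration.

```python
# Minimal RAG sketch: word-overlap retrieval + prompt assembly.
docs = [
    "The course final exam is on May 12.",
    "Office hours are Tuesdays at 3pm.",
]

def words(text):
    return {w.strip("?.!,").lower() for w in text.split()}

def retrieve(question, docs):
    # Pick the document sharing the most words with the question.
    q = words(question)
    return max(docs, key=lambda d: len(q & words(d)))

question = "When is the final exam?"
context = retrieve(question, docs)
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```

The assembled prompt would then be sent to the model, which answers using the retrieved context rather than its parameters alone.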

[OG-LLM-Eval] (376)

I can apply and critically analyze evaluation strategies for generative models.

[OG-LLM-Train] (376)

I can describe the overall process of training a state-of-the-art dialogue LLM.

[OG-SelfSupervised] (376)

I can explain how self-supervised learning can be used to train foundation models on massive datasets without labeled data.
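The core trick can be shown in a few lines: in next-token prediction, the labels come from the text itself, so no human annotation is needed. A toy character-level example:

```python
# Self-supervised next-token prediction: each position's "label" is
# simply the next token in the raw text (character-level toy example).
text = "hello"
tokens = list(text)

inputs  = tokens[:-1]    # ['h', 'e', 'l', 'l']
targets = tokens[1:]     # ['e', 'l', 'l', 'o']
pairs = list(zip(inputs, targets))
print(pairs)
```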

[OG-Theory-Feedback] (376)

I can explain how feedback tuning can improve the performance and reliability of a model or agent.

Overall

[Overall-Impact] (both)

I can analyze real-world situations to identify potential negative impacts of AI systems.

[Overall-Dispositions] optional (both)

I demonstrate growth mindset and integrity in my AI learning and practice.

[Overall-PhilNarrative] optional (both)

I can engage with philosophical questions raised by AI systems.

[Overall-LLM-Failures] (376)

I can identify common types of failures in LLMs, such as hallucination (confabulation) and bias.
