After this course, I will be able to:

We’ll play with this model briefly: (u05n00-relu.ipynb; show preview, open in Colab)

For more, try out Figure 3.3a in the Understanding Deep Learning interactive figures.

(u05n1-img-classifier-feature-extractor.ipynb; show preview, open in Colab)

Today we’re going to try out some of the open-weights models that are available on Hugging Face.
Objectives:
Hugging Face is a company that provides a platform for sharing and using pre-trained models. They provide:
- transformers for working with models.
- datasets for working with datasets.

Let’s start by playing with a few Spaces, then we’ll try out some models.
Go to the Spaces page. Try out two or three different Spaces:
A few that I tried out in February 2025:
(u08s1-sentence-embeddings.ipynb; show preview, open in Colab) (This one needs Colab; Kaggle doesn’t seem to support the TensorBoard viewer that it uses.)

Try computing the embeddings for some sentences that you create by hand, then compute the cosine similarity between them. What do you find? Are similar sentences closer together in the embedding space?

Today we’re going to try out using an AI (a chatbot) to make an AI (an LLM-powered web app).
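Cosine similarity itself is simple to compute by hand. Here’s a minimal sketch in plain Python, using toy 3-dimensional vectors as stand-ins for real sentence embeddings (which have hundreds of dimensions and come from a model, not from hand-typing):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms:
    # 1.0 means same direction (very similar), 0.0 means orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "embeddings" -- invented values for illustration only.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
stocks = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # high: similar meanings
print(cosine_similarity(cat, stocks))  # low: unrelated meanings
```

The notebook’s embedding model does the hard part (turning a sentence into a vector); the comparison step is just this one formula.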
Objectives:
You’ll need an API key to access most LLM APIs. For this activity, we’ll use the OpenAI Chat Completions API - but we don’t necessarily need an OpenAI account! Several companies provide APIs that are compatible with OpenAI’s API format.
Think of it like phone chargers: even though we often call them “iPhone chargers” or “Android chargers”, any USB-C charger works with any USB-C device. Similarly, we can use any API that’s “OpenAI-compatible” with code that expects to talk to OpenAI’s API.
You have two options for getting an API key:
Important: Save your API key in a safe place. You’ll need it to access the API, and the page only shows it once.
Before running any code, you need to store your API key in a secrets.toml file. This file should never be committed to version control. It lives in a special folder named .streamlit in your project directory.
Create a .streamlit/secrets.toml file with this content:
OPENAI_API_KEY = "your-key-here"
If you’re using the Google Gemini API key, use the same format; the code will still look for OPENAI_API_KEY since we’re using the OpenAI-compatible interface.
First, choose what task you want to have the AI do. Here are a few I’ve tried:
Once you’ve chosen a task, use a chatbot to generate some starter code. For example, I prompted Claude with: “Can we build a Streamlit app that uses the OpenAI API to have the user play a game of tic-tac-toe?”.
To ensure that the generated code used the correct OpenAI API, I pasted an example of the setup code. Specifically, I went to the Gemini docs on OpenAI-compatible API and copied the example code snippet on that page, then at the end of the prompt I added: “Here’s a code example for the variant of the API we’re using.” and pasted the code snippet.
Note: you might need to change the GEMINI_API_KEY secret to OPENAI_API_KEY in the generated code.
Don’t treat the chatbot as a one-and-done tool; have a conversation. For example, for the tic-tac-toe game, I followed up with “The AI opponent is pretty weak. Maybe it could explain a few moves that it’s thinking about before making its choice?” and then “I was getting an error because the AI response was ```json instead of the JSON object itself. That was hard to figure out because the st.rerun swallowed the error message.”
If you’re lucky, you just got some starter code for the app you want to build. Now to run it. Two options: local or Streamlit cloud.
For the local option:

1. Install uv. This tool makes it really easy to manage your Python environment.
2. Create the .streamlit/secrets.toml file as described above.
3. Run uv init and then uv add streamlit openai.
4. Save the generated code as streamlit_app.py.
5. Run uv run streamlit run streamlit_app.py. (Yes, there are two “run”s in that command.)

For Streamlit Cloud, set your API key in the app’s Secrets settings, using the same format as the secrets.toml file, e.g.:

OPENAI_API_KEY="your-key-here"

If you run the app locally, also create a .streamlit/secrets.toml file with your API key (even though you set it in the template).

Find the part of the code that calls the OpenAI API.
Try adding print statements to the code to see the output. Notice specifically that messages is a list of messages, both user and assistant.
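For reference, here’s a sketch of that structure, using the tic-tac-toe conversation from earlier (the content strings are illustrative, not what the API actually returned):

```python
# Each message is a dict with a "role" ("system", "user", or "assistant")
# and a "content" string. The list carries the whole conversation so far;
# the model's reply comes back as another "assistant" message that you
# append before the next turn.
messages = [
    {"role": "user", "content": "Can we build a Streamlit app that uses the "
                                "OpenAI API to play tic-tac-toe?"},
    {"role": "assistant", "content": "Sure! Here's some starter code: ..."},
    {"role": "user", "content": "The AI opponent is pretty weak. Could it "
                                "explain a few moves before choosing?"},
]

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'user']
```

The API is stateless: nothing is remembered between calls, so every request must include the full history you want the model to see.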
Think about how you might use this API to do the code-generating task that we just did with the chatbot. Sketch out what the messages object might look like for a two-step conversation about generating and refining code.
Once you have a working app, upload it to Moodle in the provided dropbox.
Note: you don’t have to include the answers to the reflection questions in your submission. They’re just to help you understand the code you’re working with. When using this activity to demonstrate meeting a course objective, expect to discuss these questions in a meeting with your instructor.