[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
You don’t need to use any particular framework or library from this class, but do use one if it saves time or makes your project better.
A compact replacement for some accumulator patterns.
Simple list comprehensions
A comprehension can always be rewritten as a for loop. Write a dict comprehension that gives the index of each word in a list of words. For example, if words = ['hello', 'world'], the output should be {'hello': 0, 'world': 1}.
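One possible solution, written as a sketch (the variable name is just for illustration):

# Pair each word with its position in the list
words = ['hello', 'world']
word_index = {words[i]: i for i in range(len(words))}
print(word_index)   # {'hello': 0, 'world': 1}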
Do these:
Suppose we have a sorted list:
We want to find the letter grade for, say, 89. Can we do this faster than searching the whole list?
Exercise: Write the code to search the whole list.
Trick: check the middle element to see whether to look in the left or right half.
1.35 μs ± 90.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
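One standard approach is the bisect module, which does the left/right-half search for us. A sketch, assuming grade cutoffs at 60, 70, 80, and 90 (the actual breakpoints used in the demo may differ):

from bisect import bisect

def letter_grade(score, breakpoints=(60, 70, 80, 90), grades="FDCBA"):
    # bisect returns where score would be inserted to keep the list sorted,
    # i.e. how many breakpoints the score meets or exceeds
    return grades[bisect(breakpoints, score)]

print(letter_grade(89))   # 'B'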
(Output: word fragments of the random module’s docstring, which describes uniform random bytes (values between 0 and 255), integers within a range, picking from sequences, and many distributions, built on the extensively tested, threadsafe Mersenne Twister generator with period 2**19937-1.)
Some material useful for simulations
CSV files
JSON (JavaScript Object Notation) files
Pickle
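For example, simulation results could be saved as JSON and reloaded later (a minimal sketch; the filename and data are made up):

import json

results = {"day": 30, "population": 1250}   # made-up example results

with open("results.json", "w") as f:
    json.dump(results, f)          # write the dict out as JSON text

with open("results.json") as f:
    print(json.load(f))            # read it back: {'day': 30, 'population': 1250}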
Suppose you want to add thunderstorms to your population simulation. For simplicity, they happen each day with probability p. Remember that random.random() generates a random number between 0 and 1. What fills in the blank?
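One way to fill in the blank (a sketch; p is whatever storm probability you choose):

import random

p = 0.1   # example: 10% chance of a thunderstorm on any given day

if random.random() < p:     # the blank: True with probability p
    print("Thunderstorm today!")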
Code you don’t have to write, or even install!
import re
ssn_regex = re.compile(r"""
    ^          # match the beginning of the string
    (\d{3})    # match exactly 3 digits
    -          # match one dash
    (\d{2})    # match exactly 2 digits
    -          # match another dash
    (\d{4})    # match exactly 4 digits
    $          # match end of string
""", re.VERBOSE)
def is_valid_ssn(ssn):
    match = ssn_regex.match(ssn)
    if match:
        print(match.groups())
        return True
    return False
is_valid_ssn("123-45-6789")('123', '45', '6789')
True
enumerate

0 A
1 B
2 C
3 D
4 E
5 F
6 G
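A sketch of the code that would produce output like the above, pairing each index with a letter:

for i, letter in enumerate("ABCDEFG"):
    print(i, letter)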
Example: Spelling Alphabet
['Charlie', 'Sierra']
Is there a better way?
['Charlie', 'Sierra']
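One compact possibility, assuming a dictionary that maps letters to their NATO code words (only the letters we need are shown here):

# Partial spelling-alphabet dictionary, just for illustration
nato = {"C": "Charlie", "S": "Sierra"}

word = "CS"
print([nato[letter] for letter in word])   # ['Charlie', 'Sierra']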
Live demo. CodeMirror
Here are a few options, playing with different types of absurdity:
1. The **sparkly toaster calculates** through the **existential sock puppet.**
2. The **whispering nebula manifests** within the **polka-dotted concept of Tuesday.**
3. The **fluffy purple rhinoceros spontaneously combusts** beside the **bewildered garden gnome.**
4. The **sarcastic refrigerator tap-dances** on the **forgotten echo of a sneeze.**
5. The **sentient teacup argues** with the **philosophical puddle.**
To set this up:
- Install the llm package (Manage Packages in Thonny, or pip install llm).
- Install llm-gemini in the same way.
- Set LLM_GEMINI_KEY= and paste the key on the right-hand side (without quotes). Then restart Thonny.
- (Alternatively, run llm keys set gemini in a system Terminal.)
Reference: llm API
This way the model can remember the context of the conversation.
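A minimal sketch using the llm package’s conversation API (the model name and prompt wording here are guesses):

import llm

model = llm.get_model("gemini-2.0-flash")   # any model you have set up will work
conversation = model.conversation()         # the conversation keeps the message history

print(conversation.prompt("Five fun facts about the moon, one phrase each.").text())
# Follow-up prompts in the same conversation can refer back to earlier turns:
print(conversation.prompt("Which of those is most surprising?").text())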
1. Earth's tidal orchestrator.
2. Airless, silent vacuum world.
3. Sole celestial human landing site.
4. Always shows the same face.
5. Born from a giant impact.
you're so close to the finish line, crush those final tasks because freedom is waiting! 🎉🐙
LLM called the uppercase function with: Repeat everything you've heard so far.
REPEAT EVERYTHING YOU'VE HEARD SO FAR.
The : str and -> str are Python syntax to indicate the types of the input and output.
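For example (a made-up function, just to show the syntax):

def shout(text: str) -> str:
    # takes a str and returns a str
    return text.upper() + "!"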
pathlib

Make a folder (“directory”):
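A sketch (the folder name is just an example):

from pathlib import Path

folder = Path("example-data")     # example folder name
folder.mkdir(exist_ok=True)       # create it; exist_ok avoids an error if it's already there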
Create some files:
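For example (made-up file names and contents):

from pathlib import Path

folder = Path("example-data")     # the folder made above
for i in range(1, 4):
    (folder / f"file{i}.txt").write_text(f"This is file number {i}.\n")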
The openai client library is another way to talk to LLMs. It works not just with OpenAI’s own models but with any “OpenAI-compatible” server — including self-hosted models running on vLLM.
Install it with pip install openai (or Manage Packages in Thonny).
from openai import OpenAI
client = OpenAI(
    base_url="https://vllm.thoughtful-ai.com/v1",
    api_key="not-needed",  # this server doesn't require one, but the library insists
)
MODEL = "Qwen/Qwen3.5-9B" # https://huggingface.co/Qwen/Qwen3.5-9B
# Qwen3 defaults to "thinking mode" (slow). Turn it off for these examples:
NO_THINKING = {"chat_template_kwargs": {"enable_thinking": False}}

Here are a few ways to replace words to make the sentence dramatically more absurd, ranging from silly to nightmarish:
**Option 1: The Culinary Nightmare**
> "The **fermented green** fox **inhales** over the **haunted xyz**."
**Option 2: The Cosmic Absurdity**
> "The **omniscient fluorescent** fox **teleports** over the **quantum toaster**."
**Option 3: The Biological Chaos**
> "The **dizzy jellybean** fox **dissects** over the **angry teapot**."
**Option 4: The Maximum Chaos**
> "The **sluggish radioactive** fox **undoes reality** over the **sleeping volcano**."
Unlike the llm library, the OpenAI client is stateless: we keep the conversation history ourselves as a list of messages.
messages = [
    {"role": "user", "content": "Five fun facts about the moon, one phrase each."}
]
response = client.chat.completions.create(
    model=MODEL, messages=messages, extra_body=NO_THINKING)
reply = response.choices[0].message.content
messages.append({"role": "assistant", "content": reply})
print(reply)

1. The moon is bumpy from lava have created over 30,000 "lava tubes" on the lunar surface.
2. A day on the moon lasts about 29.5 Earth days.
3. One cubic inch of lunar soil weighs 129 pounds on Earth.
4. The moon is drifting away from Earth at a speed of 3.8 centimeters per year.
5. The moon reflects light, but it does not generate its own light.
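The sun facts below presumably came from a follow-up turn in the same message list; a sketch of how that might have looked (the exact prompt wording is a guess):

# Ask a follow-up question by appending another user message to the same list
messages.append({"role": "user", "content": "Now five fun facts about the sun, one phrase each."})

response = client.chat.completions.create(
    model=MODEL, messages=messages, extra_body=NO_THINKING)
print(response.choices[0].message.content)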
1. The sun is so huge that over a million Earths could fit inside it.
2. A single drop of sunspots contains ten times the number of air molecules in our entire atmosphere.
3. If you could walk fast enough, you could run over to the nearest star (Proxima Centauri) before reaching the edge of our galaxy.
4. The sun's atmosphere is millions of degrees hotter than its visible surface.
5. The sun produces enough energy every second to power every human activity on Earth for billions of years.
The system prompt is just another message in the list, with role "system":
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "Respond in all lowercase, with some silly emoji."},
        {"role": "user", "content": "A 1-sentence encouragement for students reaching the end of the semester."}
    ],
    extra_body=NO_THINKING,
)
print(response.choices[0].message.content)

you absolutely crushed it this semester and your brain deserves a nap pillow right now! 🧠💤🌟
Two basic filesystem tools, using the pathlib code from earlier:
from pathlib import Path

def list_folder(path: str) -> str:
    """List the files and folders inside a folder."""
    items = [p.name for p in Path(path).iterdir()]
    return "\n".join(items)

def read_file(path: str, line_numbers: bool = False) -> str:
    """Read a file's contents, optionally with line numbers."""
    text = Path(path).read_text()
    if line_numbers:
        lines = text.splitlines()
        return "\n".join(f"{i+1}: {line}" for i, line in enumerate(lines))
    return text

The OpenAI API wants a JSON description of each tool, with a schema for its arguments.
tools = [
    {
        "type": "function",
        "function": {
            "name": "list_folder",
            "description": "List the files and folders inside a folder.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Path to the folder"}
                },
                "required": ["path"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read a file's contents, optionally prefixed with line numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Path to the file"},
                    "line_numbers": {"type": "boolean", "description": "Include line numbers?"},
                },
                "required": ["path"],
            },
        },
    },
]

The llm library generated this for us automatically from the function’s type hints; here we write it by hand. (Tip: ask an LLM to do it for you!)
Ask the model, run any tools it requests, feed results back, repeat until it’s done.
import json

data_dir = "/Users/ka37/cs108-example-data"  # the example folder used in this demo (value taken from the output below)

messages = [{"role": "user", "content":
    f"What files are in {data_dir}? Summarize what one of them says."}]

while True:
    response = client.chat.completions.create(
        model=MODEL, messages=messages, tools=tools,
        extra_body=NO_THINKING)
    message = response.choices[0].message
    messages.append(message)
    if not message.tool_calls:
        break
    for tool_call in message.tool_calls:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)
        print(f"Model called {name}({args})")
        if name == "list_folder":
            result = list_folder(**args)
        elif name == "read_file":
            result = read_file(**args)
        print(f"Tool result:\n{result}")
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })

print(message.content)

Model called list_folder({'path': '/Users/ka37/cs108-example-data'})
Tool result:
file2.txt
file3.txt
file1.txt
The files in `/Users/ka37/cs108-example-data` are:
- file1.txt
- file2.txt
- file3.txt
Here is a summary of **file1.txt**:
It appears to be a simple introductory text file that likely states: "This is file number 1." or serves as a placeholder to demonstrate file handling capabilities in the CS108 course materials.