In pretty much all of my scientific career, hidden Markov models and filters have played a major role. – Sebastian Thrun, Stanford AI course videos.

Implementing Simple Hidden Markov Models

You start by using Michael Hamilton’s HMM module to implement a simple HMM.

Exercise 7.1

Do the following exercises based on AIMA’s rainy-umbrella HMM example given in Figure 15.2.

  1. Download the following sample code: lab1.py. Be sure to read through the comments carefully.

  2. Run the code and verify that it produces the values computed in the text for the Forward algorithm. See the comments in the code itself for detailed explanations.

  3. Consider the following additional questions (a minimal filtering sketch follows this list):

    1. What happens to the filtered probabilities for longer sequences of umbrella observations? Does this make sense?
    2. How about for longer sequences of non-umbrella observations?
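
To experiment beyond Hamilton's module, here is a minimal NumPy sketch of the same Forward computation on the umbrella world of Figure 15.2; it is our own illustration, not the lab code, though the transition and sensor values (0.7, 0.9, 0.2) are those given in the text:

    # A minimal NumPy sketch of filtering (the Forward algorithm) on the
    # AIMA umbrella world; illustrative only, not Hamilton's module.
    import numpy as np

    T = np.array([[0.7, 0.3],            # P(Rain_t | Rain_{t-1})
                  [0.3, 0.7]])           # state order: [rain, no rain]
    O = {True:  np.array([0.9, 0.2]),    # P(umbrella | Rain_t)
         False: np.array([0.1, 0.8])}    # P(no umbrella | Rain_t)

    def forward(evidence, prior=np.array([0.5, 0.5])):
        """Yield the filtered distribution P(Rain_t | e_1:t) after each step."""
        f = prior
        for e in evidence:
            f = O[e] * (T.T @ f)         # one-step prediction, then evidence weighting
            f = f / f.sum()              # normalize (AIMA's alpha)
            yield f

    # Days 1 and 2 should match the text: ~[0.818, 0.182] and ~[0.883, 0.117].
    for t, f in enumerate(forward([True] * 10), start=1):
        print(t, f.round(3))

Feeding it ever-longer all-umbrella (or all-non-umbrella) sequences and watching where the filtered values settle should help with the questions above.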

You now implement a similar example.[1]

Exercise 7.2
Use Hamilton’s HMM module to build a Bayesian network matching the diagram shown on the right and use its implementation of the Forward algorithm to compute the following probabilities:
  1. P(Rain0)
  2. P(Rain1|happy1)
  3. P(Rain2|happy1, happy2)

Assume that P(rain0) = 0.5. Be sure that you can explain the values you produce (for queries 2 and 3) by deriving them by hand.

We expect you to be able to carry out the FORWARD algorithm by hand.
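
For reference, the FORWARD update is the standard filtering recursion: a one-step prediction through the transition model, weighted by the new evidence and normalized (alpha is the normalizing constant). In LaTeX:

    \mathbf{f}_{1:t+1} = \alpha \, \mathbf{O}_{t+1} \mathbf{T}^{\top} \mathbf{f}_{1:t}
    \qquad\text{i.e.}\qquad
    P(X_{t+1} \mid e_{1:t+1}) = \alpha \, P(e_{t+1} \mid X_{t+1}) \sum_{x_t} P(X_{t+1} \mid x_t) \, P(x_t \mid e_{1:t})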

Exercise 7.3

The HMM given in the previous exercise is a Bayesian network. Can you compute the specified probabilities using the Enumeration-Ask algorithm? If so, demonstrate it (programmatically) and explain which algorithm is a better choice. If not, explain why not.

Be sure that you understand the relationship between Bayesian networks and HMMs before moving on.
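
One self-contained way to explore the question, separate from Hamilton's module, is to unroll the two-step network into its full joint distribution and sum out the hidden variables, which is exactly what Enumeration-Ask does. In the sketch below the transition and sensor probabilities are placeholders; substitute the values from the diagram:

    # Enumeration over the unrolled network Rain0 -> Rain1 -> Rain2, with
    # Happy1 and Happy2 observed. Values marked PLACEHOLDER are assumptions;
    # replace them with the numbers from the diagram.
    from itertools import product

    P_R0 = {True: 0.5, False: 0.5}    # P(Rain0), as given in the lab
    P_R  = {True: 0.7, False: 0.3}    # PLACEHOLDER: P(rain_t | Rain_{t-1})
    P_H  = {True: 0.4, False: 0.9}    # PLACEHOLDER: P(happy_t | Rain_t)

    def joint(r0, r1, r2, h1, h2):
        """Chain-rule factorization of the network's full joint distribution."""
        p = P_R0[r0]
        p *= P_R[r0] if r1 else 1 - P_R[r0]
        p *= P_R[r1] if r2 else 1 - P_R[r1]
        p *= P_H[r1] if h1 else 1 - P_H[r1]
        p *= P_H[r2] if h2 else 1 - P_H[r2]
        return p

    def enumeration_ask(query, evidence):
        """P(query | evidence) by summing the joint over all consistent worlds."""
        dist = {True: 0.0, False: 0.0}
        for r0, r1, r2, h1, h2 in product((True, False), repeat=5):
            world = dict(R0=r0, R1=r1, R2=r2, H1=h1, H2=h2)
            if all(world[var] == val for var, val in evidence.items()):
                dist[world[query]] += joint(r0, r1, r2, h1, h2)
        total = sum(dist.values())
        return {val: p / total for val, p in dist.items()}

    print(enumeration_ask('R0', {}))                       # P(Rain0)
    print(enumeration_ask('R1', {'H1': True}))             # P(Rain1 | happy1)
    print(enumeration_ask('R2', {'H1': True, 'H2': True})) # P(Rain2 | happy1, happy2)

As you compare this with the Forward algorithm, consider how each scales as the sequence of observations grows.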

Smoothing and Most Likely Explanations

Filtering (also known as state estimation) is not the only kind of inference that can profitably be performed on HMMs.

Exercise 7.4

Download the following sample code: lab4.py. Run it and verify that it produces the values computed in the text for the HMM inference algorithms. See the comments in the code itself for detailed explanations.

Consider the following questions (a sketch of both algorithms follows this list):

  1. Smoothing

    1. Does the forward-backward algorithm compute the same value for P(R2|u1,u2) as the forward algorithm? Should it? Why or why not?
    2. Of what value is this smoothing algorithm?
  2. Most likely explanation

    1. What are the predictions for a sequence of six umbrella days? How about six non-umbrella days? Do they make sense?
    2. The hidden state sequence shown in the text matches the sequence of umbrella sightings; is there any sequence of umbrella sightings that leads to a mis-assignment of hidden states?
    3. Of what value would this sort of hindsight algorithm be?
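
As noted above, here is a minimal NumPy sketch of both algorithms on the umbrella world; it is our own illustration (not lab4.py), using the Figure 15.2 parameters, and may be handy for poking at these questions:

    # Minimal sketches of Forward-Backward (smoothing) and Viterbi (most likely
    # explanation) on the AIMA umbrella world; illustrative, not Hamilton's module.
    import numpy as np

    T = np.array([[0.7, 0.3], [0.3, 0.7]])   # P(Rain_t | Rain_{t-1}); [rain, no rain]
    O = {True:  np.array([0.9, 0.2]),        # P(umbrella | Rain_t)
         False: np.array([0.1, 0.8])}
    prior = np.array([0.5, 0.5])

    def forward_backward(evidence):
        """Smoothed distributions P(Rain_t | e_1:T) for every step t."""
        fs, f = [], prior
        for e in evidence:                   # forward pass: filtered messages
            f = O[e] * (T.T @ f)
            f = f / f.sum()
            fs.append(f)
        smoothed, b = [None] * len(evidence), np.ones(2)
        for t in range(len(evidence) - 1, -1, -1):   # backward pass
            s = fs[t] * b
            smoothed[t] = s / s.sum()
            b = T @ (O[evidence[t]] * b)     # fold the evidence at t into b
        return smoothed

    def viterbi(evidence):
        """Most likely hidden state sequence; True means rain."""
        m = O[evidence[0]] * (T * prior[:, None]).max(axis=0)   # max-product init
        back = []
        for e in evidence[1:]:
            trellis = T * m[:, None]         # trellis[i, j]: best path into j via i
            back.append(trellis.argmax(axis=0))
            m = O[e] * trellis.max(axis=0)
        path = [int(m.argmax())]
        for ptr in reversed(back):           # follow the back-pointers
            path.append(int(ptr[path[-1]]))
        path.reverse()
        return [state == 0 for state in path]   # index 0 = rain

    print(forward_backward([True, True]))    # smoothed P(R1|u1,u2) ~ [0.883, 0.117]
    print(viterbi([True] * 6))               # six umbrella days
    print(viterbi([False] * 6))              # six non-umbrella days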

We don’t expect you to be able to carry out these latter algorithms by hand.

Checking in

Submit your source code in Moodle under Lab 7, as specified above. Submit on paper if that’s easier for you.

[1] This example is adapted from Thrun’s happy-grumpy example.
