The subjectivist (i.e. Bayesian) states his judgements, whereas the objectivist sweeps them under the carpet by calling assumptions knowledge, and he basks in the glorious objectivity of science. — attributed to I. J. Good.

This lab exercise covers discrete probabilistic inference using the full joint probability distribution and Bayes’ rule.

Inference using the Full Joint Probability Distribution

Creating the full joint probability distribution is rarely a tractable approach to probabilistic inference, but it can be helpful in understanding the nature of probabilities and of the inference process.

Exercise 4.1

Do the following exercises based on the AIMA text’s Toothache example given in Figure 13.3.

  1. Pull u04probability/joint.py and note that it implements and runs the computation of P(Cavity|toothache). Make sure you know how it represents the joint probability distribution and computes particular probabilities.

    Note that the bold P and the capital C indicate that the code computes a probability distribution, written as <P(cavity|toothache), P(¬cavity|toothache)>; the evidence +toothache is given, so the system doesn’t consider ¬toothache.

  2. Compute the value of P(Cavity|catch):

    1. First, compute it by hand.
    2. Verify your answer (and the AIMA implementation) by adding code to compute the specified value.
  3. Create a new full joint probability distribution that implements flipping two coins and then compute P(Coin2|coin1=heads). Does the answer confirm what you believe to be true about the probabilities of flipping coins?
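The two-coin distribution in item 3 can be sketched independently of the AIMA code. This is one possible representation, not the lab’s required API: the dict-of-outcome-tuples layout and the helper name `conditional` are choices of this sketch.

```python
from itertools import product

# Joint distribution over two fair, independent coins, stored as a dict
# keyed by (coin1, coin2) tuples. Each of the 4 outcomes has probability 0.25.
joint = {outcome: 0.25 for outcome in product(['heads', 'tails'], repeat=2)}

def conditional(joint, var_index, evidence_index, evidence_value):
    """Distribution over the variable at var_index, given the evidence."""
    dist = {}
    for outcome, p in joint.items():
        if outcome[evidence_index] == evidence_value:
            dist[outcome[var_index]] = dist.get(outcome[var_index], 0.0) + p
    total = sum(dist.values())  # P(evidence); normalize by it
    return {value: p / total for value, p in dist.items()}

# P(Coin2 | coin1=heads) is <0.5, 0.5>: the flips are independent.
print(conditional(joint, 1, 0, 'heads'))
```

Note that the conditional distribution equals the prior over Coin2, which is exactly what independence means.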

Can you see now why the full joint is generally not used in probabilistic systems?

Save your program in lab_1.py and include a summary of your hand-work in the program comments.

Not all random variables influence each other.

Exercise 4.2 (Extra credit)

If you have time, do the following exercises based on your code from the previous exercise for extra credit.

  1. Modify the domain to include a new random variable Rain, which takes on values rain or not rain, and then do the following:

    1. How many entries does your full joint probability distribution contain now?
    2. Do the probabilities sum up to 1.0? Should they? Explain why or why not.
    3. Can you use values other than T or F for the random variables? Explain why or why not.
    4. Did the probabilities you chose indicate that the value of Rain is independent of the original values?
  2. Compute the value of P(Toothache|rain). Again, compute this value with pencil and paper, and then verify your answer by adding code to compute the specified value.
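The extension above can be sketched as follows. The eight base entries are the Figure 13.3 numbers; the value P(rain) = 0.2 and the assumption that Rain is independent of the other variables are illustrative choices of this sketch, not part of the AIMA example — your own lab may use different numbers.

```python
# (toothache, cavity, catch) -> probability, from AIMA Figure 13.3.
base = {
    (True, True, True): 0.108, (True, True, False): 0.012,
    (False, True, True): 0.072, (False, True, False): 0.008,
    (True, False, True): 0.016, (True, False, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}
P_RAIN = 0.2  # assumed prior for the new Rain variable

# Extend each entry with rain/not-rain: 8 entries become 16.
joint = {}
for outcome, p in base.items():
    joint[outcome + (True,)] = p * P_RAIN
    joint[outcome + (False,)] = p * (1 - P_RAIN)

assert len(joint) == 16
assert abs(sum(joint.values()) - 1.0) < 1e-9  # still sums to 1.0

# P(toothache | rain): sum the matching entries, normalize by P(rain).
p_toothache_and_rain = sum(p for (t, c, k, r), p in joint.items() if t and r)
p_rain = sum(p for (t, c, k, r), p in joint.items() if r)
print(p_toothache_and_rain / p_rain)  # ~0.2, same as P(toothache)
```

Because Rain was constructed independently here, conditioning on rain does not change P(Toothache); if your chosen probabilities are not independent, the two values will differ.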

Save your program in lab_2.py.

Inference using Bayes’ Rule

Bayes’ rule is the basis of most probabilistic methods used in AI. We suggest that you record your solutions manually using a simple text file, e.g., here is the solution to one of the class exercises:

P(snowy) = 0.3 + 0.05 = 0.35
P(snowy | coats)
    = P(snowy^coats)/P(coats)
    =  0.3 / (0.3 + 0.2 + 0.02 + 0.01)
    = 0.3 / 0.53
    = 0.566
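Although the solutions themselves should be recorded manually, a few lines of Python can double-check the arithmetic. The individual joint entries (0.3, 0.2, 0.02, 0.01, 0.05) below are taken from the worked solution above.

```python
# Check the arithmetic in the snowy/coats solution.
p_snowy_and_coats = 0.3
p_coats = 0.3 + 0.2 + 0.02 + 0.01   # sum over all weather values with coats
p_snowy = 0.3 + 0.05                # snowy, with and without coats

print(round(p_snowy, 2))                      # 0.35
print(round(p_snowy_and_coats / p_coats, 3))  # 0.566
```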
Exercise 4.3

Use probability theory and Bayes’ rule to compute the following (manually, showing all steps):

  1. Drug testing¹ — Given that a drug test is 99% sensitive (i.e., drug users get positive results 99% of the time) and 98% specific (i.e., non-drug users get negative results 98% of the time), and that 8.9% of Americans are drug users of some sort, compute the following probabilities:

    1. P(User)
    2. P(test | user)
    3. P(¬test | user)
    4. P(test | ¬user)
    5. P(User | test)
  2. Breast cancer² — 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies.

    A woman in this age group is found to have a positive mammography in a routine screening. What are the chances that she has/doesn't have cancer?

    According to Yudkowsky, only 15% of doctors have the right intuition on this problem.
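Since the exercise asks for pencil-and-paper work, code like the following is only useful for checking your answers afterward. The helper name `bayes` is this sketch’s own; it just applies Bayes’ rule with the law of total probability in the denominator.

```python
def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) expanded over H and ¬H."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Drug testing: base rate 0.089, sensitivity 0.99,
# false-positive rate 1 - specificity = 0.02.
print(bayes(0.089, 0.99, 0.02))   # ~0.829

# Breast cancer: base rate 0.01, sensitivity 0.80, false-positive rate 0.096.
print(bayes(0.01, 0.80, 0.096))   # ~0.078
```

The low posterior in the cancer case (under 8% despite the “positive” result) is the point of Yudkowsky’s essay: the small base rate dominates the test’s accuracy.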

Store this in lab_3.txt.

Checking in

We will grade your work according to the following criteria:

¹ This example adapted from Wikipedia’s Bayes’ theorem entry.
² This example taken from E. Yudkowsky, An Intuitive Explanation of Bayes’ Theorem.

See the policies page for lab due-dates and times.