This lab exercise covers discrete probabilistic inference using the full joint probability distribution and Bayes’ rule.
Creating the full joint probability distribution is rarely a tractable approach to probabilistic inference, but it can be helpful in understanding the nature of probabilities and of the inference process.
Do the following exercises based on the AIMA text’s Toothache example given in Figure 13.3.
Pull u04probability/joint.py and note that it implements and runs the computation of P(Cavity|toothache). Make sure you know how it represents the joint probability distribution and computes particular probabilities. Note that the bold P and the capital C indicate that the code computes a probability distribution, written as ⟨P(cavity|toothache), P(¬cavity|toothache)⟩; the +toothache is given as evidence, so the system doesn’t consider ¬toothache.
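The mechanics can be seen in a minimal sketch (joint.py’s actual representation may differ): store the full joint as a dictionary keyed by (cavity, toothache, catch) truth values, using the entries from AIMA Figure 13.3, and sum matching entries to answer queries.

```python
# Full joint distribution over (cavity, toothache, catch),
# with entries taken from AIMA Figure 13.3.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def p(predicate):
    """Sum the joint entries whose (cavity, toothache, catch) event matches."""
    return sum(pr for event, pr in joint.items() if predicate(*event))

# P(Cavity | toothache) = <P(cavity|toothache), P(¬cavity|toothache)>
p_toothache = p(lambda cav, tooth, catch: tooth)
dist = [p(lambda cav, tooth, catch: cav == value and tooth) / p_toothache
        for value in (True, False)]
print(dist)  # ≈ [0.6, 0.4]
```

The key idea is that any query can be answered by summing the relevant joint entries and normalizing, which is exactly why the table must be complete.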
Compute the value of P(Cavity|catch). Work this out by hand first, and then verify your answer in code.
Create a new joint probability distribution that implements the flipping of two coins, and then compute P(Coin2|coin1=heads). Does the answer confirm what you believe to be true about the probabilities of flipping coins?
Can you see now why the full joint is generally not used in probabilistic systems?
Save your program in lab_1.py and include a summary of your hand-work in the program comments.
Not all random variables influence each other. If you have time, do the following exercises for extra credit, building on your code from the previous exercise. Modify the domain to include a new random variable Rain, which takes on values rain or ¬rain, and then do the following:
Compute the value of P(Toothache|rain). Again, compute this value on pencil and paper, and then verify your answer by adding code to compute the specified value.
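One way to sketch the extension (the prior P(rain) = 0.4 below is an arbitrary illustrative choice, and the structure is only one possible design): since Rain is independent of the dental variables, each Figure 13.3 entry splits into a rain and a ¬rain entry by multiplying by the prior.

```python
P_RAIN = 0.4  # assumed prior for illustration; any value works, since Rain is independent
dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Extend each (cavity, toothache, catch) entry with a rain value.
joint = {(cav, tooth, catch, rain): pr * (P_RAIN if rain else 1 - P_RAIN)
         for (cav, tooth, catch), pr in dental.items()
         for rain in (True, False)}

# P(toothache | rain) by summing and normalizing joint entries.
p_rain = sum(pr for (_, _, _, r), pr in joint.items() if r)
p_toothache_given_rain = sum(pr for (_, t, _, r), pr in joint.items()
                             if t and r) / p_rain
print(p_toothache_given_rain)  # ≈ 0.2, the same as P(toothache)
```

Because Rain is independent, conditioning on rain should leave the toothache probability at its unconditional value, which is what your hand-work should show as well.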
Save your program in lab_2.py.
Bayes’ rule is the basis of most probabilistic methods used in AI. We suggest that you record your hand-worked solutions in a simple text file; for example, here is the solution to one of the class exercises:
P(snowy) = 0.3 + 0.05 = 0.35
P(snowy | coats) = P(snowy ∧ coats) / P(coats) = 0.3 / (0.3 + 0.2 + 0.02 + 0.01) = 0.3 / 0.53 ≈ 0.566
Use probability theory and Bayes’ rule to compute the following (manually, showing all steps):
Drug testing¹ — Given that a drug test is 99% sensitive (i.e., drug users get positive results 99% of the time) and 98% specific (i.e., non-drug users get negative results 98% of the time), and that 8.9% of Americans are drug users of some sort, compute the following probabilities:
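After working the requested probabilities by hand, a few lines of Python can check the arithmetic; the sketch below (variable names are illustrative) applies Bayes’ rule to one probability this setup typically asks for, P(user | positive).

```python
sensitivity = 0.99   # P(positive | user)
specificity = 0.98   # P(negative | non-user)
p_user = 0.089       # prior: P(user)

# P(positive) by the law of total probability.
p_positive = sensitivity * p_user + (1 - specificity) * (1 - p_user)

# Bayes' rule: P(user | positive).
p_user_given_pos = sensitivity * p_user / p_positive
print(round(p_user_given_pos, 3))  # ≈ 0.829
```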
Breast cancer² — 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies.
A woman in this age group is found to have a positive mammography in a routine screening. What are the chances that she has/doesn't have cancer?
According to Yudkowsky, only 15% of doctors have the right intuition on this problem.
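Again, do the hand derivation first; a sketch like the following (illustrative names) can then confirm your answer via Bayes’ rule.

```python
p_cancer = 0.01              # prior: P(cancer)
p_pos_given_cancer = 0.80    # P(positive | cancer)
p_pos_given_healthy = 0.096  # P(positive | ¬cancer)

# P(positive) by total probability, then Bayes' rule for P(cancer | positive).
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # ≈ 0.078
```

The posterior is under 8%, which is why intuition (and, per Yudkowsky, most doctors) gets this problem wrong.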
Store this in lab_3.txt.
We will grade your work according to the following criteria:
¹ This example is adapted from Wikipedia’s Bayes’ theorem entry.
² This example is taken from E. Yudkowsky, An Intuitive Explanation of Bayes’ Theorem.
See the policies page for lab due-dates and times.