
1 Introduction to Bayesian inference
POORFISH workshop, Helsinki, 22.-26.5.2006
Samu Mäntyniemi, Fisheries and Environmental Management group (FEM), Department of Biological and Environmental Sciences, University of Helsinki

2 Aims of Bayesian inference
- Provide a measure of uncertainty about the unobserved state of Nature.
- Make quantitative inference about the unobserved state of Nature, based on observed facts, e.g.:
  - Stock size | observed catches
  - Disease status of a patient | blood test result
  - Age structure | catch samples
- Provide a transparent way of
  - updating already accumulated knowledge based on new observations,
  - integrating information from different sources.

3 Probability as a measure of uncertainty
- Probability measures a personal degree of belief in propositions.
- Example propositions:
  - Finland will win the Eurovision Song Contest 2007.
  - Sweden is the world champion of ice hockey in 2006.
  - It rains in Helsinki today.
  - It rained in Helsinki yesterday.
  - At the beginning of 2005 the biomass of the Baltic herring stock was 100 kg.
  - The Finnish fleet will catch more salmon in 2010 than it did in 2006, if the TAC for 2007 and beyond is 200 000.
- P(prop. is true | my information) = 1 : "I am sure that…"
- 0 < P(prop. is true | my information) < 1 : "I am uncertain whether…"

4 Probability calculus as information processor
How does my knowledge about the state of Nature (N) change in the light of new evidence (X)? Three steps are required:
1. Specify what is already known about N before the new information is obtained, P(N): the prior distribution.
2. Specify what is known about the different types of new information (X) before obtaining it, under alternative hypotheses about N, P(X|N): the conditional distribution of the evidence.
3. Record the new information and combine it with the old knowledge, P(N|X): the posterior distribution of N.

5 Introductory example
- You have been to a tropical country where malaria occurs.
- Later you take a malaria test that is claimed to have 95% sensitivity (it gives a positive result if you have the disease) and 98% specificity (it gives a negative result if you don't have the disease).
- Your test result turns out to be positive: should you worry?

6 Prior probabilities
- The probability of catching malaria is (say) 0.1% if preventive drugs are used.
- Our probabilities:
  - P(test+ | malaria+) = 95% : probability of a positive test if one has malaria
  - P(test- | malaria-) = 98%
  - P(test+ | malaria-) = 2%
  - P(malaria+) = 0.1% : probability of having malaria prior to knowing the test result

7 Probabilities
- P(malaria+) = 0.1%, P(malaria-) = 99.9%
- Given malaria+: P(test+) = 95%, P(test-) = 5%
- Given malaria-: P(test-) = 98%, P(test+) = 2%
- P(malaria+ & test+) = 0.1% x 95% = 0.095%
- P(malaria- & test+) = 99.9% x 2% = 1.998%
- Probability of a positive test: 0.095% + 1.998% = 2.093%

8 Probability calculus
Probability of malaria given the positive test result = "malaria positives" / "all positives" = 0.095% / 2.093% ≈ 4.5%
Formally:
P(malaria+ | test+) = P(test+ | malaria+) P(malaria+) / P(test+) = (95% x 0.1%) / 2.093% ≈ 4.5%
This is Bayes' theorem!
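The slide's arithmetic can be checked with a few lines of code. The slides later suggest Excel or R; Python is used here simply as one convenient option, with the numbers taken directly from slides 6-8:

```python
# Bayes' theorem for the malaria test (numbers from slides 6-8)
p_malaria = 0.001          # prior: P(malaria+)
p_pos_given_mal = 0.95     # sensitivity: P(test+ | malaria+)
p_pos_given_no = 0.02      # 1 - specificity: P(test+ | malaria-)

# total probability of a positive test (law of total probability)
p_pos = p_pos_given_mal * p_malaria + p_pos_given_no * (1 - p_malaria)

# posterior: P(malaria+ | test+) by Bayes' theorem
posterior = p_pos_given_mal * p_malaria / p_pos

print(f"P(test+) = {p_pos:.5f}")                 # 0.02093
print(f"P(malaria+ | test+) = {posterior:.3f}")  # ≈ 0.045
```

The result matches the slide: roughly 4.5%, despite the seemingly accurate test.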

9 Inference
Taking into account that the probability of catching malaria is so low, it is not likely that you have malaria, even though your malaria test is positive.

10 Example: population size
1. Specify information about the population size prior to seeing data, P(N). This can be based on the population size in previous years and on information about population parameters from other studies.
2. Specify the probability of observing the data set given each possible population size, P(data | N). These are based on knowledge about the sampling process. Once the data have been observed, these probabilities form the likelihood function for the population size.
3. Once the data set has been observed, compute the probability of each population size given the observed data, P(N | data).
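The three steps above can be sketched on a grid of candidate population sizes. The numbers and the sampling model below are hypothetical, not from the slides: assume each of N animals is detected independently with a known probability p, so the count y of detected animals follows a Binomial(N, p) distribution.

```python
import math

# Hypothetical detection survey (numbers are illustrative only)
p_detect = 0.3                              # known detection probability
y = 10                                      # observed count

# Step 1: prior P(N), here flat over candidate sizes (N must be >= y)
candidates = list(range(y, 101))
prior = [1.0 / len(candidates)] * len(candidates)

# Step 2: likelihood P(y | N) = C(N, y) p^y (1 - p)^(N - y)
def likelihood(N):
    return math.comb(N, y) * p_detect**y * (1.0 - p_detect)**(N - y)

# Step 3: posterior proportional to likelihood x prior, then normalised
unnorm = [likelihood(N) * pr for N, pr in zip(candidates, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

mode = candidates[posterior.index(max(posterior))]
print(f"Most probable N given the data: {mode}")
```

With a flat prior the posterior mode lands near y / p, as one would expect; an informative prior from earlier years would pull it toward previously plausible sizes.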

11 posterior ∝ likelihood x prior
P(N | data) ∝ P(data | N) P(N)

12 Try it yourself!
Implement the malaria example using MS Excel or R. In order to understand how the information processing works, try the following:
1. Assume that the malaria test was repeated and found to be negative. Calculate the probability that you have malaria based on all the information you have (1 pos. & 1 neg. & prior).
2. Then assume that the test was repeated two more times and found to be positive both times. Now calculate the probability that you have malaria based on the four sequential test results (1 pos. & 1 neg. & 1 pos. & 1 pos. & prior). What if you had the results in a different order (like 3 pos. & 1 neg. & prior)?
3. Alter the prior probability of having malaria and see how it affects the resulting inference in the cases with different numbers of repeated tests.
4. Alter the sensitivity and specificity of the test and examine the effects.
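One possible way to set up the sequential updating (the slide asks for Excel or R; a Python sketch is shown here, with the test characteristics from slide 6) is a small function that applies Bayes' theorem once per test result, feeding each posterior in as the next prior:

```python
SENS = 0.95   # P(test+ | malaria+), from slide 6
SPEC = 0.98   # P(test- | malaria-), from slide 6

def update(prior, result):
    """Return P(malaria+ | one test result); result is '+' or '-'."""
    if result == '+':
        like_mal, like_no = SENS, 1.0 - SPEC
    else:
        like_mal, like_no = 1.0 - SENS, SPEC
    evidence = like_mal * prior + like_no * (1.0 - prior)
    return like_mal * prior / evidence

p = 0.001                       # prior before any test
for r in ['+', '-', '+', '+']:  # exercise 2: four sequential results
    p = update(p, r)
print(f"P(malaria+ | + - + +) = {p:.3f}")
```

Because the tests are treated as conditionally independent given the disease status, the final posterior depends only on how many positives and negatives were seen, not on their order, which is what the order question in exercise 2 is probing.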

