Parameter Estimation using likelihood functions Tutorial #1


Parameter Estimation using likelihood functions Tutorial #1. This class has been cut and slightly edited from Nir Friedman's full course of 12 lectures, which is available at www.cs.huji.ac.il/~pmai. Changes made by Dan Geiger and Ydo Wexler.

Example: Binomial Experiment. When tossed, a thumbtack can land in one of two positions: Head or Tail. We denote by θ the (unknown) probability P(H). Estimation task: given a sequence of toss samples x[1], x[2], …, x[M], we want to estimate the probabilities P(H) = θ and P(T) = 1 − θ.

Statistical Parameter Fitting. Consider instances x[1], x[2], …, x[M] such that: the set of values that x can take is known; each is sampled from the same distribution; each is sampled independently of the rest. These are i.i.d. (independent, identically distributed) samples. The task is to find a vector of parameters θ that has generated the given data. This parameter vector θ can then be used to predict future data.

The Likelihood Function. How good is a particular θ? It depends on how likely it is to generate the observed data: L_D(θ) = P(D | θ) = Π_m P(x[m] | θ). The likelihood for the sequence H,T,T,H,H is L(θ) = θ·(1−θ)·(1−θ)·θ·θ = θ³(1−θ)². [Figure: plot of L(θ) for θ in [0, 1].]
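As a supplement (not part of the original lecture), here is a minimal Python sketch that evaluates this likelihood on a grid of θ values; the variable names are mine:

import numpy as np

# Observed sequence from the slide: H, T, T, H, H
sequence = ['H', 'T', 'T', 'H', 'H']
n_heads = sequence.count('H')   # N_H = 3
n_tails = sequence.count('T')   # N_T = 2

# L(theta) = theta^N_H * (1 - theta)^N_T, evaluated on a grid of candidate theta values
thetas = np.linspace(0.0, 1.0, 11)
likelihood = thetas ** n_heads * (1 - thetas) ** n_tails

for theta, L in zip(thetas, likelihood):
    print(f"theta = {theta:.1f}   L(theta) = {L:.5f}")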

Sufficient Statistics. To compute the likelihood in the thumbtack example we only require NH and NT (the number of heads and the number of tails): L(θ) = θ^NH (1−θ)^NT. NH and NT are sufficient statistics for the binomial distribution. For example, H,T,T,H,H gives the same likelihood as T,H,H,H,T.

Sufficient Statistics. A sufficient statistic is a function of the data that summarizes the relevant information for the likelihood. [Figure: mapping from datasets to statistics.] Formally, s(D) is a sufficient statistic if, for any two datasets D and D', s(D) = s(D') implies L_D(θ) = L_D'(θ).

Maximum Likelihood Estimation. MLE Principle: choose parameters that maximize the likelihood function. This is one of the most commonly used estimators in statistics and is intuitively appealing. One usually maximizes the log-likelihood function, defined as l_D(θ) = log_e L_D(θ).

Example: MLE in Binomial Data. Applying the MLE principle we get θ̂ = NH / (NH + NT), which coincides with what one would expect. Example: (NH, NT) = (3, 2); the MLE estimate is 3/5 = 0.6. [Figure: plot of L(θ), maximized at θ = 0.6.]
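A two-line Python sketch of this estimate (an illustration, not from the slides):

# MLE for the binomial: theta_hat = N_H / (N_H + N_T)
n_heads, n_tails = 3, 2
theta_mle = n_heads / (n_heads + n_tails)
print(theta_mle)   # 0.6, matching the slide's example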

From Binomial to Multinomial. Suppose X can take the values 1, 2, …, K (for example, a die has K = 6 sides). We want to learn the parameters θ1, θ2, …, θK. Sufficient statistics: N1, N2, …, NK, the number of times each outcome is observed. Likelihood function: L(θ) = Π_k θk^Nk. MLE: θ̂k = Nk / Σ_l Nl. (A case of a die where 1, …, 6 are the possible values.)

Example: Multinomial. Let x = x1x2…xn be a protein sequence. We want to learn the parameters q1, q2, …, q20 corresponding to the frequencies of the 20 amino acids. Sufficient statistics: N1, N2, …, N20, the number of times each amino acid is observed in the sequence. Likelihood function: L(q) = Π_i qi^Ni. MLE: q̂i = Ni / n.
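A small illustrative sketch in Python (the toy sequence below is hypothetical, not from the slides): the multinomial MLE is just the vector of normalized counts.

from collections import Counter

sequence = "MAGTKLVLAG"          # hypothetical toy protein sequence
n = len(sequence)
counts = Counter(sequence)       # N_i: number of times each amino acid is observed

# MLE: q_i = N_i / n
q_mle = {aa: c / n for aa, c in counts.items()}
print(q_mle)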

Is MLE all we need? Suppose that after 10 observations the ML estimate is P(H) = 0.7 for the thumbtack. Would you bet on heads for the next toss? Suppose now that after 10 observations the ML estimate is P(H) = 0.7 for a coin. Would you place the same bet? Solution: the Bayesian approach, which incorporates your subjective prior knowledge. E.g., you may know a priori that some amino acids have the same frequencies and some have low frequencies. How would one use this information?

Bayes' rule. Bayes' rule: P(X | Y) = P(Y | X) P(X) / P(Y), where P(Y) = Σ_x P(Y | X = x) P(X = x). It holds because P(X, Y) = P(X | Y) P(Y) = P(Y | X) P(X).

Example: Dishonest Casino. A casino uses 2 kinds of dice: 99% are fair, and 1% are loaded so that 6 comes up 50% of the time. We pick a die at random and roll it 3 times. We get 3 consecutive sixes. What is the probability that the die is loaded?

Dishonest Casino (cont.). The solution is based on using Bayes' rule and the fact that while P(loaded | 3 sixes) is not known, the other three terms in Bayes' rule are known, namely: P(3 sixes | loaded) = (0.5)³ = 0.125, P(loaded) = 0.01, and P(3 sixes) = P(3 sixes | loaded) P(loaded) + P(3 sixes | not loaded) (1 − P(loaded)).

Dishonest Casino (cont.). Putting the numbers in: P(loaded | 3 sixes) = (0.5)³ · 0.01 / [(0.5)³ · 0.01 + (1/6)³ · 0.99] ≈ 0.21, so even after three consecutive sixes the die is still more likely to be fair.
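A quick numeric check of this computation (a Python sketch, not part of the original slides):

# Bayes' rule for the dishonest casino example
p_loaded = 0.01                      # prior P(loaded)
p_3sixes_loaded = 0.5 ** 3           # P(3 sixes | loaded) = 0.125
p_3sixes_fair = (1 / 6) ** 3         # P(3 sixes | not loaded)

p_3sixes = p_3sixes_loaded * p_loaded + p_3sixes_fair * (1 - p_loaded)
p_loaded_given_3sixes = p_3sixes_loaded * p_loaded / p_3sixes
print(round(p_loaded_given_3sixes, 3))   # approximately 0.214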

Biological Example: Proteins. Extracellular proteins have a slightly different amino acid composition than intracellular proteins. From a large enough protein database (SWISS-PROT), we can get the following: p(ai | int), the frequency of amino acid ai for intracellular proteins; p(ai | ext), the frequency of amino acid ai for extracellular proteins; p(int), the probability that any new sequence is intracellular; p(ext), the probability that any new sequence is extracellular.

Biological Example: Proteins (cont.). What is the probability that a given new protein sequence x = x1x2…xn is extracellular? Assuming that every sequence is either extracellular or intracellular (but not both), we can write p(ext) = 1 − p(int). Thus, p(x) = p(x | ext) p(ext) + p(x | int) p(int).

Biological Example: Proteins (cont.). Using conditional probability (and treating the positions as independent) we get p(x | ext) = Π_i p(xi | ext) and p(x | int) = Π_i p(xi | int). By Bayes' theorem, p(ext | x) = p(x | ext) p(ext) / p(x). The probabilities p(int), p(ext) are called the prior probabilities. The probability p(ext | x) is called the posterior probability.
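To make the rule concrete, here is a Python sketch; the frequency tables below are made-up placeholders, not real SWISS-PROT values.

import math

p_ext_prior, p_int_prior = 0.4, 0.6          # priors (see the later slide)

# Placeholder amino-acid frequencies; real values would be estimated from SWISS-PROT.
p_ai_ext = {'A': 0.10, 'G': 0.05, 'K': 0.07}
p_ai_int = {'A': 0.08, 'G': 0.07, 'K': 0.06}

x = "AGK"                                    # toy sequence

# Work in log space: log p(x | class) + log p(class)
log_ext = math.log(p_ext_prior) + sum(math.log(p_ai_ext[a]) for a in x)
log_int = math.log(p_int_prior) + sum(math.log(p_ai_int[a]) for a in x)

# p(ext | x) = p(x|ext) p(ext) / (p(x|ext) p(ext) + p(x|int) p(int))
posterior_ext = 1.0 / (1.0 + math.exp(log_int - log_ext))
print(posterior_ext)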

Bayesian Parameter Estimation (step by step). View the unknown parameter θ as a random variable. Use probability to quantify the uncertainty about the unknown parameter. Updating the parameter follows from the rules of probability, using Bayes' rule: P(θ | D) = P(D | θ) P(θ) / P(D), i.e., posterior of θ ∝ likelihood × prior of θ. The updated parameter is set to be the expected value of the posterior.

Example: Binomial Data Revisited. Prior: say, a uniform distribution for θ in [0, 1], P(θ) = 1 (say, due to seeing one head h = 1 and one tail t = 1 before seeing the data). Then for data (NH, NT) = (4, 1): the MLE for θ is 4/5 = 0.8, while Bayesian estimation gives E[θ | D] = (NH + 1) / (NH + NT + 2) = 5/7 ≈ 0.71 (a Dirichlet integral). [Figure: posterior of θ over [0, 1].]
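A side-by-side sketch of the two estimates (Python; assumes the uniform prior above, i.e. one imaginary head and one imaginary tail):

n_heads, n_tails = 4, 1        # observed data (N_H, N_T)
h, t = 1, 1                    # imaginary counts encoding the uniform prior

theta_mle = n_heads / (n_heads + n_tails)                   # 4/5 = 0.8
theta_bayes = (n_heads + h) / (n_heads + n_tails + h + t)   # 5/7, about 0.714
print(theta_mle, theta_bayes)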

The multinomial case. The hyper-parameters α1, …, αK can be thought of as "imaginary" counts from our prior experience, with imaginary sample size α1 + … + αK. The Bayesian estimate is then θ̂k = (Nk + αk) / Σ_l (Nl + αl), compared with the MLE Nk / Σ_l Nl. The larger the imaginary sample size, the more confident we are in our prior and the more it influences the outcome (Bayesian estimation vs. MLE).
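A sketch of this estimate for a die (Python; the counts and hyper-parameters below are illustrative):

counts = [10, 2, 3, 1, 0, 4]   # N_1..N_6: observed rolls of each face (toy data)
alphas = [1, 1, 1, 1, 1, 1]    # hyper-parameters: "imaginary" counts

total = sum(n + a for n, a in zip(counts, alphas))
theta_bayes = [(n + a) / total for n, a in zip(counts, alphas)]
print(theta_bayes)             # note no face gets probability 0, unlike the MLE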

Protein Example Continued (estimation of θext). Let's assume that from the biological literature we know that 40% of the proteins are extracellular and 60% are intracellular, so θext = p(ext) = 0.4. Recall that we had p(ext | x) = p(x | ext) p(ext) / p(x). Now, given a protein sequence x, we have all the information we need to calculate p(ext | x).

Protein Example (cont.) (updating θext). To update θext after seeing the sequence x, we need to decide what weight to give our prior knowledge. If our prior knowledge is worth seeing M sequences, then the update is the weighted average θext ← (M·θext + p(ext | x)) / (M + 1).
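A sketch of this update rule (Python; the numbers are illustrative, and the weighted-average form is my reading of the slide):

def update_theta_ext(theta_ext, p_ext_given_x, M):
    # Weighted average: the prior estimate counts as M sequences,
    # the posterior for the newly seen sequence x counts as one.
    return (M * theta_ext + p_ext_given_x) / (M + 1)

theta_ext = 0.4                                              # prior estimate p(ext)
print(update_theta_ext(theta_ext, p_ext_given_x=0.9, M=10))  # about 0.445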