
1 Contrastive Divergence Learning Geoffrey E. Hinton A discussion led by Oliver Woodford

2 Contents
–Maximum Likelihood learning
–Gradient descent based approach
–Markov Chain Monte Carlo sampling
–Contrastive Divergence
–Further topics for discussion:
 –Result biasing of Contrastive Divergence
 –Product of Experts
 –High-dimensional data considerations

3 Maximum Likelihood learning
Given:
–Probability model: $p(x;\Theta) = \frac{1}{Z(\Theta)} f(x;\Theta)$, where $\Theta$ are the model parameters and $Z(\Theta)$, the partition function, is defined as $Z(\Theta) = \int f(x;\Theta)\,dx$
–Training data: $X = \{x_k\}_{k=1}^{K}$
Aim:
–Find the $\Theta$ that maximizes the likelihood of the training data: $p(X;\Theta) = \prod_{k=1}^{K} \frac{1}{Z(\Theta)} f(x_k;\Theta)$
–Or, the $\Theta$ that minimizes the negative log of the likelihood: $E(X;\Theta) = K \log Z(\Theta) - \sum_{k=1}^{K} \log f(x_k;\Theta)$
Toy example (known result): $f(x;\Theta) = \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, $\Theta = \{\mu,\sigma\}$, $Z(\Theta) = \sigma\sqrt{2\pi}$
(A small numerical sketch of these definitions follows below.)
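As a concrete illustration of the definitions above, here is a minimal Python/NumPy sketch of the toy model's negative log-likelihood; the synthetic data and the function name neg_log_likelihood are illustrative assumptions, not part of the original slides.

    import numpy as np

    def neg_log_likelihood(X, mu, sigma):
        """E(X; Theta) = K log Z(Theta) - sum_k log f(x_k; Theta) for the toy Gaussian."""
        K = len(X)
        log_Z = np.log(sigma * np.sqrt(2 * np.pi))      # known partition function
        log_f = -(X - mu) ** 2 / (2 * sigma ** 2)       # log of the unnormalised model
        return K * log_Z - np.sum(log_f)

    rng = np.random.default_rng(0)
    X = rng.normal(3.0, 2.0, size=1000)                 # synthetic training data
    print(neg_log_likelihood(X, 3.0, 2.0))              # near the generating parameters
    print(neg_log_likelihood(X, 0.0, 1.0))              # a poorer fit gives a larger E

Evaluating $E$ at the generating parameters gives a smaller value than at a mismatched $\Theta$; this is the quantity the following slides set out to minimize.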

4 Maximum Likelihood
Method:
–$\frac{\partial E(X;\Theta)}{\partial\Theta} = 0$ at the minimum, where
  $\frac{1}{K}\frac{\partial E(X;\Theta)}{\partial\Theta} = \frac{\partial \log Z(\Theta)}{\partial\Theta} - \frac{1}{K}\sum_{k=1}^{K}\frac{\partial \log f(x_k;\Theta)}{\partial\Theta} = \frac{\partial \log Z(\Theta)}{\partial\Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_X$
  and $\langle\cdot\rangle_X$ is the expectation of $\cdot$ given the data distribution.
–Toy example:
  $\frac{1}{K}\frac{\partial E(X;\Theta)}{\partial\Theta} = \left\langle \frac{\partial}{\partial\Theta}\left( \log(\sigma\sqrt{2\pi}) + \frac{(x-\mu)^2}{2\sigma^2} \right) \right\rangle_X$
  $\frac{1}{K}\frac{\partial E(X;\Theta)}{\partial\mu} = -\left\langle \frac{x-\mu}{\sigma^2} \right\rangle_X = 0 \;\Rightarrow\; \mu = \langle x \rangle_X$
  $\frac{1}{K}\frac{\partial E(X;\Theta)}{\partial\sigma} = \frac{1}{\sigma} - \left\langle \frac{(x-\mu)^2}{\sigma^3} \right\rangle_X = 0 \;\Rightarrow\; \sigma = \sqrt{\left\langle (x-\mu)^2 \right\rangle_X}$
  (a numerical check follows below).
–Let's assume that there is no such analytical solution in general…
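A quick numerical check of the closed-form result (again a sketch with made-up data, not from the slides): at $\mu = \langle x\rangle_X$ and $\sigma = \sqrt{\langle(x-\mu)^2\rangle_X}$ both partial derivatives should vanish.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(3.0, 2.0, size=1000)

    mu = X.mean()                                    # mu = <x>_X
    sigma = np.sqrt(np.mean((X - mu) ** 2))          # sigma = sqrt(<(x - mu)^2>_X)

    d_mu = -np.mean((X - mu) / sigma ** 2)                        # (1/K) dE/dmu
    d_sigma = 1 / sigma - np.mean((X - mu) ** 2 / sigma ** 3)     # (1/K) dE/dsigma
    print(d_mu, d_sigma)                             # both zero up to rounding error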

5 Gradient descent-based approach
–Move a fixed step size, $\eta$, in the direction of steepest gradient. (Not line search – see why later.)
–This gives the following parameter update equation:
  $\Theta_{t+1} = \Theta_t - \eta\,\frac{\partial E(X;\Theta_t)}{\partial\Theta_t} = \Theta_t - \eta\left( \frac{\partial \log Z(\Theta_t)}{\partial\Theta_t} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_X \right)$
  (the constant factor $K$ has been absorbed into $\eta$; a sketch of this update for the toy example follows below).
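For the toy Gaussian, $Z(\Theta)$ is known, so this update can be run exactly; a minimal sketch, where the starting point, step size, and iteration count are arbitrary choices of ours:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(3.0, 2.0, size=1000)

    mu, sigma, eta = 0.0, 1.0, 0.1                   # arbitrary start and step size
    for t in range(500):
        # dlogZ/dmu = 0 and dlogZ/dsigma = 1/sigma, since Z = sigma * sqrt(2*pi)
        grad_mu = 0.0 - np.mean((X - mu) / sigma ** 2)
        grad_sigma = 1 / sigma - np.mean((X - mu) ** 2 / sigma ** 3)
        mu -= eta * grad_mu
        sigma -= eta * grad_sigma

    print(mu, sigma)                                 # approaches the sample mean and std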

6 Gradient descent-based approach
–Recall $Z(\Theta) = \int f(x;\Theta)\,dx$. Sometimes this integral will be algebraically intractable.
–This means we can calculate neither $E(X;\Theta)$ nor $\frac{\partial \log Z(\Theta)}{\partial\Theta}$ (hence no line search).
–However, with some clever substitution (spelled out below)…
  $\frac{\partial \log Z(\Theta)}{\partial\Theta} = \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{p(x;\Theta)}$
–so
  $\Theta_{t+1} = \Theta_t - \eta\left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_{p(x;\Theta_t)} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_X \right)$
  where $\langle\cdot\rangle_{p(x;\Theta)}$ can be estimated numerically.
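The substitution is the standard log-derivative identity; writing it out in full (added here for completeness, not verbatim from the slides):

$$
\frac{\partial \log Z(\Theta)}{\partial\Theta}
= \frac{1}{Z(\Theta)}\int \frac{\partial f(x;\Theta)}{\partial\Theta}\,dx
= \int \frac{f(x;\Theta)}{Z(\Theta)}\,\frac{\partial \log f(x;\Theta)}{\partial\Theta}\,dx
= \int p(x;\Theta)\,\frac{\partial \log f(x;\Theta)}{\partial\Theta}\,dx
= \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{p(x;\Theta)}
$$

using $\frac{\partial f}{\partial\Theta} = f\,\frac{\partial \log f}{\partial\Theta}$.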

7 Markov Chain Monte Carlo sampling
–To estimate $\left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{p(x;\Theta)}$ we must draw samples from $p(x;\Theta)$.
–Since $Z(\Theta)$ is unknown, we cannot draw samples directly from the cumulative distribution curve.
–Markov Chain Monte Carlo (MCMC) methods turn random samples into samples from the proposed distribution, without knowing $Z(\Theta)$.
–Metropolis algorithm (a code sketch follows below):
 –Perturb the samples, e.g. $x'_k = x_k + \mathrm{randn}(\mathrm{size}(x_k))$
 –Reject $x'_k$ if $\frac{p(x'_k;\Theta)}{p(x_k;\Theta)} = \frac{f(x'_k;\Theta)}{f(x_k;\Theta)} < \mathrm{rand}(1)$
 –Repeat the cycle for all samples until the distribution stabilizes.
–Stabilization takes many cycles, and there is no accurate criterion for determining when it has occurred.
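A minimal Python/NumPy sketch of one such Metropolis cycle (the function name metropolis_cycle and its signature are our own, not from the slides); only the unnormalised $f$ is needed, because $Z(\Theta)$ cancels in the acceptance ratio.

    import numpy as np

    def metropolis_cycle(x, unnorm_f, theta, rng):
        """One Metropolis cycle over all samples: perturb, then accept or reject."""
        x_prop = x + rng.standard_normal(x.shape)              # x'_k = x_k + randn(size(x_k))
        ratio = unnorm_f(x_prop, theta) / unnorm_f(x, theta)   # Z(Theta) cancels in this ratio
        accept = rng.random(x.shape) < ratio                   # reject x'_k if ratio < rand(1)
        return np.where(accept, x_prop, x)

Repeating this cycle many times (in principle, until stabilization) turns an arbitrary set of samples into approximate samples from $p(x;\Theta)$.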

8 Markov Chain Monte Carlo sampling
–Let us use the training data, $X$, as the starting point for our MCMC sampling.
–Notation: $X^0_\Theta$ - the training data; $X^n_\Theta$ - the training data after $n$ cycles of MCMC; $X^\infty_\Theta$ - samples from the proposed distribution with parameters $\Theta$.
–Our parameter update equation becomes:
  $\Theta_{t+1} = \Theta_t - \eta\left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_{X^\infty_{\Theta_t}} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_{X^0_{\Theta_t}} \right)$

9 Contrastive divergence
–Let us make the number of MCMC cycles per iteration small, say even 1.
–Our parameter update equation is now:
  $\Theta_{t+1} = \Theta_t - \eta\left( \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_{X^1_{\Theta_t}} - \left\langle \frac{\partial \log f(x;\Theta_t)}{\partial\Theta_t} \right\rangle_{X^0_{\Theta_t}} \right)$
–Intuition: one MCMC cycle is enough to move the data from the target distribution towards the proposed distribution, and so suggest which direction the proposed distribution should move in to better model the training data. (A code sketch of the full CD-1 loop follows below.)
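Putting the pieces together for the toy Gaussian, a hedged sketch of CD-1 learning in Python/NumPy; the data, step size, and iteration count are illustrative choices, and "CD-1" simply means one Metropolis cycle per parameter update.

    import numpy as np

    rng = np.random.default_rng(0)
    X0 = rng.normal(3.0, 2.0, size=1000)             # training data X^0 (true mu=3, sigma=2)

    def grad_log_f(x, mu, sigma):
        """d log f / d mu and d log f / d sigma for f = exp(-(x-mu)^2 / (2 sigma^2))."""
        return (x - mu) / sigma ** 2, (x - mu) ** 2 / sigma ** 3

    mu, sigma, eta = 0.0, 1.0, 0.1
    for t in range(2000):
        # One Metropolis cycle starting from the training data gives X^1_Theta
        X_prop = X0 + rng.standard_normal(X0.shape)
        log_ratio = (-(X_prop - mu) ** 2 + (X0 - mu) ** 2) / (2 * sigma ** 2)
        X1 = np.where(rng.random(X0.shape) < np.exp(log_ratio), X_prop, X0)

        # CD-1 update: <d log f>_{X^1} - <d log f>_{X^0} for each parameter
        dmu1, dsig1 = grad_log_f(X1, mu, sigma)
        dmu0, dsig0 = grad_log_f(X0, mu, sigma)
        mu -= eta * (dmu1.mean() - dmu0.mean())
        sigma -= eta * (dsig1.mean() - dsig0.mean())
        sigma = max(sigma, 1e-3)                     # keep sigma positive

    print(mu, sigma)     # should end up near the sample mean and std, up to CD noise/bias

Nothing in this loop ever evaluates $Z(\Theta)$ or runs the chain to equilibrium, which is the practical appeal of the method.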

10 Contrastive divergence bias
–We assume (up to the factor of $K$):
  $\frac{\partial E(X;\Theta)}{\partial\Theta} \approx \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{X^1_\Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{X^0_\Theta}$
–ML learning is equivalent to minimizing $X^0_\Theta \,\|\, X^\infty_\Theta$, where $P \,\|\, Q = \int p(x) \log \frac{p(x)}{q(x)}\,dx$ (the Kullback-Leibler divergence).
–CD attempts to minimize $X^0_\Theta \,\|\, X^\infty_\Theta - X^1_\Theta \,\|\, X^\infty_\Theta$:
  $\frac{\partial}{\partial\Theta}\left( X^0_\Theta \,\|\, X^\infty_\Theta - X^1_\Theta \,\|\, X^\infty_\Theta \right) = \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{X^1_\Theta} - \left\langle \frac{\partial \log f(x;\Theta)}{\partial\Theta} \right\rangle_{X^0_\Theta} - \frac{\partial X^1_\Theta}{\partial\Theta}\,\frac{\partial\left( X^1_\Theta \,\|\, X^\infty_\Theta \right)}{\partial X^1_\Theta}$
–Usually $\frac{\partial X^1_\Theta}{\partial\Theta}\,\frac{\partial\left( X^1_\Theta \,\|\, X^\infty_\Theta \right)}{\partial X^1_\Theta} \approx 0$, but it can sometimes bias results.
–See "On Contrastive Divergence Learning", Carreira-Perpiñán & Hinton, AISTATS 2005, for more details.

11 Product of Experts

12 Dimensionality issues

