
1 Bayesian Sampling and Ensemble Learning in Generative Topographic Mapping
Akio Utsugi, National Institute of Bioscience and Human-technology. Neural Processing Letters, vol. 12, no. 3. Summarized by Jong-Youn Lim

2 Introduction
SOM
- A minimal model for the formation of topology-preserving maps
- An information-processing tool for extracting a hidden smooth manifold from data
- Drawback: no explicit statistical model of the data-generation process
Alternatives
- Elastic net
- Generative topographic mapping (GTM): based on a mixture of spherical Gaussian generators with a constraint on the centroids
© 2001 SNU CSE Artificial Intelligence Lab (SCAI)

3 The hyperparameter search of GTM can use a Gibbs sampler, which works well on small data but is time-consuming on large data. This creates a need for a deterministic algorithm that produces the estimates quickly: ensemble learning, which minimizes the variational free energy of the model, an upper bound on the negative log evidence.

4 Generative topographic mapping
Two versions: an original regression version and a Gaussian process version. The model consists of a spherical Gaussian mixture density and a Gaussian process prior on the centroids.
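The spherical Gaussian mixture density can be sketched as follows; the notation (m centroids w_j on the latent grid, shared noise precision beta) is assumed, since the slide's equation did not survive the transcript.

```python
import numpy as np

def mixture_density(x, W, beta):
    """p(x | W, beta) = (1/m) * sum_j N(x | w_j, beta^{-1} I):
    an equal-weight mixture of spherical Gaussians centred on the m centroids."""
    m, D = W.shape
    sq = np.sum((W - x) ** 2, axis=1)        # squared distance to each centroid
    norm = (beta / (2 * np.pi)) ** (D / 2)   # spherical Gaussian normaliser
    return np.mean(norm * np.exp(-0.5 * beta * sq))
```

The topographic structure of GTM enters through the prior on the centroids W, not through this density itself.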

5 W has a Gaussian prior, enabling Bayesian inference of W. Inference of the hyperparameters h is based on the evidence f(X|h); the maximizer of the evidence is called the generalized maximum likelihood (GML) estimate of h. The approximations used in the hyperparameter search algorithm are valid only on abundant data, so the hyperparameter search is improved using a Gibbs sampler.
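As a toy illustration of GML (a stand-in model, not the GTM evidence itself, which the slides do not reproduce): choose the hyperparameter h that maximizes the evidence f(X|h). For a zero-mean Gaussian with precision h, the log evidence is 0.5 n log h - 0.5 h sum(x_i^2), maximized at h = n / sum(x_i^2).

```python
import numpy as np

def gml_precision(X, grid):
    """Generalized maximum likelihood: return the h on `grid` that
    maximizes the log evidence of a zero-mean Gaussian with precision h."""
    n = len(X)
    def log_evidence(h):
        return 0.5 * n * np.log(h) - 0.5 * h * np.sum(X ** 2)
    return max(grid, key=log_evidence)
```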

6 Gibbs sampler in GTM
Any moment of the posteriors can be obtained to arbitrary precision as an average over a long sample series. The Gibbs sampler is an MCMC method that does not require designing a trial distribution. Conditional posteriors are needed on Y and W; in the conditional posterior on Y, p denotes the posterior selection probabilities of the inner units.
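A sketch of the Gibbs step for Y, assuming Y collects the index of the unit (centroid) responsible for each data point; the closed-form probabilities below follow from the spherical Gaussian components with equal mixing weights.

```python
import numpy as np

def sample_assignments(X, W, beta, rng):
    """One Gibbs step for Y: draw each unit index y_i from its conditional
    posterior p(y_i = j | x_i, W, beta) proportional to
    exp(-0.5 * beta * ||x_i - w_j||^2), i.e. the posterior
    selection probabilities p of the inner units."""
    y = np.empty(len(X), dtype=int)
    for i, x in enumerate(X):
        logits = -0.5 * beta * np.sum((W - x) ** 2, axis=1)
        p = np.exp(logits - logits.max())    # stabilised softmax
        p /= p.sum()
        y[i] = rng.choice(len(W), p=p)
    return y
```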

7 The conditional posterior on W is obtained by normalizing f(X, Y, W | h) (the product of factors 1, 2, and 4).
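Given the assignments, the conditional posterior on each centroid is Gaussian. A sketch under a simplifying assumption: an independent N(0, alpha^{-1} I) prior per centroid, whereas GTM's actual Gaussian prior couples neighbouring centroids to enforce smoothness.

```python
import numpy as np

def sample_centroids(X, y, m, beta, alpha, rng):
    """Gibbs draw for W: with an independent N(0, alpha^{-1} I) prior on each
    centroid, the conditional posterior of centroid j is Gaussian with
    precision beta*n_j + alpha and mean beta*(sum of its points)/precision."""
    D = X.shape[1]
    counts = np.bincount(y, minlength=m)          # n_j: points assigned to unit j
    sums = np.zeros((m, D))
    np.add.at(sums, y, X)                         # per-unit data sums
    prec = beta * counts + alpha
    mean = beta * sums / prec[:, None]
    return mean + rng.normal(size=(m, D)) / np.sqrt(prec)[:, None]
```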

8 Conditional posteriors on the hyperparameters are likewise obtained by normalizing the corresponding factors of the joint density.
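With a conjugate Gamma(a0, b0) prior (an assumption; the slides' priors did not transfer), the conditional posterior of the noise precision beta is again Gamma, so the Gibbs draw is direct:

```python
import numpy as np

def sample_beta(X, W, y, rng, a0=1e-3, b0=1e-3):
    """Gibbs draw for the noise precision: under a Gamma(a0, b0) prior the
    conditional posterior is Gamma(a0 + n*D/2, b0 + residual/2)."""
    n, D = X.shape
    resid = np.sum((X - W[y]) ** 2)          # squared residuals to selected centroids
    shape = a0 + 0.5 * n * D
    rate = b0 + 0.5 * resid
    return rng.gamma(shape, 1.0 / rate)      # numpy's gamma takes scale = 1/rate
```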

9 Ensemble learning in GTM
Ensemble learning is a deterministic algorithm that obtains estimates of the parameters and hyperparameters concurrently. It works with an approximating ensemble density Q and its variational free energy F on a model H. If Q is restricted to a factorial form, a straightforward algorithm for minimizing F is obtained.
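In standard variational notation (assumed here; the slide's equation did not transfer), with θ collecting (Y, W, h), the variational free energy is

```latex
F(Q) = \int Q(\theta)\,\log\frac{Q(\theta)}{f(X,\theta \mid H)}\,d\theta
     = -\log f(X \mid H)
       + \mathrm{KL}\!\left(Q(\theta)\,\middle\|\,f(\theta \mid X, H)\right)
     \;\ge\; -\log f(X \mid H),
```

so minimizing F tightens an upper bound on the negative log evidence, with equality when Q equals the true posterior.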

10 The optimization procedure
- Initial densities are set for the partial ensembles.
- A new density Q(Y) is obtained from the other densities by the factorial update.
- Each of the other partial ensembles is updated by the same formula, with Y and the target variable exchanged.
- These updates of the partial ensembles are repeated until a convergence condition is satisfied.
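The factorial update referred to above is, in standard mean-field form, Q(Y) proportional to exp(E over the other factors of log f(X, Y, W, h)). A minimal sketch of the resulting loop, with beta held fixed and a flat centroid prior in place of GTM's smoothness prior (both simplifying assumptions):

```python
import numpy as np

def mean_field_gtm(X, W0, beta, iters=50):
    """Alternate two factorial-ensemble updates until they settle:
    Q(Y) -> soft responsibilities R given the current centroid means,
    Q(W) -> posterior mean of each centroid given R (flat prior)."""
    W = W0.copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
        R = np.exp(-0.5 * beta * (d2 - d2.min(axis=1, keepdims=True)))
        R /= R.sum(axis=1, keepdims=True)        # Q(Y): responsibilities
        W = (R.T @ X) / R.sum(axis=0)[:, None]   # Q(W): centroid means
    return W, R
```

In the full algorithm the hyperparameter factor Q(h) is updated in the same alternating fashion.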

11 Simulations
The algorithms are compared in simulations: ensemble learning (the deterministic algorithm) and the Gibbs sampler. Artificial data, i = 1, ..., n, are generated from two independent standard Gaussian random series { }, { }. Three noise levels are used.

12-15 Simulation results (figures)

16 Conclusion
A simulation experiment showed the superiority of the Gibbs sampler on small data and the validity of the deterministic algorithm on large data.

