Deep belief nets experiments and some ideas. Karol Gregor NYU/Caltech.

1 Deep belief nets experiments and some ideas. Karol Gregor NYU/Caltech

2 Outline - DBN - Image database experiments - Temporal sequences

3 Deep belief network: Input → H1 → H2 → H3 → Labels, fine-tuned with backprop
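As a concrete illustration of this slide, here is a minimal sketch of greedy layer-wise pretraining with binary RBMs trained by one step of contrastive divergence (CD-1). The layer sizes, learning rate, and toy data are assumptions for illustration, not values from the talk, and the backprop fine-tuning stage toward the labels is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def cd1_step(self, v0):
        # One step of contrastive divergence (CD-1).
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b_vis)
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)

# Greedy layer-wise pretraining: each RBM learns from the hidden
# activities of the layer below (Input -> H1 -> H2 -> H3).
X = rng.random((500, 256))       # toy stand-in for the input features
layers, data = [], X
for n_hid in (128, 64, 32):      # layer sizes are illustrative
    rbm = RBM(data.shape[1], n_hid)
    for _ in range(10):
        rbm.cd1_step(data)
    data = rbm.hidden_probs(data)
    layers.append(rbm)
# The stacked weights would then initialize a feed-forward net that
# backprop fine-tunes toward the labels (not shown here).
```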

4 Preprocessing – bag of words of SIFT: Images → Features (using SIFT) → Group them (e.g. K-means) → Bag of words

        Image1   Image2
Word1     23       11
Word2     12       55
Word3     92       33
...      ...      ...

With: Greg Griffin (Caltech)
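The pipeline above is straightforward to sketch. The following is a minimal illustration, assuming SIFT descriptors have already been extracted per image (with an external tool, not shown); the descriptor counts and the K=3 vocabulary are toy values chosen to mirror the table.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in: per-image arrays of 128-dim SIFT descriptors.
descriptors = [rng.random((n, 128)) for n in (40, 60)]

# 1) Group all descriptors into K visual words with K-means.
K = 3
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors))

# 2) Bag of words: per image, count how many descriptors fall in
#    each word; this yields the word-by-image count table above.
bow = np.stack([np.bincount(kmeans.predict(d), minlength=K)
                for d in descriptors])
print(bow)   # rows = images, columns = visual words
```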

5 13 Scenes Database – Test error

6 Train error

7 - Pre-training on a larger dataset - Comparison to SVM and SPM

8 Explicit representations?

9 Compatibility between databases - Pretraining: Corel database - Supervised training: 15 Scenes database

10 Conclusions - Bag of words is not a good input for deep architectures - The networks can be pretrained on one database and supervised training can then be done on another - Other observations:

11 Temporal Sequences

12 Simple prediction: learn weights W to predict the frame at time t (output Y) from the frames at t-1, t-2, t-3 (input X); plain supervised learning
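Read concretely, this slide describes ordinary regression from a window of past frames. A minimal sketch, with toy data and dimensions as assumptions: the input X concatenates the frames at t-1, t-2, t-3, the target Y is the frame at t, and W is fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
seq = rng.random((200, 16))   # toy sequence of 16-dim frames

# Build (X, Y) pairs: X = frames at t-1, t-2, t-3 concatenated;
# Y = the frame at t.
X = np.hstack([seq[2:-1], seq[1:-2], seq[:-3]])
Y = seq[3:]

# Solve Y ~ X @ W in the least-squares sense.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
print("train MSE:", np.mean((pred - Y) ** 2))
```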

13 With hidden units (needed for several reasons): the frames at t-1, t-2, t-3 and the frame at t (X and Y) are linked through hidden units H and G. See Memisevic, R. and Hinton, G. E., Unsupervised Learning of Image Transformations, CVPR 2007
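The cited work uses three-way (gated) interactions; the sketch below is a simplified stand-in, not Memisevic and Hinton's exact formulation: hidden units H encode the transformation between X and Y through a three-way weight tensor W, so that given X and H the model can predict Y. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nh = 8, 8, 4
W = 0.1 * rng.standard_normal((nx, ny, nh))   # three-way weights

def infer_h(x, y):
    # Hidden units respond to correlations between input and output.
    return 1.0 / (1.0 + np.exp(-np.einsum('i,j,ijk->k', x, y, W)))

def predict_y(x, h):
    # Given x and hidden code h, each output unit pools the gated inputs.
    return np.einsum('i,k,ijk->j', x, h, W)

x = rng.random(nx)       # past frame(s)
y = rng.random(ny)       # current frame
h = infer_h(x, y)        # infer the transformation code H
print(predict_y(x, h))   # predict Y back from X and H
```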

14 Example: pred_xyh_orig.m

15 Additions (to the model with hidden units H, G and frames X, Y at t-1, t): - Sparsity: when inferring H the first time, keep only the largest n units on - Slow H change: after inferring H the first time, take H = (G + H)/2
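Both additions are easy to state in code. A minimal sketch directly mirroring the slide's two rules; the surrounding model is assumed, with toy values for the hidden vectors H and G.

```python
import numpy as np

def sparsify_top_n(h, n):
    # Sparsity: keep only the n largest hidden units on, zero the rest.
    out = np.zeros_like(h)
    idx = np.argsort(h)[-n:]
    out[idx] = h[idx]
    return out

def slow_update(h, g):
    # Slow H change: after the first inference, take H = (G + H) / 2.
    return (g + h) / 2.0

rng = np.random.default_rng(0)
H = rng.random(10)   # hidden activations from a first inference pass
G = rng.random(10)   # hidden state from the previous time step
H = sparsify_top_n(H, n=3)
H = slow_update(H, G)
print(H)
```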

16 Examples: pred_xyh.m, present_line.m, present_cross.m

17 Brain schematic: senses (e.g. the eye, through the retina and LGN) and muscles (through subcortical structures) connect to Cortex+Thalamus, with the Hippocampus above. See, e.g., Jeff Hawkins: On Intelligence

18 Cortical patch: complex structure (not a single-layer RBM). From Alex Thomson and Peter Bannister (see numenta.com)

19 Desired properties

20 1) Prediction (diagram: the symbol sequence A B C D E F G H J K L, with E F H highlighted)

21 2) Explicit representations for sequences (diagram: the letters of "VISION RESEARCH" laid out over time)

22 3) Invariance discovery over time (e.g. a complex cell)

23 4) Sequences of variable length (same "VISION RESEARCH" example over time)

24 5) Long sequences (diagram: Layer 1 and Layer 2, with the higher-level grouping marked "?")

25 6) Multilayer ("VISION RESEARCH" over time) - higher layers are inferred only after some time

26 7) Smoother time steps

27 8) Variable speed - can fit a knob with a small speed range

28 9) Add a clock for actual time

29 The brain schematic from slide 17 again: senses (e.g. the eye, through the retina and LGN) and muscles (through subcortical structures) connect to Cortex+Thalamus, with the Hippocampus above

30 The same schematic; in addition: - Top-down attention - Bottom-up attention - Imagination - Working memory - Rewards

31 Training data - Videos - Of the real world - Simplified: cartoons (The Simpsons) - A robot in an environment - Problem: hard to grasp objects - An artificial environment with 3D objects that are easy to manipulate (e.g. Grand Theft Auto IV with objects)

