
1 16-824: Learning-based Methods in Vision. Instructors: Alexei (Alyosha) Efros, efros@cs.cmu.edu, 225 Smith Hall; Leon Sigal, lsigal@disneyresearch.com, Disney Research Pittsburgh. Web Page: http://www.cs.cmu.edu/~efros/courses/LBMV12/

2 Today: Introduction; Why This Course?; Administrative stuff; Overview of the course.

3 A bit about Us. Alexei (Alyosha) Efros: Ph.D. 2003, UC Berkeley (signed by Arnie!); Postdoctoral Fellow, University of Oxford, ’03-’04; research interests: Vision, Graphics, Data-driven “stuff”. Leonid Sigal: Ph.D. 2007, Brown University; Postdoctoral Fellow, University of Toronto, ’07-’09; research interests: Vision, Graphics, Machine Learning.

4 Why this class? The Old Days™: 1. Graduate Computer Vision 2. Advanced Machine Perception

5 Why this class? The New and Improved Days: 1. Graduate Computer Vision 2. Advanced Machine Perception Physics-based Methods in Vision Geometry-based Methods in Vision Learning-based Methods in Vision

6 The Hip & Trendy: Learning. Describing Visual Scenes using Transformed Dirichlet Processes. E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. NIPS, Dec. 2005.

7 Learning as Last Resort

8 Example: recovering 3D geometry from a single 2D projection (from [Sinha and Adelson 1993]). There are an infinite number of possible solutions!

9 Learning-based Methods in Vision. This class is about trying to solve problems that do not have a solution (don’t tell your mathematician friends!). This will be done using data, e.g. what happened before is likely to happen again. Google Intelligence (GI): the AI for the post-modern world! Note: this is not quite statistics. Why is this even worthwhile? Even a decade ago, at ICCV ’99, Faugeras claimed it wasn’t!
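As a purely illustrative companion to the “what happened before is likely to happen again” idea, here is a minimal sketch of a 1-nearest-neighbour predictor in Python; the toy feature vectors and labels are invented for this example and are not part of the course material.

    import numpy as np

    # Data-driven prediction in its simplest form: label a new observation
    # by copying the label of the most similar example seen in the past.
    # The feature vectors and labels below are made up for illustration.
    past_features = np.array([[0.9, 0.1],   # e.g. "grass-like" colour statistics
                              [0.2, 0.8],   # e.g. "sky-like" colour statistics
                              [0.8, 0.2]])
    past_labels = ["grass", "sky", "grass"]

    def predict(new_features):
        # Distance to every past example; reuse the nearest one's label.
        distances = np.linalg.norm(past_features - new_features, axis=1)
        return past_labels[int(np.argmin(distances))]

    print(predict(np.array([0.85, 0.15])))  # -> "grass"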

10 The Vision Story Begins… “What does it mean, to see? The plain man's answer (and Aristotle's, too) would be, to know what is where by looking.” -- David Marr, Vision (1982)

11 Vision: a split personality. “What does it mean, to see? The plain man's answer (and Aristotle's, too) would be, to know what is where by looking. In other words, vision is the process of discovering from images what is present in the world, and where it is.” Answer #1: pixel of brightness 243 at position (124,54) …and depth .7 meters (depth map). Answer #2: looks like the bottom edge of the whiteboard showing at the top of the image. Which do we want? Is the difference just a matter of scale?

12 Measurement vs. Perception

13 Brightness: Measurement vs. Perception

14 Proof!

15 Lengths: Measurement vs. Perception. The Müller-Lyer Illusion: http://www.michaelbach.de/ot/sze_muelue/index.html

16 Vision as a Measurement Device: real-time stereo on Mars, Structure from Motion, Physics-based Vision, Virtualized Reality.

17 …but why do learning for vision? “What if I don’t care about this wishy-washy human perception stuff? I just want to make my robot go!” Small reason: for measurement, other sensors are often better (in the DARPA Grand Challenge, vision was barely used!); for navigation, you still need to learn! Big reason: the goals of computer vision (what + where) are defined in terms of what humans care about.

18 So what do humans care about? slide by Fei-Fei, Fergus & Torralba

19 Verification: is that a bus? slide by Fei-Fei, Fergus & Torralba

20 Detection: are there cars? slide by Fei-Fei, Fergus & Torralba

21 Identification: is that a picture of Mao? slide by Fei-Fei, Fergus & Torralba

22 Object categorization: sky, building, flag, wall, banner, bus, cars, bus, face, street lamp. slide by Fei-Fei, Fergus & Torralba

23 Scene and context categorization: outdoor, city, traffic, … slide by Fei-Fei, Fergus & Torralba

24 Rough 3D layout, depth ordering. slide by Fei-Fei, Fergus & Torralba

25 Challenges 1: viewpoint variation. Michelangelo, 1475-1564. slide by Fei-Fei, Fergus & Torralba

26 Challenges 2: illumination. slide credit: S. Ullman

27 Challenges 3: occlusion. Magritte, 1957. slide by Fei-Fei, Fergus & Torralba

28 Challenges 4: scale. slide by Fei-Fei, Fergus & Torralba

29 Challenges 5: deformation. Xu Beihong, 1943. slide by Fei-Fei, Fergus & Torralba

30 Challenges 6: background clutter. Klimt, 1913. slide by Fei-Fei, Fergus & Torralba

31 Challenges 7: object intra-class variation slide by Fei-Fei, Fergus & Torralba

32 Challenges 8: local ambiguity slide by Fei-Fei, Fergus & Torralba

33 Challenges 9: the world behind the image

34 In this course, we will: Take a few baby steps…

35 Role of Learning: Data, Features, Learning Algorithm.

36 Role of Learning: Data, Features, Algorithm. (Shashua)
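To make the Data / Features / Algorithm decomposition concrete, here is a small sketch in Python using scikit-learn; the digits dataset and the rescaled raw-pixel “features” are stand-ins chosen for brevity (an assumption of this example, not something prescribed by the course), and any other feature extractor or classifier could be swapped in for each role.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Data: labelled images (8x8 digit images, flattened to 64 values).
    X, y = load_digits(return_X_y=True)

    # Features: here just rescaled raw pixel intensities; in vision these
    # would typically be filter responses, edges, descriptors, etc.
    features = X / 16.0

    # Learning algorithm: any off-the-shelf classifier can be plugged in.
    X_train, X_test, y_train, y_test = train_test_split(features, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

Swapping the dataset, the feature extractor, or the classifier changes only one of the three boxes; that separation of roles is the point of the slide.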

37 Course Outline: Overview of Learning for Vision (1 lecture); Overview of Data for Vision (1 lecture); Features: Human Perception and visual neuroscience (theories of human vision); Low-level Vision (filters, edge detection, interest points, etc.); Mid-level Vision (segmentation, occlusions, 2-1/2D, scene layout, etc.); High-level Vision (object recognition, scene understanding, action/motion understanding, etc.).

38 Goals: read some interesting papers together; learn something new (both you and us!); get up to speed on a big chunk of vision research: understand 70% of CVPR papers!; use learning-based vision in your own work; learn how to speak; learn how to think critically about papers; participate in an exciting meta-study!

39 Course Organization. Requirements: 1. Class Participation (33%): keep an annotated bibliography; post on the Class Blog before each class; ask questions / debate / fight / be involved! 2. Two Projects (66%): (a) Deconstruction Project: implement and evaluate a paper and present it in class; must talk to us AT LEAST 2 weeks beforehand; can be done in groups of 2 (but the group must then do 2 projects). (b) Synthesis Project: do something worthwhile with what you learned from the Deconstruction Project; can be done in groups of 2 (1 project).

40 Class Participation. Keep an annotated bibliography of the papers you read (always a good idea!). The format is up to you; at a minimum, it needs to have: a summary of key points; a few interesting insights, “aha moments”, keen observations, etc.; weaknesses of the approach; unanswered questions; areas for further investigation or improvement. Before each class: submit your summary of the current paper(s) in hard copy (printout/xerox), and submit a comment on the Class Blog: ask a question, answer a question, post your thoughts, praise, criticism, start a discussion, etc.

41 Deconstruction Project. 1. Pick a paper / set of papers from the list. 2. Understand it as if you were the author: re-implement it; if there is code, understand the code completely; run it on the same data (you can contact the authors for data, and sometimes even code). 3. Understand it better than the author: run it on two other data sets (e.g. the LabelMe dataset, the Flickr dataset, etc.); run it with two other feature representations; run it with two other learning algorithms; maybe suggest directions for improvement. 4. Prepare an amazing 45-minute presentation. Discuss with me twice: once when you start the project, and once 3 days before the presentation.

42 Synthesis Project. Hopefully it can grow out of the deconstruction project. Two people can work on one.

43 End of Semester Awards. We will vote for: Best Deconstruction Project, Best Synthesis Project, Best Blog Comment. Prize: dinner in a French restaurant in Paris (transportation not included!), or some other worthy prizes.

