By: Joshua Michalczak, 28 May 2010 For: Computer Vision R.E.U. at University of Central Florida.

Presentation transcript:


- False alarm = 10%, Detection = 63.5%, Threshold ≈ 62.0%
- False alarm = 30.3%, Detection = 90%, Threshold ≈ 32.6%

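The operating points above presumably come from sweeping a detection threshold and measuring the detection and false-alarm rates at each value. The sketch below shows how such a sweep could be computed; the array names `scores` (per-pixel motion scores) and `gt_mask` (ground-truth motion mask) are hypothetical, not from the slides.

```python
import numpy as np

def rates_at_threshold(scores, gt_mask, threshold):
    """Detection and false-alarm rates for one threshold.

    scores  : float array of per-pixel motion scores (assumed input)
    gt_mask : boolean array, True where motion is actually present
    """
    detected = scores >= threshold
    detection_rate = (detected & gt_mask).sum() / max(gt_mask.sum(), 1)
    false_alarm_rate = (detected & ~gt_mask).sum() / max((~gt_mask).sum(), 1)
    return detection_rate, false_alarm_rate

def sweep_thresholds(scores, gt_mask, num_steps=100):
    """Trace out (threshold, detection, false alarm) triples."""
    lo, hi = scores.min(), scores.max()
    for t in np.linspace(lo, hi, num_steps):
        d, fa = rates_at_threshold(scores, gt_mask, t)
        yield t, d, fa
```

Reading off the sweep at false alarm ≈ 10% or detection ≈ 90% gives operating points like the two reported above.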

6 cluster centers, without location weight vs. 6 cluster centers, with location weight

15 cluster centers, without location weight vs. 15 cluster centers, with location weight

Higher color weight vs. equal weights vs. higher location weights
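One way to read the comparisons above: each pixel is clustered on its color together with its (x, y) position, and a single weight controls how much position matters relative to color. The sketch below is a minimal, assumed implementation using k-means; the function name `segment_image` and the `location_weight` parameter are illustrative, not from the slides.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_image(image, k, location_weight=0.0):
    """Cluster pixels on color plus (optionally weighted) position.

    image           : H x W x 3 float array of RGB values in [0, 1]
    k               : number of cluster centers (6 or 15 in the slides)
    location_weight : 0 reproduces a color-only clustering; larger values
                      pull clusters toward spatially compact regions
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize coordinates so the weight is comparable to the color range.
    coords = np.stack([ys / h, xs / w], axis=-1) * location_weight
    features = np.concatenate([image, coords], axis=-1).reshape(-1, 5)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    return labels.reshape(h, w)
```

With `location_weight = 0` this corresponds to the "without location weight" panels; raising it trades color coherence for spatial compactness, matching the shift from "higher color weight" to "higher location weights" above.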

“Blob” of flow? Noise?

- Bigger “blobs”
- Less noise

- Shrinks “blobs”
- Better alignment
- Lessens noise
- Vector magnitudes reduced

- Larger windows
  - Reduce noise (+)
  - Increase “blob” size (-)
- More iterations
  - “Blobs” better modeled (+)
  - Computationally expensive (-)
  - Changes vector magnitudes (more accurate?)
- Levels
  - Each scene/scenario seems to have an ideal pyramid count.
  - An inappropriate level count greatly degrades the resulting flow.
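The slides do not say which flow implementation was used; as one stand-in, OpenCV's Farneback dense optical flow exposes the same three knobs discussed above (window size, iteration count, pyramid levels), so a sketch of how they might be wired up looks like this:

```python
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray, window=15, iterations=3, levels=3):
    """Dense optical flow exposing the three knobs discussed above.

    window     : larger windows smooth out noise but grow the flow 'blobs'
    iterations : more iterations refine the flow at extra compute cost
    levels     : pyramid levels; too few or too many degrades the result
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=levels, winsize=window,
        iterations=iterations, poly_n=5, poly_sigma=1.1, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude
    return flow, magnitude
```

Increasing `winsize` suppresses noise but enlarges the “blobs”, more `iterations` models them better at higher cost, and `levels` has a per-scene sweet spot, consistent with the observations above.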

Briefcase shadow

Standing shadows

Median (10 dist.) vs. Gaussian (-2, 2); Frame 152 and Frame 670

3D
- Creation of 3D Data: Shape/Depth from X, Stereo
- Working with 3D: SLAM, Interpreting Structure, Augmented Reality
1 2* 3 4 5

- Likes: Problem well defined
  - Plethora of research: good ideas and algorithms
  - Could be coupled with machine learning for other tasks?
- Idea: Could aid the accuracy of Dr. Lobo’s 4-camera device? Extend its uses in the market?
  - When coupled together, localization and mapping correct errors in each other.

- Likes: Combining 3D interpretation with applications
- Dislikes: Seems to have moved away from research and more into application fields
  - Most papers were system-specific or discussed a use of AR rather than AR itself
- Idea: Coupling “Where am I?” with AR
  - Sort of already done though; see the Layar app for Android phones

- Likes: Couples the 3D field with the machine learning field
  - Useful base for other work; see the Depth-map To Skeleton (DTS) project
- Dislikes: Too broad a problem
  - Would prefer a focused project to ensure good results from the REU program
  - Also, not sure of project ideas other than the DTS project

- Likes: Low-level
  - I tend to gravitate toward “fundamental problems”
  - Again, building a framework for other work
- Dislikes: Dead field?
  - Most techniques are well researched
  - New technology automagically grabs reliable depth: stereo and 3D cameras
  - Again, not sure of project topics other than depth from defocus