Based on the paper: Matthias Jüngel, Jan Hoffmann, Martin Lötzsch, "Real-Time Auto-Adjusting Vision System for Robotic Soccer".

Page: 1 PHAM VAN Tien. Real-Time Approach for Auto-Adjusting Vision System. Reading Class, International Graduate School of Dynamic Intelligent Systems.

Page: 2 Objectives of the real-time approach
- Lightweight computation: processor power and memory on the robots are limited
- Adaptation to changes in lighting conditions at run-time
- Pre-run calibration should be avoided
- The recognition process should not depend on color segmentation alone
- Devices for image processing and computation: camera and sensors

Page: 3 Color-coded environment
- Two color-coded flags (pink and yellow/green/sky-blue) used for localization
- Two goals (sky-blue and yellow)
- Ball (orange)
- Robots (wearing red or blue tricots)

Page: 4 Guiding attention
- More attention is directed to image areas where small objects are expected
- Not all pixels are considered, only those at grid points
- Image sequences: objects are searched for around their previous detection (e.g. the ball)
- Iterative processing: prominent features are searched first; once found, they hint at the other features
- Other sensors: distance and tilt sensor readings guide visual attention
- Knowledge about the environment: heuristics can be used to simplify image processing
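The grid-point idea can be sketched as follows. This is a minimal illustration, not code from GT2004; the function name and the 8-pixel spacing are assumptions:

```python
# Hedged sketch: inspect only pixels on a sparse grid instead of the
# full image, which is the core of the attention-guiding step.

def grid_points(width, height, step):
    """Yield (x, y) coordinates on a regular grid with the given spacing."""
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y)

# On a 176x144 AIBO camera image, an 8-pixel grid reduces the number of
# inspected pixels from 25344 to 22 * 18 = 396, a factor of 64.
points = list(grid_points(176, 144, 8))
```

Around a previous detection (e.g. the ball) the same routine could be re-run with a smaller step over a small window, concentrating computation where an object is expected.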

Page: 5 Scan Lines
- The horizon is determined first
- Grid lines above and below the horizon are then set
- Implementation: GT2004ImageProcessor.cpp
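For a simple pinhole model, the horizon row can be derived from the camera's pitch alone. The sketch below is an illustration under that assumption; the function name and sign convention are not from GT2004ImageProcessor.cpp:

```python
import math

def horizon_row(pitch_rad, focal_px, cy):
    """Image row of the horizon for a pinhole camera.

    pitch_rad > 0 means the optical axis points below the horizontal,
    so the horizon appears above the image centre (smaller row index,
    with rows increasing downward). focal_px is the focal length in
    pixels and cy the principal-point row.
    """
    return cy - focal_px * math.tan(pitch_rad)
```

Once this row is known, grid scan lines can be placed relative to it: lines below it search the field, lines above it search for flags.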

Page: 6 Vertical lines
- Lines below the horizon: for detecting the ball, field lines/borders, and the lower half of the goals
- Lines above the horizon: mainly for finding the flags
- Lines parallel to the horizon may be used if prediction fails

Page: 7 Color classification
- A sub-cube of the color space serves as the reference (the green of the carpet); a limited number of colors is defined (class ColorTableReference)
- Auto-adaptation of the reference cube and color segmentation of the cube improve identification
- Related classes: ColorClasses, ColorTableReference
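Classifying a pixel by axis-aligned sub-cubes of YUV space can be sketched as below. The cube bounds and names are invented for illustration and do not come from ColorTableReference:

```python
# Hedged sketch: each color class is an axis-aligned box in YUV space;
# a pixel belongs to the first box that contains it.

COLOR_CUBES = {
    # color: ((y_min, y_max), (u_min, u_max), (v_min, v_max))
    "green":  ((40, 140), (90, 130), (90, 125)),
    "orange": ((90, 220), (60, 110), (160, 230)),
}

def classify(y, u, v):
    """Return the color class of a YUV pixel, or "unknown"."""
    for color, ((y0, y1), (u0, u1), (v0, v1)) in COLOR_CUBES.items():
        if y0 <= y <= y1 and u0 <= u <= u1 and v0 <= v <= v1:
            return color
    return "unknown"
```

Keeping the classes as simple boxes is what makes run-time adaptation cheap: adjusting a class means shifting six bounds, not relearning a full lookup table.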

Page: 8 Adaptation to lighting conditions

Page: 9 Color adaptation
- Analyzing scan lines over the goals and the field border helps determine the reference cube (green)
- The reference is updated whenever an image contains enough green
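The update rule could look like the following sketch: pixels assumed to be carpet (e.g. sampled from scan lines below the detected field border) re-fit the bounds of the green reference cube, but only when there is enough evidence. The sample-count threshold and function names are assumptions, not the actual GT2004 logic:

```python
# Hedged sketch of adapting the green reference cube at run-time.

MIN_GREEN_SAMPLES = 50  # illustrative threshold: "green enough" image

def update_reference_cube(cube, samples):
    """Re-fit (min, max) bounds per YUV channel from carpet samples.

    cube: ((y_min, y_max), (u_min, u_max), (v_min, v_max))
    samples: list of (y, u, v) pixels assumed to be carpet green.
    Returns the old cube unchanged if there are too few samples.
    """
    if len(samples) < MIN_GREEN_SAMPLES:
        return cube  # too little evidence: keep the previous reference
    ys, us, vs = zip(*samples)
    return ((min(ys), max(ys)), (min(us), max(us)), (min(vs), max(vs)))
```

Because every other color class is defined relative to this reference, shifting the green cube as lighting changes adapts the whole segmentation without a pre-run calibration step.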

Page: 10 Edge detection
- Find characteristic changes in the YUV channels
- Two criteria to identify edges: the three-dimensional contrast pattern, and the surrounding color classes (pixels surrounding detected edges are considered to resolve ambiguities of the contrast-pattern classification and to filter out edges caused by noise)
- Related classes: REdgeDetection (detection), SUSANEdgeDetectionLite (edge filter)
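A minimal sketch of contrast-based edge detection along one scan line, for a single channel; the window size and threshold are illustrative assumptions, and the real system combines all three YUV channels:

```python
# Hedged sketch: flag an edge where the channel value changes by more
# than a contrast threshold over a small window along the scan line.

def edges_on_scanline(values, threshold=30, window=2):
    """Return indices i where |values[i + window] - values[i]| > threshold."""
    return [i for i in range(len(values) - window)
            if abs(values[i + window] - values[i]) > threshold]

line = [120] * 10 + [40] * 10   # bright carpet, then a dark border
# The brightness drop is flagged at indices 8 and 9.
```

The second criterion from the slide would then run on each flagged index: look up the color classes of the surrounding pixels and discard edges whose neighborhoods do not match any expected object transition (noise filtering).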

Page: 11 Open questions
- Why do object colors (goal, border, ball, etc.) need to be predefined? Could robots identify the objects themselves right before the match?
- The goalkeeper is expected to be more idle than the other robots; why not put more of the computational load on it and let it report to the other players?