The optical sensor of the robot Phoenix-1. Aleksey Dmitriev.

Introduction This work describes an optical sensor that was designed and experimentally verified as part of a student mobile robot project.

Optical sensor overview

Objectives and criteria The primary objective of the sensor is to measure the displacement of a contrast stripe from the image center. Performance of the recognition and control algorithms was the main criterion of the optical sensor design. The stripe recognition algorithm rests on the assumption that the stripe has a homogeneous color and a high contrast with the background.

Stripe recognition (1/2) The optical sensor estimates a quantity D, the displacement of the white stripe center from the image center.

Stripe recognition (2/2) The displacement D is calculated with the method of the centre of mass: the stripe center is found as the brightness-weighted centroid

$$N_c = \frac{\sum_{i=1}^{n} i \, L_i}{\sum_{i=1}^{n} L_i},$$

and D is the offset of $N_c$ from the pixel index of the image center. Here $N_c$ is the pixel index corresponding to the white stripe center, $L_i$ is the brightness of the i-th pixel in a measuring stripe, and $n$ is the number of pixels (the length) of a measuring stripe.
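To make the calculation concrete, here is a minimal Python sketch of the centre-of-mass estimate under the assumptions above; `stripe_displacement` and `row` are illustrative names, not identifiers from the Phoenix-1 code.

```python
# Minimal sketch of the centre-of-mass stripe displacement estimate.
# Assumes `row` is a 1-D sequence of brightness values taken from one
# measuring stripe; names are illustrative, not from the original code.

def stripe_displacement(row):
    """Return D, the offset of the bright-stripe centroid from the row center."""
    n = len(row)
    total = sum(row)
    if total == 0:
        return None  # no bright pixels: the stripe was not detected
    # Brightness-weighted centroid: Nc = sum(i * L_i) / sum(L_i), 1-based index
    nc = sum(i * l for i, l in enumerate(row, start=1)) / total
    return nc - (n + 1) / 2.0  # displacement from the geometric center of the row

if __name__ == "__main__":
    # A bright stripe shifted to the right of a dark background
    row = [10, 10, 10, 10, 200, 220, 210, 10]
    print(stripe_displacement(row))  # positive: stripe is right of center
```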

Sample frame The sensor consists of several measuring stripes whose readings are used by the robot control algorithm. Each stripe has its own parameters: offset from the image center, width, and height. A sketch of how such a stripe could be modeled follows below.
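As an illustration of those parameters, the following sketch models one measuring stripe and reads its brightness profile out of a frame. It assumes the frame is a 2-D grayscale NumPy array, and the field names merely mirror the slide's parameter list; none of this is taken from the project's actual code.

```python
# Hypothetical model of one measuring stripe; field names mirror the slide's
# parameter list. The frame is assumed to be a 2-D grayscale NumPy array.

from dataclasses import dataclass
import numpy as np

@dataclass
class MeasuringStripe:
    offset_y: int  # vertical offset of the stripe from the image center, pixels
    width: int     # horizontal extent of the stripe, pixels
    height: int    # number of image rows averaged into one profile

def read_stripe(frame: np.ndarray, stripe: MeasuringStripe) -> np.ndarray:
    """Cut the stripe's region out of the frame and collapse it to a 1-D profile."""
    h, w = frame.shape
    top = h // 2 + stripe.offset_y - stripe.height // 2
    left = w // 2 - stripe.width // 2
    region = frame[top:top + stripe.height, left:left + stripe.width]
    return region.mean(axis=0)  # brightness profile for the centroid step above
```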

Sample movie This is a sample movie taken during one of the experiments.

Experiments’ results The control algorithm is based on a PID regulator. The optical sensor readings and the output of the control algorithm are used to train a neural network.
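For reference, a textbook PID step of the kind the slide mentions might look like this in Python; the gains, the time step, and the coupling to the displacement D are assumptions made for illustration, not values from the project.

```python
# Textbook positional PID step; gains and dt are illustrative, not the
# project's values. The error fed in is the stripe displacement D.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Return a steering correction for the current displacement D."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# training_set = []
# pid = PID(kp=0.8, ki=0.05, kd=0.2)   # illustrative gains
# d = stripe_displacement(row)          # sensor reading (see above)
# u = pid.step(d, dt=0.04)              # controller output at ~25 fps
# training_set.append((d, u))           # (input, target) pair for the network
```

The commented lines sketch how each pair of sensor reading and controller output could be logged as a training sample for the neural network, as the slide describes.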

Revealed issues The control system is very sensitive to the camera position. Conclusion: a camera angle sensor should be installed.

Conclusion The optical sensor has been successfully used in the teaching stage of the “teaching by showing” methodology. During the tests the algorithm proved reliable. The algorithm is fast enough to run even on low-performance microprocessors.

Contact info Aleksey Dmitriev SUAI, Saint-Petersburg, Russia