Robotics & Machine Intelligence James Painter Computer Engineering - Elizabethtown College '08 Dr. Joseph Wunderlich - Project Advisor Dr. Troy McBride - Project Consultant

Presentation transcript:

Robotics & Machine Intelligence
James Painter, Computer Engineering - Elizabethtown College '08
Dr. Joseph Wunderlich - Project Advisor
Dr. Troy McBride - Project Consultant

Objective
Fully develop the vision system for the Wunderbot IV autonomous robot and adapt it specifically for the June 2008 Intelligent Ground Vehicle Competition (IGVC). This entails the construction of a sturdy camera mount with an appropriate viewing angle, image processing with line parsing, and intelligent motor control.

Camera Mount
The Wunderbot camera mount is part of the new utility pole designed to hold the GPS, digital compass, and wireless router antenna. Atop the pole, set back about 16 inches, is an angled metal bracket onto which the camera is bolted and secured with wing nuts. The wing nuts allow easy fine-tune adjustments to the mounting angle.

The position of the camera on the utility pole was an important consideration because it determines the viewable region in front of the robot. As the downward angle of the camera is increased, the depth of the view decreases, and vice versa. However, as the camera is tilted up, the area directly in front of the robot – a crucial region for avoiding immediate obstacles – drops out of view. Raising the camera higher and moving it farther back (Figs. 1 & 2) has similar consequences for the viewable region. A camera angle analysis was performed to find the best configuration.

Figures 1 & 2: Viewable region directly in front of the robot using (1) a vertical utility pole and (2) a utility pole angled back at ~75 degrees; meter sticks were arranged on the floor directly in front of the wheels for depth measurement.

As can be seen in Figs. 1 and 2, moving the camera a distance behind the robot's rear bumper establishes a range of view about 85 cm deeper than mounting the camera directly above the bumper. Also note the loss of about 25 cm of view directly in front of the vehicle when the depth of view is extended.

Another consideration is the ability to crop edge regions out of the image for faster acquisition and subsequent image processing. A larger field of view allows more unnecessary area in the image to be removed. The table below shows the processing-time speedup obtained by cropping the top edge of the image by different amounts.

Top Edge Cropped     Processing Time Speedup
15% (153 lines)      16% (90 ms)
24% (246 lines)      25% (140 ms)

Based on these results, it was decided that moving the camera back would yield significant improvements in line detection, both in processing time and in the robot's ability to detect lines a greater distance ahead on the path. The figure below shows the change in the camera's viewable region when it is mounted above the rear bumper (blue triangle) vs. mounted behind the bumper (dashed lines).

Image Processing
Image processing is performed in the camera's proprietary software, DVT Intellect. First, a dilate filter is applied using a 3x3 kernel. Next, a Hough-transform line detection algorithm is used. Multiple thresholds are applied to filter out noise and extraneous small objects such as leaves, dirt patches, and chipmunks. Among these is a line thickness sensor, which first scans the image for 75% intensity contrast between neighboring pixels. The first three chains of 50 pixels that satisfy this requirement are accepted as lines. (Alternatively, all chains can be found and those with the highest contrast accepted, but this process takes about three times as long.) Once lines have been established, they are considered valid only if their separation is less than 300 pixels. They must also have a straightness of less than a 75-pixel maximum deviation from the averaged center line. These qualifiers help eliminate large detected regions such as shadows.
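Since DVT Intellect is proprietary, its exact filters cannot be reproduced here. The Python/OpenCV sketch below only approximates the pipeline described on the slides: a 3x3 dilation, Hough line detection, and the separation and straightness qualifiers. The Canny thresholds, Hough parameters, and function names are assumptions for illustration, and a plain edge map stands in for the 75% contrast scan that seeds the pixel chains.

```python
# Illustrative approximation of the described pipeline using OpenCV.
# DVT Intellect's actual filters and settings are proprietary; the
# thresholds and helper names below are assumptions, not the camera's.
import cv2
import numpy as np

MIN_CHAIN_LENGTH = 50    # "chains of 50 pixels" accepted as lines
MAX_SEPARATION = 300     # maximum pixel separation between two valid lines
MAX_DEVIATION = 75       # maximum deviation from the fitted center line

def detect_candidate_lines(gray):
    """Dilate with a 3x3 kernel, edge-detect, and run a Hough transform."""
    dilated = cv2.dilate(gray, np.ones((3, 3), np.uint8))
    edges = cv2.Canny(dilated, 100, 200)          # thresholds are assumptions
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=MIN_CHAIN_LENGTH, maxLineGap=10)
    return [] if segments is None else [tuple(s[0]) for s in segments]

def chain_is_straight(points):
    """Reject chains deviating more than MAX_DEVIATION pixels from their fitted line."""
    pts = np.asarray(points, dtype=np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    dists = np.abs((pts[:, 0] - x0) * vy - (pts[:, 1] - y0) * vx)
    return float(dists.max()) < MAX_DEVIATION

def lines_close_enough(seg_a, seg_b):
    """Pair two detected lines only if their midpoints are under MAX_SEPARATION apart."""
    mid_a = np.array([(seg_a[0] + seg_a[2]) / 2.0, (seg_a[1] + seg_a[3]) / 2.0])
    mid_b = np.array([(seg_b[0] + seg_b[2]) / 2.0, (seg_b[1] + seg_b[3]) / 2.0])
    return float(np.linalg.norm(mid_a - mid_b)) < MAX_SEPARATION
```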
Motor Control
Once line position data has been found, the camera sends it to the on-board PC via wired TCP/IP network communication. LabVIEW is used to record the lines on a global map and to control the robot's motion accordingly. The location of the lines, in terms of both depth (distance away) and lateral position, is considered in order to calculate the sharpness of a turn and, when necessary, a backup maneuver. The image above shows a screenshot taken during an IGVC simulation. Image processing resulted in successful detection of the dashed white lines shown. The fine yellow lines were auto-generated and were then used to guide the robot toward the left. Controls were designed to adjust numerous parameters, such as overall target speed, backup speed, proximity for backing up, turning aggressiveness (factoring depth and lateral position separately), and minimum line width.
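The steering logic itself is a LabVIEW VI and is not shown on the slides. The short Python sketch below only illustrates how the named controls (target speed, backup speed, backup proximity, and separate depth and lateral turning gains) could combine a detected line's depth and lateral position into a drive command; every value, sign convention, and name is an assumption.

```python
# Illustrative sketch of the steering calculation described above.
# The real implementation is a LabVIEW VI; all gains and thresholds
# here are assumptions chosen only to show how the controls interact.

def compute_drive_command(line_depth_m, line_lateral_m,
                          target_speed=1.0,        # overall target speed (m/s)
                          backup_speed=0.3,        # reverse speed when too close (m/s)
                          backup_proximity_m=0.5,  # back up if a line is this close
                          depth_gain=0.8,          # turning aggressiveness vs. depth
                          lateral_gain=1.2):       # turning aggressiveness vs. lateral offset
    """Return (forward_speed, turn_rate) for the nearest detected line.

    line_lateral_m is positive when the line lies to the robot's right;
    turn_rate is positive for a left turn.
    """
    if line_depth_m < backup_proximity_m:
        # The line is nearly under the bumper: back up straight, then re-evaluate.
        return -backup_speed, 0.0

    # Steer away from the line. Depth and lateral position are weighted by
    # separate gains, mirroring the separate aggressiveness controls.
    direction = -1.0 if line_lateral_m >= 0 else 1.0
    sharpness = lateral_gain * abs(line_lateral_m) + depth_gain / line_depth_m
    turn_rate = direction * sharpness

    # Slow down for sharper turns so the robot does not overshoot the line.
    forward_speed = target_speed / (1.0 + abs(turn_rate))
    return forward_speed, turn_rate
```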