iRobot ATRV Mini and my use of it, or how I learned to stop worrying and love robotics, by Michael Eckmann

Michael Eckmann - Skidmore College - CS Fall 2005

Overview
– Specs of the iRobot ATRV-Mini and available sensors
– How it came to be that I'm using it
– What I'm using it for: my dissertation project
– How I'm using it: what sensors I attach and how the information from the sensors will allow me to do what I want

iRobot ATRV Mini
A mobile robot
– maker: iRobot
– model: ATRV-Mini
iRobot is the maker of the Roomba vacuuming robot and specializes in industrial and government robots.
– The ATRV-Mini is one of these, but it has been discontinued.

iRobot ATRV Mini
It was purchased sometime around 2001 by my advisor for use in a military visual surveillance application. It was gathering dust, so I ended up using it.
– that's how things go sometimes: you have to use what's available
Luckily it's a good fit for what I need it for.

iRobot ATRV Mini
I have a spec sheet handout.
– Onboard computer running Linux (Red Hat 6.2)
– Wireless networking communication to the onboard computer
– Rugged enough for outdoor use
– Sonar sensors: can anyone say what sonar is and describe briefly what it does for us?
Optional sensors (not on this robot):
– vision sensors: a pan-tilt-zoom camera, or stereo vision (anyone know what this is and what we can use it for? a numeric sketch of both ideas follows below)
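For anyone who wants the arithmetic behind those two questions, here is a minimal Python sketch. The speed of sound and the camera numbers are illustrative values I chose, not the ATRV-Mini's actual specs:

```python
# Sonar: emit an ultrasonic ping and time the echo; distance is half the
# round-trip time multiplied by the speed of sound.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def sonar_distance(round_trip_s: float) -> float:
    return SPEED_OF_SOUND * round_trip_s / 2.0

# Stereo vision: two cameras a known baseline apart see the same point at
# slightly different image columns; that disparity gives depth.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(sonar_distance(0.01))             # echo after 10 ms -> about 1.7 m
print(stereo_depth(700.0, 0.12, 35.0))  # illustrative values -> 2.4 m
```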

iRobot ATRV Mini
Optional sensors (not on this robot):
– inertial sensor
  angular: determines the amount of rotation about all three axes (yaw, pitch and roll)
  linear: determines motion along the three axes (x, y and z)
– laser range finder: fast, accurate and expensive
– GPS: determines position on Earth by receiving signals from satellites orbiting the Earth
  gives latitude, longitude and altitude (a sketch of turning those into local meters follows below)
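GPS reports latitude, longitude and altitude, but a 3D model of a building or campus lives in meters, so converting fixes into a local frame is the usual first step. A minimal flat-Earth sketch, assuming distances of at most a few kilometers; the function name and coordinates are my own illustration:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def gps_to_local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """East/north meters from a chosen origin, flat-Earth approximation
    (fine over a campus, not over a continent)."""
    lat0 = math.radians(origin_lat_deg)
    east = math.radians(lon_deg - origin_lon_deg) * EARTH_RADIUS_M * math.cos(lat0)
    north = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    return east, north

# Two fixes about 100 m apart in latitude (made-up coordinates):
print(gps_to_local_xy(43.0970, -73.7810, 43.0961, -73.7810))
```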

What are the specs on the LEGO Mindstorms? What sensors are available for them?

What I needed
A robot with an onboard computer, one to which I could attach:
– a camera
– an inertial sensor
– GPS
I need to be able to store prior knowledge on the hard drive and do computations while the robot is moving, taking input from all three sensors and combining that information with the stored data (a sketch of that loop follows below).
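To make that data flow concrete, here is a skeleton of the kind of main loop the slide describes. Every class and function here is a hypothetical stand-in, not the actual dissertation code:

```python
# Hypothetical stand-ins for the real sensor drivers; the shape of the
# loop, not the stubs, is the point.
class FakeSensor:
    def __init__(self, value):
        self.value = value
    def read(self):
        return self.value

def refine_pose(pose, imu_reading, frame, model):
    # placeholder: the real version fuses inertial and visual data
    return pose

def main_loop(model, gps, imu, camera_frames):
    pose = gps.read()                      # coarse initial guess from GPS
    for frame in camera_frames:
        pose = refine_pose(pose, imu.read(), frame, model)
        yield pose, frame                  # hand off to the overlay renderer

for pose, frame in main_loop({}, FakeSensor((43.09, -73.78)),
                             FakeSensor(0.0), ["frame0", "frame1"]):
    print(pose)
```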

What I'm doing with it
I am storing a crude "3D model" of the environment (with known world positions) and determining the robot's position within that "3D model":
– to a high degree of accuracy
– and quickly
– from the sensor data (video, inertial and GPS)
Once I have the robot's position accurately:
– I can overlay the video from the camera on the robot with graphical information from the "3D model"
– The goal is to provide more information than just video

What I'm doing with it
Textual information:
– display the name of a building overlaid on the video of the building
– doesn't need high accuracy of position
– as long as the name of the building appears anywhere near or on it, that's reasonable

What I'm doing with it
More precise graphical information:
– overlay wireframe graphics over certain objects (see the projection sketch below)
– requires much higher accuracy of the position
For instance, suppose I am driving the robot indoors and I would like to "look through a wall" to see the plumbing or electrical lines, or even just the 3D structure behind the wall. Assuming that information is in my model, I should be able to overlay those graphical elements on the video, but I need to know where the robot is to a high degree of accuracy, or else the user won't get a good sense of where this hidden information really is in the video. Imagine video of this room...
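Both kinds of overlay come down to the same operation: project known 3D world points through the current camera pose into pixel coordinates, then draw. A minimal sketch using OpenCV; the intrinsics, pose, model points and building name are all made-up values, and a blank image stands in for a video frame:

```python
import cv2
import numpy as np

# Made-up camera intrinsics and pose; the real system would get these from
# calibration and from the GPS/inertial/vision position estimate.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                          # camera rotation (axis-angle)
tvec = np.zeros(3)                          # camera translation
frame = np.zeros((480, 640, 3), np.uint8)   # stand-in for a video frame

# Hypothetical model data, in meters: a label anchor and one wireframe edge.
label_pt = np.array([[0.5, -0.5, 5.0]])
edge = np.array([[-1.0, 0.0, 6.0], [1.0, 0.0, 6.0]])

def project(pts3d):
    px, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    return px.reshape(-1, 2)

u, v = project(label_pt)[0]
cv2.putText(frame, "Library", (int(u), int(v)),   # hypothetical building name
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
p0, p1 = project(edge)
cv2.line(frame, (int(p0[0]), int(p0[1])), (int(p1[0]), int(p1[1])), (0, 255, 0), 2)
```

Overlay quality then reduces to how good the pose estimate feeding rvec and tvec is, which is exactly why the position refinement on the next slide matters.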

What I'm doing with it
How to do this?
– use GPS to get an initial guess
– refine the position using the information I get from the inertial tracker and the visual information I get from the camera
– information extracted from the camera includes features tracked in the video over time (a tracking sketch follows below)
  used to find those same features in the model (e.g., corners of a room or building), and to estimate direction and speed of travel to some degree
– this extraction of information from image and video data falls within computer vision (a subarea of computer science)
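Here is a minimal sketch of the feature-tracking piece using OpenCV's KLT (Lucas-Kanade) tracker. Two synthetic frames stand in for consecutive video frames; in the real system the tracked motion would be matched against the model and fused with the GPS and inertial estimates:

```python
import cv2
import numpy as np

# Two synthetic grayscale frames: a bright square shifted 3 px to the right,
# standing in for consecutive frames from the robot's camera.
prev = np.zeros((240, 320), np.uint8)
cv2.rectangle(prev, (100, 100), (140, 140), 255, -1)
curr = np.roll(prev, 3, axis=1)

# Detect corner features in the first frame, then track them into the next.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=5)
new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

# Average motion of the successfully tracked features (should be ~(3, 0)).
flow = (new_pts - pts)[status.ravel() == 1]
print("mean feature motion (px):", flow.reshape(-1, 2).mean(axis=0))
```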

What I'm doing with it
The GPS and inertial sensors that I'll use are not that exciting, but the camera I'm using is pretty cool. I'll pass one around as I explain some things about it.
It's an omnidirectional camera:
– contains a parabolic mirror (the next slides show the shape)
– the camera images the reflection off this mirror
– can "see" 360 degrees around one axis and 180 around the other two axes
– basically it images a hemisphere; if we put two together, we would get the whole sphere

What's an omnidirectional sensor?
[Diagram of the sensor: video camera, imaging lens, primary mirror, secondary mirror (parabolic)]

What's an omnidirectional sensor?

Why use an omni sensor?
Advantages:
– large field of view: get as much of the objects as possible in each image
– utilizes only one non-moving camera
– images the full upper hemisphere from one point of view
– always sees perpendicular to robot motion
Why do you think I consider these advantages? Can you think of any disadvantages to this kind of camera?

Why use an omni sensor?
Humans prefer to view the world perspectively – the way a normal camera takes images, with a horizon line and parallel lines converging to some vanishing point.
Since this particular omnidirectional camera uses a parabolic mirror, it is easy (and fast) to compute a perspective view from a portion of the omni-image. The parabola is a specific mathematical shape with nice properties that allow a perspective view to be generated quickly (a remapping sketch follows below).
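The full catadioptric geometry is beyond one slide, but the remapping machinery is easy to show. The sketch below does the simplest unwarp, a polar-to-panorama remap; generating a true perspective view uses the parabola's reflection properties but the same kind of lookup tables. The center and radii are made-up values, and random noise stands in for an omni image:

```python
import cv2
import numpy as np

def omni_to_panorama(omni, cx, cy, r_inner, r_outer, out_w=720, out_h=180):
    """Sample the donut-shaped omni image along rays from its center into a
    panoramic strip. Center and radii come from calibration in a real system."""
    rows, cols = np.meshgrid(np.arange(out_h), np.arange(out_w), indexing="ij")
    r = r_inner + (r_outer - r_inner) * rows / (out_h - 1)   # radius per output row
    theta = 2.0 * np.pi * cols / out_w                       # angle per output column
    map_x = (cx + r * np.cos(theta)).astype(np.float32)
    map_y = (cy + r * np.sin(theta)).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

omni = np.random.randint(0, 255, (480, 480, 3), np.uint8)  # stand-in image
pano = omni_to_panorama(omni, cx=240, cy=240, r_inner=60, r_outer=230)
print(pano.shape)  # (180, 720, 3)
```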

References
– iRobot
– RemoteReality (formerly Cyclovision Technologies), maker of the omnidirectional camera I passed around

Any Questions?
I only gave an overview of the robot and how I'm using it, but I can field questions about more details or where you can find more info.