iRobot ATRV-Mini and my use of it, or how I learned to stop worrying and love robotics by Michael Eckmann
Michael Eckmann - Skidmore College - CS 106 - Fall 2005 Overview Specs of the iRobot ATRV-Mini and its available sensors How it came to be that I'm using it What I'm using it for – my dissertation project How I'm using it – what sensors I attach and how the information from those sensors will allow me to do what I want.
iRobot ATRV-Mini A mobile robot – maker: iRobot – model: ATRV-Mini www.irobot.com They're the makers of the Roomba vacuuming robot. They specialize in industrial and government robots – the ATRV-Mini is one of these, but it has been discontinued.
iRobot ATRV-Mini It was purchased sometime around 2001 by my advisor for use in a military visual surveillance application. It was sort of gathering dust, and so I ended up using it. – that's how things go sometimes --- you have to use what's available Luckily, it's a good fit for what I need to use it for
iRobot ATRV-Mini I have a spec sheet handout Onboard computer running Linux (Red Hat 6.2) Wireless networking communication to the onboard computer Rugged enough for outdoor use Sonar sensors: can anyone say what sonar is and give a short description of what it does for us? Optional sensors (not on this robot) – vision sensors: pan-tilt-zoom camera, stereo vision (anyone know what this is and what we can use it for?)
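As a quick aside on how sonar works: the sensor emits an ultrasonic pulse and times the echo, so distance follows from the round-trip time of flight. A minimal sketch, assuming sound travels at roughly 343 m/s in air (the ATRV-Mini's actual sonar driver and API are not shown here):

```python
# Sketch: estimating distance from a sonar echo (time of flight).
# Assumes the speed of sound is ~343 m/s (air, ~20 C); the robot's real
# sonar interface is not modeled here.

SPEED_OF_SOUND = 343.0  # meters per second

def sonar_distance(echo_time_s: float) -> float:
    """Distance to an obstacle given the round-trip echo time in seconds.

    The pulse travels out and back, so divide the total path by two.
    """
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Example: an echo returning after 10 ms means an obstacle ~1.7 m away.
print(sonar_distance(0.010))  # 1.715
```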
iRobot ATRV-Mini Optional sensors (not on this robot) – inertial sensor: angular – determines the amount of rotation about all three axes (yaw, pitch and roll); linear – determines motion along the three axes (x, y and z) – laser range finder: fast, accurate and expensive – GPS: determines position on Earth by communicating with satellites orbiting the Earth – gives latitude, longitude and altitude
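To give a feel for what the angular part of an inertial sensor provides: a gyro reports rotation *rates*, and orientation comes from integrating those rates over time. A one-axis sketch using simple Euler integration (real inertial trackers fuse all three axes and correct for drift; this is illustrative only):

```python
# Sketch: dead-reckoning yaw from an angular-rate (gyro) sensor by Euler
# integration. One axis only; real inertial units handle yaw, pitch and
# roll together and compensate for sensor drift.

def integrate_yaw(yaw0: float, rates: list, dt: float) -> float:
    """Accumulate yaw (radians) from gyro readings (rad/s) sampled every dt seconds."""
    yaw = yaw0
    for r in rates:
        yaw += r * dt
    return yaw

# Example: turning at a constant 0.5 rad/s for 2 s (20 samples at 10 Hz)
print(integrate_yaw(0.0, [0.5] * 20, 0.1))  # ~1.0 rad
```

Because every sample's noise is accumulated, the estimate drifts over time, which is exactly why the inertial data gets combined with GPS and vision later on.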
What are the specs on the Mindstorms? What sensors are available for them?
What I needed A robot with an onboard computer, one that I could attach: – camera – inertial sensor – GPS I need to be able to store prior knowledge on the hard drive and do computations while the robot's moving, taking input from all three sensors and combining that information with the stored data.
What I'm doing with it I am storing a crude "3d model" of the environment (with known world positions) and determining the robot's position within that "3d model" – to a high degree of accuracy – and quickly – from the sensor data (video, inertial and GPS) Once I have the robot's position accurately – I can overlay the video from the camera on the robot with graphical information from the "3d model" – The goal is to provide more information than just video
What I'm doing with it Textual information – display the name of a building overlaid on the video of the building. – doesn't need high accuracy of position – as long as the name of the building appears anywhere near it or on it, that's reasonable
What I'm doing with it More precise graphical information – overlay wireframe graphics over certain objects – requires much higher accuracy of the position For instance, suppose I am driving the robot indoors and I would like to "look through a wall" to see the plumbing or electrical lines, or even just the 3d structure behind the wall. Assuming that information is in my model, I should be able to overlay those graphical elements on the video, but I need to know where the robot is to a high degree of accuracy, or else the user won't get a good sense of where this hidden information really is in the video. Imagine video of this room...
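To see why pose accuracy matters for the overlay: drawing a wireframe means projecting each 3D model point into the image using the robot's estimated pose, so any error in the pose shifts every overlaid point. A minimal sketch using a plain pinhole camera model (the focal length, principal point, and yaw-only pose here are illustrative assumptions, not the actual system's camera model):

```python
# Sketch: projecting a known 3D model point into a camera image, given
# the camera's pose. Plain pinhole model, yaw-only rotation for
# simplicity; f, cx, cy are made-up example intrinsics.

import math

def project_point(pt_world, cam_pos, cam_yaw, f=500.0, cx=320.0, cy=240.0):
    """Project world point (x, y, z) to pixel (u, v) for a camera at
    cam_pos with heading cam_yaw (radians), looking along its +x axis."""
    dx = pt_world[0] - cam_pos[0]
    dy = pt_world[1] - cam_pos[1]
    dz = pt_world[2] - cam_pos[2]
    # rotate the offset into the camera frame (yaw only)
    cam_x = math.cos(cam_yaw) * dx + math.sin(cam_yaw) * dy   # forward
    cam_y = -math.sin(cam_yaw) * dx + math.cos(cam_yaw) * dy  # left
    if cam_x <= 0:
        return None  # point is behind the camera
    u = cx - f * cam_y / cam_x
    v = cy - f * dz / cam_x
    return (u, v)

# A point 10 m straight ahead projects to the image center:
print(project_point((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.0))  # (320.0, 240.0)
```

Shifting `cam_pos` or `cam_yaw` even slightly moves every projected point, which is why the "look through the wall" overlay demands an accurate position estimate.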
What I'm doing with it How to do this? – use GPS to get an initial guess – refine the position using the information I get from the inertial tracker and the visual information I get from the camera. – Information extracted from the camera includes: features tracked in the video over time – used to find those same features in the model (e.g. corners of a room or building); direction and speed of travel, to some degree – Extracting information from image and video data falls within computer vision (a subarea of computer science).
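The guess-then-refine idea can be sketched in miniature: start from the coarse GPS fix, then search nearby for the position whose predicted bearings to known landmarks (e.g. corners matched between the video and the model) best agree with what the camera observed. The brute-force grid search, the landmark list, and the 2D-only pose below are all simplifying assumptions standing in for the real optimization:

```python
# Sketch: refining a coarse GPS position using bearings to landmarks
# with known world coordinates. Grid search over a small neighborhood;
# the landmarks and search radius are made up for illustration.

import math

def bearing(from_xy, to_xy):
    """Bearing (radians) from one 2D point toward another."""
    return math.atan2(to_xy[1] - from_xy[1], to_xy[0] - from_xy[0])

def refine_position(gps_guess, landmarks, observed_bearings, radius=5.0, step=0.25):
    """Search a grid around the GPS guess for the position whose
    predicted landmark bearings best match the observed ones."""
    best, best_err = gps_guess, float("inf")
    gx, gy = gps_guess
    n = int(radius / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            cand = (gx + i * step, gy + j * step)
            err = sum((bearing(cand, lm) - ob) ** 2
                      for lm, ob in zip(landmarks, observed_bearings))
            if err < best_err:
                best, best_err = cand, err
    return best

landmarks = [(20.0, 0.0), (0.0, 20.0), (-15.0, -15.0)]
true_pos = (2.0, -1.0)
obs = [bearing(true_pos, lm) for lm in landmarks]
print(refine_position((0.0, 0.0), landmarks, obs))  # (2.0, -1.0)
```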
What I'm doing with it The GPS and inertial sensors that I'll use are not that exciting, but the camera I'm using is pretty cool. I'll pass one around as I explain some things about it. It's an omnidirectional camera. – contains a parabolic mirror – the camera images the reflection off this mirror – can "see" 360 degrees around one axis and 180 around the other two axes – basically it images a hemisphere, and if we put two together we would get the whole sphere
What's an omnidirectional sensor? [Diagram labels: secondary mirror (parabolic), primary mirror, imaging lens, video camera]
Why use an omni sensor? Advantages: – large field of view: get as much of the objects as possible in each image – utilizes only one non-moving camera – images the full upper hemisphere from one point of view – always sees perpendicular to robot motion Why do you think I consider these advantages? Can you think of any disadvantages to this kind of camera?
Why use an omni sensor? Humans prefer to view the world perspectively – like how a normal camera takes images, with a horizon line and parallel lines converging to some vanishing point. Since this particular omnidirectional camera uses a parabolic mirror, it is easy (and fast) to compute a perspective view from a portion of the omni image. The mirror's profile is a parabola – a specific mathematical shape with nice properties that allow a perspective view to be generated quickly.
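The "nice property" can be made concrete. For a parabolic mirror viewed by an orthographic (telecentric) camera, a ray of unit direction (X, Y, Z) from the mirror's focus lands in the omni image at (x, y) = h(X, Y)/(1 + Z), where h is the mirror parameter and Z points along the mirror axis. Generating a perspective view is then just sampling the omni image at those coordinates for each pixel of the desired view. A small sketch (h and the axis conventions here are illustrative, not the actual camera's calibration):

```python
# Sketch: mapping a viewing direction to parabolic-omni image
# coordinates, (x, y) = h * (X, Y) / (1 + Z). With this closed-form
# map, a perspective view is produced by sampling the omni image once
# per output pixel -- no iterative computation needed.

import math

def omni_coords(direction, h=1.0):
    """Map a unit viewing direction (X, Y, Z) to omni image coordinates."""
    X, Y, Z = direction
    return (h * X / (1.0 + Z), h * Y / (1.0 + Z))

# A ray straight up the mirror axis maps to the image center:
print(omni_coords((0.0, 0.0, 1.0)))  # (0.0, 0.0)
# A horizontal ray (Z = 0) lands on the circle of radius h:
x, y = omni_coords((1.0, 0.0, 0.0))
print(math.hypot(x, y))  # 1.0
```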
References iRobot – www.irobot.com RemoteReality (formerly Cyclovision Technologies) – makers of the omnidirectional camera I passed around – www.remotereality.com
Any Questions? I only gave an overview of the robot and how I'm using it, but I can field questions about more details or where you can find more info.