# Introduction to Computer Vision and Robotics: Navigation, Localization and Mapping

Tomas Kulvicius, Poramate Manoonpong

## Presentation transcript

“I know where I am & I am able to make a plan to reach the destination”

“How do I get there?”  Navigation from A to B. The problem involves obstacles, uneven terrain with varying friction properties, and obstruction of the robot’s view of the goal. Sometimes the robot will have maps of the area to be traversed, or information about known landmarks by which to navigate. While navigating from A to B, the robot may face obstacles, and it should use the best path to the goal, e.g., the shortest path, the least energy consumption, or the least time.

2) “Where am I?”  Localization problem. This problem is sometimes cast as the kidnapped-robot problem: imagine that the robot is picked up, blindfolded, and placed in a new location. How does it determine where it is with respect to a known coordinate frame? What sensors does it need? How does a robot’s ability to answer this question differ from that of a human or a homing pigeon?

3) “Where have I been?”  Mapping. The third problem concerns the ability of a robot to prepare maps of the terrain it traverses, indoors or out; it has been described in terms of the answer to the question “Where have I been?” The most widely used method is called simultaneous localization and mapping (SLAM). To deal with these problems, the robot must rely on sensors that provide indications of distance to landmarks, distance traveled, velocity, and orientation with respect to landmarks or the coordinate directions. However, all these measurements are noisy and imprecise, and one must also consider measurement uncertainty and the variability of the environment. Thus many of the algorithms used for localization, navigation, and mapping are probabilistic; an example is given later. Before discussing how a robot can solve these problems, let us look at how biological systems solve them.

Biological Inspiration
Compass orientation vs. true navigation (MacFarland, who studies animal behavior and navigation). One method for demonstrating the difference between compass orientation and true navigation is to displace animals during a journey: during their journey, animals are captured and released in another location. If an animal proceeds in the same direction without compensating when it is released, it is using compass orientation. But if it corrects for the displacement on release and changes direction to head for the original destination, it is using true navigation.

Many animals, such as birds, fish, mammals, and insects, have the ability to navigate and localize.

Biological Inspiration

This video illustrates the trail-laying behaviour of Argentine ants. Like most ant species, these ants lay a trace of pheromone as they move across space in search of food. The pheromone is not directly visible to the human eye, nor detectable with biochemical assays at the concentrations at which it is present on natural trails. Here, possible pheromone traces left by the ants are visualized with an image-analysis technique, by colouring the substrate where the ants go. Illustrative video for the article: Individual rules for trail pattern formation in Argentine ants (Linepithema humile), by Andrea Perna, Boris Granovskiy, Simon Garnier, Stamatios Nicolis, Marjorie Labédan, Guy Theraulaz, Vincent Fourcassié, David Sumpter.

From Biological Inspiration to Robot Implementation
We have already seen that animals show a variety of remarkable abilities to assist them in navigation. To provide robots with many of the same abilities, we use sensors such as:
1) Computer vision. It enables a robot to see and recognize landmarks such as large rocks, trees, and beacons. Vision sensors are the eyes of the robot; object-recognition software is essential for useful interpretation of the visual input that these sensors provide. Mars rovers are equipped with sun sensors (a CCD camera with image processing used to extract the sun’s position) to enable them to determine their orientation with respect to the sun. Similarly, a star sensor (a CCD camera with a special lens and image processing) combined with a celestial map can be used for celestial navigation.
2) Gas sensors.
3) Compass, e.g., a digital compass. Devantech makes an I2C device that returns heading at 8-bit or 10-bit resolution. Several other models are also available; some have four pins, one each for N, W, S, E, giving either 4 or 8 possible directions.
4) Wheel encoders or joint-angle sensors; joint angles on a legged robot can be used to count walking steps for odometric measurements.
Clearly, we can equip our robots with the basic functions of animal navigation sensors. Having sensors, however, is only part of the solution, so we must now look at how these sensors are used. Navigation is the process of determining and maintaining a path or trajectory to a goal destination.

Mobile Robots: NASA’s “Curiosity” rover, e-puck robot, ARMAR-III, AMOS robot

E-puck Robot Setup. As robots become increasingly autonomous and able to operate in unstructured environments, they will be faced with more and more difficult problems of orientation and navigation. When a robot is moving indoors on smooth surfaces over short distances, with unobstructed visibility, these problems are not difficult, since the robot will have a clear view of a target location and can use vision for navigation, or it can use odometry (wheel encoders). However, odometry is generally not satisfactory across long distances over uneven terrain, because different wheels will experience different amounts of slippage. Even on a smooth floor, since the wheels are not identical, a robot’s actual path will gradually drift away from the desired path to the goal.


Sensors for Robot Navigation, Localization, Mapping
Computer vision enable a robot to see and recognize landmarks, orient using sun sensors, star sensors. Star sensor We have already seen that animals show a variety of remarkable abilities to assist them in navigation. To provide robots with many of the same abilities We use sensors such as Computer vision. It enables a robot to see and recognize landmarks like large rocks, tree and beacons. Vision sensors are the eyes of the robot; Object recognition software is essential for useful interpretation of the visual input that these sensors provide! Mars rovers are equipped with sun sensors (CCD camera with image processing used to extract the sun position) to enable them to determine their orientation rwt the sun Similary, a star sensor (CCD camera with special lens and image processing) combined with a celestial map can be used for celesial navigation 2) Gas sensor 3) Compass = digital compass Devantech makes an I2C device which returns heading in 8-bits or 10 bit-resolution. There also also several models available Have 4 pins one of each N, W,S,E thus giving either 4 or 8 possible direction 4) Wheel encoders or joint angle sensors  joint angle for legged robot to count walking steps Since odometric measurements Clearly, we can equip our robots with the basic functions of animal navigation sensors. Having sensors, however, is only part of the solution. So, Now we must look at how these sensors are used!!!! Navigation is the process of determining and maintaining a path or trajectory to a goal destination. Star sensor Volpe, IROS 1999 Sun sensor

Sensors for Robot Navigation, Localization, Mapping
Computer vision enable a robot to see and recognize landmarks, orient using sun sensors, star sensors. Olfactory sensors (gas or alcohol sensing) enable a robot to detect chemical substance or trail and a specific odor source. We have already seen that animals show a variety of remarkable abilities to assist them in navigation. To provide robots with many of the same abilities We use sensors such as Computer vision. It enables a robot to see and recognize landmarks like large rocks, tree and beacons. Vision sensors are the eyes of the robot; Object recognition software is essential for useful interpretation of the visual input that these sensors provide! Mars rovers are equipped with sun sensors (CCD camera with image processing used to extract the sun position) to enable them to determine their orientation rwt the sun Similary, a star sensor (CCD camera with special lens and image processing) combined with a celestial map can be used for celesial navigation 2) Gas sensor 3) Compass = digital compass Devantech makes an I2C device which returns heading in 8-bits or 10 bit-resolution. There also also several models available Have 4 pins one of each N, W,S,E thus giving either 4 or 8 possible direction 4) Wheel encoders or joint angle sensors  joint angle for legged robot to count walking steps Since odometric measurements Clearly, we can equip our robots with the basic functions of animal navigation sensors. Having sensors, however, is only part of the solution. So, Now we must look at how these sensors are used!!!! Navigation is the process of determining and maintaining a path or trajectory to a goal destination. Sensor response to alcohol (70%)

Sensors for Robot Navigation, Localization, Mapping
Computer vision enable a robot to see and recognize landmarks, orient using sun sensors, star sensors. Olfactory sensors (gas sensing) enable a robot to detect chemical substance or trail and a specific odor source. Compass provide an indication of magnetic north. But unreliable when interfering with magnet or metal. The DIGITAL SENSOR No magnetically indicates the four Cardinal (N. E, S. W) directions, and, by overlapping the four Cardinal directions, shows the four intermediate (NE, NW, SE, SW) directions ANALOG SENSOR No outputs a sine-cosine curve pair which may be interpreted by microprocessor, graphs, or other simple system into directional information. No Sensor (Protected by patents applied for) requires a regulated 5 volt input and gives a ratiometric output. The rail to rail voltage swing is close to 0.75 volts for both curves. Dinsmore ANALOG SENSOR No. R1655 is the same as No except that it outputs a voltage swing of close to 1.3 volts rail to rail for both curves. Devantech Magnetic Compass Analog Dinsmore compass

Sensors for Robot Navigation, Localization, Mapping
Computer vision enable a robot to see and recognize landmarks, orient using sun sensors, star sensors. Olfactory sensors (gas sensing) enable a robot to detect chemical substance or trail and a specific odor source. Compass provide an indication of magnetic north. But unreliable when interfering by magnetic or metal. Clock (on board)  essential in connection with a sun sensor. We have already seen that animals show a variety of remarkable abilities to assist them in navigation. To provide robots with many of the same abilities We use sensors such as Computer vision. It enables a robot to see and recognize landmarks like large rocks, tree and beacons. Vision sensors are the eyes of the robot; Object recognition software is essential for useful interpretation of the visual input that these sensors provide! Mars rovers are equipped with sun sensors (CCD camera with image processing used to extract the sun position) to enable them to determine their orientation rwt the sun Similary, a star sensor (CCD camera with special lens and image processing) combined with a celestial map can be used for celesial navigation 2) Gas sensor 3) Compass = digital compass Devantech makes an I2C device which returns heading in 8-bits or 10 bit-resolution. There also also several models available Have 4 pins one of each N, W,S,E thus giving either 4 or 8 possible direction 4) Wheel encoders or joint angle sensors  joint angle for legged robot to count walking steps Since odometric measurements Clearly, we can equip our robots with the basic functions of animal navigation sensors. Having sensors, however, is only part of the solution. So, Now we must look at how these sensors are used!!!! Navigation is the process of determining and maintaining a path or trajectory to a goal destination.

Sensors for Robot Navigation, Localization, Mapping
Wheel encoders or joint-angle sensors  measure distance traveled and change in orientation, and are used for path integration (dead reckoning). Range finders  enable a robot to estimate its distance from objects in the environment (IR and ultrasonic sensors at short distances, laser scanners at longer distances). Gyroscopes  provide heading direction and improve odometric readings. GPS  enables outdoor robots to determine their latitude and longitude to within centimeters; unfortunately, it does not work indoors, nor on Mars, the moon, and other planets. Others: wind, pressure, sound, and heat sensors.
Navigation is the process of determining and maintaining a path or trajectory to a goal destination.

Navigation “Moving from a starting point to a goal”

I. Outdoor Navigation Robot’s initial position at “START”
Consider the situation illustrated in the figure: a robot is situated at the point labeled “START” and must move to “GOAL”. A straight-line path is not possible due to obstacles, and two landmarks (L1, L2) are visible on a clear day.

I. Outdoor Navigation
Three possible scenarios. 1) Clear day, visible goal, unknown distance to goal: “What sensors should we use?”

I. Outdoor Navigation Three possible scenarios:
Clear day, visible goal, unknown distance to goal. Sensors: Vision, compass, and sonar or IR sensors. Strategy: “Wandering Standpoint” (Puttkamer 2000). The robot uses vision to recognize its goal. It then travels toward the goal by keeping the goal image in the center of its viewfinder and moving in the same compass direction, until the IR sensors detect an obstacle (and the obstacle obstructs the view of the goal); the robot can then randomly move to the left or right, and travel around the obstacle by keeping a fixed distance to it.

I. Outdoor Navigation Three possible scenarios:
Clear day, visible goal, unknown distance to goal. Sensors: Vision, compass, and sonar or IR sensors. Strategy: “Wandering Standpoint” (Puttkamer 2000): try to reach the goal from the start in a direct line; when encountering an obstacle, randomly turn left/right, or check for a small turning angle that avoids the obstacle and turn in that direction; move around the obstacle (boundary following) until the goal is clear or visible, then head toward the goal.
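The strategy just described can be sketched as a small two-mode state machine. This is a minimal illustration under names of my own choosing, not the robots' actual code; the caller is assumed to supply the sensor flags and to execute the returned action on the hardware.

```python
import random

GO_TO_GOAL, FOLLOW_BOUNDARY = "go_to_goal", "follow_boundary"

def wandering_standpoint_step(mode, obstacle_ahead, heading_or_goal_clear):
    """One decision step of the Wandering Standpoint strategy (sketch).

    mode:                  current mode, GO_TO_GOAL or FOLLOW_BOUNDARY.
    obstacle_ahead:        True if IR/sonar detects an obstacle on the direct line.
    heading_or_goal_clear: True if the goal is visible again or the original
                           compass direction is free (exit condition for
                           boundary following).

    Returns (new_mode, action) for the caller to execute.
    """
    if mode == GO_TO_GOAL:
        if obstacle_ahead:
            # Obstacle blocks the direct line: randomly pick a side, then
            # circumnavigate while keeping a fixed distance to the obstacle.
            side = random.choice(["left", "right"])
            return FOLLOW_BOUNDARY, f"turn_{side}"
        # Keep the goal centred in the viewfinder, same compass heading.
        return GO_TO_GOAL, "drive_straight"
    else:  # FOLLOW_BOUNDARY
        if heading_or_goal_clear:
            # Original direction (or goal) is clear again: resume direct line.
            return GO_TO_GOAL, "head_to_goal"
        return FOLLOW_BOUNDARY, "follow_wall"
```

Repeated until the goal is reached, this loop reproduces the behaviour on the slide, including its weakness: with unlucky obstacle placements it can cycle forever, which is why the later slides turn to path planning.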

I. Outdoor Navigation Three possible scenarios:
Clear day, visible goal, unknown distance to goal. Sensors: Vision, compass, and sonar or IR sensors. Strategy: “Wandering Standpoint”: use vision to recognize the goal; keep the goal image in the center of the viewfinder; move in the same compass direction toward the goal; if an obstacle is detected (the goal is not visible), move left or right (randomly) and travel around the obstacle; when the original compass direction is clear and/or the goal is visible, the robot can turn, change its heading, and continue toward the goal.

I. Outdoor Navigation Three possible scenarios:
Clear day, visible goal, unknown distance to goal. Sensors: Vision, compass, and sonar or IR sensors. Strategy: “Wandering Standpoint” (steps as on the previous slide). This results in “Path 1”. Clearly, there are many alternative strategies for finding a path that avoids the obstacles and reaches the goal. However, this simple strategy is not an optimal one, and it might lead to an endless loop for extreme obstacle placements. Finding the best path from start to goal is called “path planning”, using e.g. the wavefront algorithm or machine-learning algorithms (reinforcement learning).

I. Outdoor Navigation Three possible scenarios:
2) Goal not visible from the start location; landmarks visible; goal visible from the landmarks. Sensors: Vision. Strategy: “Navigation by landmarks”: move to the visible landmarks, then from the landmarks to the goal. This results in “Path 2”. In this condition the robot can navigate using only vision, which is known as navigation by landmarks. However, if landmarks are not visible, it might use a map of the environment, if available.

I. Outdoor Navigation Three possible scenarios:
3) Goal and landmarks not visible; direction to the goal known. Sensors: GPS, or compass and wheel encoders (to obtain the current position). Strategy: “Dead reckoning”. When the goal (or even the first obstacle) is not visible but its map location is known, the robot can travel the desired distance and direction (e.g., 10 m NE) using GPS. If GPS is not available, the robot can use a compass, wheel encoders, and knowledge of the wheel diameters. This method is known as dead reckoning. It is useful over short distances, and it requires that the robot knows its initial pose (location and orientation) on the map. Dead reckoning is the process of determining the change in a robot’s (vehicle’s) position (x, y) and orientation (θ) over time: the robot records information (compass heading, wheel encoder counts, etc.) at regular intervals while moving, and this information is used to calculate the new position. The new position is then used to estimate how far, and in which direction, the robot should still move to reach the goal. (Dead reckoning, n.: navigation by calculation; a method of determining the position of a plane or ship by plotting its course and speed from a previously known position.)

Dead Reckoning. The goal is not visible from the starting point. Starting from “Start”, the robot records distance and heading direction for “path integration”.

Dead Reckoning: worked example (recording distance and heading direction for “path integration”):
- Start: x = 0, y = 0, θ = E (east)
- x = 5, y = 0; …; x = 10, y = 0; turn: θ = N (north)
- x = 10, y = 2.5
- Final: x = 10, y = 5, θ = N (at the goal)
- “Return to start”: go back along the shortest path home, of length D at angle Θ
- D = sqrt(x_final² + y_final²), Θ = arcsin(y_final / D)
- Go home: distance = D, heading = 90° + Θ (turning from the current northward heading)
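The homing computation on the slides can be written out directly. A short sketch (the function name is mine), with the start taken as the origin:

```python
import math

def homing_vector(x_final, y_final):
    """Shortest path home from the path-integrated position (x_final, y_final).

    Returns (D, theta_deg): the distance home D = sqrt(x^2 + y^2) and the
    angle Theta = arcsin(y_final / D) from the slide, in degrees.
    """
    d = math.sqrt(x_final**2 + y_final**2)
    theta = math.degrees(math.asin(y_final / d))
    return d, theta

# Values from the worked example: final pose x = 10, y = 5, heading north.
d, theta = homing_vector(10, 5)
# D = sqrt(125) is about 11.18 m and Theta about 26.57 degrees; facing north,
# the robot turns by 90 + Theta (about 116.57 degrees) to head straight home.
turn_angle = 90 + theta
```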

Dead Reckoning. Control inputs to the robot: linear velocity v(t) and rotational velocity ω(t); starting position (x₀, y₀) and orientation (θ₀). The current robot pose (x, y, θ) can be computed as:

x(t) = x₀ + ∫ v(t) cos θ(t) dt
y(t) = y₀ + ∫ v(t) sin θ(t) dt
θ(t) = θ₀ + ∫ ω(t) dt
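On a real robot these integrals are computed incrementally at a fixed time step. A discrete-time (Euler) sketch, generic rather than any particular robot's code:

```python
import math

def update_pose(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle kinematics:
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Reproduce the worked example: 10 m east, turn 90 degrees, then 5 m north.
x, y, th = 0.0, 0.0, 0.0          # start at the origin, heading east (theta = 0)
for _ in range(100):               # 10 s at v = 1 m/s, no rotation
    x, y, th = update_pose(x, y, th, 1.0, 0.0, 0.1)
for _ in range(100):               # turn in place: 10 s at omega = pi/20 rad/s
    x, y, th = update_pose(x, y, th, 0.0, math.pi / 20, 0.1)
for _ in range(50):                # 5 s at v = 1 m/s, now heading north
    x, y, th = update_pose(x, y, th, 1.0, 0.0, 0.1)
# The pose is now approximately (10, 5, pi/2).
```

In practice v and ω come from wheel-encoder counts rather than commanded velocities, which is why the estimate drifts, as the following slides discuss.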

The Ant Odometer: Stepping on Stilts and Stumps
The ant does not know where the food is, so it performs random exploration to search for the food.

The Ant Odometer: Stepping on Stilts and Stumps. Homing distances of experimental ants, tested immediately after the lengths of their legs had been modified at the feeding site. (A) Leg lengths were normal during the outbound journey but manipulated during the homebound run, resulting in different homing distances. (B) Ants tested after re-emerging from the nest after a previous manipulation; in this situation, leg lengths were equal, although manipulated, during both outbound and homebound runs. Box plots show median values of the homing distances recorded in n = 25 ants per experiment (with IQRs as box margins, and 5th and 95th percentiles as whiskers). Median values of the initial six turning points of an ant’s nest-search behavior were taken as the centers of search, indicating homing distance. The hatched box plots in (A) illustrate the centers of search as predicted from high-speed video analyses of stride lengths in normal and manipulated animals; the open box represents the prediction corrected for slow walking speed. Movie S1 shows an experimental ant walking on stilts, filmed in its typical desert habitat, demonstrating that the animals are able to walk accurately and trouble-free with these modified legs (walking speed ca. m/s; the reproduction is in slow motion, ×0.5).

(A) Leg lengths were normal during the outbound journey but manipulated during the homebound run. (B) Leg lengths were manipulated during both the outbound journey and the homebound run. Wittlinger et al., 2006, Science, 312(5782).


Sources of dead-reckoning error: wheel slip, terrain change, inaccurate (proprioceptive) sensors, and the robot not being exactly symmetrical, so the distance travelled by the right side may differ from that traveled by the left side.

Because of wheel slip, terrain change, inaccurate proprioceptive sensors, and robot asymmetry, the distance traveled and the orientation will deviate randomly from the estimated values. A common remedy is to combine local (proprioceptive) and global (exteroceptive) strategies, using global information to periodically update and improve the local information. As we show subsequently, this is in fact a common approach to localization (determining where the robot is) and hence to navigation; strategies based on such an approach have some similarity to the navigation methods of bees and desert ants.

Dead reckoning can be improved by: using additional (exteroceptive) sensors, e.g., vision, gyroscope, GPS; combining local (proprioceptive) and global (exteroceptive) strategies; and using statistical estimation techniques (probabilistic localization, Kalman filter).
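As a toy illustration of the last point, here is a one-dimensional Kalman-style update that corrects a drifting odometry estimate with an occasional noisy global fix. All numbers and names are made up for the example; a real robot filter is multidimensional and tracks the full pose.

```python
def kalman_1d(x, p, u=0.0, q=0.0, z=None, r=None):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : state estimate (position) and its variance
    u, q : odometry increment and the variance it adds (predict step)
    z, r : exteroceptive measurement and its variance (update step, optional)
    """
    # Predict: integrate the odometry; uncertainty grows with every step.
    x, p = x + u, p + q
    # Update: blend in the global fix, weighted by relative uncertainty.
    if z is not None:
        k = p / (p + r)              # gain: 0 = trust odometry, 1 = trust fix
        x, p = x + k * (z - x), (1 - k) * p
    return x, p

# Odometry reports 1 m per step but accumulates variance; a single GPS-like
# fix at 9.0 m (variance 0.5) pulls the estimate back and shrinks p.
x, p = 0.0, 0.0
for _ in range(10):
    x, p = kalman_1d(x, p, u=1.0, q=0.04)   # dead reckoning: x -> 10, p -> 0.4
x, p = kalman_1d(x, p, z=9.0, r=0.5)        # global correction
```

Note how the correction moves the estimate only partway toward the measurement, in proportion to how uncertain the odometry has become; this is the "combine local and global" idea from the slide in its simplest form.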


Laying and sensing odor markings
Robots exploring an unknown environment lay trails indicating the route back to their starting positions. A path-finder robot ‘A’ lays a trail for load-carrying robots ‘B’ and ‘C’ (Russell, 1995).

Odor tracking robot. Robot tracking an ethanol vapor (Ishida et al., 2002)

Olfactory coordinated area coverage
Larionova et al., 2006

Path finding based on self-marking navigation
(Robot) Kulvicius et al., 2008

Path finding based on self-marking navigation
Reactive control Odor following Sabaliauskas, 2009

Path finding based on self-marking navigation
Kulvicius et al., 2008

Path finding based on self-marking navigation
Robot setup Sabaliauskas, 2009

Path finding based on self-marking navigation
5th Run 10th Run Sabaliauskas, 2009

Path finding based on self-marking navigation
Statistics 1.5m n=9 Time (s) 1.5m Number of runs Sabaliauskas, 2009

IV. Maze Navigation

Wall-following: Touching (Left/Right)
Wall-following: always follow the left (or right) wall. For example, if a robot comes to an intersection with several open sides, it follows the leftmost path. Start from x = 0, y = 0. If the goal is reached, terminate. Each iteration: 1) Read the sensor values (left, right, front); a PSD (position sensitive device) checks whether a wall exists on the front, left, or right-hand side. 2) The robot selects the leftmost open direction (always turn left if possible); otherwise it drives straight; only if the other two directions are blocked will it turn right; if none are free, it moves backward. 3) Repeat from (1). Right wall following / Left wall following
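The decision rule above can be written as one small function. A minimal sketch, assuming boolean "free" flags already derived from the PSD readings (the function name and return values are illustrative):

```python
def wall_follow_step(left_free, front_free, right_free):
    """One decision of the left-hand wall-following rule: prefer the leftmost
    open direction, then straight, then right; back up only if all blocked."""
    if left_free:
        return "turn_left"
    if front_free:
        return "straight"
    if right_free:
        return "turn_right"
    return "turn_back"
```

The mirror-image rule (right wall following) is obtained by swapping the order of the left and right checks.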

Wall-following: Left/Rightmost path strategy
“A robot uses the left/right-handed rule: if it comes to an intersection with several open sides, it follows the left/rightmost path.” This is the left-hand-on-the-wall rule, where the robot always takes the leftmost path at an intersection; at dead ends it turns around. It does this until the end of the maze is reached. The Swan robot solves a maze made of empty cans. In this demonstration, the robot uses the left-handed rule, namely, Swan selects the leftmost branch at each intersection. Notice the smooth turns at intersections. At a dead end, the robot makes a special "flip turn" that negotiates the tight space. Finally it arrives at the exit. Leftmost Rightmost

Left/Rightmost path strategy
Check state → Move

Left/Rightmost path strategy
Leftmost path Rightmost path

Problem for Wall-following
This simple and elegant algorithm works very well for most mazes. However, there are mazes where it does not work. As can be seen in Figure 15.4, a maze can be constructed with the goal in the middle, so a wall-following robot will never reach it. The recursive algorithm shown in the following section, however, will be able to cope with arbitrary mazes.

Recursive exploration
“Full maze exploration requires us to generate an internal representation of the maze and to maintain a bit-field marking whether a particular square has already been visited.”

Recursive exploration
Algorithm: 1) Explore the whole maze starting at the start square and visit all reachable squares to obtain a map of the area; e.g., move front → left → right, and mark every visited square or location. 2) Compute the shortest distance from the start square to any other square (or the “GOAL”) using a “wavefront” algorithm. 3) Allow the user to enter the coordinates of a goal; then determine the shortest driving path by reversing the path in the “wavefront” array from the destination to the start square.

The Wavefront Planner: Setup
Goal / Robot. Mapping and Navigation: The theory behind robot maze navigation is immense - so much that it would take several books just to cover the basics! So to keep it simple, this tutorial will teach you one of the most basic but still powerful methods of intelligent robot navigation. For reasons I will explain later, this robot navigation method is called the wavefront algorithm. There are four main steps to running this algorithm. Step 1: Create a Discretized Map. Create an X-Y grid matrix to mark empty space, robot/goal locations, and obstacles. For example, this is a pic of my kitchen. Normally there isn't a cereal box on the floor like that, so I put it there as an example of an obstacle. Starting


The Wavefront in Action (1)
Starting with the goal, set all adjacent cells containing “0” to the current cell's value + 1. 4-point connectivity or 8-point connectivity? Your choice; we'll use 8-point connectivity in our example. Starting

The Wavefront in Action (2)
Now repeat with the modified cells. This is repeated until no 0s neighbor cells with values >= 2; 0s will remain only where regions are unreachable. Starting

The Wavefront in Action (3)
Repeat again…. Starting

The Wavefront in Action (4)
And again…. Starting

The Wavefront in Action (5)
And again until…. Starting

The Wavefront in Action (DONE!)
You're done. Remember, 0s should only remain if unreachable regions exist. Starting

To find the shortest path, according to your metric, simply always move toward a cell with a lower number – The numbers generated by the Wavefront planner are roughly proportional to their distance from the goal Starting
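The planner described on these slides can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes the grid convention used above (0 = free, 1 = obstacle, goal seeded with 2) and 8-point connectivity; the function names are my own. The path is then extracted by always stepping to a neighbouring cell with a lower number.

```python
from collections import deque

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]   # 8-point connectivity

def wavefront(grid, goal):
    """Fill the grid with wavefront values, spreading out from the goal.
    Convention from the slides: 0 = free, 1 = obstacle, goal gets 2;
    0s survive only in unreachable regions."""
    rows, cols = len(grid), len(grid[0])
    dist = [row[:] for row in grid]
    dist[goal[0]][goal[1]] = 2
    frontier = deque([goal])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in NEIGHBOURS:
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and dist[ny][nx] == 0:
                dist[ny][nx] = dist[y][x] + 1   # one step further from goal
                frontier.append((ny, nx))
    return dist

def extract_path(dist, start):
    """Walk downhill from the start cell to the goal (value 2), always moving
    to the traversable neighbour with the lowest value."""
    rows, cols = len(dist), len(dist[0])
    path = [start]
    y, x = start
    while dist[y][x] > 2:
        _, (y, x) = min((dist[ny][nx], (ny, nx))
                        for dy, dx in NEIGHBOURS
                        for ny, nx in [(y + dy, x + dx)]
                        if 0 <= ny < rows and 0 <= nx < cols
                        and dist[ny][nx] >= 2)   # skip obstacles / unreachable
        path.append((y, x))
    return path
```

Because the breadth-first fill guarantees every reachable cell has a neighbour one step closer to the goal, the downhill walk always terminates at the goal cell.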


Localization “Where am I? My position with respect to a reference frame”

Deviation. It is required to know the robot's starting position and orientation. For all subsequent driving actions (for example, straight sections, rotations on the spot, or curves), the robot's current position is updated using the feedback provided by the wheel encoders.
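The encoder-based position update can be sketched for a differential-drive robot. A minimal sketch, assuming the encoder ticks have already been converted to distances travelled by each wheel; the function name and the simple midpoint motion model are illustrative assumptions:

```python
from math import cos, sin

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """Update pose (x, y, heading) from per-wheel distances since the last
    update. Errors in d_left/d_right accumulate over time, which is exactly
    the deviation shown on the slide."""
    d_centre = (d_left + d_right) / 2.0          # distance of robot centre
    d_theta = (d_right - d_left) / wheel_base    # change in heading (rad)
    # Midpoint model: assume the heading during the step was the average.
    x += d_centre * cos(theta + d_theta / 2.0)
    y += d_centre * sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Driving straight (equal wheel distances) advances the pose along the current heading; opposite wheel distances rotate the robot on the spot without translation.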


GPS (Outdoor Localization)
Knows latitude, longitude, altitude – can derive velocities and heading. Provides "direct observation of state" – state: [x, y, θ] ~ [longitude, latitude, heading]. One of the central problems for driving robots is localization. For many application scenarios, we need to know a robot's position and orientation at all times. For example, a cleaning robot needs to make sure it covers the whole floor area without repeating lanes or getting lost, and an office delivery robot needs to navigate a building floor and know its position and orientation relative to its starting point. This is a non-trivial problem in the absence of global sensors. The localization problem can be solved by using a global positioning system; in an outdoor setting this could be the satellite-based GPS.

GPS (Outdoor Localization)
Knows latitude, longitude, altitude – can derive velocities and heading. Provides "direct observation of state" – state: [x, y, θ] ~ [longitude, latitude, heading]. We have implemented and tested our integrated pose system on several outdoor terrains. Figure 3 shows a typical outdoor image captured by the left camera of the stereo pair. Since GPS is accurate to only about 3-4 meters, in order to validate our results the robot was moved in a closed loop over meters. Since the starting and ending points are the same, the difference in pose between these two points gives a good indication of the localization error; we measure this error as a percentage of the total distance. Localization is important when the GPS precision is insufficient, e.g. at a scale of millimeters, or when the GPS signal is unavailable, e.g. underground. Agrawal, M. & Konolige, K. 2006

Sun compass & vision(Outdoor Localization)
Panoramic visual system, polarized-light sensors, ambient-light sensors. The polarized-light (POL) sensors are composed of photodiodes, which function as photoreceptors, and polarizing filters, which function as microvilli. Fig. 16 shows an example of a landmark array used for the navigation experiments; the grid visible on the desert ground was used for the alignment of landmarks and robot, and for the registration of the final robot position. The desert ant Cataglyphis is able to explore its desert habitat for hundreds of meters while foraging and return to its nest precisely and on a straight line. The three main strategies that Cataglyphis uses to accomplish this task are path integration, visual piloting and systematic search. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of Cataglyphis. Inspired by the insect's navigation system, we have developed mechanisms for path integration and visual piloting that were successfully employed on the mobile robot Sahabot 2. Sahabot 2. D. Lambrinos et al., Robotics and Autonomous Systems, 2000

Beacon measurements (Indoor Localization)
Freq 1, Freq 2, Freq 3: sonar signals. In an indoor setting, a global sensor network with infrared, sonar, laser, or radio beacons can be employed. These directly give us the desired robot coordinates, as shown in Figure 14.1. Let us assume a driving environment that has a number of synchronized beacons sending out sonar signals at the same regular time intervals, but at different (distinguishable) frequencies. By receiving signals from two or three different beacons, the robot can determine its local position from the difference of the signals' arrival times. Using two beacons can narrow down the robot position to two possibilities, since two circles have two intersection points. For example, if the two signals arrive at exactly the same time, the robot is located in the middle between the two transmitters; if, say, the left beacon's signal arrives before the right one, then the robot is closer to the left beacon by a distance proportional to the time difference. Using local position coherence, this may already be sufficient for global positioning. However, to determine a 2D position without local sensors, three beacons are required. Global sensor: only the robot's position can be determined by this method, not its orientation. The orientation has to be deduced from the change in position (the difference between two subsequent positions), which is exactly the method employed for satellite-based GPS, or from an additional compass sensor. Local sensor: for example, if the sonar sensors are mounted on the robot and the beacons are converted to reflective markers, then we have an autonomous robot with local sensors.

Mapping “Creating models of the environment they
traverse using sensor data” One of the major applications of mobile robots is to create models of the environment they traverse using sensor data. Robotic mapping is a discipline related to cartography: the goal is for an autonomous robot to be able to construct (or use) a map or floor plan and to localize itself in it. The robot has two sources of information: idiothetic (internal) and allothetic (external). When in motion, a robot can use dead reckoning methods such as tracking the number of revolutions of its wheels; this corresponds to the idiothetic source and can give the absolute position of the robot, but it is subject to cumulative error, which can grow quickly. The allothetic source corresponds to the sensors of the robot, like a camera, a microphone, laser or sonar. The problem here is "perceptual aliasing": two different places can be perceived as the same. For example, in a building it is nearly impossible to determine a location solely from visual information, because all the corridors may look the same. Idiothetic literally means "self-proposition" (Greek derivation), and is used in navigation models (e.g., of a rat in a maze) as in the phrase "idiothetic cues" to indicate that path integration was used to determine the present location instead of allothetic, or external, cues (e.g., visual or tactile). 2) Mapping is widely used in military applications. 3) For example, the robot might be thrown through an open window, crawl through drain pipes, or climb up the side of a building. 89

Sensors for the task: Internal sensors: encoders, compass (dead reckoning). External sensors: computer vision, sonar, IR, GPS, and laser range finders. The information obtained from the sensors can be transmitted to external receivers or stored on board for later analysis. "Indoor mapping": outdoor mapping is more difficult, since the external environment is not conveniently arranged in orthogonal corridors like those found in interior settings. Two common approaches to mapping are: topological mapping, and grid-based or metric mapping. 90

Topological mapping relies on landmarks (e.g., doors, hallway intersections, T-junctions for indoor environments). It involves the creation of a map where the location of landmarks is essential, not the distance between them. It is represented by a graph in which each node is a landmark and adjacent nodes are connected by edges. Topological approaches to mapping make use of a graph representation of landmarks in the environment: let the robot explore the area, recognize landmarks, place a node at each landmark, and connect nodes with arcs; after it explores the whole area, you obtain a graphical map of nodes. 91
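A topological map can be sketched directly as an adjacency list. This is a hypothetical indoor example (the landmark names are invented for illustration); breadth-first search over the graph then yields a route through the fewest landmarks, since the map stores only connectivity, not metric distance:

```python
from collections import deque

# Hypothetical landmark graph: nodes are landmarks, edges connect
# landmarks the robot can travel between directly.
topo_map = {
    "entrance": ["hallway"],
    "hallway": ["entrance", "office_A", "T_junction"],
    "office_A": ["hallway"],
    "T_junction": ["hallway", "office_B"],
    "office_B": ["T_junction"],
}

def route(graph, start, goal):
    """Breadth-first search: returns the landmark sequence with the
    fewest nodes, or None if the goal is unreachable."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Note that "fewest landmarks" need not be the geometrically shortest path; that distinction is exactly why metric (grid-based) maps exist.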

Grid-based or metric mapping
We cover the environment to be mapped with an evenly spaced grid. The robot does not have complete and accurate a priori knowledge concerning the presence of obstacles. Each cell in the grid stores the probability p(x,y) that cell c(x,y) is occupied. This value represents the robot's belief that it can or cannot move to the center of the cell. This grid-based map is also called an "occupancy map". Metric approaches use a two-dimensional grid and attempt to place the robot on a map location with respect to the grid coordinate system, determining the cell in the grid that most closely approximates the robot's position. Consider the indoor environment on the right. It shows: two offices A and B, an entrance from a stairway, hallways, dark areas that are pillars or structures in the corners, desks, and cabinets. The robot's goal as it traverses the space is to estimate the probability of each square being occupied. When beginning the task the robot has an empty grid, not the information about the structure and furnishings in the figure! The physical dimensions of the robot must also be taken into account: to protect the robot from colliding with obstacles, the boundaries can be grown by at least half the diameter of the robot (grey area). The occupancy map can be generated using sonar sensors. Example: p(x,y). Grid superimposed on the map. 92
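The occupancy probability p(x,y) can be updated with Bayes' rule after each sonar reading. A minimal per-cell sketch; the sensor-model probabilities below are illustrative assumptions, not measured sonar characteristics:

```python
def update_cell(prior, hit, p_hit_given_occ=0.7, p_hit_given_free=0.2):
    """Bayes update of p(occupied) for one grid cell after one sonar reading.
    hit=True means the sensor reported an obstacle in this cell."""
    if hit:
        likelihood_occ, likelihood_free = p_hit_given_occ, p_hit_given_free
    else:
        likelihood_occ, likelihood_free = 1 - p_hit_given_occ, 1 - p_hit_given_free
    numerator = likelihood_occ * prior
    return numerator / (numerator + likelihood_free * (1 - prior))
```

Starting from an uninformed prior of 0.5, repeated "hit" readings push the cell's belief toward occupied, and repeated "miss" readings push it toward free, which is how the noisy sonar scans gradually converge to the map.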

Example of Map Generation
Mapping algorithm / Mobile robot setup: • Navigating to unexplored areas in the physical environment. • Keeping track of the robot's position. • Recording grid positions as free or occupied (preliminary or final). • Determining whether the map generation has been completed. 93

Example of Map Generation
Step A: The robot starts with a completely unknown occupancy grid and an empty corresponding configuration space (i.e. no obstacles). The first step is to do a 360° scan around the robot; for this, the robot performs a rotation on the spot, and the angle it has to turn depends on the number and location of its range sensors. If a range sensor returns a value within its measurement range (below the maximum reading), a preliminary obstacle is entered in the cell at the measured distance, and all cells between this obstacle and the current robot position are marked as preliminary empty. The same is entered for all cells in the line of a measurement that does not locate an obstacle; all other cells remain "unknown". Only final obstacle states are entered into the configuration space, therefore space A is still empty. (Cell states: preliminary free, preliminary obstacle, unknown, free.) 94

Example of Map Generation
Step B: The robot drives to the closest obstacle in order to examine it more closely. The robot performs a wall-following behavior around the obstacle and, while doing so, updates both the grid and the configuration space. Now at close range, preliminary obstacle states have been changed to final obstacle states and their precise location has been entered into configuration space B. 95

Example of Map Generation
Step C: The robot has completely surrounded one object by performing the wall-following algorithm and is now close to its starting position again. Since there are no preliminary cells left around this rectangular obstacle, the algorithm terminates the obstacle-following behavior and looks for the nearest preliminary obstacle. 96

Example of Map Generation
Step D: The whole environment has been explored by the robot, and all preliminary states have been eliminated by a subsequent obstacle-following routine around the rectangular obstacle on the right-hand side. The final occupancy grid and the final configuration space are matched. 97

Other simulation results
98

Real robot experimental results

Filters Removing noise from a signal
One of the major applications of mobile robots is to create models of the environment they traverse from sensor data. Robotic mapping is a discipline related to cartography: the goal is for an autonomous robot to construct (or use) a map or floor plan and to localize itself within it. The robot has two sources of information: the idiothetic (internal) and the allothetic (external) sources. When in motion, a robot can use dead-reckoning methods such as counting the revolutions of its wheels; this corresponds to the idiothetic source and can give the absolute position of the robot, but it is subject to cumulative error, which can grow quickly. The allothetic source corresponds to the robot's sensors, such as a camera, a microphone, a laser scanner or sonar. The problem here is "perceptual aliasing": two different places can be perceived as the same. For example, in a building it is nearly impossible to determine a location solely from visual information, because all the corridors may look alike. Idiothetic literally means "self-proposition" (from Greek) and is used in navigation models (e.g., of a rat in a maze), as in the phrase "idiothetic cues", to indicate that path integration was used to determine the present location instead of allothetic (external) cues such as visual or tactile ones. Robotic mapping is also widely used in military applications, where, for example, a robot may be thrown through an open window, crawl through drain pipes, or climb up the side of a building.

Filters

Filters As we know, the measurements used to localize a robot in its environment are imprecise and include noise. How do we filter out the noise? How do we obtain an estimate of the true robot position that is as accurate as possible?

Low Pass Filters I.) Moving average (FIR filter). Tuning parameter: the window size (number of samples averaged).
In statistics, a moving average (also called rolling average, rolling mean or running average) is a type of finite impulse response filter used to analyze a set of data points by creating a series of averages of different subsets of the full data set. (Figure: sensory signal and filtered sensory signal.)

Low Pass Filters

Low Pass Filters II.) Exponential moving average (IIR filter). Tuning parameter: the smoothing factor. An exponential moving average (EMA), also known as an exponentially weighted moving average (EWMA), is a type of infinite impulse response filter that applies weighting factors which decrease exponentially: the weighting for each older data point decreases exponentially, never reaching zero. The EMA for a series Y may be calculated recursively. (Figure: sensory signal, filtered sensory signal, and the exponential decrease of the weights.)
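The recursive EMA computation can be sketched as follows; a minimal illustration, with the smoothing factor called `alpha` and the filter seeded with the first sample (both conventional choices, assumed here):

```python
def ema(signal, alpha=0.5):
    """IIR low-pass filter: y[k] = alpha * x[k] + (1 - alpha) * y[k-1].

    Unlike the moving average, every past sample contributes to the
    output, with exponentially decreasing weight.
    """
    out = []
    y = signal[0]                        # seed with the first sample
    for x in signal:
        y = alpha * x + (1 - alpha) * y  # recursive update
        out.append(y)
    return out
```

A smaller `alpha` gives stronger smoothing (older samples keep more weight); `alpha = 1` passes the signal through unchanged.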

Low Pass Filters (Figure: filter outputs for smoothing-factor values 0.5 and 0.125.)

Kalman Filter (Estimator)
A set of mathematical equations to estimate the state of a process in a way that minimizes the squared estimation error. Controlled process and measurement model: A relates the previous state xk-1 to the current state xk; B relates the optional control input u to the state x; H relates the state x to the measurement z; Q is the process noise covariance; R is the measurement noise covariance. It applies to a linear system driven by a stochastic process (linear Gauss-Markov model), and there are both the general and the steady-state Kalman filter. Here are the most important concepts you need to know. Kalman filters are discrete: they rely on measurement samples taken at repeated, constant time intervals; although you can approximate it fairly well, you do not know what happens between the samples. Kalman filters are recursive: the prediction of the future relies on the state of the present (position, velocity, acceleration, etc.) as well as a guess about what any controllable parts tried to do to affect the situation (such as a rudder or a steering differential). Kalman filters work by making a prediction of the future, getting a measurement from reality, comparing the two, moderating this difference, and adjusting the estimate with this moderated value. The better your mathematical model of the situation, the more accurate the Kalman filter's results; if the model is completely consistent with what is actually happening, the estimate will eventually converge with reality. When you start up a Kalman filter, it expects: the mathematical model of the system, represented by the matrices A, B and H; an initial estimate of the complete state of the system, given as a vector x; an initial estimate of the error of the system, given as a matrix P; and estimates of the general process and measurement noise of the system, represented by the matrices Q and R. During each time step, you are expected to give it the most recent control input (vector u), i.e. the system's guess as to what it did to affect the situation (such as steering commands), and the most recent measurements that can be used to calculate the state (vector z). After the calculations, you get the most current estimate of the true state of the system and the most current estimate of its overall error.
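The equations referred to above, in their standard textbook form and using the slide's notation (A, B, H, Q, R; here w and v denote the process and measurement noise, and the superscript minus marks the a-priori prediction):

```latex
% Process and measurement model
x_k = A\,x_{k-1} + B\,u_k + w_k, \qquad w_k \sim \mathcal{N}(0, Q) \\
z_k = H\,x_k + v_k, \qquad v_k \sim \mathcal{N}(0, R) \\
% Prediction (time update)
\hat{x}_k^{-} = A\,\hat{x}_{k-1} + B\,u_k, \qquad
P_k^{-} = A\,P_{k-1}\,A^{\top} + Q \\
% Correction (measurement update)
K_k = P_k^{-} H^{\top}\left(H\,P_k^{-} H^{\top} + R\right)^{-1} \\
\hat{x}_k = \hat{x}_k^{-} + K_k\left(z_k - H\,\hat{x}_k^{-}\right), \qquad
P_k = \left(I - K_k H\right) P_k^{-}
```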

Kalman Filter (Estimator)
Procedure:

Kalman Filter (Estimator)
Procedure:

Kalman Filter (simplified version)
We had the full model above. If we drop the state matrices and the control input, we get the simplified form:
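The simplified filter can be sketched as a scalar implementation; a minimal sketch assuming A = H = 1 and no control input (the function name and the default noise covariances are illustrative, not from the slides):

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.1):
    """Scalar Kalman filter with A = H = 1 and no control input.

    q: process noise covariance, r: measurement noise covariance.
    Returns the sequence of state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state is assumed constant, only uncertainty grows
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With a small `q` and a larger `r`, the gain `k` shrinks over time, so later measurements move the estimate less: the filter trusts its converged estimate more than any single noisy reading.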

Kalman Filter (Estimator)
(Figure: comparison of the estimation errors eMA, eEMA and eKalman, and the Kalman gain K.)