Presentation on theme: "Introduction to Computer Vision and Robotics: Navigation, Localization and Mapping Tomas Kulvicius Poramate Manoonpong."— Presentation transcript:

1 Introduction to Computer Vision and Robotics: Navigation, Localization and Mapping Tomas Kulvicius Poramate Manoonpong

2 Robot Navigation, Localization, and Mapping "I know where I am & I am able to make a plan to reach the destination"

3 Problems: Navigation, Localization, Mapping 1) "How do I get there?" → Navigation from A to B

4 Problems: Navigation, Localization, Mapping 2) "Where am I?" → Localization problem

5 Problems: Navigation, Localization, Mapping 3) "Where have I been?" → Mapping

6 Biological Inspiration

7 McFarland (1999) identified three important strategies in navigation: 1) Pilotage: navigation using familiar landmarks or features of some sort (visual, olfactory, etc.). For example, some insects navigate towards objects that are associated with their nest. 2) Compass orientation: navigation in a particular compass direction without using landmarks. For example, the small white butterfly moves in the same direction day after day, regardless of wind direction. 3) True navigation: the ability to navigate to a goal point without the use of landmarks and regardless of the direction. For example, pigeons seem to use true navigation and can return to their loft no matter where they are released.

8 Biological Inspiration Compass orientation True navigation

9 Biological Inspiration Some important components for navigation: Visual sensing → landmarks (objects, mountains, lakes) Olfactory cues → identify home, track a food source or home Sun compass → orientation with respect to the sun (requires an internal clock) Magnetic compass → Earth's magnetic field (polarity & gradients) Star compass → orientation with respect to the stars at night (e.g., the North Star, Polaris) Dead reckoning → record the distances and directions travelled from the home point Memory → memorize specific gradients or landmarks near the home base

10 Biological Inspiration The use of landmarks by the digger wasp (Tinbergen's study, 1951)


13

14 Biological Inspiration Odor-based navigation in rats

15 Biological Inspiration Odor-based navigation in ants (pheromone trails)

16

17 Biological Inspiration Sahara desert ant (Cataglyphis bicolor)

18

19 From Biological Inspiration to Robot Implementation

20 Mobile Robots ARMAR III, NASA's Curiosity rover, E-puck robot, AMOS robot

21 E-puck Robot Setup

22 Sensors for Robot Navigation, Mapping, Localization

23 Sensors for Robot Navigation, Localization, Mapping Computer vision → enables a robot to see and recognize landmarks, and to orient using sun sensors or star sensors (Volpe, IROS 1999). Sun sensor, star sensor.

24 Sensors for Robot Navigation, Localization, Mapping Olfactory sensors → (gas or alcohol sensing) enable a robot to detect a chemical substance or trail and a specific odor source. Sensor response to alcohol (70%).

25 Sensors for Robot Navigation, Localization, Mapping Compass → provides an indication of magnetic north, but is unreliable near magnets or metal. Analog Dinsmore compass, Devantech magnetic compass.

26

27 Sensors for Robot Navigation, Localization, Mapping Clock (on board) → essential in connection with a sun sensor.

28 Wheel encoders or joint angle sensors → measure distance traveled and change in orientation, and are used for path integration (dead reckoning). Range finders → enable a robot to estimate its distance from objects in the environment (IR and ultrasonic sensors at short distances, laser scanners at longer distances). Gyroscopes → provide heading direction and improve odometric readings. GPS → enables outdoor robots to determine their latitude and longitude (within centimeters). Unfortunately, it works neither indoors nor on Mars, the Moon, and other planets. Etc. → wind, pressure, sound, heat sensors!!! Sensors for Robot Navigation, Localization, Mapping

29 Navigation “Moving from a starting point to a goal”

30 I. Outdoor Navigation Robot's initial position at "START"; move to "GOAL". A straight-line path is NOT possible due to obstacles. The GOAL and two landmarks (L1, L2) are visible on a clear day!

31 Three possible scenarios: 1)Clear day, visible goal, unknown distance to goal: “What sensors should we use?” I. Outdoor Navigation

32 Three possible scenarios: 1)Clear day, visible goal, unknown distance to goal: -Sensors: Vision, Compass, and Sonar or IR sensors. -Strategy: “Wandering Standpoint” (Puttkamer 2000) I. Outdoor Navigation

33 Strategy: "Wandering Standpoint" (Puttkamer 2000). Try to reach the goal from the start in a direct line; when encountering an obstacle, either turn left/right at random OR check which small turning angle avoids the obstacle and turn in that direction → move around the obstacle (boundary-following) until the goal is clear or visible, then head toward the goal. I. Outdoor Navigation

34 "Wandering Standpoint" in practice: use vision to recognize the goal, keeping the goal image in the center of the view finder; move in the same compass direction toward the goal; if an obstacle is detected (the goal is not visible), move left or right (randomly) and travel around the obstacle; when the original compass direction is recovered and/or the goal is visible, the robot turns, changes its heading, and continues toward the goal. I. Outdoor Navigation

35 This results in "Path 1". However, this simple strategy is not optimal, and it might lead to an endless loop in the case of extreme obstacle placements. Finding the best path from start to goal is called "path planning", using e.g. the wavefront algorithm or machine learning (reinforcement learning). I. Outdoor Navigation
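The wandering-standpoint loop can be sketched in a few lines. This is a hypothetical grid-world version (the slide describes a continuous robot with compass and range sensors); the obstacle cells, the detour order, and the step budget are all assumptions for illustration:

```python
# Sketch of the "wandering standpoint" strategy on a 4-connected grid.
def wandering_standpoint(start, goal, obstacles, max_steps=200):
    """Head straight for the goal; when blocked, sidestep around the
    obstacle until the direct direction is free again."""
    pos = start
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            return path
        # direct steps toward the goal, tried first
        dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
        dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
        candidates = [(pos[0] + dx, pos[1]), (pos[0], pos[1] + dy),
                      # detour moves, tried only if the direct ones are blocked
                      (pos[0], pos[1] + 1), (pos[0], pos[1] - 1),
                      (pos[0] + 1, pos[1]), (pos[0] - 1, pos[1])]
        for nxt in candidates:
            # avoid obstacles and immediate backtracking
            if nxt != pos and nxt not in obstacles and nxt not in path[-3:]:
                pos = nxt
                path.append(pos)
                break
        else:
            return None  # boxed in
    return None  # step budget exhausted (guards against the endless-loop case)
```

On the slide's scenario this produces a Path-1-style detour; as the slide notes, the strategy is simple but not optimal, hence the step budget.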

36 Three possible scenarios: 2) Goal not visible from the start location, landmarks visible, goal visible from the landmarks: -Sensors: Vision -Strategy: "Navigation by landmarks" → move to the visible landmarks, then from the landmarks to the goal → this results in "Path 2". I. Outdoor Navigation

37 Three possible scenarios: 3) Goal and landmarks not visible, direction to the goal known: -Sensors: GPS, or compass & wheel encoders (to obtain the current position!) -Strategy: "Dead reckoning". I. Outdoor Navigation

38 Dead Reckoning Basically this is keeping track of your distances and your turns, then adding them all together ("path integration") to figure out your total displacement.

39 Recording distance and heading direction for "path integration". The goal is not visible from the starting point! Start → Goal

40 Dead Reckoning Recording distance and heading direction for "path integration". Start: X=0, Y=0, θ=θ0 → X=5, Y=0 → X=10, Y=0 → … Goal

41 Dead Reckoning Recording distance and heading direction for "path integration". … → X=10, Y=2.5 → …

42 Dead Reckoning Recording distance and heading direction for "path integration". Arriving at the goal: X_final=10, Y_final=5, θ_final = N

43 Dead Reckoning "Return to start"

44 Dead Reckoning Going back along the shortest path home (D)

45 Dead Reckoning Going back along the shortest path home (D): D = sqrt(X_final^2 + Y_final^2), α = arcsin(Y_final / D). Go home: Distance = D, Heading = 90° + α. (Start: X=0, Y=0, θ=θ0; X_final=10, Y_final=5, θ_final = N)
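The go-home computation can be written out directly; the numeric values (X_final = 10, Y_final = 5) are the ones from the example:

```python
import math

# Home vector from the accumulated displacement (X_final, Y_final).
def home_vector(x_final, y_final):
    d = math.sqrt(x_final ** 2 + y_final ** 2)    # D = sqrt(X^2 + Y^2)
    alpha = math.degrees(math.asin(y_final / d))  # alpha = arcsin(Y/D)
    heading = 90 + alpha                          # turn-around heading, as on the slide
    return d, heading

d, heading = home_vector(10, 5)
# d ≈ 11.18, heading ≈ 116.57 degrees
```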

46 Dead Reckoning Control inputs to the robot: linear velocity V(t) and rotational velocity ω(t). Starting position (x0, y0) and orientation θ0. The current robot pose (x, y, θ) can be computed as: x(t) = x0 + ∫ V(τ)·cos θ(τ) dτ, y(t) = y0 + ∫ V(τ)·sin θ(τ) dτ, θ(t) = θ0 + ∫ ω(τ) dτ
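In discrete time the pose integrals become the usual odometry update. A minimal sketch; the velocity samples and step size below are arbitrary examples:

```python
import math

# Accumulate the pose (x, y, theta) from odometry samples of linear
# velocity v and angular velocity w, taken every dt seconds.
def integrate_pose(x0, y0, theta0, samples, dt):
    x, y, theta = x0, y0, theta0
    for v, w in samples:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

# Drive straight at 1 m/s for 10 steps of 0.1 s -> about 1 m along x.
x, y, th = integrate_pose(0.0, 0.0, 0.0, [(1.0, 0.0)] * 10, 0.1)
```

Real odometry readings are noisy, so the integrated pose drifts; that is exactly the problem the following slides discuss.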

47 Dead reckoning & path integration: navigation of desert ants (Cataglyphis). The ant does not know where the food is! → It performs random exploration to search for the food!

48 Problems: Dead reckoning & path integration for navigation

49 Leg lengths were normal during the outbound journey but manipulated during the homebound run, or manipulated during both the outbound journey and the homebound run. Wittlinger et al. 2006, Science, 312(5782): 1965-1967.

50 Problems: Dead reckoning & path integration for navigation

51 Wheels slip Terrain change Inaccurate sensors (proprioceptive sensors) Robot is not exactly symmetrical

52 Problems: Dead reckoning & path integration for navigation → The distance traveled and the orientation will deviate randomly from the estimated values

53 Problems: Dead reckoning & path integration for navigation Improve by: → using additional (exteroceptive) sensors, e.g., vision, gyroscope, GPS → combining local (proprioceptive) and global (exteroceptive) strategies → using statistical estimation techniques (probabilistic localization, Kalman filter)

54 II. Odor based navigation

55 Laying and sensing odor markings: a path-finder robot 'A' lays a trail for load-carrying robots 'B' and 'C'. Robots exploring an unknown environment lay trails indicating the route back to their starting positions (Russell, 1995).

56 Odor tracking robot: robot tracking an ethanol vapor trail (Ishida et al., 2002)

57 Olfactory coordinated area coverage (Larionova et al., 2006)

58 Path finding based on self-marking navigation Kulvicius et al., 2008 (Robot)

59 Path finding based on self-marking navigation: odor following, reactive control (Sabaliauskas, 2009)

60 Path finding based on self-marking navigation Kulvicius et al., 2008

61 Path finding based on self-marking navigation Robot setup Sabaliauskas, 2009

62 Path finding based on self-marking navigation 5th run, 10th run (Sabaliauskas, 2009)

63 Path finding based on self-marking navigation Statistics: time (s) vs. number of runs (n=9, 1.5 m) (Sabaliauskas, 2009)

64 IV. Maze Navigation

65 Wall-following: Touching (Left/Right) Right wall following / Left wall following

66 Wall-following: Left/Rightmost path strategy "A robot uses the left/right-handed rule → if the robot comes to an intersection with several open sides, it follows the left/rightmost path" Leftmost / Rightmost

67 Left/Rightmost path strategy (flowchart: check state → move)

68 Left/Rightmost path strategy Leftmost path / Rightmost path
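A minimal sketch of the leftmost-path rule on a 4-connected grid maze. The free-cell encoding and the turn order are assumptions for illustration; a physical robot would use its wall sensors instead of a cell set:

```python
# Left-hand ("leftmost path") rule: at every cell, prefer turning left,
# then going straight, then right, then turning back.
# Headings are unit vectors; LEFT maps a heading to its 90-degree left turn.
LEFT = {(0, 1): (-1, 0), (-1, 0): (0, -1), (0, -1): (1, 0), (1, 0): (0, 1)}

def left_hand_walk(free, start, goal, heading=(0, 1), max_steps=500):
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        h = LEFT[heading]                  # start by trying a left turn
        for _ in range(4):
            nxt = (pos[0] + h[0], pos[1] + h[1])
            if nxt in free:
                heading, pos = h, nxt
                path.append(pos)
                break
            h = LEFT[LEFT[LEFT[h]]]        # rotate the preference clockwise
    return None                            # goal not reached within the budget
```

For example, on a small L-shaped corridor the walker hugs the left wall all the way to the goal.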

69 Problem for Wall-following

70 Recursive exploration "This leads to full maze exploration → it requires us to generate an internal representation of the maze and to maintain a bit-field marking whether a particular square has already been visited"

71 Recursive exploration Algorithm: 1) Explore the whole maze → starting at the start square, visit all reachable squares to obtain a map of the area, e.g., move Front → Left → Right, and mark every visited square or location! 2) Compute the shortest distance from the start square to any other square (or "GOAL") using a "wavefront" algorithm. 3) Allow the user to enter the coordinates of a goal: then determine the shortest driving path by reversing the path in the "wavefront" array from the destination to the start square.

72 The Wavefront Planner: Setup Starting Goal

73 The Wavefront Planner: Setup Starting Goal

74 The Wavefront in Action (1) Starting with the goal, set all adjacent cells containing "0" to the current cell's value + 1 – 4-point connectivity or 8-point connectivity? Your choice; we'll use 8-point connectivity in our example. Starting

75 The Wavefront in Action (2) Now repeat with the modified cells – this is repeated until no 0s neighbor cells with values >= 2. 0s will only remain where regions are unreachable. Starting

76 The Wavefront in Action (3) Repeat again…. Starting

77 The Wavefront in Action (4) And again…. Starting

78 The Wavefront in Action (5) And again until…. Starting

79 The Wavefront in Action (DONE!) You’re done – Remember, 0’s should only remain if unreachable regions exist Starting

80 The Wavefront, Now Navigation! To find the shortest path, according to your metric, simply always move toward a cell with a lower number – the numbers generated by the wavefront planner are roughly proportional to their distance from the goal. Starting
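The whole procedure, flood-fill plus descend-to-lower-numbers navigation, fits in a short sketch. For brevity this version uses 4-point connectivity rather than the 8-point connectivity of the worked example; the grid encoding (0 = free, 1 = obstacle, goal seeded with 2) follows the slides:

```python
from collections import deque

# Wavefront planner: breadth-first flood fill from the goal.
def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    wave = [row[:] for row in grid]          # 0 = free, 1 = obstacle
    wave[goal[0]][goal[1]] = 2               # seed the goal with 2
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and wave[nr][nc] == 0:
                wave[nr][nc] = wave[r][c] + 1    # leftover 0s = unreachable
                q.append((nr, nc))
    return wave

# Navigation: from the start, always step to a neighbor with a lower value.
# Assumes the start is reachable (its wave value is > 2).
def descend(wave, start):
    path = [start]
    r, c = start
    while wave[r][c] > 2:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(wave) and 0 <= nc < len(wave[0])
                    and 1 < wave[nr][nc] < wave[r][c]):
                r, c = nr, nc
                path.append((r, c))
                break
    return path
```

With 4-point connectivity the descent gives a Manhattan-shortest path; switching the neighbor list to 8 offsets reproduces the example's diagonal moves.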


82 Localization “Where am I?  My position with respect to a reference frame”

83 Proprioceptive sensors (Encoders) Dead reckoning Deviation

84 Proprioceptive sensors (Encoders) Dead reckoning Deviation

85 GPS (Outdoor Localization) Knows latitude, longitude, altitude – Can derive velocities, heading Provides “Direct observation of state” – State: [x, y,  ] ~ [longitude, latitude, heading]

86 GPS (Outdoor Localization) Agrawal, M. & Konolige, K. 2006

87 Sun compass & vision (Outdoor Localization) Sahabot 2: panoramic visual system, polarized-light sensors, ambient-light sensors. D. Lambrinos et al., Robotics and Autonomous Systems, 2000

88 Beacon measurements (Indoor Localization) Sonar signals: Freq 1, Freq 2, Freq 3

89 Mapping "Creating models of the environment the robot traverses, using sensor data"

90 Sensors for the task: internal sensors → encoders, compass (dead reckoning); external sensors → computer vision, sonar, IR, GPS, and laser range finders. Indoor mapping: the information obtained from the sensors can be transmitted to external receivers or stored on board for later analysis. Outdoor mapping is more difficult, since the external environment is not conveniently arranged in orthogonal corridors like those found in interior settings. Two common approaches to mapping are: topological mapping, and grid-based or metric mapping.

91 Topological mapping It relies on landmarks (e.g., doors, hallway intersections, T-junctions for indoor environments). It involves the creation of a map where the location of landmarks is essential, not the distance between them. It is represented by a graph in which each node is a landmark and adjacent nodes are connected by edges.

92 Grid-based or metric mapping We cover the environment to be mapped with an evenly spaced grid. The robot does not have complete and accurate a priori knowledge concerning the presence of obstacles. Each cell in the grid stores the probability p(x,y) that cell c(x,y) is occupied. This value represents the robot's belief that it can or cannot move to the center of the cell. This grid-based map is also called an "occupancy map". Grid superimposed on the map.
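A toy sketch of the occupancy idea: each cell keeps a belief p(x, y) that is nudged toward "free" or "occupied" by each observation. The blend rule and its weights below are illustrative assumptions, not the full Bayes-filter update used in real occupancy mapping:

```python
# Move a cell's occupancy belief one step toward the sensor's verdict.
def update_cell(p, observed_occupied, alpha=0.3):
    target = 0.9 if observed_occupied else 0.1   # assumed sensor confidence
    return (1 - alpha) * p + alpha * target

# A row of 5 unknown cells (p = 0.5). A range sensor at cell 0 reports an
# obstacle at cell 3: cells 0-2 were observed free, cell 3 occupied,
# cell 4 (behind the obstacle) stays unknown.
grid = [0.5] * 5
for i in range(3):
    grid[i] = update_cell(grid[i], False)
grid[3] = update_cell(grid[3], True)
# free cells drop toward 0.1, the obstacle cell rises toward 0.9
```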

93 Example of Map Generation Mapping algorithmMobile robot setup

94 Example of Map Generation Step A: The robot starts with a completely unknown occupancy grid and an empty corresponding configuration space (i.e., no obstacles). The first step is to do a 360° scan around the robot. If a range sensor returns a value larger than a threshold, then a preliminary obstacle is entered in the cell at the measured distance. All cells between this obstacle and the current robot position are marked as preliminary empty; the same is entered for all cells in the line of a measurement that does not locate an obstacle. All other cells remain "unknown". Only final obstacle states are entered into the configuration space, therefore space A is still empty. Cell states: preliminary obstacle, unknown, preliminary free, free.

95 Example of Map Generation Step B: The robot drives to the closest obstacle in order to examine it more closely. It performs a wall-following behavior around the obstacle and, while doing so, updates both the grid and the space. Now at close range, preliminary obstacle states have been changed to final obstacle states, and their precise locations have been entered into configuration space B.

96 Example of Map Generation Step C: The robot has completely surrounded one object by performing the wall-following algorithm and is now close to its starting position again. Since there are no preliminary cells left around this rectangular obstacle, the algorithm terminates the obstacle-following behavior and looks for the nearest preliminary obstacle.

97 Example of Map Generation StepD: The whole environment has been explored by the robot, and all preliminary states have been eliminated by a subsequent obstacle-following routine around the rectangular obstacle on the right-hand side. The final occupancy grid and the final configuration space are matched.

98 Other simulation results

99 Real robot experimental results

100 Filters Removing noise from a signal

101 Filters

102

103 Low Pass Filters I.) Moving average (FIR filter): y[k] = (1/N)·(x[k] + x[k-1] + … + x[k-N+1]). Tuning parameter: window size N. (Sensory signal → filtered sensory signal)
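A sketch of the moving-average filter; the window size N is the tuning parameter, and the shortened window at the start of the signal is one of several possible edge choices:

```python
# Moving-average (FIR) low-pass filter.
def moving_average(signal, n=3):
    out = []
    for k in range(len(signal)):
        window = signal[max(0, k - n + 1):k + 1]  # last n samples (fewer at the start)
        out.append(sum(window) / len(window))
    return out

# A single noise spike gets spread out and attenuated:
y = moving_average([0, 0, 10, 0, 0], n=3)
# y == [0.0, 0.0, 3.33..., 3.33..., 3.33...]
```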

104 Low Pass Filters

105 II.) Exponential moving average (IIR filter): y[k] = α·x[k] + (1-α)·y[k-1]. Tuning parameter: smoothing factor α. (Sensory signal → filtered sensory signal)

106 Low Pass Filters α = 0.5, 0.125, 0.03125
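The EMA filter in code; the example α value mirrors the slide's choices, where smaller α means heavier smoothing (and more lag):

```python
# Exponential moving average (IIR) low-pass filter.
def ema(signal, alpha):
    y = signal[0]                         # seed with the first sample
    out = [y]
    for x in signal[1:]:
        y = alpha * x + (1 - alpha) * y   # y[k] = a*x[k] + (1-a)*y[k-1]
        out.append(y)
    return out

y = ema([0, 0, 10, 0, 0], alpha=0.5)
# y == [0, 0, 5.0, 2.5, 1.25]: the spike decays geometrically
```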

107 Kalman Filter (Estimator) A set of mathematical equations to estimate the state of a process: Controlled process: x_k = A·x_{k-1} + B·u_{k-1} + w_{k-1}. Measurement: z_k = H·x_k + v_k. Process noise: w ~ N(0, Q). Measurement noise: v ~ N(0, R). A – relates previous state x_{k-1} to current state x_k; B – relates the optional control input u to state x; H – relates state x to measurement z; Q – process noise covariance; R – measurement noise covariance.

108 Kalman Filter (Estimator) Procedure – time update (predict): x̂_k⁻ = A·x̂_{k-1} + B·u_{k-1}; P_k⁻ = A·P_{k-1}·Aᵀ + Q.

109 Kalman Filter (Estimator) Procedure – measurement update (correct): K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹; x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻); P_k = (I − K_k·H)·P_k⁻.

110 Kalman Filter (simplified version) If we drop the state matrices and the control input, we get the scalar form: predict: x̂_k⁻ = x̂_{k-1}, P_k⁻ = P_{k-1} + Q; correct: K_k = P_k⁻/(P_k⁻ + R), x̂_k = x̂_k⁻ + K_k·(z_k − x̂_k⁻), P_k = (1 − K_k)·P_k⁻.
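The scalar form translates directly into code. This sketch estimates a constant true value from noisy measurements; the noise covariances q and r (and the measurement list) are example choices:

```python
# Scalar (1-D) Kalman filter for a constant-state process.
def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: x stays the same, uncertainty grows by Q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the measurement z
        p = (1 - k) * p           # shrink the uncertainty
        estimates.append(x)
    return estimates

est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
# the estimate settles near the true value 1.0
```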

111 Kalman Filter (Estimator) Comparison of estimation errors (n=30): e_EMA = 0.0523, e_MA = 0.0418, e_Kalman = 0.0194; Kalman gain (K)

112 Conclusions Problems: Navigation (How do I get there?), Mapping (Where have I been?), Localization (Where am I?) Biological Inspiration → Pilotage: navigation using familiar landmarks → Compass orientation: navigation using a particular compass direction, NOT using landmarks → True navigation: navigation to a goal point NOT using landmarks and regardless of the direction → Components: visual sensing, olfactory cues, compass, dead reckoning, memory Navigation → Sensors for robot navigation, mapping, localization → Simplified outdoor navigation: wandering standpoint, landmarks, dead reckoning & path integration → Odor-based navigation: laying and sensing odor markings, odor tracking robot, olfactory coordinated area coverage, path finding based on self-marking navigation → Maze navigation: wall following, recursive exploration + wavefront Localization: local & global sensors, low-pass filters (high-frequency noise), Kalman filter (Gaussian noise) Mapping: topological mapping, grid-based or metric mapping

