CS B659: Intelligent Robotics
Planning Under Uncertainty


[Architecture diagram: Sensors feed Perception (e.g., filtering, SLAM); Perception together with Prior knowledge (often probabilistic) feeds Decision-making (control laws, optimization, planning); Offline learning (e.g., calibration) informs the pipeline; Decision-making drives Actuators, which produce Actions]

Dealing with Uncertainty
- Sensing uncertainty: localization error, noisy maps, misclassified objects
- Motion uncertainty: noisy natural processes, odometry drift, imprecise actuators, uncontrolled agents (treat as state)

Motion Uncertainty
[Figure: path from start s to goal g past an obstacle]

Dealing with Motion Uncertainty
- Hierarchical strategies: use the generated trajectory as input to a low-level feedback controller
- Reactive strategies
  - Approach #1: re-optimize when the state has diverged from the planned path (online optimization)
  - Approach #2: precompute optimal controls over the state space, and just read off the new value from the perturbed state (offline optimization; see the sketch below)
- Proactive strategies: explicitly consider future uncertainty
  - Heuristics (e.g., grow obstacles, penalize nearness)
  - Markov Decision Processes
  - Online and offline approaches
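To make the reactive dichotomy concrete, here is a minimal Python sketch (not from the slides; the goal position, dynamics, and grid resolution are illustrative assumptions) of re-optimizing online from the current state versus precomputing controls over a discretized state space and reading them off at run time.

```python
# Sketch: online re-optimization vs. offline lookup-table controls.
# All numbers and the toy "optimization" are assumptions for illustration.
import numpy as np

GOAL = np.array([1.0, 0.0])

def replan_online(x):
    """Approach #1: re-optimize from the current (perturbed) state.
    Here the 'optimization' is just a unit step toward the goal."""
    direction = GOAL - x
    dist = np.linalg.norm(direction)
    return direction / dist if dist > 1e-9 else np.zeros(2)

# Approach #2: precompute controls over a discretized state space (offline)...
RESOLUTION = 0.1
xs = np.arange(-1.0, 2.0, RESOLUTION)
ys = np.arange(-1.0, 1.0, RESOLUTION)
POLICY = {(i, j): replan_online(np.array([xv, yv]))
          for i, xv in enumerate(xs) for j, yv in enumerate(ys)}

def control_from_table(x):
    """...then at run time just read off the control for the perturbed state."""
    i = min(max(int(round((x[0] - xs[0]) / RESOLUTION)), 0), len(xs) - 1)
    j = min(max(int(round((x[1] - ys[0]) / RESOLUTION)), 0), len(ys) - 1)
    return POLICY[(i, j)]

print(control_from_table(np.array([0.33, -0.4])))
```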

Proactive Strategies to Handle Motion Uncertainty
[Figure: path from start s to goal g past an obstacle, planned with future uncertainty in mind]

Dynamic Collision Avoidance Assuming Worst-Case Behaviors
Optimizing safety in real time under a worst-case behavior model
[Figure: robot and predicted behavior sets at T=1 and T=2; TTPF=2.6]

Markov Decision Process Approaches
(Alterovitz et al. 2007)

Dealing with Sensing Uncertainty
- Reactive heuristics often work well
  - Optimistic costs
  - Assume most-likely state
  - Penalize uncertainty
- Proactive strategies: explicitly consider future uncertainty
  - Active sensing: reward actions that yield information gain
  - Partially Observable Markov Decision Processes (POMDPs)

[Figure sequence over several slides: a robot assuming no obstacles in the unknown region and taking the shortest path to the goal, replanning as the map is revealed]
Works well for navigation because the space of all maps is too huge, and certainty is monotonically nondecreasing.

What if the sensor was directed (e.g., a camera)?
[Figure sequence over several slides: the same navigation example with a directed sensor]
At the last step, it would have made sense to turn a bit more to see more of the unknown map.

Another Example
[Figure: Locked? Key?]

Active Discovery of Hidden Intent
[Figure: two observed behaviors, no motion vs. perpendicular motion]

Main Approaches to Partial Observability
- Ignore and react: doesn't know what it doesn't know
- Model and react: knows what it doesn't know
- Model and predict: knows what it doesn't know AND what it will/won't know in the future
Moving down the list: better decisions, but more components in the implementation and harder computationally.

Uncertainty Models
- Model #1: Nondeterministic uncertainty. f(x,u) -> a set of possible successors
- Model #2: Probabilistic uncertainty. P(x'|x,u): a probability distribution over successors x', given state x and control u (Markov assumption)
(A minimal sketch of both models follows below.)
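A minimal Python sketch contrasting the two models, assuming a 1-D integer state and a disturbance e in {-1, 0, +1} with an assumed 0.1/0.8/0.1 distribution (none of these numbers come from the slides):

```python
# Sketch of the two uncertainty models for a 1-D robot whose commanded
# step may be corrupted by noise e in {-1, 0, +1}.
import random

def successors_nondet(x, u):
    """Model #1: nondeterministic uncertainty.
    f(x, u) returns the SET of possible successor states."""
    return {x + u + e for e in (-1, 0, 1)}

def transition_prob(x_next, x, u):
    """Model #2: probabilistic uncertainty.
    P(x' | x, u): a distribution over successors (Markov assumption)."""
    probs = {x + u - 1: 0.1, x + u: 0.8, x + u + 1: 0.1}
    return probs.get(x_next, 0.0)

def sample_successor(x, u):
    """Draw one successor according to P(x' | x, u)."""
    candidates = [x + u - 1, x + u, x + u + 1]
    return random.choices(candidates, weights=[0.1, 0.8, 0.1])[0]

print(successors_nondet(0, 1))    # {0, 1, 2}
print(transition_prob(1, 0, 1))   # 0.8
print(sample_successor(0, 1))
```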

Nondeterministic Uncertainty: Reasoning with Sets
x' = x + e, e ∈ [-a, a]
Belief state: x(t) ∈ [-ta, ta]
[Figure: the set of possible states grows from {0} at t=0, to [-a, a] at t=1, to [-2a, 2a] at t=2]
(A small propagation sketch follows below.)
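A small sketch of set-based belief propagation for this example, with an assumed disturbance bound a = 0.5; each prediction step is a Minkowski sum of the current interval with [-a, a], so the belief grows to [-ta, ta] as on the slide.

```python
# Sketch: propagating a belief state represented as an interval under
# x' = x + e, e in [-a, a]. The bound a = 0.5 is an assumed value.
A = 0.5

def propagate_interval(belief, a=A):
    """One step of set-based prediction: Minkowski sum with [-a, a]."""
    lo, hi = belief
    return (lo - a, hi + a)

belief = (0.0, 0.0)  # at t=0 the state is known exactly
for t in range(1, 4):
    belief = propagate_interval(belief)
    print(f"t={t}: x(t) in [{belief[0]:.1f}, {belief[1]:.1f}]")
# t=1: [-0.5, 0.5], t=2: [-1.0, 1.0], t=3: [-1.5, 1.5]
```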

Uncertainty with Sensing
- Plan = policy (a mapping from states to actions)
- The policy achieves the goal for every possible sensor result in the belief state
- [Figure: tree alternating Move and Sense steps, branching over observation outcomes]
- Observations should be chosen wisely to keep the branching factor low
- Special case: fully observable state

Target Tracking
[Figure: robot and target among obstacles]
- The robot must keep a target in its field of view
- The robot has a prior map of the obstacles
- But it does not know the target's trajectory in advance

Target-Tracking Example
- Time is discretized into small steps of unit duration
- At each time step, each of the two agents moves by at most one increment along a single axis
- The two moves are simultaneous
- The robot senses the new position of the target at each step
- The target is not influenced by the robot (non-adversarial, non-cooperative target)
[Figure: robot and target on a grid]

Time-Stamped States (No Cycles Possible)
- State = (robot-position, target-position, time)
- In each state, the robot can execute 5 possible actions: {stop, up, down, right, left}
- Each action has 5 possible outcomes (one for each possible action of the target), with some probability distribution
  [Potential collisions are ignored to simplify the presentation]
- Example: action "right" from ([i,j], [u,v], t) leads to ([i+1,j], [u,v], t+1), ([i+1,j], [u-1,v], t+1), ([i+1,j], [u+1,v], t+1), ([i+1,j], [u,v-1], t+1), or ([i+1,j], [u,v+1], t+1)
(A successor-enumeration sketch follows below.)
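The transition structure can be sketched as follows (hypothetical helper names; the uniform target distribution is only a placeholder, since the slides just say "some probability distribution"):

```python
# Sketch of the time-stamped state space: the robot picks one of 5 actions,
# the target independently does the same, so each robot action has 5 outcomes.
MOVES = {
    "stop":  (0, 0),
    "up":    (0, 1),
    "down":  (0, -1),
    "right": (1, 0),
    "left":  (-1, 0),
}

def successors(state, robot_action, target_probs=None):
    """Enumerate (next_state, probability) pairs for one robot action.
    target_probs maps target moves to probabilities; uniform if omitted."""
    (ri, rj), (tu, tv), t = state
    if target_probs is None:
        target_probs = {m: 1.0 / len(MOVES) for m in MOVES}
    dri, drj = MOVES[robot_action]
    result = []
    for tmove, p in target_probs.items():
        dtu, dtv = MOVES[tmove]
        next_state = ((ri + dri, rj + drj), (tu + dtu, tv + dtv), t + 1)
        result.append((next_state, p))
    return result

# Example: action "right" from ([i,j], [u,v], t) = ([3,4], [6,4], 0)
for s, p in successors(((3, 4), (6, 4), 0), "right"):
    print(p, s)
```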

Rewards and Costs
- The robot must keep seeing the target as long as possible
- Each state where it does not see the target is terminal
- The reward collected in every non-terminal state is 1; it is 0 in each terminal state
  [So the sum of the rewards collected in an execution run is exactly the amount of time the robot sees the target]
- No cost for moving vs. not moving
(A reward-function sketch follows below.)
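A minimal sketch of this reward structure; the visibility test below is an assumed stand-in (a box-shaped field of view), whereas a real implementation would check line of sight against the obstacle map.

```python
# Sketch: reward 1 in every non-terminal state, 0 in terminal states where the
# target is not visible, and no movement cost.
def target_visible(state):
    """Placeholder visibility test: 'visible' means within 3 cells in both
    coordinates. A real implementation would ray-cast against the map."""
    (ri, rj), (tu, tv), _t = state
    return abs(ri - tu) <= 3 and abs(rj - tv) <= 3

def is_terminal(state):
    return not target_visible(state)

def reward(state):
    return 0.0 if is_terminal(state) else 1.0
```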

Expanding the State/Action Tree
[Figure: tree expanded from horizon 1 out to horizon h]

Assigning Rewards
- Terminal states: states where the target is not visible
- Rewards: 1 in non-terminal states; 0 in terminal states
- But how to estimate the utility of a leaf at horizon h?
[Figure: tree from horizon 1 to horizon h]

Estimating the Utility of a Leaf
- Compute the shortest distance d for the target to escape the robot's current field of view
- If the maximal velocity v of the target is known, estimate the utility of the state as d/v [a conservative estimate]
[Figure: robot, target, and escape distance d]
(A small utility sketch follows below.)
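A small sketch of the heuristic, assuming a circular field of view of radius 5 and a known maximal target speed v_max = 1 (both numbers are illustrative, not from the slides): the target needs at least d / v_max time steps to escape, so that is a conservative utility estimate.

```python
# Sketch of the d / v_max leaf-utility heuristic under an assumed circular FOV.
import math

FOV_RADIUS = 5.0   # assumed field-of-view radius
V_MAX = 1.0        # assumed maximal target velocity

def leaf_utility(robot_xy, target_xy, fov_radius=FOV_RADIUS, v_max=V_MAX):
    dist = math.dist(robot_xy, target_xy)
    d = max(fov_radius - dist, 0.0)   # shortest escape distance
    return d / v_max

print(leaf_utility((0.0, 0.0), (2.0, 1.0)))   # about 2.76 steps of guaranteed visibility
```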

Selecting the Next Action
- Compute the optimal policy over the state/action tree using the estimated utilities at the leaf nodes
- Execute only the first step of this policy
- Repeat everything again at t+1... (sliding horizon)
[Figure: tree from horizon 1 to horizon h]
(A receding-horizon sketch follows below.)
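Putting the pieces together, here is a minimal, self-contained sketch of the sliding-horizon loop: expectimax over the state/action tree with leaf utilities at the horizon, executing only the first action. The models here (square field of view, uniform target motion, horizon 3) are simplified assumptions, not the course implementation.

```python
# Sketch: sliding-horizon action selection for the target-tracking example.
ACTIONS = {"stop": (0, 0), "up": (0, 1), "down": (0, -1), "right": (1, 0), "left": (-1, 0)}
FOV = 3          # assumed square field-of-view half-width
HORIZON = 3

def visible(robot, target):
    return abs(robot[0] - target[0]) <= FOV and abs(robot[1] - target[1]) <= FOV

def leaf_utility(robot, target):
    # crude stand-in for d / v_max: remaining margin before the target escapes
    return max(FOV - max(abs(robot[0] - target[0]), abs(robot[1] - target[1])), 0)

def value(robot, target, depth):
    """Expected value of the best robot action, averaging over target moves."""
    if not visible(robot, target):
        return 0.0                       # terminal: target lost
    if depth == 0:
        return leaf_utility(robot, target)
    best = -1.0
    for dx, dy in ACTIONS.values():
        r2 = (robot[0] + dx, robot[1] + dy)
        # assume a uniform distribution over the target's 5 moves
        exp = sum(value(r2, (target[0] + tx, target[1] + ty), depth - 1)
                  for tx, ty in ACTIONS.values()) / len(ACTIONS)
        best = max(best, 1.0 + exp)      # reward 1 for still seeing the target now
    return best

def select_action(robot, target, horizon=HORIZON):
    """Sliding horizon: pick the best first action, then re-run after each step."""
    def q(a):
        dx, dy = ACTIONS[a]
        r2 = (robot[0] + dx, robot[1] + dy)
        return sum(value(r2, (target[0] + tx, target[1] + ty), horizon - 1)
                   for tx, ty in ACTIONS.values()) / len(ACTIONS)
    return max(ACTIONS, key=q)

print(select_action(robot=(0, 0), target=(2, 2)))
```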

Pure Visual Servoing

Computing and Using a Policy

Next Week
More algorithm details

Final Information
- Final presentations: 20 minutes + 10 minutes for questions
- Final reports due 5/3
  - Must include a technical report: Introduction, Background, Methods, Results, Conclusion
  - May include an auxiliary website with figures / examples / implementation details
  - I will not review code