Lecture 4: Command and Behavior Fusion
Gal A. Kaminka
Introduction to Robots and Multi-Robot Systems: Agents in Physical and Virtual Environments
Slide 2 (© Gal Kaminka): Previously, on Robots…
- Behavior selection/arbitration
- Activation-based selection: winner-take-all selection, behavior networks, argmax selection (by priority, utility, success likelihood, …)
- State-based selection: world models, preconditions and termination conditions
- Sequencing through FSAs (finite state automata)
Slide 3: This week, on Robots…
- Announcements: programming homework (not optional!); project expectations
- Subsumption: command override and information loss
- An alternative design: Rosenblatt-Payton command fusion
- Behavior fusion: context-dependent merging of behaviors
Slide 4: Brooks' Subsumption Architecture
- Multiple layers, divided by competence
- All layers get all sensor readings and issue commands
- Higher layers override commands issued by lower layers
- What happens when a command is overridden?
Slide 5: Information Loss in Selection
- A layer chooses the best command for its competence level
- Implication: no other command is possible
- But the layer implicitly knows about satisficing solutions
- Satisficing: loosely translated as "good enough"
- These get ignored if the command is overridden
Slide 6: Why is this a problem?
- Example: Avoid-obstacle wants to keep away from an obstacle on the left; headings in [20, 160] are possible; best: +90 degrees
- Seek-goal wants to head toward the goal ahead; headings in [-30, 30] are possible; best: 0 degrees
- There is no way to layer these so that overriding works
- But [20, 30] is good for both
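The example above can be made concrete with a small sketch (the slides contain no code, so the helper names here are hypothetical). Strict override keeps only the winning layer's best command, while intersecting the two acceptable ranges recovers the satisficing overlap [20, 30]:

```python
# Hypothetical helpers illustrating why override loses information.
# Avoid-obstacle accepts headings in [20, 160]; seek-goal accepts [-30, 30].

def override(winner_best, loser_best):
    """Subsumption-style override: only the higher layer's best survives."""
    return winner_best

def intersect(lo1, hi1, lo2, hi2):
    """Headings acceptable ("satisficing") for both behaviors, or None."""
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    return (lo, hi) if lo <= hi else None

# Override keeps only avoid's best (+90) and ignores seek entirely:
print(override(90, 0))               # 90
# Yet a range of headings is good for BOTH behaviors:
print(intersect(20, 160, -30, 30))   # (20, 30)
```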
Slide 7: There is a deeper problem
- When a higher-level behavior subsumes another, it must take the other's decision-making into account
- A higher behavior may have to contain lower behaviors, including their decision-making within itself
- Example: Goal-seek overrides avoid, but must still avoid while seeking the goal; it may need to reason about obstacles while seeking
Slide 8: Rosenblatt-Payton Command Fusion
Two principles:
- Every behavior weights ALL possible commands; weights are numbers in [-∞, +∞] ("no information loss")
- Any behavior can access the weights of another, not just override its output ("no information hidden")
Slide 9: Weighting outputs
- AVOID does not choose a specific heading; it provides preferences for all possible headings
[Diagram: AVOID, driven by sensor input, assigns weights over headings of -20, 20, 30, and 160 degrees]
Slide 10: Merging outputs
- Now combine AVOID and SEEK-GOAL by adding their weight vectors
[Diagram: AVOID and SEEK weights over headings of -20, 20, 30, and 160 degrees]
Slide 11: Merging outputs
- Now combine AVOID and SEEK-GOAL by adding their weight vectors
- Then choose the top one (e.g., winner-take-all)
[Diagram: fused AVOID and SEEK weights; the winning heading is +30 degrees]
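A minimal sketch (not the article's code) of the fusion step over a discretized heading space. The weight functions are illustrative assumptions shaped to match the lecture's example: AVOID prefers [20, 160], SEEK prefers headings near 0:

```python
# Rosenblatt-Payton-style command fusion: every behavior weights every
# candidate command; weights are summed; the argmax is selected.

HEADINGS = range(-180, 181, 10)  # candidate heading commands, in degrees

def avoid_weights(h):
    """Illustrative: prefer headings away from an obstacle on the left."""
    return 1.0 if 20 <= h <= 160 else -1.0

def seek_weights(h):
    """Illustrative: prefer headings near the goal direction (0 degrees)."""
    return 1.0 - abs(h) / 30.0 if -30 <= h <= 30 else -1.0

def fuse(behaviors):
    """Add every behavior's weight for every command, then take the argmax."""
    totals = {h: sum(b(h) for b in behaviors) for h in HEADINGS}
    return max(totals, key=totals.get)

# Neither behavior's individual best (90 or 0) wins; the fused winner
# lies in the satisficing overlap [20, 30]:
print(fuse([avoid_weights, seek_weights]))  # 20
```

Note the design point: because no command is discarded, the fused result is a compromise that neither override direction could produce.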
Slides 12-13: Advantages
- Easy to add new behaviors that modify heading; their output is also merged
- Considers all possibilities and finds useful compromises
- Negative weights are possible and useful: can forbid certain commands, in principle
- And… intermediate variables
Slide 14: Rosenblatt-Payton 2nd principle: no information hidden
- Not only should all of a behavior's choices be open; its internal state can also be important for other layers
- In Rosenblatt-Payton, a variable is a vector of weights
- We operate on, combine, and reason about weights, even as internal state variables
- Example: the trajectory-speed behavior (in the article) combines the outputs of internal "Safe-Path" and "Turn-choices" variables
Slide 15: A problem
- Two behaviors: if the goal is far, speed should be fast; if an obstacle is close, speed should be stop
- The robot may not slow down sufficiently, because the overall result compromises between stop and fast
- Options: tweak the weights (hack the numbers); add distance-to-goal into the obstacle-avoidance rules; re-define fast to be slower
Slide 16: Behavior Weighting
- Use the decision context to weight the behaviors
- Dynamic weighting: change weights based on context
- Have a meta-behavior with context rules:
  IF obstacle-close THEN increase weight of AVOID
  IF not obstacle-close THEN increase weight of SEEK
- Scale/weight based on context rules: the weights of AVOID and SEEK are multiplied by the behavior weight
Slide 17: Merging outputs: static
- Combine AVOID and SEEK-GOAL by adding, then choose the top one (e.g., winner-take-all)
[Diagram: fused weights over headings; the winning heading is +30 degrees]

Slide 18: When the obstacle is close
- Combine AVOID and SEEK-GOAL by adding, then choose the top one (e.g., winner-take-all)
[Diagram: with AVOID weighted up, the winning heading is +105 degrees]

Slide 19: When the obstacle is not close
- Combine AVOID and SEEK-GOAL by adding, then choose the top one (e.g., winner-take-all)
[Diagram: with SEEK weighted up, the winning heading is -30 degrees]
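The context-dependent weighting can be sketched as follows. The weight vectors and gain values are illustrative assumptions, chosen so the obstacle-close context turns hard away (+105) and the obstacle-far context heads to the goal (0):

```python
# Dynamic weighting: a meta-behavior scales each behavior's entire weight
# vector by a context-dependent gain before the vectors are summed.

def fuse_weighted(weight_vectors, gains):
    """Scale each weight vector by its gain, add them, take the argmax."""
    headings = weight_vectors[0].keys()
    totals = {h: sum(g * wv[h] for wv, g in zip(weight_vectors, gains))
              for h in headings}
    return max(totals, key=totals.get)

avoid = {0: -1.0, 30: 1.0, 105: 2.0}  # turn away from an obstacle on the left
seek  = {0: 2.0, 30: 1.0, 105: -1.0}  # head toward the goal at 0 degrees

# Meta-behavior rule: IF obstacle-close THEN boost AVOID, ELSE boost SEEK.
print(fuse_weighted([avoid, seek], gains=(3.0, 1.0)))  # obstacle close: 105
print(fuse_weighted([avoid, seek], gains=(1.0, 3.0)))  # obstacle far: 0
```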
Slide 20: Final thoughts…
- Context rules combine activation and fusion: a behavior is activated by priority, and its output is a vector (rather than a single value)
- When should we use this? What about meta-meta-behaviors?
- Compromises are a problem: they can cause selection of a non-satisficing solution; sometimes one must choose!
- Compromises must be package deals: otherwise, you get behavior X's speed with behavior Y's heading!
Slide 21: Questions?
Slide 22: Potential Fields
- Developed independently of Payton-Rosenblatt; inspired by physics
- The robot is a particle moving through forces of attraction and repulsion
- Basic idea: each behavior is a force, pushing the robot this way or that; the combination of forces determines the robot's selected action
Slide 23: Example: goal-directed obstacle avoidance
- Move toward the goal while avoiding obstacles
- We have seen this before: in Brooks' subsumption architecture (two layers) and in Payton-Rosenblatt command fusion
- Two behaviors: one pushes the robot toward the goal; one pushes the robot away from obstacles
Slide 24: SEEK Goal [field diagram]

Slide 25: Avoid Obstacle [field diagram]

Slide 26: In combination… [field diagram]
Slide 27: Run-time
- The robot calculates the forces currently acting on it
- Combines all the forces
- Moves along the resulting vector
Slide 28: But how to calculate these fields?
- Translate a position into a direction and magnitude
- Given the robot's position (x, y), generate the force (dx, dy) acting on that position
- Do this for all forces, then combine:
  Final_dx = sum of all forces' dx
  Final_dy = sum of all forces' dy
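The combination step above is just a vector sum; a one-function sketch (the helper name is hypothetical):

```python
def combine_forces(forces):
    """Sum the (dx, dy) contributions of all active potential fields."""
    final_dx = sum(dx for dx, _ in forces)
    final_dy = sum(dy for _, dy in forces)
    return final_dx, final_dy

# e.g., a goal pull of (1, 0) plus an obstacle push of (0.5, -0.5):
print(combine_forces([(1.0, 0.0), (0.5, -0.5)]))  # (1.5, -0.5)
```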
Slide 29: Example: SEEK
- Given: goal coordinates (Xg, Yg) and current position (x, y)
- Distance to goal: d = sqrt((Xg - x)^2 + (Yg - y)^2)
- Angle to goal: a = tan^-1((Yg - y) / (Xg - x))
- Then, for some saturation distance s:
  If d = 0: dx = dy = 0
  If d < s: dx = d cos(a), dy = d sin(a)
  If d >= s: dx = s cos(a), dy = s sin(a)
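The SEEK field above, as a runnable sketch. One small change from the slide's formula: `math.atan2` replaces tan^-1 of the ratio, so all four quadrants (and the Xg = x case) are handled correctly:

```python
import math

def seek_force(x, y, xg, yg, s=1.0):
    """Attractive force toward goal (xg, yg), magnitude capped at s."""
    d = math.hypot(xg - x, yg - y)    # distance to goal
    if d == 0:
        return 0.0, 0.0               # at the goal: no force
    a = math.atan2(yg - y, xg - x)    # angle to goal (quadrant-safe tan^-1)
    m = min(d, s)                     # ramp the pull down inside radius s
    return m * math.cos(a), m * math.sin(a)

# 3-4-5 triangle: a unit-length pull toward the goal at (3, 4):
print(seek_force(0, 0, 3, 4))  # approximately (0.6, 0.8)
```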
Slide 30: Other types of potential fields
- Potential fields are useful for more than avoiding obstacles
- Many different field types can be defined in advance and combined dynamically
- Each field is a behavior; all behaviors are always active
Slide 31: Uniform potential field
- dx = constant, dy = 0
- e.g., for follow-wall, or return-to-area
Slide 32: Perpendicular field
- dx = +constant or -constant, depending on the reference
- e.g., for avoid-fence
Slide 33: Problem: stuck!
- The robot can get stuck in a local minimum: all forces in a certain area push toward a sink, and once stuck, the robot cannot get out
- One solution: a random field, with distance d and angle a chosen randomly
- This gets the robot unstuck, but also makes it unstable
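The random field fits the same (dx, dy) interface as the other fields; a sketch with an assumed magnitude cap `s`:

```python
import math
import random

def random_force(s=0.5):
    """Noise field: random angle a and random magnitude d, up to cap s.
    Added to the summed field, it can jitter the robot out of a sink."""
    a = random.uniform(-math.pi, math.pi)   # random direction
    d = random.uniform(0.0, s)              # random magnitude
    return d * math.cos(a), d * math.sin(a)
```

The trade-off from the slide is visible here: the same noise that escapes a local minimum also perturbs motion everywhere else, hence the instability.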
Slide 34: A really difficult problem: [figure]
Slide 35: A surprising solution
- Balch and Arkin found a surprisingly simple solution: an avoid-the-past field
- Repulsion from places already visited
- This is a field that changes dynamically
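One way the avoid-the-past idea can be sketched is as a repulsive field over a coarse grid of visited cells. The class name, cell size, gain, and influence radius below are all illustrative assumptions, not Balch and Arkin's actual parameters:

```python
import math

class AvoidPast:
    """Dynamically changing field: repulsion from already-visited cells."""

    def __init__(self, cell=1.0, gain=0.2):
        self.visited = set()   # grid cells the robot has traversed
        self.cell = cell
        self.gain = gain

    def mark(self, x, y):
        """Record the robot's current position as visited."""
        self.visited.add((round(x / self.cell), round(y / self.cell)))

    def force(self, x, y):
        """Sum of small repulsions pushing away from nearby visited cells."""
        dx = dy = 0.0
        for cx, cy in self.visited:
            vx = x - cx * self.cell
            vy = y - cy * self.cell
            d = math.hypot(vx, vy)
            if 0 < d < 3 * self.cell:     # only nearby cells repel
                dx += self.gain * vx / d
                dy += self.gain * vy / d
        return dx, dy

past = AvoidPast()
past.mark(0.0, 0.0)
print(past.force(1.0, 0.0))  # pushed away from the visited origin: (0.2, 0.0)
```

Because visited cells accumulate as the robot moves, the field changes over time, which is exactly what lets it push the robot out of a sink it has already explored.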