
1 Adaptive Multi-Robot Team Reconfiguration using a Policy-Reuse Reinforcement Learning Approach
Ke Cheng (1), Raj Dasgupta (1) and Bikramjit Banerjee (2)
(1) Computer Science Department, University of Nebraska, Omaha
(2) Computer Science Department, University of Southern Mississippi
Autonomous Robots and Multi-robot Systems (ARMS) 2011 Workshop, May 2, 2011

2 Distributed Multi-robot Coverage
Enable a group of robots to cover an initially unknown environment:
– Unmanned search and rescue
– Robotic de-mining
– Exploring an extra-terrestrial surface (Mars, Moon)
– Inspecting an engineering structure, such as an airplane's turbine blade or a bridge, for anomalies (e.g., cracks)
– Robotic lawn-mowing and vacuum cleaning

3 Distributed Multi-robot Coverage
Goal: use a set of robots to perform complete coverage of an initially unknown environment in an efficient manner.
The region of the environment that passes under the swathe of a robot's coverage tool (an actuator such as a vacuum, or a sensor such as a camera or sonar) is considered covered.
Efficiency is measured in time and space:
– Time: reduce the time required to cover the environment
– Space: avoid repeated coverage of regions that have already been covered
There is a tradeoff in achieving both simultaneously.
Source: Manuel Mazo Jr. and Karl Henrik Johansson, "Robust area coverage using hybrid control," TELEC'04, Santiago de Cuba, Cuba, 2004.

4 Major Challenges
– Distributed: no shared memory or map of the environment that the robots can use to know which portion of the environment has been covered
– Each robot has limited storage and computation capabilities: it cannot store a map of the entire environment
– Other challenges: sensor and encoder noise, communication overhead, localizing the robots

5 Related Work: Multi-robot Coverage
Deterministic approaches (complete coverage provable):
– mSTC (Multi-robot Spanning Tree Coverage) [Agmon, Kaminka 2008]: the environment is modeled as a connected graph; each robot does a depth-first search within a sub-graph; the sub-graphs covered by the robots are made disjoint
– Multi-robot Boustrophedon [Rekleitis et al. 2009]: robots determine disjoint regions and cover each region using ladder search; 'holes' in regions are recorded and an auction protocol allocates robots to fill them
Emergent approaches (complete coverage emerges, but is not provable):
– Potential-field based [Batalin, Sukhatme 2002; Parker 2002]: robots exert a repelling force on each other when in the vicinity, which disperses them away from each other
– Ant-coverage, pheromone based [Koenig et al. 2001]: coverage is marked with pheromone, a centralized map records all robots' pheromones, and robots use LRTA* to choose the next cell to visit
– Frontier-based [Bruckstein et al. 1998, 2007]

6-9 Multi-robot Coverage: Individually Coordinated Robots using Swarming
Global objective: complete coverage of the environment.
(Diagram: each robot follows its own local coverage rule; local interactions occur between robots.)
How well do the results of the local interactions translate to achieving the global objective? This is evaluated empirically.
References:
1. K. Cheng and P. Dasgupta, "Dynamic Area Coverage using Faulty Multi-agent Swarms," Proc. IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2007), Fremont, CA, 2007, pp. 17-24.
2. P. Dasgupta and K. Cheng, "Distributed Coverage of Unknown Environments using Multi-robot Swarms with Memory and Communication Constraints," UNO CS Technical Report (cst-2009-1).

10-11 Multi-robot Coverage: Team-based Robots using Swarming
Global objective: complete coverage of the environment.
(Diagram: each robot team follows its own local coverage rule; a flocking technique maintains the team formation; local interactions occur between robot teams.)
How well do the results of the local interactions translate to achieving the global objective? This is evaluated empirically.
Relevant publications:
1. K. Cheng, P. Dasgupta and Yi Wang, "Distributed Area Coverage Using Robot Flocks," Nature and Biologically Inspired Computing (NaBIC'09), 2009.
2. P. Dasgupta, K. Cheng and L. Fan, "Flocking-based Distributed Terrain Coverage with Mobile Mini-robots," Swarm Intelligence Symposium, 2009.

12 Flocking-based Controller for Multi-robot Teams
Controller Layer (uses flocking): works with physical characteristics such as wheel speed, sensor readings, pose, etc.

13 Multi-robot Teams for Area Coverage
Theoretical analysis: forming teams gives a significant speed-up in coverage efficiency.
Simulation results: the speed-up is smaller than in the theoretical case, but there is still a speed-up compared to not forming teams.
Team formation is based on Reynolds' flocking model: leader-referenced, with follower robots assigned specific positions within the team.

14 Coverage with Multi-robot Teams
(Plots: 20 robots in different-sized teams, in the Square, Corridor and Office environments, over 2 hours.)

15 Dynamic Reconfiguration of Multi-robot Teams
Having teams of robots is efficient for coverage, but large teams that perform frequent re-formations are inefficient for coverage.
Can we make the teams change their configurations dynamically, based on their recent performance? If a team of robots is doing frequent re-formations (and getting poor coverage efficiency), split the team into smaller teams and see if coverage improves.

16 Layered Controller for Dynamically Reforming Multi-robot Teams
Coalition Game Layer (uses WVG): works with agent utilities, agent strategies, equilibrium points, etc.
Mediator: maps agent strategies to robot actions and sensor readings to agent utilities; maintains the data structures for this mapping.
Controller Layer (uses flocking): works with physical characteristics such as wheel speed, sensor readings, pose, etc.

17-18 Coalition Game-based Team Reconfiguration
Coalition games provide a theory for dividing a set of players into smaller subsets or teams.
We use a form of coalition game called a weighted voting game (WVG):
– N: set of players
– Each player i is assigned a weight w_i
– q: a threshold value called the quota
Solution concept: what is the minimum set of players whose weights taken together can reach q?
  minimize |S| subject to Σ_{i ∈ S} w_i >= q, where S ⊆ N
Such a set S is called a Minimum Winning Coalition (MWC).

19-21 Weighted Voting Game (WVG) for Multi-robot Team Reconfiguration
Set of players = robots in a team.
Weight of player i = coverage efficiency of robot i, determined as a weighted combination of useful coverage and repeated (bad) coverage over the last T time steps:
– w_i = 1 if robot i did only useful coverage in the last T time steps
– w_i = 1/T if robot i did only repeated coverage in the last T time steps
Quota: range = [0, Σ_{i ∈ N} w_i]; the appropriate value varies across different scenarios and team sizes.
– q = q_f × Σ_{i ∈ N} w_i, where q_f ∈ [0, 1] is the quota fraction

22 Robot Coverage as a WVG
Determining weights of players (robots): modeled as coverage capability.
The environment is considered as a 2-D grid.
Coverage map: the region covered by the robot in the last T time steps.
Coverage efficiency:
– Time: what fraction of the coverage map has been covered at least once? (call this α_i)
– Space: what fraction of the coverage map has been covered more than once? (call this β_i)
C_i = a × α_i - b × β_i + C_0, with a = 2, b = 1, C_0 = -0.04.
(The slide's two example coverage maps give C_i = 1.96 and C_i = 0.96; a sketch of the computation follows below.)
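The weight computation can be sketched in a few lines of Python. The snippet below is a minimal illustration, assuming the slide's constants a = 2, b = 1, C_0 = -0.04; the input fractions used to reproduce the slide's example values (1.96 and 0.96) are guesses, since the example coverage maps are not recoverable from the transcript.

```python
def coverage_weight(frac_once, frac_repeat, a=2.0, b=1.0, c0=-0.04):
    """Coverage efficiency C_i = a*alpha_i - b*beta_i + C_0 (slide 22).

    frac_once   (alpha_i): fraction of the coverage map covered at
                least once in the last T time steps (the 'time' metric).
    frac_repeat (beta_i):  fraction covered more than once
                (the 'space' metric, i.e., repeated coverage).
    """
    return a * frac_once - b * frac_repeat + c0


# Inputs below are assumed; they reproduce the slide's example values.
print(coverage_weight(1.0, 0.0))  # 1.96: the whole map covered exactly once
print(coverage_weight(0.5, 0.0))  # 0.96: half the map covered exactly once
```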

23-24 Example of WVG for Robot Team Reconfiguration
4 robots: N = {A, B, C, D}; w_A = 0.45, w_B = 0.25, w_C = w_D = 0.15; q_f = 0.5.
Here Σ_{i ∈ N} w_i = 1.0, so q = 0.5 × 1 = 0.5.
Find the MWC, i.e., the minimum set of players with Σ_{i ∈ S} w_i >= q.
MWC = {A, B} {A, C} {A, D} {A, B, C} {A, B, D} {A, C, D} {B, C, D} {A, B, C, D}
If we change q_f to 0.76, the MWC becomes {A, B, C} {A, B, D} {A, B, C, D}.
Changing the value of q_f (the quota) changes the solution (the MWCs).
Our prior work refines the MWCs further to select one best MWC (BMWC) depending on the poses of the robots forming the team:
P. Dasgupta and K. Cheng, "Robust Multi-robot Team Formations using Weighted Voting Games," 10th International Symposium on Distributed Autonomous Robotic Systems (DARS 2010), Lausanne, Switzerland, 2010.
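To make the example concrete, here is a small brute-force Python sketch that enumerates the coalitions reaching the quota for the slide's weights, for both values of q_f; the slide's MWC lists correspond to these sets. It is illustrative only and does not include the BMWC refinement from the DARS 2010 paper.

```python
from itertools import combinations

def winning_coalitions(weights, q_f):
    """All coalitions S with total weight >= q, where q = q_f * sum of weights."""
    q = q_f * sum(weights.values())
    players = sorted(weights)
    result = []
    for size in range(1, len(players) + 1):
        for coalition in combinations(players, size):
            if sum(weights[p] for p in coalition) >= q:
                result.append(coalition)
    return result

weights = {'A': 0.45, 'B': 0.25, 'C': 0.15, 'D': 0.15}

for q_f in (0.5, 0.76):
    wins = winning_coalitions(weights, q_f)
    print(f"q_f = {q_f}: {[''.join(c) for c in wins]}")
# q_f = 0.5  -> AB, AC, AD, ABC, ABD, ACD, BCD, ABCD (the slide's list)
# q_f = 0.76 -> ABC, ABD, ABCD
```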

25-26 Problems with a Fixed q_f
Example 1: the team split when it was unnecessary.
– 4 robots: N = {A, B, C, D}; w_A = w_B = w_C = w_D = 1; q_f = 0.5, so q = 0.5 × 4 = 2.
– MWC: any two players. But the team of 4 was giving only useful coverage (each robot's w_i = 1), so the split was unnecessary.
Example 2: the team did not split when it was necessary.
– First T time steps: 5 robots, N = {A, B, C, D, E}; w_A = w_B = w_C = w_D = w_E = 1; q_f = 0.9 (q = 0.9 × 5 = 4.5). MWC: all 5 robots stay together... good!
– Next T time steps: w_A = 0.9, w_B = 0.8, w_C = 0.7, w_D = w_E = 0.6; q_f = 0.9 (q = 0.9 × 3.6 = 3.24). MWC: all 5 robots stay together again... bad! They should have split.
Conclusion: depending on operating conditions (e.g., coverage efficiency in the team), dynamically adapt q_f.

27 Layered Controller for Dynamically Adapting q_f
Coalition Game Layer (uses WVG): works with agent utilities, agent strategies, equilibrium points, etc.
Mediator: maps agent strategies to robot actions and sensor readings to agent utilities; maintains the data structures for this mapping.
Controller Layer (uses flocking): works with physical characteristics such as wheel speed, sensor readings, pose, etc.
Learning Mechanism: used to learn the coalition game parameter q_f.
– ε-greedy learning when the perceived environment features do not change
– Policy Reuse learning mechanism when the perceived environment features change

28 Reinforcement Learning for Updating q_f
Goal: depending on the coverage efficiency in the team, dynamically adapt q_f.
The problem is formulated as a Markov Decision Process (MDP) <S, A, T, R>.
State space: recall that coverage efficiency lies in [1/T, 1]. Discretize the coverage efficiency into [0.1, 0.2, ..., 0.9, 1.0]; these discretized values are denoted S_1, S_2, S_3, ..., S_9, S_10.
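A minimal sketch of the state discretization; the rounding rule (bucketing upward to the nearest 0.1) is an assumption, since the slide only lists the discretized values 0.1, 0.2, ..., 1.0 as S_1, ..., S_10.

```python
import math

def coverage_state(coverage_efficiency):
    """Map a coverage efficiency in (0, 1] to a discrete state index 1..10.

    State S_k corresponds to the discretized value k/10. Rounding up to
    the nearest bucket is assumed; the slide does not give the rule.
    """
    return min(10, max(1, math.ceil(coverage_efficiency * 10)))

print(coverage_state(0.37))  # 4  -> S_4
print(coverage_state(1.00))  # 10 -> S_10
```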

29-31 Action Space of the MDP
q_f ∈ [0, 1]; discretize this space too:
– A_L: q_f = 0.9 (90% of the combined weights) - robots with very poor coverage efficiency are dropped, if at all
– A_M: q_f = 0.5 (50% of the combined weights) - robots with below-average coverage efficiency are likely to be dropped
– A_S: q_f = 0.2 (20% of the combined weights) - only the robots with the best coverage efficiency are likely to be retained
(Illustration: a five-robot team trying to stay together within communication range, but impeded by a long obstacle; probabilities represent the uncertainties associated with actions A_L and A_M.)
A sketch of the action-to-q_f mapping follows below.
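The actions simply select a quota fraction; a tiny Python sketch of that mapping, with illustrative names:

```python
# Each action fixes the quota fraction q_f used by the WVG (values from the slide).
Q_F_FOR_ACTION = {
    "A_L": 0.9,  # large quota: only very poorly performing robots dropped, if at all
    "A_M": 0.5,  # medium quota: below-average robots likely to be dropped
    "A_S": 0.2,  # small quota: only the best-performing robots likely retained
}

def quota(action, team_weights):
    """q = q_f * (sum of the team's weights), as defined on slides 19-21."""
    return Q_F_FOR_ACTION[action] * sum(team_weights)

# Example with the weights from the 'Problems with a Fixed q_f' slide:
print(round(quota("A_L", [0.9, 0.8, 0.7, 0.6, 0.6]), 2))  # 3.24
```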

32-35 Transition Function of the MDP
Domain: 0% of the perceived environment has obstacles
– A_L: improves coverage efficiency (reward) by 10%; T(S_i, A_L) = S_{i+1}
– A_M: reduces coverage efficiency (reward) by 40%; T(S_i, A_M) = S_{i-4}
– A_S: reduces coverage efficiency (reward) by 70%; T(S_i, A_S) = S_{i-7}
Domain: 20% of the perceived environment has obstacles
– A_L: reduces coverage efficiency (reward) by 10%; T(S_i, A_L) = S_{i-1}
– A_M: improves coverage efficiency (reward) by 20%; T(S_i, A_M) = S_{i+2}
– A_S: reduces coverage efficiency (reward) by 60%; T(S_i, A_S) = S_{i-6}
Domain: 40% of the perceived environment has obstacles
– A_L: reduces coverage efficiency (reward) by 40%; T(S_i, A_L) = S_{i-4}
– A_M: reduces coverage efficiency (reward) by 20%; T(S_i, A_M) = S_{i-2}
– A_S: improves coverage efficiency (reward) by 10%; T(S_i, A_S) = S_{i+1}
Summary: across different environments S and A are unchanged, while T changes; <S, A, T> is called a domain D.
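The table can be read as a lookup from (domain, action) to a shift of the state index. The Python sketch below encodes it that way; clipping the index to the range 1..10 is an assumption, since the slide does not say what happens at the state-space boundaries.

```python
# (domain, action) -> shift applied to the state index i, from the table above.
# Domains are labeled by the fraction of the perceived environment with obstacles.
STATE_SHIFT = {
    (0.0, "A_L"): +1, (0.0, "A_M"): -4, (0.0, "A_S"): -7,
    (0.2, "A_L"): -1, (0.2, "A_M"): +2, (0.2, "A_S"): -6,
    (0.4, "A_L"): -4, (0.4, "A_M"): -2, (0.4, "A_S"): +1,
}

def transition(domain, state_index, action):
    """T(S_i, a) = S_{i + shift}; the result is clipped to S_1..S_10 (assumed)."""
    shift = STATE_SHIFT[(domain, action)]
    return min(10, max(1, state_index + shift))

print(transition(0.2, 5, "A_M"))  # 7: A_M improves coverage by 20% in this domain
print(transition(0.0, 5, "A_S"))  # 1: the -7 shift is clipped at the lower boundary
```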

36 Reward Function of the MDP
R(S_i) is proportional to the actual coverage efficiency received in state S_i.
Summary: across different environments S and A are unchanged, while T changes; <S, A, T> is called a domain D. The reward also changes: different domains have different rewards. Taken together, a domain and its corresponding reward function define a task.
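A correspondingly small sketch of the reward and of a task as a (domain, reward) pair; the proportionality constant is a placeholder, since the slide's scaling symbol is not recoverable from the transcript.

```python
REWARD_SCALE = 1.0  # placeholder for the slide's (lost) scaling constant

def reward(actual_coverage_efficiency):
    """R(S_i): proportional to the coverage efficiency actually received
    while in state S_i (slide 36)."""
    return REWARD_SCALE * actual_coverage_efficiency

# A task pairs a domain (which fixes the transition function T) with its
# reward function; representing it as a tuple is an illustrative choice.
task_20pct_obstacles = (0.2, reward)
```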

37 Iterated Policy Selection Strategy
Used within each domain (the MDP is fixed):
– Follow the policy for the MDP with probability ε
– Explore (choose an action not recommended by the policy) with probability 1 - ε
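A small sketch of this within-domain selection rule. The ε schedule in the usage example (starting at ε_0 = 0 and increasing by 0.001 per step) follows the parameters listed on slide 40; the function and variable names are illustrative.

```python
import random

def select_action(policy, state, actions, epsilon):
    """Follow the current policy with probability epsilon; otherwise explore
    by choosing an action the policy does not recommend (slide 37)."""
    recommended = policy[state]
    if random.random() < epsilon:
        return recommended
    alternatives = [a for a in actions if a != recommended]
    return random.choice(alternatives) if alternatives else recommended

# Usage: epsilon starts at 0 and grows, so early steps explore heavily
# and later steps increasingly follow the learned policy.
policy = {i: "A_M" for i in range(1, 11)}   # a dummy policy over states S_1..S_10
epsilon = 0.0
for _ in range(100):
    action = select_action(policy, 5, ["A_L", "A_M", "A_S"], epsilon)
    epsilon = min(1.0, epsilon + 0.001)
```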

38 Policy Reuse Algorithm
If the domain has changed, which policy should be used? At certain intervals (called episodes):
– If the discounted reward from the current policy is low:
  – Store (current policy, current domain) in the policy library L along with the discounted reward
  – Probabilistically select a (policy, domain) pair from the policy library L that has the highest discounted reward (excluding the current domain)
– Else, continue to use the current policy.
Both the iterated policy selection and the policy reuse algorithm are run by a robot team's leader.
F. Fernandez and M. Veloso, "Probabilistic Policy Reuse in a Reinforcement Learning Agent," Proc. 5th Intl. Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2006.
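The episode-level decision described on this slide can be sketched roughly as follows. This paraphrases the slide rather than the exact PRQ-learning algorithm of Fernandez and Veloso; the reward threshold, the softmax-style selection, and all names are assumptions.

```python
import math
import random

class PolicyLibrary:
    """Stores (policy, domain, discounted_reward) entries (the library L)."""

    def __init__(self):
        self.entries = []

    def add(self, policy, domain, discounted_reward):
        self.entries.append((policy, domain, discounted_reward))

    def pick_for(self, current_domain, temperature=0.1):
        """Probabilistically pick a stored policy, biased toward entries with
        higher discounted reward, excluding the current domain."""
        candidates = [e for e in self.entries if e[1] != current_domain]
        if not candidates:
            return None
        weights = [math.exp(e[2] / temperature) for e in candidates]
        policy, _, _ = random.choices(candidates, weights=weights, k=1)[0]
        return policy


def end_of_episode(library, policy, domain, discounted_reward, low_reward=0.1):
    """Run by the team leader at the end of each episode: if the current policy
    is doing poorly, archive it and reuse a policy that did well in some other
    domain; otherwise keep the current policy. The 'low_reward' threshold is
    assumed; the slide only says the discounted reward is 'low'."""
    if discounted_reward < low_reward:
        library.add(policy, domain, discounted_reward)
        reused = library.pick_for(domain)
        return reused if reused is not None else policy
    return policy
```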

39 Experimental Results in Webots
Simulated models of the e-puck robot: wheel speed 2.8 cm/sec; wireless communication; IR sensors for obstacle avoidance; simulated on-board GPS.
Robot size = grid cell size = 7 cm X 7 cm.
Results are averaged over 10 runs; each run lasts 30 min to 2 hrs.
Test environments: a 2 m X 2 m arena with no obstacles, with 10% of the arena's area occupied by obstacles, and with 20% of the arena's area occupied by obstacles.

40 Average Reward per Episode
(Plots: average reward per episode with no obstacles, with 10% of the environment occupied by obstacles, and with 20% of the environment occupied by obstacles.)
Learning algorithm parameters:
– Iterated policy selection: learning rate α = 0.05; ε-greedy strategy with ε_0 = 0 and increment 0.001
– Policy reuse algorithm: number of time steps per episode H = 100; reward discount factor γ = 0.95
Observation: more obstacles allow more policy reuse, so the library is built faster and convergence happens faster.

41 Percentage of Environment Covered
(Plots: coverage over up to 2 hours for 20 robots divided into 5-robot teams with 20% of the environment occupied by obstacles, and coverage after 2 hours for different numbers of robots {5, 10, 15, 20} with 20% of the environment occupied by obstacles.)
Adapting q_f using our reinforcement learning algorithm improves the percentage of the environment covered by 4-10% with respect to a setting where q_f is fixed.

42 Video demo in Webots with learning

43 Conclusions, Ongoing and Future Work
Learning the quota fraction parameter q_f using reinforcement learning with policy reuse improves the coverage performance of robot teams by allowing them to reconfigure more efficiently.
Improving the learning algorithm:
– Learning across multiple teams
– Applying principles from transfer learning and the keep-away soccer domain
– Modeling partially observed information (environment features) in the existing algorithm
Implementation on physical robots.

44 Acknowledgements
We are grateful to the sponsors of our projects:
– COMRADES project, Office of Naval Research
– NASA Nebraska EPSCoR Mini-grant
For more information please visit our C-MANTIC lab's website: http://cmantic.unomaha.edu
THANK YOU

