1 Navigating Agents in Uncertain Environments Using Particle Swarm Optimization
Adham Atyabi
Supervisor: Dr. Somnuk Phon-Amnuaisuk Co-supervisor: Dr. Chin Kuan Ho

2 Navigation

3 Problem Statement This study addresses robot search problems in hostile scenarios. The idea is to use a team of cooperative agents in dynamic and uncertain environments in which the agents' tasks are time dependent. The agents' task is to navigate to the survivors' locations in the environment. The dynamic nature of the environment causes unpredictability, which makes the environment's map unreliable. An unreliable map makes predefined planning methods brittle. In such environments, conventional search techniques such as PSO suffer from a lack of search diversity, which leads to poor search results. In addition, these techniques often suffer from premature convergence, resulting in the stagnation of agents. Such stagnation degrades the overall performance of the team.

4 Objective To identify, design and evaluate strategies for implementing a new Particle Swarm Optimization (PSO) for robot/agent navigation in uncertain environments. To handle uncertainty at the perception level of the robots/agents through cooperation of autonomous agents. To provide better exploration and exploitation in searches. Even though various solutions have been suggested to handle the lack of diversity in PSO, our experiments illustrate that controlling the diversity does not guarantee high performance. Although controlling diversity provides a balance between the exploration and exploitation behaviours of agents, in such environments (dynamic, uncertain, time dependent) it is necessary to provide an intelligent balance between these two behaviours, so that each behaviour is used only when it is truly the best choice.

5 Literature review-1 Navigation in Robotics
CLASSICAL APPROACHES The classic methods developed to date are variations of a few general approaches: Roadmap (Retraction, Skeleton, or Highway approach), Cell Decomposition (CD), Potential Fields (PF), Mathematical Programming. HEURISTIC APPROACHES Artificial Neural Networks (ANN), Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO). Heuristic algorithms are not guaranteed to find a solution, but if they do, they are likely to do so much faster than classical methods. (Latombe, 1991; Keil and Sack, 1985; Masehian and Sedighzadeh, 2007; Pugh et al., 2007; Ramakrishnan and Zein-Sabatto, 2001; Hettiarachchi, 2006; Hu, 2007; Liu, 2006; Mohamad et al., 2006; McLurkin and Yamins, 2005; Ying-Tung et al., 2004)

6 Literature review-2 Difficulties in Conventional Navigating Techniques
The performance of navigation techniques is highly dependent on their initialization and on the reliability of their map. According to the literature, in real robotic domains a small difference in the starting location of the robots or goals may have a large effect on overall performance. Due to the dynamic, noisy and unpredictable nature of real-world robotic applications, it is quite difficult to implement a navigation technique based on a well-known predefined map. (Pugh and Zhang, 2005; Pugh and Martinoli, 2006, 2007; Gu et al., 2003)

7 Literature review-3 Robotic Swarm
The number of robots used in the literature ranges from 20 to 300 (Lee et al., 2005; Hettiarachchi, 2006; Werfel et al., 2005; Chang et al., 2005; Ahmadabadi et al., 2001; Mondada et al., 2004). Evaluation in robotic learning is costly, even more so than the processing of the learning algorithm itself (Pugh and Martinoli, 2006, 2007). On real robots, sensors and actuators may perform slightly differently due to variations in manufacturing. As a result, multiple robots of the same model may actually perceive and interact with their environment differently, creating a heterogeneous swarm (Pugh and Martinoli, 2006, 2007). Robots can use more knowledge (e.g. knowledge about the location of goals and their teammates) (Luke et al., 2005; Ahmadabadi et al., 2001; Yamaguchi et al., 1997; Martinson and Arkin, 2003). It is common to train robots individually (Ahmadabadi et al., 2001; Yamaguchi et al., 1997; Hayas et al., 1994).

8 Literature review-4 Particle Swarm Optimization
PSO is an evolutionary algorithm inspired by animal social behaviours (Kennedy, 1995; Ribeiro and Schlansker, 2005; Chang et al., 2004; Pugh and Martinoli, 2006; Sousa et al., 2003; Nomura, 2007). PSO has outperformed other evolutionary algorithms such as GA on some problems (Vesterstrom and Riget, 2002; Ratnaweera et al., 2004; Pasupuleti and Battiti, 2006). PSO is an optimization technique which models a set of potential problem solutions as a swarm of particles moving about in a virtual search space (Kennedy, 1995). The method was inspired by the movement of flocking birds and their interactions with their neighbours in the group (Kennedy, 1995). PSO achieves optimization using three primary principles: Evaluation, where a quantitative fitness can be determined for some particle location; Comparison, where the best performer out of multiple particles can be selected; and Imitation, where the qualities of better particles are mimicked by others.

9 Literature review-5 Particle Swarm Optimization
Every particle in the population begins with a randomized position X(i,j) and randomized velocity V(i,j) in the n-dimensional search space, where i represents the particle index and j represents the dimension in the search space. Each particle remembers the position at which it achieved its highest performance (p). Each particle is also a member of some neighbourhood of particles, and remembers which particle achieved the best overall position in that neighbourhood (g).
Vij(t) = last velocity + cognitive component + social component
Vij(t) = w*Vij(t-1) + C1*R1*(pij - Xij(t-1)) + C2*R2*(gj - Xij(t-1))
Xij(t) = Xij(t-1) + Vij(t)
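The velocity and position updates above can be sketched in Python. This is a minimal illustrative implementation for a single particle; the parameter defaults are generic placeholders, not the settings used in the thesis.

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
    """One PSO update for a single particle in an n-dimensional search space.

    x, v      : current position X(t-1) and velocity V(t-1), as lists of floats
    p_best    : best position this particle has visited (p)
    g_best    : best position found in the particle's neighbourhood (g)
    w, c1, c2 : inertia weight and cognitive/social acceleration coefficients
    """
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = rng.random(), rng.random()          # R1, R2 ~ U(0, 1)
        vj = (w * v[j]                               # last velocity (inertia)
              + c1 * r1 * (p_best[j] - x[j])         # cognitive component
              + c2 * r2 * (g_best[j] - x[j]))        # social component
        new_v.append(vj)
        new_x.append(x[j] + vj)                      # X(t) = X(t-1) + V(t)
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal and neighbourhood best, the random terms vanish and only the inertia term w*V remains, so the inertia weight controls how much the particle keeps exploring.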

10 Methodology Particle Swarm Optimization with Area Extension
To improve velocity. To handle direction and fitness criteria. To handle cooperation. To handle diversity of the search. To handle the lack of reliable perception. (Pugh and Martinoli, 2006; Bogatyreva and Shillerov, 2005)

11 AEPSO / CAEPSO A new velocity heuristic, which solves premature convergence. A credit assignment heuristic, which solves the cul-de-sac problem. An environment reduction heuristic and different communication-range conditions, which provide dynamic neighbourhoods and sub-swarms. A help request signal, which provides cooperation between different sub-swarms. A boundary condition heuristic, which addresses the lack of diversity in basic PSO. A leave force, which provides a high level of noise resistance. A speculation mechanism, which provides a high level of noise resistance.

12 New Velocity Adjustment Equation

13 Environment Reduction Heuristic
The idea is based on dividing the environment into fixed virtual sub-areas with various credits. An area's credit reflects the proportion of goals and obstacles positioned in that area. Particles know the credits of the first and second layers of areas around their current neighbourhood.
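A sketch of the reduction, assuming a rectangular environment split into an nx-by-ny grid. The slides do not spell out the exact credit formula, so the proportional form below (share of all goals in the area minus share of all obstacles in it) is an illustrative assumption.

```python
def area_of(point, width, height, nx, ny):
    """Grid index (ax, ay) of the fixed virtual area containing a point."""
    x, y = point
    return (min(int(x / (width / nx)), nx - 1),
            min(int(y / (height / ny)), ny - 1))

def area_credits(goals, obstacles, width, height, nx, ny):
    """Credit per area: share of goals located in it minus share of obstacles
    located in it (assumed formula; the slides only say the credit reflects
    the proportion of goals and obstacles positioned in the area)."""
    credits = {(i, j): 0.0 for i in range(nx) for j in range(ny)}
    for g in goals:
        credits[area_of(g, width, height, nx, ny)] += 1.0 / max(len(goals), 1)
    for o in obstacles:
        credits[area_of(o, width, height, nx, ny)] -= 1.0 / max(len(obstacles), 1)
    return credits
```

A particle would then read the credits of the first and second layers of areas around its own grid cell when biasing its movement.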

14 Communication Methodology and Help Request Signal Heuristics
Robots can only communicate with those within their communication range. Various communication ranges were used (500, 250, 125, 5 pixels). This heuristic has a major effect on the sub-swarm size. The help request signal can provide a chain of connections.
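The "chain of connections" can be read as connected components of the communication graph: two robots belong to the same sub-swarm if a path of pairwise in-range links joins them. The sketch below is an interpretation under that assumption, not code from the thesis.

```python
def in_range(a, b, comm_range):
    """True when two robots (2-D positions, in pixels) can communicate directly."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= comm_range

def sub_swarms(positions, comm_range):
    """Group robots into sub-swarms: connected components of the graph whose
    edges join robots within comm_range of each other. A help request signal
    can travel along any chain of such links."""
    seen, groups = set(), []
    for start in range(len(positions)):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                       # depth-first flood over in-range links
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack.extend(j for j in range(len(positions))
                         if j not in comp and in_range(positions[i], positions[j], comm_range))
        seen |= comp
        groups.append(sorted(comp))
    return groups
```

With robots at x = 0, 100 and 300 pixels, a 125-pixel range yields two sub-swarms ([0, 1] and [2]), while a 250-pixel range chains all three into one, which is why the communication range has a major effect on sub-swarm size.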

15 Credit Assignment and Boundary Condition Heuristics
Reward and punishment. Suspend factor. In AEPSO, robots are suspended each time they cross boundary lines. Under this condition they can escape from areas in which they are stuck, which is as useful as reinitializing the robots' states in the environment.
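The suspend behaviour can be sketched as follows. Clamping the robot back onto the boundary and the 5-iteration suspend duration are illustrative assumptions; the slides specify neither.

```python
def boundary_condition(pos, vel, bounds, suspend_steps=5):
    """Suspend a robot that crosses a boundary line.

    Returns (position, velocity, remaining suspend iterations). A suspended
    robot is clamped back inside the environment with zero velocity, which
    acts much like reinitializing its state and lets it escape a stuck area.
    """
    xmin, ymin, xmax, ymax = bounds
    x, y = pos
    if xmin <= x <= xmax and ymin <= y <= ymax:
        return (x, y), vel, 0                      # inside: no punishment
    x = min(max(x, xmin), xmax)                    # clamp back onto the boundary
    y = min(max(y, ymin), ymax)
    return (x, y), (0.0, 0.0), suspend_steps       # suspended for a few iterations
```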

16 Uncertainty (Random Noise and Illusion Effect)
The illusion idea is inspired by real-world perception errors and mistakes, which can easily be imagined as corrupted data caused by communication failures (e.g. satellite data) or by weaknesses in the sensing elements (sensors). The illusion effect forced over 50% noise into the environment.

17 Cooperative Learning It is common to conduct experiments in two phases (Ahmadabadi et al., 2001): Training and Testing. In the training phase, the chosen training method is important (individual training or team-based training). In the testing phase, there are two different options: use the same initialization as in training, or use a different initialization.

18 Speculation Mechanism and Leave Force Heuristics
The speculation mechanism is based on using an extra memory in the robots called a Mask. Masks can take values from: the illusion effect, the robot's self-observation, self-speculation, neighbours' observations, and neighbours' speculation. Leave Force is an extra punishment which forces robots to decrease their current area's credit by 10% after a certain number of iterations.

19 Employed Constraints Static Scenario. The scenario is used to examine:
The feasibility of basic PSO as the movement controller of a swarm of robots in survivor-rescue missions. The effects of various parameter adjustments, swarm size, and population density on basic PSO. To compare the results of basic PSO, random search, linear search and AEPSO. Dynamic scenario. The scenario is used to examine the feasibility of AEPSO in dynamic environments. Time-dependent scenario. The scenario is used to examine the impact of: Uncertainty (random noise). Time dependency. Cooperative learning scenario: The scenario is used to examine the effects of: Homogeneity/heterogeneity. Varying past-knowledge quality and initialization setup. Uncertainty.

20 Simulations Simulation 1 is used to examine the feasibility of basic PSO in a static environment. Examined elements: i) parameter adjustment, ii) swarm size, iii) population density. Simulation 2 is used to examine the feasibility of AEPSO in static and dynamic environments using the population density scenario. Simulation 3 is used to examine the feasibility of AEPSO in static and dynamic environments under constraints such as population density, uncertainty (random noise) and time dependency. Simulation 4 is used to examine the feasibility of AEPSO and CAEPSO in a static environment under constraints such as uncertainty (illusion effect), homogeneity/heterogeneity, varying past-knowledge quality, varying initialization setups, and time dependency.

21 Parameter Adjustments in Simulations

22 PSO's Variations

PSO Type     Inertia Weight            Acceleration Coefficients
AEPSO        LDIW(a): w1=0.2, w2=1     FAC(b): C1=0.5, C2=2.5
Basic PSO1   LDIW: w1=0.2, w2=         FAC: C1=0.5, C2=2.5
Basic PSO2   FIW(c): w=                FAC: C1=0.5, C2=2.5
Basic PSO3   RANDIW(d): w in [0, 1]    FAC: C1=0.5, C2=2.5

a: Linearly Decreasing Inertia Weight  b: Fixed Acceleration Coefficients  c: Fixed Inertia Weight  d: Random Inertia Weight
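The inertia-weight schedules in the table can be sketched directly: LDIW interpolates the weight linearly from w2 down to w1 over the run, and RANDIW draws a fresh weight uniformly from [0, 1] each step. The total iteration count t_max is a free parameter here, not a value given on the slide.

```python
import random

def ldiw(t, t_max, w1=0.2, w2=1.0):
    """Linearly Decreasing Inertia Weight: from w2 at t=0 down to w1 at
    t=t_max (defaults match the AEPSO row of the table: w1=0.2, w2=1)."""
    return w2 - (w2 - w1) * t / t_max

def randiw(rng=random.Random(0)):
    """Random Inertia Weight: w drawn uniformly from [0, 1] every iteration."""
    return rng.random()
```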

23 Simulations’ Results In this study, results are presented based on:
How fast teams were able to accomplish the missions. How many survivors were rescued by the agents. How many survivors were eliminated during the missions. Trajectory traces of the robots.

24 Simulation 1:Effects of Parameter Adjustment on Basic PSO
Comparison of PSO's results in static environment.

25 Simulation 1:Effects of Swarm-Size on Basic PSO
Effect of swarm-size on basic PSO in static environment.

26 Simulation 1:Effects of Population Density on Basic PSO
Effect of population density on various basic PSOs in static environment.

27 Basic PSO vs. AEPSO

28 Simulation 2:AEPSO vs. PSO
(b) Comparison of AEPSO's performance in static and dynamic environments.

29 Simulation 3:AEPSO vs. Time dependency and Random Noise
AEPSO's effectiveness in accomplishing missions in various environments.

30 Achievements-1 AEPSO performed a better search compared with variations of basic PSO, random search and linear search. AEPSO performed well in dynamic environments, and its results remained reliable in noisy environments.


32 Achievements-2 CAEPSO achieved reliable results with homogeneous robots due to its ability to reduce the effect of illusion in the environment. Results show that CAEPSO achieved 99% performance with the same initialization and 97% performance with a new initialization (homogeneous scenario). More accurate past knowledge helped agents controlled by CAEPSO to accomplish their missions faster. CAEPSO achieved 95% performance with heterogeneous robots. CAEPSO was able to reduce the illusion effect in the environment and improve its movements.

33 Advantages of AEPSO/CAEPSO
AEPSO showed better movements compared with basic PSO. AEPSO achieved reliable results with only 5 robots, which is a big advantage compared with the surveyed studies. AEPSO/CAEPSO proved their efficiency in complex scenarios based on navigation in hostile situations. CAEPSO achieved reliable results in the homogeneous scenario under both new and same initialization constraints. CAEPSO achieved reliable results with heterogeneous robots.

34 Contributions AEPSO performed a better local search compared with other techniques (basic PSO, random search, linear search). AEPSO and CAEPSO are robust to noise and time dependency. Cooperation between agents allowed CAEPSO to perform well.

35 Conclusion and Future work
In this study, we introduced AEPSO as a new modified version of basic PSO and investigated its effectiveness on static, dynamic, time-dependent and uncertain problem domains. It should be mentioned that the small number of particles (only 5 robots) gave AEPSO a great advantage, since it reduces costs. Robots were able to solve highly complex problems using only a poor level of knowledge (training knowledge) together with a high level of cooperation and experience sharing. It would be useful to compare CAEPSO's results with a behaviour-based version of Q-learning in a cooperative learning scenario with heterogeneous robots. It would also be helpful to examine AEPSO's and CAEPSO's performance on a team of real robots in the real world.

36 Publications Atyabi, A., Phon-Amnuaisuk, S., Ho, C. K., (2008a), Effectiveness of a cooperative learning version of AEPSO in homogeneous and heterogeneous multi robot learning scenario. The 2008 World Congress on Computational Intelligence (WCCI 2008), pp Atyabi, A., Phon-Amnuaisuk, S., (2007), Particle swarm optimization with area extension (AEPSO). IEEE Congress of Evolutionary Computation CEC'07, Singapore, pp Atyabi, A., Phon-Amnuaisuk, S., Ho, C. K., (2007a), Effects of communication range, noise and help request signal on particle swarm optimization with area extension (AEPSO). The 2007 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, ACM, pp Atyabi, A., Phon-Amnuaisuk, S., Ho, C. K., (2007b), Particle Swarm Optimizations: A Critical Review. IKT07, Third Conference of Information and Knowledge Technology, Ferdowsi University, Iran. Atyabi, A., Phon-Amnuaisuk, S., Ho, C. K., (2008b), Robotic Navigation with PSO. Applied Soft Computing Journal, Elsevier, submitted. Atyabi, A., Phon-Amnuaisuk, S., Ho, C. K., (2008c), Applying area extension PSO in robotic swarm. Journal of Intelligent and Robotic Systems, Springer, submitted.

37 References Agassounon, W., Martinoli, A., and Easton, K. (2004). Macroscopic modeling of aggregation experiments using embodied agents in teams of constant and time-varying sizes. Autonomous Robots. Hingham, MA, USA, September 2004, Vol. 17, pp. 163 – 192. Ahmadabadi, M. N., Asadpour, M., and Nakano, E. (2001). Cooperative q-learning: the knowledge sharing issue. Advanced Robotics, Vol. 15, no. 8, pp Angeline, P. J. (1998). Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. Lecture Notes In Computer Science, Proceedings of the 7th International Conference on Evolutionary Programming VII, Springer-Verlag, Vol. 1447, pp Atyabi, A. and Phon-Amnuaisuk, S. (2007). Particle swarm optimization with area extension (AEPSO). IEEE Congress of Evolutionary Computation CEC 2007, Stamford, Singapore, pp Atyabi, A., Phon-Amnuaisuk, S., and Ho, C. K. (2007). Effects of communication range, noise and help request signal on particle swarm optimization with area extension (AEPSO). Proceedings of the 2007 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IEEE Computer Society  Washington, DC, USA, pp Atyabi, A., Phon-Amnuaisuk, S., and Ho, C. K. (2008b). Effectiveness of a cooperative learning version of AEPSO in homogeneous and heterogeneous multi robot learning scenario IEEE Congress on Evolutionary Computation (CEC 2008), Hong Kong, pp Beslon, G., Biennier, F., and Hirsbrunner, B. (1998). Multi-robot path-planning based on implicit cooperation in a robotic swarm. Proceedings of the second international conference on Autonomous agents, Minneapolis, Minnesota, United States, pp Bogatyreva, O. and Shillerov, A. (2005). Robot swarms in an uncertain world: controllable adaptability. International Journal of Advanced Robotic Systems. Vol.2, no3, pp Brits, R., Engelbrecht, A. P., and Bergh, F. V. D. (2002). A niching particle swarm optimizer. 
4th Asia-Pacific Conference on Simulated Evolution and Learning 2002 (SEAL 2002). pp Brits, R., Engelbrecht, A. P., and Bergh, F. V. D. (2003). Scalability of niche PSO. Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS '03. pp

38 References (continued)
Chang, B. C. H., Ratnaweera, A., Halgamuge, S. K., and Watson, H. C. (2004). Particle swarm optimization for protein motif discovery. Journal of Genetic Programming and Evolvable Machines, pp Chang, K., Hwang, J., Lee, E., and Kazadi, S. T. (2005). The application of swarm engineering technique to robust multi chain robot systems. IEEE Conference on Systems, Man, and Cybernetics, USA, Vol. 2, pp Cheng, G. and Zelinsky, A. (1995). A physically grounded search in a behaviour based robot, Proc. Eighth Australian Joint Conference on Artificial Intelligence, pp Clerc, M. and Kennedy, J. (2002). The particle swarm - explosion, stability, and convergence in multidimensional complex space. IEEE Transaction on Evolutionary, Vol. 6, no. 1, pp Dowling, J. and Cahill, V. (2004). Self managed decentralized systems using k-components and collaborative reinforcement learning. Proc of 1st ACM SIGSOFT workshop on Self-managed systems, Newport Beach, California, pp Faverjon, B. and Tournassoud, P. (1987). A local based approach for path planning of manipulators with a high number of degrees of freedom. Proceedings of 1987 IEEE International Conference on Robotics and Automation, Vol. 4, pp Fourie, P. C. and Groenwold, A. A. (2002). The particle swarm optimization algorithm in size and shape optimization. Structural and Multidisciplinary Optimization, Vol. 23, no. 4, pp Fujii, T., Arai, Y., Asama, H., and Endo, I. (1998). Multilayered reinforcement learning for complicated collision avoidance problems. In Proceeding of the IEEE International Conference on Robotics and Automation. Vol. 3, pp Grosan, C., Abraham, A., and Chis, M. (2006). Swarm intelligence in data mining. Springer, studies in computational intelligence (SCI), Vol. 34, pp Gu, D., Hu, H., Reynolds, J., and Tsang, E. (2003). Ga-based learning in behaviour based robotics. Proc IEEE International Symposiom on Computational Intelligence in Robotic and Autonomous, pp Hettiarachchi, S. (2007). 
Distributed evolution for swarm robotics, Doctoral Thesis, University of Wyoming, USA. Hsiao, Y.-T., Chuang, C.-L., and Chien, C.-C. (2004). Ant colony optimization for best path planning. IEEE International Symposium on Communications and Information Technology ISCIT 2004, Vol. 1, pp

39 References (continued)
Hu, C., Wu, X., Liang, Q., and Wang, Y. (2007). Autonomous robot path planning based on swarm intelligence and stream functions. Springer, ICES2007, pp Kazadi, S., Abdul-Khaliq, A., and Goodman, R. (2002). On the convergence of puck clustering systems. Robotics and Autonomous Systems, Vol. 38, no. 2, pp Keil, M. and Sack, J.-R. (1985). Minimum decomposition of polygonal objects. Springer, Computational Geometry, North-Holland, Amsterdam, pp Kennedy, J. and Eberhart, R. C. (1995). Particle swarm optimization. IEEE Press Proceedings of the 1995 IEEE International Conference on Neural Networks, pp Kennedy, J. and Mendes, R. (2002). Population structure and particle swarm performance. Proceedings of the 2002 Congress on Evolutionary Computation (CEC '02), pp Kennedy, J. and Spears, W. M. (1998). Matching algorithms to problems: An experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator. Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, pp Latombe, J.-C. (1991). Robot motion planning. Kluwer International Series in Engineering and Computer Science, London. Lerman, K., Galstyan, A., and Martinoli, A. (2001). A macroscopic analytical model of collaboration in distributed robotic systems. Artificial Life, Vol. 7, pp Li, L., Martinoli, A., and Abu-Mostafa, Y. S. (2004). Learning and measuring specialization in collaborative swarm systems. International Society for Adaptive Behavior, Vol. 12, no. 3-4, pp. 199-212. Li, Q., Tong, X., Xie, S., and Zhang, Y. (2006). Optimum path planning for mobile robots based on a hybrid genetic algorithm. Proceedings of the Sixth International Conference on Hybrid Intelligent Systems HIS'06, Washington, DC, USA, pp. 53. Li, Y. and Chen, X. (2005). Mobile robot navigation using particle swarm optimization and adaptive NN. Springer, ICNC 2005, Vol. 3612, pp Liu, S., Mao, L., and Yu, J. (2006).
Path planning based on ant colony algorithm and distributed local navigation for multi-robot systems. Proceedings of the 2006 IEEE International Conference on Mechatronics and Automation, pp Luke, S., Sullivan, K., Balan, G. C., and Panait, L. (2005). Tunably decentralized algorithms for cooperative target observation. Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2005), Netherlands, pp Martinoli, A., Easton, K., and Agassounon, W. (2007). Modeling swarm robotic systems: A case study in collaborative distributed manipulation, Int. Journal of Robotics Research, Vol. 23, no. 4, pp.  Martinson, E. and Arkin, R. C. (2004). Learning to role-switch in multi-robot systems. Proceedings IEEE International Conference on Robotics and Automation, ICRA '03. Vol. 2, pp

40 References (continued)
Masehian, E. and Sedighizadeh, D. (2007). Classic and heuristic approaches in robot motion planning- a chronological review. Proc of World Academy of Science Engineering and Technology (PWASET 2007), Vol. 23, pp Mei, H., Tian, Y., and Zu, L. (2006). A hybrid ant colony optimization algorithm for path planning of robot in dynamic environment. International Journal of Information Technology, Vol. 12, no. 3, pp Mohamad, M. M., Taylor, N. K., and Dunnigan, M. W. (2006). Articulated robot motion planning using ant colony. 3rd International IEEE Conference on Intelligent Systems, pp Mondada, F., Pettinaro, G. C., Guignard, A., Kwee, I. W., Floreano, D., Deneubourg, J., Nol, S., Gambardella, L. M., and Dorigo, M. (2004). Swarm-bot: a new distributed robotic concept. Autonomous Robots, special Issue on Swarm Robotics, Vol. 17, no. 2-3, pp Nomura, Y. (2007). An integrated fuzzy control system for structural vibration. Computer-Aided Civil and Infrastructure Engineering 22, Vol. 22, no. 4, pp Park, K. H., Kim, Y. J., and Kim, J. H. (2001). Modular q-learning based multi-agent cooperation for robot soccer. Robotics and Autonomous Systems, Vol. 35, pp Pasupuleti, S. and Battiti, R. (2006). The gregarious particle swarm optimizer (G-PSO). Proceedings of the 8th annual conference on Genetic and evolutionary computation, Seattle, Washington, USA, pp. 67 – 74.   Peer, E. S., Bergh, F. V. D., and Engelbrecht, A. P. (2003). Using neighbourhood with the guaranteed convergence PSO. Proceedings of the 2003 IEEE Swarm Intelligence Symposium SIS '03, pp Peram, T., Veeramachaneni, K., and Mohan, C. K. (2003). Fitness distance ratio based particle swarm optimization (FDR-PSO). Swarm Intelligence Symposium, SIS '03, pp Pugh, J. and Martinoli, A. (2006). Multi-robot learning with particle swarm optimization. International Conference on Autonomous Agents and Multi Agent Systems AAMAS'06, Hakodate, Japan, pp Pugh, J. and Martinoli, A. (2007a). 
Inspiring and modeling multi-robot search with particle swarm optimization. Proceeding of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp Pugh, J. and Martinoli, A. (2007b). Parallel learning in heterogeneous multi-robot swarms. IEEE Congress on Evolutionary Computation CEC'07, Singapore, pp Pugh, J. and Zhang, Y. (2005). Particle swarm optimization for unsupervised robotic learning. Proceedings of 2005 IEEE in Swarm Intelligence Symposium, SIS’05, pp Qin, Y.-Q., Sun, D.-B., Li, N., and Cen, Y.-G. (2004a). Path planing for mobile robot using the particle swarm optimization with mutation operator. Proceeding of the Tird International Conference on Machine learning and Cybernetics, Shanghai, pp Qin, Y.-Q., Sun, D.-B., Li, N., and Cen, Y.-G. (2004b). Path planning for mobile robot using the particle swarm optimization with mutation operator. Proceedings of the Third International Conference on Machine Laming and Cybernetics, Shanghai, Vol. 4, pp

41 References (continued)
Ramakrishnan, R. and Zein-Sabatto, S. (2001). Multiple path planning for a group of mobile robot in a 2-d environment using genetic algorithms. In Proc of the IEEE Int Conf on SoutheastCon'01, Vol. 4, pp Ratnaweera, A., Halgamuge, S. K., and Watson, H. C. (2004). Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on evolutionary computation, Vol. 8, no. 3, pp Ribeiro, P. F. and Schlansker, W. K. (2005). A particle swarm optimized fuzzy neural network for voice controlled robot systems. IEEE Transactions on Industrial Electronics, pp Riget, J. and Vesterstrom, J. (2002). Controlling diversity in particle swarm optimization. Congress on Evolutionary Computation (CEC),2002 IEEE World Congress on Computational Intelligence, Hawaiian, Honolulu,. Rodri'guez, A. and Reggia, J. A. (2004). Extending self-organizing particle systems to problem solving. MIT Press Artificial Life, Vol. 10(4), pp Roy, N., Burgard, W., Fox, D., and Thrun, S. (1999). Coastal navigation-robot motion with uncertainty. IEEE/RSJ International Conference on Robotics and Automation. Sousa, T., Neves, A., and Silva, A. (2003). Swarm optimization as a new tool for data mining. Proceeding of the international parallel and distributed processing symposium (IPDPS'03), Los Alamitos, CA, USA, pp Sousa, T., Neves, A., and Silva, A. (2004). Particle swarm based data mining algorithms for classification tasks. IEEE Parallel and nature-inspired computational paradigms and applications 30, Vol. 30, pp Stacey, A., Jancic, M., and Grundy, I. (2003). Particle swarm optimization with mutation. IEEE Evolutionary Computation, CEC '03, pp Suganthan, P. N. (1999). Particle swarm optimiser with neighbourhood operator. Proceedings of the 1999 Congress on Evolutionary Computation CEC 99, Vol. 3, pp Tangamchit, P., Dolan, J. M., and Khosla, P. K. (2003a). Crucial factors affecting cooperative multi-robot learning. 
Proceeding of the 2003 IEEE/RSJ Intl conference on intelligent Robots and Systems, Las Vegas, Nevada, Vol. 2, pp

42 References (continued)
Vesterstrom, J. and Riget, J. (2002). Particle swarms extensions for improved local, multi-modal, and dynamic search in numerical optimization. Master's thesis, Dept Computer Science, Univ Aarhus, Aarhus C, Denmark. Werfel, J., Bar-Yam, Y., and Nagpal, R. (2005). Building patterned structures with robot swarms. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI'05), pp Wilson, L. A., Moore, M. D., Picarazzi, J. P., and Miquel, S. D. S. (2004). Parallel genetic algorithm for search and constrained multi-objective optimization. International Parallel and Distributed Processing Symposium, Vol. 7, pp Yamaguchi, T., Tanaka, Y., and Yachida, M. (1997). Speed up reinforcement learning between two agents with adaptive mimetism. IEEE/RSJ Intelligent Robots and Systems, Vol. 2, pp Yang, C. and Simon, D. (2005). A new particle swarm optimization technique. 18th International Conference on Systems Engineering (ICSEng 2005), pp Yang, E. and Gu, D. (2004). Multi-agent reinforcement learning for multi-robot systems: A survey. Technical report, University of Essex Technical Report CSM-404, Department of Computer Science. Zein-Sabatto, S. and Ramakrishnan, R. (2002). Multiple path planning for a group of mobile robots in a 3d environment using genetic algorithms. Proc IEEE Southeast, Columbia, SC, USA, pp Zhang, W.-J. and Xie, X.-F. (2003). DEPSO: Hybrid particle swarm with differential evolution operator. IEEE International Conference on Systems, Man and Cybernetics (SMCC), Washington DC, USA, pp Zhao, Y. and Zheng, J. (2004). Particle swarm optimization algorithm in signal detection and blind extraction. Proceedings of IEEE 7th International Symposium on Parallel Architectures, Algorithms and Networks, pp

43 Simulation 1- Basic PSO

44 Simulation-2 AEPSO Vs. Basic PSO, Random Search, Linear Search

45 Simulation-3 AEPSO Vs. Uncertainty, Dynamism, Time Dependency

46 Simulation 2: AEPSO vs. Linear Search and Random Search
(b) Comparison of AEPSO's results in static and dynamic environments, linear search, and random search.

47 Simulation 3:AEPSO vs. Time dependency and Random Noise
(b) Uncertain static environment (a) Uncertain dynamic environment Experimental results of AEPSO with different communication ranges in uncertain environments.

48 Simulation 3:AEPSO vs. Time dependency and Random Noise
Static environment Effect of different deserting policies on accomplishing the missions

49 Homogeneous scenario(A1,A2,A3,A4)
In this simulation, the experiments focus on initialization dependency and past-knowledge efficiency in CAEPSO. 1: Initialization dependency: A1: agents experience the same initialization in the testing phase as in their training. A2: agents experience a different (new) initialization. 2: Impact of past-knowledge quality: A3: agents use four different tables as past knowledge. A4: each agent's past knowledge in the testing phase refers to the knowledge gathered from the same environment during the training phase.

50 Simulation 4:Experiment A1 and A2: Effectiveness of Initialization
(a) New Initialization - A2 Experiment (b) Same Initialization - A1 Experiment Comparison of AEPSO’s and CAEPSO's results in terms of rescued survivors with new and same initializations in various experiments.

51 Simulation 4:Experiment A3 and A4: Effectiveness of Past Knowledge
(a) Accomplished missions (b) Eliminated survivors Comparison of CAEPSO's results in A3 and A4 experiments with same initializations during the testing phases.

52 Heterogeneous scenario(S1,S2,S3,S4)

53 Simulation 4:Experiments with Heterogeneous Agents
(a) Training Phase/ AEPSO (b) Testing Phase/ CAEPSO Comparison of CAEPSO's results in terms of accomplished missions with various scenarios in both training and testing phase.

54 Simulation 4: Experiments with Heterogeneous Agents
(a) Training Phase (b) Testing Phase Comparison of CAEPSO's results in terms of the number of rescued survivors in various scenarios in both the training and testing phases.
