Department of Electrical Engineering, Southern Taiwan University Robotic Interaction Learning Lab 1 The optimization of the application of fuzzy ant colony algorithm in soccer robot Juing-Shian Chiou, Kuo-Yang Wang, and Ming-Yuan Shieh Department of Electrical Engineering, Southern Taiwan University, Tainan County, Taiwan, R.O.C.

Department of Electrical Engineering, Southern Taiwan University 2 Robotic Interaction Learning Lab Outline
Abstract
Introduction
Using generalized predictive control to predict the goal position
The application of fuzzy ant colony algorithm on optimized speed of robot
Ant colony algorithm used in obstacle avoidance
Experiments
Conclusion

Department of Electrical Engineering, Southern Taiwan University 3 Robotic Interaction Learning Lab Abstract This article presents a method based on fuzzy ant colony optimization and uses it to design an optimal speed for the soccer robots. We also apply ant colony optimization to plan routes that avoid obstacles. In addition, we add generalized predictive control to predict the position at which the target is likely to appear at the next sampling time.

Department of Electrical Engineering, Southern Taiwan University 4 Robotic Interaction Learning Lab Introduction Here we design a GPC to help the robot quickly predict the position of the target at the next sampling time, after which it estimates the time required to reach the target and the target's position at the next sampling time in order to decide the route. We use a fuzzy logic controller to generate the speeds of the right and left wheels, and then apply the ant colony algorithm to adjust its fuzzy rules and reach an optimized result. Finally, the ant colony algorithm is also used to plan the obstacle-avoiding routes, which completes the efficiency and effectiveness of the strategy.

Department of Electrical Engineering, Southern Taiwan University 5 Robotic Interaction Learning Lab Simulation platform Fig.1 Five-versus-five simulation platform

Department of Electrical Engineering, Southern Taiwan University 6 Robotic Interaction Learning Lab System architecture Fig. 2 System architecture.

Department of Electrical Engineering, Southern Taiwan University 7 Robotic Interaction Learning Lab Using generalized predictive control to predict the goal position (1/5) We utilized the current position and sampling time of the target to predict the target position at the next sampling time. The following steps illustrate the procedure followed. (I) Although this system is nonlinear, extremely short sampling times were used so that the system can be treated as linear. (II) First, we gathered the following useful conditions from the system:

Department of Electrical Engineering, Southern Taiwan University 8 Robotic Interaction Learning Lab Useful conditions 1) The position of the robot 2) The former position of the target 3) The current position of the target (The former and current positions were determined on the basis of the sampling time, as shown in Fig. 3.) Fig. 3. Sampling time.

Department of Electrical Engineering, Southern Taiwan University 9 Robotic Interaction Learning Lab Using generalized predictive control to predict the goal position(2/5) (III) We calculated the distances between the former and current positions of the target, and we determined the speed of the target by using the sampling time. The equation below shows the calculations involved.
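(The equation itself is not shown here. A minimal reconstruction, assuming the target moved from $(x_{t-1},y_{t-1})$ to $(x_t,y_t)$ during the sampling time $T$:)

$$ v_{target} = \frac{\sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}}{T} $$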

Department of Electrical Engineering, Southern Taiwan University 10 Robotic Interaction Learning Lab Using generalized predictive control to predict the goal position (3/5) (IV) After calculating the velocity of the target, we designed a GPC by using this velocity, the direction of the target, and the sampling time, which was then used to find the subsequent position of the target, as calculated by means of the equation below and illustrated in Figure 4.
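(A minimal reconstruction of this prediction, assuming the target keeps the estimated speed $v_{target}$ and the heading $\theta_{target}$ obtained from its former and current positions over the next sampling time $T$:)

$$ \hat{x}_{t+1} = x_t + v_{target}\,T\cos\theta_{target}, \qquad \hat{y}_{t+1} = y_t + v_{target}\,T\sin\theta_{target} $$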

Department of Electrical Engineering, Southern Taiwan University 11 Robotic Interaction Learning Lab Subsequent position of the target Fig.4. Subsequent position of the target.

Department of Electrical Engineering, Southern Taiwan University 12 Robotic Interaction Learning Lab Using generalized predictive control to predict the goal position(4/5) (V) We then calculated the time needed for the robot to reach the target at its central speed, based on the distance ( ) that it had to cover to reach its current position. The equation below shows the calculations involved ( is the speed of the left wheel of the Robot, is the speed of the right wheel).
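(The missing symbols are assumed here to be the wheel speeds and the distance. Under that assumption, a minimal reconstruction with $v_L$ and $v_R$ as the left- and right-wheel speeds and $d$ as the distance to cover is:)

$$ v_c = \frac{v_L + v_R}{2}, \qquad t = \frac{d}{v_c} $$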

Department of Electrical Engineering, Southern Taiwan University 13 Robotic Interaction Learning Lab Using generalized predictive control to predict the goal position (5/5) (VI) If the former is larger, the robot proceeded to the next (predicted) position of the target; if it is smaller, the robot proceeded to the current position of the target. By repeating steps I to VI, the target could be reached in less time.

Department of Electrical Engineering, Southern Taiwan University 14 Robotic Interaction Learning Lab The application of fuzzy ant colony algorithm on optimized speed of robot The design of a fuzzy logic controller The state equation The ant colony algorithm

Department of Electrical Engineering, Southern Taiwan University 15 Robotic Interaction Learning Lab The design of a fuzzy logic controller We designed an FLC to generate the velocities of both wheels of the robot. The two input parameters of the FLC are the distance d and the angle ψ. Fig. 5. Relationship between d and ψ.

Department of Electrical Engineering, Southern Taiwan University 16 Robotic Interaction Learning Lab Membership function Each input parameter is divided into seven classes, as shown in Fig. 6 and Fig. 7. Fig. 6. The distance between the robot and the goal. Fig. 7. The orientation of the robot with respect to the straight-line path to the goal.
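(The membership functions of Figs. 6-7 and the rule table on the next slide are only shown as images in the original. The sketch below is a minimal Python illustration of such a controller, assuming triangular membership functions over seven labels per input and a small hypothetical rule base; the breakpoints, labels, and consequent speeds are assumptions, not the paper's values.)

```python
# Minimal sketch of a Mamdani-style fuzzy controller for the two wheel speeds.
# Membership-function breakpoints and rule consequents are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Seven labels per input, as in Fig. 6 and Fig. 7 (breakpoints are assumed).
DIST_SETS = {          # distance to goal, in cm
    "VN": (0, 0, 20), "N": (0, 20, 40), "MN": (20, 40, 60), "M": (40, 60, 80),
    "MF": (60, 80, 100), "F": (80, 100, 120), "VF": (100, 120, 120),
}
ANG_SETS = {           # heading error, in degrees
    "NL": (-180, -120, -60), "NM": (-120, -60, -20), "NS": (-60, -20, 0),
    "Z": (-20, 0, 20), "PS": (0, 20, 60), "PM": (20, 60, 120), "PL": (60, 120, 180),
}

# Hypothetical rule table: (distance label, angle label) -> (v_left, v_right) in cm/s.
RULES = {
    ("F", "Z"): (80, 80), ("F", "PS"): (80, 60), ("F", "NS"): (60, 80),
    ("N", "Z"): (40, 40), ("N", "PS"): (40, 25), ("N", "NS"): (25, 40),
}

def flc(distance, angle):
    """Fuzzify both inputs, fire the rules, and defuzzify by weighted average."""
    mu_d = {lbl: tri(distance, *abc) for lbl, abc in DIST_SETS.items()}
    mu_a = {lbl: tri(angle, *abc) for lbl, abc in ANG_SETS.items()}
    num_l = num_r = den = 0.0
    for (dl, al), (vl, vr) in RULES.items():
        w = min(mu_d[dl], mu_a[al])        # rule firing strength (min t-norm)
        num_l += w * vl
        num_r += w * vr
        den += w
    if den == 0.0:
        return 0.0, 0.0                    # no rule fired
    return num_l / den, num_r / den

if __name__ == "__main__":
    print(flc(distance=90.0, angle=10.0))  # -> left/right wheel speeds
```

Calling flc(90.0, 10.0) fires the rules near the "far distance" and "near-zero angle" labels and returns a left/right wheel-speed pair through weighted-average defuzzification.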

Department of Electrical Engineering, Southern Taiwan University 17 Robotic Interaction Learning Lab Fuzzy rule table

Department of Electrical Engineering, Southern Taiwan University 18 Robotic Interaction Learning Lab The state equation Using the velocities generated by the FLC, we determined the maximum velocity of the robot. The mathematical model of the robot's motion is defined as follows:
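(The model itself is not shown here. A standard differential-drive kinematic model, which is assumed to be what this slide refers to, with wheel speeds $v_L$, $v_R$, wheel separation $L$, and heading $\theta$, is:)

$$ \dot{x} = \frac{v_L + v_R}{2}\cos\theta, \qquad \dot{y} = \frac{v_L + v_R}{2}\sin\theta, \qquad \dot{\theta} = \frac{v_R - v_L}{L} $$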

Department of Electrical Engineering, Southern Taiwan University 19 Robotic Interaction Learning Lab The state equation Having defined the state variables, we were able to determine the state equation. Consequently, we identified the optimal velocity vector of the left wheel, and that of the right wheel.

Department of Electrical Engineering, Southern Taiwan University 20 Robotic Interaction Learning Lab The ant colony algorithm First, we find the four rules triggered by the fuzzy logic controller, as Fig. 8 shows. Suppose the triggered rules are A, B, C, and D. Fig. 8 The triggered rules, labelled A, B, C, and D.

Department of Electrical Engineering, Southern Taiwan University 21 Robotic Interaction Learning Lab The ant colony algorithm In short, the domain is defined by the limits of the membership functions, which allows us to transform this question into a route-finding problem on a plane. Fig. 9 The fuzzy rules transformed into route form.

Department of Electrical Engineering, Southern Taiwan University 22 Robotic Interaction Learning Lab The ant colony algorithm Let the number of ants in each city at a given time be defined as usual; on the computation side, the cities correspond to points of the domain, and the total number of ants is the sum over all cities. The probability of choosing the next target, i.e. the probability that an ant moves to the next city, is governed by the visibility and the pheromone level, and is given by the equation below.
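(The equation is not reproduced here. The standard ant-system transition probability, which this step appears to follow, with pheromone $\tau_{ij}$, visibility $\eta_{ij}=1/d_{ij}$, weights $\alpha,\beta$, and $\mathrm{allowed}_k$ the cities ant $k$ may still visit, is:)

$$ p_{ij}^{k}(t) = \frac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in \mathrm{allowed}_k} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}}, \qquad j \in \mathrm{allowed}_k $$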

Department of Electrical Engineering, Southern Taiwan University 23 Robotic Interaction Learning Lab The ant colony algorithm For each ant, the pheromone value at time t on the route from i to j is maintained and updated as shown in the equations below.
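(These update equations are not shown here. The standard ant-system form, assuming an evaporation rate $\rho$ and summing the deposits of all $m$ ants, is:)

$$ \tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \sum_{k=1}^{m}\Delta\tau_{ij}^{k} $$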

Department of Electrical Engineering, Southern Taiwan University 24 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance We propose an ant colony algorithm that plans the obstacle-avoidance path of a moving object such as a soccer robot, as Fig. 10 shows. Fig. 10 Obstacle avoidance path.

Department of Electrical Engineering, Southern Taiwan University 25 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance To improve the search speed of the ant colony algorithm and to prevent slow convergence and convergence to local optima, we take the pheromone concentration obtainable at node i into account to determine the number of paths e that an ant may choose, according to the equation below:

Department of Electrical Engineering, Southern Taiwan University 26 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance The soccer robot has to choose the best path. This part adopts an objective function to describe the performance of each path choice, as given by the equation below. Once the objective function is fixed, the weight value of every path that the robot may pass through is determined by the following equation.

Department of Electrical Engineering, Southern Taiwan University 27 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance We simulate the ants' pheromone in the following way. When all the robots have found a feasible solution for one planned path, it may still not be the best solution, because the pheromone has changed in the meantime. It is therefore necessary to perform a global update, whose principle is given by the equation below.

Department of Electrical Engineering, Southern Taiwan University 28 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance The pheromone increment on path (i, j) takes the same form as in the Ant-Cycle model, as shown in the equation below.
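(The Ant-Cycle deposit rule referred to here is not reproduced; its standard form, with constant $Q$ and tour length $L_k$ of ant $k$, is:)

$$ \Delta\tau_{ij}^{k} = \begin{cases} \dfrac{Q}{L_k}, & \text{if ant } k \text{ traverses edge } (i,j) \text{ in its tour} \\[2mm] 0, & \text{otherwise} \end{cases} $$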

Department of Electrical Engineering, Southern Taiwan University 29 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance Now we explain the procedure of the ant colony algorithm. Step 1: Parameter initialization. Step 2: Iterative process; calculate the probability of each path choice according to the transition probability equation.

Department of Electrical Engineering, Southern Taiwan University 30 Robotic Interaction Learning Lab Ant colony algorithm used in obstacle avoidance Step 3: Update the pheromone concentration of each path according to the update equation. Step 4: Repeat Steps 2 and 3 until the ant reaches its target point. Step 5: Stop the iterative search for any of the m ants whose current path length already exceeds the best path length of the previous iteration. Step 6: Set N = N + 1; if N < NC, place the ants at the starting and target points again and repeat Step 2; otherwise output the best path and stop the algorithm.
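(A minimal Python sketch of Steps 1-6, assuming a waypoint graph whose edges already exclude obstacle cells; the parameter values ALPHA, BETA, RHO, Q, M, NC, the graph representation, and the early-stop rule are illustrative assumptions, not the paper's implementation.)

```python
import math
import random

# Minimal sketch of the ant-colony path-planning procedure (Steps 1-6).
# The waypoint graph, parameter values, and early-stop rule are assumptions.

ALPHA, BETA = 1.0, 2.0      # weights of pheromone vs. visibility (1/distance)
RHO, Q = 0.5, 100.0         # evaporation rate and pheromone deposit constant
M, NC = 10, 50              # number of ants and number of iterations

def aco_plan(nodes, edges, start, goal):
    """nodes: {id: (x, y)}; edges: {id: set of collision-free neighbour ids}."""
    dist = lambda a, b: math.dist(nodes[a], nodes[b])
    tau = {(i, j): 1.0 for i in edges for j in edges[i]}    # Step 1: initialize
    best_path, best_len = None, float("inf")

    for _ in range(NC):                                     # Step 6: N = N + 1 loop
        for _ in range(M):
            path, visited, length = [start], {start}, 0.0
            while path[-1] != goal:                         # Step 2: build a path
                i = path[-1]
                cand = [j for j in edges[i] if j not in visited]
                if not cand:                                # dead end: discard ant
                    length = float("inf")
                    break
                weights = [tau[(i, j)] ** ALPHA * (1.0 / dist(i, j)) ** BETA
                           for j in cand]
                j = random.choices(cand, weights=weights)[0]
                length += dist(i, j)
                if length > best_len:                       # Step 5: stop early when the
                    length = float("inf")                   # partial path is already worse
                    break
                path.append(j)
                visited.add(j)
            if math.isfinite(length):
                if length < best_len:
                    best_path, best_len = path, length
                for a, b in zip(path, path[1:]):            # Step 3: Ant-Cycle deposit
                    tau[(a, b)] += Q / length
        for key in tau:                                     # Step 3: evaporation after
            tau[key] *= (1.0 - RHO)                         # all m ants have moved
    return best_path, best_len

if __name__ == "__main__":
    # Tiny 4-node example; real use would build a grid of free waypoints.
    nodes = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
    edges = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
    print(aco_plan(nodes, edges, start=0, goal=3))
```

In an obstacle-avoidance setting, `edges` would list only the collision-free neighbours of each waypoint, so any path the planner returns automatically avoids the obstacles.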

Department of Electrical Engineering, Southern Taiwan University 31 Robotic Interaction Learning Lab Experiments This part presents several simulations. The first part is the experiment in which the robot reaches top speed while predicting and controlling its route. The second part is the simulation of the optimal route of the robot. The third part is the obstacle-avoiding route of the robot.

Department of Electrical Engineering, Southern Taiwan University 32 Robotic Interaction Learning Lab Simulations of the velocity and the GPC Fig. 11 Using the fuzzy ant colony algorithm to adjust the velocity of the soccer robot. Fig. 12 Using GPC to predict the movement of the target and design the moving route for the robot.

Department of Electrical Engineering, Southern Taiwan University 33 Robotic Interaction Learning Lab Simulation of the robot's path Fig. 13 Using the fuzzy ant colony controller to track the route of the target, simulated in MATLAB. Fig. 14 Using a robot driven by the fuzzy ant colony controller to search for the route of the target, simulated on the FIRA platform.

Department of Electrical Engineering, Southern Taiwan University 34 Robotic Interaction Learning Lab Simulate obstacle avoidance path Fig. 15 Simulated obstacle avoidance path of the soccer robot in MATLAB. Fig. 16 Simulated obstacle avoidance path of the soccer robot in the FIRA simulator.

Department of Electrical Engineering, Southern Taiwan University 35 Robotic Interaction Learning Lab Conclusion The results of the experiments above show that the proposed method can be applied to the wheeled robot effectively, and that the generalized predictive controller we designed can identify the position at which the target will appear at the next sampling time. In the future, we will shorten the time required by the fuzzy ant colony algorithm so that the system reaches the optimal condition in a shorter time. By incorporating other algorithms, we can search for the best combination of them.

Department of Electrical Engineering, Southern Taiwan University 36 Robotic Interaction Learning Lab Thanks for your attention!