1 SA, GA and GSA in Fuzzy Systems
Supervisor: Prof. Ho Cheng-Seen
Presented by: Irfan Subakti 司馬伊凡 (M9215801)
EE601-2 NTUST, February 9th, 2004

2 Fields of Artificial Intelligence (AI)

3 Simulated Annealing (SA)
SA is a stochastic iterative improvement method for solving combinatorial optimization problems.
SA generates a single sequence of solutions and searches for an optimum solution along this search path.
SA starts with a given initial solution x_0.
At each step, SA generates a candidate solution x' by changing a small fraction of the current solution x.
SA accepts the candidate solution as the new solution with probability min{1, e^(-Δf/T)}, where Δf = f(x') - f(x) is the cost difference between the current solution x and the candidate solution x', and T is a control parameter called temperature.
A key point of SA is that it accepts up-hill moves with probability e^(-Δf/T).
This allows SA to escape from local minima.
But SA cannot cover a large region of the solution space within a limited computation time, because SA is based on small moves.
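The acceptance rule min{1, e^(-Δf/T)} can be checked numerically; the same up-hill move is likely to be accepted at high temperature and rarely accepted once T is small. A minimal sketch (the Δf and T values are illustrative):

```python
import math

def acceptance_probability(delta_f: float, temperature: float) -> float:
    """SA acceptance probability min{1, e^(-delta_f / T)}."""
    if delta_f < 0:          # improving move: always accepted
        return 1.0
    return math.exp(-delta_f / temperature)

# Improving moves are always accepted.
assert acceptance_probability(-1.0, 10.0) == 1.0
# The same up-hill move (delta_f = 2) is far likelier at high temperature.
assert acceptance_probability(2.0, 10.0) > acceptance_probability(2.0, 0.5)
```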

4 Simulated Annealing (SA) (continued)
Pseudo-code of Simulated Annealing (SA) (Koakutsu et al. [20]):

SA_algorithm(N_a, T_0, α) {
  x ← x_0;  /* initial solution */
  T ← T_0;  /* initial temperature */
  while (system is not frozen) {
    for (loop = 1; loop ≤ N_a; loop++) {
      x' ← Mutate(x);
      Δf ← f(x') - f(x);
      r ← random number between 0 and 1;
      if (Δf < 0 or r < exp(-Δf/T))
        x ← x';
    }
    T ← T * α;  /* lower temperature */
  }
  return x;
}
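As a sketch, the pseudo-code above maps onto the following Python, here minimizing a toy one-dimensional cost f(x) = x². The toy problem, the Mutate step, and all parameter values are illustrative choices, not taken from the slides:

```python
import math
import random

def sa_algorithm(x0, f, mutate, n_a=50, t0=10.0, alpha=0.9, t_min=1e-3):
    """Simulated annealing: accept worse moves with probability exp(-df/T)."""
    x, t = x0, t0
    while t > t_min:                      # "system is not frozen"
        for _ in range(n_a):
            x_new = mutate(x)
            df = f(x_new) - f(x)
            if df < 0 or random.random() < math.exp(-df / t):
                x = x_new
        t *= alpha                        # lower temperature
    return x

random.seed(0)
best = sa_algorithm(x0=8.0, f=lambda x: x * x,
                    mutate=lambda x: x + random.uniform(-1, 1))
assert abs(best) < 1.0                    # close to the minimum at x = 0
```

Note that, as in the pseudo-code, this returns the final solution x rather than the best solution ever visited; GSA (slide 11) is the variant that explicitly tracks a best-so-far solution.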

5 Genetic Algorithms (GA)
GA is another approach for solving combinatorial optimization problems.
GA applies an evolutionary mechanism to optimization problems.
It starts with a population of initial solutions.
Each solution has a fitness value, which is a measure of solution quality.
At each step, called a generation, GA produces a set of candidate solutions, called child solutions (offspring), using two types of genetic operators: mutation and crossover.
It selects good solutions as survivors to the next generation according to their fitness values.
The mutation operator takes a single parent and modifies it randomly in a localized manner, so that it makes a small jump in the solution space.
The crossover operator, on the other hand, takes two solutions as parents and creates child solutions by combining partial solutions of the parents.
Crossover tends to create child solutions which differ from both parent solutions.
It results in a large jump in the solution space.
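With bit-string solutions, the two operators can be sketched as follows. One-point crossover and bit-flip mutation are common textbook choices used here for illustration; the slides do not prescribe a particular encoding:

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover: child combines partial solutions of two parents."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(solution):
    """Bit-flip mutation: a small, localized random change (a small jump)."""
    i = random.randrange(len(solution))
    return solution[:i] + [1 - solution[i]] + solution[i + 1:]

random.seed(1)
child = crossover([0] * 8, [1] * 8)
assert 0 < sum(child) < 8         # child differs from both parents
assert sum(mutate([0] * 8)) == 1  # exactly one bit changed
```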

6 Genetic Algorithms (GA) (continued)
There are two key differences between GA and SA:
1. GA maintains a population of solutions and uses them to search the solution space.
2. GA uses the crossover operator, which causes a large jump in the solution space.
Feature 2 allows GA to globally search a large region of the solution space. But GA has no explicit way to produce a sequence of small moves in the solution space.
Mutation creates a single small move at a time instead of a sequence of small moves.
As a result, GA cannot search a local region of the solution space exhaustively.

7 Genetic Algorithms (GA) (continued)

8 Pseudo-code of Genetic Algorithms (GA) (Koakutsu et al. [20]):

GA_algorithm(L, R_c, R_m) {
  X ← {x_1, ..., x_L};  /* initial population */
  while (stop criterion is not met) {
    X' ← ∅;
    while (number of children created < L × R_c) {
      select two solutions x_i, x_j from X;
      x' ← Crossover(x_i, x_j);
      X' ← X' + {x'};
    }
    select L solutions from X ∪ X' as a new population;
    while (number of solutions mutated < L × R_m) {
      select one solution x_k from X;
      x_k ← Mutate(x_k);
    }
  }
  return the best solution in X;
}
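A runnable sketch of this loop, applied to the toy "one-max" problem (maximize the number of 1-bits). The problem, the truncation-style survivor selection, and the parameter values are illustrative assumptions, not from the slides:

```python
import random

def ga_algorithm(f, length, pop_size=20, r_c=0.5, r_m=0.2, generations=60):
    """GA sketch: crossover phase, fitness-based survival, mutation phase."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size * r_c:          # crossover phase
            a, b = random.sample(pop, 2)
            point = random.randrange(1, length)
            children.append(a[:point] + b[point:])
        # keep the pop_size fittest of parents plus children
        pop = sorted(pop + children, key=f, reverse=True)[:pop_size]
        for _ in range(int(pop_size * r_m)):           # mutation phase
            k = random.randrange(pop_size)
            i = random.randrange(length)
            pop[k][i] = 1 - pop[k][i]
    return max(pop, key=f)

random.seed(0)
best = ga_algorithm(f=sum, length=16)   # "one-max": maximize number of 1s
assert sum(best) >= 13                  # far better than a random string
```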

9 Genetic Simulated Annealing (GSA)
In order to improve the performance of GA and SA, several hybrid algorithms have been proposed.
Mutation used in GA tends to destroy some good features of solutions at the final stages of the optimization process.
SA-based: Sirag and Weisser [10] proposed a thermodynamic genetic operator, which incorporates an annealing schedule to control the probability of applying the mutation, while Adler [11] used an SA-based acceptance function to control the probability of accepting a new solution produced by the mutation.
GA-based: More recent works on GA-oriented hybrids are the Simulated Annealing Genetic Algorithm (SAGA) method proposed by Brown et al. [12] and the Annealing Genetic (AG) method proposed by Lin et al. [13].
Both methods divide each generation into two phases: a GA phase and an SA phase.
GA generates a set of new solutions using the crossover operator, and then SA further refines each solution in the population.
While SAGA uses the same annealing schedule for each SA phase, AG tries to optimize different schedules for different SA phases.

10 GSA (continued)
The above GA-oriented hybrid methods try to incorporate the local stochastic hill-climbing feature of SA into GA.
Since they incorporate a full SA run into each generation and the number of generations is usually very large, GA-oriented hybrid methods are very time-consuming.
SA-oriented hybrid approaches attempt to adopt the global crossover operations of GA into SA.
Parallel Genetic Simulated Annealing (PGSA) [14, 15] is a parallel version of SA incorporating GA features.
During parallel SA-based search, crossover is used to generate new solutions in order to enlarge the search region of SA.
GSA was proposed by Koakutsu et al. [20].
While PGSA generates the seeds of SA local search in parallel (that is, the order of applying each SA local search is independent), GSA generates the seeds of SA sequentially (that is, the seed of an SA local search depends on the best-so-far solutions of all previous SA local searches).
This sequential approach seems to generate better child solutions.
In addition, compared to PGSA, GSA uses fewer crossover operations, since it only uses crossover when the SA local search reaches a flat surface and it is time to jump in the solution space.

11 GSA (continued)
GSA starts with a population X = {x_1, ..., x_Np} and repeatedly applies three operations: SA-based local search, GA-based crossover, and population update.
SA-based local search produces a candidate solution x' by changing a small fraction of the state of x.
The candidate solution is accepted as the new solution with probability min{1, e^(-Δf/T)}.
GSA preserves the local best-so-far solution x*_L during the SA-based local search.
When the search reaches a flat surface or the system is frozen, GSA produces a large jump in the solution space by using GA-based crossover.
GSA picks a pair of parent solutions x_j and x_k at random from the population X such that f(x_j) ≤ f(x_k), applies the crossover operator, and then replaces the worst solution x_i with the new solution produced by the crossover operator.
At the end of each SA-based local search, GSA updates the population by replacing the current solution x_i with the local best-so-far solution x*_L.
GSA terminates when the CPU time reaches a given limit, and reports the global best-so-far solution x*_G.

12 GSA (continued)
Pseudo-code of GSA (Koakutsu et al. [20]):

GSA_algorithm(N_p, N_a, T_0, α) {
  X ← {x_1, ..., x_Np};  /* initialize population */
  x*_L ← the best solution among X;  /* initialize local best-so-far */
  x*_G ← x*_L;  /* initialize global best-so-far */
  while (not reach CPU time limit) {
    T ← T_0;  /* initialize temperature */
    /* jump */
    select the worst solution x_i from X;
    select two solutions x_j, x_k from X such that f(x_j) ≤ f(x_k);
    x_i ← Crossover(x_j, x_k);
    /* SA-based local search */
    while (not frozen or not meet stopping criterion) {
      for (loop = 1; loop ≤ N_a; loop++) {
        x' ← Mutate(x_i);
        Δf ← f(x') - f(x_i);
        r ← random number between 0 and 1;
        if (Δf < 0 or r < exp(-Δf/T))
          x_i ← x';
        if (f(x_i) < f(x*_L))
          x*_L ← x_i;  /* update local best-so-far */
      }
      T ← T * α;  /* lower temperature */
    }
    if (f(x*_L) < f(x*_G))
      x*_G ← x*_L;  /* update global best-so-far */
    /* update population */
    x_i ← x*_L;
    f(x*_L) ← +∞;  /* reset current local best-so-far */
  }
  return x*_G;
}
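The GSA loop can be sketched in Python on the same toy minimization of f(x) = x². This is an illustrative reading of the pseudo-code, not Koakutsu et al.'s implementation: the CPU-time limit is replaced by a fixed jump count, the parent pair is drawn at random without the f(x_j) ≤ f(x_k) ordering (the averaging crossover used here is symmetric), and all parameters are made up:

```python
import math
import random

def gsa_algorithm(f, mutate, crossover, population, n_a=20, t0=5.0,
                  alpha=0.8, t_min=1e-2, n_jumps=30):
    """GSA sketch: crossover jump, SA local search, population update."""
    pop = list(population)
    best_global = min(pop, key=f)
    for _ in range(n_jumps):              # stands in for the CPU-time limit
        t = t0
        worst = max(range(len(pop)), key=lambda i: f(pop[i]))
        xa, xb = random.sample(pop, 2)
        x = crossover(xa, xb)             # jump: replaces the worst solution
        best_local = x
        while t > t_min:                  # SA-based local search
            for _ in range(n_a):
                x_new = mutate(x)
                df = f(x_new) - f(x)
                if df < 0 or random.random() < math.exp(-df / t):
                    x = x_new
                if f(x) < f(best_local):
                    best_local = x        # update local best-so-far
            t *= alpha                    # lower temperature
        if f(best_local) < f(best_global):
            best_global = best_local      # update global best-so-far
        pop[worst] = best_local           # population update
    return best_global

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(5)]
best = gsa_algorithm(f=lambda x: x * x,
                     mutate=lambda x: x + random.uniform(-1, 1),
                     crossover=lambda a, b: (a + b) / 2, population=pop)
assert abs(best) < 0.5                    # near the minimum at x = 0
```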

13 GSA to Estimate Null Values in Generating Weighted Fuzzy Rules from Relational Database Systems, and Its Estimation of Multiple Null Values

Basic Concepts of Fuzzy Sets
A fuzzy subset A of the universe of discourse U can be represented as follows:
A = μ_A(u_1)/u_1 + μ_A(u_2)/u_2 + ... + μ_A(u_n)/u_n    (1)
where μ_A is the membership function of the fuzzy subset A, μ_A: U → [0, 1], and μ_A(u_i) indicates the grade of membership of u_i in the fuzzy set A. If U is a continuous set, then the fuzzy subset A can be represented as follows:
A = ∫_U μ_A(u)/u    (2)

Part 1: Estimating Null Values in Relational Database Systems with GSA
In [1], Chen et al. described how to estimate null values in relational database systems with genetic algorithms.
A linguistic term can be represented by a fuzzy set, which in turn is represented by a membership function. In that paper, the membership functions of the linguistic terms "L", "SL", "M", "SH", and "H" of the attributes "Salary" and "Experience" in a relational database system are adopted from [6].

Notation  Denotes
L         Low
SL        Somewhat Low
M         Medium
SH        Somewhat High
H         High
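In this notation a discrete fuzzy set is just a map from elements to membership grades. A minimal sketch of a membership function: the triangular shape and its parameters below are illustrative only, since the slides adopt the actual curves for "L", "SL", "M", "SH", and "H" from [6]:

```python
def triangular(a, b, c):
    """Triangular membership function peaking at b (illustrative shape)."""
    def mu(u):
        if u <= a or u >= c:
            return 0.0
        return (u - a) / (b - a) if u <= b else (c - u) / (c - b)
    return mu

# mu_m: a hypothetical "Medium" term for Experience, peaking at 5 years.
mu_m = triangular(2.5, 5.0, 7.5)
assert mu_m(5.0) == 1.0          # full membership at the peak
assert 0.0 < mu_m(4.0) < 1.0     # partial membership on the slope
assert mu_m(10.0) == 0.0         # no membership outside the support
```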

14 … continue

Here is the relation in the relational database [4], [6]. The degrees of similarity between the values of the attribute "Degree" are listed below it.

EMP-ID   Degree     Experience   Salary
S1       Ph.D.      7.2          63,000
S2       Master     2.0          37,000
S3       Bachelor   7.0          40,000
S4       Ph.D.      1.2          47,000
S5       Master     7.5          53,000
S6       Bachelor   1.5          26,000
S7       Bachelor   2.3          29,000
S8       Ph.D.      2.0          50,000
S9       Ph.D.      3.8          54,000
S10      Bachelor   3.5          35,000
S11      Master     3.5          40,000
S12      Master     3.6          41,000
S13      Master     10.0         68,000
S14      Ph.D.      5.0          57,000
S15      Bachelor   5.0          36,000
S16      Master     6.2          50,000
S17      Bachelor   0.5          23,000
S18      Master     7.2          55,000
S19      Master     6.5          51,000
S20      Ph.D.      7.8          65,000
S21      Master     8.1          64,000
S22      Ph.D.      8.5          70,000

Similarity of "Degree" values:

            Bachelor   Master   Ph.D.
Bachelor    1          0.6      0.4
Master      0.6        1        0.6
Ph.D.       0.4        0.6      1

15 … continue

16 After fuzzification:

EMP-ID   Degree         Experience   Salary
S1       Ph.D./1.0      SH/0.9       63,000
S2       Master/1.0     L/0.5        37,000
S3       Bachelor/1.0   SH/1.0       40,000
S4       Ph.D./1.0      L/0.9        47,000
S5       Master/1.0     SH/0.75      53,000
S6       Bachelor/1.0   L/0.75       26,000
S7       Bachelor/1.0   SL/0.65      29,000
S8       Ph.D./1.0      L/0.5        50,000
S9       Ph.D./1.0      SL/0.6       54,000
S10      Bachelor/1.0   SL/0.75      35,000
S11      Master/1.0     SL/0.75      40,000
S12      Master/1.0     SL/0.7       41,000
S13      Master/1.0     H/1.0        68,000
S14      Ph.D./1.0      M/1.0        57,000
S15      Bachelor/1.0   M/1.0        36,000
S16      Master/1.0     SH/0.6       50,000
S17      Bachelor/1.0   L/1.0        23,000
S18      Master/1.0     SH/0.9       55,000
S19      Master/1.0     SH/0.75      51,000
S20      Ph.D./1.0      SH/0.6       65,000
S21      Master/1.0     H/0.55       64,000
S22      Ph.D./1.0      H/0.75       70,000

17 … continue

The degrees of similarity between two nonnumeric values are listed below:

Rank(Bachelor) = 1
Rank(Master) = 2
Rank(Ph.D.) = 3

Let X be a nonnumeric attribute. Based on the value Ti.X of the attribute X of tuple Ti and the value Tj.X of the attribute X of tuple Tj, where i ≠ j, the degree of closeness Closeness(Ti, Tj) between tuples Ti and Tj can be calculated by (8) or (9), where Weight(Tj.Degree) and Weight(Tj.Experience) denote the weights of the attributes "Degree" and "Experience", respectively, obtained from the fuzzified values of the attributes "Degree" and "Experience" of tuple Tj and derived from a chromosome.

If Rank(Ti.X) ≥ Rank(Tj.X) then
  Closeness(Ti, Tj) = Similarity(Ti.X, Tj.X) × Weight(Tj.Degree) + (Ti.Experience / Tj.Experience) × Weight(Tj.Experience)    (8)

If Rank(Ti.X) < Rank(Tj.X) then
  Closeness(Ti, Tj) = 1/Similarity(Ti.X, Tj.X) × Weight(Tj.Degree) + (Ti.Experience / Tj.Experience) × Weight(Tj.Experience)    (9)

where Similarity(Ti.X, Tj.X) denotes the degree of similarity between Ti.X and Tj.X, and its value is obtained from a fuzzy similarity matrix of the linguistic terms of the attribute X defined by a domain expert.
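Equations (8) and (9) can be sketched in Python as follows. The rank values and the similarity matrix come from the slides; the weights passed in are hypothetical stand-ins for values decoded from a chromosome, and the experience-ratio factor in the second term is my reading of the partially garbled formula:

```python
# Closeness between tuples Ti and Tj for the nonnumeric attribute "Degree".
RANK = {"Bachelor": 1, "Master": 2, "Ph.D.": 3}
SIMILARITY = {
    ("Bachelor", "Bachelor"): 1.0, ("Bachelor", "Master"): 0.6, ("Bachelor", "Ph.D."): 0.4,
    ("Master", "Bachelor"): 0.6,   ("Master", "Master"): 1.0,   ("Master", "Ph.D."): 0.6,
    ("Ph.D.", "Bachelor"): 0.4,    ("Ph.D.", "Master"): 0.6,    ("Ph.D.", "Ph.D."): 1.0,
}

def closeness(ti_degree, tj_degree, ti_exp, tj_exp, w_degree, w_experience):
    sim = SIMILARITY[(ti_degree, tj_degree)]
    if RANK[ti_degree] >= RANK[tj_degree]:
        # equation (8): similarity enters directly
        return sim * w_degree + (ti_exp / tj_exp) * w_experience
    # equation (9): similarity enters as its reciprocal
    return (1.0 / sim) * w_degree + (ti_exp / tj_exp) * w_experience

# Hypothetical weights 0.5/0.5, tuples S1 (Ph.D., 7.2) vs S2 (Master, 2.0):
print(round(closeness("Ph.D.", "Master", 7.2, 2.0, 0.5, 0.5), 3))  # 2.1
```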

18 … continue

The estimated value ETi.Salary of the attribute "Salary" of tuple Ti is obtained from the tuple Tj closest to Ti as follows:

ETi.Salary = Tj.Salary × Closeness(Ti, Tj)    (10)

The estimated error of each tuple is given by (11), where Errori denotes the estimated error between the estimated value ETi.Salary of the attribute "Salary" of tuple Ti and the actual value Ti.Salary of the attribute "Salary" of tuple Ti:

Errori = (ETi.Salary − Ti.Salary) / Ti.Salary    (11)

Let Avg_Error denote the average estimated error of the n tuples based on the combination of weights of the attributes derived from the chromosome, where

Avg_Error = (1/n) × Σi |Errori|    (12)

Then, we can obtain the fitness degree of this chromosome as follows:

Fitness Degree = 1 − Avg_Error    (13)
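Equations (10) through (13) chain together into a single fitness computation per chromosome. A sketch under the assumption that, for each tuple, the closest tuple's salary and the corresponding closeness value have already been found (all inputs below are hypothetical):

```python
# Fitness degree of a chromosome from per-tuple estimates, equations (10)-(13).
def fitness_degree(actual_salaries, closest_salaries, closeness_values):
    """actual_salaries[i]  = Ti.Salary,
    closest_salaries[i]    = Tj.Salary for the tuple Tj closest to Ti,
    closeness_values[i]    = Closeness(Ti, Tj)."""
    errors = []
    for ti_salary, tj_salary, close in zip(actual_salaries, closest_salaries,
                                           closeness_values):
        estimated = tj_salary * close                       # equation (10)
        errors.append((estimated - ti_salary) / ti_salary)  # equation (11)
    avg_error = sum(abs(e) for e in errors) / len(errors)   # equation (12)
    return 1.0 - avg_error                                  # equation (13)

print(fitness_degree([63000, 37000], [65000, 36000], [0.97, 1.02]))
```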

19 … continue

Here is a table in which one value is a null value; with GSA we try to obtain this value from the other values together with the formulas described above.

EMP-ID   Degree     Experience   Salary
S1       Ph.D.      7.2          63,000
S2       Master     2.0          37,000
S3       Bachelor   7.0          40,000
S4       Ph.D.      1.2          47,000
S5       Master     7.5          53,000
S6       Bachelor   1.5          26,000
S7       Bachelor   2.3          29,000
S8       Ph.D.      2.0          50,000
S9       Ph.D.      3.8          54,000
S10      Bachelor   3.5          35,000
S11      Master     3.5          40,000
S12      Master     3.6          41,000
S13      Master     10.0         68,000
S14      Ph.D.      5.0          57,000
S15      Bachelor   5.0          36,000
S16      Master     6.2          50,000
S17      Bachelor   0.5          23,000
S18      Master     7.2          55,000
S19      Master     6.5          51,000
S20      Ph.D.      7.8          65,000
S21      Master     8.1          64,000
S22      Ph.D.      8.5          NULL

20 … continue

{EvaluationAndBestSelection finds the best solution among the population and
 initializes LocalBestChromosomeSoFar and GlobalBestChromosomeSoFar:}
X ← {x1, ..., xNp};                 {initialize population}
x*L ← the best solution among X;    {initialize local best-so-far}
x*G ← x*L;                          {initialize global best-so-far}
FitnessDegreeEval ← fitness degree of the global best-so-far;

for i := 1 to number-of-generations do begin
  T ← T0;
  EvaluationAndWorstSelection;      {select the worst solution xi from X}
  CrossOver;                        {select two solutions xj, xk from X such that
                                     f(xj) ≥ f(xk): xi ← Crossover(xj, xk)}
  {Mutation: update local best-so-far if the mutated value is better}
  repeat
    for m := 0 to number-of-mutations do begin
      f(xi) ← fitness degree of the chromosome before mutation;
      x' ← Mutate(xi);
      f(x') ← fitness degree of the chromosome after mutation;
      Δf ← f(xi) − f(x');
      r ← random number between 0 and 1;
      if (Δf <= 0) or (r < exp(−Δf/T)) then  {accept improvements always, worse}
        xi ← x';                             {moves with probability exp(−Δf/T)}
      ft ← f(xi);
      if ft >= FitnessDegreeEval then begin
        x*L ← xi;                   {update local best-so-far}
        FitnessDegreeEval ← ft;
        FDLocalBestSoFar ← ft;      {get local best fitness degree}
      end;
    end;
    T ← T × α;                      {lower temperature}
  until T <= FrozenValue;

  CountCloseness(x*L);              {get FD from LocalBestChromosomeSoFar}
  AvgError := AvgError / NumData;
  FDLocalBestSoFar := 1 − AvgError;
  CountCloseness(x*G);              {get FD from GlobalBestChromosomeSoFar}
  AvgError := AvgError / NumData;
  FDGlobalBestSoFar := 1 − AvgError;
  if FDLocalBestSoFar >= FDGlobalBestSoFar then begin
    x*G ← x*L;                      {update global best-so-far}
    FitnessDegreeEval := FDGlobalBestSoFar;
  end;
  xi ← x*L;                         {update population}
end;
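The mutation step is the simulated-annealing core of GSA: a mutated chromosome always replaces the current one when its fitness is no worse, and otherwise with probability exp(−Δf/T). A compact sketch of that acceptance rule (function and parameter names are mine, not from the pseudocode):

```python
import math
import random

def sa_accept(f_current, f_mutated, temperature, rng=random.random):
    """SA acceptance for a fitness-maximization problem."""
    delta_f = f_current - f_mutated     # positive when the mutation is worse
    if delta_f <= 0:
        return True                     # improvement (or tie): always accept
    # worse move: accept with probability exp(-delta_f / T)
    return rng() < math.exp(-delta_f / temperature)

print(sa_accept(0.90, 0.95, 1.0))                      # True: fitness improved
print(sa_accept(0.95, 0.90, 1e-9, rng=lambda: 0.5))    # False: frozen temperature
```

At high temperature almost every worse chromosome is accepted, which lets the search escape local optima; as T is multiplied by α each pass, the loop degenerates into plain hill climbing before it freezes.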

21 … continue

Procedure CountCloseness is described below:

AvgError := 0.0;
for i := 0 to NumData − 1 do begin           {based on all available data}
  BestClosenessEval := MaxInt;
  IdxClosestCloseness := i;
  for j := 0 to NumData − 1 do
    if i <> j then begin
      if Rank(Ti.X) >= Rank(Tj.X) then       {equation (8)}
        ClosenessE(Ti,Tj) := Similarity(Ti.X,Tj.X) × Weight(Tj.Degree) +
                             (Ti.Experience/Tj.Experience) × Weight(Tj.Experience)
      else                                   {Rank(Ti.X) < Rank(Tj.X): equation (9)}
        ClosenessE(Ti,Tj) := 1/Similarity(Ti.X,Tj.X) × Weight(Tj.Degree) +
                             (Ti.Experience/Tj.Experience) × Weight(Tj.Experience);
      {find the tuple whose closeness is nearest to 1.0}
      {as the closest tuple to tuple Ti}
      ClosestCloseness := Abs(1 − ClosenessE);
      if ClosestCloseness <= BestClosenessEval then begin
        BestClosenessEval := ClosestCloseness;
        IdxClosestCloseness := j;
      end;
    end;
  {then we find the estimated salary and the error for every record;}
  {if the closest record is itself a null value, we must find}
  {another record whose closeness is nearest to 1}
  if IsNullValue(i) and IsNullValue(IdxClosestCloseness) then begin
    PreferIdx := GetPreferIdx;
    ETi.Salary := TPreferIdx.Salary × GetClosenessValue(PreferIdx);
    if TPreferIdx.Salary <> 0 then
      Errori := (ETi.Salary − TPreferIdx.Salary) / TPreferIdx.Salary;
  end
  else begin
    ETi.Salary := TIdxClosestCloseness.Salary × GetClosenessValue(IdxClosestCloseness);
    if Ti.Salary <> 0 then
      Errori := (ETi.Salary − Ti.Salary) / Ti.Salary;
  end;
  AvgError := AvgError + Abs(Errori);
end;
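The selection rule inside CountCloseness picks, for each tuple Ti, the tuple Tj whose closeness is nearest to 1.0. That rule can be sketched on its own (the closeness values below are hypothetical):

```python
# Pick the index j != i whose closeness to tuple Ti is nearest to 1.0,
# mirroring the BestClosenessEval / IdxClosestCloseness bookkeeping above.
def closest_tuple(closeness_row, i):
    """closeness_row[j] = Closeness(Ti, Tj); returns the j minimizing |1 - closeness|."""
    candidates = [(abs(1.0 - c), j) for j, c in enumerate(closeness_row) if j != i]
    return min(candidates)[1]   # min on (distance, index) pairs

print(closest_tuple([1.0, 0.8, 1.05, 0.4], i=0))  # 2, since |1 - 1.05| is smallest
```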

22 … continue

Function GetClosenessValue is described below:

function GetClosenessValue(Idx)
  Result ← the value in ClosenessE that has the same index as Idx

Function GetPreferIdx is described below:

function GetPreferIdx
  Result ← the index of the value in ClosenessE that is closest to 1 and is not a null value

23 … continue

Experiments

We ran this program with different parameters, 10 times for each setting. We obtained the results below.

Experiment type 1:
Mutation Rate = 0.01 = 1%
Initial Temperature = 100
Alpha = 0.7
Frozen Value = 0.00001
Index of Null Values = 21 (i.e., the 22nd row/tuple in the relational database)
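Because the inner SA loop cools geometrically (T ← α·T) until T ≤ FrozenValue, the three parameters above fix the number of cooling steps in every generation. A quick sketch of that count:

```python
# Number of temperature reductions per generation for a geometric schedule
# T := alpha * T, starting at t0 and stopping once T <= frozen.
def cooling_steps(t0, alpha, frozen):
    steps = 0
    t = t0
    while t > frozen:
        t *= alpha
        steps += 1
    return steps

print(cooling_steps(100.0, 0.7, 0.00001))  # 46 cooling steps with the parameters above
```

So each of the 100 to 300 generations runs 46 cooling passes over the mutation loop, which is why the running times grow quickly with population size and generation count in the tables that follow.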

24 … continue

25 Size of Population: 30; Number of Generations: 100

Run   Avg. Estimated Error   Total Time
1     0.011162399            0h:2m:16s:781ms
2     0.009827038            0h:2m:10s:844ms
3     0.008852650            0h:2m:36s:797ms
4     0.008852650            0h:3m:12s:250ms
5     0.008852650            0h:3m:5s:406ms
6     0.008852650            0h:2m:45s:766ms
7     0.007028583            0h:2m:34s:469ms
8     0.006138154            0h:2m:44s:46ms
9     0.006138154            0h:2m:41s:235ms
10    0.006138154            0h:2m:32s:515ms

Min      0.006138154
Average  0.008184308
Max      0.011162399

26 … continue

Size of Population: 40; Number of Generations: 150

Run   Avg. Estimated Error   Total Time
1     0.018120004            0h:6m:9s:578ms
2     0.014019037            0h:8m:23s:937ms
3     0.008319306            0h:6m:56s:344ms
4     0.004907964            0h:7m:13s:969ms
5     0.004907964            0h:9m:16s:703ms
6     0.003288305            0h:7m:40s:547ms
7     0.003172975            0h:6m:53s:187ms
8     0.003172975            0h:7m:9s:438ms
9     0.003153504            0h:7m:34s:594ms
10    0.003153504            0h:7m:39s:406ms

Min      0.003153504
Average  0.006621554
Max      0.018120004

27 … continue

Size of Population: 50; Number of Generations: 200

Run   Avg. Estimated Error   Total Time
1     0.009636727            0h:10m:36s:531ms
2     0.007631349            0h:10m:31s:250ms
3     0.007631349            0h:9m:54s:484ms
4     0.007631349            0h:9m:47s:719ms
5     0.007631349            0h:9m:47s:735ms
6     0.007631349            0h:9m:54s:62ms
7     0.007631349            0h:9m:50s:469ms
8     0.007631349            0h:9m:48s:531ms
9     0.006410408            0h:9m:53s:828ms
10    0.006334942            0h:9m:45s:610ms

Min      0.006334942
Average  0.007580152
Max      0.009636727

28 … continue

Size of Population: 60; Number of Generations: 300

Run   Avg. Estimated Error   Total Time
1     0.013658080            0h:20m:28s:453ms
2     0.013658080            0h:20m:32s:328ms
3     0.004348226            0h:20m:29s:500ms
4     0.004348226            0h:20m:30s:125ms
5     0.004348226            0h:20m:27s:78ms
6     0.004348226            0h:20m:23s:282ms
7     0.004348226            0h:20m:27s:984ms
8     0.004348226            0h:20m:30s:328ms
9     0.004348226            0h:20m:31s:578ms
10    0.004348226            0h:20m:30s:16ms

Min      0.004348226
Average  0.006210197
Max      0.013658080

29 … continue

Experiment type 2:
Mutation Rate = 0.1 = 10%
Initial Temperature = 100
Alpha = 0.7
Frozen Value = 0.00001
Index of Null Values = 21 (i.e., the 22nd row/tuple in the relational database)

30 … continue

Size of Population: 30; Number of Generations: 100

Run   Avg. Estimated Error   Total Time
1     0.00905076             0h:21m:56s:125ms
2     0.00905076             0h:21m:35s:563ms
3     0.007650809            0h:20m:59s:672ms
4     0.007650809            0h:20m:53s:484ms
5     0.004252343            0h:20m:58s:453ms
6     0.003712235            0h:20m:57s:891ms
7     0.003712235            0h:21m:1s:203ms
8     0.003194167            0h:20m:57s:734ms
9     0.003194167            0h:21m:2s:641ms
10    0.003194167            0h:21m:10s:687ms

Min      0.003194167
Average  0.005466245
Max      0.00905076

31 … continue

Size of Population: 40; Number of Generations: 100

Run   Avg. Estimated Error   Total Time
1     0.007768472            0h:46m:50s:906ms
2     0.007720671            0h:47m:0s:282ms
3     0.007720671            0h:46m:55s:15ms
4     0.005481832            0h:47m:2s:891ms
5     0.005177396            0h:47m:8s:125ms
6     0.005177396            0h:47m:1s:890ms
7     0.004702165            0h:47m:57s:907ms
8     0.003876522            0h:50m:1s:31ms
9     0.003410507            0h:50m:9s:594ms
10    0.003410507            0h:50m:36s:234ms

Min      0.003410507
Average  0.005444614
Max      0.007768472

32 … continue

Size of Population: 50; Number of Generations: 100

Run   Avg. Estimated Error   Total Time
1     0.009740355            1h:23m:28s:125ms
2     0.009450571            1h:23m:14s:938ms
3     0.009450571            1h:23m:25s:219ms
4     0.008133827            1h:23m:18s:734ms
5     0.008133827            1h:23m:21s:969ms
6     0.005986674            1h:23m:24s:234ms
7     0.003821698            1h:23m:24s:188ms
8     0.003821698            1h:23m:30s:375ms
9     0.003821698            1h:23m:27s:437ms
10    0.003821698            1h:23m:30s:938ms

Min      0.003821698
Average  0.006618262
Max      0.009740355

33 … continue

Size of Population: 60; Number of Generations: 100

Run   Avg. Estimated Error   Total Time
1     0.003090779            3h:17m:21s:94ms
2     0.003003593            3h:14m:42s:203ms
3     0.003003593            3h:14m:42s:250ms
4     0.003003593            3h:16m:31s:188ms
5     0.003003593            3h:18m:55s:156ms
6     0.003003593            3h:10m:8s:625ms
7     0.002890637            3h:8m:29s:797ms
8     0.002890637            3h:21m:7s:125ms
9     0.002890637            3h:18m:13s:125ms
10    0.002890637            3h:15m:50s:968ms

Min      0.002890637
Average  0.002967128
Max      0.003090779

34 … continue

Summaries from experiments

Avg. Estimation Error   Size of Population   Number of Generations   Mutation Rate (%)   Initial Temperature   Alpha   Frozen Value
0.002967128             60                   300                     10                  100                   0.7     0.00001
0.005444614             40                   150                     10                  100                   0.7     0.00001
0.005466245             30                   100                     10                  100                   0.7     0.00001
0.006210197             60                   300                     1                   100                   0.7     0.00001
0.006618262             50                   200                     10                  100                   0.7     0.00001
0.006621554             40                   150                     1                   100                   0.7     0.00001
0.007580152             50                   200                     1                   100                   0.7     0.00001
0.008184308             30                   100                     1                   100                   0.7     0.00001

Min. Estimation Error   Size of Population   Number of Generations   Mutation Rate (%)   Initial Temperature   Alpha   Frozen Value
0.002967128             60                   300                     10                  100                   0.7     0.00001
0.005444614             40                   150                     10                  100                   0.7     0.00001
0.005466245             30                   100                     10                  100                   0.7     0.00001
0.006210197             60                   300                     1                   100                   0.7     0.00001
0.006618262             50                   200                     10                  100                   0.7     0.00001
0.006621554             40                   150                     1                   100                   0.7     0.00001
0.007580152             50                   200                     1                   100                   0.7     0.00001
0.008184308             30                   100                     1                   100                   0.7     0.00001

35 … continue

Summaries from experiments (continued)

Max. Estimation Error   Size of Population   Number of Generations   Mutation Rate (%)   Initial Temperature   Alpha   Frozen Value
0.003090779             60                   300                     10                  100                   0.7     0.00001
0.007768472             40                   150                     10                  100                   0.7     0.00001
0.00905076              30                   100                     10                  100                   0.7     0.00001
0.009636727             50                   200                     1                   100                   0.7     0.00001
0.009740355             50                   200                     10                  100                   0.7     0.00001
0.011162399             30                   100                     1                   100                   0.7     0.00001
0.013658080             60                   300                     1                   100                   0.7     0.00001
0.018120004             40                   150                     1                   100                   0.7     0.00001

36 … continue

For comparison, we took the result from Chen et al. [1].

Best chromosome:
0.010  0.071  0.343  0.465  0.505  0.303  0.495  0.081  0.778  0.717  0.303  0.869  0.869  0.828  0.434

Below is the result with size of population: 60; number of generations: 300; crossover rate: 1.0; and mutation rate: 0.2.

37 EMP-ID   Degree     Experience   Salary   Salary (Estimated)   Estimated Error
S1       Ph.D.      7.2          63,000   61,515.00            -0.024
S2       Master     2.0          37,000   36,967.44            -0.001
S3       Bachelor   7.0          40,000   40,634.14            0.016
S4       Ph.D.      1.2          47,000   46,873.66            -0.003
S5       Master     7.5          53,000   56,134.37            0.059
S6       Bachelor   1.5          26,000   26,146.40            0.006
S7       Bachelor   2.3          29,000   27,822.08            -0.041
S8       Ph.D.      2.0          50,000   50,067.20            0.001
S9       Ph.D.      3.8          54,000   53,958.94            -0.001
S10      Bachelor   3.5          35,000   35,152.00            0.004
S11      Master     3.5          40,000   40,206.19            0.005
S12      Master     3.6          41,000   40,796.57            -0.005
S13      Master     10.0         68,000   68,495.74            0.007
S14      Ph.D.      5.0          57,000   56,240.72            -0.013
S15      Bachelor   5.0          36,000   34,277.54            -0.048
S16      Master     6.2          50,000   49,834.85            -0.003
S17      Bachelor   0.5          23,000   23,722.40            0.031
S18      Master     7.2          55,000   51,950.6             -0.055
S19      Master     6.5          51,000   51,197.58            0.004
S20      Ph.D.      7.8          65,000   64,813.75            -0.003
S21      Master     8.1          64,000   60,853.28            -0.049
S22      Ph.D.      8.5          70,000   69,065.83            -0.013

Average Estimated Error: 0.018

38 … continue

For other runs, we obtained the following average estimated errors for different parameters of the GA (Chen et al. [1]):

Size of Population   Number of Generations   Crossover Rate   Mutation Rate   Average Estimated Error
30                   100                     1.0              0.1             0.036
40                   150                     1.0              0.1             0.032
50                   200                     1.0              0.2             0.027
60                   300                     1.0              0.2             0.018

39 … continue

Example of a result from one of the above (from this research, using GSA):

Size of Population: 60
Number of Generations: 300
Mutation Rate (%): 10
Initial Temperature: 100
Alpha: 0.7
Frozen Value: 1E-5
Index of Null Values: 21

Best Chromosome:

Gene-1   Gene-2   Gene-3   Gene-4   Gene-5   Gene-6   Gene-7   Gene-8
0.719    0.995    0.989    0.485    0.095    0.896    0.277    0.416

Gene-9   Gene-10  Gene-11  Gene-12  Gene-13  Gene-14  Gene-15
0.085    0.997    0.183    0.583    0.350    0.652    0.241

40 … continue
An example result from one of the experiments above (continued):

Emp. ID  Degree    Experience  Salary  Salary (Estimated)  Estimated Error
1        Ph.D.     7.2         63,000  62,889.86           -0.0017482
2        Master    2.0         37,000  36,847.97           -0.0041090
3        Bachelor  7.0         40,000  40,128.33            0.0032082
4        Ph.D.     1.2         47,000  46,538.60           -0.0098170
5        Master    7.5         53,000  52,978.58           -0.0004042
6        Bachelor  1.5         26,000  25,970.00           -0.0011540
7        Bachelor  2.3         29,000  28,967.01           -0.0011375
8        Ph.D.     2.0         50,000  50,341.15            0.0068230
9        Ph.D.     3.8         54,000  53,836.28           -0.0030319
10       Bachelor  3.5         35,000  35,060.59            0.0017310
11       Master    3.5         40,000  39,876.06           -0.0030986
12       Master    3.6         41,000  40,875.72           -0.0030312
13       Master    10.0        68,000  68,087.03            0.0012798
14       Ph.D.     5.0         57,000  56,731.71           -0.0047068
15       Bachelor  5.0         36,000  36,051.19            0.0014219
16       Master    6.2         50,000  49,936.01           -0.0012798
17       Bachelor  0.5         23,000  22,940.28           -0.0025966
18       Master    7.2         55,000  54,966.66           -0.0006062
19       Master    6.5         51,000  50,945.03           -0.0010778
20       Ph.D.     7.8         65,000  64,938.81           -0.0009414
21       Master    8.1         64,000  63,933.65           -0.0010367
22       Ph.D.     8.5         70,000  70,654.72            0.0093531

Avg Estimated Error: 0.002890636946022
Time Elapsed: 3h:15m:50s:968ms

These results indicate that the proposed method outperforms the method proposed by C.M. Huang [1].
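The Estimated Error column appears to be the relative error (estimated − actual) / actual, and the reported average appears to be the mean of the absolute relative errors. A quick sketch checking this reading against the table (values copied from the rows above):

```python
# Error convention apparently used in the table above:
#   error_i = (estimated_salary_i - actual_salary_i) / actual_salary_i
# and the reported "Avg Estimated Error" is the mean of |error_i| over 22 records.
def relative_error(estimated, actual):
    return (estimated - actual) / actual

# Row 1 from the table: Ph.D., 7.2 years, actual 63,000, estimated 62,889.86
e1 = relative_error(62889.86, 63000)  # ~= -0.0017482, matching the table

# All 22 error values copied from the table's last column
errors = [-0.0017482, -0.0041090, 0.0032082, -0.0098170, -0.0004042,
          -0.0011540, -0.0011375, 0.0068230, -0.0030319, 0.0017310,
          -0.0030986, -0.0030312, 0.0012798, -0.0047068, 0.0014219,
          -0.0012798, -0.0025966, -0.0006062, -0.0010778, -0.0009414,
          -0.0010367, 0.0093531]
avg_abs = sum(abs(e) for e in errors) / len(errors)  # ~= 0.0028906, matching the slide
```

The mean of the absolute column values reproduces the reported average to within rounding of the printed digits, which supports this interpretation of the column.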

41 GSA to Estimate Null Values in Generating Weighted Fuzzy Rules from Relational Database Systems, and Its Estimation of Multiple Null Values

Part 2: Estimating Problems on Multiple Null Values
In Part 1, we dealt only with how to perform the estimation for a single null value.
In this part, we try to estimate many null values at once.
Recalling the procedure CountCloseness described in Part 1, consider the fragment below:

…
{Then we find the Estimated Salary and Error for every record.}
{If this record was a null value, we must find}
{another record whose closeness is closest to 1.}
if IsNullValue(i) and IsNullValue(IdxClosestCloseness) then
begin
  PreferIdx := GetPreferIdx;
  ET_i.Salary := T_i.Salary × GetClosenessValue(PreferIdx);
  if T_PreferIdx.Salary <> 0 then
    Error_i := …
end
else
begin
  ET_i.Salary := T_i.Salary × GetClosenessValue(IdxClosestCloseness);
  if T_i.Salary <> 0 then
    Error_i := …
end;
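The branch logic of that fragment can be rendered in Python as a sketch. Assumptions (not confirmed by the slides): `closeness[j]` plays the role of `GetClosenessValue(j)`, a multiplicative factor applied to a base salary; `prefer_idx` stands in for `GetPreferIdx` (a non-null record whose closeness is nearest to 1); and, since the `Error_i` formulas were lost in the transcript, plain relative error is used as a placeholder.

```python
# Sketch of the CountCloseness null-handling branch above (hypothetical
# helpers; not the thesis implementation).
def estimate_salary(i, closest_idx, prefer_idx, salary, closeness, is_null):
    if is_null[i] and is_null[closest_idx]:
        # Both record i and its closest-closeness record are null:
        # fall back to the preferred (non-null) record's closeness value.
        est = salary[i] * closeness[prefer_idx]
        ref = salary[prefer_idx]
    else:
        est = salary[i] * closeness[closest_idx]
        ref = salary[i]
    # Placeholder for the elided Error_i formula: relative error,
    # guarded against division by zero as in the pseudocode.
    error = (est - ref) / ref if ref != 0 else 0.0
    return est, error
```

For example, with `salary = {0: 40000.0, 1: 50000.0}`, `closeness = {0: 1.02, 1: 0.98}` and no nulls, `estimate_salary(0, 1, 1, ...)` yields an estimate of 40000 × 0.98 = 39,200.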

42 … continue
Here is a table in which many values are null; with GSA, we try to obtain these values from the other values, using the formulas described above.

EMP-ID  Degree    Experience  Salary
S1      Ph.D.     7.2         63,000
S2      Master    2.0         NULL
S3      Bachelor  7.0         40,000
S4      Ph.D.     1.2         47,000
S5      Master    7.5         NULL
S6      Bachelor  1.5         26,000
S7      Bachelor  2.3         29,000
S8      Ph.D.     2.0         50,000
S9      Ph.D.     3.8         54,000
S10     Bachelor  3.5         35,000
S11     Master    3.5         NULL
S12     Master    3.6         41,000
S13     Master    10.0        NULL
S14     Ph.D.     5.0         57,000
S15     Bachelor  5.0         36,000
S16     Master    6.2         50,000
S17     Bachelor  0.5         23,000
S18     Master    7.2         55,000
S19     Master    6.5         51,000
S20     Ph.D.     7.8         65,000
S21     Master    8.1         64,000
S22     Ph.D.     8.5         NULL
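The "Index of Null Values" settings used in the experiments that follow are 0-based row indices into this table. A small sketch (using a hypothetical in-memory list rather than an actual relational database) that recovers the indices of the NULL rows:

```python
# The Salary column from the table above, with None standing in for NULL.
# Hypothetical in-memory stand-in for the relational table.
salaries = [63000, None, 40000, 47000, None, 26000, 29000, 50000, 54000,
            35000, None, 41000, None, 57000, 36000, 50000, 23000, 55000,
            51000, 65000, 64000, None]

# 0-based indices of the NULL rows -- the values that "Index of Null Values"
# in the experiments below refers to.
null_indices = [i for i, s in enumerate(salaries) if s is None]
print(null_indices)  # [1, 4, 10, 12, 21]
```

Note that index 21 corresponds to record S22, consistent with "Index of Null Values: 21" in the example on slide 39.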

43 … continue
Because there is a checking process with regard to null values, we can choose one or many null values to estimate. This check is performed in the function GetPreferIdx, as described previously.
As a boundary condition, at least one value in the SALARY column/field must be non-null in order to estimate the others.

Experiments
We ran this program with different parameter settings, 10 times for each setting.
We obtained the results below.

Experiments type 1
Size of Population = 60
Number of Generations = 300
Mutation Rate = 0.01 = 1%
Initial Temperature = 100
Alpha = 0.7
Frozen Value = 0.00001
Index of Null Values = 0 (i.e., the 1st row/tuple in the relational database)

44 … continue
Index of Null Values = 0 (i.e., the 1st row/tuple in the relational database)

Running #  Avg. Estimated Error  Total Time
1          0.009122101           0h:22m:56s:46ms
2          0.009122101           0h:22m:47s:188ms
3          0.005950415           0h:22m:43s:484ms
4          0.005950415           0h:22m:45s:500ms
5          0.005608084           0h:22m:40s:922ms
6          0.005297164           0h:22m:39s:219ms
7          0.005297164           0h:22m:42s:437ms
8          0.0035237             0h:23m:13s:922ms
9          0.0035237             0h:22m:47s:47ms
10         0.0035237             0h:22m:46s:281ms
Min        0.0035237             0h:22m:39s:219ms
Average    0.005691855           0h:22m:48s:205ms
Max        0.009122101           0h:23m:13s:922ms

45 … continue
Index of Null Values = 0,1 (i.e., the 1st and 2nd rows/tuples in the relational database)

Running #  Avg. Estimated Error  Total Time
1          0.005932516           0h:23m:0s:906ms
2          0.004748638           0h:23m:1s:281ms
3          0.004748638           0h:22m:59s:954ms
4          0.004376414           0h:22m:56s:937ms
5          0.004307416           0h:22m:53s:797ms
6          0.004307416           0h:22m:46s:844ms
7          0.002597917           0h:22m:52s:484ms
8          0.002543098           0h:22m:53s:688ms
9          0.002543098           0h:22m:53s:406ms
10         0.002036744           0h:22m:56s:109ms
Min        0.002036744           0h:22m:46s:844ms
Average    0.003814189           0h:22m:55s:541ms
Max        0.005932516           0h:23m:1s:281ms

46 … continue
Index of Null Values = 0,1,2

Running #  Avg. Estimated Error  Total Time
1          0.009309616           0h:23m:17s:62ms
2          0.009309616           0h:23m:10s:281ms
3          0.003133243           0h:23m:4s:672ms
4          0.003133243           0h:23m:9s:141ms
5          0.003133243           0h:23m:5s:422ms
6          0.003053508           0h:23m:2s:937ms
7          0.003053508           0h:23m:7s:172ms
8          0.002865291           0h:23m:11s:47ms
9          0.002865291           0h:23m:11s:422ms
10         0.002645873           0h:23m:12s:468ms
Min        0.002645873           0h:23m:2s:937ms
Average    0.004250243           0h:23m:9s:162ms
Max        0.009309616           0h:23m:17s:62ms

47 … continue
Index of Null Values = 0,1,2,3

Running #  Avg. Estimated Error  Total Time
1          0.002757452           0h:23m:27s:406ms
2          0.002034423           0h:23m:22s:313ms
3          0.001884244           0h:23m:20s:453ms
4          0.001884244           0h:23m:20s:469ms
5          0.001884244           0h:23m:19s:687ms
6          0.001884244           0h:23m:18s:360ms
7          0.001884244           0h:23m:20s:62ms
8          0.001884244           0h:23m:22s:875ms
9          0.001884244           0h:23m:22s:563ms
10         0.001884244           0h:23m:24s:218ms
Min        0.001884244           0h:23m:18s:360ms
Average    0.001986583           0h:23m:21s:841ms
Max        0.002757452           0h:23m:27s:406ms

48 … continue
Index of Null Values = 0,1,2,3,4

Running #  Avg. Estimated Error  Total Time
1          0.004815809           0h:23m:45s:500ms
2          0.003993718           0h:23m:46s:922ms
3          0.002474414           0h:23m:36s:156ms
4          0.002474414           0h:23m:39s:219ms
5          0.002474414           0h:23m:35s:843ms
6          0.002474414           0h:23m:33s:891ms
7          0.002474414           0h:23m:40s:875ms
8          0.002474414           0h:23m:40s:656ms
9          0.002436838           0h:24m:9s:657ms
10         0.002436838           0h:24m:42s:156ms
Min        0.002436838           0h:23m:33s:891ms
Average    0.002852969           0h:23m:49s:88ms
Max        0.004815809           0h:24m:42s:156ms

49 … continue
Index of Null Values = 0,1,2,3,4,5

Running #  Avg. Estimated Error  Total Time
1          0.00963128            0h:25m:0s:578ms
2          0.008187739           0h:24m:55s:828ms
3          0.008187739           0h:24m:52s:125ms
4          0.008187739           0h:24m:46s:94ms
5          0.008187739           0h:24m:48s:766ms
6          0.007679531           0h:24m:48s:109ms
7          0.007307143           0h:24m:47s:531ms
8          0.007294954           0h:24m:52s:610ms
9          0.005584063           0h:24m:57s:78ms
10         0.005584063           0h:24m:52s:703ms
Min        0.005584063           0h:24m:46s:94ms
Average    0.007583199           0h:24m:52s:142ms
Max        0.00963128            0h:25m:0s:578ms

50 … continue
Index of Null Values = 0,1,2,3,4,5,6

Running #  Avg. Estimated Error  Total Time
1          0.012435219           0h:26m:0s:546ms
2          0.012435219           0h:25m:45s:125ms
3          0.012435219           0h:25m:49s:547ms
4          0.012435219           0h:25m:54s:32ms
5          0.007584039           0h:25m:53s:890ms
6          0.007584039           0h:25m:54s:391ms
7          0.007584039           0h:25m:50s:312ms
8          0.007584039           0h:25m:46s:860ms
9          0.006789061           0h:25m:50s:218ms
10         0.006789061           0h:25m:56s:891ms
Min        0.006789061           0h:25m:45s:125ms
Average    0.009365515           0h:25m:52s:181ms
Max        0.012435219           0h:26m:0s:546ms

51 … continue
Index of Null Values = 0,1,2,3,4,5,6,7

Running #  Avg. Estimated Error  Total Time
1          0.010822578           0h:27m:12s:218ms
2          0.009297596           0h:27m:18s:485ms
3          0.009297596           0h:27m:1s:312ms
4          0.009297596           0h:27m:2s:63ms
5          0.009297596           0h:27m:44s:31ms
6          0.009297596           0h:28m:26s:109ms
7          0.009297596           0h:29m:48s:407ms
8          0.009297596           0h:28m:58s:812ms
9          0.008205089           0h:27m:18s:797ms
10         0.008205089           0h:38m:45s:266ms
Min        0.008205089           0h:27m:1s:312ms
Average    0.009231593           0h:28m:57s:550ms
Max        0.010822578           0h:38m:45s:266ms

52 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8

Running #  Avg. Estimated Error  Total Time
1          0.00732093            0h:49m:53s:547ms
2          0.00732093            0h:37m:49s:813ms
3          0.006890407           0h:30m:46s:46ms
4          0.0061114             0h:30m:18s:938ms
5          0.0061114             0h:30m:1s:0ms
6          0.0061114             0h:27m:1s:203ms
7          0.005756958           0h:27m:3s:625ms
8          0.005756958           0h:27m:6s:109ms
9          0.005267458           0h:27m:0s:422ms
10         0.005267458           0h:27m:8s:94ms
Min        0.005267458           0h:27m:0s:422ms
Average    0.00619153            0h:31m:24s:880ms
Max        0.00732093            0h:49m:53s:547ms

53 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9

Running #  Avg. Estimated Error  Total Time
1          0.019225473           0h:28m:36s:500ms
2          0.012163566           0h:28m:31s:531ms
3          0.012163566           0h:28m:38s:235ms
4          0.011188971           0h:28m:29s:390ms
5          0.010730344           0h:28m:23s:313ms
6          0.010730344           0h:28m:41s:312ms
7          0.010730344           0h:28m:34s:188ms
8          0.010730344           0h:28m:39s:15ms
9          0.010730344           0h:28m:26s:672ms
10         0.00988304            0h:28m:34s:860ms
Min        0.00988304            0h:28m:23s:313ms
Average    0.011827634           0h:28m:33s:502ms
Max        0.019225473           0h:28m:41s:312ms

54 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10

Running #  Avg. Estimated Error  Total Time
1          0.014521021           0h:29m:28s:156ms
2          0.014521021           0h:29m:21s:922ms
3          0.014521021           0h:32m:17s:750ms
4          0.014521021           0h:33m:6s:860ms
5          0.006890925           0h:31m:44s:203ms
6          0.006890925           0h:32m:17s:797ms
7          0.006793239           0h:32m:21s:360ms
8          0.006793239           0h:30m:7s:343ms
9          0.00676117            0h:30m:17s:438ms
10         0.006020978           0h:30m:0s:844ms
Min        0.006020978           0h:29m:21s:922ms
Average    0.009823456           0h:31m:6s:367ms
Max        0.014521021           0h:33m:6s:860ms

55 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10,11

Running #  Avg. Estimated Error  Total Time
1          0.021770419           0h:34m:5s:765ms
2          0.019711743           0h:33m:33s:797ms
3          0.015642323           0h:33m:43s:875ms
4          0.015642323           0h:33m:41s:16ms
5          0.015249024           0h:33m:27s:250ms
6          0.014037324           0h:33m:30s:531ms
7          0.012811843           0h:33m:47s:187ms
8          0.012811843           0h:33m:31s:641ms
9          0.012811843           0h:33m:35s:844ms
10         0.012811843           0h:33m:34s:437ms
Min        0.012811843           0h:33m:27s:250ms
Average    0.015330053           0h:33m:39s:134ms
Max        0.021770419           0h:34m:5s:765ms

56 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10,11,12

Running #  Avg. Estimated Error  Total Time
1          0.015772303           0h:36m:9s:94ms
2          0.015772303           0h:35m:53s:781ms
3          0.011702639           0h:36m:5s:407ms
4          0.011702639           0h:38m:38s:171ms
5          0.011702639           0h:41m:51s:375ms
6          0.011702639           0h:40m:25s:672ms
7          0.005224971           0h:36m:7s:860ms
8          0.005224971           0h:41m:35s:828ms
9          0.004756509           0h:42m:24s:359ms
10         0.004345932           0h:43m:48s:906ms
Min        0.004345932           0h:35m:53s:781ms
Average    0.009790754           0h:39m:18s:45ms
Max        0.015772303           0h:43m:48s:906ms

57 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10,11,12,13

Running #  Avg. Estimated Error  Total Time
1          0.019457373           0h:42m:1s:281ms
2          0.019457373           0h:37m:38s:547ms
3          0.01705593            0h:37m:34s:860ms
4          0.012688683           0h:37m:40s:718ms
5          0.012458219           0h:37m:28s:47ms
6          0.012458219           0h:37m:14s:188ms
7          0.012458219           0h:37m:41s:156ms
8          0.010389006           0h:37m:32s:953ms
9          0.009943857           0h:37m:36s:641ms
10         0.009713394           0h:37m:25s:422ms
Min        0.009713394           0h:37m:14s:188ms
Average    0.013608027           0h:37m:59s:381ms
Max        0.019457373           0h:42m:1s:281ms

58 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14

Running #  Avg. Estimated Error  Total Time
1          0.042230748           0h:40m:39s:125ms
2          0.026024609           0h:40m:49s:750ms
3          0.026024609           0h:40m:46s:688ms
4          0.026024609           0h:43m:11s:781ms
5          0.022683928           0h:42m:56s:94ms
6          0.022683928           0h:42m:14s:922ms
7          0.022683928           0h:40m:8s:671ms
8          0.022683928           0h:41m:11s:329ms
9          0.022683928           0h:48m:1s:203ms
10         0.022683928           0h:48m:29s:15ms
Min        0.022683928           0h:40m:8s:671ms
Average    0.025640814           0h:42m:50s:858ms
Max        0.042230748           0h:48m:29s:15ms

59 … continue
Index of Null Values = 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15

Running #  Avg. Estimated Error  Total Time
1          0.02546262            0h:55m:2s:62ms
2          0.010683394           0h:52m:13s:797ms
3          0.010683394           0h:42m:11s:531ms
4          0.009199557           0h:42m:28s:141ms
5          0.009199557           0h:42m:32s:344ms
6          0.009199557           0h:43m:50s:797ms
7          0.009199557           0h:45m:44s:468ms
8          0.009199557           0h:48m:57s:360ms
9          0.009199557           0h:54m:33s:250ms
10         0.009199557           0h:50m:24s:422ms
Min        0.009199557           0h:42m:11s:531ms
Average    0.011122631           0h:47m:47s:817ms
Max        0.02546262            0h:55m:2s:62ms

60 … continue
Summary of the experiments:

# Null Values  Avg. Estimated Error  Total Time
1              0.005691855           0h:22m:48s:205ms
2              0.003814189           0h:22m:55s:541ms
3              0.004250243           0h:23m:9s:162ms
4              0.001986583           0h:23m:21s:841ms
5              0.002852969           0h:23m:49s:88ms
6              0.007583199           0h:24m:52s:142ms
7              0.009365515           0h:25m:52s:181ms
8              0.009231593           0h:28m:57s:550ms
9              0.00619153            0h:31m:24s:880ms
10             0.011827634           0h:28m:33s:502ms
11             0.009823456           0h:31m:6s:367ms
12             0.015330053           0h:33m:39s:134ms
13             0.009790754           0h:39m:18s:45ms
14             0.013608027           0h:37m:59s:381ms
15             0.025640814           0h:42m:50s:858ms
16             0.011122631           0h:47m:47s:817ms

61 Conclusion
SA generates a single sequence of solutions and searches for an optimum solution along this search path.
At each step, SA generates a candidate solution x' by changing a small fraction of the current solution x.
A key point of SA is that it accepts up-hill moves with probability e^(-Δf/T).
This allows SA to escape from local minima.
But SA cannot cover a large region of the solution space within a limited computation time, because SA is based on small moves.
GA maintains a population of solutions and uses them to search the solution space.
GA uses the crossover operator, which causes a large jump in the solution space.
GA can therefore globally search a large region of the solution space.
But GA has no explicit way to produce a sequence of small moves in the solution space.
Mutation in GA creates a single small move at a time instead of a sequence of small moves.
As a result, GA cannot exhaustively search local regions of the solution space.
GSA generates the seeds of SA sequentially; that is, the seed of an SA local search depends on the best-so-far solutions of all previous SA local searches.
This sequential approach seems to generate better child solutions.
GSA uses fewer crossover operations, since it applies crossover only when the SA local search reaches a flat surface and it is time to jump in the solution space.
GSA is proposed to overcome both the inability of SA to cover a large region of the solution space and the inability of GA to search local regions of the solution space.
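The SA acceptance rule summarized above, accept with probability min{1, e^(-Δf/T)}, can be sketched as follows. This is a minimal illustration of the Metropolis criterion, not the thesis implementation:

```python
import math
import random

def sa_accept(delta_f, temperature, rng=random.random):
    """Metropolis acceptance: always take improving moves (delta_f <= 0);
    accept up-hill moves (delta_f > 0) with probability exp(-delta_f / T)."""
    if delta_f <= 0:
        return True
    return rng() < math.exp(-delta_f / temperature)

# Improving moves are always accepted; a large up-hill move at low
# temperature is accepted only with vanishingly small probability.
assert sa_accept(-5.0, 1.0)
```

At high temperature, exp(-Δf/T) is close to 1 and most up-hill moves pass, which is what lets SA escape local minima early in the schedule; as T cools, the search becomes increasingly greedy.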

62 References
[1] S.M. Chen and C.M. Huang, "Generating weighted fuzzy rules from relational database systems for estimating null values using genetic algorithms," IEEE Transactions on Fuzzy Systems, Vol. 11, No. 4, pp. 495-506, August 2003.
[3] D.G. Burkhardt and P.P. Bonissone, "Automated fuzzy knowledge base generation and tuning," in Proc. 1992 IEEE Int. Conf. Fuzzy Systems, San Diego, CA, 1992, pp. 179-188.
[4] S.M. Chen and H.H. Chen, "Estimating null values in the distributed relational databases environment," Cybern. Syst., Vol. 31, No. 8, pp. 851-871, 2000.
[5] S.M. Chen, S.H. Lee, and C.H. Lee, "A new method for generating fuzzy rules from numerical data for handling classification problems," Appl. Art. Intell., Vol. 15, No. 7, pp. 645-664, 2001.
[6] S.M. Chen and M.S. Yeh, "Generating fuzzy rules from relational database systems for estimating null values," Cybern. Syst., Vol. 28, No. 8, pp. 695-723, 1997.
[10] D. Sirag and P. Weisser, "Toward a unified thermodynamic genetic operator," in Proc. 2nd Int. Conf. Genetic Algorithms, 1987, pp. 116-122.
[11] D. Adler, "Genetic algorithms and simulated annealing: a marriage proposal," in Proc. Int. Conf. Neural Networks, 1993, pp. 1104-1109.
[12] D. Brown, C. Huntley, and A. Spillane, "A parallel genetic heuristic for the quadratic assignment problem," in Proc. 3rd Int. Conf. Genetic Algorithms, 1989, pp. 406-415.
[13] T.-T. Lin, C.-Y. Kao, and C.-C. Hsu, "Applying the genetic approach to simulated annealing in solving some NP-hard problems," IEEE Trans. Systems, Man, and Cybernetics, Vol. 23, No. 6, pp. 1752-1767, 1993.
[14] S. Koakutsu, Y. Sugai, and H. Hirata, "Block placement by improved simulated annealing based on genetic algorithm," Trans. of the Institute of Electronics, Information and Communication Engineers of Japan, Vol. J73-A, No. 1, pp. 87-94, 1990.
[15] S. Koakutsu, Y. Sugai, and H. Hirata, "Floorplanning by improved simulated annealing based on genetic algorithm," Trans. of the Institute of Electrical Engineers of Japan, Vol. 112-C, No. 7, pp. 411-416, 1992.
[20] S. Koakutsu, M. Kang, and W. W.-M. Dai, "Genetic simulated annealing and application to non-slicing floorplan design," in Proc. 5th ACM/SIGDA Physical Design Workshop, Virginia, USA, April 1996, pp. 134-141.

