The Farmer, Wolf, Duck, Corn Problem (also known as: Farmer, Wolf, Goat, Cabbage; Farmer, Fox, Chicken, Corn; Farmer, Dog, Rabbit, Lettuce). A farmer with his wolf, duck, and bag of corn comes to the east side of a river they wish to cross. There is a boat at the river's edge, but of course only the farmer can row. The boat can hold at most two things (including the rower) at any one time. If the wolf is ever left alone with the duck, the wolf will eat it. Similarly, if the duck is ever left alone with the corn, the duck will eat it. How can the farmer get across the river so that all four arrive safely on the other side? The Farmer, Wolf, Duck, Corn problem dates back to the eighth century and the writings of Alcuin, a poet, educator, cleric, and friend of Charlemagne.

We write a state by listing who is on each bank. "FWDC | " means that everybody/everything is on the same side of the river; "FDC | W" means that we somehow got the wolf to the other side.

[Figure: search tree for "Farmer, Wolf, Duck, Corn", one level deep. The root FWDC| expands to WDC|F, DC|FW, WC|FD, and WD|FC; all but WC|FD are illegal states.]

[Figure: the same search tree grown one more level. From the only legal child, WC|FD, the farmer can return alone, giving FWC|D, or row the duck back, giving FWDC| again, a repeated state.]

[Figure: the full search tree for "Farmer, Wolf, Duck, Corn", grown until the goal state (everyone on the far bank) appears, with illegal and repeated states marked along the way.]

The solution path, reading the moves in order from the initial state FWDC| :
1. Farmer takes duck to left bank (WC|FD).
2. Farmer returns alone (FWC|D).
3. Farmer takes wolf to left bank (C|FWD).
4. Farmer returns with duck (FDC|W).
5. Farmer takes corn to left bank (D|FWC).
6. Farmer returns alone (FD|WC).
7. Farmer takes duck to left bank ( |FWDC). Success!

Problem Solving using Search. A problem space consists of: the current state of the world (the initial state); a description of the actions we can take to transform one state of the world into another (operators); and a description of the desired state of the world (the goal state), which may be implicit or explicit. A solution consists of the goal state itself*, or a path to the goal state. (*Problems where the path does not matter are known as "constraint satisfaction" problems.)
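To make this concrete, here is a minimal Python sketch of a problem space for the Farmer, Wolf, Duck, Corn puzzle. The encoding (a state as the set of who is on the east bank) and all the function names are illustrative assumptions, not the only possible choices:

EVERYONE = frozenset("FWDC")
initial_state = EVERYONE              # everyone starts on the east bank

def is_goal(state):
    return not state                  # goal: the east bank is empty

def is_legal(state):
    # A bank is unsafe if the farmer is absent while wolf+duck
    # or duck+corn share it.
    for bank in (state, EVERYONE - state):
        if "F" not in bank and ({"W", "D"} <= bank or {"D", "C"} <= bank):
            return False
    return True

def successors(state):
    # Operators: the farmer crosses alone ("") or with one passenger.
    bank_with_farmer = state if "F" in state else EVERYONE - state
    for passenger in {""} | (bank_with_farmer - {"F"}):
        moved = {"F"} | set(passenger)
        new_state = state - moved if "F" in state else state | moved
        if is_legal(new_state):
            yield new_state

Breadth first search over successors, starting from initial_state, finds the seven-crossing solution shown above.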

Example formulations (initial state; goal state; operators):
- 8-puzzle: a scrambled board; the goal configuration; slide blank square left, slide blank square right, ...
- FWDC: everyone on the east bank; everyone on the west bank; Move F, Move F with W, ...
- Algebraic simplification: the starting expression; the simplified expression; distributive property, associative property, ...
- 4-Queens: an empty board; a 4 by 4 chessboard with 4 queens placed on it such that none are attacking each other; add a queen such that it does not attack other, previously placed queens.

Representing the states. A state representation should describe everything that is needed to solve the problem, and nothing that is not needed. In general, many representations are possible, and choosing a good one will make solving the problem much easier. For the 8-puzzle we could use:
- A 3 by 3 array: 5, 6, 7 / 8, 4, BLANK / 3, 1, 2
- A vector of length nine: 5, 6, 7, 8, 4, BLANK, 3, 1, 2
- A list of facts: Upper_left = 5, Upper_middle = 6, Upper_right = 7, Middle_left = 8, ...
Choose the representation that makes the operators easiest to implement.

Operators I. Operators are single atomic actions that can transform one state into another. You must specify an exhaustive list of operators, otherwise the problem may be unsolvable. An operator consists of a precondition (a description of any conditions that must be true before the operator can be used) and instructions for how the operator changes the state. In general, for any given state, not all operators are applicable. Examples: in FWDC, the operator Move_Farmer_Left is not possible if the farmer is already on the left bank. In the 8-puzzle, the operator Move_6_down may be possible while the operator Move_7_down is not.

Operators II. There are often many ways to specify the operators, and some will be much easier to implement. For the eight puzzle we could have: Move 1 left, Move 1 right, Move 1 up, Move 1 down, Move 2 left, Move 2 right, Move 2 up, Move 2 down, Move 3 left, Move 3 right, ... and so on for every tile. Or, equivalently: Move Blank left, Move Blank right, Move Blank up, Move Blank down.

A complete example: The Water Jug Problem. Two jugs of capacity 4 and 3 units. It is possible to empty a jug, fill a jug, or transfer the contents of one jug to the other until the former empties or the latter fills. Task: produce a jug containing exactly 2 units. (A farm hand was sent to a nearby pond to fetch 2 gallons of water. He was given two pails, one 4 gallons and the other 3. How can he measure the requested amount of water?) Abstract away unimportant details. Define a state representation: (X, Y), where X is the content of the 4-unit jug and Y is the content of the 3-unit jug. Define an initial state: (0, 0). Define the goal state(s), which may be a description rather than an explicit state: (2, n). Define all operators, for example:
Fill 3-jug from faucet: (a, b) → (a, 3)
Fill 4-jug from faucet: (a, b) → (4, b)
Fill 4-jug from 3-jug: (a, b) → (a + b, 0) ...
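As a sketch of how such a formulation is solved mechanically, the following Python snippet (all names invented for illustration) runs a breadth-first search over (X, Y) states using the operators above:

from collections import deque

def water_jug_bfs(goal=2, cap4=4, cap3=3):
    """Return a shortest list of (x, y) states from (0, 0) to x == goal."""
    start = (0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if x == goal:                 # the 4-unit jug holds the goal amount
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        children = [
            (cap4, y), (x, cap3),     # fill a jug from the faucet
            (0, y), (x, 0),           # empty a jug
            (max(0, x - (cap3 - y)), min(cap3, x + y)),  # pour 4-jug into 3-jug
            (min(cap4, x + y), max(0, y - (cap4 - x))),  # pour 3-jug into 4-jug
        ]
        for child in children:
            if child not in parent:
                parent[child] = (x, y)
                frontier.append(child)
    return None

A shortest solution has six pours, e.g. (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).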

Once we have defined the problem space (state representation, initial state, goal state, and operators), are we done? We start with the initial state and keep applying operators to expand nodes until we find a goal state. But the search space might be large... really large... So we need some systematic way to search it.

A Generic Search Tree. The average number of new nodes we create when expanding a node is the (effective) branching factor b. The length of a path to a goal is the depth d. The tree has b nodes at depth 1, b^2 at depth 2, and so on, so visiting every node in the search tree down to depth d takes O(b^d) time, though not necessarily O(b^d) space. The fringe (frontier) is the set of nonterminal nodes without children, i.e., the nodes waiting to be expanded.

Branching factors for some problems. The eight puzzle has a branching factor of 2.13, so a search tree at depth 20 has about 3.7 million nodes (note that there are only 181,440 different states). Rubik's cube has a much larger branching factor; there are 901,083,404,981,813,616 different states, and the average depth of a solution is about 18. The best time for solving the cube in an official championship was achieved by Robert Pergl in the 1983 Czechoslovakian Championship; in 1997 the best AI computer programs took weeks (see Korf, UCLA). Chess has a branching factor of about 35, and its number of states is astronomical (it is often compared with the number of electrons in the universe).

Detecting repeated states is hard….

We are going to consider different techniques to search the problem space, so we need criteria for comparing them. Completeness: is the technique guaranteed to find an answer (if there is one)? Optimality: is the technique guaranteed to find the best answer (if there is more than one)? (Operators can have different costs.) Time complexity: how long does it take to find a solution? Space complexity: how much memory does it take to find a solution?

General (Generic) Search Algorithm

function GENERAL-SEARCH(problem, QUEUEING-FUNCTION)
   nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
   loop do
      if EMPTY(nodes) then return "failure"
      node = REMOVE-FRONT(nodes)
      if problem.GOAL-TEST(node.STATE) succeeds then return node
      nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
   end

A nice fact about this search algorithm is that we can use the single algorithm to do many kinds of search. The only difference is in how the nodes are placed in the queue.
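A direct Python transcription might look like this sketch (the Problem interface with initial_state, goal_test, and successors is assumed for illustration); the two queueing functions at the end anticipate the next two slides:

def general_search(problem, queueing_fn):
    """Generic search: the queueing function alone fixes the strategy."""
    nodes = [(problem.initial_state, [problem.initial_state])]  # (state, path)
    while nodes:
        state, path = nodes.pop(0)               # REMOVE-FRONT
        if problem.goal_test(state):
            return path
        children = [(s, path + [s]) for s in problem.successors(state)]
        nodes = queueing_fn(nodes, children)     # QUEUEING-FUNCTION
    return None

def breadth_first(problem):          # FIFO: append new nodes at the back
    return general_search(problem, lambda q, new: q + new)

def depth_first(problem):            # LIFO: put new nodes at the front
    return general_search(problem, lambda q, new: new + q)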

Breadth First Search. Enqueue nodes in FIFO (first-in, first-out) order. Complete? Yes. Optimal? Yes, if the path cost is a nondecreasing function of depth. Time complexity: O(b^d). Space complexity: O(b^d); note that every node in the fringe is kept in the queue. Intuition: expand all nodes at depth i before expanding any nodes at depth i + 1.

Uniform Cost Search. Enqueue nodes in order of path cost. Complete? Yes. Optimal? Yes, provided every step has positive cost. Time complexity: O(b^d). Space complexity: O(b^d); note that every node in the fringe is kept in the queue. Intuition: expand the cheapest node, where the cost is the path cost g(n). Note that breadth first search can be seen as a special case of uniform cost search, where the path cost is just the depth.

Depth First Search. Enqueue nodes in LIFO (last-in, first-out) order. Complete? No (yes on finite trees with no loops). Optimal? No. Time complexity: O(b^m), where m is the maximum depth. Space complexity: O(bm). Intuition: expand the node at the deepest level (breaking ties left to right).

Depth-Limited Search. Enqueue nodes in LIFO (last-in, first-out) order, but limit depth to L. Complete? Yes, if there is a goal state at depth L or less. Optimal? No. Time complexity: O(b^L), where L is the cutoff. Space complexity: O(bL). Intuition: expand the node at the deepest level, but never go deeper than L (L is 2 in the slide's example). Picking the right value for L is difficult: choose it smaller than the depth of the shallowest solution (for example, too small a value for FWDC) and we will fail to find a solution.

Iterative Deepening Search I. Do depth-limited search starting at L = 0, and keep incrementing L by 1 until a solution is found. Complete? Yes. Optimal? Yes. Time complexity: O(b^d), where d is the depth of the solution. Space complexity: O(bd). Intuition: combine the optimality and completeness of breadth first search with the low space complexity of depth first search.
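The following Python sketch (same assumed Problem interface as the earlier snippets) shows how little machinery iterative deepening needs:

def depth_limited(problem, state, path, limit):
    """Depth first search that refuses to go below the given depth limit."""
    if problem.goal_test(state):
        return path
    if limit == 0:
        return None
    for child in problem.successors(state):
        result = depth_limited(problem, child, path + [child], limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening(problem, max_depth=100):
    """Run depth-limited search with L = 0, 1, 2, ... and return the first hit."""
    for limit in range(max_depth + 1):
        result = depth_limited(problem, problem.initial_state,
                               [problem.initial_state], limit)
        if result is not None:
            return result
    return None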

Iterative Deepening Search II , ,000 = 111, , , ,000 = 123,456 Consider a problem with a branching factor of 10 and a solution at depth 5... Iterative deepening looks wasteful because we reexplore parts of the search space many times...

Bi-directional Search. Intuition: start searching from both the initial state and the goal state, and meet in the middle. Complete? Yes. Optimal? Yes. Time complexity: O(b^(d/2)), where d is the depth of the solution. Space complexity: O(b^(d/2)). Notes: it is not always possible to search backwards; how do we know when the trees meet?; at least one search tree must be retained in memory.
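A sketch of bidirectional breadth first search, under the (strong) assumption that operators are reversible, so the same successors function can also search backwards from the goal. The meeting test answers the "how do we know when the trees meet?" question by checking each new child against the other tree's table:

from collections import deque

def bidirectional_search(start, goal, successors):
    """BFS from both ends at once; assumes every operator is reversible."""
    if start == goal:
        return [start]
    fwd, bwd = {start: [start]}, {goal: [goal]}   # state -> path from that end
    frontiers = [deque([start]), deque([goal])]
    while all(frontiers):
        for i, (seen, other) in enumerate([(fwd, bwd), (bwd, fwd)]):
            for _ in range(len(frontiers[i])):    # expand one full layer
                state = frontiers[i].popleft()
                for child in successors(state):
                    if child in other:            # the two search trees met
                        if i == 0:                # forward tree hit backward tree
                            return seen[state] + other[child][::-1]
                        return other[child] + seen[state][::-1]
                    if child not in seen:
                        seen[child] = seen[state] + [child]
                        frontiers[i].append(child)
    return None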

Heuristic Search. The search techniques we have seen so far (breadth first search, uniform cost search, depth first search, depth limited search, iterative deepening, bi-directional search) are all too slow for most real world problems. They are known collectively as uninformed (or blind) search.

Sometimes we can tell that some states appear better than others. [Figure: two FWDC states compared, one visibly closer to the goal than the other.]

...and we can use this knowledge of the relative merit of states to guide search. Heuristic search (informed search): a heuristic is a function that, when applied to a state, returns a number that is an estimate of the merit of the state with respect to the goal. In other words, the heuristic tells us approximately how far the state is from the goal state (i.e., smaller numbers are better). Note we said "approximately": heuristics might underestimate or overestimate the merit of a state. But, for reasons we will see, heuristics that only underestimate are very desirable, and are called admissible.

Heuristics for the 8-puzzle I: the number of misplaced tiles (not including the blank). In the slide's example, only the "8" tile is misplaced, so the heuristic function evaluates to 1. In other words, the heuristic is telling us that it thinks a solution might be available in just 1 more move. Notation: h(n); here h(current state) = 1.

Heuristics for the 8-puzzle II: the Manhattan distance (not including the blank). In the slide's example, the "3", "8", and "1" tiles are misplaced, by 2, 3, and 3 squares respectively, so the heuristic function evaluates to 2 + 3 + 3 = 8. In other words, the heuristic is telling us that it thinks a solution is available in just 8 more moves. Notation: h(n); here h(current state) = 8.
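Both heuristics are a few lines of Python. In this sketch a state is a length-9 tuple read row by row, with 0 standing for the blank (an assumed encoding):

def misplaced_tiles(state, goal):
    """Heuristic I: count the tiles (not the blank) that are out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """Heuristic II: sum of each tile's horizontal plus vertical distance
    from its goal square (the blank is not counted)."""
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total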

We can use heuristics to guide "hill climbing" search. In the slide's example, the Manhattan distance heuristic h(n) helps us quickly find a solution to the 8-puzzle. But hill climbing has a problem...

In this example, hill climbing does not work! All the nodes on the fringe take a step "backwards" according to h(n); we are stuck in a local minimum. Note that this puzzle is solvable in just 12 more steps.

We have seen two interesting algorithms. Uniform cost search measures the cost to reach each node; it is optimal and complete, but can be very slow. Hill climbing estimates how far away the goal is; it is neither optimal nor complete, but can be very fast. Can we combine them to create an optimal and complete algorithm that is also very fast?

Uniform cost search: enqueue nodes in order of cost. Intuition: expand the cheapest node, where the cost is the path cost g(n). Hill climbing search: enqueue nodes in order of estimated distance to the goal. Intuition: expand the node you think is nearest to the goal, where the estimate of distance to the goal is h(n).

The A* Algorithm ("A-Star"). Enqueue nodes in order of estimated total cost to the goal, f(n) = g(n) + h(n), where g(n) is the cost to get to a node and h(n) is the estimated distance from it to the goal. We can think of f(n) as the estimated cost of the cheapest solution that goes through node n. Note that we can use the general search algorithm from before; all that changes is the queueing strategy. If the heuristic is optimistic, that is to say it never overestimates the distance to the goal, then A* is optimal and complete!
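A compact Python sketch of A* (the successors-with-costs interface and the tie-breaking counter are implementation assumptions, not part of the algorithm's definition):

import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n).
    successors(state) yields (child, step_cost) pairs."""
    tie = count()                  # tie-breaker so states are never compared
    frontier = [(h(start), 0, next(tie), start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for child, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), new_g,
                                          next(tie), child, path + [child]))
    return None

Passing h = lambda s: 0 turns this into uniform cost search, which is exactly the degenerate case discussed two slides below.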

Informal proof outline of A* completeness: assume that every operator has some minimum positive cost, epsilon, and that a goal state exists, so that some finite sequence of operators leads to it. Expanding nodes produces paths whose actual costs increase by at least epsilon each time; since the algorithm will not terminate until it finds a goal state, it must expand a goal state in finite time. Informal proof outline of A* optimality: when A* terminates, it has found a goal state, and all remaining nodes have an estimated total cost f(n) greater than or equal to that of the goal we have found. Since the heuristic function was optimistic, the actual cost to the goal along these other paths can be no better than the cost of the one we have already found.

How fast is A*? A* is optimally efficient: for any given admissible heuristic, no other optimal search algorithm is guaranteed to expand fewer nodes. Its actual speed depends on the quality of the heuristic. If the heuristic is useless (i.e., h(n) is hardcoded to equal 0), the algorithm degenerates to uniform cost search. If the heuristic is perfect, there is no real search; we just march down the tree to the goal. Generally we are somewhere in between these two extremes, and the time taken depends on the quality of the heuristic.

What is A*'s space complexity? A* has worst-case O(b^d) space complexity, but an iterative deepening version (IDA*) is possible.

A Worked Example: Maze Traversal. [Figure: a 5-by-5 grid with rows A-E and columns 1-5, containing some obstacles (black squares).] Problem: get from square A3 to square E2, one step at a time, avoiding obstacles. Operators (in order): go_left(n), go_down(n), go_right(n); each operator costs 1. Heuristic: Manhattan distance.

Step 1: expand A3. Its children are A2 (g = 1, h = 4), B3 (g = 1, h = 4), and A4 (g = 1, h = 6).

Step 2: A2 and B3 both have f = 5. Expanding A2 adds A1 (g = 2, h = 5) to the fringe.

Step 3: expanding B3 adds C3 (g = 2, h = 3) and B4 (g = 2, h = 5) to the fringe.

Step 4: the search continues, and B1 (g = 3, h = 4) is added to the fringe.

Step 5: B5 (g = 3, h = 6) is added to the fringe, and the search continues in this fashion, always expanding the node with the lowest f = g + h, until E2 is reached.

Optimizing Search (Iterative Improvement Algorithms), e.g., hill climbing, simulated annealing, genetic algorithms. Optimizing search differs from the path-finding search we have studied in many ways. The problems are ones for which exhaustive and heuristic search are NP-hard. The path is not important, so we typically don't bother to keep a tree around; thus we are CPU bound, not memory bound. Every state is a "solution". The search space is (often) continuous. Usually we abandon hope of finding the best solution and settle for a very good one. The task is usually to find the minimum (or maximum) of a function.

Example Problem I (continuous): finding the maximum (or minimum) of some function y = f(x) within a defined range.

Example Problem II (discrete): the Traveling Salesman Problem (TSP). A salesman spends his time visiting n cities. In one tour he visits each city just once, and finishes up where he started. In what order should he visit the cities to minimize the distance traveled? There are (n-1)!/2 possible tours.

Example Problem III (continuous and/or discrete): function fitting. Depending on how the problem is set up, this could be continuous and/or discrete. The discrete part is finding the form of the function: is it x^2, or x^4, or abs(log(x)) + 75? The continuous part is finding the value of a parameter: is x = 3.1 or x = 3.2?

Assume that we can: represent a state; quickly evaluate the quality of a state; and define operators to change from one state to another. For function optimization, a state might be an assignment such as x = 2, y = 7 for y = log(x) + sin(tan(y - x)), whose quality is the value log(2) + sin(tan(7 - 2)); operators might be add_10_percent(x) or subtract_10_percent(y). For TSP, a state is a tour such as A C F K W ... Q A, whose quality is its total length (A to C = 234, C to F = 142, ..., total 10,231); an operator might swap two cities, giving A C K F W ... Q A.

Hill-Climbing I

function HILL-CLIMBING(problem) returns a solution state
   inputs: problem                  // a problem
   local variables: current, next   // nodes
   current ← MAKE-NODE(INITIAL-STATE[problem])   // make a random initial state
   loop do
      next ← a highest-valued successor of current
      if VALUE[next] < VALUE[current] then return current
      current ← next
   end

How would hill climbing do on the problems shown in the slide? How can we improve it? Random restarts! Intuition: call hill climbing as many times as you can afford, and return the best answer found, as in the sketch below.
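A sketch of hill climbing with random restarts in Python (the three function-valued parameters are assumptions standing in for a concrete problem):

def hill_climb(random_state, successors, value):
    """Greedy ascent: move to the best neighbour until none improves."""
    current = random_state()
    while True:
        neighbours = list(successors(current))
        if not neighbours:
            return current
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            return current           # a local maximum (or a plateau)
        current = best

def random_restart_hill_climb(random_state, successors, value, restarts=20):
    """Call hill climbing as many times as we can afford; keep the best answer."""
    return max((hill_climb(random_state, successors, value)
                for _ in range(restarts)), key=value)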

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
   inputs: problem                   // a problem
           schedule                  // a mapping from time to "temperature"
   local variables: current, next    // nodes
                    T                // a "temperature" controlling the probability of downward steps
   current ← MAKE-NODE(INITIAL-STATE[problem])
   for t ← 1 to ∞ do
      T ← schedule[t]
      if T = 0 then return current
      next ← a randomly selected successor of current
      ΔE ← VALUE[next] - VALUE[current]
      if ΔE > 0 then current ← next
      else current ← next only with probability e^(ΔE/T)
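The same algorithm in runnable Python (the cooling schedule in the final comment is an illustrative assumption; the slide leaves the schedule abstract):

import math
import random

def simulated_annealing(initial, random_successor, value, schedule):
    """Always accept uphill moves; accept a downhill move of size dE
    with probability e^(dE / T), where T is the current temperature."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random_successor(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Example (assumed) schedule: geometric cooling with a hard cutoff.
# schedule = lambda t: 0 if t > 100_000 else 10 * 0.999 ** t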

Genetic Algorithms I (Russell and Norvig). Natural selection rests on three observations: variation (members of the same species differ in some ways); heritability (some of that variability is inherited); and finite resources (not every individual will live to reproductive age). Given these, the basic idea of natural selection is this: some of the characteristics that vary will be advantageous to survival, so individuals with desirable traits are more likely to reproduce and have offspring with similar traits, and therefore the species evolves over time (Richard Dawkins). Since natural selection is known to have solved many important optimization problems, it is natural to ask: can we exploit its power?

Genetic Algorithms II. The basic idea of genetic algorithms (evolutionary programming): initialize a population of n states (randomly); while time allows, measure the quality of the states using some fitness function, "kill off" some of the states, and allow the surviving states to reproduce (sexually or asexually or...); finally, report the best state as the answer. All we need do is (A) figure out how to represent the states, (B) figure out a fitness function, and (C) figure out how to allow our states to reproduce, as in the sketch below.
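A skeleton in Python; the representation-specific pieces (new_individual, fitness, crossover, mutate) are deliberately left as parameters, matching points (A)-(C) above:

import random

def genetic_algorithm(new_individual, fitness, crossover, mutate,
                      pop_size=100, generations=200, mutation_rate=0.05):
    """Score the population, drop the weaker half, then refill the
    population with (possibly mutated) children of surviving parents."""
    population = [new_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[: pop_size // 2], 2)
            child = crossover(a, b)
            if random.random() < mutation_rate:
                child = mutate(child)
            survivors.append(child)
        population = survivors
    return max(population, key=fitness)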

Genetic Algorithms III. One possible representation of a state is a tree structure; for example, log(x^y) + sin(tan(y - x)) can be stored as an expression tree with + at the root, log over pow(x, y) on one side, and sin over tan over (y - x) on the other. Another representation is a bitstring. For problems where we are trying to find the best order in which to do something (such as TSP), a linked list might work.

Genetic Algorithms IV. Usually the fitness function is fairly trivial. For the function-maximizing problem, we can evaluate the given function with the state (the values for x, y, z, etc.). For the function-finding problem, we can evaluate the function and see how closely it matches the data. For TSP, the fitness function is just the length of the tour represented by the linked list.

Genetic Algorithms V. Sexual reproduction (crossover): a child is formed by splicing together pieces of parent state A and parent state B. [Figure: two crossover examples, each showing parent A, parent B, and the resulting child.]

Genetic Algorithms VI. Asexual reproduction (mutation): a child is a copy of a single parent with a small random change. For example, mutating 5 + cos(y)/x might yield 5 + tan(y)/x. [Figure: three mutation examples, each showing a parent state and its slightly altered child.]

Discussion of Genetic Algorithms. It turns out that the policy of "keep the best n individuals" is not the best idea. Genetic algorithms require many parameters (population size, fraction of the population generated by crossover, mutation rate, number of sexes, ...); how do we set them? Genetic algorithms are really just a kind of hill climbing search, but they seem to have fewer problems with local maxima. Genetic algorithms are very easy to parallelize. Applications: protein folding, circuit design, the job-shop scheduling problem, timetabling, designing wings for aircraft.
