1 Human Competitive Results of Evolutionary Computation. Presenter: Mati Bot. Course: Advanced Seminar in Algorithms (Prof. Yefim Dinitz)

2 Outline: Human-Competitiveness definition; Evolving hyper-heuristics using genetic programming: Rush Hour (bronze Humies prize, 2009, Ami Hauptman) and FreeCell (gold Humies prize, 2011, Achiya Elyasaf); Other examples.

3 What is human-competitive? John Koza defined "human-competitiveness" in his book Genetic Programming IV (2003). There are 8 criteria by which a result can be considered human-competitive (explained in the following slides). Our mission: creation of human-competitive, innovative solutions by means of evolution.

4 The 8 Criteria of Koza for Human-Competitiveness. (A) The result was patented as an invention in the past, is an improvement over a patented invention, or would qualify as a patentable new invention.

5 The 8 Criteria of Koza for Human-Competitiveness. (B) The result is equal to or better than a result that was published in a peer-reviewed scientific journal.

6 The 8 Criteria of Koza for Human-Competitiveness. (C) The result is equal to or better than a result recorded in a known database or archive of results.

7 The 8 Criteria of Koza for Human-Competitiveness. (D) The result is publishable in its own right as a new scientific result, independent of the fact that it was mechanically created.

8 The 8 Criteria of Koza for Human-Competitiveness. (E) The result is equal to or better than the best human-created solution.

9 The 8 Criteria of Koza for Human-Competitiveness. (F) The result is equal to or better than a human achievement in its field at the time it was first discovered.

10 The 8 Criteria of Koza for Human-Competitiveness. (G) The result solves a problem of indisputable difficulty in its field.

11 The 8 Criteria of Koza for Human-Competitiveness. (H) The result holds its own or wins a regulated competition involving human contestants (either live human players or human-written computer programs).

12 Humies Competition. The annual Humies competition, held at the GECCO conference, awards cash prizes for the best human-competitive results. Gold, silver, and bronze prizes are awarded to the best entries. BGU has won 1 gold, 1 silver, and 6 bronze prizes since 2005. The Humies site lists more than 75 human-competitive results. http://www.sigevo.org/gecco-2012/ http://www.genetic-programming.org/hc2011/combined.html

13 Evolving Hyper-Heuristics using Genetic Programming (Ami Hauptman and Achiya Elyasaf)

14 Overview: Introduction (searching games' state-graphs, uninformed search, heuristics, informed search); Evolving heuristics; Test cases (Rush Hour, FreeCell).

15 Representing Games as State-Graphs. Every puzzle/game can be represented as a state graph: in puzzles, board games, etc., every piece move corresponds to an edge/transition between states; in computer war games, etc., the positions of the player and the enemy together with all the parameters (health, shield, ...) define a state.
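
A minimal sketch of this idea in Python (the one-dimensional sliding-piece example and the name `successors` are illustrative, not from the slides): a state is any hashable description of the board, and the successor function implicitly defines the state graph.

```python
# A game state can be any hashable description of the board;
# the successor function implicitly defines the state graph.

def successors(state):
    """Return (move, next_state) pairs reachable in one legal move.
    Here 'state' is a toy 1-D sliding puzzle: a tuple with one blank (0)."""
    blank = state.index(0)
    result = []
    for delta in (-1, 1):                     # slide a neighbouring piece into the blank
        j = blank + delta
        if 0 <= j < len(state):
            nxt = list(state)
            nxt[blank], nxt[j] = nxt[j], nxt[blank]
            result.append((f"move piece {state[j]}", tuple(nxt)))
    return result

# Example: the edges leaving one state of a 3-cell puzzle
print(successors((1, 0, 2)))   # [('move piece 1', (0, 1, 2)), ('move piece 2', (1, 2, 0))]
```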

16 Rush Hour as a state-graph (diagram: board states linked by edges labeled with moves such as "move blue" and "move purple").

17 Searching Games' State-Graphs: Uninformed/Naive Search. BFS (breadth-first search): finds an optimal solution, but requires space exponential in the search depth. DFS (depth-first search, without node coloring): linear space, but we might "never" track down the right path, since games usually contain cycles. Iterative deepening: a combination of BFS and DFS; each iteration performs a DFS with a depth limit, and the limit grows from one iteration to the next; in the worst case the entire graph is traversed.
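
A hedged Python sketch of iterative deepening, restarting a depth-limited DFS with a growing limit; `is_goal` and `successors` are assumed callables supplied by the game domain.

```python
def depth_limited_dfs(state, limit, is_goal, successors, path=()):
    """DFS that gives up below a fixed depth; returns the move path or None."""
    if is_goal(state):
        return list(path)
    if limit == 0:
        return None
    for move, nxt in successors(state):
        found = depth_limited_dfs(nxt, limit - 1, is_goal, successors, path + (move,))
        if found is not None:
            return found
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Run DFS with limits 0, 1, 2, ... : BFS-like optimality, DFS-like memory."""
    for limit in range(max_depth + 1):
        solution = depth_limited_dfs(start, limit, is_goal, successors)
        if solution is not None:
            return solution
    return None
```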

18 Iterative Deepening

19 Searching Games' State-Graphs: Uninformed Search. Most game domains are PSPACE-complete! In the worst case the entire graph is traversed. We need an informed search (an intelligent approach to traversing the graph).

20 Searching Games' State-Graphs: Heuristics. A heuristic function h: states → R. For every state s, h(s) is an estimate of the minimal distance/cost from s to a solution. If h is perfect, an informed search that tries states with the lowest h-value first will simply stroll to a solution. For hard problems, finding a good h is hard, and a bad heuristic means the search might never track down a solution. We need a good heuristic function to guide the informed search.

21 Searching Games' State-Graphs: Informed Search. Best-first search: like DFS, but it always expands the most promising node, the one with the best heuristic value, first; not necessarily optimal.
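
A rough sketch of greedy best-first search with a priority queue. Since h(s) estimates the remaining distance (previous slide), the node with the lowest h-value is taken as the "best" one here; `h`, `is_goal`, and `successors` are assumed callables.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h):
    """Greedy best-first: always expand the open node with the lowest h-value."""
    tie = count()                                  # breaks ties so states never get compared
    open_heap = [(h(start), next(tie), start, [])]
    seen = {start}
    while open_heap:
        _, _, state, path = heapq.heappop(open_heap)
        if is_goal(state):
            return path                            # a solution, not necessarily the shortest
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(open_heap, (h(nxt), next(tie), nxt, path + [move]))
    return None
```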

22 Best-First Search (diagram: example search tree with nodes numbered 1-4 in order of expansion).

23 Searching Games' State-Graphs: Informed Search. A*: g(s) = cost from the root to s; h(s) = heuristic estimate; f(s) = g(s) + h(s). A* maintains closed and sorted open lists (the open list holds the states that still need to be examined); the best open node, i.e. the one with the lowest f(s), is selected at each step.
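
A compact A* sketch along the same lines, assuming unit move costs; `best_g` doubles as the open/closed bookkeeping, and all helper callables are assumptions.

```python
import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    """A*: expand the open node with the lowest f(s) = g(s) + h(s)."""
    tie = count()
    open_heap = [(h(start), 0, next(tie), start, [])]    # (f, g, tiebreak, state, path)
    best_g = {start: 0}                                   # cheapest known cost per state
    while open_heap:
        f, g, _, state, path = heapq.heappop(open_heap)
        if is_goal(state):
            return path                                   # optimal if h never overestimates
        for move, nxt in successors(state):
            g2 = g + 1                                    # unit move cost assumed
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, next(tie), nxt, path + [move]))
    return None
```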

24 A* (diagram: example search tree with nodes numbered in order of expansion).

25 Searching Games' State-Graphs: Informed Search (cont.). IDA*: iterative deepening with A*. The expanded nodes are pushed onto the DFS stack in descending order of heuristic value. Let g(s) be the cost to reach state s from the root: only nodes with f(s) = g(s) + h(s) < depth-limit are visited.
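
A hedged IDA* sketch: a DFS cut off by f(s) = g(s) + h(s) rather than by raw depth, with the cutoff growing to the smallest f-value that exceeded the previous limit (cycle checking is omitted for brevity).

```python
def ida_star(start, is_goal, successors, h):
    """IDA*: DFS bounded by f = g + h; the bound grows between iterations."""

    def search(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                # report the smallest f that broke the bound
        if is_goal(state):
            return f, list(path)
        next_bound = float("inf")
        for move, nxt in successors(state):
            t, found = search(nxt, g + 1, bound, path + (move,))
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        bound, solution = search(start, 0, bound, ())
        if solution is not None:
            return solution
        if bound == float("inf"):
            return None                   # exhausted the graph without reaching a goal
```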

26 Overview: Introduction (searching games' state-graphs, uninformed search, heuristics, informed search); Evolving heuristics; Previous work (Rush Hour, FreeCell).

27 Evolving Heuristics. Given heuristic building blocks H1, ..., Hn, how should we combine them into the fittest heuristic? Minimum? Maximum? A linear combination? GA/GP may be used for: building new heuristics from existing building blocks; finding weights for each heuristic (to apply a linear combination); finding conditions for applying each heuristic.

28 Evolving Heuristics: GA. An individual is a weight vector: W1 = 0.3, W2 = 0.01, W3 = 0.2, ..., Wn = 0.1.
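
One way to read this slide as code (a sketch under assumptions; the building-block heuristics and the surrounding GA machinery are placeholders): a GA individual is just the weight vector, and the evolved heuristic is the weighted sum of the building blocks.

```python
import random

def combined_heuristic(weights, building_blocks):
    """Return h(s) = sum_i w_i * H_i(s) for a fixed weight vector."""
    def h(state):
        return sum(w * block(state) for w, block in zip(weights, building_blocks))
    return h

def mutate(weights, sigma=0.05):
    """GA mutation: nudge one weight by Gaussian noise, clamped at zero."""
    i = random.randrange(len(weights))
    child = list(weights)
    child[i] = max(0.0, child[i] + random.gauss(0, sigma))
    return child

# Example with the weights shown on the slide (H1..Hn are assumed callables):
# h = combined_heuristic([0.3, 0.01, 0.2, ..., 0.1], [H1, H2, H3, ..., Hn])
```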

29 Evolving Heuristics: GP. An individual is a tree, e.g.: If (H1 ≤ 0.4 AND H2 ≥ 0.7) then H2 + H1 * 0.1, else H5 * (H1 / 0.1).
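
A minimal sketch of how such a GP individual could be represented and evaluated; the tuple encoding is illustrative, but the node set mirrors the slide's If/And/≤/≥/+/*// operators, and the protected division is a common GP convention.

```python
def evaluate(node, H):
    """Evaluate a GP tree on a state's heuristic values H = {'H1': ..., 'H2': ...}."""
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, str):                 # a heuristic building block by name
        return H[node]
    op, *args = node
    a = [evaluate(arg, H) for arg in args]    # eager evaluation; nodes are side-effect free
    if op == "if":  return a[1] if a[0] else a[2]
    if op == "and": return a[0] and a[1]
    if op == "<=":  return a[0] <= a[1]
    if op == ">=":  return a[0] >= a[1]
    if op == "+":   return a[0] + a[1]
    if op == "*":   return a[0] * a[1]
    if op == "/":   return a[0] / a[1] if a[1] else 0.0   # protected division
    raise ValueError(op)

# The tree sketched on the slide:
tree = ("if", ("and", ("<=", "H1", 0.4), (">=", "H2", 0.7)),
              ("+", "H2", ("*", "H1", 0.1)),
              ("*", "H5", ("/", "H1", 0.1)))
print(evaluate(tree, {"H1": 0.3, "H2": 0.8, "H5": 2.0}))   # condition holds -> 0.8 + 0.03
```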

30 Evolving Heuristics: Policies. A policy is a table of condition/result rules:
Condition 1 → Hyper-Heuristic 1
Condition 2 → Hyper-Heuristic 2
...
Condition n → Hyper-Heuristic n
(otherwise) → Default Hyper-Heuristic
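
A hedged sketch of how a policy like this could be applied to a search node; the condition and result entries would themselves be evolved GP trees, and all names here are placeholders.

```python
def apply_policy(policy, default_hh, state_features):
    """Scan (condition, hyper_heuristic) rules in order; the first matching condition wins."""
    for condition, hyper_heuristic in policy:
        if condition(state_features):
            return hyper_heuristic(state_features)
    return default_hh(state_features)

# Example rule set (conditions and results would themselves be evolved GP trees):
# policy = [(cond1, hh1), (cond2, hh2), ..., (cond_n, hh_n)]
# h_value = apply_policy(policy, default_hh, features_of(state))
```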

31 Evolving Heuristics: Fitness Function

32 Overview: Introduction (searching games' state-graphs, uninformed search, heuristics, informed search); Evolving heuristics; Test cases (Rush Hour, FreeCell).

33 Rush Hour: GP-Rush [Hauptman et al., 2009], bronze Humies award.

34 Domain-Specific Heuristics. Hand-crafted heuristics/guides: Blockers estimation, an admissible lower bound; Goal distance, the Manhattan distance to the goal; Hybrid blockers distance, combining the two above; Is-Move-To-Secluded, did the car enter a secluded area (the last move blocks all other cars)?; Is-a-Releasing-Move, did the last move increase the number of free cars?

35 Blockers Estimation. A lower bound on the number of steps to the goal, obtained by counting the moves needed to free the blocking cars. Example: O is blocking RED, so we need at least: move O, move C, move B, move A; hence H = 4.
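
A rough sketch of the first level of this estimate for a Rush Hour style board (the board encoding is an assumption: one string per row, '.' for empty cells, the red car on its exit row moving right). It only counts the cars standing directly in the red car's way; the heuristic on the slide goes further and also counts the moves needed to free those blockers, which is how the example reaches H = 4.

```python
def blockers_lower_bound(board, red="R"):
    """First-level blockers estimate for a Rush Hour position.
    board: list of strings, one per row, '.' marking an empty cell.
    Counts the distinct cars standing between the red car and the right-hand
    exit; each must move at least once, so the count is an admissible bound."""
    for row in board:
        if red in row:
            rightmost = max(c for c, v in enumerate(row) if v == red)
            return len({v for v in row[rightmost + 1:] if v != "."})
    raise ValueError("red car not on board")

# Tiny illustration (hypothetical row of a 6x6 board):
# blockers_lower_bound(["..RR.O"]) counts O as the single blocker -> 1
```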

36 Goal Distance. Deduce the goal and use the "Manhattan distance" from it as the h measure; in this example, h = 16.

37 Hybrid. "Manhattan distance" + blockers estimation; in this example, 16 + 8 = 24.

38 Policy "Ingredients": functions and terminals.
Terminals (conditions): IsMoveToSecluded, isReleasingMove, g, PhaseByDistance, PhaseByBlockers, NumberOfSiblings, DifficultyLevel, BlockersLowerBound, GoalDistance, Hybrid, 0, 0.1, …, 0.9, 1
Terminals (results): BlockersLowerBound, GoalDistance, Hybrid, 0, 0.1, …, 0.9, 1
Functions (conditions): If, AND, OR, ≤, ≥
Functions (results): +, *

39 Results. Average number of nodes required to solve the test problems, as a percentage of the nodes scanned by iterative deepening (ID). H1 = BlockersLowerBound, H2 = GoalDistance, H3 = Hybrid, Hc = our hand-crafted policy, GP = the best evolved policy (selected by performance on the training set).
Problem | ID   | H1  | H2  | H3   | Hc  | GP
6x6     | 100% | 72% | 94% | 102% | 70% | 40%
8x8     | 100% | 69% | 75% | 70%  | 50% | 10%

40 Results (cont'd). Time (in seconds) required to solve problems JAM01 ... JAM40, comparing: ID (iterative deepening); Hi (average of our three hand-crafted heuristics); Hc (our hand-crafted policy); GP (our best evolved policy); and human players (average of the top 5).

41 FreeCell. FreeCell remained relatively obscure until it was included with Windows 95. The Microsoft 32K set contains 32,000 problems, all solvable except game #11982, which has been proven to be unsolvable. Related work: "Evolving hyper-heuristic-based solvers for Rush Hour and FreeCell" [Hauptman et al., SOCS 2010]; "GA-FreeCell: Evolving Solvers for the Game of FreeCell" [Elyasaf et al., GECCO 2011].

42 FreeCell (cont'd). As opposed to Rush Hour, blind search failed miserably here. The best published solver to date solves 96% of Microsoft 32K. Reasons: a high branching factor, and it is hard to generate a good heuristic.

43 Learning Methods: Random Deals. Which card deals should we use for training? The first method tested: random deals, as we did in Rush Hour. Here it yielded poor results, since this is a very hard domain.

44 Learning Methods: Gradual Difficulty. The second method tested: gradual difficulty. Sort the problems by difficulty, from easy to hard; each generation tests solvers against 5 deals from the current difficulty level plus 1 random deal.

45 A Few Words on Co-evolution. Population 1, the solutions/solvers (examples: a FreeCell solver, a Rush Hour solver, a chess player), is tested for fitness against Population 2, the problems/adversaries (examples: FreeCell deals, Rush Hour boards, another chess player). Other examples?

46 Learning Methods: Hillis-Style Co-evolution. The third method tested: Hillis-style co-evolution using a "hall of fame". The deal population is composed of 40 deals (= 40 individuals) plus 10 deals that represent a hall of fame. Each hyper-heuristic is tested against 4 deal individuals and 2 hall-of-fame deals.

47 Learning Methods: Rosin-Style Co-evolution. The fourth method tested: Rosin-style co-evolution. Each deal individual consists of 6 deals, and mutation and crossover operate on these deal lists (the slide illustrates crossover and mutation on example deal numbers).
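
A rough sketch of the genetic operators this slide illustrates: each deal individual is a list of deal indices into Microsoft 32K, crossover is shown here as one-point crossover, and mutation resamples individual deals. The operator details and the deal numbers below are assumptions, not taken from the paper.

```python
import random

DECK_SIZE = 32000            # deals are indices into the Microsoft 32K set

def crossover(p1, p2):
    """One-point crossover on two deal individuals of equal length."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(individual, rate=1 / 6):
    """Replace each deal index with a fresh random deal with small probability."""
    return [random.randrange(1, DECK_SIZE + 1) if random.random() < rate else d
            for d in individual]

# Hypothetical deal individuals of length 6:
p1 = [11897, 30422, 3845, 7364, 17987, 5984]
p2 = [2837, 11892, 3983, 412, 30011, 13498]
c1, c2 = crossover(p1, p2)
```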

48 Results:
Learning Method          | Run    | Node Reduction | Time Reduction | Length Reduction | Solved
-                        | HSD    | 100%           | 100%           | 100%             | 96%
Gradual difficulty       | GA-1   | 23%            | 31%            | 1%               | 71%
Gradual difficulty       | GA-2   | 27%            | 30%            | 103%             | 70%
Gradual difficulty       | GP     | -              | -              | -                | -
Gradual difficulty       | Policy | 28%            | 36%            | 6%               | 36%
Rosin-style co-evolution | GA     | 87%            | 93%            | 41%              | 98%
Rosin-style co-evolution | Policy | 89%            | 90%            | 40%              | 99%

49 Other Human-Competitive Results. Antenna design for the International Space Station; automatically finding patches using genetic programming; evolvable malware; and many more on the Humies site.

50 Thank you for listening. Any questions?

