
1
Game playing

2
Outline: optimal decisions; α-β pruning; imperfect, real-time decisions

3
**Games vs. search problems**

"Unpredictable" opponent → the solution is a strategy specifying a move for every possible opponent reply. Time limits → unlikely to find the goal; must approximate.

4
**Game tree (2-player, deterministic, turns)**

5
**Minimax Perfect play for deterministic games**

Idea: choose the move to the position with the highest minimax value = best achievable payoff against best play. E.g., a 2-ply game:

6
Minimax algorithm
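The algorithm can be sketched in Python over an explicit game tree. The tree below (node names, actions a1–d3, and payoffs) is an illustrative 2-ply example, not taken from the slides:

```python
# Minimax over an explicit 2-ply game tree (illustrative values).
# Internal nodes map action -> child; integers are terminal payoffs for MAX.
TREE = {
    "A": {"a1": "B", "a2": "C", "a3": "D"},   # MAX to move at A
    "B": {"b1": 3, "b2": 12, "b3": 8},        # MIN to move at B, C, D
    "C": {"c1": 2, "c2": 4, "c3": 6},
    "D": {"d1": 14, "d2": 5, "d3": 2},
}

def minimax_value(node, maximizing):
    if isinstance(node, int):                 # terminal state: return payoff
        return node
    values = [minimax_value(child, not maximizing)
              for child in TREE[node].values()]
    return max(values) if maximizing else min(values)

def minimax_decision(root):
    # MAX picks the action whose resulting (MIN) position backs up the highest value.
    return max(TREE[root], key=lambda a: minimax_value(TREE[root][a], False))
```

Here `minimax_value("A", True)` returns 3 and `minimax_decision("A")` returns `"a1"`: the best MAX can guarantee against best play.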

7
**Properties of minimax Complete? Yes (if tree is finite)**

Optimal? Yes (against an optimal opponent). Time complexity? O(b^m). Space complexity? O(bm) (depth-first exploration). For chess, b ≈ 35, m ≈ 100 for "reasonable" games → exact solution completely infeasible.

8
α-β pruning example

9
α-β pruning example

10
α-β pruning example

11
α-β pruning example

12
α-β pruning example

13
**Properties of α-β Pruning does not affect final result**

Good move ordering improves the effectiveness of pruning. With "perfect ordering," time complexity = O(b^(m/2)) → doubles the depth of search. A simple example of the value of reasoning about which computations are relevant (a form of metareasoning).

14
Why is it called α-β? α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX. If v is worse than α, MAX will avoid it → prune that branch. Define β similarly for MIN.

15
The α-β algorithm

16
The α-β algorithm

17
**How much do we gain? Assume a game tree of uniform branching factor b**

Minimax examines O(b^h) nodes, and so does alpha-beta in the worst case. The gain for alpha-beta is maximal when: the MIN children of a MAX node are ordered in decreasing backed-up values, and the MAX children of a MIN node are ordered in increasing backed-up values. Then alpha-beta examines O(b^(h/2)) nodes [Knuth and Moore, 1975]. But this requires an oracle (if we knew how to order nodes perfectly, we would not need to search the game tree). If nodes are ordered at random, then the average number of nodes examined by alpha-beta is ~O(b^(3h/4)).

18
**Heuristic Ordering of Nodes**

Order the nodes below the root according to the values backed up at the previous iteration. Order MAX (resp. MIN) nodes in decreasing (resp. increasing) values of the evaluation function computed at these nodes.

19
**Games of imperfect information**

Minimax and alpha-beta pruning require too many leaf-node evaluations, which may be impractical within a reasonable amount of time. Shannon (1950): cut off search earlier (replace TERMINAL-TEST by CUTOFF-TEST) and apply a heuristic evaluation function EVAL (replacing the utility function of alpha-beta).

20
**Cutting off search Change:**

if TERMINAL-TEST(state) then return UTILITY(state) into if CUTOFF-TEST(state, depth) then return EVAL(state). This introduces a fixed depth limit; the depth is selected so that the amount of time used will not exceed what the rules of the game allow. When the cutoff occurs, the evaluation is performed.
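The change above can be sketched over a tiny explicit tree; the successor function, utilities, and heuristic estimates below are all illustrative assumptions:

```python
# Depth-limited minimax: CUTOFF-TEST(state, depth) replaces TERMINAL-TEST,
# and heuristic EVAL replaces UTILITY at the cutoff. Values are illustrative.
SUCCESSORS = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
UTILITY = {"D": 4, "E": -2, "F": 1, "G": 7}   # true values at real terminals
EVAL = {"B": 3, "C": 2}                        # heuristic estimates at the cutoff

def h_minimax(state, depth, limit, maximizing):
    if state in UTILITY:                       # TERMINAL-TEST(state)
        return UTILITY[state]
    if depth >= limit:                         # CUTOFF-TEST(state, depth)
        return EVAL[state]
    values = [h_minimax(s, depth + 1, limit, not maximizing)
              for s in SUCCESSORS[state]]
    return max(values) if maximizing else min(values)
```

With `limit=1` the heuristic drives the answer (`h_minimax("A", 0, 1, True)` is 3), while searching to the true terminals with `limit=2` gives 1: the quality of EVAL determines how close the cutoff value is to the truth.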

21
Heuristic EVAL. Idea: produce an estimate of the expected utility of the game from a given position. Performance depends on the quality of EVAL. Requirements: EVAL should order terminal nodes in the same way as UTILITY; the computation must not take too long; for non-terminal states, EVAL should be strongly correlated with the actual chance of winning. Only useful for quiescent states (no wild swings in value in the near future).

22
**Heuristic EVAL example**

Eval(s) = w₁f₁(s) + w₂f₂(s) + … + wₙfₙ(s)

23
**Heuristic EVAL example**

Addition assumes independence of the features: Eval(s) = w₁f₁(s) + w₂f₂(s) + … + wₙfₙ(s)
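The weighted linear sum is a one-liner; the features and weights below are hypothetical chess-style material differences with the conventional piece values, purely for illustration:

```python
# Weighted linear evaluation: Eval(s) = w1*f1(s) + ... + wn*fn(s).
def linear_eval(features, weights):
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical feature f_i(s): (own pieces - opponent pieces) per type,
# ordered pawn, knight, bishop, rook, queen, with conventional weights.
weights  = [1, 3, 3, 5, 9]
features = [2, 0, -1, 1, 0]   # up two pawns and a rook, down a bishop
```

Here `linear_eval(features, weights)` gives 2·1 − 1·3 + 1·5 = 4, a material edge for MAX.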

24
**Heuristic difficulties**

Heuristic counts pieces won

25
**Horizon effect**

A fixed-depth search thinks it can avoid the queening move.

26
**Games that include chance**

Possible moves: (5-10,5-11), (5-11,19-24), (5-10,10-16), and (5-11,11-16)

27
**Games that include chance**

Chance nodes. Possible moves: (5-10,5-11), (5-11,19-24), (5-10,10-16), and (5-11,11-16). The rolls [1,1] and [6,6] each have probability 1/36; all other rolls 1/18.

28
**Games that include chance**

The rolls [1,1] and [6,6] each have probability 1/36; all other rolls 1/18. We cannot calculate a definite minimax value, only an expected value.

29
**Expected minimax value**

EXPECTED-MINIMAX-VALUE(n) =
- UTILITY(n), if n is a terminal node
- max over s ∈ successors(n) of EXPECTED-MINIMAX-VALUE(s), if n is a MAX node
- min over s ∈ successors(n) of EXPECTED-MINIMAX-VALUE(s), if n is a MIN node
- Σ over s ∈ successors(n) of P(s) · EXPECTED-MINIMAX-VALUE(s), if n is a chance node

These equations can be backed up recursively all the way to the root of the game tree.
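These equations translate directly into a recursive sketch; the node encoding, probabilities, and payoffs below are illustrative assumptions:

```python
# EXPECTED-MINIMAX-VALUE over an explicit tree with chance nodes.
# A node is either a number (terminal utility), ("max"|"min", [children]),
# or ("chance", [(probability, child), ...]).
def expectiminimax(node):
    if isinstance(node, (int, float)):        # terminal: UTILITY(n)
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "min":
        return min(expectiminimax(c) for c in children)
    # chance node: probability-weighted average of successor values
    return sum(p * expectiminimax(c) for p, c in children)

# MAX chooses between two chance nodes (e.g., two possible dice outcomes):
tree = ("max", [
    ("chance", [(0.5, ("min", [3, 5])), (0.5, ("min", [1, 9]))]),
    ("chance", [(0.9, 2), (0.1, 30)]),
])
```

The first option has expected value 0.5·3 + 0.5·1 = 2.0 and the second 0.9·2 + 0.1·30 = 4.8, so MAX's value at the root is 4.8.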

30
**Position evaluation with chance nodes**

Left: A1 wins. Right: A2 wins. The outcome of the evaluation function may change when values are scaled differently; behavior is preserved only by a positive linear transformation of EVAL. [Translated from Hebrew:] Evaluation functions for games of chance behave differently from those for, e.g., chess, where only the ordering matters; here the scale can be decisive.

31
State-of-the-Art

32
**Checkers: Tinsley vs. Chinook**

Name: Marion Tinsley. Profession: taught mathematics. Hobby: checkers. Record: over 42 years he lost only 3 games of checkers; world champion for over 40 years. Mr. Tinsley suffered his 4th and 5th losses against Chinook.

33
**Chinook First computer to become official world champion of Checkers!**

Has endgame tables for all positions with 10 pieces or fewer: over 39 trillion entries.

34
**Chess: Kasparov vs. Deep Blue**

|              | Kasparov            | Deep Blue                                |
|--------------|---------------------|------------------------------------------|
| Height       | 5'10"               | 6'5"                                     |
| Weight       | 176 lbs             | 2,400 lbs                                |
| Age          | 34 years            | 4 years                                  |
| Computers    | 50 billion neurons  | 32 RISC processors + VLSI chess engines  |
| Speed        | 2 pos/sec           | 200,000,000 pos/sec                      |
| Knowledge    | Extensive           | Primitive                                |
| Power source | Electrical/chemical | Electrical                               |
| Ego          | Enormous            | None                                     |

1997: Deep Blue wins by 3 wins, 1 loss, and 2 draws.

35
**Chess: Kasparov vs. Deep Junior**

Deep Junior: 8 CPUs, 8 GB RAM, Windows 2000; 2,000,000 pos/sec; available for $100. August 2, 2003: the match ends in a 3–3 tie!

36
**Othello: Murakami vs. Logistello**

Takeshi Murakami World Othello Champion 1997: The Logistello software crushed Murakami by 6 games to 0

37
**Go: Goemate vs. ?? Name: Chen Zhixing Profession: Retired**

Computer skills: self-taught programmer Author of Goemate (arguably the best Go program available today) Gave Goemate a 9 stone handicap and still easily beat the program, thereby winning $15,000

38
Go: Goemate vs. ?? Name: Chen Zhixing. Profession: retired. Computer skills: self-taught programmer. Author of Goemate (arguably the strongest Go program). Go has too high a branching factor for existing search techniques; current and future software must rely on huge databases and pattern-recognition techniques. Gave Goemate a 9-stone handicap and still easily beat the program, thereby winning $15,000. Jonathan Schaeffer

39
Backgammon: TD-Gammon, by Gerald Tesauro, won the world championship in 1995. BGBlitz won the 2008 computer backgammon olympiad.

40
Secrets. Many game programs are based on alpha-beta + iterative deepening + extended/singular search + transposition tables + huge databases + ... For instance, Chinook searched all checkers configurations with 8 pieces or fewer and created an endgame database of 444 billion board configurations. The methods are general, but their implementation is dramatically improved by many specifically tuned-up enhancements (e.g., the evaluation functions), like an F1 racing car.

41
**Perspective on Games: Con and Pro**

Chess is the Drosophila of artificial intelligence. However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies. John McCarthy Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings. Drew McDermott

42
**Other Types of Games Multi-player games, with alliances or not**

Games with randomness in the successor function (e.g., rolling dice) → expectiminimax algorithm. Games with partially observable states (e.g., card games) → search of belief-state spaces. See R&N p.
