
1 Lecture 6: Problem Solving
Dr John Levine
52236 Algorithms and Complexity
February 10th 2006

2 Class admin
No lab today and no lab next Monday
Lab work starts next week
Work in groups of no more than 6 (self-selected)
Single 2-hour session in the Kelvin lab (60 seats)
Finding a slot – Fri 11-1? Thu 11 for lecture?
Alternative 1: Mon 11-1?
Alternative 2: Fri 1-3, moving the people doing the 52222 practical to a different slot

3 The story so far
Lecture 1: Why algorithms are so important
Lecture 2: Big-Oh notation, a first look at different complexity classes (log, linear, log-linear, quadratic, polynomial, exponential, factorial)
Lecture 3: simple search in a list of items is O(n) with unordered data but O(log n) with ordered data, and can be even faster with a good indexing scheme and parallel processors
Lecture 4: sorting: random sort is O(n!), naïve sort is O(n²), bubblesort is O(n²), quicksort is O(n log n)
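To make the Lecture 3 point concrete, here is a minimal binary search sketch in Python; it illustrates why search over ordered data is O(log n). The function name and example list are illustrative, not from the lectures.

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1 if absent.

        Each comparison halves the remaining range, so the search takes
        O(log n) comparisons, against O(n) for scanning unordered data.
        """
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))   # prints 4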

4 The story so far
Lecture 5: more on sorting: why comparison sort is O(n log n), doing better than O(n log n) by not doing comparisons (e.g. bucket sort)
Lecture 6: harder search: how to represent a problem in terms of states and moves
Lecture 7: uninformed search through states using an agenda: depth-first search and breadth-first search
Lecture 8: making it smart: informed search using heuristics; how to use heuristic search without losing optimality – the A* algorithm
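The Lecture 5 idea of beating O(n log n) by not comparing can be sketched with a counting sort, a close relative of bucket sort with one bucket per key. This assumes small non-negative integer keys and is illustrative only.

    def counting_sort(items, max_key):
        """Sort small non-negative integers without comparing items to each other.

        Counting how often each key occurs and reading the counts back out
        takes O(n + k) time, where k = max_key + 1 is the number of possible keys.
        """
        counts = [0] * (max_key + 1)
        for x in items:
            counts[x] += 1
        result = []
        for key, count in enumerate(counts):
            result.extend([key] * count)
        return result

    print(counting_sort([3, 1, 4, 1, 5, 2], max_key=5))   # [1, 1, 2, 3, 4, 5]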

5 Problem Solving using Search
Many diverse problems involve search:
Route finding
Blocks World planning
15-puzzle
Rubik’s cube
Travelling salesman problem
Crisis management
Logistics planning

6 What are Problems?
Each of these problems can be characterised by:
Problem states, including the start state and the goal state
Legal moves, or actions, which transform problem states into other states
Example: Rubik’s cube
The start state is the muddled-up cube, the goal state is the one in which all sides are the same colour, and the moves are the rotations of the sides of the cube

7 Solutions
Solutions are sequences of moves which transform the start state into the goal state
The quality of the solution required will affect the amount of work we need to do:
– any solution will do
– fixed amount of time, return the best solution
– near-optimal solution needed
– optimal solution needed

8 How to Solve It
In general, problems can be solved using:
– knowledge
– search
– or some combination of the two

9 Formulating Problems
A good formulation saves work – less search for the answer
Three requirements for a search algorithm:
– formal structures to describe the states
– rules for manipulating them
– identifying what constitutes a solution
This gives us a state space representation

10 State Space Representation
A state space comprises:
– states: snapshots of the problem
– operators: how to move from one state to another
Example problem: Towers of Hanoi
– only move one disc at a time
– never put a larger disc on top of a smaller one
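As a sketch of what states and operators might look like for this example, here is one possible Python encoding of the Towers of Hanoi; the encoding (tuples of pegs, discs numbered by size) is an assumption for illustration, not part of the slides.

    # A state is a tuple of three pegs; each peg is a tuple of disc sizes,
    # listed bottom to top, so the last element is the disc on top.
    START = ((3, 2, 1), (), ())    # all three discs on the first peg
    GOAL = ((), (), (3, 2, 1))     # all three discs on the last peg

    def legal_moves(state):
        """Yield (source, destination) peg pairs allowed by the two rules:
        only the top disc moves, and it never goes on top of a smaller disc."""
        for src in range(3):
            if not state[src]:
                continue                      # nothing to move from an empty peg
            disc = state[src][-1]             # the disc on top of the source peg
            for dst in range(3):
                if dst != src and (not state[dst] or state[dst][-1] > disc):
                    yield src, dst

    def apply_move(state, move):
        """Return the state reached by moving the top disc from src to dst."""
        src, dst = move
        pegs = [list(p) for p in state]
        pegs[dst].append(pegs[src].pop())
        return tuple(tuple(p) for p in pegs)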

11 State Space Search
Problem solving using state space search consists of the following four steps:
1. Design a representation for states (including the initial state and the goal state)
2. Characterise the operators
3. Build a goal state recogniser
4. Search through the state space somehow, by considering (in some order or other) the states reachable from the initial and goal states
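The four steps map directly onto code. Below is a minimal, unoptimised sketch of step 4, written so that the representation (step 1), the operators (step 2) and the goal recogniser (step 3) are passed in as parameters. It assumes the Towers of Hanoi definitions from the previous sketch are in scope; how the agenda of unexplored states should be ordered is exactly the topic of Lectures 7 and 8.

    def search(start, is_goal, next_states):
        """Explore states reachable from start until one satisfies is_goal.

        Returns the list of moves leading to a goal state, or None if there is
        none. States already seen are skipped, so the search terminates on
        finite state spaces.
        """
        agenda = [(start, [])]                # (state, moves taken so far)
        seen = {start}
        while agenda:
            state, moves = agenda.pop(0)      # explore states in the order added
            if is_goal(state):
                return moves
            for move, successor in next_states(state):
                if successor not in seen:
                    seen.add(successor)
                    agenda.append((successor, moves + [move]))
        return None

    # Plugging in the Towers of Hanoi sketch from the previous slide:
    def hanoi_next_states(state):
        return [(m, apply_move(state, m)) for m in legal_moves(state)]

    solution = search(START, lambda s: s == GOAL, hanoi_next_states)
    print(len(solution), "moves")             # 7 moves for three discs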

12 Example: Blocks World
A “classic” problem in AI planning
The aim is to rearrange the blocks using the single robot arm so that the configuration in the goal state is achieved
An optimal solution performs the transformation using as few steps as possible
Any solution: linear complexity
Optimal solution: exponential complexity (NP-hard)
[Slide diagram: the initial and goal arrangements of blocks A, B and C]

13 Blocks World Representation
The blocks world problem can be represented as:
States: stacks are lists, states are sets of stacks, e.g. initial state = { [a,b], [c] }
Transitions between states can be done using a single move operator:
move(x,y) picks up object x and puts it on y (which may be the table)
{ [a,b,c] } → { [b,c], [a] } by applying move(a,table)
{ [a], [b,c] } → { [a,b,c] } by applying move(a,b)
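A minimal Python sketch of this representation, assuming (as the slide's examples suggest) that each stack is written with the top block first; the function names and the frozenset encoding are illustrative choices, not part of the slides.

    def make_state(*stacks):
        """Build a state as a frozenset of stacks, so that states can be
        compared and stored in a set; empty stacks are dropped."""
        return frozenset(tuple(s) for s in stacks if s)

    def move(state, x, y):
        """Apply move(x, y): pick up block x (which must be on top of its
        stack) and put it on block y, or on the table if y is 'table'."""
        stacks = [list(s) for s in state]
        source = next(s for s in stacks if s[0] == x)      # x must be clear
        source.pop(0)
        if y == "table":
            stacks.append([x])
        else:
            target = next(s for s in stacks if s[0] == y)  # y must be clear too
            target.insert(0, x)
        return make_state(*stacks)

    initial = make_state(["a", "b"], ["c"])
    print(move(initial, "a", "table"))   # i.e. { [a], [b], [c] }
    print(move(initial, "a", "c"))       # i.e. { [b], [a,c] }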

14 Blocks World Representation
NextStates(State) → list of legal moves and resulting states
e.g. NextStates({ [a,b], [c] }) →
{ [a], [b], [c] } by applying move(a,table)
{ [b], [a,c] } by applying move(a,c)
{ [c,a,b] } by applying move(c,a)
Goal(State) returns true if State is identical to the goal state
Search the space: start with the start state, explore reachable states, and continue until the goal state is found
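Continuing the sketch, NextStates and Goal might look like the following. This assumes the make_state and move functions from the previous slide's sketch and the search function from the slide 11 sketch are in scope; it is one possible reading of the slides, not their actual code.

    def next_states(state):
        """Return (move, resulting state) pairs for every legal move: any clear
        block x can go onto the table or onto any other clear block y."""
        clear = [s[0] for s in state]            # the top block of each stack
        successors = []
        for x in clear:
            if not any(s == (x,) for s in state):          # x already alone on the table
                successors.append(((x, "table"), move(state, x, "table")))
            for y in clear:
                if y != x:
                    successors.append(((x, y), move(state, x, y)))
        return successors

    def is_goal(state, goal):
        """True if state is identical to the goal state."""
        return state == goal

    # Searching from { [a,b], [c] } to { [a,b,c] }:
    start = make_state(["a", "b"], ["c"])
    goal = make_state(["a", "b", "c"])
    print(search(start, lambda s: is_goal(s, goal), next_states))
    # [('a', 'table'), ('b', 'c'), ('a', 'b')]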

15 The Power of Search
In May 1997, Garry Kasparov was beaten by Deep Blue
Deep Blue is not a conventional computer: it uses specialised hardware to search 200,000,000 chess positions per second
Is Deep Blue intelligent?

