
1 Using AI: Learning as Search. Introduction to Artificial Intelligence, Semester 1, Lecture 3.

2 Overview. Computers and state spaces. The black box model of computing. Learning as search: – spaces of possible solutions, – decision/quality formulations, – the fitness landscape metaphor, – optimality: global vs. local search.

3 Computers and States. computer + program + inputs = state. – e.g. a finite state automaton.
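A minimal sketch of the "computer + program + inputs = state" view, using a tiny finite state automaton; the turnstile-style states and inputs below are illustrative, not from the lecture:

```python
# A minimal finite state automaton: the "program" is the transition table,
# and the state after each input is fully determined by (current state, input).
# All state and input names here are invented for illustration.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(start_state, inputs):
    """Feed a sequence of inputs to the automaton and return the final state."""
    state = start_state
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run("locked", ["coin", "push", "push"]))  # -> "locked"
```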

4 Computers and States 2. OR: the current situation of the problem = state. e.g. noughts and crosses.

5 Computers and States 3. Either way: – thinking about the possible states of the system, – and finding ways to count them, – enables us to consider what might or might not be feasible with a computer. – We’ll return to this later.

6 The Black Box Model of Computing. We have a program/function/method/routine. It takes an input, or a sequence of them, and produces an output. The bit in between can be thought of as a model of those aspects of the real world we are interested in. We needn’t necessarily express our model explicitly as an FSM.
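As a sketch of the black box view, the model can be treated simply as a function from inputs to outputs; the example model below is made up for illustration:

```python
# Sketch of the black-box view: the "model" is just a function from inputs
# to outputs; callers need not know whether it is an FSM, a neural network,
# or hand-written rules. Purely illustrative.
def black_box(model, inputs):
    """Apply the hidden model to a sequence of inputs, yielding outputs."""
    return [model(x) for x in inputs]

# Example: a trivial hand-coded model.
is_even = lambda x: x % 2 == 0
print(black_box(is_even, [1, 2, 3, 4]))  # [False, True, False, True]
```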

7 Example: traffic sign recognition. Input = video signal. Model = image processing to pick out signs, plus a classifier to recognise them. Output = signal saying what signs are present – could be superimposed on the original image. This is an application of AI, but might still need to be hand-coded. Wouldn’t it be better if we could use AI to automate the production of our system?

8 Problem Solving as Search. We can view problem solving as a search for the missing component of a three-part system: inputs, model, outputs. e.g. recognising some inputs.

9 Example: Inductive Learning. Inductive learning may be thought of as: – a search through a set of possible hypotheses (models), – to find one that matches our current knowledge (input-output pairs), – and which correctly generalises (predicts the outputs for as yet unseen inputs). So for traffic sign recognition: – the input-output pairs are images and correct responses, – the model is the combination of image processing and classifier.
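The hypothesis search below is a minimal illustrative sketch of this idea; the hypothesis space and input-output pairs are invented, not taken from the lecture:

```python
# Inductive learning as search: enumerate a small hypothesis space and keep
# any hypothesis that matches all known input-output pairs.
known_pairs = [(0, 0), (1, 2), (2, 4), (3, 6)]   # (input, output)

hypotheses = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
}

def consistent(h):
    """Does hypothesis h reproduce every known input-output pair?"""
    return all(h(x) == y for x, y in known_pairs)

matches = [name for name, h in hypotheses.items() if consistent(h)]
print(matches)  # ['double'] -- this hypothesis also generalises to unseen inputs
```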

10 Two Alternative Formulations. Decision-type problems have a yes/no response: – does the solution meet my requirements? The alternative is to consider solution quality: – how well does the solution meet the requirements? Some problems can be formulated either way: – the choice of formulation might aid the search process.
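A sketch of the same requirement expressed both ways (the requirements and candidate below are invented): the decision version only says pass/fail, while the quality version gives partial credit that a search process can exploit:

```python
# Two formulations of one requirement: a decision version answers yes/no,
# a quality version gives a score a search method can compare and improve on.
REQUIRED = [True, True, True, True]          # desired outputs for 4 test cases

def decision(outputs):
    """Decision formulation: does the solution meet all requirements?"""
    return outputs == REQUIRED

def quality(outputs):
    """Quality formulation: how many requirements are met?"""
    return sum(o == r for o, r in zip(outputs, REQUIRED))

candidate = [True, False, True, True]
print(decision(candidate))   # False
print(quality(candidate))    # 3 -- partial credit guides the search
```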

11 Problem Type 1: Optimisation. We have a model of our system and seek inputs that give us a specified goal.

12 Example search spaces: strategies. The problem of beating Brazil at football involves choosing a strategy: – many possible strategies/team formations, – some will nullify their attacking strengths, – some will exploit their defensive weaknesses. Strategy = inputs to a simulator/real team. Value of a given strategy = the final score, or a decision model: win/lose.

13 Example search spaces: design optimisation. Widget X might be specified by: – several measurements describing its shape, – other variables describing the choice of materials, – and several constraints, e.g. the design must satisfy safety constraints. Solution = a value for each measurement & variable. Search space = all possible combinations of values. Solutions = inputs to a safety-testing model. Outputs: – decision formulation = are all constraints satisfied? – quality of solution = number of satisfied constraints.

14 Example Search Spaces: State-Space Search in Finite State Machines. It is often useful to consider the system as an FSM. Generally it is assumed that a goal state is defined: – a decision-type formulation. Transitions of the FSM: – the state space is connected to form a graph, – transitions are defined by the model. The task is: – to find a route from where you are now to the goal state, – so the search problem equates to finding a sequence of inputs.

15 Example: the two-buckets problem. Given a pail of water and 3-pint and 4-pint buckets, the task is to produce 2 pints of water. State of system = volume of water in the buckets: – {(0,0), (0,3), (0,4), (3,0), (3,3), (3,4), …}. Set of possible actions (inputs): – {fill3, 3from4, fill4, empty3, empty4, 4from3}. We can define all the possible transitions.
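A minimal breadth-first search over this state space, as a sketch (assuming the first coordinate is the 3-pint bucket and the pail lets us fill either bucket at will; this is not the lecture's own code):

```python
# Breadth-first search over the two-buckets state space.
# State = (water in 3-pint bucket, water in 4-pint bucket).
from collections import deque

def successors(state):
    a, b = state                      # a: 3-pint bucket, b: 4-pint bucket
    yield "fill3",  (3, b)
    yield "fill4",  (a, 4)
    yield "empty3", (0, b)
    yield "empty4", (a, 0)
    pour = min(a, 4 - b)              # 4from3: pour 3-pint into 4-pint
    yield "4from3", (a - pour, b + pour)
    pour = min(b, 3 - a)              # 3from4: pour 4-pint into 3-pint
    yield "3from4", (a + pour, b - pour)

def bfs(start=(0, 0), goal_amount=2):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_amount in state:      # 2 pints in either bucket
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))

print(bfs())   # ['fill3', '4from3', 'fill3', '4from3'] -> reaches state (2, 4)
```

Here breadth-first search happens to find a shortest input sequence, but it treats every state as equally promising, which echoes the point on the next slide that a pure decision formulation gives no information to discriminate helpful states.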

16 How does search work here? As we’ve stated it, this is a decision problem: – you either have 2 pints, or you don’t. We could see this as optimisation: – model = FSM, – condition on output => state is one of {(2,0), (0,2), …}, – seek inputs = a sequence to reach the goal state. – Problem: no information to discriminate helpful states. Lots more on this next week.

17 Problem Type 2: Modelling. We have corresponding sets of inputs & outputs and seek a model that delivers the correct output for every known input.

18 Example search spaces: learning to do medical image classification. Medical images are described by a set of (hopefully relevant) numerical features. We have a database of images (inputs) and outcomes. Solution = a model (classifier) that maps feature values onto a medical outcome, e.g.: – configuration + weight values for an Artificial Neural Network, – a set of nodes specifying a decision tree, … Search space = set of possible models. Quality of model = estimated accuracy on the database.

19 Example: learning to recognise traffic signs. We want to be able to input an image and output whether a sign is recognised. Typically we’d do the learning (modelling) off-line, and on-line we’d just use the classifier. Solution (model) maps images onto responses, e.g.: – a set of parameters for image processing to detect signs, – plus a nearest-neighbour classifier to recognise them. Quality = number of examples the model gets right. Search space = set of possible models.
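A sketch of "quality = number of examples the model gets right", using a toy nearest-neighbour classifier on made-up feature vectors rather than the lecture's image-processing pipeline:

```python
# Toy nearest-neighbour classifier and a quality measure that counts correct
# predictions on a held-out set. Features and labels are invented.
def nearest_neighbour(train, x):
    """Predict the label of the closest training example (Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    features, label = min(train, key=lambda pair: dist2(pair[0], x))
    return label

def quality(model, train, test):
    """Count how many test examples the model labels correctly."""
    return sum(model(train, x) == y for x, y in test)

train = [((0.0, 0.0), "no_sign"), ((1.0, 1.0), "stop_sign")]
test  = [((0.9, 0.8), "stop_sign"), ((0.1, 0.2), "no_sign")]
print(quality(nearest_neighbour, train, test))   # 2
```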

20 Problem Type 3: Simulation. We have a given model and wish to know the outputs that arise under different input conditions.

21 Simulation examples. We could see this as the final application of the systems we have built, e.g.: – controlling actuators, – classifying images. Simulation is also used as part of a bigger decision system: – to aid selecting table tennis shots, – models of global warming, – other chaotic systems.

22 “Combinatorial Explosion”. Theories such as NP-completeness: – showed that for a large class of problems, – the number of possible states/solutions explodes at a rate that is faster than polynomial in the number of variables. – x^2 and x^789 are polynomial, so increase relatively slowly, but consider e.g. the Travelling Salesperson problem: – find the shortest route that visits each city exactly once, – N cities: N! possible routes = N x (N-1) x (N-2) x … – 10! = 3,628,800, but 20! = 2,432,902,008,176,640,000.
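A short sketch contrasting polynomial and factorial growth for the Travelling Salesperson example (the printed table is purely illustrative):

```python
# Factorial growth (number of TSP routes) quickly outpaces polynomial growth.
import math

for n in (5, 10, 15, 20):
    print(f"{n:>2} cities: N^2 = {n**2:>4},  N! routes = {math.factorial(n):,}")
# 20 cities already gives 2,432,902,008,176,640,000 possible routes.
```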

23 Global Search and Heuristics. Global optimisation = search for the best solution x* out of some fixed set S. Deterministic approaches: – e.g. box decomposition (branch and bound, etc.), – guaranteed to find x*, but may take too long. Heuristic approaches (generate and test): – rules for deciding which x ∈ S to generate next, – the best solution found may not be globally optimal.
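A minimal generate-and-test sketch (random sampling, with an invented quality function); it illustrates why the best solution found need not be the global optimum:

```python
# Generate-and-test: sample candidate solutions, keep the best seen so far.
import random

def f(x):
    """Toy quality function to maximise over S = [0, 10]."""
    return -(x - 7.3) ** 2

def generate_and_test(trials=1000, seed=0):
    rng = random.Random(seed)
    best_x, best_q = None, float("-inf")
    for _ in range(trials):
        x = rng.uniform(0, 10)            # rule for generating the next x in S
        q = f(x)                          # test it
        if q > best_q:
            best_x, best_q = x, q
    return best_x, best_q

print(generate_and_test())   # x close to 7.3, but not guaranteed to be optimal
```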

24 Adaptive Landscape Metaphor. Initially used to describe evolution, now used more generally to visualise search spaces. We can envisage a solution with n traits as existing in an (n+1)-dimensional space (landscape), with height corresponding to quality. A point on the landscape = a potential solution. The aim of search is to find the highest point, or one that is “high enough”. Can also be formulated as minimisation.

25 Example with two traits.

26 Local vs. Global Search. Many heuristics impose a neighbourhood structure on S: – such heuristics may guarantee that the best point found is locally optimal, – they are often very quick to identify good solutions, – but problems often exhibit many local optima. Examples of local search methods include: – back-propagation to learn ANN weights, – hill-climbers, – greedy constructive heuristics for TSP routes.
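A sketch of a simple hill-climber on an invented two-peak landscape, showing how local search can stop at a local optimum:

```python
# Steepest-ascent hill-climber: move to the best improving neighbour while one
# exists; it stops at a local optimum, which need not be the global one.
def hill_climb(f, start, neighbours, max_steps=10_000):
    x = start
    for _ in range(max_steps):
        better = [n for n in neighbours(x) if f(n) > f(x)]
        if not better:
            return x                     # locally optimal: no better neighbour
        x = max(better, key=f)           # steepest-ascent move
    return x

# Toy 1-D landscape with two peaks; integer neighbourhood = +/- 1.
f = lambda x: -(x - 2) ** 2 if x < 5 else 4 - (x - 8) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(f, 0, neighbours))      # 2  (local peak; the global peak is at 8)
```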

27 Meta-Heuristics. Meta-heuristics attempt to get around this problem by using approaches such as: Population-based search: – Evolutionary Computation, Ant Colony Optimisation. Accepting worsening moves: – Tabu Search, Simulated Annealing. Repeated runs: – iterated local search + variants.
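As a sketch of the "accepting worsening moves" idea, here is a much-simplified simulated-annealing-style loop on the same kind of invented two-peak landscape; the cooling schedule and parameters are arbitrary, not a tuned implementation:

```python
# Worse neighbours are accepted with a probability that shrinks as the
# temperature falls, which lets the search escape local optima.
import math
import random

def anneal(f, start, neighbour, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, best = start, start
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9           # simple linear cooling schedule
        cand = neighbour(x, rng)
        delta = f(cand) - f(x)                    # maximising f
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = cand                              # sometimes accept a worse move
        if f(x) > f(best):
            best = x
    return best

f = lambda x: -(x - 2) ** 2 if x < 5 else 4 - (x - 8) ** 2
neighbour = lambda x, rng: x + rng.choice([-1, 1])
print(anneal(f, 0, neighbour))   # can escape the local peak near 2 and reach 8,
                                 # though the exact result depends on the seed
```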

28 Caveat: a landscape implies structure. For some problems the structure is natural, but sometimes “local” depends on how we move, e.g. the neighbours of a pawn, knight, bishop, or rook in chess.

29 Summary. Learning / problem solving = search. Possible formulations: – decision: states = goal, invalid, partial, – quality: => fitness landscape metaphor. Global / local search. Black box models: search space = – inputs => optimisation, goal seeking, – models => building classifiers.

