Presentation transcript:

Slide 1: Department of Computer Science Undergraduate Events
Ericsson Info Session: Mon., Oct 6, 5:30 – 7 pm, DMP 110
MDA Info Session: Tues., Oct 7, 5:30 – 7 pm, DMP 110
IBM Info Session: Wed., Oct 8, 5:30 – 7 pm, DMP 110
CSIC Workshop, Panic to Power: Building Your Confidence: Tues., Oct 14, 3:30 – 5 pm, Brock Hall, East Wing
Study in Germany (DAAD) Info Session: Tues., Oct 14, 5 – 6 pm, Buchanan Tower 104A
CSIC Workshop, Leveraging LinkedIn in Your Job Search: Wed., Oct 15, 3 – 4 pm, Brock Hall, East Wing
Co-op Info Session: Thurs., Oct 16, 12:30 – 1:30 pm, DMP 301
CSIC Workshop, Interview Preparation: Thurs., Oct 16, 3 – 4 pm, Brock Hall, East Wing

Slide 2: Stochastic Local Search
Jim Little, UBC CS 322 – CSP, October 6, 2014
Textbook §4.8

Slide 3: Announcements
Assignment 2 on CSPs will be out soon.

Slide 4: Lecture Overview
Recap: Local Search in CSPs
Stochastic Local Search (SLS)
Comparing SLS algorithms

Slide 5: Local Search: Summary
A useful method in practice for large CSPs.
Start from a possible world, chosen at random.
Generate some neighbors ("similar" possible worlds). E.g., ?
Move from the current node to a neighbor, selected to minimize/maximize a scoring function that combines:
- information about how many constraints are violated/satisfied
- information about the cost/quality of the solution (you want the best solution, not just a solution)
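
To make the recap concrete, here is a minimal Python sketch of greedy descent for a CSP. It is not from the slides: the representation (an assignment as a dict, constraints as predicates over complete assignments) and all names are illustrative assumptions.

```python
import random

def conflicts(assignment, constraints):
    """Number of violated constraints; each constraint is a predicate
    over a complete assignment (a dict mapping variable -> value)."""
    return sum(1 for c in constraints if not c(assignment))

def greedy_descent(variables, domains, constraints, max_steps=1000):
    # Start from a randomly chosen possible world (a complete assignment).
    current = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        score = conflicts(current, constraints)
        if score == 0:
            return current                    # all constraints satisfied
        # Neighbors: assignments differing in exactly one variable's value.
        neighbors = [{**current, v: d}
                     for v in variables for d in domains[v] if d != current[v]]
        best = min(neighbors, key=lambda n: conflicts(n, constraints))
        if conflicts(best, constraints) >= score:
            return current                    # local minimum: no neighbor improves
        current = best
    return current
```

On each step it moves to the best-scoring neighbor and stops at a solution or at a local minimum; the stochastic variants on the following slides address the latter.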

Slide 6: Hill Climbing
NOTE: Everything said here about hill climbing is also true for greedy descent.

Slide 7: Problems with Hill Climbing
Local maxima.
Plateaus, including shoulders (flat regions from which improvement is still possible).

Slide 8: In higher dimensions…
E.g., ridges: a sequence of local maxima not directly connected to each other. From each local maximum you can only go downhill.

Slide 9: The Corresponding Problem for Greedy Descent
Local minimum example: the 8-queens problem. The state shown is a local minimum with h = 1 (h = 0 for a solution): all moves from it have h > 1.

Slide 10: Lecture Overview
Recap: Local Search in CSPs
Stochastic Local Search (SLS)
Comparing SLS algorithms

Slide 11: Stochastic Local Search
GOAL: We want our local search to be guided by the scoring function, yet not get stuck in local maxima/minima, plateaus, etc.
SOLUTION: From the current possible world we can alternate:
a) hill-climbing steps: a move that improves the scoring function
b) random steps: a move to a random neighbor
c) random restarts: reassign random values to all variables (a jump to a random possible world)
A minimal sketch of one such move follows.
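
This is a Python sketch of a single SLS move, under the same dict-based representation as the greedy-descent sketch above; `score` is any function to minimize over possible worlds, and the default probabilities are illustrative placeholders, not values fixed by the slides:

```python
import random

def sls_step(current, variables, domains, score,
             p_restart=0.01, p_walk=0.05):
    """One SLS move: (c) restart with prob. p_restart, (b) random step with
    prob. p_walk, otherwise (a) a greedy hill-climbing/descent step."""
    r = random.random()
    if r < p_restart:
        # (c) random restart: jump to a random possible world
        return {v: random.choice(domains[v]) for v in variables}
    neighbors = [{**current, v: d}
                 for v in variables for d in domains[v] if d != current[v]]
    if r < p_restart + p_walk:
        return random.choice(neighbors)       # (b) move to a random neighbor
    return min(neighbors, key=score)          # (a) move to the best-scoring neighbor
```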

Slide 12: Which randomized method would work best in each of these two search spaces?

Slide 13: Which randomized method would work best in each of these two search spaces?

Slide 14: Random Steps (Walk)
Assume neighbors are generated as assignments that differ in one variable's value.
How many neighbors are there, given n variables whose domains each have d values? n*(d-1). E.g., in the 8-queens problem: 56.
One strategy adds randomness to the selection of the variable-value pair: sometimes choose the pair according to the scoring function (one of the best-scoring neighbors), and sometimes choose one at random (any one of the 56).
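
A small Python check of the neighbor count (illustrative code, not from the slides):

```python
def neighbors(assignment, domains):
    """All assignments that differ from `assignment` in one variable's value."""
    for v, dom in domains.items():
        for val in dom:
            if val != assignment[v]:
                yield {**assignment, v: val}

# 8-queens with one queen per column: n = 8 variables, d = 8 values each.
domains = {col: list(range(8)) for col in range(8)}
start = {col: 0 for col in range(8)}
print(sum(1 for _ in neighbors(start, domains)))  # n*(d-1) = 8*7 = 56
```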

Slide 15: Random Steps (Walk): Two-Step
Another strategy: select a variable first, then a value.
Sometimes select the variable:
1. that participates in the largest number of conflicts
2. at random, among variables that participate in some conflict
3. at random
Sometimes choose the value:
a) that minimizes the number of conflicts
b) at random
AIspace example (Greedy Descent with Min-Conflict heuristic): with conflicts involving v4, v5, and v8, strategy 1 + a) selects v5 and moves to the neighbor with v5 = 1. A sketch of this strategy follows.
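
This is a minimal Python sketch of the 1 + a) combination (pick the most-conflicted variable, then its min-conflict value). Representing each constraint as a (scope, predicate) pair, where the scope is the set of variables the predicate mentions, is my assumption rather than anything fixed by the slides:

```python
def min_conflict_step(current, domains, constraints):
    """Pick the variable participating in the largest number of violated
    constraints, then the value for it minimizing the total conflict count.
    `constraints` is a list of (scope, predicate) pairs."""
    violated = [scope for scope, pred in constraints if not pred(current)]
    var = max(current, key=lambda v: sum(v in scope for scope in violated))
    def total_conflicts(assignment):
        return sum(1 for _, pred in constraints if not pred(assignment))
    best_val = min(domains[var],
                   key=lambda d: total_conflicts({**current, var: d}))
    return {**current, var: best_val}
```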

Slide 16: A Successful Application of SLS
Scheduling of the Hubble Space Telescope: the time to schedule 3 weeks of observations was reduced from about one week to around 10 seconds.

Slide 17: Example: SLS for RNA Secondary Structure Design
An RNA strand is made up of four bases: cytosine (C), guanine (G), adenine (A), and uracil (U). The 2D/3D structure a strand folds into is important for its function.
Predicting the structure of a given strand is "easy": O(n³). But what if we want a strand that folds into a certain structure, the hard, inverse direction?
Local search over strands: search for one that folds into the right structure.
Evaluation function for a strand: run the O(n³) prediction algorithm and evaluate how different the result is from the target structure. The evaluation function is only defined implicitly, but it can be evaluated by running the prediction algorithm.
(Figure: an RNA strand, e.g., GUCCCAUAGGAUGUCCCAUAGGA, and its secondary structure; strand to structure is easy, structure to strand is hard.)
The best algorithm to date is a local search algorithm, RNA-SSD, developed at UBC [Andronescu, Fejes, Hutter, Condon, and Hoos, Journal of Molecular Biology, 2004].
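
Below is a heavily hedged sketch of the idea only, not the actual RNA-SSD algorithm: local search over strands where the evaluation function is implicit. `predict_structure` and `distance` are hypothetical placeholders for the O(n³) folding predictor and a structure-dissimilarity measure.

```python
import random

BASES = "ACGU"

def design_strand(target, predict_structure, distance, steps=10000):
    """Local search over RNA strands. The evaluation function is implicit:
    run the structure predictor on the candidate strand and measure how far
    its predicted structure is from the target.
    `predict_structure` and `distance` must be supplied by the caller."""
    n = len(target)
    strand = [random.choice(BASES) for _ in range(n)]
    best = distance(predict_structure("".join(strand)), target)
    for _ in range(steps):
        if best == 0:                       # strand folds into the target
            break
        i = random.randrange(n)             # neighbor: mutate one base
        old = strand[i]
        strand[i] = random.choice(BASES.replace(old, ""))
        d = distance(predict_structure("".join(strand)), target)
        if d <= best:
            best = d                        # keep non-worsening mutations
        else:
            strand[i] = old                 # revert worsening ones
    return "".join(strand)
```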

Slide 18: CSP/Logic: Formal Verification
Hardware verification (e.g., IBM) and software verification (small to medium programs).
Most progress in the last 10 years has been based on encodings into propositional satisfiability (SAT).

Slide 19: (Stochastic) Local Search Advantage: Online Settings
Useful when the problem can change, which is particularly important in scheduling. E.g., an airline schedule covers thousands of flights and thousands of personnel assignments, and a storm can render it infeasible.
Goal: repair the schedule with the minimum number of changes. This is easily done with a local search starting from the current schedule. Other techniques usually require more time and might find a solution requiring many more changes.

Slide 20: SLS Limitations
Typically no guarantee of finding a solution, even if one exists.
SLS algorithms can stagnate: they get caught in one region of the search space and never terminate.
Very hard to analyze theoretically.
Unable to show that no solution exists: SLS simply won't terminate, so you don't know whether the problem is infeasible or the algorithm has stagnated.

Slide 21: SLS Advantage: Anytime Algorithms
When should the algorithm be stopped? When a solution is found (e.g., no constraint violations), or when we are out of time (you have to act NOW).
Anytime algorithm: maintain the node with the best h found so far (the "incumbent"); given more time, it can improve its incumbent.
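
A minimal Python sketch of the anytime idea, with illustrative names (`step` is any SLS move function, such as the `sls_step` sketch above; `h` is the scoring function, with h = 0 meaning all constraints are satisfied):

```python
import time

def anytime_sls(initial, step, h, time_budget_s):
    """Anytime wrapper: always keep the node with the best h found so far
    (the 'incumbent'), so a usable answer exists whenever we must stop."""
    deadline = time.monotonic() + time_budget_s
    current = initial
    incumbent, best_h = current, h(current)
    while best_h > 0 and time.monotonic() < deadline:
        current = step(current)             # one SLS move (greedy/random/restart)
        if h(current) < best_h:
            incumbent, best_h = current, h(current)
    return incumbent    # a solution if best_h == 0, otherwise best found so far
```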

Slide 22: Lecture Overview
Recap: Local Search in CSPs
Stochastic Local Search (SLS)
Comparing SLS algorithms

Slide 23: Evaluating SLS Algorithms
SLS algorithms are randomized: the time taken until they solve a problem is a random variable. It is entirely normal to have runtime variations of 2 orders of magnitude in repeated runs, e.g., 0.1 seconds in one run and 10 seconds in the next, on the same problem instance (the only difference being the random seed).
Sometimes an SLS algorithm doesn't terminate at all: stagnation. If an SLS algorithm sometimes stagnates, what is its mean runtime across many runs? Infinity! In practice, one often counts timeouts as some fixed large value X.
Still, summary statistics such as mean or median runtime don't tell the whole story. E.g., they would penalize an algorithm that often finds a solution quickly but sometimes stagnates.
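
A small illustrative Python example of why the mean misleads when timeouts are counted as a fixed large value X (the data and all names are mine, not the slides'):

```python
import statistics

def summarize_runtimes(runtimes, timeout):
    """Summarize SLS runtimes over repeated runs on one instance. Stagnating
    runs are recorded as None and, as is common practice, counted as a fixed
    large value (here the timeout itself)."""
    capped = [timeout if t is None else t for t in runtimes]
    return {
        "solved": sum(t is not None for t in runtimes) / len(runtimes),
        "mean": statistics.mean(capped),     # dominated by the timeout value
        "median": statistics.median(capped),
    }

# Two orders of magnitude of runtime variation, plus one stagnating run:
print(summarize_runtimes([0.1, 0.3, 10.0, None], timeout=1000.0))
```

Here a single stagnating run drags the mean to hundreds of seconds even though three of the four runs finish within 10 seconds.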

Slide 24: Comparing Stochastic Algorithms: The Challenge
Summary statistics, such as mean, median, and mode runtime, don't tell the whole story. What is the running time for the runs in which an algorithm never finishes (infinite? the stopping time?)
(Figure: % of solved runs vs. runtime/steps.)

Slide 25: First Attempt…
How can you compare three algorithms when:
A. one solves the problem 30% of the time very quickly but doesn't halt in the other 70% of cases,
B. one solves 60% of the cases reasonably quickly but doesn't solve the rest, and
C. one solves the problem in 100% of the cases, but slowly?
(Figure: % of solved runs vs. mean runtime/steps of solved runs.)

Slide 26: Runtime Distributions Are Even More Effective
A runtime distribution plots the runtime (or number of steps) against the proportion (or number) of runs that are solved within that runtime. A log scale on the x axis is commonly used.
(Figure: fraction of solved runs, i.e., P(solved by this # of steps/time), vs. # of steps.)
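
A minimal matplotlib sketch of such a plot (illustrative data and names; stagnating runs are recorded as None):

```python
import matplotlib.pyplot as plt

def plot_runtime_distribution(step_counts, label):
    """Empirical runtime distribution: fraction of runs solved within a given
    number of steps. Stagnating runs (None) never contribute, so the curve
    plateaus below 1.0 instead of disappearing into a summary statistic."""
    solved = sorted(s for s in step_counts if s is not None)
    n = len(step_counts)
    fractions = [(i + 1) / n for i in range(len(solved))]
    plt.step(solved, fractions, where="post", label=label)

plot_runtime_distribution([12, 30, 55, 80, 200, None, None], "algorithm A")
plt.xscale("log")                            # log x axis, as on the slides
plt.xlabel("# of steps")
plt.ylabel("P(solved by this # of steps)")
plt.legend()
plt.show()
```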

Slides 27–30: Comparing Runtime Distributions
x axis: runtime (or number of steps); y axis: proportion (or number) of runs solved within that runtime. A log scale is typically used on the x axis. (These slides step through example runtime-distribution plots.)

Slide 31: Comparing Runtime Distributions (continued)
x axis: runtime (or number of steps); y axis: proportion (or number) of runs solved within that runtime; log scale on the x axis.
(Figure: fraction of solved runs, i.e., P(solved by this # of steps/time), vs. # of steps, for three algorithms: one solves 28% of runs within 10 steps and then stagnates; one solves 57% within 80 steps and then stagnates; one is slow but does not stagnate.)
Crossover points: if we run longer than 80 steps, the slow-but-steady (green) algorithm is best; if we run fewer than 10 steps, the fast-but-stagnating (red) algorithm is best.

Slide 32: Runtime Distributions in AIspace
Let's look at some algorithms and their runtime distributions on simple scheduling problem 2 in AIspace:
1. Greedy Descent
2. Random Sampling
3. Random Walk
4. Greedy Descent with Random Walk

Slide 33: What We Will Look At in AIspace
When selecting a variable first, followed by a value:
Sometimes select the variable:
1. that participates in the largest number of conflicts
2. at random, among variables that participate in some conflict
3. at random
Sometimes choose the value:
a) that minimizes the number of conflicts
b) at random
AIspace terminology: Random Sampling (keeps restarting), Random Walk, Greedy Descent, Greedy Descent Min-Conflict, Greedy Descent with Random Walk, Greedy Descent with Random Restart, …

Slide 34: Stochastic Local Search
Key idea: combine greedily improving moves with randomization. As well as improving steps, we allow a "small probability" (e.g., 1% or 5%) of:
Random steps: move to a random neighbor.
Random restarts: reassign random values to all variables.
Stop when a solution is found (a possible world satisfying all constraints, in a simple CSP) or when we run out of time (then return the best solution so far). Always keep the best solution found so far.

Slide 35: Learning Goals for Today's Class
You can:
Implement SLS with random steps (1-step and 2-step versions) and random restarts.
Compare SLS algorithms using runtime distributions.

Slide 36: Next Class
More SLS variants.
Finish CSPs (if time); start planning.
Assignment 2 will be out RSN.