Adaptive Application of SAT Solving Techniques
Ohad Shacham and Karen Yorav
Presented by Sharon Barner
IBM Labs in Haifa © 2005 IBM Corporation
Slide 2: Agenda
- DPLL SAT solvers: terms
- Motivation
- Performance metrics
- Adaptive solving
- Experimentation
- Conclusion
Slide 3: DPLL SAT Solvers
- Input: a Boolean formula in CNF
- Output: a satisfying assignment, or "unsatisfiable"
- Iterative exhaustive search:
  - Decisions: assign a value to a single variable, chosen by the decision heuristic; increment the decision level
  - BCP: propagate the effect of the assignment
  - Learning: when a conflict occurs, add a "conflict clause" to the database so that the same combination of assignments will not happen again; backtrack appropriately, decrementing the decision level
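The terms above can be made concrete with a toy DPLL loop. This is a minimal sketch of my own in Python, not the solver used in the work described here: it performs decisions, unit propagation (BCP), and chronological backtracking, but omits conflict-clause learning and decision levels.

```python
# Clauses are frozensets of non-zero ints: positive literal = variable
# assigned True, negative = False. An assignment is a set of literals.

def unit_propagate(clauses, assignment):
    """BCP: repeatedly satisfy unit clauses; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # every literal falsified: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced implication
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = set(assignment)
    if unit_propagate(clauses, assignment) is None:
        return None
    variables = {abs(l) for c in clauses for l in c}
    free = variables - {abs(l) for l in assignment}
    if not free:
        return assignment  # all variables assigned, no conflict: SAT
    v = min(free)  # trivial decision heuristic
    for lit in (v, -v):  # decide one phase, backtrack to the other
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None  # both phases failed: UNSAT under this assignment
```

A real conflict-driven solver would replace the chronological backtracking here with learned conflict clauses and non-chronological backjumping, which is what the metrics on the following slides measure.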
Slide 4: Motivation
- SAT solving is based on heuristics and strategies
  - There is no winning strategy; the best choice cannot be determined beforehand
- Previous solutions:
  - Learning from a training set (Lagoudakis & Littman 2001; Nudelman et al. 2004) does not work well for BMC, since there is no representative training set
  - Choosing on-the-fly according to a biased random function (Herbstritt & Becker 2003) is applicable only to the decision heuristic, and does not stabilize on the best option
Slide 5: Solution – Adaptive Solving
- Track the progress of the search on-the-fly and switch options when the solver is not progressing well
- Options that can be switched:
  - Decision heuristic
  - Clause deletion
  - Conflict clause generation algorithm
  - Clause replication
  - ...
Slide 6: Performance Metrics
A useful performance metric:
- Produces a numerical score
- Is calculated on-the-fly: every fixed number of decisions, the metric is evaluated
- Is cheap to evaluate
  - Calculating the size of the space still to be searched is not an option... [SATometer]
  - It must rely on readily available information
- Corresponds (roughly) to the effectiveness of the search
Slide 7: Metrics
- Average decision level
  - Too high implies the solver may be "stuck" in a small subspace with no solution
  - The acceptable level varies between solvers
- Average conflict clause size
  - Smaller clauses potentially prune the search space more significantly
- Percentage of binary conflict clauses
  - Binary clauses are beneficial because they cause implications with little overhead
Slide 8: More Metrics
- BCP ratio: the average number of BCP steps per clause
  - A high BCP ratio means the solver makes fewer decisions per second
Slide 9: More Metrics
- Unary clause learning
  - A unary conflict clause fixes a permanent value for a variable
  - Each one reduces the search space by half
- And others...
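All of the metrics on slides 7-9 can be accumulated from events the solver already generates. The sketch below is a hypothetical bookkeeping class of my own, not code from the paper's solver; it only shows that each score is a cheap running ratio over one sample window.

```python
class MetricsSample:
    """Accumulates per-sample solver statistics and derives the metrics."""

    def __init__(self):
        self.decisions = 0
        self.decision_level_sum = 0
        self.conflict_clauses = 0
        self.conflict_clause_lits = 0
        self.binary_conflict_clauses = 0
        self.unary_clauses = 0
        self.bcp_steps = 0

    def on_decision(self, decision_level):
        self.decisions += 1
        self.decision_level_sum += decision_level

    def on_conflict_clause(self, clause_size):
        self.conflict_clauses += 1
        self.conflict_clause_lits += clause_size
        if clause_size == 2:
            self.binary_conflict_clauses += 1
        elif clause_size == 1:
            self.unary_clauses += 1  # permanent value: halves the space

    def on_bcp_steps(self, nsteps):
        self.bcp_steps += nsteps

    def avg_decision_level(self):
        return self.decision_level_sum / max(1, self.decisions)

    def avg_conflict_clause_size(self):
        return self.conflict_clause_lits / max(1, self.conflict_clauses)

    def binary_clause_pct(self):
        return 100.0 * self.binary_conflict_clauses / max(1, self.conflict_clauses)

    def bcp_ratio(self):
        # High ratio: heavy propagation work, so fewer decisions per second.
        return self.bcp_steps / max(1, self.decisions)
```

A fresh `MetricsSample` would be started at the beginning of each sample window, so the scores reflect recent behavior rather than the whole run.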
Slide 10: Adaptive Solving
- Evaluate the performance metric every fixed number of decisions
- Given the metric score, decide whether to make a switch
  - Different options can have different switching conditions
- Adaptive solving requires tuning!
  - Choose the controlled parameters wisely
  - Tune the metric to the chosen parameter
  - Tune the metric to the solver structure
  - Tune according to the chosen domain
Slide 11: Insights
- The parameter to control should be:
  - capable of high impact on performance, in both directions
  - easy to switch
- The sample size should be large enough to allow a change to take effect
- Switching:
  - It is better to disable switching for a while after a switch is performed
  - The switching condition becomes stronger after each switch
  - Impose a total limit on the number of switches
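The switching discipline on this slide can be sketched as a small controller. The formulation and all constants below are my own illustrative assumptions, not values from the paper: a cooldown of a few samples after each switch, a threshold that strengthens multiplicatively per switch, and a hard cap on total switches.

```python
class SwitchController:
    """Decides, once per sample, whether to switch the controlled option."""

    def __init__(self, threshold, cooldown=3, strengthen=1.5, max_switches=4):
        self.threshold = threshold        # metric score that triggers a switch
        self.cooldown = cooldown          # samples to wait after each switch
        self.strengthen = strengthen      # threshold multiplier per switch
        self.max_switches = max_switches  # total switch budget
        self.wait = 0
        self.switches = 0

    def should_switch(self, score):
        if self.wait > 0:
            self.wait -= 1                # switching disabled for a while
            return False
        if self.switches >= self.max_switches:
            return False                  # total limit reached
        if score > self.threshold:        # "not progressing well"
            self.switches += 1
            self.wait = self.cooldown
            self.threshold *= self.strengthen  # condition becomes stronger
            return True
        return False
```

The cooldown gives the change a chance to take effect before it can be judged, and the strengthening threshold keeps the solver from oscillating between configurations.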
Slide 12: Experimentation
- IBM benchmarks
- A time out was set; no runs timed out, which prevents the time-out constant from influencing the speedup results
- Parameter controlled: the value given to a decision variable
  - First the decision variable is chosen, and only then its value
  - By default, the value follows the literal with the higher score, attempting to satisfy more clauses
  - The -sign option switches the choice to the literal with the lower score, attempting to generate more conflicts
- In general the default is much better, but in some cases "-sign" improves run times significantly
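The value choice described above can be illustrated in a few lines. This assumes a hypothetical per-literal score table (the function and parameter names are mine, not the solver's API):

```python
def choose_phase(var, score, sign_mode=False):
    """Return the literal (var or -var) to assign True for a decision.

    Default: the higher-scoring literal, attempting to satisfy more clauses.
    With sign_mode (the -sign option): the lower-scoring literal,
    attempting to generate more conflicts, and thereby more learning.
    """
    pos = score.get(var, 0)    # score of the positive literal
    neg = score.get(-var, 0)   # score of the negative literal
    better = var if pos >= neg else -var
    return -better if sign_mode else better
```

Note that the variable itself is already fixed before this function runs; only the phase (value) is being controlled, which is what makes it a cheap parameter to switch adaptively.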
Slide 13: Experimentation
[Results table: configurations Native, Sign, DL, CCS, BIN, BCP, UNARY; rows for SAT, UNSAT, and ALL instances, each reporting Time, Speedup, Min, and Max; numeric data not preserved in this transcript]
- Works better on UNSAT instances
- The maximum speedup is on the larger examples
- The detrimental effect is more pronounced on smaller examples
[Charts: global runtime, native vs. adaptive]
Slide 14: Conclusion
- Adaptive solving enables making use of ideas that don't always work
  - Enabling an option on parts of the search space can give better results than enabling or disabling it all of the time, even when the option is inherently bad for the example at hand!
- More and better metrics are needed
  - Combine metrics
  - Relate metrics to the parameters they control
- Adaptation can also be applied in between BMC instances
  - Be careful: the "best configuration" for short instances is not the best one for long instances (experimentation found no correlation)