
An Empirical Study of the Performance of Preprocessing and Look-ahead Techniques for Solving Finite Constraint Satisfaction Problems Zheying Jane Yang.





2 An Empirical Study of the Performance of Preprocessing and Look-ahead Techniques for Solving Finite Constraint Satisfaction Problems
Zheying Jane Yang
Under the supervision of Prof. B.Y. Choueiry
July 30, 2003
Constraint Systems Laboratory, Department of Computer Science and Engineering, University of Nebraska-Lincoln

3 Outline
1. Motivation & previous results
2. Background
3. Experimental design and empirical study
4. Results and analysis
5. Conclusions & relation to previous work
6. Summary of contributions
7. Future work

4 1. Motivation
- CSPs are used to model NP-complete decision problems
- Search (exponential) is necessary; its performance is improved with:
  - Preprocessing algorithms: remove inconsistent combinations prior to search
  - Look-ahead algorithms: remove inconsistencies during search
- Process (from the slide's diagram): CSP → Preprocessing → Smaller CSP → Search w/ look-ahead → Solution(s)

5 Algorithms studied & goal
- Several competing algorithms
  - Preprocessing: removes inconsistencies prior to search
    - Arc-consistency: AC3, AC2001
    - Neighborhood inverse consistency (NIC), requires search
  - Look-ahead: filters the search space during search
    - Forward checking (FC)
    - Maintaining arc-consistency (MAC)
- Controversies about their relative performance exist
Our goal is to characterize empirically the relative performance of combinations of preprocessing and look-ahead schemes as a function of the CSP's constraint probability & tightness

6 Current beliefs & results
Preprocessing:
- AC3 vs AC2001: AC2001 is better (Bessière & Régin 2001)
- AC vs NIC: NIC is better (Freuder & Elfe 1996)
Look-ahead (MAC vs FC):
- FC is better (Haralick & Elliott 1980; Nadel 1989)
- MAC is much better (Sabin & Freuder 1994; Bessière & Régin 1996)
- MAC is the winner in large sparse CSPs; FC is the winner in dense CSPs (Gent & Prosser 2000)
- No winner performs extremely well on all types of CSPs (Grant 1997; Gent & Prosser 2000)
Our results: we specify when the above claims hold or not

7 2. Background

8 What is a CSP?
- A CSP is defined as a triple P = (V, D, C): variables, domains, constraints
- The goal is to find one solution, or all solutions
- Example (reconstructed from the slide's figure): variables X, Y, Z with domains X = {a, b}, Y = {c, d}, Z = {e, f}, and constraints given in extension: C(X,Y) = {(b,c), (b,d)}, C(X,Z) = {(b,e), (a,f)}, C(Y,Z) = {(c,e), (d,f)}
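The triple and the example above can be made concrete in code. This is an illustrative Python sketch (the thesis implementation is in Java); the brute-force enumerator exists only for exposition — it is exactly the exponential search the rest of the talk tries to improve on.

```python
from itertools import product

# A binary CSP as a triple P = (V, D, C), using the slide's example:
# variables X, Y, Z; domains X:{a,b}, Y:{c,d}, Z:{e,f};
# constraints given in extension as sets of allowed pairs.
V = ['X', 'Y', 'Z']
D = {'X': ['a', 'b'], 'Y': ['c', 'd'], 'Z': ['e', 'f']}
C = {('X', 'Y'): {('b', 'c'), ('b', 'd')},
     ('X', 'Z'): {('b', 'e'), ('a', 'f')},
     ('Y', 'Z'): {('c', 'e'), ('d', 'f')}}

def solutions(V, D, C):
    """Enumerate all solutions by brute force (exponential; illustration only)."""
    for combo in product(*(D[v] for v in V)):
        assignment = dict(zip(V, combo))
        if all((assignment[u], assignment[v]) in allowed
               for (u, v), allowed in C.items()):
            yield tuple(combo)

print(list(solutions(V, D, C)))   # the example has the single solution ('b', 'c', 'e')
```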

9 Solving CSPs with systematic BT search
- Backtrack search is sound and complete, but exponential
  - expands a partial solution
  - explores every combination systematically
- Improving the performance of search:
  - Preprocessing (constraint filtering/propagation)
  - Look-ahead (constraint filtering during search)
  - Backtracking: chronological/intelligent
  - Variable & value ordering: static/dynamic, heuristic
  - etc.
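The plain chronological backtracking described above can be sketched as follows (illustrative Python; variable ordering is static here, whereas the experiments use dynamic least-domain ordering):

```python
# Tiny example (the slides' X, Y, Z CSP) to exercise the search:
V = ['X', 'Y', 'Z']
D = {'X': ['a', 'b'], 'Y': ['c', 'd'], 'Z': ['e', 'f']}
C = {('X', 'Y'): {('b', 'c'), ('b', 'd')},
     ('X', 'Z'): {('b', 'e'), ('a', 'f')},
     ('Y', 'Z'): {('c', 'e'), ('d', 'f')}}

def backtrack(assignment, V, D, C):
    """Chronological backtracking: sound and complete, worst-case exponential."""
    if len(assignment) == len(V):
        return dict(assignment)
    var = V[len(assignment)]                 # static variable ordering
    for val in D[var]:
        assignment[var] = val
        # check the partial solution against every fully instantiated constraint
        if all((assignment[u], assignment[v]) in allowed
               for (u, v), allowed in C.items()
               if u in assignment and v in assignment):
            result = backtrack(assignment, V, D, C)
            if result is not None:
                return result
        del assignment[var]                  # undo and try the next value
    return None                              # dead end: backtrack

print(backtrack({}, V, D, C))
```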

10 Solving a CSP
We study the performance of:
- Preprocessing: enforces a given level of local consistency by deleting inconsistent values from variable domains prior to search. Algorithms: AC3, AC2001, NIC
- Look-ahead: following each assignment, removes from the domains of the future variables the values that are inconsistent with the current assignment. Algorithms: FC & MAC
Hybrid search = preprocessing x look-ahead

11 Preprocessing a graph coloring problem
Example (reconstructed from the slide's figure): V1 ∈ {R, G}, V2 ∈ {G}, V3 ∈ {R, G, B}, with an inequality constraint on each edge of the triangle. Enforcing arc-consistency removes G from V3 and G from V1 (no support in V2), then R from V3 (no support in the reduced V1), leaving V1 = {R}, V2 = {G}, V3 = {B}.
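The arc-consistency preprocessing shown on this graph-coloring example can be sketched with AC-3 (illustrative Python; the data layout is an assumption, not the thesis code):

```python
from collections import deque

def revise(D, C, x, y):
    """Delete values of x with no support in y; True iff D[x] shrank."""
    allowed = C[(x, y)] if (x, y) in C else {(b, a) for (a, b) in C[(y, x)]}
    removed = False
    for a in list(D[x]):
        if not any((a, b) in allowed for b in D[y]):
            D[x].remove(a)
            removed = True
    return removed

def ac3(V, D, C):
    """Enforce arc-consistency (AC-3); False signals a domain wipe-out."""
    arcs = deque(a for (x, y) in C for a in [(x, y), (y, x)])
    while arcs:
        x, y = arcs.popleft()
        if revise(D, C, x, y):
            if not D[x]:
                return False
            # re-examine the other neighbours of x
            neighbours = {a if b == x else b for (a, b) in C if x in (a, b)}
            arcs.extend((z, x) for z in neighbours if z != y)
    return True

# Graph-coloring example from the slide: a triangle of "not equal" constraints.
D = {'V1': ['R', 'G'], 'V2': ['G'], 'V3': ['R', 'G', 'B']}
ne = lambda xs, ys: {(a, b) for a in xs for b in ys if a != b}
C = {('V1', 'V2'): ne('RG', 'G'),
     ('V1', 'V3'): ne('RG', 'RGB'),
     ('V2', 'V3'): ne('G', 'RGB')}
ok = ac3(list(D), D, C)
print(ok, D)   # domains reduce to V1={R}, V2={G}, V3={B}, as in the slide
```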

12-22 Solving the CSP with forward checking (FC)
These slides animate FC on the graph-coloring example: after each assignment, FC filters the domains of the future variables; when a future domain is wiped out, search backtracks chronologically and tries the next value. Two domain wipe-outs force backtracking before the trace reaches the solution shown on the final slide (V1 = R, V2 = G, V3 = B).
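The forward-checking search traced in slides 12-22 can be sketched as follows (illustrative Python, with the undo-on-backtrack step made explicit; the thesis implementation is in Java):

```python
def forward_check(var, val, domains, C, assigned):
    """Prune future domains inconsistent with var=val.
    Returns the list of prunings, or None (after undoing them) on a wipe-out."""
    pruned = []
    for (u, v), allowed in C.items():
        if var not in (u, v):
            continue
        other = v if u == var else u
        if other in assigned:
            continue                           # only future variables are filtered
        for w in list(domains[other]):
            pair = (val, w) if u == var else (w, val)
            if pair not in allowed:
                domains[other].remove(w)
                pruned.append((other, w))
        if not domains[other]:                 # domain wiped out: undo and fail
            for (x, w) in pruned:
                domains[x].append(w)
            return None
    return pruned

def fc_search(V, domains, C, assigned=None):
    """Chronological backtracking with forward checking (FC)."""
    assigned = {} if assigned is None else assigned
    if len(assigned) == len(V):
        return dict(assigned)
    var = V[len(assigned)]
    for val in list(domains[var]):
        assigned[var] = val
        pruned = forward_check(var, val, domains, C, assigned)
        if pruned is not None:
            result = fc_search(V, domains, C, assigned)
            if result is not None:
                return result
            for (x, w) in pruned:              # undo prunings before next value
                domains[x].append(w)
        del assigned[var]
    return None

# Same coloring triangle as in the preprocessing example:
domains = {'V1': ['R', 'G'], 'V2': ['G'], 'V3': ['R', 'G', 'B']}
ne = lambda xs, ys: {(a, b) for a in xs for b in ys if a != b}
C = {('V1', 'V2'): ne('RG', 'G'),
     ('V1', 'V3'): ne('RG', 'RGB'),
     ('V2', 'V3'): ne('G', 'RGB')}
result = fc_search(['V1', 'V2', 'V3'], domains, C)
print(result)
```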

23 3. Experimental design & empirical study

24 Algorithms tested
- Preprocessing (5 algorithms): AC3, AC2001, NIC-FC, NIC-MAC-AC3, NIC-MAC-AC2001
- Look-ahead (3 algorithms): FC, MAC-AC3, MAC-AC2001
- Hybrid search: 7 combinations of the above
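One of the hybrids above, AC3 preprocessing followed by MAC search, can be sketched by re-running AC-3 after every assignment (illustrative Python; real MAC implementations propagate incrementally instead of deep-copying the domains, which is part of why the cost trade-offs studied here arise):

```python
import copy
from collections import deque

def revise(D, C, x, y):
    """Keep only values of x that have a support in y; True iff D[x] shrank."""
    allowed = C[(x, y)] if (x, y) in C else {(b, a) for (a, b) in C[(y, x)]}
    before = len(D[x])
    D[x] = [a for a in D[x] if any((a, b) in allowed for b in D[y])]
    return len(D[x]) < before

def ac3(V, D, C):
    """Enforce arc-consistency; False signals a domain wipe-out."""
    arcs = deque(a for (x, y) in C for a in [(x, y), (y, x)])
    while arcs:
        x, y = arcs.popleft()
        if revise(D, C, x, y):
            if not D[x]:
                return False
            nbrs = {a if b == x else b for (a, b) in C if x in (a, b)}
            arcs.extend((z, x) for z in nbrs if z != y)
    return True

def mac_search(V, D, C, assigned=None):
    """MAC: re-enforce arc-consistency after every assignment. Propagation runs
    on a copy of the domains, so undoing a choice just discards the copy."""
    assigned = assigned or {}
    if len(assigned) == len(V):
        return dict(assigned)
    var = V[len(assigned)]
    for val in D[var]:
        D2 = copy.deepcopy(D)
        D2[var] = [val]
        assigned[var] = val
        if ac3(V, D2, C):                     # propagation failed => next value
            found = mac_search(V, D2, C, assigned)
            if found is not None:
                return found
        del assigned[var]
    return None

# AC3-preprocessing + MAC hybrid on the coloring triangle:
D = {'V1': ['R', 'G'], 'V2': ['G'], 'V3': ['R', 'G', 'B']}
ne = lambda xs, ys: {(a, b) for a in xs for b in ys if a != b}
C = {('V1', 'V2'): ne('RG', 'G'), ('V1', 'V3'): ne('RG', 'RGB'),
     ('V2', 'V3'): ne('G', 'RGB')}
result = mac_search(list(D), D, C) if ac3(list(D), D, C) else None
print(result)
```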

25 Working assumptions
- Considered only binary constraints
- Restricted to finding the first solution
- Restricted to chronological backtracking
- Least domain (LD) used as the variable ordering heuristic, applied dynamically
- No value ordering heuristic (too costly in general)

26 Problems tested
- Random CSPs
  - We need many instances with similar characteristics in order to analyze the performance of search
  - Real-world problems cannot be controlled by explicit parameters to enable statistical analysis of the performance
- Generated instances
  - connected graphs
  - instances guaranteed solvable

27 Control parameters
- Number of variables n: we choose n = 50
- Domain size d (uniform): we choose d = 10. Thus, the problem size is 10^50, relatively large
- Constraint tightness t = (number of disallowed tuples) / (number of all possible tuples), uniform across constraints. We vary t = [0.1, 0.2, ..., 0.9] and t = [0.05, 0.1, ..., 0.95]
- Number of constraints C determines the constraint probability p = C / C_max, where C_max = n(n-1)/2. We vary C = [20, 490], corresponding to p = [0.024, 0.4]; we report C = 30, 50, 80, 120, 130, 245, 340, 490
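A generator in the spirit of the one described here (connected constraint graph, planted solution for guaranteed solvability, parameters n, d, C, t) can be sketched as follows; the function name and parameter handling are illustrative assumptions, not the thesis generator:

```python
import random

def random_csp(n=50, d=10, C=120, t=0.3, seed=0):
    """Random, connected, solvable binary CSP (sketch). Assumes C >= n-1 so the
    spanning tree fits; tightness is approximate near the planted tuple."""
    rng = random.Random(seed)
    c_max = n * (n - 1) // 2
    p = C / c_max                                    # constraint probability
    planted = [rng.randrange(d) for _ in range(n)]   # guarantees solvability
    edges = set()
    for i in range(1, n):                            # spanning tree => connected
        edges.add((rng.randrange(i), i))
    while len(edges) < C:                            # extra random edges up to C
        i, j = sorted(rng.sample(range(n), 2))
        edges.add((i, j))
    constraints = {}
    for (i, j) in edges:
        tuples = [(a, b) for a in range(d) for b in range(d)]
        rng.shuffle(tuples)
        forbidden = set(tuples[:round(t * d * d)])   # disallow a fraction t
        forbidden.discard((planted[i], planted[j]))  # keep planted tuple legal
        constraints[(i, j)] = set(tuples) - forbidden
    return constraints, planted, p

cons, sol, p = random_csp(n=10, d=5, C=20, t=0.4, seed=1)
print(len(cons), p)
```

The planted assignment is never forbidden, so every generated instance has at least one solution, matching the "instances guaranteed solvable" requirement above.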

28 Comparisons
- Evaluation criteria:
  - Filtering effectiveness: measures the reduction of the CSP size
  - Number of constraint checks (#CC): measures the filtering effort
  - Number of nodes visited (#NV): measures the backtracking effort
  - CPU time [ms]: measures overall performance. Since constraints are defined in extension, CPU time reflects #CC
- Preprocessing compared on: filtering effectiveness, #CC, CPU time
- Hybrid search compared on: #CC, #NV, CPU time

29 Sampling and code characteristics
- Evaluated performance by running each algorithm
  - on 30 CSP instances of equal characteristics,
  - calculating average & standard deviation values for each evaluation criterion
- Generated approximately 6,000 CSP instances
- Implemented in Java: 21 classes, 4,000 lines of code
- Experiments carried out on PrairieFire.unl.edu

30 4. Results and analysis
Part I: Preprocessing
Part II: Hybrids

31 Preprocessing: filtering effectiveness
NIC reduces the search space by a large amount

32 Preprocessing results
(» = better than, ≈ = comparable; p ranges: p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2)
- Filtering effectiveness: increases with p; NIC-MAC-x » NIC-FC » AC3 ≈ AC2001

33 Preprocessing: #CC, sparse CSPs
#CC(AC3) > #CC(AC2001); #CC(NICx) >> #CC(AC3), #CC(AC2001)

34 Preprocessing: #CC, denser CSPs
When p = 0.2, the NICx variants are all costly; NIC should never be combined with MAC

35 Preprocessing results
(» = better than, ≈ = comparable; p ranges: p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2)
- Filtering effectiveness: increases with p; NIC-MAC-x » NIC-FC » AC3 ≈ AC2001
- #CC: AC2001 » AC3 » NICx. Never use MAC with NIC

36 Preprocessing: CPU time
When p > 0.2, NIC-x is too costly

37 Preprocessing results
(» = better than, ≈ = comparable; p ranges: p < 0.05, 0.05 ≤ p < 0.1, 0.1 ≤ p < 0.2, p ≥ 0.2)
- Filtering effectiveness: increases with p; NIC-MAC-x » NIC-FC » AC3 ≈ AC2001
- #CC: AC2001 » AC3 » NICx. Never use MAC with NIC
- CPU time: AC3 » AC2001 » NICx

38 Preprocessing: summary
When to use what:
- ACx: preprocessing not effective for p < 0.2; effective for p ≥ 0.2
- NIC-x: quite effective for p < 0.05; effective for 0.05 ≤ p < 0.2, but MAC too costly; too costly for p ≥ 0.2, avoid
Conclusions:
- AC2001 is not significantly better than AC3, and is not worth the extra data structures. In general, we disagree with Bessière and Régin
- NIC-x: powerful filtering, but too costly. Use it on large problems when checking constraints is cheap

39 Hybrids: #CC, p = 5%, 11%
When p < 0.05: ACx is not effective, ACx-FC are costly, NIC-FC-FC is OK, NIC-MAC-x is best
When p > 0.10: avoid MAC, stick with FC

40 Hybrids results
#CC & CPU time (» = better than, ≈ = comparable):
- p < 0.05: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC
- 0.05 ≤ p < 0.10: NIC-FC-FC » NIC-MAC-ACx » ACx-FC ≈ AC-MAC-x
When to use what:
- FC: avoid for p < 0.05; FC dominates for 0.05 ≤ p < 0.10
- MAC: MAC dominates for p < 0.05; avoid for 0.05 ≤ p < 0.10
- NIC: NIC helps for p < 0.05; NIC still helps for 0.05 ≤ p < 0.10
- ACx: ACx useless in both regions

41 Hybrids: #CC, p = 15%, 28%
As p increases: MAC deteriorates, NIC becomes expensive; use ACx-FC

42 Hybrids results
#CC & CPU time (» = better than, ≈ = comparable):
- p < 0.05: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC
- 0.05 ≤ p < 0.10: NIC-FC-FC » NIC-MAC-ACx » ACx-FC ≈ AC-MAC-x
- 0.10 ≤ p < 0.15: NIC-FC-FC » ACx-FC » NIC-MAC-ACx » AC-MAC-x
- 0.15 ≤ p < 0.2: ACx-FC » NIC-FC-FC » NIC-MAC-ACx » AC-MAC-x
- p ≥ 0.2: ACx-FC » AC-MAC-x » NIC-FC-FC » NIC-MAC-ACx
When to use what:
- FC: avoid for p < 0.05; dominates for p ≥ 0.05
- MAC: dominates for p < 0.05; avoid for p ≥ 0.05
- NIC: helps for p < 0.05; still helps for 0.05 ≤ p < 0.10; deteriorates for 0.10 ≤ p < 0.15; avoid for p ≥ 0.15
- ACx: useless for p < 0.10; starts to work for 0.10 ≤ p < 0.15; helps for p ≥ 0.15

43 Hybrids: #NV
At the phase transition, NIC and MAC both do powerful filtering, but the influence of MAC is stronger

44 Hybrids: summary
#CC & CPU time (» = better than, ≈ = comparable):
- p < 0.05: NIC-MAC-ACx » NIC-FC-FC » AC-MAC-x » ACx-FC
- 0.05 ≤ p < 0.10: NIC-FC-FC » NIC-MAC-ACx » ACx-FC ≈ AC-MAC-x
- 0.10 ≤ p < 0.15: NIC-FC-FC » ACx-FC » NIC-MAC-ACx » AC-MAC-x
- 0.15 ≤ p < 0.2: ACx-FC » NIC-FC-FC » NIC-MAC-ACx » AC-MAC-x
- p ≥ 0.2: ACx-FC » AC-MAC-x » NIC-FC-FC » NIC-MAC-ACx
#NV (the two regimes shown on the slide):
- NIC-MAC-ACx » AC-MAC-x » NIC-FC-FC » ACx-FC
- AC-MAC-x » NIC-MAC-ACx » ACx-FC » NIC-FC-FC
When to use what:
- FC: avoid for p < 0.05; dominates for p ≥ 0.05
- MAC: dominates for p < 0.05; avoid for p ≥ 0.05
- NIC: helps for p < 0.05; still helps for 0.05 ≤ p < 0.10; deteriorates for 0.10 ≤ p < 0.15; avoid for p ≥ 0.15
- ACx: useless for p < 0.10; starts to work for 0.10 ≤ p < 0.15; helps for p ≥ 0.15

45 5. Conclusions
- Preprocessing
  - AC3 vs. AC2001: AC2001 is not significantly better than AC3, and is not worth the extra data structures. In general, we disagree with Bessière and Régin
  - NIC-x: powerful filtering, but too costly. Use it on large problems when checking constraints is cheap
- Look-ahead
  - MAC vs. FC: performance depends on constraint probability and tightness. MAC only wins at low p and high t. In general, we disagree with the results of Sabin, Freuder, Bessière & Régin

46 Relation to previous work (I)
- NIC is better than AC (Freuder & Elfe 1996): the instances tested all had low probability (p < 0.05). In this region, AC is ineffective.
- MAC is better than FC (Sabin & Freuder 1994): they tested CSPs with low probability (p = 0.018-0.09) and relatively high constraint tightness (t = 0.15-0.675). In our study, MAC is effective in this region, but not outside it.
- MAC is better than FC (Bessière & Régin 1996): the instances tested were also in the region of low probability (p = 0.017, 0.024, 0.074, 0.08, 0.1, 0.12, 0.15), except instance #1 and instance #2, which had relatively high probability (p = 0.3 and 0.84); but only those 2 instances were tested.

47 Relation to previous work (II)
- Gent & Prosser 2000 questioned the validity of previous results on MAC. They concluded that:
  - in large, sparse CSPs with tight constraints, MAC is the winner
  - in dense CSPs with loose constraints, FC is the winner
- Grant 1997 showed that FC is the winner on small CSPs across the whole range of probabilities
- All concluded that "a champion algorithm which performs extremely well on all types of problems does not exist"
- Our characterizations are more thorough and precise.

48 6. Summary of contributions
- A random generator that guarantees solvability
- Empirical evaluation of the performance of 7 combinations of preprocessing and look-ahead
- Uncovered the (restricted) validity conditions of previously reported results
- Summarized the best working conditions for preprocessing and look-ahead algorithms
- Developed a Java library with 7 hybrid algorithms

49 7. Directions for future work
- Compare to other, less common filtering algorithms, e.g. SRPC, PC, SAC, Max-RPC (Debruyne & Bessière 2001)
- Combine these preprocessing algorithms with intelligent backtracking search algorithms
- Validate the results on larger CSPs, real-world applications, and non-binary constraints
- Test and analyze the effect of the topology of the constraint network on the performance of search

50 Acknowledgments
Dr. Berthe Y. Choueiry (advisor), Dr. Sebastian Elbaum, Dr. Peter Revesz, Ms. Catherine L. Anderson, Mr. Daniel Buettner, Ms. Deborah Derrick (proofreading), Mr. Eric Moss, Mr. Lin Xu, Ms. Yaling Zheng, Mr. Hui Zou

51 THANK YOU!




