
Slide 1: Compressing Pattern Databases
Ariel Felner, Bar-Ilan University. March 2004. Joint work with Ram Meshulam, Robert Holte and Richard E. Korf. Submitted to AAAI-04. Available at:

Slide 2: A* and its variants
A* (and IDA*) is a best-first search algorithm that uses f(n) = g(n) + h(n) as its cost function. Nodes are sorted in an open list according to their f-value. g(n) is the cost of the shortest known path from the initial node to the current node n. h(n) is an admissible (lower-bound) heuristic estimate of the cost from n to the goal node. Recently, attention has shifted towards creating more accurate heuristic functions.
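The cost function above can be illustrated with a minimal A* sketch on a toy graph. Everything here (the graph, the zero heuristic) is made up for illustration; any admissible h(n) can be plugged in.

```python
import heapq

def astar(start, goal, neighbors, h):
    """Minimal A*: expand nodes in order of f(n) = g(n) + h(n)."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_list, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Toy graph; h(n) = 0 is trivially admissible (a lower bound of zero).
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 5)],
         "b": [("g", 1)], "g": []}
cost, path = astar("s", "g", lambda n: graph[n], lambda n: 0)
```

With a zero heuristic this degenerates to Dijkstra's algorithm; the point of the rest of the talk is how to build a much stronger h(n) from pattern databases.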

Slide 3: Pattern databases
Many problems can be decomposed into subproblems (patterns) that must also be solved. The pattern space is a domain abstraction of the original space. The cost of a solution to a subproblem is a lower bound on the cost of the complete solution. Instead of calculating these lower bounds on the fly, we expand the whole pattern space and store the solution cost of each pattern configuration in a pattern database. (Figure: a mapping function abstracts the search space into the pattern space.)
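Expanding the whole pattern space amounts to a breadth-first search backward from the goal pattern, recording each configuration's distance. A sketch, with a deliberately tiny made-up pattern space (one token sliding on a strip of 4 cells):

```python
from collections import deque

def build_pdb(goal_pattern, predecessors):
    """BFS backward from the goal over the pattern space; the stored
    distance of each pattern configuration is the PDB entry."""
    pdb = {goal_pattern: 0}
    frontier = deque([goal_pattern])
    while frontier:
        p = frontier.popleft()
        for q in predecessors(p):
            if q not in pdb:
                pdb[q] = pdb[p] + 1
                frontier.append(q)
    return pdb

# Toy pattern space: a token at cell 0..3; a move shifts it one cell.
def preds(pos):
    return [p for p in (pos - 1, pos + 1) if 0 <= p < 4]

pdb = build_pdb(0, preds)   # goal: token at cell 0
```

During search, a state is mapped to its pattern configuration and the PDB entry is looked up in O(1) as the heuristic value.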

Slide 4: Non-additive pattern databases
Fringe database for the 15-puzzle by [Culberson and Schaeffer 1996]: it stores the number of moves including moves of tiles not in the pattern. Rubik's Cube [Korf 1997]. The best way to combine different non-additive pattern databases is to take their maximum!

Slide 5: Additive pattern databases
We can add values from different pattern databases if the patterns are disjoint (and each counts only its own moves). There are two ways to build additive databases:
- Statically-partitioned additive databases (these were also called disjoint pattern databases)
- Dynamically-partitioned additive databases
Applications of additive pattern databases: tile puzzles; the 4-peg Towers of Hanoi puzzle (TOH4).
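Because each disjoint PDB counts only moves of its own tiles, the lookups can be summed and the total remains admissible. A sketch of the lookup, with a hypothetical 3-tile state and made-up PDB contents:

```python
def additive_h(state, partitions, pdbs):
    """Sum values from disjoint PDBs; each PDB counts only its own
    pattern's moves, so the sum is still a lower bound."""
    total = 0
    for pattern, pdb in zip(partitions, pdbs):
        key = tuple(state[t] for t in pattern)   # positions of this pattern's tiles
        total += pdb[key]
    return total

# Hypothetical example: tiles 1,2 form one pattern, tile 3 another.
state = {1: 2, 2: 0, 3: 1}            # tile -> current position
partitions = [(1, 2), (3,)]
pdbs = [{(2, 0): 3}, {(1,): 1}]       # precomputed pattern costs (made up)
h = additive_h(state, partitions, pdbs)   # 3 + 1 = 4
```

Contrast with the non-additive case of the previous slide, where overlapping PDBs may only be combined with max.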

Slide 6: Statically-partitioned additive databases
These were created for the 15- and 24-puzzles [Korf & Felner 2002]. We statically partition the tiles into disjoint patterns and compute the cost of moving only those tiles into their goal positions. For the 15-puzzle: 36,710 nodes, 575 MB. For the 24-puzzle: 360,892,479,671 nodes, 2 days, 242 MB.

Slide 7: 4-peg Towers of Hanoi (TOH4)
There is a conjecture about the length of the optimal path, but it has not been proven. Systematic search is the only way to solve this problem or to verify the conjecture. There are too many cycles: IDA*, as a depth-first search, will not prune these cycles. Therefore A* (actually frontier A* [Korf & Zhang 2000]) was used.

Slide 8: Additive PDBs for TOH4
Partition the disks into disjoint sets (patterns), for example 10 and 6 for the 16-disk problem. Store the cost of the complete pattern space of each set in a pattern database. (There are many enhancements.) The n-disk problem contains 4^n states, and 2n bits suffice to store each state. The largest database we stored was for 14 disks, which needed 4^14 entries = 256 MB.
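The "2n bits per state" encoding follows because each disk is on one of 4 pegs, i.e. 2 bits per disk. A sketch of the index function (the helper names are my own):

```python
def toh4_index(pegs):
    """Pack a TOH4 configuration into a PDB index: 2 bits per disk,
    giving the peg (0-3) each disk sits on. n disks -> 4**n indices."""
    idx = 0
    for disk, peg in enumerate(pegs):
        idx |= peg << (2 * disk)
    return idx

def toh4_unpack(idx, n):
    """Inverse of toh4_index: recover the peg of each of the n disks."""
    return [(idx >> (2 * disk)) & 3 for disk in range(n)]

# 4 disks on pegs 3, 0, 2, 1 respectively.
idx = toh4_index([3, 0, 2, 1])
```

With this bijection a PDB for P disks is just a flat array of 4**P byte-sized entries, which is where the 4^14 = 256 MB figure comes from.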

Slide 9: TOH4 results, 16 disks
(Table: heuristic type (static/dynamic partitioning), solution length, h(s), average h, nodes, and seconds, for the 16-disk and 17-disk problems.)

Slide 10: How best to use the memory
The speed of the search is directly related to the size of the pattern database. We usually omit the computation time of the PDBs, but we cannot ignore their memory requirements. [Holte, Newton, Felner, Meshulam and Furcy 2004] showed that it is better to use many small databases and take their maximum than to use one large database. We limit the discussion to 1 gigabyte.

Slide 11: Compressing pattern databases
Traditionally, each configuration of the pattern had a unique entry in the PDB. Our main claim: nearby entries in PDBs are highly correlated!! We propose to compress nearby entries by storing their minimum in one entry. We show that most of the knowledge is preserved. Consequences: memory is saved, larger patterns can be used, and a speedup in search is obtained.

Slide 12: Cliques in the pattern space
The values in a PDB for a clique are d or d+1. In permutation puzzles, cliques exist when only one object moves to another location. Usually clique members have nearby entries in the PDB. (Figure: a clique whose nodes are at distance d or d+1 from the goal G.)

Slide 13: Storing cliques
Assume a clique of size K with values d or d+1.
Lossy compression: store only one entry for the clique, holding the minimum d. We lose at most 1.
Lossless compression: store the minimum d, plus K additional bits, one per entry. (Figure: a clique in TOH4.)
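The lossless variant can be sketched directly from the slide: since every clique value is d or d+1, one bit per entry records the offset from the stored minimum (function names are my own):

```python
def compress_clique_lossless(entries):
    """Clique of size K with values d or d+1: store min d plus K bits."""
    d = min(entries)
    bits = [v - d for v in entries]   # each offset is 0 or 1 by assumption
    assert all(b in (0, 1) for b in bits), "clique values must be d or d+1"
    return d, bits

def lookup(d, bits, i):
    """Recover the exact original value of entry i."""
    return d + bits[i]

d, bits = compress_clique_lossless([5, 6, 6, 5])
```

So a clique of K byte-sized entries shrinks to one byte plus K bits, with no loss; the lossy variant simply drops the bit vector and returns d for every member, underestimating by at most 1.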

Slide 14: Compressing PDBs in TOH4
If we compress away the last index (the smallest disk), a PDB for P disks can be stored in only 4^(P-1) entries instead of 4^P. This generalizes to any set of nodes with diameter D (for cliques, D=1). For TOH4, we fix the positions of the largest P-2 disks and compress all 4^2 = 16 entries of the smallest 2 disks. In general, compressing any block of entries will work, not necessarily cliques.
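Block compression is then a one-liner over the flat PDB array: each block of consecutive entries (e.g. the 16 configurations of the two smallest disks, which share the same index prefix) is replaced by its minimum, and lookups divide the index by the block size. A sketch with a made-up 8-entry PDB:

```python
def compress_blocks(pdb, block):
    """Replace each block of consecutive PDB entries by its minimum.
    A lookup at index i becomes compressed[i // block]: a lower bound
    on the true entry, so admissibility is preserved."""
    return [min(pdb[i:i + block]) for i in range(0, len(pdb), block)]

full = [4, 5, 5, 4, 6, 7, 6, 6]          # toy PDB of 8 entries
small = compress_blocks(full, block=4)    # 8 entries -> 2
h5 = small[5 // 4]                        # true value 7, lower bound 6
```

Compressing the two smallest disks (block = 16) shrinks a 4^P-entry TOH4 PDB to 4^(P-2) entries, and since neighboring entries are highly correlated, little heuristic value is lost.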

Slide 15: TOH4 results, 16 disks (14+2)
(Table: PDB, h(s), average h, D, nodes, time, and memory in MB for the 14/2 PDB at increasing levels of compression.)
Memory was reduced by a factor of 1,000!!! at a cost of only a factor of 2 in the search effort. Lossless compression is not efficient in this domain.

Slide 16: TOH4, larger versions
(Table: PDB size, type (static/dynamic), average h, nodes, time, and memory for 14/2, 15/2, and 16/2 PDBs on the 16-, 17-, and 18-disk problems.)
For the 17-disk problem a speedup of 3 orders of magnitude is obtained!!! The 18-disk problem can be solved in 5 minutes!!

Slide 17: Tile puzzles
We can take advantage of the simple heuristics: we store only the addition above the Manhattan-distance heuristic. Storing PDBs for the tile puzzle:
- Simple mapping: a multi-dimensional array A[16][16][16][16][16], size = 1.04 MB.
- Packed mapping: a one-dimensional array of size 16*15*14*13*12, size = 0.52 MB.
The time and memory tradeoff is straightforward!!
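The packed mapping works because the five pattern tiles occupy distinct cells, so instead of 16^5 slots only 16*15*14*13*12 are needed. A sketch of the mixed-radix index (the ranking scheme is my own; the paper's exact packing may differ):

```python
def packed_index(positions):
    """Map distinct tile positions (each 0-15) to a dense index in
    16*15*14*13*12 slots instead of 16**5."""
    idx = 0
    used = []
    for p in positions:
        # Rank of p among the cells not yet taken by earlier tiles.
        rank = p - sum(1 for u in used if u < p)
        idx = idx * (16 - len(used)) + rank
        used.append(p)
    return idx
```

The simple mapping trades that factor-of-2 memory saving for a cheaper, direct array lookup; hence the time/memory tradeoff mentioned above.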

Slide 18: 15-puzzle results
A clique in the tile puzzle is of size 2. We compressed the last index by two: A[16][16][16][16][8].
(Table: PDB, type (simple/packed), compression (lossy/lossless), nodes, time, memory, and average h.)

Slide 19: 24-puzzle
The same tendencies were observed for the 24-puzzle. The partitioning is so good that adding another set did not speed up the search. We also tried a different partitioning, but it did not speed up the search either.

Slide 20: Ongoing and future work
An entry in the PDB of tiles (a,b,c,d) maps the locations of those tiles to a value d. Store the PDBs in a trie: a PDB of 5 tiles will have one level in the trie for each tile, and the values will be in the leaves of the trie. This data structure enables flexibility and saves memory, since subtrees of the trie can be pruned.

Slide 21: Trie pruning
Simple (lossless) pruning: fold leaves with exactly the same values. No data is lost.
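Lossless folding is essentially hash-consing: identical subtrees are stored once and shared. A sketch over a dict-based trie (a made-up representation; leaf values are the PDB entries):

```python
def fold(node, seen=None):
    """Lossless trie folding: canonicalize children bottom-up, then share
    any subtree already seen. Equal leaf blocks are stored only once."""
    if seen is None:
        seen = {}
    if isinstance(node, dict):
        node = {k: fold(v, seen) for k, v in node.items()}
        # Children are canonical, so their ids identify subtrees uniquely.
        key = tuple(sorted((k, id(v)) for k, v in node.items()))
    else:
        key = ("leaf", node)
    return seen.setdefault(key, node)

# Two branches with identical leaf values fold into one shared subtree.
trie = {"a": {"x": 5, "y": 6}, "b": {"x": 5, "y": 6}}
folded = fold(trie)
```

After folding, the trie is really a DAG: lookups are unchanged, only the storage is shared.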

Slide 22: Trie pruning
Intelligent (lossy) pruning: fold leaves/subtrees that are correlated with each other (there are many options for this!!). Some data is lost, but admissibility is still kept.

Slide 23: Trie: initial results
(Table: PDB, MD, h(s), nodes, time, nodes/sec, and memory. Memory: simple mapping 3,145,728; packed mapping 1,572,480; trie 765,778.)
A partitioning stored in a trie with simple folding.

Slide 24: Neural networks (NN)
We can feed a PDB into a neural-network engine, especially the addition above MD. For each tile we focus on its dx and dy from its goal position (i.e., its MD components). Linear conflict: dx1 = dx2 = 0 and dy1 > dy2 + 1. An NN can learn such rules. (Figure: tiles 2 and 1 with dy1 = 2, dy2 = 0.)

Slide 25: Neural network
We train the NN by feeding it the entire pattern space (or part of it). For example, for a pattern of 5 tiles we have 10 features, 2 for each tile. During the search, given the locations of the tiles, we look them up in the NN.
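The feature extraction described above (2 features per tile) can be sketched as follows; the function name and cell numbering (row-major on a 4x4 board) are my own assumptions:

```python
def md_features(positions, goals, width=4):
    """Per pattern tile, compute |dx| and |dy| from its goal cell on a
    width x width board: a 5-tile pattern yields 10 input features."""
    feats = []
    for p, g in zip(positions, goals):
        feats.append(abs(p % width - g % width))    # dx: column offset
        feats.append(abs(p // width - g // width))  # dy: row offset
    return feats

# Tiles currently at cells 5 and 12, with goal cells 0 and 13.
feats = md_features([5, 12], [0, 13])
```

The sum of these features is exactly the pattern's Manhattan distance, so the network only needs to learn the addition above MD (e.g. linear conflicts) from them.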

Slide 26: Neural network example
(Figure: network layout for the pattern of tiles 4, 5 and 6, with inputs dx4, dy4, dx5, dy5, dx6, dy6.)

Slide 27: Neural network: problems
We face the problem of overestimating, and we have to bias the results towards underestimating. We keep the overestimating values in a separate hash table. Results are encouraging!!
(Table: PDB, h(s), nodes, time, and memory for the regular PDB versus the neural network.)

Slide 28: Selective pattern database
Only part of the pattern space is queried for a single problem instance. If we can identify that part, we can generate only that part.

