
1 Precise Dynamic Slicing Algorithms
Xiangyu Zhang, Rajiv Gupta and Youtao Zhang
Presented by: Krishna Balasubramanian

2 Slicing Techniques
Dynamic Slicing: isolates only the statements that actually computed the value of a variable for the given input. Criterion: (variable, statement instance, input).
Static Slicing: isolates all statements that could possibly compute the value of a particular variable. Criterion: (variable, statement).

3 Example: Data Dependences
Static Slice = {1, 2, 3, 4, 7, 8, 9, 10}
Dynamic Slice = {3, 4, 9, 10}
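The slide's example program and its figure are not reproduced in this transcript, so here is a minimal illustrative program of the same idea (statement labels S1-S7 and the program itself are hypothetical, not the slide's): with a concrete input, only one branch executes, so the dynamic slice drops the statements feeding the other branch.

# Hypothetical example program, not the one in the slide's figure.
# Slicing criterion: the value of z returned at S7.

def demo(n):        # S1: read input n
    a = 1           # S2
    b = 2           # S3
    if n > 0:       # S4
        z = a + n   # S5: only runs when n > 0
    else:
        z = b       # S6: only runs when n <= 0
    return z        # S7

# Considering data dependences only (as in the slide's example):
#   Static slice of z at S7  = {S1, S2, S3, S5, S6, S7}  (both branches possible)
#   Dynamic slice for n = 0  = {S3, S6, S7}              (only the executed path)
print(demo(0))      # prints 2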

4 Slice Sizes: Static vs Dynamic

Program     Statements   Static    Dynamic   Static/Dynamic
126.gcc     585,491      51,098    6,...     ...
go          95,459       16,941    5,...     ...
perl        116,182      5,...     ...       ...
li          31,829       2,...     ...       ...
expresso    74,039       2,...     ...       ...
(values shown as "..." are truncated in the transcript)

Static slicing gives huge slices; on average, static slices are much larger than dynamic slices.

5 Precise Dynamic Slicing
Data dependences exercised during program execution are captured precisely and saved.
Only the dependences occurring in a specific execution of the program are considered.
Dynamic slices are constructed upon a user's request by traversing the captured dynamic dependence information.
Limitation: costly to compute.

6 Imprecise Dynamic Slicing
Reduces the cost of slicing, but was found to greatly increase slice sizes.
Larger slices reduce effectiveness, so it is worthwhile to use precise algorithms.

7 Precise vs Imprecise: Slice Size
Two imprecise algorithms were implemented: Algorithm I and Algorithm II.
Imprecision increases the slice size.
Algorithm II is better than Algorithm I.

8 Precise Dynamic Slicing: Approach
The program is executed and its execution trace is collected.
Precise dynamic slicing (PDS) then involves:
Preprocessing: builds the dependence graph by recovering dynamic dependences from the program's execution trace.
Slicing: computes slices for given slicing requests by traversing the dynamic dependence graph.

9 Three Algorithms Proposed
Full preprocessing (FP): builds the entire dependence graph before slicing.
No preprocessing (NP): no preprocessing is performed; demand-driven analysis is done during slicing, and the recovered dependences are cached.
Limited preprocessing (LP): adds summary information to the execution trace, then uses demand-driven analysis to recover dynamic dependences from the compacted trace.
What do you think is better, and why?

10 Comparison
The FP algorithm is impractical for real programs: it runs out of memory during the preprocessing phase because dynamic dependence graphs are extremely large.
The NP algorithm does not run out of memory, but is slow.
The LP algorithm is practical: it never runs out of memory and is fast.

11 1) Full Preprocessing
Edges corresponding to data dependences are extracted from the execution trace and added to the statement-level control flow graph.
Execution instances are labeled on the graph.
The instance labels are used during slicing, so only relevant edges are traversed.

12 FP Example: Instance Labels
The load-to-store edge on the left is labeled (1,1); the load-to-store edge on the right is labeled (2,1).
The 1st/2nd instance of the load's execution gets its value from the 1st instance of execution of the store on the left/right.
When the load is included in a dynamic slice, it is therefore not necessary to include both stores in the slice; see the sketch below.
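A minimal sketch of how an FP-style graph can attach instance labels to data dependence edges; the data structure, names, and statement identifiers here are assumptions for illustration, not the authors' implementation.

from collections import defaultdict

# Dynamic dependence graph: for each use statement we keep edges
# (use_instance, def_stmt, def_instance).  The instance labels record
# WHICH execution of the load got its value from WHICH execution of a
# store, so including the load in a slice does not force us to include
# every store that ever defined the address.
dep_edges = defaultdict(list)

def add_edge(use_stmt, use_inst, def_stmt, def_inst):
    dep_edges[use_stmt].append((use_inst, def_stmt, def_inst))

# The situation from the slide: the load's 1st instance depends on the
# left store (label (1,1)); its 2nd instance depends on the right store
# (label (2,1)).  Statement identifiers are hypothetical.
LOAD, STORE_LEFT, STORE_RIGHT = "load", "store_L", "store_R"
add_edge(LOAD, 1, STORE_LEFT, 1)   # edge labeled (1,1)
add_edge(LOAD, 2, STORE_RIGHT, 1)  # edge labeled (2,1)

def defs_reaching(use_stmt, use_inst):
    """Return only the (def_stmt, def_inst) pairs for this instance."""
    return [(d, di) for (ui, d, di) in dep_edges[use_stmt] if ui == use_inst]

print(defs_reaching(LOAD, 1))  # [('store_L', 1)] -- the right store is excluded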

13 FP Example (dynamic dependence graph figure; not reproduced in the transcript)

14 FP Example
Dynamic data dependence edges are shown, labeled with the execution instances of the statements involved in each dependence.
The data dependence edges traversed while computing the slice for z, used in the only execution of statement 16, are:
(16^1, 14^3), (14^3, 13^2), (13^2, 12^2), (13^2, 15^3), (15^3, 3^1), (15^3, 15^2), (15^2, 3^1), (15^2, 15^1), (15^1, 3^1), (15^1, 4^1)
The precise dynamic slice computed is DS = {16, 14, 13, 12, 4, 15, 3}.
Exercise: compute the slice corresponding to the value of x used during the first execution of statement 15.
Answer: DS = {4, 15}.
A sketch of the traversal follows.
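A minimal sketch of the backward traversal over the instance-labeled edges; the edge list below is exactly the one given on the slide, while the code structure and function names are assumptions for illustration rather than the authors' implementation.

# Edges (use_stmt, use_inst) -> list of (def_stmt, def_inst), as listed on the slide.
edges = {
    (16, 1): [(14, 3)],
    (14, 3): [(13, 2)],
    (13, 2): [(12, 2), (15, 3)],
    (15, 3): [(3, 1), (15, 2)],
    (15, 2): [(3, 1), (15, 1)],
    (15, 1): [(3, 1), (4, 1)],
}

def dynamic_slice(start):
    """Backward reachability over instance-labeled dependence edges."""
    worklist, seen, slice_stmts = [start], set(), set()
    while worklist:
        node = worklist.pop()
        if node in seen:
            continue
        seen.add(node)
        slice_stmts.add(node[0])             # keep only the statement id
        worklist.extend(edges.get(node, []))
    return slice_stmts

# Slice for z at the only execution of statement 16;
# matches the slide's DS = {16, 14, 13, 12, 4, 15, 3}.
print(dynamic_slice((16, 1)))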

15 2) No Preprocessing
Demand-driven analysis is used to recover dynamic dependences.
Requires less storage than FP, but takes more time.
Caching is used to avoid repetitive computations: the cost of maintaining the cache is traded against repeatedly recovering the same dependences from the trace.

16 NP Example
No dynamic data dependence edges are present initially.
To compute the slice for z at the only execution of statement 16, a single backward traversal of the trace extracts the edges:
(16^1, 14^3), (14^3, 13^2), (13^2, 12^2), (13^2, 15^3), (15^3, 3^1), (15^3, 15^2), (15^2, 3^1), (15^2, 15^1), (15^1, 3^1), (15^1, 4^1)
A sketch of this demand-driven recovery follows.
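A minimal sketch of the NP idea, with the trace format and contents hypothetical and simplified to data dependences on named variables: no graph is prebuilt; the trace is scanned backwards on demand from the slicing request, resolving each use to its most recent prior definition.

# Each trace entry: (stmt_id, instance, defined_var, used_vars).
# This tiny trace is hypothetical, not the one behind the slide's example.
trace = [
    (4, 1, "x", []),
    (3, 1, "y", []),
    (15, 1, "x", ["x", "y"]),
    (14, 1, "z", ["x"]),
    (16, 1, None, ["z"]),
]

def demand_driven_slice(var, position):
    """Recover dependences lazily by walking the trace backwards."""
    slice_stmts, wanted = set(), {var}
    for stmt, inst, defined, used in reversed(trace[:position + 1]):
        if defined in wanted:
            wanted.discard(defined)
            wanted.update(used)       # now need the definitions of its operands
            slice_stmts.add(stmt)
    return slice_stmts

print(demand_driven_slice("z", len(trace) - 1))  # {3, 4, 14, 15} for this trace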

17 NP with Cache
Recovered data dependence edges are added to the program flow graph.
Computing the slice for the use of x in the 3rd instance of statement 14: all required dependences are already present in the graph, so the trace is not re-examined.
Computing the slice for the use of x by the 2nd instance of statement 10: the trace is traversed again and additional dynamic data dependences are extracted.
A sketch of the cache follows.
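A minimal sketch of adding a cache to the demand-driven recovery (the class, trace layout, and cache key are assumptions for illustration): once a dependence has been recovered from the trace it is memoized, so a later slicing request needing the same dependence does not traverse the trace again.

class DependenceCache:
    """Memoize dependences recovered from the trace (the NP-with-cache idea)."""

    def __init__(self, trace):
        # trace entries: (stmt_id, instance, defined_var)
        self.trace = trace
        self.cache = {}   # (use_position, var) -> (stmt_id, instance) or None

    def resolve(self, use_position, var):
        """Find the definition of `var` that reaches trace[use_position]."""
        key = (use_position, var)
        if key in self.cache:               # hit: no trace traversal needed
            return self.cache[key]
        result = None
        for stmt, inst, defined in reversed(self.trace[:use_position]):
            if defined == var:              # most recent prior definition
                result = (stmt, inst)
                break
        self.cache[key] = result            # remember for later requests
        return result

# Hypothetical trace; a repeated query for the same use hits the cache.
t = DependenceCache([(4, 1, "x"), (3, 1, "y"), (15, 1, "x")])
print(t.resolve(2, "x"))   # scans the trace -> (4, 1)
print(t.resolve(2, "x"))   # answered from the cache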

18 3) Limited Preprocessing
LP strikes a balance between preprocessing and slicing costs:
it performs limited preprocessing of the trace, augmenting it with summary information, which allows faster traversal of the augmented trace;
demand-driven analysis then computes slices using the augmented trace.
This addresses the space problems of FP and the time problems of NP.

19 LP Approach
The trace is divided into trace blocks of fixed size.
For each block, a summary of all downward-exposed definitions of variable names and memory addresses is stored.
During slicing, look for the variable's definition in the summary of downward-exposed definitions:
if a definition is found, traverse the trace block to locate it;
otherwise, use the size information to skip to the start of the trace block.
A sketch of this scheme follows.
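A minimal sketch of the LP scheme, where the block size, trace layout, and summary representation are simplifying assumptions (the summary here is just the set of variables each block defines): the trace is split into fixed-size blocks, and the backward search skips whole blocks whose summary shows they cannot contain the definition being sought.

BLOCK_SIZE = 4   # fixed trace-block size; the real value is a tuning choice

def build_blocks(trace):
    """Split the trace and summarize which variables each block defines."""
    blocks = []
    for i in range(0, len(trace), BLOCK_SIZE):
        chunk = trace[i:i + BLOCK_SIZE]
        defined = {d for (_, _, d) in chunk if d is not None}
        blocks.append((chunk, defined))            # (entries, summary)
    return blocks

def resolve(blocks, block_idx, offset, var):
    """Search backwards for the definition of `var`, starting just before
    position `offset` in block `block_idx`, skipping any block whose
    summary shows it defines no occurrence of `var`."""
    for b in range(block_idx, -1, -1):
        chunk, defined = blocks[b]
        if var not in defined:
            continue                               # skip the block entirely
        end = offset if b == block_idx else len(chunk)
        for stmt, inst, d in reversed(chunk[:end]):
            if d == var:
                return (stmt, inst)
    return None

# Hypothetical trace entries: (stmt_id, instance, defined_var).
trace = [(4, 1, "x"), (3, 1, "y"), (5, 1, "w"), (6, 1, "w"),
         (7, 1, "w"), (8, 1, "w"), (15, 1, "x"), (16, 1, None)]
blocks = build_blocks(trace)
print(resolve(blocks, 1, 3, "y"))   # block 1 skipped (defines no y); finds (3, 1)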

20 Evaluation
Execution traces were collected on 3 different input sets for each benchmark.
25 different slices were computed for each execution trace.
Slices were computed with respect to the end of the program's execution (End).
For the 1st input, 25 additional slices were computed at a midpoint of the program's execution (Midpoint).

21 Results: Slice Sizes
PDS sizes for the additional inputs: sizes for the 2nd and 3rd program inputs (End) are shown.
The number of statements in a dynamic slice is a small fraction of the statements executed.
Different inputs give similar observations, so dynamic slicing is effective across different inputs.

22 Evaluation: Slice Computation Times
FP, NPwoC (NP without caching), NPwC (NP with caching), and LP were compared.
The cumulative execution time in seconds, as slices are computed one by one, is shown.
The graphs include both preprocessing times and slice computation times.

23 Execution Times (graphs not reproduced in the transcript)

24 Observations
FP rarely runs to completion; it mostly runs out of memory.
NPwoC, NPwC and LP are successful, making the computation of precise dynamic slices feasible.
NPwoC shows a linear increase in cumulative execution time with the number of slices.
LP's cumulative execution time rises much more slowly than that of NPwoC and NPwC.

25 Observations
Execution times of LP are 1.13 to 3.43 times lower than those of NP, due to the percentage of trace blocks skipped by LP.
This shows that limited preprocessing does pay off.
(Figures: cumulative times of NP vs LP; trace blocks skipped by LP.)

26 LP (Precise) vs Algorithm II (Imprecise)
Slice sizes: slices computed by LP are 1.2 to ... times smaller (upper bound truncated in the transcript) than the imprecise data slices of Algorithm II; the relative performance was similar across benchmarks.
Execution times: at End, the total time taken by LP is 0.55 to 2.02 times that of Algorithm II; at Midpoint, it is 0.51 to 1.86 times that of Algorithm II.

27 Results
Neither algorithm has memory problems.
LP produces smaller slices.
For large slices, LP's execution time is greater than the imprecise algorithm's; for small slices, it is less.

28 Summary
The precise LP algorithm performs the best.
Imprecise dynamic slicing algorithms are too imprecise and hence not an attractive option.
The LP algorithm is practical: it provides precise dynamic slices at reasonable space and time costs.

29 Thank you!

