
2 Mining Likely Properties of Access Control Policies via Association Rule Mining
JeeHyun Hwang (Advisor: Dr. Tao Xie)
Preliminary Oral Examination
Department of Computer Science, North Carolina State University, Raleigh

3 Access Control Mechanism
Access control mechanisms control which subjects (such as users or processes) have access to which resources.
A policy defines the rules according to which access must be regulated.
Request → Policy Evaluation Engine (Policy) → Response (Permit, Deny)

4 Access Control Mechanism (cont.)
The same architecture, except that the policy itself may contain faults:
Request → Policy Evaluation Engine (Policy with faults) → Response (Permit, Deny)

5 Research Accomplishments
Quality of access control:
– Automated test generation [SRDS 08][SSIRI 09]
– Likely-property mining [DBSec 10]
– Property quality assessment [ACSAC 08]
– Tool: Access Control Policy Tool (ACPT) [POLICY Demo 10]
Debugging:
– Fault localization for firewall policies [SRDS 09 SP]
– Automated fault correction for firewall policies [USENIX LISA 10]
Performance:
– Efficient policy evaluation engine [Sigmetrics 08]

6 Outline
– Motivation
– Our approach
– Future work

7 Outline
– Motivation
– Our approach
– Future work

8 Motivation
Access control is used to control access to a large number of resources [1,2].
Specifying and maintaining correct access control policies is challenging [1,2]:
– Authorized users should have access to the data
– Unauthorized users should not have access to the data
Faults in access control policies lead to security problems [1,3].
1. A. Wool. A quantitative study of firewall configuration errors. Computer, 37(6):62-67, 2004.
2. L. Bauer, L. F. Cranor, R. W. Reeder, M. K. Reiter, and K. Vaniea. Real life challenges in access-control management. CHI 2009.
3. S. Sinclair and S. W. Smith. What's wrong with access control in the real world? IEEE Security and Privacy, 2010.

9 Motivation (cont.)
We need to ensure the correct behavior of policies via property verification [1,2]:
– Model a policy and verify properties against it
– Check whether the properties are satisfied by the policy
– Violations of a property expose policy faults
Policy + Property → Verification → Satisfied? (True, False)
Example properties [3]:
1. A faculty member is permitted to assign grades.
2. A subject who is not a faculty member is permitted to enroll in courses.
1. K. Fisler, S. Krishnamurthi, L. A. Meyerovich, and M. C. Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.
2. V. Kolovski, J. Hendler, and B. Parsia. Analyzing web access control policies. WWW 2007.
3. M. C. Tschantz and S. Krishnamurthi. Towards reasonability properties for access-control policy languages. SACMAT 2006.

10 Problem
The quality of properties is assessed in terms of fault-detection capability [1]:
– Properties help detect faults
– Confidence in policy correctness therefore depends on the quality of the specified properties
Example: the CONTINUE subject [2] achieves only 25% fault-detection capability with its seven properties.
In practice, writing properties of high quality is not trivial.
1. E. Martin, J. Hwang, T. Xie, and V. C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008.
2. K. Fisler, S. Krishnamurthi, L. A. Meyerovich, and M. C. Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.

11 Proposed Solution
Mine likely properties automatically, based on correlations of attribute values (e.g., write and modify), that are of high quality (i.e., have high fault-detection capability).
Our assumption: the policy may include faults. We therefore mine likely properties that hold for all or most of the policy behaviors (confidence >= threshold).
Policy → Mine → Likely properties → Detect faults

12 Limitations
– The policy is domain-specific: likely properties are mined within a given policy
– Limited set of decisions: only two decisions (Permit or Deny) for any request
– Prioritization: which counterexamples should be inspected first?
– Expressiveness of likely properties: how to find counterexamples?

13 XACML Policy Example
RBAC_school policy:
– Rule 1: Faculty may View or Assign InternalGrade and ExternalGrade
– Rule 2: Student may Receive ExternalGrade
– Rule 3: FacultyFamily may Receive ExternalGrade
For example, Rule 1 reads: if role = Faculty and resource = (ExternalGrade or InternalGrade) and action = (View or Assign), then Permit.
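To make the example concrete, here is a minimal Python sketch of how Rules 1-3 might be represented and evaluated; the set-based encoding, the first-applicable semantics, and the deny-by-default fallback are our assumptions, not the actual XACML encoding.

```python
# Sketch of the RBAC_school rules (assumed encoding, not real XACML).
# Each rule: (roles, resources, actions, decision); first applicable rule wins.
RULES = [
    ({"Faculty"}, {"InternalGrade", "ExternalGrade"}, {"View", "Assign"}, "Permit"),  # Rule 1
    ({"Student"}, {"ExternalGrade"}, {"Receive"}, "Permit"),                          # Rule 2
    ({"FacultyFamily"}, {"ExternalGrade"}, {"Receive"}, "Permit"),                    # Rule 3
]

def evaluate(role: str, resource: str, action: str) -> str:
    """Return the decision of the first applicable rule (Deny by default, an assumption)."""
    for roles, resources, actions, decision in RULES:
        if role in roles and resource in resources and action in actions:
            return decision
    return "Deny"

print(evaluate("Faculty", "ExternalGrade", "Assign"))  # Permit (Rule 1)
print(evaluate("Student", "InternalGrade", "View"))    # Deny (no rule applies)
```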

14 XACML Policy Example (cont.)
RBAC_school policy (continued):
– Rule 4: Lecturer may View or Assign InternalGrade and ExternalGrade
– Rule 5: TA may View or Assign InternalGrade
– Rule 6

15 XACML Policy Example (cont.)
Inject a fault into Rule 5: Receive instead of View.
Incorrect policy behaviors:
1. TA is denied to View InternalGrade
2. TA is permitted to Receive InternalGrade
Likely properties mined after fault injection:
– (View) Permit → (Assign) Permit: frequency 4 (100%)
– (Assign) Permit → (View) Permit: frequency 4 (80%)
– (Assign) Permit → (Receive) Deny: frequency 4 (80%)
Likely properties mined from the original policy:
– (View) Permit → (Assign) Permit: frequency 5 (100%)
– (Assign) Permit → (View) Permit: frequency 5 (100%)
– (Assign) Permit → (Receive) Deny: frequency 5 (100%)

16 Policy Model
Role-Based Access Control (RBAC) policy [1]:
– Permissions are associated with roles
– A subject (role) is allowed or denied access to certain objects (i.e., resources) in a system
Attributes:
– Subject: role of a person
– Action: command that a subject executes on the resource
– Resource: object
– Environment: any other related constraints (e.g., time, location)
1. XACML Profile for Role Based Access Control (RBAC), 2004.

17 Likely-Property Model
Implication relation: correlate the decision (dec1) for an attribute value (v1) with the decision (dec2) for another attribute value (v2):
(v1) dec1 → (v2) dec2
Types:
– Subject attribute: (TA) Permit → (Faculty) Permit
– Action attribute: (Assign) Permit → (View) Permit
– Subject-action attribute: (TA & Assign) Permit → (Faculty & View) Permit

18 Framework
Our assumption: the policy may include faults.
We mine likely properties that hold for all or most of the policy behaviors (confidence >= threshold).

19 Relation Table Generation
– Find all possible request-response pairs in a policy
– Generate relation tables (including all request-response pairs) of interest
– The tables serve as input for an association rule mining tool (a sketch follows)
Example request-response pairs:
1. Faculty is permitted to Assign ExternalGrade
2. Faculty is permitted to View ExternalGrade
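As a sketch of this step, the generator below enumerates every (subject, resource, action) request against a policy evaluator and records one attribute-decision row per request; the attribute domains are taken from the RBAC_school example, and the evaluator stub is a placeholder.

```python
from itertools import product

# Attribute domains drawn from the RBAC_school example (assumed complete).
ROLES = ["Faculty", "Student", "FacultyFamily", "Lecturer", "TA"]
RESOURCES = ["InternalGrade", "ExternalGrade"]
ACTIONS = ["View", "Assign", "Receive"]

def relation_table(evaluate):
    """One row per request-response pair, ready for an association rule miner."""
    return [{"subject": role, "resource": res, "action": act,
             "decision": evaluate(role, res, act)}
            for role, res, act in product(ROLES, RESOURCES, ACTIONS)]

# Demo with a trivial placeholder policy (Faculty may do anything).
rows = relation_table(lambda role, res, act: "Permit" if role == "Faculty" else "Deny")
print(len(rows), rows[0])  # 30 rows in total
```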

20 Association Rule Mining
Given a relation table, find implication relations of attributes via association rule mining [1,2]:
– Find the three types of likely properties
– Report likely properties with confidence values over a given threshold
Support: Supp(X) = D / T, where T is the total number of rows and D is the number of rows that include the attribute-decision X.
Example: X = (Assign) Permit, Y = (View) Permit
Supp(X) = 5/10 = 0.5, Supp(Y) = 4/10 = 0.4, Supp(X ∪ Y) = 4/10 = 0.4
Confidence (the likelihood of a likely property): Confidence(X → Y) = Supp(X ∪ Y) / Supp(X) = 0.4 / 0.5 = 0.8
1. R. Agrawal and R. Srikant. Fast algorithms for mining association rules in large databases. VLDB 1994.
2. C. Borgelt. Apriori: Association Rule Induction / Frequent Item Set Mining. 2009.
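The support and confidence computation can be reproduced over a toy relation table whose rows are sets of attribute-decision pairs; the data below is constructed so that the slide's own numbers fall out (Supp(X) = 0.5, Supp(Y) = 0.4, Confidence = 0.8).

```python
def supp(itemset, rows):
    """Supp(X): fraction of rows that contain every attribute-decision pair in X."""
    return sum(itemset <= row for row in rows) / len(rows)

def confidence(x, y, rows):
    """Confidence(X -> Y) = Supp(X u Y) / Supp(X)."""
    return supp(x | y, rows) / supp(x, rows)

# Ten rows chosen to reproduce the slide's example (illustrative data only):
# 4 rows permit both Assign and View, 1 row permits Assign only, 5 rows permit neither.
both = {("Assign", "Permit"), ("View", "Permit")}
rows = [both] * 4 \
     + [{("Assign", "Permit"), ("View", "Deny")}] \
     + [{("Assign", "Deny"), ("View", "Deny")}] * 5

X, Y = {("Assign", "Permit")}, {("View", "Permit")}
print(supp(X, rows), supp(Y, rows))  # 0.5 0.4
print(confidence(X, Y, rows))        # 0.8
```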

21 Likely-Property Verification
– Verify the policy against the mined likely properties and find counterexamples
– A counterexample has the form (v1) dec1 → (v2) ¬dec2
– Inspect counterexamples to determine whether they expose a fault
Rationale: counterexamples (which do not satisfy the likely properties) deviate from the policy's normal behaviors and are special cases worth inspecting.
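As an illustration of counterexample collection, the row-scan below reports the behaviors where a likely property's premise holds but its conclusion is violated; the actual work finds counterexamples through property verification, so this is only a sketch over the relation-table rows used earlier.

```python
def counterexamples(x, y, rows):
    """Rows witnessing (v1) dec1 -> (v2) !dec2: the premise x holds
    but the conclusion y does not (illustration, not the real verifier)."""
    return [row for row in rows if x <= row and not y <= row]

both = {("Assign", "Permit"), ("View", "Permit")}
rows = [both] * 4 + [{("Assign", "Permit"), ("View", "Deny")}]
ces = counterexamples({("Assign", "Permit")}, {("View", "Permit")}, rows)
print(ces)  # the one row violating (Assign) Permit -> (View) Permit
```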

22 Basic and Prioritization Techniques
Basic technique: inspect counterexamples (CEs) in no particular order.
Prioritization technique: designed to reduce inspection effort by inspecting counterexamples in order of their fault-detection likelihood (after removing duplicates):
– Duplicate CEs (reported by multiple likely properties) first
– Then CEs produced from likely properties with fewer CEs
A sketch of this ordering follows.
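This sketch implements the order under two assumptions made explicit here: a counterexample reported by several likely properties counts as a duplicate and goes first, and the remaining counterexamples are ordered by the smallest counterexample count among their originating properties.

```python
from collections import Counter

def prioritize(ces_by_property):
    """ces_by_property: {property_id: [counterexample, ...]}.
    Duplicates (seen under several properties) first, then counterexamples
    from properties that produced fewer counterexamples (assumed tie-break)."""
    times_seen = Counter()
    smallest_source = {}
    for prop, ces in ces_by_property.items():
        for ce in ces:
            times_seen[ce] += 1
            smallest_source[ce] = min(smallest_source.get(ce, len(ces)), len(ces))
    return sorted(times_seen, key=lambda ce: (-times_seen[ce], smallest_source[ce]))

print(prioritize({"p1": ["ce_a", "ce_b"], "p2": ["ce_b"]}))
# ['ce_b', 'ce_a']: ce_b is a duplicate, so it is inspected first
```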

23 Evaluation
– RQ1 (fault-detection capability): how much higher a percentage of faults is detected by our approach compared to an existing related approach [1]?
– RQ2 (cost): how much lower a percentage of distinct counterexamples is generated by our approach compared to the existing approach [1]?
– RQ3 (cost): for cases where a fault in a faulty policy is detected by our approach, what percentage of distinct counterexamples (for inspection) is reduced by our prioritization?
1. E. Martin and T. Xie. Inferring access-control policy properties via machine learning. POLICY 2006.

24 Metrics
– Fault-detection ratio (FR)
– Counterexample count (CC)
– Counterexample-reduction ratio (CRB) for our approach over the existing approach
– Counterexample-reduction ratio (CRP) for the prioritization technique over the basic technique

25 Mutation Testing
Fault-detection capability [1,2]:
– Seed a fault into a policy to generate a mutant (a faulty version)
– Fault-detection ratio = # of detected faults / total # of faults
Counterexamples arise where the mutant's decisions differ from the expected decisions of the correct policy (see the sketch below).
1. E. Martin, J. Hwang, T. Xie, and V. C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008.
2. E. Martin and T. Xie. A fault model and mutation testing of access control policies. WWW 2007.
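A minimal sketch of the mutant-killing check described above: run the same requests through the correct policy and the mutant and flag every decision mismatch; the two placeholder policies below stand in for real XACML evaluation.

```python
def detected(original, mutant, requests):
    """Requests on which the mutant's decision differs from the original's;
    a non-empty result means the seeded fault is detected."""
    return [req for req in requests if original(*req) != mutant(*req)]

# Placeholder policies: the mutant drops TA's View permission.
original = lambda role, action: "Permit" if (role, action) == ("TA", "View") else "Deny"
mutant = lambda role, action: "Deny"

print(detected(original, mutant, [("TA", "View"), ("TA", "Assign")]))
# [('TA', 'View')] -> the fault is detected
```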

26 Evaluation Setup
Seed a policy with faults to synthesize faulty policies:
– One fault in each faulty policy, for ease of evaluation
– Four fault types [1]: Change-Rule Effect (CRE), Rule-Target True (RTT), Rule-Target False (RTF), and Remove Rule (RMR)
Compare the results of our approach with those of DT, the previous approach based on decision trees [2].
1. E. Martin and T. Xie. A fault model and mutation testing of access control policies. WWW 2007.
2. E. Martin and T. Xie. Inferring access-control policy properties via machine learning. POLICY 2006.

27 Four XACML Policy Subjects
Real-life access control policies:
– codeD2: a modified version of codeD [1]
– continue-a and continue-b [1]: policies for a conference review system
– univ [2]: policies for a university
The number of rules per policy ranges from 12 to 306.
1. K. Fisler, S. Krishnamurthi, L. A. Meyerovich, and M. C. Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.
2. S. D. Stoller, P. Yang, C. R. Ramakrishnan, and M. I. Gofman. Efficient policy analysis for administrative role based access control. CCS 2007.

28 Evaluation Results (1/2): CRE Mutants
(FR: fault-detection ratio; CC: counterexample count; CRB: counterexample-reduction ratio for our approach over the DT approach; CRP: counterexample-reduction ratio for the prioritization technique over the basic technique)
Fault-detection ratios: DT 25.9%, Basic 62.3%, Prioritization 62.3%.
Our approach (both the Basic and Prioritization techniques) outperforms DT in terms of fault-detection capability.

29 Evaluation Results (1/2): CRE Mutants (cont.)
– Our approach reduced the number of counterexamples by 55.5% relative to DT, while detecting a higher percentage of faults (RQ1)
– Prioritization reduced, on average, 38.5% of the counterexamples for inspection (column "% CRP") relative to Basic

30 Evaluation Results (2/2): Other Mutants
Prioritization and Basic achieve the highest fault-detection capability for policies with RTT, RTF, or RMR faults.
(Table: fault-detection ratios of the faulty policies.)

31 Conclusion
A new approach that mines likely properties characterizing correlations of policy behaviors with respect to attribute values.
Evaluation on four real-world XACML policies:
– Our approach achieved >30% higher fault-detection capability than the previous related approach based on decision trees
– Our approach helped reduce >50% of the counterexamples for inspection compared to the previous approach

32 Outline
– Motivation
– Our approach
– Future work

33 Future Work
Dissertation goal: improving the quality of access control.
– Automated test generation and likely-property mining
– Debugging: fault localization and policy combination
– Access Control Policy Tool (ACPT)
– Testing of policies in healthcare systems, e.g., for interoperability and regulatory compliance (e.g., HIPAA)

34 Questions?

35 Other Challenges
– Generate properties of high quality
– Cover a large portion of policy behaviors
– Handle obligations, delegation, and environments

36 Related Work
– Assessing quality of policy properties in verification of access control policies [Martin et al., ACSAC 2008]
– Inferring access-control policy properties via machine learning [Martin and Xie, POLICY 2006]
– Detecting and resolving policy misconfigurations in access-control systems [Bauer et al., SACMAT 2008]

37 My Other Research Work

38 Systematic Structural Testing of Firewall Policies
JeeHyun Hwang (1), Tao Xie (1), Fei Chen (2), and Alex Liu (2)
(1) North Carolina State University, (2) Michigan State University
SRDS 2008

39 Problem
Factors leading to misconfiguration:
– Conflicts among rules
– Rule-set complexity
– Mistakes in handling corner cases
How do we test a firewall? Systematic testing of firewall policies:
– Exhaustive testing is impractical
– Consider test effort and test effectiveness together
– Complements firewall verification

40 Firewall Policy Structure
A policy is expressed as a set of rules; a rule is represented as <predicate> → <decision>, where the predicate constrains the packet fields and the decision is "accept" or "discard".
Rule | Src     | SPort | Dest        | DPort | Protocol | Decision
r1   | *       | *     | 192.168.*.* | *     | *        | accept
r2   | 1.2.*.* | *     | *           | *     | TCP      | discard
r3   | *       | *     | *           | *     | *        |
Given a packet (Src, SPort, Dest, DPort, Protocol), each field is represented as an integer range; a rule's predicate evaluates to "True" or "False" on the packet, and when it evaluates to "True", the rule's decision is returned.
Firewall format: Cisco reflexive ACLs. A sketch of this evaluation follows.
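A minimal first-match evaluator over the table above; the integer-range encoding of fields (with None as the wildcard *) is an assumption for illustration, and real Cisco reflexive ACL matching is richer than this.

```python
# A rule: five field constraints plus a decision. Each constraint is an
# inclusive (low, high) integer range; None stands for the wildcard "*".
def applies(fields, packet):
    return all(c is None or c[0] <= v <= c[1] for c, v in zip(fields, packet))

def evaluate(rules, packet):
    """First-match semantics: return the decision of the first rule whose
    predicate evaluates to True on the packet."""
    for *fields, decision in rules:
        if applies(fields, packet):
            return decision
    return None  # in practice a catch-all rule like r3 always applies

rules = [
    (None, None, (3232235520, 3232301055), None, None, "accept"),  # r1: Dest 192.168.*.*
    ((16908288, 16973823), None, None, None, (6, 6), "discard"),   # r2: Src 1.2.*.*, TCP
    (None, None, None, None, None, "discard"),                     # r3: catch-all (decision assumed)
]
# Packet: Src, SPort, Dest (192.168.1.1 as an integer), DPort, Protocol (17 = UDP).
print(evaluate(rules, (0, 0, 3232235777, 80, 17)))  # accept (r1 matches first)
```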

41 Random Packet Generation
Given the domain range of each field (e.g., IP addresses in [0, 2^8 - 1]), random packets are generated within the domain.
Example packet: Src 162.168.12.55, Dest 192.168.0.5, DPort 10, Protocol TCP.
– Easy to generate packets
– Due to its randomness, difficult to achieve high structural coverage

42 Packet Generation Based on Local Constraint Solving
Considering an individual rule in isolation, generate packets that evaluate the constraints of its clauses in a specified way. For rule r1 (Dest 192.168.*.* → accept):
– For example, every clause is evaluated to true: packet Src 162.168.12.55, Dest 192.168.0.5, DPort 10, Protocol TCP (evaluation T T T T T)
– For example, the Dest field value is evaluated to false and the remaining values to true: packet Src 162.168.12.55, Dest 168.1.0.5, DPort 10, Protocol TCP (evaluation T T F T T)
Limitation: conflicts among rules are not taken into account.

43 Packet Generation Based on Global Constraint Solving
Considering that the preceding rules are not applicable, generate packets that evaluate the constraints of a given rule's clauses in a specified way.
– Example: packet Src 162.168.12.55, Dest 1.5.0.5, DPort 10, Protocol TCP is applicable to r3 (its clauses evaluate T T T T T), given that r1 and r2 are not applicable (each evaluates to false on some clause)
This resolves conflicts among rules, but requires analysis time to solve those conflicts. A sketch contrasting the two generators follows.
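This sketch contrasts the two generators under a toy encoding: the local generator picks values from the target rule's own ranges, and the global generator additionally rejects any candidate that an earlier rule would capture. Real global generation solves the constraints directly instead of sampling; the rejection loop here is only a stand-in.

```python
import random

DOMAIN = (0, 255)  # toy per-field domain

rules = [  # (field constraints, decision); None = wildcard over DOMAIN
    ((None, None, (100, 120), None, None), "accept"),  # r1
    (((1, 2), None, None, None, (6, 6)), "discard"),   # r2
    ((None, None, None, None, None), "discard"),       # r3 (decision assumed)
]

def matches(fields, packet):
    return all(c is None or c[0] <= v <= c[1] for c, v in zip(fields, packet))

def local_packet(i):
    """Local: satisfy every clause of rule i, ignoring other rules."""
    return tuple(random.randint(*(c or DOMAIN)) for c in rules[i][0])

def global_packet(i, tries=10_000):
    """Global: satisfy rule i while no preceding rule applies
    (rejection sampling stands in for real constraint solving)."""
    for _ in range(tries):
        packet = local_packet(i)
        if not any(matches(r[0], packet) for r in rules[:i]):
            return packet
    return None

print(global_packet(2))  # applicable to r3, guaranteed not to hit r1 or r2
```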

44 Mutation Testing
Why mutation testing? To measure the quality of a test packet set (i.e., its fault-detection capability).
– Seed a fault into a firewall policy to generate a mutant (a faulty version)
– Run the test packets against the firewall (correct version) and the mutant (faulty version) and compare their decisions
– If the decisions differ, the fault is detected in the mutant (i.e., the mutant is "killed")

45 Eleven Mutation Operators
Operator | Description
RPT  | Rule Predicate True
RPF  | Rule Predicate False
RCT  | Rule Clause True
RCF  | Rule Clause False
CRSV | Change Range Start point Value
CREV | Change Range End point Value
CRSO | Change Range Start point Operator
CREO | Change Range End point Operator
CRO  | Change Rule Order
CRD  | Change Rule Decision
RMR  | Remove Rule

46 Experiment
Given a firewall policy (assumed correct!), generate:
– Mutants
– Packet sets (one per technique)
Investigate the following correlations:
– Packet sets and their achieved structural coverage
– Structural coverage criteria and fault-detection capability
– Packet sets and their reduced packet sets, in terms of fault-detection capability
– Characteristics of each mutation operator

47 Experiment (cont.)
Notation:
– Rand: packet set generated by the random packet generation technique
– Local: packet set generated by the technique based on local constraint solving
– Global: packet set generated by the technique based on global constraint solving
– R-Rand, R-Local, and R-Global: the corresponding reduced packet sets

48 Subjects
We used 14 firewall policies, with approximately 2 test packets generated per rule.
(Table columns: # Rules: number of rules; # Mutants: number of mutants; Gen time (milliseconds): packet generation time, particularly for Global, the global constraint solving technique.)

49 Measuring Rule Coverage
Rand < Local ≤ Global:
– Rand achieves the lowest rule coverage
– In general, Global achieves slightly higher rule coverage than Local

50 Reducing the Number of Packets
A reduced packet set (e.g., R-Rand):
– Maintains the same level of structural coverage as the original set
– R-Rand is 5% of Rand, R-Local is 66% of Local, and R-Global is 60% of Global
– We then compare the fault-detection capabilities of the reduced sets

51 Fault-Detection Capability by Subject Policies
R-Rand ≤ Rand < R-Local ≤ Local < R-Global ≤ Global
A packet set with higher structural coverage has higher fault-detection capability.

52 Fault-Detection Capability by Mutation Operators
Mutant-killing ratios vary across mutation operators:
– Above 85%: RPT
– 30%-40%: RPF, RMR
– 10%-20%: CRSV, CRSO
– 0%-10%: RCT, RCF, CREV, CREO, CRO

53 Related Work
– Testing of XACML access control policies [Martin et al., ICICS 2006, WWW 2007]
– Specification-based testing of firewalls [Jürjens et al., PSI 2001]: a state transition model between the firewall and its surrounding network
– Defining policy criteria identified by interactions between rules [El-Atawy et al., POLICY 2007]

54 Conclusion
Firewall policy testing helps improve our confidence in firewall policy correctness.
Systematic testing of firewall policies:
– Structural coverage criteria
– Three automated packet generation techniques; measured coverage: Rand < Local ≤ Global
Mutation testing to show fault-detection capability:
– Generally, a packet set with higher structural coverage has higher fault-detection capability
– It is worthwhile to generate test packet sets that achieve high structural coverage

55 Fault Localization for Firewall Policies
JeeHyun Hwang (1), Tao Xie (1), Fei Chen (2), and Alex Liu (2)
(1) North Carolina State University, (2) Michigan State University
Symposium on Reliable Distributed Systems (SRDS 2009)

56 Fault Model
Faults in an attribute of a rule:
– Rule Decision Change (RDC): change the rule's decision
  R1: F1 ∈ [0,10] ∧ F2 ∈ [3,5] → accept
  R1': F1 ∈ [0,10] ∧ F2 ∈ [3,5] → deny
– Rule Field Interval Change (RFC): change the selected rule's interval randomly
  R1: F1 ∈ [0,10] ∧ F2 ∈ [3,5] → accept
  R1': F1 ∈ [2,7] ∧ F2 ∈ [3,5] → accept
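A short sketch of the two operators over rules encoded as (field intervals, decision); the random sub-interval for RFC mirrors the slide's description, and the encoding itself is assumed.

```python
import random

# A rule: a list of inclusive (low, high) field intervals plus a decision.
r1 = ([(0, 10), (3, 5)], "accept")

def rdc(rule):
    """Rule Decision Change: flip the rule's decision."""
    fields, decision = rule
    return (list(fields), "deny" if decision == "accept" else "accept")

def rfc(rule, field_index, domain=(0, 10)):
    """Rule Field Interval Change: replace one field's interval with a
    random sub-interval of the domain."""
    fields, decision = rule
    low = random.randint(*domain)
    high = random.randint(low, domain[1])
    fields = list(fields)
    fields[field_index] = (low, high)
    return (fields, decision)

print(rdc(r1))     # ([(0, 10), (3, 5)], 'deny')
print(rfc(r1, 0))  # e.g. ([(2, 7), (3, 5)], 'accept')
```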

57 Overview of the Approach
Input:
– A faulty firewall policy
– Failed and passed test packets
Techniques:
– Covered-rule fault localization
– Rule reduction
– Rule ranking
Output:
– A set of likely faulty rules (with their ranking)

58 Covered-Rule Fault Localization
Example policy:
R1: F1 ∈ [0,10] ∧ F2 ∈ [3,5] ∧ F3 ∈ [3,5] → accept
R2: F1 ∈ [5,7] ∧ F2 ∈ [0,10] ∧ F3 ∈ [3,5] → discard
R3: F1 ∈ [5,7] ∧ F2 ∈ [0,10] ∧ F3 ∈ [6,7] → accept
R4: F1 ∈ [2,10] ∧ F2 ∈ [0,10] ∧ F3 ∈ [5,10] → discard
R5: F1 ∈ [0,10] ∧ F2 ∈ [0,10] ∧ F3 ∈ [0,10] → discard
Inject a Rule Decision Change fault in R4: "accept" rather than "discard".
Test counts per rule: R1, R2, R3, and R5 are each covered by 2 passed tests and no failed tests; R4 is covered by 2 failed tests and no passed tests.
Inspect the rules covered by failed tests: R4 is selected for inspection, so the RDC-faulty rule is effectively filtered out.
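A sketch of the technique on the example above: evaluate each test packet, record the first-matching (covered) rule, and select the rules covered by at least one failed test; the pass/fail oracle is given as expected decisions.

```python
def first_match(rules, packet):
    """Index of the first rule whose intervals all contain the packet's fields."""
    for i, (fields, _) in enumerate(rules):
        if all(lo <= v <= hi for (lo, hi), v in zip(fields, packet)):
            return i
    return None

def covered_rule_localization(rules, tests):
    """tests: [(packet, expected_decision)]. Return the indices of rules
    covered by at least one failed test (the candidate faulty rules)."""
    return sorted({i for packet, expected in tests
                   if (i := first_match(rules, packet)) is not None
                   and rules[i][1] != expected})

# The example policy, with the RDC fault seeded in R4 (accept, should be discard).
rules = [([(0, 10), (3, 5), (3, 5)], "accept"),
         ([(5, 7), (0, 10), (3, 5)], "discard"),
         ([(5, 7), (0, 10), (6, 7)], "accept"),
         ([(2, 10), (0, 10), (5, 10)], "accept"),   # R4, faulty
         ([(0, 10), (0, 10), (0, 10)], "discard")]
tests = [((3, 4, 4), "accept"), ((3, 0, 6), "discard")]
print(covered_rule_localization(rules, tests))  # [3] -> R4 is selected
```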

59 Rule Reduction Technique
Goal: reduce the number of rules for inspection (see the sketch below).
Inject a Field Interval Change fault in R1's F3: F3 ∈ [3,3] rather than F3 ∈ [3,5].
– r' is the earliest-placed rule covered by failed tests: R4
– Other rules are kept by the following criteria:
  – Rules above r': R1, R2, R3
  – Among those, rules with a decision different from r''s: R1, R3
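A sketch of these reduction criteria; the interpretation (keep r' plus the rules above it whose decision differs from r''s) is our reading of the slide.

```python
def reduce_rules(decisions, failed_covered):
    """decisions: rule decisions in policy order; failed_covered: indices of
    rules covered by failed tests. Keep r' (the earliest failed-covered rule)
    and the rules above r' whose decision differs from r''s."""
    r_prime = min(failed_covered)
    keep = [i for i in range(r_prime) if decisions[i] != decisions[r_prime]]
    return [r_prime] + keep

# Slide example: failed tests cover R4 (index 3); R1 and R3 survive because
# their decision ("accept") differs from R4's ("discard").
decisions = ["accept", "discard", "accept", "discard", "discard"]
print(reduce_rules(decisions, {3}))  # [3, 0, 2] -> R4, R1, R3
```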

60 Rule Ranking Technique
Rank rules based on their likelihood of being faulty, using clause coverage:
– Intuition: FC1 ≤ FC2, where FC1 is the number of clauses evaluated to false in the faulty rule and FC2 is the number of clauses evaluated to false in other rules
– The ranking is calculated from a formula over FF(r), the number of clauses of rule r evaluated to false, and FT(r), the number of clauses evaluated to true
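The slide names the inputs FF(r) and FT(r), but the formula itself did not survive in this transcript; the score below, FF(r) / (FF(r) + FT(r)) ranked in ascending order, is therefore only an assumed stand-in, consistent with the intuition that the faulty rule has fewer false clauses.

```python
def rank_rules(clause_stats):
    """clause_stats: {rule: (FF, FT)} with FF/FT = # clauses evaluated to
    false/true. ASSUMED score: FF / (FF + FT), ascending, so rules whose
    clauses were mostly true (near-matches) are inspected first."""
    def score(rule):
        ff, ft = clause_stats[rule]
        return ff / (ff + ft)
    return sorted(clause_stats, key=score)

print(rank_rules({"R1": (1, 2), "R2": (3, 0), "R3": (0, 3)}))
# ['R3', 'R1', 'R2'] -> R3, with no false clauses, is ranked most suspicious
```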

61 Experiments
14 firewall policies.
(Table columns: # Rules: number of rules; # Tests: number of generated test packets; # RDC: number of RDC-faulty policies; # RFC: number of RFC-faulty policies.)

62 Results: Covered-Rule Fault Localization
– 100% of RDC-faulty rules are detected
– 69% of RFC-faulty rules are detected
– The remaining 31% of RFC-faulty rules are not covered by any failed test

63 Results: Rule Reduction for Inspection
– Rule reduction percentage (% Reduce): 30.63% of rules
– Ranking-based rule reduction percentage (% R-Reduce): 66% of rules

64 Conclusion and Future Work
Our techniques help policy authors locate faults effectively by reducing the number of rules for inspection:
– 100% of RDC-faulty rules and 69% of RFC-faulty rules can be detected by inspecting covered rules
– On average, 30.63% of rules are reduced for inspection by our rule reduction technique, and 56.53% by our rule ranking technique
We plan to investigate fault localization for multiple faults in a firewall policy.

