1 Self-defending software: Automatically patching security vulnerabilities Michael Ernst University of Washington

2 Demo Automatically fix previously unknown security vulnerabilities in COTS software.

3 Goal: Security for COTS software Requirements: 1.Preserve functionality 2.Protect against unknown vulnerabilities 3.COTS (commercial off-the-shelf) software Key ideas: 4.Learn from failure 5.Leverage the benefits of a community

4 1. Preserve functionality Maintain continuity: application continues to operate despite attacks For applications with high availability requirements –Important for mission-critical applications –Web servers, air traffic control, communications Technique: create a patch (repair the application) Previous work: crash the program when attacked –Loss of data, overhead of restart, attack recurs

5 2. Unknown vulnerabilities Proactively prevent attacks via unknown vulnerabilities –“Zero-day exploits” –No pre-generated signatures –No time for human reaction –Works for bugs as well as attacks

6 3. COTS software No modification to source or executables No cooperation required from developers –No source information (no debug symbols) –Cannot assume built-in survivability features x86 Windows binaries

7 4. Learn from failure Each attack provides information about the underlying vulnerability Detect all attacks (of given types) –Prevent negative consequences –First few attacks may crash the application Repairs improve over time –Eventually, the attack is rendered harmless –Similar to an immune system

8 5. Leverage a community Application community = many machines running the same application (a monoculture) –Convenient for users, sysadmins, hackers Problem: A single vulnerability can be exploited throughout the community Solution: Community members collaborate to detect and resolve problems –Amortized risk: a problem in some leads to a solution for all –Increased accuracy: exercise more behaviors during learning –Shared burden: distribute the computational load

9 Evaluation Adversarial Red Team exercise –Red Team was hired by DARPA –Had access to all source code, documents, etc. Application: Firefox (v1.0) –Exploits: malicious JavaScript, GC errors, stack smashing, heap buffer overflow, uninitialized memory Results: –Detected all attacks, prevented all exploits –For 7/10 exploits, generated a patch that maintained functionality (after an average of 8 attack instances) –No false positives –Low steady-state overhead

10 Collaborative learning approach Monitoring: Detect attacks (pluggable detector, does not depend on learning) Learning: Infer normal behavior from successful executions Repair: Propose, evaluate, and distribute patches based on the behavior of the attack All modes run on each member of the community; modes are temporally and spatially overlapped (Diagram: normal execution feeds Learning, which builds a model of behavior; an attack or bug triggers Monitoring, which feeds Repair.)

15 Collaborative Learning System Architecture (Diagram: client workstations in learning mode run the application under the MPEE with a data-acquisition client library and local learning (Daikon), sending sample data and constraints to the Central Management System; the server merges constraints (Daikon) and performs patch/repair code generation; client workstations in protected mode run the application under the MPEE with the Memory Firewall and LiveShield evaluation (observing execution), receiving patches and returning patch/repair results.)

16 Structure of the system Server and community machines (server may be replicated, distributed, etc.) Encrypted, authenticated communication Threat model does not (yet!) include malicious nodes

17 Learning Server distributes learning tasks to the community machines Clients observe normal behavior

18 Learning Generalize observed behavior Clients do local inference (e.g., copy_len < buff_size, copy_len ≤ buff_size, copy_len = buff_size) Clients send inference results to the server Server generalizes (merges results), e.g., … copy_len ≤ buff_size …

19 Monitoring Detect attacks Detector collects information and terminates the application Assumption: few false positives Detectors used in our research: –Code injection (Memory Firewall) –Memory corruption (Heap Guard) Many other possibilities exist

20 Monitoring What was the effect of the attack? Instrumentation continuously evaluates learned behavior Clients send the difference in behavior (violated constraints, e.g., Violated: copy_len ≤ buff_size) to the server Server correlates constraints to the attack

21 Repair Propose a set of patches for each behavior correlated to the attack (e.g., Correlated: copy_len ≤ buff_size) Server generates a set of candidate patches: 1. Set copy_len = buff_size 2. Set copy_len = 0 3. Set buff_size = copy_len 4. Return from procedure

22 Repair Distribute patches to the community (Patch 1, Patch 2, Patch 3, …) Initial ranking: Patch 1: 0, Patch 2: 0, Patch 3: 0, …

23 Repair Evaluate patches Success = no detector is triggered (the detector is still running on clients) When attacked, clients send the outcome to the server (e.g., Patch 1 failed, Patch 3 succeeded) Server ranks patches: Patch 3: +5, Patch 2: 0, Patch 1: -5, …

24 Repair Redistribute the best patches Server redistributes the most effective patches (e.g., Patch 3), based on the ranking: Patch 3: +5, Patch 2: 0, Patch 1: -5, …

25 Outline Overview Learning: create model of normal behavior Monitoring: determine properties of attacks Repair: propose and evaluate patches Evaluation: adversarial Red Team exercise Conclusion

26 Learning Generalize observed behavior Clients do local inference (e.g., copy_len < buff_size, copy_len ≤ buff_size, copy_len = buff_size) Clients send inference results to the server Server generalizes (merges results), e.g., … copy_len ≤ buff_size …

27 Dynamic invariant detection Idea: generalize observed program executions Initial candidates = all possible instantiations of constraint templates over program variables Observed data quickly falsifies most candidates Many optimizations for accuracy and speed –Data structures, static analysis, statistical tests, … Example – Candidate constraints: copy_len < buff_size, copy_len ≤ buff_size, copy_len = buff_size, copy_len ≥ buff_size, copy_len > buff_size, copy_len ≠ buff_size. Observation: copy_len = 22, buff_size = 42. Remaining candidates: copy_len < buff_size, copy_len ≤ buff_size, copy_len ≠ buff_size.
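A minimal sketch in C of the falsification step above; the templates, variable names, and data layout are illustrative, not Daikon's actual implementation:

    #include <stdbool.h>
    #include <stdio.h>

    /* One candidate constraint between two observed variables. */
    typedef bool (*relation)(long a, long b);
    static bool lt(long a, long b) { return a <  b; }
    static bool le(long a, long b) { return a <= b; }
    static bool eq(long a, long b) { return a == b; }
    static bool ge(long a, long b) { return a >= b; }
    static bool gt(long a, long b) { return a >  b; }
    static bool ne(long a, long b) { return a != b; }

    int main(void) {
        relation candidates[] = { lt, le, eq, ge, gt, ne };
        const char *names[]   = { "<", "<=", "==", ">=", ">", "!=" };
        bool alive[6] = { true, true, true, true, true, true };

        /* Each observation falsifies (kills) the candidates it contradicts. */
        long observations[][2] = { { 22, 42 } };   /* copy_len, buff_size */
        for (int i = 0; i < 1; i++)
            for (int c = 0; c < 6; c++)
                if (alive[c] && !candidates[c](observations[i][0], observations[i][1]))
                    alive[c] = false;

        /* Print the surviving constraints: <, <=, != for this observation. */
        for (int c = 0; c < 6; c++)
            if (alive[c])
                printf("copy_len %s buff_size\n", names[c]);
        return 0;
    }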

28 Quality of inference results Not sound –Overfitting if observed executions are not representative –Does not affect attack detection –For repair, mitigated by correlation step –Continued learning improves results Not complete –Templates are not exhaustive Useful! –Sound in practice –Complete enough

29 Enhancements to learning Relations over variables in different parts of the program –Flow-dependent constraints Distributed learning across the community Filtering of constraint templates, variables

30 Learning for Windows x86 binaries No static or dynamic analysis to determine validity of values and pointers –Use expressions computed by the application –Can be more or fewer variables (e.g., loop invariants) –Static analysis to eliminate duplicated variables New constraint templates (e.g., stack pointer) Determine code of a function from its entry point

31 Outline Overview Learning: create model of normal behavior Monitoring: determine properties of attacks –Detector details –Learning from failure Repair: propose and evaluate patches Evaluation: adversarial Red Team exercise Conclusion

32 Detecting attacks (or bugs) Code injection (Determina Memory Firewall) –Triggers if control jumps to code that was not in the original executable Memory corruption (Heap Guard) –Triggers if sentinel values are overwritten These have low overhead and no false positives Goal: detect problems close to their source

33 Memory Firewall as Detector (Diagram: an attack ENTERs over the network, COMPROMISEs system & application memory with attack code, then HIJACKs the program counter; only the hijack step is an ABI violation.) 1) ENTER (no ABI violation): monitoring is simple – port monitoring or system call monitoring, used by IDS and application firewalls – but you don't know the good guy from the bad guy: only "known criminals" can be identified, and even known bad guys are hard to detect over encrypted channels. 2) COMPROMISE (no ABI violation): monitoring can be done via system call monitoring, used by "system call interception" HIPS systems – but it is hard to distinguish the actions of a normal program from a compromised one, which leads to false positives. 3) HIJACK of the program counter (ABI violation): "catch in the act of criminal behavior" – all programs follow strict conventions (the ABI and the MS/Intel calling convention), all attacks violate some of these conventions, and currently nothing enforces them.

34 Memory Firewall Security Policies Goal: Closest approximation to programmer intent Infer Control Flow Graph nodes and edges –ISA requirements – x86 minimal –OS requirements – page RW- –ABI requirements – imports, exports, SEH –Calling conventions (inter-module) –Compiler idiosyncrasies – C, gcc C, C++, VB, Delphi Restricted Code Origins – prevent code injection Restricted Control Transfers – prevent code reuse attacks –For x86 user mode: RET, IND CALL, IND JUMP

35 Other possible attack detectors Non-application-specific: –Crash: page fault, assertions, etc. –Infinite loop Application-specific checks: –Tainting, data injection –Heap consistency –User complaints Application behavior: –Different failures/attacks –Unusual system calls or code execution –Violation of learned behaviors Collection of detectors with high false positives Non-example: social engineering

36 Learning from failures Each attack provides information about the underlying vulnerability –That it exists –Where it can be exploited –How the exploit operates –What repairs are successful

37 Monitoring Detect attacks Assumption: few false positives Detector collects information and terminates the application

38 Monitoring Where did the attack happen? Detector collects information and terminates the application Client sends attack info to the server (the detector maintains a shadow call stack, e.g., scanf ← read_input ← process_record ← main)

39 Monitoring Server generates instrumentation for targeted code locations (e.g., main, process_record, …) Server sends the instrumentation to all clients Clients install the instrumentation: extra checking in the attacked code, evaluating the learned constraints

40 Monitoring What was the effect of the attack? Instrumentation continuously evaluates the inferred behavior Clients send the difference in behavior (violated constraints, e.g., Violated: copy_len ≤ buff_size) to the server Server correlates constraints to the attack (e.g., Correlated: copy_len ≤ buff_size)

41 Correlating attacks and constraints Evaluate only constraints at the attack sites –Low overhead A constraint is correlated to an attack if: –The constraint is violated iff the attack occurs Create repairs for each correlated constraint –There may be multiple correlated constraints –Multiple candidate repairs per constraint
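A minimal sketch of the correlation rule above, assuming the server keeps simple per-constraint counters at each attack site (the data structure is hypothetical, not the system's actual bookkeeping):

    #include <stdbool.h>

    /* Counts gathered from client reports at one attack site. */
    struct constraint_stats {
        int violated_with_attack;      /* constraint violated AND attack detected */
        int violated_without_attack;   /* constraint violated, no attack          */
        int attack_without_violation;  /* attack detected, constraint held        */
    };

    /* "Violated iff the attack occurs": every attack saw a violation and
       no violation was ever seen outside an attack. */
    static bool correlated(const struct constraint_stats *s) {
        return s->violated_with_attack > 0
            && s->violated_without_attack == 0
            && s->attack_without_violation == 0;
    }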

42 Outline Overview Learning: create model of normal behavior Monitoring: determine properties of attacks Repair: propose and evaluate patches Evaluation: adversarial Red Team exercise Conclusion

43 Repair Distribute patches to the community (Patch 1, Patch 2, Patch 3, …); initial ranking: Patch 1: 0, Patch 2: 0, Patch 3: 0, … Success = no detector is triggered Patch evaluation uses additional detectors (e.g., crash, difference in attack)

44 Attack example Target: JavaScript system routine (written in C++) –Casts its argument to a C++ object, calls a virtual method –Does not check type of the argument Attack supplies an “object” whose virtual table points to attacker-supplied code A constraint at the method call is correlated –Constraint: JSRI address target is one of a known set Possible repairs: –Skip over the call –Return early –Call one of the known valid methods –No repair

45 Enabling a repair Repair may depend on constraint checking – if (!(copy_len ≤ buff_size)) copy_len = buff_size; –If constraint is not violated, no need to repair –If constraint is violated, an attack is (probably) underway Repair does not depend on the detector –Should fix the problem before the detector is triggered Repair is not identical to what a human would write –A stopgap until an official patch is released –Unacceptable to wait for human response
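For the copy_len example, a patch of the kind described above might look like the following sketch; the surrounding function and parameter names are illustrative, not the generated patch itself:

    #include <string.h>

    /* Patched copy site: the inserted check enforces the learned
       constraint copy_len <= buff_size before the original code runs. */
    void patched_copy(char *buff, size_t buff_size,
                      const char *src, size_t copy_len) {
        if (!(copy_len <= buff_size))   /* constraint violated: attack likely */
            copy_len = buff_size;       /* repair: coerce back into range     */
        memcpy(buff, src, copy_len);    /* original behavior, now safe        */
    }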

46 Evaluating a patch In-field evaluation –No attack detector is triggered –No crashes or other behavior deviations Pre-validation, before distributing the patch: Replay the attack +No need to wait for a second attack +Exactly reproduce the problem –Expensive to record log; log terminates abruptly –Need to prevent irrevocable effects –Delays distribution of good patches Run the program’s test suite –May be too sensitive –Not available for COTS software

47 Outline Overview Learning: create model of normal behavior Monitoring: determine properties of attacks Repair: propose and evaluate patches Evaluation: adversarial Red Team exercise Conclusion

48 Red Team Red Team attempts to break our system –Hired by DARPA; 10 engineers –(Researchers = “Blue Team”) Red Team created 10 Firefox exploits –Each exploit is a webpage –Firefox executes arbitrary code –Malicious JavaScript, GC errors, stack smashing, heap buffer overflow, uninitialized memory

49 Rules of engagement Firefox 1.0 –Blue team may not tune system to known vulnerabilities –Focus on most security-critical components No access to a community for learning Focus on the research aspects –Red Team may only attack Firefox and the Blue Team infrastructure No pre-infected community members –Future work Red Team has access to the implementation and all Blue Team documents & materials –Red Team may not discard attacks thwarted by the system

50 Results Blue Team system: –Detected all attacks, prevented all exploits –Repaired 7/10 attacks (after an average of 5.5 minutes and 8 attacks) –Handled attack variants –Handled simultaneous & intermixed attacks –Suffered no false positives –Monitoring/repair overhead was not noticeable

51 Results: Caveats Blue team fixed a bug with reusing filenames Communications infrastructure (from subcontractor) failed during the last day Learning overhead is high –Optimizations are possible –Can be distributed over community

52 Outline Overview Learning: create model of normal behavior Monitoring: determine properties of attacks Repair: propose and evaluate patches Evaluation: adversarial Red Team exercise Conclusion

53 Related work Distributed detection –Vigilante [Costa] (for worms; proof of exploit) –Live monitoring [Kıcıman] –Statistical bug isolation [Liblit] Learning (lots) –Typically for anomaly detection –FSMs for system calls Repair [Demsky] –requires specification; not scalable

54 Credits Saman Amarasinghe Jonathan Bachrach Michael Carbin Danny Dig Michael Ernst Sung Kim Samuel Larsen Carlos Pacheco Jeff Perkins Martin Rinard Frank Sherwood Greg Sullivan Weng-Fai Wong Yoav Zibin Subcontractor: Determina, Inc. Funding: DARPA (PM: Lee Badger) Red Team: SPARTA, Inc.

55 Contributions Framework for collaborative learning and repair Pluggable detection, learning, repair Handles unknown vulnerabilities Preserves functionality by repairing the vulnerability Learns from failure Focuses resources where they are most effective Implementation for Windows x86 binaries Evaluation via a Red Team exercise

56 My other research Making it easier (and more fun!) to create reliable software Security: quantitative information-flow; web vulnerabilities Programming languages: –Design: User-defined type qualifiers (in Java 7) –Type systems: immutability, canonicalization –Type inference: abstractions, polymorphism, immutability Testing: –Creating complex test inputs –Generating unit tests from system tests –Classifying test behavior Reproducing in-field failures; combined static & dynamic analysis; analysis of version history; refactoring; …

57 Contributions Framework for self-defending software Pluggable detection, learning, repair Handles unknown vulnerabilities Preserves functionality by repairing the vulnerability Learns from failure Uses a community Focuses resources where they are most effective Implementation for Windows x86 binaries Evaluation via a Red Team exercise

60 Underlying technology x86 instrumentation/patching: DynamoRIO (HP/MIT) & LiveShield (Determina) –Low overhead –No observable change to applications Learning: Dynamic invariant detection (UW) Detector for code injection: Memory Firewall (Determina)

61 Determina MPEE and Client Library (Diagram: basic blocks from the application binary pass through checking and transformation into the code cache; the PC executes from the checked, transformed basic blocks.) In learning mode: send trace data to the learner In monitoring mode: check learned constraints In repair mode: patch data or control

62 Collaborative Learning System Architecture (Diagram, repeated from earlier: learning-mode client workstations run the application under the MPEE with a data-acquisition client library and local learning (Daikon), sending sample data and constraints to the Central Management System; the server merges constraints (Daikon) and performs patch/repair code generation; protected-mode client workstations run the application under the MPEE with the Memory Firewall and LiveShield evaluation (observing execution), receiving patches and returning patch/repair results.)

63 Learning Mode Architecture (Diagram: on each community machine, the application runs under the Determina MPEE with a tracing client library; trace data flows to a local Daikon, and the node manager sends the resulting invariants over https/ssl to the server machine, where the central Daikon and management console maintain the invariant database and push invariant updates back out.)

64 What Is Trace Data? Sequence of observations Binary variables –Variable at the binary (not source) level –Type determined by use Example: 1: mov edx, [eax] 2: cmp edx, [ecx+4] Five binary variables: 1:eax (ptr), 1:[eax] (int), 2:edx (int), 2:ecx (ptr), 2:[ecx+4] (int)
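One plausible way to represent such observations, purely for illustration (the slides do not show the client library's actual trace format):

    #include <stdint.h>

    /* One observed binary variable at a given instruction. */
    enum bv_type { BV_INT, BV_PTR };

    struct observation {
        uint32_t     pc;      /* instruction that computed the value        */
        const char  *expr;    /* e.g. "eax", "[eax]", "[ecx+4]"             */
        enum bv_type type;    /* type determined by use                     */
        uint32_t     value;   /* value seen on this execution (illustrative) */
    };

    /* The two instructions in the example yield five observations. */
    static struct observation example[] = {
        { 1, "eax",     BV_PTR, 0x0040f000 },
        { 1, "[eax]",   BV_INT, 22 },
        { 2, "edx",     BV_INT, 22 },
        { 2, "ecx",     BV_PTR, 0x0040f100 },
        { 2, "[ecx+4]", BV_INT, 42 },
    };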

66 What Does the Local Daikon Do? Local Daikon –Reads trace data –Performs invariant inference Standard set of invariants –One of (var = one of {val1, …, valn}) –Not null (var != null) –Less than (var1 - var2 < c) –Many more (75 different kinds) Variables from same basic block (for now)

68 What Does Central Daikon Do? Takes invariants from Local Daikons Logically merges invariants into the Invariant Database –Each kind of invariant has merge rules –For example: x = 5 merge x = 6 is x one-of {5, 6}; x > 0 merge x > 10 is x > 10; x = 5 merge no invariant about x is no invariant about x; x = 5 merge no data yet about x is x = 5
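The merge rules can be made concrete for the one-of case; this is an illustrative sketch, not the Central Daikon's actual code, and the fixed value limit is an assumption:

    #include <stdbool.h>

    #define MAX_VALS 8

    /* A one-of invariant: var is one of vals[0..count-1].
       count == 0 means "no data yet"; alive == false means the
       invariant has been discarded ("no invariant about x"). */
    struct one_of {
        bool alive;
        int  count;
        long vals[MAX_VALS];
    };

    /* Merge a client's invariant b into the database entry a. */
    static void merge_one_of(struct one_of *a, const struct one_of *b) {
        if (!a->alive || !b->alive) { a->alive = false; return; }  /* x=5 merge none -> none */
        if (a->count == 0) { *a = *b; return; }                    /* no data yet -> take b  */
        for (int i = 0; i < b->count; i++) {                       /* x=5 merge x=6 -> {5,6} */
            bool seen = false;
            for (int j = 0; j < a->count; j++)
                if (a->vals[j] == b->vals[i]) { seen = true; break; }
            if (!seen) {
                if (a->count == MAX_VALS) { a->alive = false; return; } /* too many values */
                a->vals[a->count++] = b->vals[i];
            }
        }
    }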

69 Application Community Issues Lots of community members learning at same time Each community member instruments a (randomly chosen) subset of basic blocks –Minimizes learning overhead –While obtaining reasonable coverage Learning takes place over successful executions (without attacks) –Controlled environment –A posteriori judgement
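A minimal sketch of the random-subset idea, assuming each client simply samples basic blocks with a fixed probability (the sampling policy and fraction are illustrative, not the implemented strategy):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Decide which basic blocks this community member will instrument
       for learning.  With many members each tracing a small fraction of
       blocks, the community as a whole still covers the program well. */
    static bool instrument_block(unsigned block_id, double fraction) {
        (void)block_id;                    /* choice is independent of the block */
        return (double)rand() / RAND_MAX < fraction;
    }

Each client would seed rand() differently so that different members pick different subsets.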

70 Monitoring Mode Architecture (Diagram: on the community machine, the application runs under the Determina MPEE with the client library and attack detection, managed by the node manager; attack information is sent over https/ssl to the protection manager and management console on the server machine.)

71 Community Machine Detects attack signal –Determina Memory Firewall –Fatal error (invalid address, divide by zero) –In principle, any indication of attack Attack information –Program counter where attack occurred –Stack when attack occurred Sent to server as application dies
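The attack information described above might be packaged roughly as follows; the field names are illustrative, not the system's actual wire format:

    #include <stdint.h>

    #define MAX_STACK_DEPTH 64

    /* Report sent to the server as the application dies. */
    struct attack_report {
        uint32_t detector_id;                 /* e.g. Memory Firewall, fatal error */
        uint32_t attack_pc;                   /* program counter where attack occurred */
        uint32_t stack_depth;
        uint32_t call_stack[MAX_STACK_DEPTH]; /* return addresses, innermost first */
    };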

72 Invariant Localization Overview Goal: Find out which invariants are violated when program is attacked Strategy: –Find invariants close to attack –Make running applications check for violations of these invariants –Correlate invariant violations with attacks

73 Invariant Localization Mode Architecture (Diagram: on the community machine, the application runs under the Determina MPEE with the client library and an attack & invariant-violation detector; the node manager handles LiveShield installation and sends attack & invariant information over https/ssl to the server machine, where the protection manager and management console use the invariant database for LiveShield generation and distribute LiveShields back to clients.)

74 Finding Invariants Close to Attack Attack Information –PC of instruction where attack detected (jump to invalid code) (instruction that accessed invalid memory) (divide by zero instruction) –Call stack Duplicate stack Preserved even for stack smashing attacks Find basic blocks that are close to involved PCs Find invariants for those basic blocks

75 Detecting Invariant Violations Add checking code to application –Check for violations of selected invariants –Log any violated invariants Use Determina LiveShield mechanism –Distribute code patches to basic blocks –Eject basic blocks from code cache –Insert new version of basic block with new checking code –Updates programs as they run

76 Using LiveShield Mechanism Protection manager selects invariants to check Generates C code that implements check Passes C code to scripts –Compile the code –Generate patch –Sign it, convert to LiveShield format Distribute LiveShields back to applications –Each application gets all LiveShields –Goal is to maximize checking information
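The generated C code for a check might look roughly like this for the copy_len example; the reporting helper and the constraint id are hypothetical:

    /* Hypothetical reporting hook provided by the client library. */
    extern void liveshield_report_violation(int constraint_id);

    /* Check inserted into the patched basic block: log the violation,
       then let execution continue unchanged (localization mode only
       observes; it does not yet repair). */
    void check_constraint_17(unsigned int copy_len, unsigned int buff_size) {
        if (!(copy_len <= buff_size))
            liveshield_report_violation(17);
    }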

77 Correlating Invariant Violations and Attacks The protection manager is fed two kinds of information –Invariant violation information –Attack information It correlates the information –If an invariant violation is followed by an attack –Then the invariant is a candidate for enforcement

78 Protection Mode Architecture (Diagram: same as invariant localization mode, except that the community machine now runs the attack detector & invariant enforcement; attack & invariant information flows over https/ssl to the protection manager, management console, and invariant database on the server machine, which generates and installs LiveShields.)

79 Given an invariant to enforce Protection manager generates LiveShields that correspond to different repair options Current implementation for one-of constraints –Variable is a pointer to a function –Constraint violation is a jump to function previously unseen at that jump instruction –Potential repairs Call one of previously seen functions Skip call Return immediately back to caller
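A sketch of what an enforced one-of invariant could look like at the jsri call site, with the repair option chosen ahead of time by the protection manager; the names and C-level dispatch are illustrative (the "return immediately to caller" option unwinds the enclosing function at the binary level and has no direct analogue here):

    #include <stddef.h>

    typedef void (*method_fn)(void *self);

    /* Functions previously observed at this jsri instruction during
       learning (filled in from the one-of invariant). */
    static method_fn known_targets[8];
    static size_t    known_count;

    static int is_known(method_fn f) {
        for (size_t i = 0; i < known_count; i++)
            if (known_targets[i] == f) return 1;
        return 0;
    }

    /* Guard around the indirect call.  If the target violates the one-of
       invariant, apply the chosen repair: call a previously seen function
       or skip the call entirely. */
    void guarded_jsri(method_fn target, void *self, int call_known_on_violation) {
        if (is_known(target)) {
            target(self);                  /* constraint holds: original behavior    */
        } else if (call_known_on_violation && known_count > 0) {
            known_targets[0](self);        /* repair: call a previously seen method  */
        }
        /* else: repair = skip the call */
    }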

80 Selecting A Good Repair Protection manager generates a LiveShield for each repair option Distributes LiveShields across applications Random assignment, biased as follows Each LiveShield has a success number –Invariant enforcement followed by continued successful execution increments number –Attack or crash decrements number –Probability of selection is proportional to success number –Periodically reassign LiveShields to applications
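The biased random assignment could be implemented along these lines; the shift used to keep weights positive is an assumption, since the slides only say the probability is proportional to the success number:

    #include <stdlib.h>

    /* Pick a LiveShield index with probability proportional to its success
       number.  Success numbers can go negative (attacks and crashes
       decrement them), so shift everything to be at least 1 before weighting.
       Assumes n >= 1. */
    int pick_liveshield(const int *success, int n) {
        int min = success[0], total = 0;
        for (int i = 1; i < n; i++)
            if (success[i] < min) min = success[i];
        int shift = (min <= 0) ? 1 - min : 0;
        for (int i = 0; i < n; i++)
            total += success[i] + shift;
        int r = rand() % total;             /* 0 .. total-1 */
        for (int i = 0; i < n; i++) {
            r -= success[i] + shift;
            if (r < 0) return i;
        }
        return n - 1;                       /* not reached in practice */
    }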

81 System in Action – Concrete Example Learning mode –Key binary variable is target of jsri instruction –Learn a one-of constraint (target is one-of invoked functions) Monitoring mode –Memory Firewall detects attempt to execute unauthorized function Invariant localization mode –Attack information identifies jsri instruction as target of attack –Correlates invariant violation with attack Protection Mode –Distribute range of repairs (skip call, call previously observed function) –Check that they successfully neutralize attack

82 Attack Surface Issues Determina Runtime as attack target Addressed with page protection policies Also randomize placement –Runtime data –Runtime code, code cache
Page Type      Runtime Mode    Application Mode
App code       R               R
App data       RW
Runtime code   RE              R
Code Cache     RW              RE
Runtime data   RW              R

83 Communication Issues What about forged communications? Management console has a certificate authority –Clients use a password to get certificates –All communications are signed, authenticated, and encrypted –Revocation if necessary (Diagram: management console with certificate authority and invariant database.)

84 Parameterized Architecture and Implementation Parameterization points –Attack signal –Invariants Inference Enforcement mechanisms Flexibility in implementation strategies –Invariant localization strategies –Invariant repair strategies

85 Class of Attacks Prerequisites for stopping an attack Attack characteristics –Attack signal –Attack must violate invariants –Enforcing invariants must neutralize attack Invariant characteristics –Daikon must recognize invariants –System must be able to successfully repair violations of invariants

86 Examples of Attacks We Can Stop Function pointer –Attack signal – Determina Memory Firewall –Invariant One-of invariant Function pointer binary variable –Repair Jump to previously seen function Skip call

87 Examples of Attacks We Can Stop Code injection attacks via stack overwriting –Attack signal – Determina Memory Firewall –Invariant Less than invariant Stack pointer binary variable –Repair Skip writes via binary variable Coerce binary variable back into range

88 Conclusion Critical vulnerabilities are recognized –Code injection –Denial of service –Framework can be extended to other detectors Vulnerability is repaired –This attack will fail in the future Overhead is low –Detector overhead is low –Compute correlation only for constraints at attack sites Effective on legacy x86 binaries

91 Collaborative learning approach 1.Learning: Infer normal behavior from successful executions 2.Monitoring: Detect attacks (plug-in module) When an attack occurs: 3.Localization: What code does the attack target? How does the attack change behavior? 4.Fixing: Propose patches based on the specifics of the attack 5.Protection: Evaluate patches, distribute the most successful ones Result: applications are automatically protected

92 Collaborative Learning Framework Provide best possible out-of-box proactive protection against vulnerabilities –Protect (and take advantage of) a community Automatically repair vulnerabilities for improved continuity –Use dynamically learned constraints Key Features: –Proactive protection (Memory Firewall) for unknown vulnerabilities –Attack detector based constraint checking focuses repair on exploited vulnerabilities –Code injection and Denial of Service (crashes) vulnerabilities are protected and repaired –Other detectors can be added to the framework –Supports arbitrary x86 binaries –Adaptive: Repairs that perform poorly are removed

93 Goal: Security for COTS software Proactively prevent attacks via unknown vulnerabilities: zero-day exploits –No pre-generated signatures –No time for human reaction Maintain continuity and preserve functionality –Program continues to operate despite attacks –Not the right goal for all applications –Create a patch (a repair) –Low overhead COTS software –No built-in survivability features –No modification to source or executables –x86 Windows binaries Leverage the benefits of a monoculture

94 My other research Making it easier (and more fun!) to create software Security: Quantitative information-flow –How much information does a (real) program leak? Testing: –Creating test inputs for complex systems –Test factoring: generate unit tests from system tests –Classifying test behavior, feedback-directed random generation PL design: User-defined type qualifiers –To appear in Java 7 Types: Polymorphism, immutability, canonicalization –Detect and prevent errors at low programmer cost –Type inference: Polymorphism, immutability, abstract types Analysis of version history, refactoring, …

