INF523: Assurance in Cyberspace Applied to Information Security Course Introduction Prof. Clifford Neuman Lecture 1 13 Jan 2016 OHE 120.


1 INF523: Assurance in Cyberspace Applied to Information Security Course Introduction Prof. Clifford Neuman Lecture 1 13 Jan 2016 OHE 120

2 Course Identification INF 523 –Assurance in Cyberspace –3.0 units Class meeting schedule –6:40-9:20pm Wed –Room OHE 120 Class communication –inf523@csclass.info –Goes to instructor and TA (when assigned) and is archived. 1

3 General Course Information Professor office hours –Wednesday 5-6PM and Friday 3-4PM (but not on 1/13/16) –Other times by appointment Primary office in MDR at Information Sciences Institute –E-mail: bcn@isi.edu –PHE514 TA/Grader for the class –TBA 2

4 Guidelines for Students Class will be primarily individual study Student deliverables –Homework assignments –Spontaneous class participation –Quizzes –Individual project –Midterm exam –Final exam Read the assigned readings before class! –Responsible for content of assigned reading –Quizzes heavily focused on assigned reading 3

5 Guidelines for Students All assignments are to be submitted individually –Help each other understand concepts –Help each other understand assignments –Work should reflect your own efforts Academic integrity is taken very seriously –Libraries are a resource for USC academic standards –http://www.usc.edu/libraries/about/reference/tutorials/academic_integrity/index.php 4

6 Grading Schema Final: 30% Mid-Term: 25% Quizzes: 10% Class Participation: 10% Homework Assignments: 25% __________________ Total: 100% 5

7 Letter Grade Assignment 6 A letter grade will be assigned for each assignment, project, or exam. The individual assignment scores are based on overall class performance. Course grade is determined by weighted calculation from the component grades.

8 Assurance Course Context Required course for Masters of Cyber Security Builds on hard science from previous courses –Includes computability, Turing machines Significance evident in cyber security problem –INF 520: Foundations of Information Security Continuation of other courses –INF 522: Policy is only a starting point Basis for the notion of trusted system design –INF 525: Trusted System Design, Analysis and Development 7

9 Relationship to Informatics 527 INF 527 provided case studies of systems and the class discussed the assurance characteristic of those systems. This year I will be drawing some of those case studies into the curriculum for INF523 to illustrate the concepts you are learning. INF 527 students prepare a case study of their own as a group project. In 2017, this project will become part of INF523, and the class will become a 4 unit class. Students taking INF523 this semester will be allowed to meet the INF527 requirement by completing a one unit DR in a later semester where they complete the case study project. 8

10 Questions on Course Structure 9

11 Initial Reading Assignment (read before January 20 lecture) Bishop book Chapter 18, “Introduction to Assurance” –Computer Security Art and Science: Bishop, Matt, 2003 Introduction to the Secure Software Development Lifecycle http://resources.infosecinstitute.com/intro-secure-software-development-life-cycle/ 10

12 More Reading (before January 22) ISACA - How Can Security Be Measured? http://www.isaca.org/Journal/Past-Issues/2005/Volume-2/Documents/jpdf052-how-can-security.pdf ISACA - Performing a Security Risk Assessment http://www.isaca.org/Journal/Past-Issues/2010/Volume-1/Documents/1001-performing-a-security.pdf 11

13 Trust A trustworthy entity has sufficient credible evidence leading one to believe that the system will meet a set of requirements Trust is a measure of one’s belief in trustworthiness, relying on that evidence –To trust makes one vulnerable to violations of that trust Assurance is the process of building confidence that an entity meets its security requirements, based on evidence provided by applying assurance techniques –“Meets security requirements” == Enforces policy 12

14 What is Assurance? 13

15 Problem Sources 1.Policy flaws 2.Requirements definitions, omissions, and mistakes 3.System design flaws 4.Hardware implementation flaws, such as wiring and chip flaws 5.Software implementation errors, program bugs, and compiler bugs 6.System use and operation errors and inadvertent mistakes 7.Willful system misuse 8.Hardware, communication, or other equipment malfunction 9.Environmental problems, natural causes, and acts of God 10.Evolution, maintenance, faulty upgrades, and decommissions 14

16 Assurance as a (non-)Priority Industry emphasis: –time to market –features Security is considered to be fixable later –Another way of saying that: “patch and pray” What is the result? 15

17 Result of Shipping Low Assurance Products 16

18 Examples Challenger explosion –Sensors removed from booster rockets to meet accelerated launch schedule Deaths from faulty radiation therapy system –Hardware safety interlock removed –Flaws in software design Bell V22 Osprey crashes –Failure to correct for malfunctioning components; two faulty ones could outvote a third Intel 486 chip –Bug in trigonometric functions 17

19 Result of Shipping Low Assurance Products Uncountable system vulnerabilities Endless patching Expensive and (nearly) useless security add-ons Focus on fixing the wrong things Continual losses to individuals, business, and government 18

20 When is High Assurance Warranted? Industry emphasis makes sense for most uses –Consider, e.g., what most Windows systems are used for –Another example: credit cards without chip and PIN –“Good enough” based on estimated risk –(but assumptions change) Assurance is expensive –Extra time, trained staff, tools –May reduce product features High assurance required for –Protection of human life –Highly sensitive information –Whenever the potential loss is of high value 19

21 What is Assurance? Assurance is the generation of confidence that system satisfies (strongly and reliably enforces) security policy Confidence gained as result of evidence 20

22 Question A vendor advertises that its system was connected to the Internet for three months and no one was able to break into it. The vendor claims that this means the system cannot be broken into. –Do you share the vendor’s confidence? –Why or why not? 21

23 What is Assurance? Assurance is the generation of confidence that system satisfies (strongly and reliably enforces) security policy Confidence gained as result of evidence No way to prove this with current technology Must make “assurance argument” –Based on body of collected evidence –Evidence collected through assurance techniques E.g., testing 22

24 System Development Lifecycle Sequential stages of development and use Many variant SDL definitions. Here is one: 1.Requirements gathering/definition 2.Design 3.Implementation (coding) 4.Testing 5.Release 6.Operation 7.Disposal 23

25 Types of Assurance Policy assurance is evidence establishing that the security requirements in the policy are complete, consistent, and technically sound Design assurance is evidence establishing that the design is sufficient to meet the requirements of the security policy Implementation assurance is evidence establishing that the implementation is consistent with the design –And hence with the security requirements of the security policy 24

26 Types of Assurance (cont.) Operational assurance is evidence establishing system sustains the security policy requirements during installation, configuration, and day-to-day operation –Also called administrative assurance 25

27 “Waterfall” Model Waterfall model: sequential design process –With backtracking 26

28 Each stage must provide assurance justification for earlier stage –E.g., Does design satisfy requirements? –Is implementation faithful to design? Assurance must be built into every SDL stage Assurance in SDL 27

29 Assurance in the System Lifecycle Assurance techniques must be applied in all stages of the system lifecycle, e.g., –The system’s security policy is internally consistent and reflects the requirements of the organization –The design of the security functions is sufficient to enforce the security requirements –The functions are implemented correctly –The assurances hold up through the maintenance, installation, configuration, and other operational stages 28

30 “Assurance Waterfall” [Figure: the assurance waterfall maps each lifecycle stage to assurance techniques. Stages: Org. Req’s → Policy → Security Req’s → Design → Implementation → Distribution → Installation & Config → Maintenance → Disposal, with threats and version management considered throughout. Techniques shown include threat modeling, informal analysis, FSPM, FTLS, intermediate spec(s), proof and code correspondence, modularization and layering, secure coding, testing, secure distribution, patching and monitoring, secure install & config, and secure disposal.] 29

31 Other Lifecycle Models Exploratory programming –Develop working system quickly –Used when detailed requirements specification cannot be formulated in advance, and adequacy is goal –No requirements or design specification, so low assurance Prototyping –Objective is to establish system requirements –Future iterations (after first) allow assurance techniques 30

32 Other Lifecycle Models (contd.) Formal transformation –Create formal specification –Translate it into program using correctness-preserving transformations –Very conducive to assurance methods System assembly from reusable components –Depends on whether components are trusted –Must assure connections and composition –Very complex, difficult to create assurance argument 31

33 Other Lifecycle Models (contd.) Extreme programming –Rapid prototyping and “best practices” –Project driven by business decisions –Requirements open until project complete –Programmers work in teams –Components tested, integrated several times a day –Objective is to get system into production as quickly as possible, then enhance it –Evidence adduced after development needed for assurance 32

34 Assurance Course Questions Strengths and weaknesses of each technique Which techniques give most value When to use specific techniques How to balance risk and reward 33

35 Question A vendor advertises that its system was connected to the Internet for three months and no one was able to break into it. The vendor claims that this means the system cannot be broken into. If a commercial evaluation service had monitored the testing of this system and confirmed that, despite numerous attempts, no attacker had succeeded in breaking in, would your confidence in the vendor’s claim increase, decrease, or stay the same? Why? Does this constitute “proof”? 34

36 Key Points Assurance is critical for determining trustworthiness of systems Different levels of assurance, from informal evidence to rigorous mathematical evidence Assurance needed at all stages of system life cycle 35

37 Why Assurance in All Lifecycle Stages? 36 Testing cannot (in other than trivially reduced circumstances) ever be complete Vendors’ claims should always be suspect until sufficient evidence is provided (if it exists) Assurance argument based on totality of evidence for all stages of the system lifecycle Gaps in the assurance argument are not good Inconsistencies in the assurance argument are not good

38 INF523: Assurance in Cyberspace Applied to Information Security Measuring Security and Risk

39 Professional Orgs: SANS Organization that provides IT security training and certifications (https://www.sans.org/) Home of “Internet Storm Center” – Internet monitoring and alert system (https://isc.sans.org/) Lots of practical resources (http://www.sans.org/security-resources/) Useful newsletters (http://www.sans.org/newsletters/) Assignment: Subscribe to SANS NewsBites and @RISK newsletters 38

40 Assurance News SANS NewsBites Vol. 17 Num. 005 Verizon Fixes Data Exposure Vulnerability in My FiOS (January 18 & 19, 2015) –Verizon has fixed a security flaw in its My FiOS mobile application that exposed inboxes and private messages of as many as five million user accounts. The data could be viewed by manipulating user ID numbers in web requests. http://www.theregister.co.uk/2015/01/19/verizon_fios_vulnerability/ http://www.computerworld.com/article/2871488/flawed-verizon-my-fios-mobile-app-exposed-email-accounts.html Where was the assurance failure? 39

41 Measurement and Assessment Measuring Security –Security metrics and their applicability to assurance Security Risk Assessment –Rationale for, and methods of, identifying and quantifying risks to an organization’s information assets in order to allocate resources to increase assurance 40

42 Why Measure Things? Cornerstone of science, engineering, and management –“You can’t manage what you can’t measure” –To ensure decisions are made based on fact Necessary for quantitative evaluation –“Thing ‘A’ is different from thing ‘B’ by this much” 41

43 Why Measure Things? The most important of science’s characteristics: a record of improvement in predictive range and accuracy –For example, when we put a satellite in orbit, we have the scientific knowledge that guarantees accuracy and precision in the prediction of its orbit. 42

44 Measuring Computer Security But the record of the discipline of computer security in providing any similar level of certainty about outcomes is not good –Exception: Products evaluated under Class A1 of the TCSEC (we’ll talk about that later) Want to believe that Computer Security is a branch of engineering, the application of scientific principles or, at least, has some basis in science What can be basis for this belief? 43

45 Computer Security’s Job Minimize security risk while Maximizing “business value” (however that is determined for the specific organization) Both require measurement and metrics 44

46 Measurement and Metrics Measurement: Dimensions, quantity, or capacity as ascertained by comparison with a standard –Length in meters (standard length) –Time in seconds (standard time interval) Concrete, quantitative, measure one thing Metric: Interpretation of measurements to ascertain properties or qualities of that which is measured –E.g., Process effectiveness, achieving of objectives Qualitative, often relative to a baseline 45

47 What is Measurement of Security? Question: When you add antivirus protection to your laptop, how many units of security do you gain? Answer: ??? Are “units of security” even possible? If not, what then? 46

48 Things We’d Like to Measure How much more secure is an application, system, or network after adding a particular security control? What’s the best mix of controls that will get the most security for a given investment? Are we secured enough? Are the security controls worth the price? 47

49 Current Metrics Insufficient Current metrics cannot answer these questions Many “security metrics” are really “badness-o-meters” –They indicate if system security is bad, but not if it is good E.g., vulnerability scanners –Detection of known vulnerabilities tells you that your system security is bad; more means it’s worse –Absence of detected vulnerabilities does not necessarily mean that your system security is good 48

50 Anti-phishing Training Anti-phishing training for employees Sends phishing emails to employees If they click on the link, they get training Goal is to reduce successful phishing attacks Later, new phishing email sent Measure reduction in clicking after training Question: If training reduces clicking on links in phishing emails by 75%, is the company 75% more secure? From the point of view of IT security, it does reduce workload 49
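The click-rate arithmetic behind the question on this slide can be made concrete. The employee and click counts below are hypothetical, chosen only to produce the 75% figure from the slide.

```python
# Hypothetical anti-phishing campaign results: clicks on a test phishing
# email before and after training, for the same employee population.
before_clicks = 400   # clicks in the first campaign (assumed)
after_clicks = 100    # clicks in the follow-up campaign (assumed)

reduction = (before_clicks - after_clicks) / before_clicks
print(f"Click reduction: {reduction:.0%}")  # Click reduction: 75%

# The 75% figure measures reduced incident-response workload, not "75% more
# security": a single successful click can still compromise the network.
```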

51 “Checklist” Metrics Based on standards documents Lists of “best practices” –E.g., NIST 800-53, Security and Privacy Controls for Federal Information Systems and Organizations (http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf) Counts existence of controls, but –Little guidance about methods, processes, or tools –When is each control necessary? –Effectiveness or value of each control? Useful for measuring compliance with standards But how much security does each control get me? 50

52 Claimed Effectiveness Metrics Australian “Strategies to Mitigate Targeted Cyber Intrusions” (http://www.asd.gov.au/infosec/mitigationstrategies.htm) –“At least 85% of the targeted cyber intrusions that the Australian Signals Directorate (ASD) responds to could be prevented by following the Top 4 mitigation strategies listed in our Strategies to Mitigate Targeted Cyber Intrusions: 1.use application whitelisting to help prevent malicious software and unapproved programs from running 2.patch applications such as Java, PDF viewers, Flash, web browsers and Microsoft Office 3.patch operating system vulnerabilities 4.restrict administrative privileges to operating systems and applications based on user duties.” 51

53 Effective? I.e., At least 85% of the detected intrusions could be prevented by the top 4 controls –What about 100% of the undetected intrusions? –To be fair, they are effective against common attacks –Reduce IT security effort and cost What do the top 4 controls all have in common? –What is the common threat they all try to block? 1.Prevent malicious software from running 2.Patch applications 3.Patch operating system 4.Restrict user privileges Preventing system subversion! 52

54 Not Measuring Security Metrics like these do not measure security They measure something that “stands in” for security, e.g.: –Checklists measure compliance with “best practices” Belief that good practices increase security –Password strength measures complexity Belief that more complex means more secure Sometimes valid –E.g., more complex passwords are harder to guess Often based on assumptions and guessing –E.g., Metric for difficulty of attacking a system, based on estimated time and effort 53

55 Are Real Security Metrics Possible? Limits to accuracy and quantifiability Can just say “more” or “less”, without quantifying Consider TCSEC –Class A1 is more secure than B1 –But how much more secure? Can’t quantify. What is fundamental about security that makes measurement so hard? 54

56 What is Security? A system is only secure with respect to a computer security policy A computer security policy denotes what is allowed and what is not allowed with respect to people accessing information stored in the system So computer security is the control of access by people to information stored in a computer system in order to enforce the policy How can we measure “control of access”? 55

57 Does That Definition Help? Security isn’t tangible Nothing physical with measurable properties –E.g., temperature Not a measurable quantity in a large system –Like gross national product, in economics Security is made up by people, like rules in chess, so meaning is entirely human-defined 56

58 Many Definitions of Security Humans define “security” as policies But there are many types of policies –E.g., C.I.A., plus others Policies don’t compose into one thing Different components of the system have different security requirements –OS and applications and DNS and networking and authentication and audit and... –Multi-dimensional, not additive Policies may contradict each other Keep policies separate, forget “unified theory” –E.g., TNI valid assuming secure network connections 57

59 Can’t Prove Always in Secure State For some policies it is impossible to prove a system will always be “safe” –HRU result Bad choice of policy makes metric impossible Need to choose from classes of policies we know we can reason about –E.g., MAC as modeled by BLP 58

60 Need a Good Model Scientists use models to reason about systems Abstract model simplifies calculations Airplane wings built to satisfy fluid mechanics model Model doesn’t work for predicting flight of wrinkled dollar bill in a windstorm –Wrinkled dollar bill does not fit model Experiments used to validate model 59

61 Need a Good Security Model Few organizations give precise definitions of security –Requirements, Policy, Model Must build information systems to meet requirements of a model Why do we expect to be able to predict security behavior when so many uncontrolled factors? Most secure systems were based on BLP model for policy –RM model for mechanisms 60

62 Requirements for Analytics Systems Stable problems –E.g., ad targeting, airline scheduling –“Rules” are fairly well established Predictable range of input values No (or slow) drift from training data Large data sets If not stable, model needs constant updating Does value justify expense? 61

63 Difficult to Create Security Experiments Can’t isolate –Chaotic Internet means too many confounding factors Too small a sample –Conditions widely different at different places and times Changes in environment happen too quickly –Results of an experiment no longer valid Can only detect and count known exploits and vulnerabilities –E.g., to measure effectiveness of AV –Can’t measure what you can’t see 62

64 Can’t Prove a Negative Absence of evidence is not evidence of absence Can only detect known exploits, so have no idea what is being missed Know there are uncountable vulnerabilities Know there are uncountable zero-days How can we know if we are already p0wned? How can we say that we are not? Ideally, design and build for high-assurance –Protect against all attacks, known and unknown 63

65 Feedback Loop Willful, intelligent attackers Actions of defenders affect actions of attackers Actions of attackers affect actions of defenders Constant change so experiments only reflect one point in time with particular conditions Ideally, want solutions that always work –Independent of attackers actions 64

66 Security is Binary A system is secure or it is not –Secure means always in secure state Reducing exploit instances insufficient –Ideally, want to eliminate exploit instances –Anti-phishing training example from before –Fewer fall prey –That reduces # detectable instances and cost –But still many successful exploits Advanced attackers only need to succeed once 65

67 IT Risk Assessment The process of calculating quantitatively the potential for damage or monetary cost caused by an event that affects an organization’s IT assets Requires –Identifying possible events –Quantifying the probability that an event will occur –Quantifying in $ the potential damage Risk = frequency(event) * damage(event) –Risk = Annual Loss Expectancy (ALE) –Product of annualized rate of occurrence (ARO) and single loss expectancy (SLE) 66
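The Risk = frequency(event) * damage(event) relation on this slide can be sketched as a small calculation. The event frequency and dollar loss below are made-up numbers for illustration.

```python
def annual_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = ARO * SLE: annualized rate of occurrence (events/year)
    times single loss expectancy (dollars of damage per event)."""
    return aro * sle

# Hypothetical event: a malware incident expected twice a year,
# costing $15,000 per occurrence.
ale = annual_loss_expectancy(aro=2.0, sle=15_000)
print(ale)  # 30000.0
```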

68 Identifying Possible Events Consider –Threats – Potential sources of incidents –Vulnerabilities – Weaknesses in assets Event happens when threat meets vulnerability –Tornado and flimsy data center building –Hacker and unpatched windows server Security controls primarily focus on eliminating or mitigating vulnerabilities We’ll talk about threat modelling later in semester 67

69 Quantifying Probability of Events Based on estimates –Estimate frequency of threat –Estimate existence of vulnerability –Estimate difficulty of exploiting vulnerability –Estimate cost of exploiting vulnerability Estimates performed by “subject matter experts” –“SMEs” –Estimate based on lots of assumptions and intuition guided by experience Did I say “estimate”? I meant “guess”. 68

70 Example of Estimating Minimum password length and complexity –Threat: attacker will guess a password using brute-force guessing at login –Vulnerability: Short, simple passwords are easy to guess Estimates –Frequency of threat: Constant –Existence of vulnerability: 100% –Difficulty of exploiting vulnerability: Based on how long it would take at a given guesses/sec rate –Cost of exploiting vulnerability: Proportional to difficulty –Damage or loss: “high” 69
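The “how long at a given guesses/sec” estimate on this slide can be sketched as follows. The character-pool size, password length, and guessing rate are illustrative assumptions, not figures from the lecture.

```python
def seconds_to_exhaust(pool_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case time to try every password of the given length
    drawn from a character pool of the given size."""
    return pool_size ** length / guesses_per_sec

# Assumed scenario: 8 lowercase letters, online guessing at 1,000 guesses/sec.
secs = seconds_to_exhaust(pool_size=26, length=8, guesses_per_sec=1_000)
years = secs / (86_400 * 365)
print(round(years, 1))  # roughly 6.6 years to exhaust the space
```

Note how the estimate collapses if the assumptions change: offline cracking of stolen, unsalted hashes can run many orders of magnitude faster than online guessing.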

71 Problem with Estimates Frequency and vulnerability estimates do not take other controls into account –Limit on failed attempts –What if password hash stolen? A different threat! Potentially almost zero cost if using rainbow tables and passwords are not salted –If attack done by calculating hash on guessed passwords and comparing to stolen hashes Speed and cost of HW/SW is a moving target –Assumptions may be violated almost from the start 70

72 Mitigating Controls Mitigating controls or “remediations” –Reduce event probability or lessen impact New (reduced) calculated risk Risk reduction = |old_risk – new_risk| But remediations have costs Benefit = risk reduction – cost Want benefits > 0, but costs may be mandated Easy to miss some costs when estimating 71
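The benefit arithmetic on this slide is simple to sketch; the risk and cost figures below are hypothetical.

```python
def remediation_benefit(old_risk: float, new_risk: float, annual_cost: float) -> float:
    """Benefit = risk reduction - cost, where risk reduction is the
    drop in calculated risk (e.g., ALE) attributed to the control."""
    risk_reduction = abs(old_risk - new_risk)
    return risk_reduction - annual_cost

# Assumed: a control drops ALE from $30,000 to $5,000 and costs $10,000/year.
print(remediation_benefit(30_000, 5_000, 10_000))  # 15000
```

As the slide warns, a missed cost (e.g., staff time to operate the control) shrinks the real benefit even when the calculated one looks positive.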

73 Increasing Bits of Entropy in Passwords 72 https://xkcd.com/936/
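The entropy argument in the linked xkcd comic reduces to a one-line formula: bits = length × log2(pool size). The 2048-word list used below matches the comic's example; it is an illustrative figure, not from the slide.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of a password chosen uniformly at random:
    length symbols, each drawn from a pool of pool_size choices."""
    return length * math.log2(pool_size)

# Four random words from a 2048-word list: 4 * 11 = 44 bits.
print(entropy_bits(2048, 4))  # 44.0

# xkcd 936's point: a human-chosen "complex" password follows predictable
# patterns, so its effective entropy is far below the uniform-random figure.
```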

74 Another Example Threat: Intruders exploiting systems over the network without detection Vulnerability: Can’t detect attacks Remediation: Use IDS Estimates: –Frequency of threat: ??? –Existence of vulnerability: 100% –Difficulty of exploiting vulnerability: ??? –Cost of exploiting vulnerability: ??? –Damage or loss: “high” 73

75 Problems with Estimates How to estimate frequency of attacks? –Published studies? –Do the conditions of such studies match our conditions? Existence of vulnerability varies according to protected resources –Type of system, version, patch level, configuration, … Difficulty/cost of exploit varies according to protected resources What do SMEs do? They guess! 74

76 IDS Mitigating Control Cost of IDS easy to calculate –Call a salesperson How to calculate risk reduction? –What percentage of total attacks does IDS see? –How to know? Can you trust the vendor? –Usually signature based, so won’t see previously unknown or mutated attacks How to measure benefit? (Benefits are real) In this case, use of IDS might be mandated Cost must also take into account staff to monitor IDS and to review and follow-up on alerts 75

77 Measuring Security Conclusions No quantitative measurement possible –Just levels of effort to improve security –Effectiveness is relative, not absolute No way to combine all vectors into security score Helps to choose a good policy and model –Can build system secure even under changing conditions –Secure systems evaluated under TCSEC, based on BLP and RM, are a secure system existence proof Assurance is like other security measurements –Ultimately based on levels of effort –Effectiveness is relative –No exact measurement possible (with current technology) 76

78 Caveat “The perfect is the enemy of the good.” –Voltaire “Better a diamond with a flaw than a pebble without.” –Confucius, attrib. “Give them the third best to go on with; the second best comes too late, the best never comes.” –Watson-Watt “I’ve got to tell Management something.” –Unknown, stressed IT Security Analyst 77

79 Reading for Next Week TCSEC, pp. 10, 50-53, 62-63, 67-68, 77-79 Common Criteria, Part 3, pp. 15-17, 44-45 Final Evaluation Report, Gemini Trusted Network Processor – Section 7 (GTNP-NCSC-FER-94-008.pdf) –[All above on DEN in “Readings” module] SSE-CMM/ISO 21827 Capability Maturity Model (http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html – search for “ISO 21827”) Build Security In Maturity Model (BSIMM) (http://www.bsimm.com/) Microsoft Security Development Lifecycle (http://www.microsoft.com/security/sdl/default.aspx) 78

80 INF523: Assurance in Cyberspace as Applied to Information Security System Assurance Requirements Lecture 3

81 Security (Bad) News From SANS NewsBites Vol. 16 Num. 72 (9 Sep 2014) “Home Depot Breach Launched With Same Malware Used in Target Breach” –Phishing email stole login credentials (breaking authentication) –Infected point-of-sale (POS) systems (subversion!) –Copied and exfiltrated credit card data (violating confidentiality policy) Where in SDLC did failure occur and why? –Threat modeling/Security requirements Did they consider stolen login credentials as a threat? –“Strong” authentication Did they consider subversion a possibility? –Common IT security controls to resist subversion: »Restrictive access control policy, “Lock-down”, “TripWire”, Application whitelisting –What if they’d implemented the *-property? 80

82 Security (Bad) News From SANS NewsBites Vol. 17 Num. 006 (23 Jan 2015) NSA Information Assurance Directorate Report Offers Malware Defense Best Practices (January 22, 2015) The NSA's Information Assurance Directorate has released a report titled "Defensive Best Practices for Destructive Malware." The document encourages proactive defense so organizations can minimize the possibility that they will have to clean up after a massive attack like the one launched against Sony Pictures. Recommended best practices include segregating network systems and functions, and reducing and protecting administrator privileges. (http://www.darkreading.com/attacks-breaches/nsa-report-how-to-defend-against-destructive-malware/d/d-id/1318734) 81

83 Foundations of Computer Security 1.A security policy that states the laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information 2.The functionality of internal mechanisms to enforce that security policy 3.Assurance that the mechanisms correctly enforce the security policy 82

84 Review of Key Concepts 83 A system security policy is a statement of requirements that defines security expectations Cyber assurance is trust that system correctly enforces security policy Trust must be gained through evidence, using assurance techniques Assurance techniques must be applied at all stages of the system lifecycle

85 Assurance in the System Lifecycle The system’s security policy is internally consistent and reflects the requirements of the organization There are sufficient security functions to support the security policy The security functions meet a desired set of properties (and only those properties) The functions are implemented correctly The assurances hold up through the manufacturing, delivery, and other stages of the system lifecycle 84

86 High vs. Low Assurance We provide assurance through techniques such as structured design processes, documentation, and testing Higher assurance through use of more, and more rigorous, processes, documentation, and testing –Intuitively, compare a little testing vs. a great deal of testing There are some fundamental ways to organize these processes to make the job easier and to have a more robust product 85

87 A Good Start Implement a reference monitor –Tamperproof –Always invoked –“Small enough to be subject to independent testing, the completeness of which can be assured” (TCSEC) Trusted Computing Base (TCB) –Combination of hardware, software, and firmware that is responsible for enforcing the system's security policy Minimization of the complexity in the TCB is a goal That puts more code outside the TCB, but so what? –We are concerned here with the security policy only 86

88 Question A vendor says they have thoroughly tested their system and found no flaws. They say their system can be “trusted” with sensitive information. Would you believe it? –Why or why not? 87

89 Question If Microsoft tomorrow was to announce that “we’ve identified the security flaws in Windows 8 and came up with new security requirements that we implemented in Windows 10, so Windows 10 is totally secure”, would you believe it? –Why or why not? –If not, what would it take for you to believe it? 88

90 Question Your company wants to build a “secure” application that will run on Windows (version 7 and up) –You can expend as much effort, tools, and other resources necessary to make the application secure –You determine the security req’s, model threats, create a policy, generate the security req’s and use formal methods to ensure the security features map to the policy, create a design and use formal methods to map the design to the security req’s, implement the design and formally “prove” that the code satisfies the specification. Is your application high-assurance? 89

91 What do these examples tell us? 90 Testing cannot (in other than trivially reduced circumstances) ever be complete Vendors’ claims should always be suspect until sufficient evidence is provided (if it exists) You can’t spin straw into gold Must consider entire TCB, not just components Assurance argument based on totality of evidence for all stages of the system lifecycle Gaps in the assurance argument are not good Inconsistencies in the assurance argument are not good

92 Evaluating Assurance Arguments How does a normal purchaser of software and systems evaluate vendor security claims and evidence? Depend on 3rd-party expert evaluators –Presumed: Trained, experienced, good judgment, unbiased 91

93 Topics Covered in this Lecture Assurance Requirements in Evaluation Criteria –Assurance requirements at different evaluation levels Capability Maturity Models –An approach for assessing the capability of a vendor to produce a secure system Microsoft’s SDLC –A contemporary, real-world assurance process at a major software company 92

94 An Example Evaluation Criteria The Orange Book View of Assurance –Orange Book = TCSEC (http://csrc.nist.gov/publications/history/dod85.pdf) No longer used, but good example of what is needed to make assurance arguments 93

95 Orange Book Assurance The Assurance Control Objective: Systems that are used to process or handle classified or other sensitive information must be designed to guarantee correct and accurate interpretation of the security policy and must not distort the intent of that policy. Assurance must be provided that correct implementation and operation of the policy exists throughout the system's life-cycle. Operational Assurance comprises: –System Architecture –System Integrity –Covert Channel Analysis –Trusted Facility Management –Trusted Recovery 94

96 A Complication Should we always expect very high assurance evidence in every instance of security solutions / technology? –What is the case for? What is the case against? Consider the padlock industry –There are many different locks on the market –There is the notion of best, better, good, good enough and sort of ok –We make a risk judgment every time we buy one –Implicit assurance argument about sufficiency to mitigate the threat and “fit for purpose” (trustworthiness) –But the assurance is not the same across the lock space 95

97 Subtleties – Balanced Assurance Suppose you have a really strong, high-assurance perimeter control system. You’ve made a heavy investment to make sure only permitted individuals are able to have access through this perimeter. By allowing those individuals through the perimeter, you have placed great trust in them. Each of them has an office inside the perimeter, and each office has a lock on it. What threat does the office lock mitigate? Does the lock on the office need to be a super-strong, high-assurance lock, comparable with the perimeter? 96

98 Subtleties - Composability Consider a high-security jail that is built by assembling modules from various suppliers (likely chosen by lowest bid, but with no kickbacks). There are modules for the walls (with and without windows), doors, cells, and services (food, medical, and other). The modules “snap” together into the jail. You, of course, are worried about the management of this jail and very worried that someone will try to break out of it. –Each module is constructed to your specification with your assurance requirements and validated at the manufacturer –What would be the assurance argument for the jail as a unit? –How would you approach this problem? –How would you decide if / when the jail was fit for purpose? 97

99 Subtleties – Assurance and “the right stuff” Vendors market what sells, independently of what matters Vendors might try to pad the assurance with material that is not germane to your specific needs –For example, paint quality on patrol cars: the vendor could make a big deal out of it, but it is not one of your most important concerns Consider intrusion detection systems. You want an IDS that stops all intrusions. What assurance argument might the vendor present to you to substantiate that claim, and what important part(s) might he/she leave out? 98

100 TCSEC Classes D – Minimal Protection C – Discretionary Protection –C1 – Discretionary Security Protection - DAC –C2 – Controlled Access Protection – DAC + audit, etc. B – Mandatory Protection –B1 – Labeled Security Protection (has MAC labels) –B2 – Structured Protection (FSPM) –B3 – Security Domains (implements RM) A – Verified Protection –A1 – Verified Design (formal design, spec, and verify) 99

101 TCSEC summary Security Policy 100

102 TCSEC summary Accountability 101

103 TCSEC summary Documentation 102

104 TCSEC summary Assurance 103

105 Orange Book Assurance Requirements Orange book has different assurance requirements for different classes Except for system integrity, each of the assurance measures is graded –The measure is incrementally increased as the threat mitigation expectation of the system increases Logical grouping of the assurance measures. –Design Specification and Verification is not part of the DAC-only policy systems (Why?) –In fact, most of the assurance measures only apply to systems that provide MAC policy enforcement –I.e., they apply to the machine in the middle (the RM) 104

106 Example Logical Grouping A1’s chief concern is subversion, so there are yet more requirements for, e.g., trusted distribution

107 Example of Graded Assurance Measures We’ll look at Security Testing assurance requirements The security testing “ramp” –Higher assurance for higher security class

108 Class C1 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. Testing shall be done to assure that there are no obvious ways for an unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB. (See the Security Testing Guidelines.)

109 Class C2 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. Testing shall be done to assure that there are no obvious ways for an unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB. Testing shall also include a search for obvious flaws that would allow violation of resource isolation, or that would permit unauthorized access to the audit or authentication data. (See the Security Testing guidelines.)

110 Class B1 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. A team of individuals who thoroughly understand the specific implementation of the TCB shall subject its design documentation, source code, and object code to thorough analysis and testing. Their objectives shall be: to uncover all design and implementation flaws that would permit a subject external to the TCB to read, change, or delete data normally denied under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to respond to communications initiated by other users. All discovered flaws shall be removed or neutralized and the TCB retested to demonstrate that they have been eliminated and that new flaws have not been introduced. (See the Security Testing Guidelines.)

111 Class B2 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. A team of individuals who thoroughly understand the specific implementation of the TCB shall subject its design documentation, source code, and object code to thorough analysis and testing. Their objectives shall be: to uncover all design and implementation flaws that would permit a subject external to the TCB to read, change, or delete data normally denied under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to respond to communications initiated by other users. The TCB shall be found resistant to penetration. All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB implementation is consistent with the formal top level specification. (See the Security Testing Guidelines.)

112 Class B3 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. A team of individuals who thoroughly understand the specific implementation of the TCB shall subject its design documentation, source code, and object code to thorough analysis and testing. Their objectives shall be: to uncover all design and implementation flaws that would permit a subject external to the TCB to read, change, or delete data normally denied under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to respond to communications initiated by other users. The TCB shall be found resistant to penetration. All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB implementation is consistent with the formal top level specification. (See the Security Testing Guidelines.) No design flaws and no more than a few correctable implementation flaws may be found during testing and there shall be reasonable confidence that few remain.

113 Class A1 Security Testing The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. A team of individuals who thoroughly understand the specific implementation of the TCB shall subject its design documentation, source code, and object code to thorough analysis and testing. Their objectives shall be: to uncover all design and implementation flaws that would permit a subject external to the TCB to read, change, or delete data normally denied under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to respond to communications initiated by other users. The TCB shall be found resistant to penetration. All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB implementation is consistent with the formal top level specification. (See the Security Testing Guidelines.) No design flaws and no more than a few correctable implementation flaws may be found during testing and there shall be reasonable confidence that few remain. Manual or other mapping of the FTLS to the source code may form a basis for penetration testing.

114 Security Testing Guidelines – Division C At least two people with Bachelor degrees in CS Must be familiar with “flaw hypothesis” testing methodology (a form of pen-testing) –Create list of possible flaws through analysis of specs and documentation of a system –Prioritize based on likelihood and ease of exploit Must carry out system developer-defined tests Must independently design and implement at least 5 tests –1 month <= testing <= 3 months –20 hours <= testing for each team member 113

115 Security Testing Guidelines – Division B At least two people with Bachelor degrees in CS At least one person with Master’s Degree in CS Must be fluent in TCB implementation language(s) Must be experienced with assembly language Must have completed system developer’s internals course for the system At least one team member must have completed a security test on another system Team must independently design and implement at least 15 tests –2 months <= testing <= 4 months –30 hours <= testing for each team member 114

116 Security Testing Guidelines – Division A At least one person with Bachelor degree in CS At least two people with Master’s degrees in CS At least one team member must be familiar enough with the system hardware to understand diagnostic programs and HW documentation At least 2 team members must have completed a security test on another system At least one team member must have demonstrated expertise on the system under test sufficient to, e.g., add a device driver Team must independently design and implement at least 25 tests –3 months <= testing <= 6 months –50 hours <= testing for each team member 115

117 Example FER for Class A1 System Final Evaluation Report (FER) for the Gemini Trusted Network Processor The FER provides descriptions of the following system aspects: –Architecture HW (e.g., segmentation and rings) and SW –Design –Mapping to policy model (BLP) –Assurance Architecture – modularization and layering, data hiding, minimization System Integrity Covert channel analysis and testing Trusted Recovery Security Testing – functional and exception testing Design specification and verification Configuration management and maintenance Trusted Distribution –Evaluation against security requirements for specific use 116

118 Assurance Takeaway from the TCSEC Tied to security policy and FSPM Based on notion of TCB –All the HW/SW/FW in the system that enforces the policy –The other stuff doesn’t matter from security pov Identifies SDLC + assurance techniques for each step Trades off level of assurance against required protection Permits different levels of effort against different layers of the system (“balanced assurance”) Considers composition of parts (TNI) 117

119 Topics Covered in this Lecture Assurance Requirements in Evaluation Criteria –Assurance requirements at different evaluation levels Capability Maturity Models –An approach for assessing the capability of a vendor to produce a secure system Microsoft’s SDLC –A contemporary, real-world assurance process at a major software company 118

120 Capability Maturity Models How can a user assess the security of a product? –After lengthy, third-party evaluation (but product may be nearly obsolete by then) –Immediately, but assurance rests on claims by vendor Improve assurance and time-to-market by pre- reviewing security engineering processes of vendor –Third-party review of vendor security engineering processes (capabilities) –Focus on measuring organization competency (maturity) and improvements 119

121 Capability Maturity Models (2) Goals: –Continuity - knowledge acquired in previous efforts is used in future efforts –Repeatability - a way to ensure that projects can repeat a successful effort –Efficiency - a way to help both developers and evaluators work more efficiently –Assurance - confidence that security needs are being addressed. 120

122 Capability Maturity Models (3) Focus on existence of process Organizations appraised by third-party Score based on number and sophistication of practices followed Goal for vendor is to be appraised high for competitive advantage Acquirers can put required CMM level in RFPs Similar to “six sigma” or ISO 9000 certification for quality and process improvement 121

123 System Security Engineering - Capability Maturity Model (SSE-CMM, ISO/IEC 21827) Covers entire organization, including management as well as engineering Based on observed engineering best practices at over 50 large organizations (including multi- nationals) Addresses the complete product life cycle: –Concept definition –Development –Production –Utilization –Support –Retirement 122

124 System Security Engineering Capability Maturity Model (SSE-CMM, ISO/IEC 21827) Two dimensions: domain and capability Domain is “base practices” of security engineering –E.g., Base Practice 05.02, “Identify System Security Vulnerabilities” Capability is “generic practices” that should be part of base practices –E.g., Generic Practice 2.1.1, “Allocate Resources” Intersection indicates an organization’s capability to perform a particular activity –E.g., “Does the organization allocate resources for use in identifying system security vulnerabilities?” 123

125 Base Practices Apply to entire life cycle Represents a “best practice” in the security community Organized into Process Areas –Not all organizations have same needs or goals –Some provide products, some systems, others services 124

126 Process Areas for Systems Security Engineering PA01 Administer Security Controls PA02 Assess Impact PA03 Assess Security Risk PA04 Assess Threat PA05 Assess Vulnerability PA06 Build Assurance Argument PA07 Coordinate Security PA08 Monitor Security Posture PA09 Provide Security Input PA10 Specify Security Needs PA11 Verify and Validate Security Additional process areas for project and org. practices 125

127 PA05 - Assess Vulnerability Description: –Identify and characterize security vulnerabilities Analyze system assets Define specific vulnerabilities Provide an assessment of the overall system vulnerability –Performed any time during a system's life-cycle Goals: –An understanding of system security vulnerabilities within a defined environment is achieved 126

128 PA05 - Assess Vulnerability Base Practice List: –BP.05.01 Select the methods, techniques, and criteria by which security system vulnerabilities in a defined environment are identified and characterized –BP.05.02 Identify system security vulnerabilities –BP.05.03 Gather data related to the properties of the vulnerabilities –BP.05.04 Assess the system vulnerability and aggregate vulnerabilities that result from specific vulnerabilities and combinations of specific vulnerabilities –BP.05.05 Monitor ongoing changes in the applicable vulnerabilities and changes to their characteristics 127

129 BP.05.02 – Identify Vulnerabilities Description: The methodology of attack scenarios (description of specific attacks) as developed in BP.05.01 should be followed to the extent that vulnerabilities are validated. All system vulnerabilities discovered should be recorded. Example Work Products: –Vulnerability list describes the vulnerability of the system to various attacks –Penetration profile includes results of the attack testing (e.g., vulnerabilities) 128

130 SSE-CMM Capability Levels –0 Initial –1 Performed Informally: base practices performed –2 Planned & Tracked: planning performance; disciplined performance; verifying performance; tracking performance –3 Well-Defined: defining a standard process; performing the defined process; coordinating practices –4 Quantitatively Controlled: establishing measurable quality goals; objectively managing performance –5 Continuously Improving: improving organizational capability; improving process effectiveness 129

131 Capability Levels Capability levels represent the maturity of orgs 130

132 Generic Practices Organized by capability level –I.e., add additional generic practices at each level Each level decomposed into set of common features Each set of common features consists of set of generic practices 131

133 Capability Level 1 Common Feature 1.1 - Base Practices Are Performed GP 1.1.1 - Perform the Process 132

134 Capability Level 2 Common Features: –Common Feature 2.1 - Planning Performance –Common Feature 2.2 - Disciplined Performance –Common Feature 2.3 - Verifying Performance –Common Feature 2.4 - Tracking Performance 133

135 Capability Level 2 Common Feature 1 Focuses on aspects of planning to perform the Process Area and its associated Base Practices Generic Practices: –GP 2.1.1 - Allocate Resources –GP 2.1.2 - Assign Responsibilities –GP 2.1.3 - Document the Process –GP 2.1.4 - Provide Tools –GP 2.1.5 - Ensure Training –GP 2.1.6 - Plan the Process 134

136 Capability Level 3 Common Feature 3.1 - Defining a Standard Process –GP 3.1.1 - Standardize the Process –GP 3.1.2 - Tailor the Standard Process Common Feature 3.2 - Perform the Defined Process –GP 3.2.1 - Use a Well-Defined Process –GP 3.2.2 - Perform Defect Reviews –GP 3.2.3 Use Well-Defined Data Common Feature 3.3 - Coordinate Practices –GP 3.3.1 - Perform Intra-Group Coordination –GP 3.3.2 - Perform Inter-Group Coordination –GP 3.3.3 Perform External Coordination 135

137 Capability Level 4 Common Feature 4.1 - Establishing Measurable Quality Goals –GP 4.1.1 - Establish Quality Goals Common Feature 4.2 - Objectively Managing Performance –GP 4.2.1 - Determine Process Capability –GP 4.2.2 - Use Process Capability 136

138 Capability Level 5 Common Feature 5.1 - Improving Organizational Capability –GP 5.1.1 - Establish Process Effectiveness Goals –GP 5.1.2 - Continuously Improve the Standard Process Common Feature 5.2 - Improving Process Effectiveness –GP 5.2.1 - Perform Causal Analysis –GP 5.2.2 - Eliminate Defect Causes –GP 5.2.3 - Continuously Improve the Defined Process 137

139 Limits of Capability Maturity Models Doesn’t consider security policy (e.g., no MAC) Measures existence of process, but not quality –Existence is (roughly) objective – measurement is possible –Quality is subjective – no standard for comparison Doesn’t guarantee good results –Does not measure effectiveness of processes –Can do everything but achieve nothing Non-uniformity of appraisals Misunderstanding of model and its use –Doesn’t replace testing/evaluation Doesn’t consider subversion 138

140 Build Security In Maturity Model (BSIMM) On-going project (http://www.bsimm.com/) Collected practices observed at real-world places (“67 leading software security initiatives” at leading companies) Compare target against comparison set –“Here’s what everybody else is doing” 139 Adobe Aetna Bank of America Box Capital One Citi Comerica Bank EMC Epsilon F-Secure Fannie Mae Fidelity Goldman Sachs HSBC Intel Intuit JPMorgan Chase & Co. Lender Processing Services Inc. Marks and Spencer Mashery McAfee McKesson Microsoft NetSuite Neustar Nokia Nokia Siemens Networks PayPal Pearson Learning Technologies QUALCOMM Rackspace Salesforce Sallie Mae SAP Sony Mobile Standard Life SWIFT Symantec Telecom Italia Thomson Reuters TomTom T. Rowe Price Vanguard Visa VMware Wells Fargo Zynga

141 BSIMM Domains Governance: organization, management, and measurement of a software security initiative Intelligence: Collection of “corporate security knowledge” SDL Touchpoints: analysis and assurance of software development artifacts and processes Deployment: software configuration, maintenance, and other environment issues that have direct impact on software security 140

142 BSIMM Practices 112 activities organized into 12 practices in 4 domains –Activities organized into increasing level of sophistication –Governance: Strategy and Metrics; Compliance and Policy; Training –Intelligence: Attack Models; Security Features and Design; Standards and Requirements –SDL Touchpoints: Architecture Analysis; Code Review; Security Testing –Deployment: Penetration Testing; Software Environment; Configuration Management and Vulnerability Management 141

143 Example Practice: Architecture Analysis Capturing software architecture diagrams, applying lists of risks and threats, adopting a process for review, building an assessment and remediation plan. –Level 1: AA1.1 (get started with AA) – perform security feature review; AA1.2 (demonstrate value of AA with real data) – perform design review for high-risk applications; AA1.3 (build internal capability on security architecture) – have SSG lead review efforts; AA1.4 (have a lightweight approach to risk classification and prioritization) – use a risk questionnaire to rank applications –Level 2: AA2.1 (model objects) – define and use AA process; AA2.2 (promote a common language for describing architecture) – standardize architectural descriptions (including data flow); AA2.3 (build capability organization-wide) – make SSG available as AA resource or mentor –Level 3: AA3.1 (build capabilities organization-wide) – have software architects lead review efforts; AA3.2 (build proactive security architecture) – drive analysis results into standard architectural patterns 142

144 Example Activity: AA1.1 Perform security feature review –Identify security features in an application (authentication, access control, use of cryptography, etc.) –Look for problems that would cause these features to fail at their purpose or otherwise prove insufficient Example: “a system that was subject to escalation of privilege attacks because of broken access control would both be identified in this kind of review.” Where is the policy mentioned? 143

145 BSIMM Activity Coverage Graph Source: http://www.bsimm.com/community/ 144

146 Limits of BSIMM Divorced from consideration of security policy Ad hoc. “Common practices” are not necessarily complete or even good Notes existence of process, but not quality No coding standards, except for reusing “mature” and “secure-by-design” frameworks Doesn’t consider subversion –Version management? 145

147 Microsoft Security Development Lifecycle (SDL) Mandated for use at Microsoft since 2004 7 phases: –Training –Requirements –Design –Implementation –Verification –Release –Response 146

148 Training Phase For all developers Major topics: –Common excuses for not fixing bugs –Secure design - identify and avoid issues that can lead to compromise (does this imply there is a policy?) –Threat modeling –Secure coding –Security testing –Privacy best practices 147 At last, a policy?

149 Requirements Phase Establish security and privacy requirements –Based on what criteria? Where is the policy? Create quality gates/bug bars –E.g., require fixing of all “critical” vulnerabilities before release Perform security and privacy risk assessments –Determine need for threat modeling and security design reviews of components –Based on costs and regulatory requirements 148

150 Design Phase Establish design requirements –Validate all design specifications against a functional specification –Presumably, security functions identified in previous phase Perform attack surface analysis and reduction –Disable or restrict access to services –Reduce privilege –Layered defenses Use threat modeling –Microsoft STRIDE approach –We’ll look at that in a future lecture 149

151 Implementation Phase Use approved tools –Approved compilers and linkers (and associated options and warnings) Deprecate unsafe functions and APIs –Ban unsafe functions e.g., gets does not check bounds; use fgets instead –Use newer header files, compilers, or code scanning tools Perform static analysis –Automated tool scans code for common flaws without executing the code –We’ll look at this in a future lecture 150

152 Verification Phase Perform Dynamic Analysis –Run-time verification of software functionality –Checks for memory corruption, user privilege issues, etc. –We’ll discuss this topic in a future lecture Perform Fuzz Testing –Try to induce failure through malformed or random input data Conduct attack surface review –As in design phase, but on system basis –Tool takes snapshot of Windows system state before and after installation of product (http://www.microsoft.com/en-us/download/details.aspx?id=24487) 151

153 Attack Surface Attack surface: Exposed parts of systems that may have exploitable vulnerabilities, e.g., –Open ports on outward facing web and other servers –Code that processes incoming data –Employees who can be “socially-engineered” Reference monitor (RM) abstraction helps –TCB shrinks attack surface –But still must consider threats to RM May have low-assurance components, not RM –E.g., your typical corporate IT network –Can still try to show non-bypassable, tamper-proof, and make assurance arguments –Security mechanisms probably have vulnerabilities! 152

154 Release Phase Create an Incident Response Plan –Includes emergency contacts and maintenance plan –For incidents with both internally developed and licensed software Conduct final security review –Examine artifacts of security activities (e.g., threat models) against quality gates/bug bars Certify release and archive –Attest that all security and privacy requirements met –Archive all specs, source, binaries, SDL artifacts, etc. 153

155 Limits of Microsoft SDL Still little or no connection to policy –Policy is implicit, perhaps –How do you know you have identified all (well, most) threats and requirements? Does it work? (http://www.gfi.com/blog/report-most-vulnerable-operating-systems-and-applications-in-2013/) –Application in 2013 with the most critical flaws discovered: Microsoft Internet Explorer –OS in 2013 with the most critical flaws discovered: Microsoft Windows Server 2008 Next 5: Windows 7, Vista, XP, Windows Server 2003, Windows 8 154

156 Why Is Microsoft Software Still so Vulnerable? Different targets –TCSEC: TCB that enforces a policy –MSDL: Any software; no policy Different requirements –TCSEC: Only goal is enforcing security policy –MSDL: “Security and privacy” requirements possibly secondary to other requirements E.g., legacy code components and support Different testing –TCSEC: Think like an attacker – directed and prioritized –MSDL: Passive search for flaws – random, flat 155

157 Some Other Models SAFECode “Fundamental Practices for Secure Software Development” –http://www.safecode.org/publication/SAFECode_Dev_Practices0211.pdf –Aimed at reducing software weaknesses OWASP Software Assurance Maturity Model (OpenSAMM) https://www.owasp.org/index.php/Category:Software_Assurance_Maturity_Model –Many similarities to BSIMM – e.g., 4 domains, 12 practices –But more prescriptive focus Dept. of Homeland Security “Build Security In” initiative https://buildsecurityin.us-cert.gov/ 156

158 Reading for Next Time Attack Trees https://www.schneier.com/paper-attacktrees-ddj-ft.html Foundations of Attack–Defense Trees http://satoss.uni.lu/members/barbara/papers/adt.pdf Threat Risk Analysis for Cloud Security based on Attack-Defense Trees http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6268478 A Requires/Provides Model for Computer Attacks http://seclab.cs.ucdavis.edu/papers/NP2000-rev.pdf Uncover Security Design Flaws Using the STRIDE Approach http://msdn.microsoft.com/en-us/magazine/cc163519.aspx 157

159 Reading for This Time Attack Trees https://www.schneier.com/paper-attacktrees-ddj-ft.html A Requires/Provides Model for Computer Attacks http://seclab.cs.ucdavis.edu/papers/NP2000-rev.pdf Uncover Security Design Flaws Using the STRIDE Approach http://msdn.microsoft.com/en-us/magazine/cc163519.aspx Additional reading: Foundations of Attack–Defense Trees http://satoss.uni.lu/members/barbara/papers/adt.pdf Threat Risk Analysis for Cloud Security based on Attack-Defense Trees http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6268478 158

160 INF523: Assurance in Cyberspace as Applied to Information Security Threat Modeling Lecture 4 3 Feb 2016

161 Quick Quiz Closed book –(DEN students, on your honor) Write on a piece of paper –(DEN students: email inf523@csclass.info) Be sure to also write your name on the paper Explain your answers –Single-word answers are usually insufficient for full credit 10 minutes Try not to panic 160

162 Quick Quiz 161 ____ provide a formal, methodical way of describing the security of systems, based on varying attacks. ADTrees provide an intuitive and visual representation of interactions between an ______ and a ______ of a system One model you may find useful for grouping threats into categories is the STRIDE model, whose name is an acronym for six threat categories. What are they? And which property is at risk for each threat category?

163 Security (Bad) News From SANS NewsBites, 12 Sep 2014, Vol. 16, Num. 073 “Traffic Sensor Vulnerabilities Patched” –Wireless traffic sensors could be exploited to damage sensors or cause “inaccuracies” in collected data –Could lead to “all green” condition in intersection Some vulnerabilities: –Sensors accepted software mods w/o integrity checking –Sensitive data was not encrypted and could be “replayed” What threats? –People who try to manipulate the devices Recently patched, but how well? –What other flaws are there in the system? 162

164 Topics Covered in this Lecture Threat modeling techniques and tools –Attack Trees and Attack-Defense Trees –Requires/Provides modeling –Microsoft STRIDE approach and tool 163

165 Purpose of Threat Modeling Identify threats against a system –Identify deficiencies in security requirements and design Identify threat countermeasures –Include, but not limited to, technical mechanisms –May include administrative and physical controls –Must also consider threats to the countermeasures! Increase assurance Process should be repeatable, methodical 164

166 Attack Trees Intended to be a “formal” way of modeling attacks “Tree-like representation of an attacker’s goal recursively refined into conjunctive or disjunctive sub-goals” Attacker’s “goal” is the root of the tree Different ways of achieving the goal are child nodes –Called “refinements” of the parent goal Initially proposed by Schneier in 1999 Formalized by Mauw and Oostdijk in 2005 (Foundations of Attack Trees [ICISC’05], http://www.win.tue.nl/~sjouke/publications/papers/attacktrees.pdf) 165

167 Attack Trees Schneier’s safe example: Mark leaves as “possible” or “impossible”. “Or” nodes and “and” nodes When is goal possible? 166

168 Attack Trees Node is “possible” if any of the “or” children beneath it are possible, or if all of the “and” children are possible Schneier’s example: 167
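The evaluation rule can be sketched in a few lines of Python (the tree encoding and the particular leaf values are illustrative, not from the lecture):

```python
# Minimal sketch of Boolean attack-tree evaluation (illustrative encoding).
# A leaf is marked possible (True) or impossible (False); an inner node is
# an "or" node (any child possible) or an "and" node (all children possible).

def evaluate(node):
    """Return True if the goal at this node is possible."""
    if "possible" in node:                     # leaf: analyst-assigned value
        return node["possible"]
    children = [evaluate(c) for c in node["children"]]
    return all(children) if node["op"] == "and" else any(children)

# Fragment loosely modeled on Schneier's safe example:
open_safe = {
    "op": "or",
    "children": [
        {"possible": False},                   # pick the lock
        {"op": "and", "children": [            # eavesdrop AND get combo stated
            {"possible": True},
            {"possible": False},
        ]},
        {"possible": True},                    # cut open the safe
    ],
}
print(evaluate(open_safe))  # True: the "cut open the safe" leaf is possible
```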

169 Attack Trees Any binary value can be used, not just “possible” and “impossible” –Indicates likelihood or risk –E.g., “special equipment” vs. “no special equipment”, as in example: –“easy” versus “difficult” –“expensive” or “cheap” –legal versus illegal –“intrusive” versus “nonintrusive” 168

170 Attack Trees Can also use a continuous value function E.g., Schneier’s example: Estimated cost to attacker of each refinement Value in each node is sum of “and” leaves or lowest value of “or” leaves –Assumes cost is an important factor for the attacker 169
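A sketch of the continuous-value case, assuming cost is the attribute: an “and” node sums its children (the attacker must do all of them), an “or” node takes the cheapest child. The dollar figures are made up, loosely echoing Schneier’s safe example:

```python
# Sketch of continuous-value attack-tree evaluation (illustrative numbers).

def attack_cost(node):
    if "cost" in node:                         # leaf: estimated attacker cost
        return node["cost"]
    costs = [attack_cost(c) for c in node["children"]]
    return sum(costs) if node["op"] == "and" else min(costs)

open_safe = {
    "op": "or",
    "children": [
        {"cost": 30_000},                              # pick the lock
        {"op": "and", "children": [{"cost": 60_000},   # eavesdrop
                                   {"cost": 20_000}]}, # get combo stated aloud
        {"cost": 10_000},                              # cut open the safe
    ],
}
print(attack_cost(open_safe))  # 10000: cutting open the safe is cheapest
```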

171 Attack Trees As with boolean values, continuous functions used to indicate likelihood or risk of particular attack Can combine multiple functions –E.g., “cheapest attack with the highest probability of success” 170

172 Attack Tree Exercise Continue to fill out this attack tree 171 Steal login credentials Find written down Phish Shoulder surf … …

173 Attack Trees Knowledge and creativity needed by analysts –Think like an attacker –All sorts of vulnerabilities in different sub-systems –Analysts must understand all parts of the system well How do you determine a good value for a leaf node? –Analyst must study presumed attackers as well –E.g., if organized international crime, have lots of money, expertise and little fear of jail, so what is threshold? Often highly subjective 172

174 Attack Trees 173 https://xkcd.com/538/

175 Countermeasures Once the tree is “complete”, use it to identify countermeasures Bring the value of a node below threshold to “deactivate” it –E.g., a countermeasure that makes a leaf “impossible” –Or one that makes it too expensive Do that for all “or” leaves or any “and” leaf to deactivate the parent Recurse up the tree to the root 174

176 Attack Trees, Pros and Cons Pros –Conceptually simple –Scalable –Reusable Cons –Only considers attacker’s point of view –No countermeasures in the graph How do you show attacks on the countermeasures? –No attacker/defender interactions –Simple signatures or single-point exploits –Weak or no explicit link between steps How are they related? Ordering? 175

177 Attack-Defense Trees Introduced by Kordy et al. ( Foundations of Attack–Defense Trees [FAST’10], http://satoss.uni.lu/members/barbara/papers/adt.pdf) Includes countermeasures, so can show attacks on countermeasures 176

178 Attack-Only Tree Example 177

179 Attack-Defense Tree Example 178

180 Attack-Defense Trees 179

181 Attack-Defense Tree Example 180 DeMorgan’s law => f = (pin /\ card) \/ (online /\ (~(key fobs \/ pin pad) \/ malware))
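The slide’s propositional interpretation can be exercised mechanically; a minimal sketch (variable names follow the slide’s formula, the scenario choices are illustrative):

```python
# Sketch: evaluate the slide's propositional interpretation of the ADTree.
# True means the attack succeeds; the deployed defenses (key fobs, pin pad)
# appear negated because, if present, they defeat the online branch unless
# malware bypasses them.

def attack_succeeds(pin, card, online, key_fobs, pin_pad, malware):
    return (pin and card) or (online and ((not (key_fobs or pin_pad)) or malware))

# Defenses deployed, no malware: the online branch is closed.
print(attack_succeeds(pin=False, card=False, online=True,
                      key_fobs=True, pin_pad=False, malware=False))  # False

# Malware bypasses the deployed defenses: the attack succeeds again.
print(attack_succeeds(pin=False, card=False, online=True,
                      key_fobs=True, pin_pad=False, malware=True))   # True
```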

182 ADTool Tool for supporting the ADTree methodology –http://satoss.uni.lu/members/piotr/adtool/ Let’s try it out 181

183 A-D Trees Pros –Conceptually simple, but not as simple as plain trees –Scalable (assuming you don’t go hog-wild with the countermeasures) –Reusable –Consider defender’s POV as well as attacker’s –Incorporates countermeasures and attacks on countermeasures Cons –Simple signatures or single-point exploits –Weak or no explicit link between steps How are they related? Ordering? 182

184 Unified Modelling Language (UML) Language for specifying, visualizing, constructing, and documenting models for systems A set of notations, not a model itself Different diagram types: –Use case, Class, Activity, Collaboration, Sequence, State, … –For more info, http://www.uml-diagrams.org/ 183

185 Example of UML Use Case Diagram Actors Associations Relationships 184

186 UML Component Diagrams Activities –Tasks that must take place in order to “fulfill operational contract” –Invocation of operations –Steps in processes or entire process –Can decompose down to atomic actions Components of the system –Modules, but with “required” and “provided” interfaces How the components interact –Component diagram shows wiring 185

187 Requires/Provides Model Templeton and Levitt, 2000 186

188 Single Exploits vs. Sequence Single exploit –Short term goal –May or may not violate some part of the security policy –E.g., a port scan Sequence of single exploits (scenario) –Has an end goal in mind –Explicitly violates security policy –E.g., port scan followed by buffer overflow followed by installation of back door … –Very dangerous 187

189 Generalized Sequences of Attacks Port scan followed by buffer overflow followed by installation of back door is very specific More generally: recon followed by exploit followed by penetration –Exploit depends on knowledge gained by recon –Penetration depends on capability gained by exploit Want to abstractly model attacks based on –the requirements of the abstract components, –the capabilities provided by the abstract components, and –the method of composing the components into complete attacks 188

190 Requires/Provides Model To successfully launch an attack, certain properties must hold –These are the requires properties After a successful attack, a new set of properties hold –These are the provides properties The attack “goal” is a property that holds after a sequence of attack events 189

191 Example Attack Sequence 190 Kafka has rsh access on sartre Spock wants to run code on sartre 1.Spock DoSes kafka with flood 2.Spock probes sartre for TCB seq num 3.Spock sends spoofed SYN packet (as kafka) 4.Sartre sends to kafka, which is blinded 5.Spock sends rsh packet to sartre

192 Connection Spoofing R/P Requires: –“Trustor” running active service (Sartre) –Trusted partner (pretend to be trusted partner) (kafka) –Ability to prevent trusted partner from receiving –Ability to probe trustor for TCB sequence number –Ability to send a forged packet Provides: –Ability to send data to trusted channel –Ability to have data remotely executed These are general properties Instantiate for rsh or other protocols 191

193 Similarity to Attack Trees Goal: Get Sartre to execute commands from untrusted host Spock Sub-goal: Get Sartre to believe trusted host Kafka is sending the commands –Must prevent ACK from Sartre from reaching Kafka –Must determine what sequence number Sartre would use, so Spock can use that in “response” to blocked ACK But different from attack trees in specifying order 192

194 Creating Variant Attacks Different events can cause the same effects Different orderings of events can cause the same effects Want to reason in terms of the effects of an event, not on the details of the event itself –E.g., instead of a SYN-flood, the attacker on Spock could have used a packet storm, ping-of-death, or even physically disabled the network cable to Kafka –Each of these would have had the same effect of blocking Kafka from receiving ACKs from Sartre 193

195 Concepts and Capabilities Capabilities are the (generalized) information or situation required for an attack to proceed –E.g., User login requires access, user name, password –System requires access to password validation database –Atomic elements of the model –Generalized capability is template for instantiations Concepts map required capabilities to provided capabilities and instantiate capabilities Attacks are defined as the composition of abstract concepts 194
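The composition idea can be sketched as simple forward chaining over capability sets (the capability and concept names here are illustrative, not JIGSAW’s): a concept fires once all of its required capabilities hold, and then contributes its provided capabilities.

```python
# Sketch of requires/provides composition (illustrative capability names).
# Each concept is a (requires, provides) pair of capability sets.

CONCEPTS = [
    ({"network_access"}, {"recon_info"}),            # recon
    ({"recon_info"}, {"code_execution"}),            # exploit
    ({"code_execution"}, {"persistent_backdoor"}),   # penetration
]

def reachable(initial, goal):
    """Forward-chain concepts until the goal capability holds or we stall."""
    have = set(initial)
    changed = True
    while changed and goal not in have:
        changed = False
        for requires, provides in CONCEPTS:
            if requires <= have and not provides <= have:
                have |= provides                     # concept fires
                changed = True
    return goal in have

print(reachable({"network_access"}, "persistent_backdoor"))  # True
print(reachable(set(), "persistent_backdoor"))               # False
```

Note that any concept providing `recon_info` could be swapped in without touching the rest of the chain, which is exactly the variant-attack point made above.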

196 Inherent Implication Existence of a capability implies existence of another –E.g., A prevented from sending a packet to B => B is prevented from receiving a packet from A –B is prevented from receiving a packet from A => B is prevented from sending reply packet back to A Don’t depend on implication Must explicitly state concepts that define each implication 195

197 JIGSAW Language developed to express capabilities and concepts Permits mechanization –Can automatically discover ways that capabilities can be combined into attacks Capability templates –Named collection of typed attribute-value pairs Concepts –Set of required and provided capabilities –“With” section gives relations that must hold between the required capabilities 196

198 Example Capability

capability Trusted_Partner is
    service: service_type;
    trustor: ip_addr_type;
    trusted: ip_addr_type;
end.

197

199 Example Concept (abbreviated)

concept RSH_Connection_Spoofing requires
    Trusted_Partner: TP;
    ForgedPacketSend: FPS;
    PreventPacketSend: PPS;
    …
with
    TP.service is RSH,
    PPS.host is TP.trusted,
    FPS.dst.host is TP.trustor,
    …
end;

198

200 Example Concept (abbreviated) (cont.)

concept RSH_Connection_Spoofing, continued provides
    push_channel: PSC;
    remote_execution: REX;
with
    PSC.from <- FPS.true_src;
    PSC.to <- FPS.dst;
    PSC.using <- RSH;
    REX.from <- FPS.true_src;
    …
end;

199

201 Power of Model Ordering and relationship of attack steps implicit in that provides must precede requires –Compare to attack trees –Capabilities essentially form edges of R/P attack graph Multiple events can provide equivalent capabilities Attack scenarios can have many variants –instantiate different events/protocols that provide same capabilities Exploits can be combined in new ways to create previously unexpected attacks –Just have to satisfy capabilities 200

202 Weakness of RP A technique for modelling multi-step abstract attacks No connection to policy (same as attack trees) 201

203 Microsoft STRIDE Model Developed by Microsoft and refined over the last 10 years Applied to all software development activities 202

204 Microsoft’s Software Security Properties (Property – Description)
Confidentiality – Data is only available to the people intended to access it.
Integrity – Data and system resources are only changed in appropriate ways by appropriate people.
Availability – Systems are ready when needed and perform acceptably.
Authentication – The identity of users is established (or you’re willing to accept anonymous users).
Authorization – Users are explicitly allowed or denied access to resources.
Nonrepudiation – Users can’t perform an action and later deny performing it.
203

205 STRIDE Acronym for categories of threats (Threat – Security Property at Risk):
Spoofing – Authentication
Tampering – Integrity
Repudiation – Non-repudiation
Information disclosure – Confidentiality
Denial of service – Availability
Elevation of privilege – Authorization
204

206 Meaning of Each Threat Class Spoofing : Impersonating something or someone else Tampering : Modifying data or code Repudiation : Claiming to have not performed an action Information Disclosure : Exposing information to someone not authorized to see it Denial of Service : Deny or degrade service to users Elevation of Privilege : Gain capabilities without proper authorization 205
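The threat-to-property mapping above is small enough to capture as a lookup table; a trivial sketch (content is the slides’ own table):

```python
# STRIDE threat class -> security property at risk (from the lecture table).
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

print(STRIDE["Tampering"])  # Integrity
```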

207 STRIDE Steps Decompose system into components –May need to recurse down to necessary level of detail Analyze each component for susceptibility to each relevant type of threat Develop countermeasures until no component has susceptibility Is system secure? –Maybe, but probably not –Due to emergent properties of composition Does this give higher assurance? –Yes, because flaw in one component affects entire system 206

208 Data Flow Diagram (DFD) Used to graphically represent a system and its components Standard set of elements: –Data flows –Data stores –Processes –Interactors One more for threat modeling: –Trust boundaries 207

209 DFD Symbols (Element – Description; each has a standard diagram shape)
Process – Any running computations or programs
Interactor – A user, service, or machine that interacts with the application and is external to it – either as a data producer or consumer
Data Store – Any data “at rest” on some form of storage (e.g., files, DBs, registry keys, etc.)
Data Flow – Any transfer of data from one element to another (via network, pipe, RPC, etc.)
Trust Boundary – Border between “trusted” and “untrusted” elements
208

210 Relevant Threats for Elements 209
Interactors – Spoofing, Repudiation
Process – Spoofing, Tampering, Repudiation, Information disclosure, Denial of Service, Elevation of Privilege
Data Store – Tampering, Repudiation*, Information disclosure, Denial of Service
Data Flow – Tampering, Information disclosure, Denial of Service
* Logs held in data stores are usually the mitigation against a repudiation threat. Data stores often come under attack to allow for a repudiation attack to work.
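The per-element susceptibility table can be encoded as data so that a tool can enumerate which threat classes to analyze for each DFD element. A sketch (the letters abbreviate the six STRIDE classes; the code is illustrative, not the Microsoft tool’s):

```python
# Per-element STRIDE susceptibility (S/T/R/I/D/E abbreviate the classes).
# Data-store "R" covers attacks on logs used to counter repudiation.
SUSCEPTIBLE = {
    "interactor": {"S", "R"},
    "process":    {"S", "T", "R", "I", "D", "E"},
    "data store": {"T", "R", "I", "D"},
    "data flow":  {"T", "I", "D"},
}

def threats_for(element_type):
    """Return the sorted STRIDE letters to analyze for one DFD element."""
    return sorted(SUSCEPTIBLE[element_type])

print(threats_for("data flow"))  # ['D', 'I', 'T']
```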

211 STRIDE Process Create DFD of system –Represent all key components –Represent all data flows –Identify trust boundaries Repeat, adding more details to the diagram if required Recurse on each component as required 210

212 Example: First Cut 211 (Diagram annotations: data sink!, useless details, data store)

213 Example: Second Try 212 3 data flows

214 Analysis: Data Flow 1 Sales to Collection Someone could tamper with the data in transit Someone could sniff the data Someone could DoS the collection service 213 Per the element table, the Data Flow is subject to: Tampering, Information disclosure, Denial of Service

215 MS Threat Modeling Tool 2014 Software for applying STRIDE model –Build DFD directly in program –Automatically finds STRIDE threats 214

216 Mitigate Threats Tool has places to specify status of mitigation: –Not Started –Needs Investigation –Not Applicable –Mitigated If you say Mitigated or Not Applicable, must enter Justification Also can select priority (Low, Medium, High) –Used for the “bug bar” (ranking of threats by priority) –E.g., see http://msdn.microsoft.com/en-us/library/windows/desktop/cc307404.aspx 215

217 Controls to Mitigate Threats Remove vulnerable feature “Fix” with technology, e.g.: –Spoofing: Strong authentication –Tampering: Strong authorization (restrict modify access) –Repudiation: Digital signatures, timestamps –Information Disclosure: Encryption –Denial of Service: Packet filtering –Elevation of Privilege: Restrict admin privilege 216

218 Mitigation Choices in Reality Redesign –Change the design to eliminate threats –E.g., reduce elements that touch a trust boundary Use standard mitigations –Firewalls, validated authentication systems, … Use custom mitigations –If you are a gambling sort of person Accept risk –If you think risk is low, or too expensive to mitigate 217

219 Validation Make sure diagram is up-to-date and accurate Make sure you’ve captured all trust boundaries Enumerate all threats –The tool is an aid, but not necessarily complete Analyze all threats Mitigate all threats 218

220 Diagram Layers Context Diagram –Very high-level; entire component / product / system Level 1 Diagram –High level; single feature / scenario Level 2 Diagram –Low level; detailed sub-components of features Level 3 (, 4,…) Diagram –More detailed yet, if necessary 219

221 Combine STRIDE With Other Techniques Use UML instead of DFD to determine threat targets Determine threats to each component using STRIDE Use threat trees to help determine vulnerabilities –Each STRIDE threat is the root of a tree Use a risk assessment method to rank threats 220

222 STRIDE Pros and Cons STRIDE identifies security properties and threats against them –Confidentiality, Integrity, Availability, Authentication, Authorization, Nonrepudiation – Those are effectively security policies –But where in the model are all those Windows bugs? And IE bugs Are threats comprehensive? Patch and pray school of system design No reference monitor concept for access policies –Better to try to design RM, then look for threats To isolation, completeness, verifiability 221

223 Topics Covered in this Lecture Threat modeling techniques and tools –Attack Trees and Attack-Defense Trees –Requires/Provides modeling –Microsoft STRIDE approach and tool 222

224 Homework Due next week at start of class –Submit screen shots and other documents on D2L Remember, you can help each other understand the assignment, the concepts, and the tools –But the work you turn in must be your own Analyze threats to a simple on-line payment system 223

225 Homework Problem A (simple) on-line payment system runs on a web server Users connect using a web browser via HTTPS Users authenticate using passwords The server runs the payment application The application consults a back-end authorization database The application connects to a back-end DB server to record payments The DB server stores credit card information An attacker wants to steal credit card information 224

226 Homework 1.Create a plain attack tree –Use “hard” and “easy” as node values –What is the easiest route? 2.Create a corresponding A-D tree –Use ADTool (requires Java 6 or later) http://satoss.uni.lu/members/piotr/adtool/ –Include defensive measures and attacks on defensive measures –Give the propositional interpretation of the tree 3.Write-up R/P capabilities and a concept for an attack on this system via the web connection 4.Create a STRIDE threat model –Show all processes, interactors, stores, flows, and boundaries –Use Threat Modeling Tool if you have a Windows machine –Identify threats and some countermeasures 225

227 Reading for Next Time (Software Design) D.L. Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, 1972 –https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf Daniel Hoffman, On Criteria for Module Interfaces, 1990 –http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=52776 Paul Karger, et al., A VMM security kernel for the VAX architecture, 1990 – Section 3.7 –http://www.scs.stanford.edu/nyu/04fa/sched/readings/vmm.pdf Final Evaluation Report, Gemini Trusted Network Processor, 1995 – Section 4.2 –http://aesec.com/eval/NCSC-FER-94-008.pdf 226

228 INF523: Assurance in Cyberspace as Applied to Information Security Structured Design Lecture 5 10 Feb 2016

229 Security (Bad) News Today’s security analyst: Dan Dmytrisin Progressive Insurance, Inc. “SnapShot” device 228

230 Security (Bad) News From SANS NewsBites, 23 Sep 2014, Vol. 16, Num. 76 –Google Shuts Down Malvertising Attack –Malicious attacks coming from browser ads –Ads hosted on “Zedo” platform distributed through Doubleclick –Ads contained script that linked to exploit kit –Exploit installed backdoor What is the fundamental problem? –Lack of integrity policy –Allows untrusted inputs to run scripts at user’s level –Trusts everything in page at same integrity level Even when source is different How can this be fixed? –Redesign browser “trust model” Constrain based on source integrity level Some browser extensions can help with this (e.g., Scriptsafe) 229

231 “Assurance Waterfall” 230 (Diagram: Org. Req’s and Threats/Threat Modeling feed Policy and Security Req’s; then Design – supported by modularization and layering, and informal analysis – Implementation, Distribution, Instal. & Config., Maintenance, Version Mgmt, and Disposal)

232 Security Requirements Many different definitions/approaches –E.g., SQUARE, CLASP, STRIDE, … See “Security Requirements for the Rest of Us: A Survey”, IEEE Software, January/February 2008 Differences: –Security mechanisms or policy? –Level of detail? –Level of expert knowledge? 231

233 Factors in Determining Security Requirements Organizational requirements –Hopefully based on well-defined policy Sometimes to counter specific threats –E.g., MS STRIDE tool: Spoofing - Strong authentication Tampering - Strong authorization (restrict modify access) Repudiation - Digital signatures, timestamps Information Disclosure - Encryption Denial of Service - Packet filtering Elevation of Privilege - Restrict admin privilege Regulations or laws 232

234 Example of Reqs due to Law or Regulation HIPAA 45 CFR 164.312 - Technical safeguards (e) –(1) Standard: Transmission security. Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network. –(2) Implementation specifications: (i) Integrity controls (Addressable). Implement security measures to ensure that electronically transmitted electronic protected health information is not improperly modified without detection until disposed of. (ii) Encryption (Addressable). Implement a mechanism to encrypt electronic protected health information whenever deemed appropriate. 233 “Addressable” means alternative may be used if requirement is unreasonable or inappropriate.

235 Other Factors Costs Priority based on –Threat actors Goals Expertise Resources –Likelihood of attack –Degree of difficulty for attacker –Value (estimated loss) –“L”, “M”, “H” is probably best resolution possible 234 Accurate estimates of these are likely impossible!

236 Security Requirements Chain “Security objectives” (org reqs) + threats –Lead to policy –Don’t forget subversion! Policy defines –what assets need protection –what “security” means for the assets –“Security” in terms of secrecy and integrity Also identification/authentication, audit, authorization May also be availability, non-repudiation, etc. Policy leads to mechanisms to enforce the policy Which are the security requirements? 235

237 Policies or Mechanisms; Which are the Security Reqs? It depends on the organization If an organization doesn’t have a security policy: –Have no choice but to include policy in reqs If an organization has a security policy: –Reqs as in HIPAA are a good level –But where do you specify mechanisms? Who builds them? –Developers often have no security experience or interest Role of security analyst sometimes comprehensive –Specify requirements –and enforcement mechanisms –and verification of the enforcement mechanisms 236

238 Security Requirements - Mechanisms Policy reqs lead to mechanisms to enforce policy Reference Monitor concept useful here –Rather than scattershot reqs on system components –Mediate access by subjects to objects –Attempt to implement isolation, completeness, verifiability Look for threats against isolation and completeness –E.g., transmitted data could be sniffed Mechanisms counter the threats to the RM –E.g., in network, encrypt data in transit –TNI can be helpful 237

239 Implementing Security Requirements Must trust mechanisms to enforce policy Structured design helps to provide assurance The subject of today’s lecture 238

240 Reading for This Time (Software Design) D.L. Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, 1972 –https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf Daniel Hoffman, On Criteria for Module Interfaces, 1990 –http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=52776 Paul Karger, et al., A VMM security kernel for the VAX architecture, 1990 – Section 3.7 –http://www.scs.stanford.edu/nyu/04fa/sched/readings/vmm.pdf Final Evaluation Report, Gemini Trusted Network Processor, 1995 – Section 4.2 –http://aesec.com/eval/NCSC-FER-94-008.pdf 239

241 Structured Design Essential for high assurance Modularization and Layering Isolate protection-critical components; always invoke Minimize complexity –May require minimizing the number of types of objects the system supports What if you don’t? –System will be vulnerable to outside malware attacks –More likely to contain residual errors in design and implementation (or even malicious software) 240 Q: why can malware disable AV?

242 Software Design Principles From IEEE Guide to the Software Engineering Body of Knowledge, Version 3.0 Decomposition and Modularization Coupling and Cohesion Abstraction Separation of interface and implementation Encapsulation and Information hiding Sufficiency, completeness, and primitiveness 241

243 Decomposition and Modularization Divide large system into smaller components –Each has well-defined interface Goal is to divide by functions and responsibilities Modularization is good –Manage complexity by using smaller parts –System is easier to Understand Develop (e.g., by a team in parallel) Test Maintain 242

244 Coupling and Cohesion Ideas developed by Larry Constantine in late 1960s Coupling is measure of interdependence among modules –Amount of shared infrastructure –Amount of coordination –Amount of information flow Cohesion is measure of the degree to which elements of a module belong together Want low coupling and high cohesion 243

245 Low Coupling; High Cohesion 244

246 Abstraction View of an object –focuses only on relevant information –ignores the rest Parameterization abstracts details of data representation by using names Specification abstraction hides details of algorithm, data storage, and control by focusing on effects/results Want to increase abstraction at interface of modules 245

247 Separate Interface from Implementation Define module by specifying public interface –Parameters –Effects –Results –Exceptions Separate from details of how module is implemented –Algorithm –Data structures and storage –Control flow 246

248 Interface vs. Implementation 247 Caller Interface Implementation

249 Encapsulation and Information Hiding Grouping and packaging internal details of modules Creating an abstraction Make the internal details inaccessible from outside 248
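A minimal sketch of encapsulation in Python (the module and its representation are hypothetical): callers see only the public interface, so the hidden representation can be swapped without changing any caller.

```python
# Sketch of information hiding: the "secret" is the representation
# (here a plain list, resorted on demand); it could be replaced by a
# tree or a hash-based index without touching any caller.

class LineStore:
    def __init__(self):
        self._lines = []          # hidden representation

    def add(self, line):          # public interface: effect, not mechanism
        self._lines.append(line)

    def nth_sorted(self, n):      # effect is specified; algorithm is hidden
        return sorted(self._lines)[n]

store = LineStore()
store.add("BCF")
store.add("ADE")
print(store.nth_sorted(0))  # ADE
```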

250 Sufficiency, Completeness, and Primitiveness Sufficiency: Module captures enough characteristics of the abstraction to be useful Completeness: Module implements the entirety of the abstraction –Otherwise, the missing feature likely ends up in some other module and high coupling results Primitiveness: Operations can be implemented only with access to the underlying representation –Want “building blocks” that can be combined into higher-level patterns 249

251 Software “Architecture” Software Architecture in Practice (2nd edition), by Bass, Clements, and Kazman : Architecture is –The structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. –Architecture is concerned with the public side of interfaces Not details having to do solely with internal implementation 250

252 Architectural Considerations Division of functions –Modules –Information hiding Distribution of functions (in processes or systems) –Concurrency - Modules run in parallel Synchronization is an issue –May be driven by underlying system architecture Dependencies –Which modules need to call which modules Interfaces (externally visible properties of elements) –Data types and methods; effects 251

253 Strategy for Modularization Manage complexity by using smaller parts But many possible ways of modularizing a system Some ways are better than others –… which leads us to Parnas’ paper 252

254 KWIC Index Production System Accepts an ordered set of lines Each line contains an ordered set of words Each word contains an ordered set of characters Lines are circularly shifted –Move the first word to the end, making a new line –Do this for all words in each line Outputs a listing of all circular shifts of all lines in alphabetical order 253

255 KWIC Example Original lines: BCF, ADE => –ADE (ambiguous if 0 shifts included) –BCF –CFB –DEA –EAD –FBC 254
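The specified behavior (every circular shift of every line, listed alphabetically) can be sketched independently of any particular modularization; letters stand for words, as on the slide:

```python
# Sketch of the index system's observable behavior, not of Parnas'
# modularizations: generate circular shifts, then alphabetize.

def circular_shifts(line):
    """All circular shifts of one line (shift 0 = the original line)."""
    words = line.split()
    return [" ".join(words[i:] + words[:i]) for i in range(len(words))]

def kwic_index(lines):
    """Alphabetized listing of all circular shifts of all lines."""
    return sorted(s for line in lines for s in circular_shifts(line))

print(kwic_index(["B C F", "A D E"]))
# ['A D E', 'B C F', 'C F B', 'D E A', 'E A D', 'F B C']
```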

256 Functional Modularization System modeled as data flow, flow chart Each module implements one function in flow E.g., first modularization in Parnas’ paper 255 Input – Read and store data lines Circular Shift – Index to first char of shifts Alphabetizer – Alphabetized index Output – Print all lines Control – Sequence other modules Stored Lines Is this system loosely or tightly coupled?

257 “Information Hiding” Modularization System modeled on hiding implementation decisions Each module hides “a secret”: –Line storage: hides how lines are stored –Input: hides the input device –Circular Shift: hides how circularly shifted lines are stored; creates the abstraction of storing all shifted lines –Alphabetizer: hides the algorithm(?) –Output: hides the output device –Control: sequences the other modules 256

258 Information Hiding and Abstractions Module creates abstraction Examples: –Abstract data types: Users operate on the data without knowing its representation –GUI creation environments: Users construct GUIs without knowing details of how to display E.g., X-Windows, MS VB –Protocols: Users send and receive data without knowing details of channel operation –Methods: Users invoke methods without knowing class’s algorithms 257

259 An Advantage of Information Hiding Can metaphorically “lift” the interface and slide a new implementation under it –Take advantage of new technology but disrupt only one module Choose modules based on design decisions that are likely to change –Make that the hidden “secret” 258

260 “Secrets” and Changes (Secret – Typical Change)
How to monitor a sensor – New type (more reliable, higher resolution, etc.) of sensor
How to control a device – New type (faster, larger, etc.) of device
Platform characteristics – New processor, multiprocessor, more memory, different chipset
How to control a display – Reorganization of user interface
How to exchange data – Protocol change
Database physical structure – Fields added or changed, optimized storage
Algorithm – Different time-space tradeoff, greater accuracy
259 Courtesy David Weiss, Iowa State Uni.

261 Differences between Approaches Different in way work is divided Different in interfaces Different in maintainability when changes made –If method or format of line storage changes Approach 1: EVERY module must change Approach 2: Only “line storage” module must change –If change method or timing of circular shift Approach 1: Circular shift and Alphabetizer modules change Approach 2: Only Circular shift module must change –If change method or timing of alphabetization Approach 1: Alphabetizer and output modules change Approach 2: Only Alphabetizer module must change 260

262 Reasonable Conclusion Decompose a system into modules using information hiding –General criterion: each module hides design decision from rest of system (esp. likely-to-change decision) DON’T decompose by function –E.g., using a flow chart or DFD But don’t over-modularize! –E.g., if need same data structures, put in same module 261

263 Parnas’ Specific Criteria Data structure and accessing/modifying procedures Sequence of preparation steps to call routine and routine itself Control blocks –e.g., structure of actual records in a queue; Process Control Block in OS Character codes, alphabetic orderings, etc. Sequencing – the order things are done 262 What about access control labels and the dominance relation in a RM that enforces a mandatory access control policy?

264 Layering Second modularization method (by hiding) is layered 263 Input Circular Shift Alphabetizer Output Control Line storage

265 Hierarchical Structure Layering – Hierarchy of layers Partial ordering of “uses” or “depends on” Lower layers provide abstract machines/data types –E.g., Line storage provides abstract original lines –Circular shifter provides abstraction of all shifted lines Permits simplification of higher layers Lower layers provide usable basis for new system But note: Hierarchical layers and good modularization are independent properties –Information hiding does not guarantee layering 264

266 Example of Layering 265 User Programs Services OS/File System 1 DAC MAC OS/File System 2 TCB Boundary

267 Layering in an OS External device characteristics –E.g., keyboard/display I/O External system characteristics –E.g., network communications protocols Resource allocation –E.g., process and thread management –Process scheduling –Memory management Janson developed idea of levels of abstraction in security kernel design (MIT Dissertation, 1976) –Used in VAX security kernel and GTNP –Each layer implements abstraction in part by calling lower layers 266

268 VAX Security Kernel VMM that runs on VAX processors Creates isolated virtual VAX processors –VMs run VMS or Ultrix (Unix variant) OSes Security labels (simplified): –Subjects: VMs – Each has an access class –Objects: Virtual disks – Access classes and ACLs Two-layer scheduler for performance (Reed, MIT): –Level 1: Small set of processes, per-process DBs all in memory –Level 2: User processes, require bringing per-process DBs from disk to load in Level 1 process 267

269 Example OS layering: VAX Security Kernel 268 Note two-layer scheduler: LLS – Assigns layer 1 virtual CPUs (vp1s) to physical CPUs – some vp1s reserved for kernel processes HLS – Schedules layer 2 virtual CPUs (vp2s) onto vp1s – some vp2s used for VAX VMs

270 Example OS Layering: GEMSOS Security kernel of GTNP Note 2-level scheduling ITC provides VM abstraction UTC schedules processes on abstract VMs UTC doesn’t know when VM blocks Where is DAC layer? Right! DAC outside Ring 0 in GEMSOS 269

271 Rings Hardware rings enforce layering Calls only to lower layers Calls only to restricted entry points Interface must be carefully specified Entry points must be carefully coded –E.g., to sanitize/filter/normalize inputs –E.g., to handle violations of interface specification 270

272 Module Interfaces Public part of module Hides details: data structures, control sequences, etc. Can metaphorically “lift” the interface and put a new implementation under it Treat module as “black box” 271

273 Module Interface Specification Module interface specification defines –Entry points –Syntax –Parameters –Data types –Constants –Exceptions –Semantics of state change When is access call legal? What effect on other calls? Makes all assumptions explicit 272
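As an illustration of the elements above, here is a hypothetical sketch in Python (the module name, entry points, constants, and limits are invented for this example, not taken from the course material):

```python
class LineStore:
    """Hypothetical line-storage module interface.

    Entry points: put(line), count()
    Parameters:   line -- a str of at most MAX_LEN characters
    Constants:    MAX_LINES, MAX_LEN
    Exceptions:   OverflowError if the store is full;
                  ValueError if line exceeds MAX_LEN
    Semantics:    a successful put() increases count() by exactly one;
                  no other call changes the stored lines.
    """
    MAX_LINES = 4
    MAX_LEN = 16

    def __init__(self):
        self._lines = []  # hidden detail: callers must not rely on list storage

    def put(self, line):
        if len(self._lines) >= self.MAX_LINES:
            raise OverflowError("store is full")
        if len(line) > self.MAX_LEN:
            raise ValueError("line too long")
        self._lines.append(line)

    def count(self):
        return len(self._lines)
```

The docstring is the public contract; the underscore-prefixed list is a hidden implementation detail that a new implementation could replace without changing the spec.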

274 Ideal Interface Specs Written before implementation, not after! Easy to read and maintain Describe how to verify (test) behavior of module –Module must conform to spec –Spec says exact effects –Module can do that, and only that 273

275 Some Benefits Supports partitioning of system into modules Defines expected behavior of module Permits parallel development Gives verification requirements –Test requirements –Acceptance criteria Helps find errors 274

276 Example Interface Specification 275

277 Example Interface Specification (cont.) 276

278 Interface Criteria Consistent –Naming conventions, parameter passing, exceptions –People tend to skip details that look familiar, so inconsistencies will cause problems Essential –Omit needless features –Don’t duplicate functions General –Support usage for many purposes Minimal (primitive) –If independent features, consider using independent calls Opaque –Apply information hiding –Interface should be unlikely to change when implementation does 277

279 “Bad” Example Stack module interface Sg_pop sets and gets at same time How to examine top element w/o changing stack? Violation and how to fix? Minimality – separate into s_pop and g_top 278
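A sketch of the repaired interface (hypothetical Python names following the slide’s s_/g_ convention for “set” and “get” calls): g_top observes the top element without modifying the stack, and s_pop only removes it.

```python
class Stack:
    """Sketch of the repaired, minimal stack interface."""

    def __init__(self):
        self._items = []

    def s_push(self, value):
        self._items.append(value)

    def g_top(self):
        return self._items[-1]   # examine top element; stack unchanged

    def s_pop(self):
        self._items.pop()        # remove top element; returns nothing
```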

280 Another “Bad” Example Character input module interface Sg_getc removes next char from input and returns its value Want to check if end of token If not end, must use s_ungetc to put char back Violation and how to fix? Minimality – separate into s_next and g_cur –S_ungetc no longer needed 279
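The fix can be sketched the same way (again with hypothetical Python names): a non-consuming g_cur plus a separate s_next lets the caller peek at a possible token boundary, so s_ungetc is never needed.

```python
class CharInput:
    """Sketch of the repaired character-input interface."""

    def __init__(self, text):
        self._text = text
        self._pos = 0

    def g_cur(self):
        return self._text[self._pos]  # look at current char; don't consume

    def s_next(self):
        self._pos += 1                # consume one char; returns nothing
```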

281 Good Example of Non-minimality Unix malloc call Allocates space and returns pointer Always want to do both together No need for separate “set” and “get” calls 280

282 Tradeoffs Generality increases number of entry points –Consistency of calls for possibly unused functions Minimality increases number of entry points and number of calls to them –Primitives tend to be called more often Sometimes opacity must be violated due to implementation concerns –Example of initialization call to create matrix representation 281

283 Reading for Next Week Bishop book, Chapter 23, “Vulnerability Analysis”, pp. 660-685 (vulnerability classification) IEEE – Avoiding the top 10 Software Security Design Flaws (http://cybersecurity.ieee.org/images/files/images/pdf/CybersecurityInitiative-online.pdf) [Skim] CERT Top 10 Secure Coding Practices (https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices) [Skim] Common Weakness Enumeration (http://cwe.mitre.org/) [Skim] OWASP Top 10 2013 (http://owasptop10.googlecode.com/files/OWASP%20Top%2010%20-%202013.pdf) [Skim] SANS Top 25 Software Errors (https://www.sans.org/top25-software-errors/) 282

284 INF523: Assurance in Cyberspace as Applied to Information Security Secure Programming Lecture 6 17 Feb 2016

285 Stuff You Should Have Read for This Week Bishop book, Chapter 23, “Vulnerability Analysis”, pp. 660-685 (vulnerability classification) IEEE – Avoiding the top 10 Software Security Design Flaws (http://cybersecurity.ieee.org/images/files/images/pdf/CybersecurityInitiative-online.pdf) [Skim] CERT Top 10 Secure Coding Practices (https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices) [Skim] Common Weakness Enumeration (http://cwe.mitre.org/) [Skim] OWASP Top 10 2013 (http://owasptop10.googlecode.com/files/OWASP%20Top%2010%20-%202013.pdf) [Skim] SANS Top 25 Software Errors (https://www.sans.org/top25-software-errors/) 284

286 “Assurance Waterfall” 285 Org. Req’s Policy Security Req’s Design Implementation Disposal Distribution Instal. & Config. Maintenance Version Mgmt Threats Threat Modeling Modularization and layering Secure programming Informal analysis

287 Today’s Outline What is Secure Programming? –Common software weaknesses –Secure coding practices “Secure Languages” Bug Tracking 286

288 Secure Programming Practice of developing software in a way that helps prevent vulnerabilities –Actually a set of practices Sometimes called “defensive programming” Characterized by having few assumptions about inputs or the environment 287

289 IEEE Top 10 Security Design Flaws Attempt to “shift focus from finding bugs to identifying common design flaws” 2014 workshop participants discussed types of flaws Ad hoc (like most software development) –No claim about completeness, for example –An opinion poll; no underlying engineering or theory –Unknowingly repeats RM principles in unstructured way But nevertheless applicable to high-assurance Illuminating (and distressing) point: –Many of the flaws that made the list have been well known for decades, but continue to persist 288

290 IEEE Top 10 Security Design Flaws (1) 1.Earn or give, but never assume, trust Don’t trust clients to behave in particular way Don’t expect data sent to a client to be protected Assume data sent by untrusted clients compromised until proven otherwise Properly validate all data received from client before processing Integrity issue –Consider OS calls or ring crossings –Source of many web application vulnerabilities 289

291 IEEE Top 10 Security Design Flaws (2) 2.Use an authentication mechanism that cannot be bypassed or tampered with Non-bypassable, hmmm… ring any bells? Unlike RM, conjoins identity and access control For users and other machines Require authentication; don’t assume identity Use unforgeable credentials Protect credentials from theft Limit lifetime of session Use a single, well-proven mechanism/framework 290

292 IEEE Top 10 Security Design Flaws (3) 3.Authorize after you authenticate Authorization may change (e.g., revocation) Authorization may depend on context (e.g., time) Identity alone is insufficient to determine authorization 291

293 IEEE Top 10 Security Design Flaws (4) 4.Strictly separate data and control instructions, and never process control instructions received from untrusted sources I.e., don’t trust input from untrusted sources Could lead to injection attacks Assembling insufficiently validated, untrusted data with trusted control instructions –E.g., shellshock Bash vulnerability –E.g., SQL injection –E.g., cross-site scripting 292
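The SQL-injection case can be demonstrated in a few lines with Python’s standard sqlite3 module (the table and data are invented for the example). Splicing untrusted text into the query string mixes data into the control channel; a bound parameter keeps the two strictly separate:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "nobody' OR '1'='1"

# Unsafe: untrusted data is spliced into the control channel (the SQL text),
# so the attacker-supplied OR clause becomes part of the query logic.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % evil
assert len(db.execute(unsafe).fetchall()) == 1   # injection succeeds

# Safe: the ? placeholder carries the value as pure data.
safe = "SELECT secret FROM users WHERE name = ?"
assert len(db.execute(safe, (evil,)).fetchall()) == 0   # no such user
```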

294 IEEE Top 10 Security Design Flaws (5) 5.Define an approach that ensures all data are explicitly validated Don’t make assumptions about data Attackers may subvert and invalidate assumptions –Injection, bypass, memory corruption, resource exhaustion Examine, sanitize, filter before processing Design software so a security reviewer can verify correctness and comprehensiveness of data validation (ring any bells?) –Use common, standard validation mechanism –Transform data to canonical form Re-do checks at module boundaries –State requirements in interface description 293

295 IEEE Top 10 Security Design Flaws (6) 6.Use cryptography correctly Use standard, vetted algorithms, libraries, and frameworks Don’t roll your own Protect the keys –Don’t hard-code or embed them in a program –Permit revocation and rotation –Use strong keys –Use strong distribution mechanisms Design to allow for use of new algorithms (ring a bell?) 294

296 IEEE Top 10 Security Design Flaws (7) 7.Identify sensitive data and how they should be handled This sure sounds like a need for a policy, doesn’t it? Consider laws, regulations, company policy, contractual obligations (e.g., NDAs), and user expectation Protect data at rest and in transit when designing controls 295

297 IEEE Top 10 Security Design Flaws (8) 8.Always consider the users Some users are sophisticated and interested in using a system securely, but most are not Security is not an add-on; it is a property emerging from how the system is built and operated Make controls easy to deploy, configure, use, and update Also consider needs of programmers This is such a sprawling “flaw” that the recommendations are sometimes contradictory and overwhelming 296

298 IEEE Top 10 Security Design Flaws (9) 9.Understand how integrating external components changes your attack surface E.g., off-the-shelf or open source libraries Reusing components means inheriting their security weaknesses Attempt to isolate external components Validate provenance (integrity) of components Authenticate data-flow, validate inputs 297

299 IEEE Top 10 Security Design Flaws (10) 10.Be flexible when considering future changes to objects and actors Design for change –I.e., modularize, hide secrets, layer Design for secure updates Design so security components can be easily updated –E.g., keys, passwords 298

300 Common Weakness Enumeration The CWE is a “dictionary” of common software security flaws –http://cwe.mitre.org/ –Approximately 1000 currently –Many are clusters of similar or related weaknesses E.g., more than 30 related to “Path Traversal” or “Path Equivalence” 299

301 CWE Listings CWE entries are multi-page listings that consist of –Description –Consequences –Likelihood of exploit –Detection methods –Short examples –Observed examples in the CVE database –Potential mitigations –Relationships to other weaknesses –Research notes –Attack patterns that exploit the weakness –References 300

302 CWE Example (just the description) 301

303 Prioritizing Weaknesses Which bugs should you fix first? Common Weakness Scoring System (CWSS) (http://cwe.mitre.org/cwss/cwss_v1.0.1.html) Helpful rankings: –OWASP Top 10 (https://www.owasp.org/index.php/OWASP_Top_10) Web application focus Mapped to CWE, but uses OWASP Risk Rating Methodology –SANS Top 25 Software Errors (https://www.sans.org/top25-software-errors/) SANS uses the CWSS 302

304 CWSS Metric Groups 303

305 CWSS Weights Weight for each value based on estimates of risk, confidence, or other hard-to-quantify attributes E.g., for “Finding Confidence” (Base Finding): 304
–Proven True (T, weight 1.0): The weakness is reachable by the attacker.
–Proven Locally True (LT, 0.8): The weakness occurs within an individual function or component whose design relies on safe invocation of that function, but attacker reachability to that function is unknown or not present. For example, a utility function might construct a database query without encoding its inputs, but if it is only called with constant strings, the finding is locally true.
–Proven False (F, 0.0): The finding is erroneous (i.e., the finding is a false positive and there is no weakness), and/or there is no possible attacker role.
–Default (D, 0.8): Median of the weights for Proven True, Proven Locally True, and Proven False.
–Unknown (UK, 0.5): There is not enough information to provide a value for this factor. Further analysis may be necessary. In the future, a different value might be chosen, which could affect the score.

306 CWSS Score Formula A CWSS 1.0 score can range between 0 and 100. It is calculated as follows: –BaseFindingSubscore * AttackSurfaceSubscore * EnvironmentSubscore E.g., the Base Finding subscore (BaseFindingSubscore) is calculated as follows: –Base = [ (10 * TechnicalImpact + 5*(AcquiredPrivilege + AcquiredPrivilegeLayer) + 5*FindingConfidence) * f(TechnicalImpact) * InternalControlEffectiveness ] * 4.0 –f(TechnicalImpact) = 0 if TechnicalImpact = 0; otherwise f(TechnicalImpact) = 1 The other metric groups are similarly complex Precision when using estimated values? 305
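The Base Finding formula above can be transcribed directly. This is a sketch only; the factor values passed in are illustrative CWSS 0.0-1.0 weights, not a scoring of any real finding:

```python
def f(technical_impact):
    # f(TechnicalImpact) = 0 if TechnicalImpact = 0; otherwise 1
    return 0 if technical_impact == 0 else 1

def base_finding_subscore(technical_impact, acquired_privilege,
                          acquired_privilege_layer, finding_confidence,
                          internal_control_effectiveness):
    """Base Finding subscore from the CWSS 1.0 formula on the slide.
    Each argument is the 0.0-1.0 weight CWSS assigns to a factor value."""
    return ((10 * technical_impact
             + 5 * (acquired_privilege + acquired_privilege_layer)
             + 5 * finding_confidence)
            * f(technical_impact)
            * internal_control_effectiveness) * 4.0

# Illustrative weights only: maximal impact and privileges, confidence
# "Proven Locally True" (0.8), fully ineffective internal controls (1.0).
print(base_finding_subscore(1.0, 1.0, 1.0, 0.8, 1.0))  # → 96.0
```

With all weights at their maximum the subscore reaches 100, matching the 0-100 range stated above; a zero TechnicalImpact zeroes the whole subscore via f().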

307 OWASP Top 10 A1-Injection A2-Broken Authentication and Session Management A3-Cross-Site Scripting (XSS) A4-Insecure Direct Object References A5-Security Misconfiguration A6-Sensitive Data Exposure A7-Missing Function Level Access Control A8-Cross-Site Request Forgery (CSRF) A9-Using Components with Known Vulnerabilities A10-Unvalidated Redirects and Forwards 306 A1, A3, A4, A8, A10: Unvalidated inputs

308 OWASP Mapping to CWE E.g., A1-Injection maps to following CWE items: –CWE Entry 77 on Command Injection –CWE Entry 89 on SQL Injection –CWE Entry 564 on Hibernate Injection Hibernate is a framework for mapping Java to a relational db 307

309 SANS Top 25 (the first 9) Rank / Score / ID / Name
[1] 93.8 CWE-89 Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
[2] 83.3 CWE-78 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
[3] 79.0 CWE-120 Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
[4] 77.7 CWE-79 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
[5] 76.9 CWE-306 Missing Authentication for Critical Function
[6] 76.8 CWE-862 Missing Authorization
[7] 75.0 CWE-798 Use of Hard-coded Credentials
[8] 75.0 CWE-311 Missing Encryption of Sensitive Data
[9] 74.0 CWE-434 Unrestricted Upload of File with Dangerous Type
308

310 CERT Top 10 Secure Coding Practices https://www.securecoding.cert.org/confluence/display/seccode/Top+10+Secure+Coding+Practices
–Validate inputs – esp. from untrusted data sources
–Heed compiler warnings – Use highest warning level; use static and dynamic analysis tools
–Architect and design for security policies
–Keep it simple
–Default deny
–Use principle of least privilege
–Sanitize data sent to other systems (such as command shells!)
–Practice defense in depth
–Use effective quality assurance techniques
–Adopt a secure coding standard
–Bonus: Define security requirements and model threats
309
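The first practice, combined with “default deny”, is often implemented as whitelist validation: define exactly what is acceptable and reject everything else. A minimal sketch (the username rule here is an invented example policy):

```python
import re

# Accept only lowercase usernames: a letter followed by up to 15
# letters, digits, or underscores. Anything else is denied by default.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{0,15}")

def valid_username(s):
    # fullmatch ensures the WHOLE string matches, so trailing
    # shell metacharacters or path separators cannot sneak through.
    return USERNAME_RE.fullmatch(s) is not None
```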

311 SANS-CWE “Monster Mitigations” http://cwe.mitre.org/top25/index.html#Mitigations 310
–M1: Establish and maintain control over all of your inputs. (CWE-20)
–M2: Establish and maintain control over all of your outputs. (CWE-116)
–M3: Lock down your environment. (CWE-250, Execution with Unnecessary Privileges)
–M4: Assume that external components can be subverted, and your code can be read by anyone.
–M5: Use industry-accepted security features instead of inventing your own.
–GP1 (general): Use libraries and frameworks that make it easier to avoid introducing weaknesses.
–GP2 (general): Integrate security into the entire software development lifecycle.
–GP3 (general): Use a broad mix of methods to comprehensively find and prevent weaknesses.
–GP4 (general): Allow locked-down clients to interact with your software.

312 OWASP Secure Coding Practices Checklist Areas 311 –Input Validation –Output Encoding –Authentication and Password Management –Session Management –Access Control –Cryptographic Practices –Error Handling and Logging –Data Protection –Communication Security –System Configuration –Database Security –File Management –Memory Management –General Coding Practices https://www.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide

313 OWASP Secure Coding Practices Checklist Checks Each area has several pages of specific checks E.g., for Output Encoding –Conduct all encoding on a trusted system (e.g., the server) –Utilize a standard, tested routine for outbound encoding –Contextually output encode all data returned to the client that originated outside the application's trust boundary. Encode all characters unless they are known to be safe for the intended interpreter –Contextually sanitize all output of un-trusted data to queries for SQL, XML, and LDAP –Sanitize all output of un-trusted data to operating system commands 312
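For the HTML context, Python’s standard library illustrates the “standard, tested routine” point (render_comment is an invented example wrapper; production code would normally rely on a vetted template engine’s auto-escaping):

```python
import html

def render_comment(untrusted):
    # Contextual output encoding for an HTML element context:
    # &, <, >, ", and ' are escaped, so markup in the untrusted
    # string is displayed as text rather than interpreted.
    return "<p>" + html.escape(untrusted, quote=True) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# → <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```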

314 Microsoft Coding Best Practices Use the latest compiler and supporting tools –E.g., highest warning level Make use of defenses provided by the compiler –E.g., Buffer security check, safe exception handling, DEP Use source-code analysis tools –Static testing Do not use banned functions –Legacy functions with known exploits Reduce potentially exploitable constructs –Static checking doesn’t always catch these Use a secure coding checklist 313

315 Good Coding Practices Art, not science Different groups have different standards But many similarities –Major focus: control of inputs and outputs 314

316 Outline What is Secure Programming? –Common software weaknesses –Secure coding practices “Secure Languages” Bug Tracking 315

317 “Secure Languages” Are there programming languages that are more secure than others? How would you measure that? How do we know if that is due to the language? –May be due to coding practices at company –May be due to skill of the programmers Which types of vulnerabilities are preventable by a language and which are independent of language? 316

318 Example Attempt to Answer Those Q’s WhiteHat security 2014 survey of languages used to implement web site applications Compared vulnerabilities on sites with languages used to implement those sites Languages, in order of popularity: –.NET –Java –ASP –PHP –ColdFusion –Perl 317

319 Results of Survey: # Vulnerabilities # detected vulnerabilities (For each site? For each application?) ranged from 6 to 11 –Highest: .NET, Java, ASP: 11; PHP: 10 –Lowest: Perl and ColdFusion: 6 and 7 Conclusion: Language choice has little effect on # vulnerabilities detected –Maybe the problem is their detection method? #1 vulnerability type for almost every language: XSS –Close #2: information leakage (revealing system data or debugging information through an output stream) 318

320 Results of Survey: # Types of Vulnerabilities #1 vulnerability type for almost every language: XSS –Close #2: information leakage (revealing system data or debugging information through an output stream) Median days to remediate XSS ranged from 184 for Perl sites down to 49 for PHP sites –This has to be largely due to policies and staffing at the companies running the servers Conclusion: Language choice has little effect on types of vulnerabilities Language choice probably has nothing to do with time to remediate 319

321 Results of Survey: Conclusions Language choice does not matter (for commonly used web app programming languages) SDLC processes matter Testing matters Developer skill matters Management of server and environment matters –Inventory of assets –Policy enforcement 320

322 But are there Secure Languages? The WhiteHat survey was for commonly used scripting languages Overwhelmingly, the vulnerabilities were based on incorrectly validating/sanitizing input data or revealing too much specific data about the system These are language-agnostic problems –more likely due to lack of training for programmers Are there features of programming languages to help create more secure code? Specifically, what languages are suitable for developing high-assurance systems? 321

323 Good Language Features Clear syntax; conceptual simplicity Modularity, data abstraction and objects Program behavior is the same on different systems Type safety - Type errors are detected Run-time errors properly trapped Memory leaks prevented Program analysis –Automated error detection, programming environments, compilation checks Isolation and special security features –Sandboxing, language-based security, … 322

324 What Does this C Statement Mean? 323
*p++ = *q++
It increments p, increments q, and modifies *p. But does this mean
*p = *q; ++p; ++q;
or
*p = *q; ++q; ++p;
or
tp = p; ++p; tq = q; ++q; *tp = *tq;
Example from Vitaly Shmatikov, U. Texas

325 Good Java Features Modularity and information hiding Array bounds checking – data cannot be accessed from an area outside of the allocated array –ArrayIndexOutOfBoundsException Exception handling –But must correctly handle exceptions or can get DoS Managed memory to prevent memory leaks Code signing –Use cryptography to establish origin of class file This info can be used by the Java Security Manager 324

326 What about System Programming? Java is good for applications, but not for system programming –Can’t use interpreter; must be native code –Can’t use sandbox Most OS, even JVM, written in C or C++ Even some assembly language, when can’t avoid it But we know C/C++ have problems wrt security Want language to help increase assurance How to choose? 325

327 Choosing a Language for System Programming Language features that increase assurance: –Strongly-typed –Information hiding and static data –Semantics reflected by syntax Binary clearly reflects source –Unambiguous semantics e.g., no pointer arithmetic –Indirect referencing w/o pointers –Compiled Most A1 systems were implemented in Pascal 326

328 Active Area of Research Many research projects to create programming-language technology for software security E.g., –Manifest Security at U. Penn. and CMU –SOL at U. Penn. –The Grey Project at CMU –SELinks at U. Maryland, College Park –Jif at Cornell University –FlowCaml at INRIA –Polymer at Princeton –Cryptyc at DePaul University –OPA at OWASP Most allow programmers to specify information flow and access control security policies on data Most based on Java, or on encapsulation/monitoring, or explicitly for applications, so can’t be used for system programming 327

329 Outline What is Secure Programming? –Common software weaknesses –Secure coding practices “Secure Languages” Bug Tracking 328

330 Bug Tracking Necessity for software development assurance Use a bug tracking tool Many systems for tracking bugs –E.g., Bugzilla Ideally, can integrate with version control systems, like Subversion, Git, and CVS Database of known bugs –Date discovered –Current status –Assignment to a programmer to remediate Be careful not to reintroduce fixed bugs 329

331 Mid-Term Details Wednesday February 24 th in Class Open Book and Open Note –Paper only, no electronics The goal of my questions is to determine your understanding of the material rather than simple memorization Will cover sample questions as part of this review. 330

332 Mid-Term review Trusted, Trustworthy, Assurance System Development Lifecycle 331

333 Trust A trustworthy entity is one for which there is sufficient credible evidence leading one to believe that the system will meet a set of requirements Trust is a measure of one’s belief in trustworthiness, relying on the evidence –To trust makes one vulnerable to violations of trust Assurance is the process of building confidence that an entity meets its security requirements, based on evidence provided by applying assurance techniques –“Meets security requirements” == Enforces policy 332

334 System Development Lifecycle Sequential stages of development and use Many variant SDL definitions. Here is one: 1.Requirements gathering/definition 2.Design 3.Implementation (coding) 4.Testing 5.Release 6.Operation 7.Disposal 333

336 Assurance in the System Lifecycle Assurance techniques must be applied in all stages of the system lifecycle, e.g., –The system’s security policy is internally consistent and reflects the requirements of the organization –The design of the security functions is sufficient to enforce the security requirements –The functions are implemented correctly –The assurances hold up through the maintenance, installation, configuration, and other operational stages 335

337 “Assurance Waterfall” 336 Org. Req’s Policy Security Req’s Design Implementation Disposal Distribution Instal. & Config. Maintenance Version Mgmt Threats Threat Modeling Modularization and layering Secure coding Testing Secure Distribution Patching; Monitoring Secure Install & Config Secure Disposal Version Mgmnt Informal analysis FSPM FTLS Inter- mediate spec(s) Proof Code Correspondence Informal analysis

338 Things We’d Like to Measure How much more secure is an application, system, or network after adding a particular security control? What’s the best mix of controls that will get the most security for a given investment? Are we secured enough? Are the security controls worth the price? 337

339 What is Security? A system is only secure with respect to a computer security policy A computer security policy denotes what is allowed and what is not allowed with respect to people accessing information stored in the system So computer security is the control of access by people to information stored in a computer system in order to enforce the policy How can we measure “control of access”? 338

340 Many Definitions of Security Humans define “security” as policies But there are many types of policies –E.g., C.I.A., plus others Policies don’t compose into one thing Different components of the system have different security requirements –OS and applications and DNS and networking and authentication and audit and... –Multi-dimensional, not additive Policies may contradict each other Keep policies separate, forget “unified theory” –E.g., TNI valid assuming secure network connections 339

341 Can’t Prove a Negative Absence of evidence is not evidence of absence Can only detect known exploits, so have no idea what is being missed Know there are uncountable vulnerabilities Know there are uncountable zero-days How can we know if we are already p0wned? How can we say that we are not? Ideally, design and build for high-assurance –Protect against all attacks, known and unknown 340

342 Security is Binary A system is secure or it is not –Secure means always in secure state Reducing exploit instances insufficient –Ideally, want to eliminate exploit instances –Anti-phishing training example from before –Fewer fall prey –That reduces # detectable instances and cost –But still many successful exploits Advanced attackers only need to succeed once 341

344 IT Risk Assessment The process of calculating quantitatively the potential for damage or monetary cost caused by an event that affects an organization’s IT assets Requires –Identifying possible events –Quantifying the probability that an event will occur –Quantifying in $ the potential damage Risk = frequency(event) * damage(event) –Risk = Annual Loss Expectancy (ALE) –Product of the annualized rate of occurrence (ARO) and single loss expectancy (SLE) 343
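The ALE calculation is simple arithmetic; the hard (and estimate-driven) part is the inputs. A sketch with invented numbers:

```python
def annual_loss_expectancy(aro, sle):
    """Risk = frequency(event) * damage(event): ALE = ARO * SLE.
    aro: annualized rate of occurrence (events per year)
    sle: single loss expectancy (cost in $ per event)"""
    return aro * sle

# Illustrative numbers only: a phishing compromise expected twice a
# year, costing $40,000 per incident.
print(annual_loss_expectancy(2.0, 40_000))  # → 80000.0
```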

345 Identifying Possible Events Consider –Threats – Potential sources of incidents –Vulnerabilities – Weaknesses in assets Event happens when threat meets vulnerability –Tornado and flimsy data center building –Hacker and unpatched windows server Security controls primarily focus on eliminating or mitigating vulnerabilities We’ll talk about threat modelling later in semester 344

346 Quantifying Probability of Events Based on estimates –Estimate frequency of threat –Estimate existence of vulnerability –Estimate difficulty of exploiting vulnerability –Estimate cost of exploiting vulnerability Estimates performed by “subject matter experts” –“SMEs” –Estimate based on lots of assumptions and intuition guided by experience Did I say “estimate”? I meant “guess”. 345

347 High vs. Low Assurance We provide assurance through techniques such as structured design processes, documentation, and testing Higher assurance through use of more, and more rigorous, processes, documentation, and testing –Intuitively, compare a little testing vs. a great deal of testing There are some fundamental ways to organize these processes to make the job easier and to have a more robust product 346

348 Orange Book Assurance The Assurance Control Objective Systems that are used to process or handle classified or other sensitive information must be designed to guarantee correct and accurate interpretation of the security policy and must not distort the intent of that policy. Assurance must be provided that correct implementation and operation of the policy exists throughout the system's life-cycle. Operational Assurance System Architecture System Integrity Covert Channel Analysis Trusted Facility Management Trusted Recovery 347

349 Subtleties – Balanced Assurance Suppose you have a really strong, high-assurance perimeter control system. You’ve made a heavy investment to make sure only permitted individuals are able to have access through this perimeter. By allowing those individuals through the perimeter you have acknowledged great trust. Each of them has an office inside the perimeter, and each office has a lock on it. What threat does the office lock mitigate? Does the lock on the office need to be a super-strong, high-assurance lock, comparable with the perimeter? 348

350 TCSEC Classes D – Minimal Protection C – Discretionary Protection –C1 – Discretionary Security Protection - DAC –C2 – Controlled Access Protection – DAC + audit, etc. B – Mandatory Protection –B1 – Labeled Security Protection (has MAC labels) –B2 – Structured Protection (FSPM) –B3 – Security Domains (implements RM) A – Verified Protection –A1 – Verified Design (formal design, spec, and verify) 349

351 TCSEC summary Security Policy 350

352 TCSEC summary Accountability 351

353 TCSEC summary Documentation 352

354 TCSEC summary Assurance 353

355 Capability Maturity Models How can a user assess the security of a product? –After lengthy, third-party evaluation (but product may be nearly obsolete by then) –Immediately, but assurance rests on claims by vendor Improve assurance and time-to-market by pre- reviewing security engineering processes of vendor –Third-party review of vendor security engineering processes (capabilities) –Focus on measuring organization competency (maturity) and improvements 354

356 Microsoft Security Development Lifecycle (SDL)
Mandated for use at Microsoft since 2004
Seven phases:
–Training
–Requirements
–Design
–Implementation
–Verification
–Release
–Response

357 Purpose of Threat Modeling
Identify threats against a system
–Identify deficiencies in security requirements and design
Identify threat countermeasures
–These include, but are not limited to, technical mechanisms
–They may also include administrative and physical controls
–Must also consider threats to the countermeasures themselves!
Increase assurance
The process should be repeatable and methodical

358 Attack Trees
Intended to be a "formal" way of modeling attacks
–"Tree-like representation of an attacker's goal recursively refined into conjunctive or disjunctive sub-goals"
–The attacker's goal is the root of the tree
–Different ways of achieving a goal are its sub-goals, called "refinements" of the parent goal
Initially proposed by Schneier in 1999
Formalized by Mauw and Oostdijk in 2005 (Foundations of Attack Trees [ICISC '05], http://www.win.tue.nl/~sjouke/publications/papers/attacktrees.pdf)
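The conjunctive/disjunctive refinement described above can be sketched as a small evaluator. This is an illustrative toy, not Mauw and Oostdijk's formalization; the node model and the safe-cracking example are made up for the sketch.

```python
# Minimal attack-tree evaluator: a goal is achievable if its refinement
# is satisfied. An OR (disjunctive) node needs any sub-goal to succeed;
# an AND (conjunctive) node needs all of them.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    gate: str = "OR"              # "OR" (disjunctive) or "AND" (conjunctive)
    children: List["Node"] = field(default_factory=list)
    achieved: bool = False        # for leaves: did the attacker pull this off?

def achievable(n: Node) -> bool:
    if not n.children:            # leaf attack step
        return n.achieved
    results = [achievable(c) for c in n.children]
    return all(results) if n.gate == "AND" else any(results)

# Root goal "open safe", refined into picking the lock OR learning the
# combination; learning the combination conjunctively requires finding
# the written combination AND reading it.
root = Node("open safe", "OR", [
    Node("pick lock", achieved=False),
    Node("learn combo", "AND", [
        Node("find written combo", achieved=True),
        Node("read combo", achieved=True),
    ]),
])
print(achievable(root))  # True: the AND branch is fully satisfied
```

Attack-defense trees extend this by attaching countermeasure nodes that negate a sub-goal when the defense is in place.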

359 Attack-Only Tree Example

360 Attack-Defense Tree Example

361 Requires/Provides Model
To successfully launch an attack, certain properties must hold
–These are the "requires" properties
After a successful attack, a new set of properties holds
–These are the "provides" properties
The attack "goal" is a property that holds after a sequence of attack events
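The chaining of attack events by their requires/provides properties can be sketched as a fixed-point computation. The event names and property names here are invented for illustration, not drawn from the model's original papers.

```python
# Requires/provides sketch: an attack event can fire when all of its
# required properties hold; firing it adds its provided properties to
# the attacker's state. The goal is reached if some sequence of events
# makes the goal property hold.
events = [
    {"name": "scan",     "requires": set(),            "provides": {"knows_host"}},
    {"name": "exploit",  "requires": {"knows_host"},   "provides": {"user_shell"}},
    {"name": "priv_esc", "requires": {"user_shell"},   "provides": {"root_shell"}},
]

def reachable(goal: str, state: set) -> bool:
    changed = True
    while changed:                    # keep firing events until a fixed point
        changed = False
        for e in events:
            if e["requires"] <= state and not e["provides"] <= state:
                state |= e["provides"]
                changed = True
    return goal in state

print(reachable("root_shell", set()))   # True: scan -> exploit -> priv_esc
```

The fixed-point loop makes the ordering of events implicit: any event whose preconditions are met may fire, which matches the model's view of an attack as a sequence of enabling steps rather than a single action.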

362 Microsoft’s Software Security Properties PropertyDescription ConfidentialityData is only available to the people intended to access it. IntegrityData and system resources are only changed in appropriate ways by appropriate people. AvailabilitySystems are ready when needed and perform acceptably. AuthenticationThe identity of users is established (or you’re willing to accept anonymous users). AuthorizationUsers are explicitly allowed or denied access to resources. NonrepudiationUsers can’t perform an action and later deny performing it. 361

363 Data Flow Diagram (DFD)
Used to graphically represent a system and its components
Standard set of elements:
–Data flows
–Data stores
–Processes
–Interactors
One more element for threat modeling:
–Trust boundaries
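A DFD annotated with trust boundaries can be represented as plain data, and the flows that cross a boundary (the usual starting points for threat enumeration) found mechanically. The element names and zones below are hypothetical.

```python
# DFD sketch: assign each element to a trust zone, list the data flows,
# and flag every flow whose endpoints sit in different zones -- those
# flows cross a trust boundary and deserve threat analysis first.
zone = {
    "browser": "internet",
    "web_app": "dmz",
    "user_db": "internal",
}
flows = [
    ("browser", "web_app", "login form"),
    ("web_app", "user_db", "credential lookup"),
    ("web_app", "browser", "session cookie"),
]

crossings = [(src, dst, data) for src, dst, data in flows
             if zone[src] != zone[dst]]
for src, dst, data in crossings:
    print(f"{data}: {src} ({zone[src]}) -> {dst} ({zone[dst]})")
```

In this toy system every flow crosses a boundary, which is common for small diagrams; the value of the exercise grows with system size, where boundary-crossing flows are a small fraction of the total.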

364 Example: First Cut (diagram annotations: "Data Sink!", "Useless details", "Data store")

365 Decomposition and Modularization
Divide a large system into smaller components
–Each has a well-defined interface
The goal is to divide by functions and responsibilities
Modularization is good:
–Manage complexity by using smaller parts
–The system is easier to understand, develop (e.g., by a team in parallel), test, and maintain

366 Coupling and Cohesion
Ideas developed by Larry Constantine in the late 1960s
Coupling is a measure of interdependence among modules:
–Amount of shared infrastructure
–Amount of coordination
–Amount of information flow
Cohesion is a measure of the degree to which the elements of a module belong together
We want low coupling and high cohesion
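The contrast between tight and loose coupling can be made concrete with a small example. The shopping-cart classes here are invented purely for illustration.

```python
# Tight coupling: the caller reaches into another module's internal
# representation, so any change to that representation breaks the caller.
class TightCart:
    def __init__(self):
        self._items = []          # internal list of (name, price) tuples

def tight_total(cart: TightCart) -> float:
    # Depends on the cart's internal storage format -- fragile.
    return sum(price for _name, price in cart._items)

# Loose coupling with high cohesion: everything about items lives inside
# the cart, and callers see only a narrow interface, never the storage.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _name, price in self._items)

cart = Cart()
cart.add("widget", 2.5)
cart.add("gadget", 4.0)
print(cart.total())   # 6.5
```

With the second design, `Cart` could switch to a dict, a database row, or a running total without touching any caller, which is exactly the maintainability benefit low coupling buys.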

367 "Information Hiding" Modularization
The system is modeled on hiding implementation decisions; each module hides "a secret"
Module diagram:
–Input – hides the input device
–Line storage – hides how lines are stored
–Circular Shift – hides how circularly shifted lines are stored; creates the abstraction of storing all shifted lines
–Alphabetizer – hides the algorithm(?)
–Output – hides the output device
–Control
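The structure above can be sketched in code, in the spirit of Parnas-style information hiding. This is a simplified illustration: the class names and methods are assumptions for the sketch, not the actual module interfaces from the slide's source.

```python
class LineStorage:
    """Hides how lines are stored (this module's 'secret')."""
    def __init__(self):
        self._lines = []              # secret representation: a list of word lists

    def add_line(self, words):
        self._lines.append(list(words))

    def word(self, line_no, word_no):
        return self._lines[line_no][word_no]

    def word_count(self, line_no):
        return len(self._lines[line_no])

class CircularShifter:
    """Hides how circularly shifted lines are stored: here they are
    computed on demand, a decision callers cannot observe."""
    def __init__(self, storage):
        self._storage = storage       # uses only LineStorage's interface

    def shifts_of(self, line_no):
        n = self._storage.word_count(line_no)
        words = [self._storage.word(line_no, i) for i in range(n)]
        return [words[i:] + words[:i] for i in range(n)]

store = LineStorage()
store.add_line(["assurance", "in", "cyberspace"])
print(CircularShifter(store).shifts_of(0))
```

Because `CircularShifter` touches `LineStorage` only through `word` and `word_count`, either module's secret can change (e.g., lines stored in a file, shifts precomputed) without affecting the other.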

368 Example of Layering (layer diagram: User Programs, Services, OS/File System 1 with DAC, OS/File System 2 with MAC, TCB boundary)

369 Example OS Layering: GEMSOS
Security kernel of the GTNP (Gemini Trusted Network Processor)
–Note the two-level scheduling
–The ITC provides the VM abstraction
–The UTC schedules processes on abstract VMs
–The UTC doesn't know when a VM blocks
Where is the DAC layer? Right! DAC is outside Ring 0 in GEMSOS

370 Sample Problems
I am working on these and will post them before lecture.

