
1 Tools to Make Objective Information Security Decisions — The Trust Economics Methodology SERENE Spring School, Birkbeck College, UK April 14, 2010 Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@ncl.ac.uk

2 Part I security metrics and measurements Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

3 motivation Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

4 © Aad van Moorsel, Newcastle University, 2010 security and trust data loss http://www.youtube.com/watch?v=JCyAwYv0Ly0 identity theft http://www.youtube.com/watch?v=CS9ptA3Ya9E worms http://www.youtube.com/watch?v=YqMt7aNBTq8 http://www.informationweek.com/news/software/showArticle.jhtml?articleID=221400323

5 cyber security impact: money lost in UK the most recent Garlik UK Cybercrime report, with numbers for the year 2008, for the UK:
–over 3.6 million criminal acts online
–tripling of identity theft, to 87 thousand cases
–doubling of online banking losses, to £52 million
–44 thousand phishing web sites targeting UK banks
–of £41 billion in online shopping, £600 million credit card fraud; online fraud rose from 4% to 8% in the last two years
–online harassment: 2.4 million times
–FBI: median amount of money lost per scam victim: $1,000

6 cybercrime impact: convictions in the US

7 © Aad van Moorsel, Newcastle University, 2010 security and trust security: protection of a system against malicious attacks information security: preservation of confidentiality, integrity and availability of information (the CIA properties)

8 8 © Aad van Moorsel, Newcastle University, 2010 why metrics and why quantification? two uses: gives the ability to monitor the quality of a system as it is being used (using measurement) gives the ability to predict for the future the quality of a design or a system (using modelling) a good metric is critical to make measurement and modelling useful

9 9 © Aad van Moorsel, Newcastle University, 2010 security and trust trust: a personal, subjective perception of the quality of a system or personal, subjective decision to rely on a system evaluation trust: the subjective probability by which an individual A expects that another individual B performs a given action on which A’s welfare depends (Gambetta 1988) decision trust: the willingness to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible (McKnight and Chervany 1996)

10 © Aad van Moorsel, Newcastle University, 2010 security and trust what is more important, security or trust? “Security Theatre and Balancing Risks” (Bruce Schneier) http://www.cato.org/dailypodcast/podcast-archive.php?podcast_id=812

11 quality of service metrics Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

12 © Aad van Moorsel, Newcastle University, 2010 measure security Lord Kelvin (as often paraphrased): “what you cannot measure, you cannot manage” how true is this: in science? in engineering? in business? and how true is: “we only manage what we measure”?

13 © Aad van Moorsel, Newcastle University, 2010 classifying metrics (part one) quantitative vs. qualitative: quantitative metrics can be expressed through some number, while qualitative metrics are concerned with TRUE or FALSE quality-of-service (QoS) metrics: express a grade of service as a quantitative metric non-functional properties are system properties beyond the strictly necessary functional properties IT management is mostly about quantitative/QoS/non-functional metrics

14 © Aad van Moorsel, Newcastle University, 2010 classifying metrics (part two) performance metrics: timing and usage metrics (CPU load, throughput, response time) dependability or reliability metrics: metrics related to accidental failure (MTTF, availability, reliability) security metrics: metrics related to malicious failures (attacks)? business metrics: metrics related to cost or benefits (number of buy transactions on a web site, cost of ownership, return on investment)

15 © Aad van Moorsel, Newcastle University, 2010 common metrics (performance) throughput = number of tasks a resource can complete per time unit: jobs per second, requests per second, millions of instructions per second (MIPS), floating point operations per second (FLOPS), packets per second, kilobits per second (Kbps), transactions per second, ...

16 © Aad van Moorsel, Newcastle University, 2010 common metrics (performance) response time, waiting time, propagation delay [timeline figure: user request → arrival at server buffer → start of processing → reply received by user; intervals shown: propagation delay, waiting time, processing time, propagation delay; response time from the server perspective and from the client perspective]

17 © Aad van Moorsel, Newcastle University, 2010 common metrics (performance) capacity = maximum sustainable number of tasks load = offered number of tasks overload = load is higher than capacity utilization = the fraction of resource capacity in use (CPU, bandwidth); for a CPU, this corresponds to the fraction of time the resource is busy (sometimes imprecisely called the CPU load)
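These definitions translate directly into a couple of one-liners; the numbers used below are illustrative only:

```python
def utilization(busy_time, total_time):
    """Fraction of time the resource is busy (for a CPU, this is what is
    often, imprecisely, called the CPU load)."""
    return busy_time / total_time

def is_overloaded(load, capacity):
    """Overload: the offered number of tasks exceeds what is sustainable."""
    return load > capacity
```

For example, a CPU that is busy 45 seconds out of a 60-second window has a utilization of 0.75.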

18 © Aad van Moorsel, Newcastle University, 2010 some related areas: performance measuring performance: we know CPU speed (and Intel measures it) we can easily measure sustained load (throughput) we can model performance reasonably well (queuing, simulations) we’re pretty good at adapting systems for performance, through load balancing etc. there is a TOP 500 for supercomputers we buy PCs based on performance, and their performance is advertised companies buy equipment based on performance

19 © Aad van Moorsel, Newcastle University, 2010 common metrics (dependability/reliability) systems with failures [timeline figure: the system alternates between up (operating) and down (failed) periods, with failure events taking it down and repair events bringing it back up]

20 © Aad van Moorsel, Newcastle University, 2010 common metrics (dependability/reliability) Mean Time To Failure (MTTF) = average length of an up period Mean Time To Repair (MTTR) = average length of a down period availability = fraction of time the system is up = MTTF / (MTTF + MTTR) unavailability = 1 − availability = fraction of down time reliability at time t = probability the system does not go down before t
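A short sketch of these formulas; the reliability expression additionally assumes exponentially distributed up times, which the slide does not state:

```python
import math

def availability(mttf, mttr):
    """availability = MTTF / (MTTF + MTTR); use the same time unit for both."""
    return mttf / (mttf + mttr)

def yearly_downtime_hours(avail):
    """unavailability = 1 - availability, expressed as hours per year."""
    return (1.0 - avail) * 365 * 24

def reliability(t, mttf):
    """P(no failure before t), assuming exponentially distributed up times."""
    return math.exp(-t / mttf)
```

With MTTF = 24 hours and MTTR = 1 hour this gives an availability of 0.96, matching the example used a few slides later.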

21 © Aad van Moorsel, Newcastle University, 2010 relation between dependability metrics availability and associated yearly down time:
availability   yearly down time
0.9            37 days
0.99           4 days
0.999          9 hours
0.9999         50 minutes
0.99999        5 minutes

22 © Aad van Moorsel, Newcastle University, 2010 relation between dependability metrics
availability   required MTTF if MTTR = 1 hour   required MTTR if MTTF = 1 day
0.96           1 day                            1 hour
0.99           4 days                           14 minutes
0.999          6 weeks                          1½ minutes
0.9999         14 months                        9 seconds
0.99999        11 years                         1 second
if you have a system with 1 day MTTF and 1 hour MTTR, would you work on the repair time or the failure time to improve the availability?
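The table entries can be reproduced by solving availability = MTTF / (MTTF + MTTR) for either unknown; a sketch:

```python
def required_mttf(avail, mttr):
    """MTTF needed to reach a target availability at a given MTTR."""
    return avail * mttr / (1.0 - avail)

def required_mttr(avail, mttf):
    """MTTR needed to reach a target availability at a given MTTF."""
    return mttf * (1.0 - avail) / avail
```

For instance, with MTTR = 1 hour, an availability of 0.96 requires an MTTF of 24 hours (1 day); with MTTF = 1 day, 0.99 availability requires an MTTR of about 14.5 minutes. The two functions let you compare both levers for the slide's question.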

23 23 © Aad van Moorsel, Newcastle University, 2010 five nines

24 © Aad van Moorsel, Newcastle University, 2010 some related areas: availability measuring availability: we do not know much about CPU reliability (although Intel measures it) it is easy, but time consuming, to measure down time we can model availability reasonably well (Markov chains), although we do not know parameter values for fault and failure occurrences we’re rather basic at adapting systems for availability, but there are various fault tolerance mechanisms there is no TOP 500 for reliable computers we do not buy PCs based on availability, and their availability is rarely advertised companies buy equipment based on availability only for top-end applications (e.g. goods and finances administration of supermarket chains)

25 © Aad van Moorsel, Newcastle University, 2010 how about security measuring security: we do not know much about the level of CPU security (and Intel does not know how to measure it) it is possible to measure security breaches, but how much do they tell you? we do not know how to model for levels of security; for instance, we do not know what attacks look like we’re only just starting to research adapting systems for security, though many security mechanisms are available there is no TOP 500 for secure computers we do not buy PCs based on privacy or security, and their privacy/security is rarely advertised companies are very concerned about security, but do not know how to measure it and show improvements

26 © Aad van Moorsel, Newcastle University, 2010 what’s special about security security is a hybrid between a functional and a non-functional (performance/availability) property it is tempting to think security is binary: it is secured or not → a common mistake security deals with loss and attacks
–you can measure after the fact, but would like to predict
–maybe loss can still be treated like accidental failures (as in availability)
–attacks certainly require knowledge of attackers: how they act, when they act, what they will invent
a security level (even if we somehow divined it) is meaningless on its own:
–what are the possible consequences?
–how do people react to it (risk averse?)

27 © Aad van Moorsel, Newcastle University, 2010 how do people now measure security? reporting after the fact: industry and government are obligated to report breaches (NIST database and others) measure how many non-spam emails went through, etc. some predictive metrics as ‘substitute’ for security: how many CPU cycles are needed to break an encryption technique? risk analysis: likelihood × impact, summed over all breaches → but we know neither likelihood nor impact
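The risk-analysis sum can be sketched as follows; the breach list and every number in it are invented for illustration (the slide's point is exactly that real likelihoods and impacts are unknown):

```python
def annual_risk(breaches):
    """Classic risk analysis: sum of likelihood x impact over all
    anticipated breach types (an annualised loss expectancy)."""
    return sum(likelihood * impact for likelihood, impact in breaches)

# hypothetical figures: (expected breaches per year, cost per breach)
breaches = [(0.5, 100_000),     # data loss
            (2.0, 5_000),       # phishing incident
            (0.01, 2_000_000)]  # major intrusion
```

Running `annual_risk(breaches)` on these invented figures sums three likelihood × impact products into one monetary exposure number.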

28 28 © Aad van Moorsel, Newcastle University, 2010 why is measurable security important? without good measures security is sold as all or nothing security purchase decisions are based on scare tactics system configuration (including cloud, SaaS) cannot be judged for resulting security

29 security metrics Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

30 © Aad van Moorsel, Newcastle University, 2010 CIA metrics how about CIA? confidentiality: keep the organisation’s data confidential (privacy for the organisation) integrity: data is unaltered availability: data is available for use you can score them and sum them up (see CVSS later) you can measure integrity and availability if there is centralized control you cannot easily predict them

31 © Aad van Moorsel, Newcastle University, 2010 good metrics in practice, good metrics should be: consistently measured, without subjective criteria cheap to gather, preferably in an automated way expressed as a cardinal number or percentage, not as qualitative labels expressed using at least one unit of measure (defects, hours,...) contextually specific—relevant enough to make decisions from Jaquith’s book ‘Security Metrics’

32 © Aad van Moorsel, Newcastle University, 2010 good metrics in practice, metrics cover four different aspects: perimeter defenses –# spam detected, # viruses detected coverage and control –# laptops with antivirus software, # patches per month availability and reliability –host uptime, help desk response time application risks –vulnerabilities per application, assessment frequency for an application from Jaquith’s book ‘Security Metrics’

33 33 © Aad van Moorsel, Newcastle University, 2010 good metrics how good do you think the metrics from Jaquith’s book ‘Security Metrics’ are? it’s the best we can do now, but as the next slide shows, there are a lot of open issues

34 34 © Aad van Moorsel, Newcastle University, 2010 good metrics ideally good metrics should: not measure the process used to design, implement or manage the system, but the system itself not depend on things you will never know (such as in risk management) be predictive about the future security, not just reporting the past but these are very challenging requirements we do not yet know how to fulfill

35 data collection and security metrics Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

36 © Aad van Moorsel, Newcastle University, 2010 honeypots a honeypot pretends to be a resource with value to attackers, but is actually isolated and monitored, in order to misguide attackers and to analyze their behaviour.

37 © Aad van Moorsel, Newcastle University, 2010 honeypots two types:
–high-interaction: real services, real OS, real applications → higher risk of being used to break in or attack others; honeynets (two or more honeypots in a network)
–low-interaction: emulated services → low risk; honeypots like nepenthes
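A minimal low-interaction honeypot can be sketched in a few lines of Python; the banner string, port handling and log format below are illustrative choices, not taken from nepenthes or any real tool:

```python
import socket
import threading
import time

def run_honeypot(host="127.0.0.1", port=0, banner=b"SSH-2.0-OpenSSH_5.1\r\n",
                 max_connections=1):
    """Toy low-interaction honeypot: emulate only a service banner and
    log who connected, when, and the first bytes they sent."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))           # port 0: let the OS pick a free port
    srv.listen(5)
    chosen_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_connections):
            conn, addr = srv.accept()
            conn.sendall(banner)     # pretend to be a real service
            conn.settimeout(2.0)
            try:
                payload = conn.recv(1024)   # capture the attacker's input
            except socket.timeout:
                payload = b""
            log.append({"time": time.time(), "source": addr[0],
                        "payload": payload})
            conn.close()
        srv.close()

    worker = threading.Thread(target=serve, daemon=True)
    worker.start()
    return chosen_port, log, worker
```

Because the services are only emulated, an attacker gains nothing by "breaking in", which is exactly the low-risk property the slide attributes to low-interaction honeypots.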

38 © Aad van Moorsel, Newcastle University, 2010 an example of a hacker in a honeypot
SSH-1.99-OpenSSH_3.0
SSH-2.0-GOBBLES
GGGGO*GOBBLE*
uname -a;id
OpenBSD pufferfish 3.0 GENERIC#94 i386
uid=0(root) gid=0(wheel) groups=0(wheel)
ps -aux|more
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 16042 0.0 0.1 372 256 ?? R 2:48PM 0:00.00 more (sh)
root 25892 0.0 0.2 104 452 ?? Ss Tue02PM 0:00.14 syslogd
root 13304 0.0 0.1 64 364 ?? Is Tue02PM 0:00.00 portmap
...
root 1 0.0 0.1 332 200 ?? Is Tue02PM 0:00.02 /sbin/init
id
uid=0(root) gid=0(wheel) groups=0(wheel)
who
cat inetd.conf
(an attempt to edit the configuration file for network services)

39 data from a honeypot 39 © Aad van Moorsel, Newcastle University, 2010 data from 2003, number of different ‘attack’ sources Pouget, Dacier, Debar: “Attack processes found on the Internet”

40 © Aad van Moorsel, Newcastle University, 2010 data from honeypots a lot of other data can be obtained: how do worms propagate? how do attackers use zombies? what kinds of attackers exist, and which ones start denial-of-service attacks? which countries do the attacks come from?... Pouget, Dacier, Debar: “Attack processes found on the Internet”

41 41 © Aad van Moorsel, Newcastle University, 2010 honeypots a honeynet is a network of honeypots and other information system resources T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”

42 © Aad van Moorsel, Newcastle University, 2010 honeynets three tasks in a honeynet: 1. data capture 2. data analysis 3. data control: especially high-interaction honeypots are vulnerable to being misused by attackers → control the data flow, so that it neither comes inside the organisation, nor reaches other innocent parties T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”

43 US CERT and CVSS Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

44 © Aad van Moorsel, Newcastle University, 2010 US-CERT security vulnerabilities United States Computer Emergency Readiness Team people submit vulnerability notes, e.g.: Vulnerability Note VU#120541 SSL and TLS protocols renegotiation vulnerability A vulnerability exists in SSL and TLS protocols that may allow attackers to execute an arbitrary HTTP transaction. Credit: Marsh Ray of PhoneFactor

45 45 © Aad van Moorsel, Newcastle University, 2010 CVSS scoring in US-CERT it uses a scoring system to determine how serious the vulnerability is: Common Vulnerability Scoring System (CVSS) P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”

46 © Aad van Moorsel, Newcastle University, 2010 CVSS
BaseScore = 0.6 × Impact + 0.4 × Exploitability
Impact = 10.41 × (1 − (1 − ConfImpact) × (1 − IntegImpact) × (1 − AvailImpact))
ConfImpact = case ConfidentialityImpact of none: 0.0 | partial: 0.275 | complete: 0.660
...
P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”
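The slide's scoring can be coded directly; note that this follows the slide's simplified weighting, while the full CVSS v2 base equation also subtracts 1.5 and multiplies by a factor f(Impact), which is omitted here as on the slide:

```python
# CVSS v2 value table for the three CIA impact sub-scores
IMPACT_WEIGHTS = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def impact(conf, integ, avail):
    """Impact = 10.41 x (1 - (1-C)(1-I)(1-A))."""
    c, i, a = (IMPACT_WEIGHTS[x] for x in (conf, integ, avail))
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

def base_score(conf, integ, avail, exploitability):
    """Simplified base score as on the slide: 0.6 x Impact + 0.4 x Exploitability."""
    return 0.6 * impact(conf, integ, avail) + 0.4 * exploitability
```

A total compromise (complete loss of confidentiality, integrity and availability) yields an impact of about 10, the top of the scale.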

47 DataLossDB from Open Security Foundation Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

48 © Aad van Moorsel, Newcastle University, 2010 Open Security Foundation OSF wants to provide knowledge and resources so that organisations may properly detect, protect against, and mitigate information security risks. OSF maintains two databases: 1. OSVDB: the Open Source Vulnerability Database, with all kinds of computer security breaches 2. DataLossDB: data loss incidents –improve awareness: for consumers, CISOs, governments, legislators, citizens –gain a better understanding of the effects of, and effectiveness of, "compliance"

49 © Aad van Moorsel, Newcastle University, 2010 DataLossDB from Open Security Foundation Improve awareness of data security and identity theft threats to consumers. Provide accurate statistics to CSOs and CTOs to assist them in decision making. Provide governments with reliable statistics to assist with their consumer protection decisions and initiatives. Assist legislators and citizens in measuring the effectiveness of breach notification laws. Gain a better understanding of the effects of, and effectiveness of, "compliance".

50 50 © Aad van Moorsel, Newcastle University, 2010 DataLossDB, an incident each day

51 51 © Aad van Moorsel, Newcastle University, 2010 DataLossDB: types of incidents

52 52 © Aad van Moorsel, Newcastle University, 2010 DataLossDB: outsiders vs. insiders note: interviews with CISOs suggest that the real number is 75% insider incidents

53 Part II trust economics methodology

54 motivation: the need for metrics of information security

55 Forrester report 2010 in ‘The Value of Corporate Secrets: How Compliance and Collaboration Affect Enterprise Perceptions of Risk’, Forrester finds: 1. secrets comprise two-thirds of information value 2. compliance, not security, drives security budgets 3. firms focus on preventing accidents, but theft is 10 times costlier 4. more value correlates with more incidents 5. CISOs do not know how effective their security controls are © Aad van Moorsel, Newcastle University, 2010

56 the value of top-five data assets 56 © Aad van Moorsel, Newcastle University, 2010 in the knowledge industry about 70% of this is secrets, 30% custodial data (credit card, customer data, etc)

57 compliance drives budgets, but doesn’t protect secrets 57 © Aad van Moorsel, Newcastle University, 2010

58 most incidents are employee accidents © Aad van Moorsel, Newcastle University, 2010 75% of incidents are insider (accident or theft)

59 but thefts are much more costly than accidents 59 © Aad van Moorsel, Newcastle University, 2010

60 do CISOs know? the CISO at a high-value firm scores its security at 2.5 out of 3 the CISO at a low-value firm scores its security at 2.6 out of 3 high-value firms have 4 times as many accidents as low-value firms, with 20 times more valuable data so, the CISOs seem to think security is okay/the same, despite differences in actual accidents at a firm... Forrester concludes: to understand more objectively how well their security programs perform, enterprises will need better ways of generating key performance indicators and metrics © Aad van Moorsel, Newcastle University, 2010

61 example of compliance: PCI DSS for credit card companies

62 PCI DSS PCI DSS: Payment Card Industry Data Security Standard MasterCard, Visa, American Express, Discover,..., all come together to define a data security compliance standard mostly concerned with protecting customer data –12 requirements –testing procedures for the 12 requirements –assessors go on-site to see if a company passes the testing procedures © Aad van Moorsel, Newcastle University, 2010

63 PCI DSS example 63 © Aad van Moorsel, Newcastle University, 2010

64 PCI DSS requirements
1. Install and maintain a firewall
2. Do not use vendor-supplied defaults for passwords etc.
3. Protect stored cardholder data
4. Encrypt cardholder data across open, public networks
5. Use and regularly update anti-virus software
6. Develop and maintain secure systems and applications
7. Restrict access to cardholder data by business need-to-know
8. Assign a unique ID to each person with computer access
9. Restrict physical access to cardholder data
10. Track and monitor all access to network and cardholder data
11. Regularly test security systems and processes
12. Maintain a policy that addresses information security
© Aad van Moorsel, Newcastle University, 2010

65 PCI DSS observations: –it will take a company a lot of effort to show compliance –you do not know how secure it actually makes your company –you hope it protects you against loss of custodial data, which are indeed very embarrassing and bring bad press –but these are not the most costly breaches (losing secrets is costlier) so, how good is such a standard for an industry? would the industry do worse without the standard? 65 © Aad van Moorsel, Newcastle University, 2010

66 a case for trust economics would it be worse without the compliance standard? this question is very difficult to answer:
–from a business perspective, you would be able to optimize your security investments better (potentially...)
–from the CISO’s perspective, do they value things exactly as the company does, if something ‘minor’ is embarrassing enough to get fired for?
–from a legal perspective, how does one show negligence? isn’t it nice to have something written down, even if it makes people waste time?
–the psychology of the customer is to listen to the extreme cases, even if they are very rare; how do we take that into consideration?
nevertheless, we are going to try, and make a few steps towards answering these questions using Trust Economics © Aad van Moorsel, Newcastle University, 2010

67 introduction to the trust economics methodology

68 trust economics methodology for security decisions 68 stakeholders discuss a model of the information system trade off: legal issues, human tendencies, business concerns,... © Aad van Moorsel, Newcastle University, 2010

69 trust economics research from the trust economics methodology, the following research follows: 1. identify human, business and technical concerns 2. develop and apply mathematical modelling techniques 3. glue concerns, models and presentation together using a trust economics information security ontology 4. use the models to improve the stakeholders’ discourse and decisions © Aad van Moorsel, Newcastle University, 2010

70 1. identify human concerns Find out about how users behave, what the business issues are:
CISO1: Transport is a big deal.
Interviewer1: We’re trying to recognise this in our user classes.
CISO1: We have engineers on the road, have lots of access, and are more gifted in IT.
Interviewer1: Do you think it would be useful to configure different user classes?
CISO1: I think it’s covered.
Interviewer1: And different values, different possible consequences if a loss occurs. I’m assuming you would want to be able to configure.
CISO1: Yes. E.g. a customer list might or might not be very valuable.
Interviewer1: And be able to configure links with different user classes and the assets.
CISO1: Yes, if you could, absolutely.
Interviewer1: We’re going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity.
CISO1: That’s right.
© Aad van Moorsel, Newcastle University, 2010

71 1. identify human concerns Find out about how users behave, what the business issues are: Discussion of "Productivity Losses":
CISO2: But it’s proportional to the amount they earn. This is productivity. E.g. $1m salary but bring $20m into the company. There are expense people and productivity people.
Interviewer1: We have execs, “road warriors”, office drones. Drones are just a cost.
Interviewer2: And the 3 groups have different threat scenarios.
CISO2: Risk of over-complicating it; hard to work out who is an income-earner and what proportion is income earning.
Interviewer2: But this is a good point.
CISO2: Make it parameterisable, at the choice of the CISO.
…
CISO2: So, need to be able to drill down into productivity, cost, esp. in a small company.
© Aad van Moorsel, Newcastle University, 2010

72 2. develop modeling techniques 72 © Aad van Moorsel, Newcastle University, 2010

73 3. develop ontology as glue of tools and methodology 73 © Aad van Moorsel, Newcastle University, 2010

74 4. facilitate stakeholder discourse 74 © Aad van Moorsel, Newcastle University, 2010

75 Newcastle’s involvement
1. identify human, business and technical concerns
–working on a case study in Access Management (Maciej, James, with Geoff and Hilary from Bath)
2. develop and apply mathematical modelling techniques
–generalising concepts to model human behaviour, and validating them with data collection (Rob, Simon, with Doug, Robin and Bill from UIUC)
–a modelling case study in DRM (Wen)
3. glue concerns, models and presentation together using a trust economics information security ontology
–developed an information security ontology, taking into account human behavioural aspects (Simon)
–made an ontology editing tool for CISOs (John)
–working on a collaborative web-based tool (John, Simon, Stefan from SBA, Austria)
4. use the models to improve the stakeholders’ discourse and decisions
–using a participatory design methodology, working with CISOs on a user study (Simon, Philip and Angela from UCL)
© Aad van Moorsel, Newcastle University, 2010

76 example of the trust economics methodology USB sticks

77 USB stick model Tests the hypothesis that there is a trade-off between the components of investments in information security that address confidentiality and availability; Captures trade-off between availability and confidentiality using a model inspired by a macroeconomic model of the Central Bank Problem Conducts an empirical study together with a (rigorously structured) simulation embodying the dynamics of the conceptual model; Empirical data is obtained from semi-structured interviews with staff at two organizations; Demonstrates the use of the model to explore the utility of trade-offs between availability and confidentiality. Modelling the Human and Technological Costs and Benefits of USB Memory Stick Security, Beautement et al, WEIS 2008 77 © Aad van Moorsel, Newcastle University, 2010

78 central bank’s inflation-unemployment model [figure: security investment trades off threats to confidentiality against threats to availability, analogous to the central bank trading off inflation against unemployment] © Aad van Moorsel, Newcastle University, 2010

79 optimize utility you can set the value of I, the investment:
–more monitoring of employees
–more training
given that investment level, find out how humans would behave
–a user will use encryption if it optimizes their personal utility function (human scoring function)
plug this encryption level into the system behavioural model, and determine the utility © Aad van Moorsel, Newcastle University, 2010

80 the USB stochastic model a base discrete-event stochastic model 80 © Aad van Moorsel, Newcastle University, 2010

81 USB stochastic model (in Möbius) 81 © Aad van Moorsel, Newcastle University, 2010

82 USB model translate human behavioural aspects to a model parameter 82 © Aad van Moorsel, Newcastle University, 2010

83 USB model humans take actions depending on the personal utility they get out of them: in this model, users will encrypt with a probability that optimizes the user’s score e.g., more reprimands will lower the human score, so in the model humans will behave to avoid reprimands → use encryption © Aad van Moorsel, Newcastle University, 2010
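A toy version of such a score function (hypothetical costs and probabilities, not the model from the WEIS 2008 paper) shows why, when the score is linear in the encryption probability, the optimum always sits at an end point, which matches the "always 0 or 1" observation on the next slide:

```python
def user_score(p_encrypt, reprimand_cost=5.0, loss_prob=0.1, time_cost=1.0):
    """Hypothetical user score: encrypting costs time, not encrypting
    risks a reprimand when a stick is lost. Linear in p_encrypt."""
    expected_reprimand = reprimand_cost * loss_prob * (1 - p_encrypt)
    expected_hassle = time_cost * p_encrypt
    return -(expected_reprimand + expected_hassle)

def best_encryption_level(**kw):
    # a linear function on [0, 1] is maximised at an end point,
    # so the optimal probability is always 0 or 1
    return max((user_score(p, **kw), p) for p in (0.0, 1.0))[1]
```

Raising the reprimand cost flips the optimum from never encrypting to always encrypting, which is the behavioural lever the slide describes.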

84 some results a company can invest in more help desk staff, or in more monitoring of employees → which of the two investments makes little difference if investment increases, one would expect a gradual increase in users encrypting → instead, a sudden sharp increase at some investment level one would expect users to vary their proportion of encryption → the optimal proportion seems to be always 0 or 1 © Aad van Moorsel, Newcastle University, 2010

85 human behaviour based on the human behaviour score function, we find out what the optimal encryption level for users is
–more encryption, less embarrassment
–more encryption, more annoying time wasting
take all the human scoring functions together and determine the optimal encryption level for each investment level plug that into the model, and solve for the utility function © Aad van Moorsel, Newcastle University, 2010

86 confidentiality/availability utility [figure: investment on the horizontal axis, encryption probability on the vertical axis; linear confidentiality/availability utility function as a few slides back] © Aad van Moorsel, Newcastle University, 2010

87 other stochastic models: effort models Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

88 privilege graphs (Dacier 1994) 88 © Aad van Moorsel, Newcastle University, 2010 nodes are privileges, a path from an attacker to a privilege implies a vulnerability, here arcs are labelled with classes of attacks

89 for comparison: attack tree model 89 © Aad van Moorsel, Newcastle University, 2010

90 Markov security model with ‘effort’ (Kaaniche ’96) observation: it takes effort to carry out an attack add exponentially distributed effort (time) to the arcs of the privilege graph © Aad van Moorsel, Newcastle University, 2010
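Under the exponential-effort assumption, mean efforts combine in simple ways; this sketch (with hypothetical rates, not values from the Kaaniche '96 study) shows the two basic cases:

```python
def mean_effort_sequential(rates):
    """Mean total effort for a strictly sequential attack path where
    step i takes exponentially distributed effort with rate rates[i]:
    the mean efforts of the steps simply add up."""
    return sum(1.0 / r for r in rates)

def mean_effort_parallel(rates):
    """If several independent attacks race in parallel, the first success
    occurs at rate sum(rates), so the mean effort to the first breach
    shrinks as more attack paths open up."""
    return 1.0 / sum(rates)
```

For example, a three-step path with rates 0.5, 1.0 and 2.0 takes a mean effort of 3.5 units, while two parallel attacks of rate 1.0 each breach after a mean effort of only 0.5 units; this is why the resulting metric behaves like an availability metric (a mean time to security failure).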

91 results for an example 91 © Aad van Moorsel, Newcastle University, 2010

92 results: the metric is an availability metric 92 © Aad van Moorsel, Newcastle University, 2010

93 defining the problem space: ontology of information security Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

94 ontologies a collection of interrelated terms and concepts that describe and model a domain used for knowledge sharing and reuse provide machine-understandable meaning to data expressed in a formal ontology language (e.g. OWL, DAML+OIL) 94 © Aad van Moorsel, Newcastle University, 2010

95 ontology features common understanding of a domain –formally describes concepts and their relationships –supports consistent treatment of information –reduces misunderstandings explicit semantics –machine understandable descriptions of terms and their relationships –allows expressive statements to be made about domain –reduces interpretation ambiguity –enables interoperability 95 © Aad van Moorsel, Newcastle University, 2010

96 ontology features (cont.) expressiveness –ontologies built using expressive languages –languages able to represent formal semantics –enable human and software interpretation and reasoning sharing information –information can be shared, used and reused –supported by explicit semantics –applications can interoperate through a shared understanding of information 96 © Aad van Moorsel, Newcastle University, 2010

97 core information security ontology elements information assets being accessed –information that is of value to the organisation, which individuals interact with and which must be secured to retain its value the vulnerabilities –within IT infrastructure, but also within the processes that a ‘user’ may partake in the intentional or unintentional threats –not just to IT infrastructure, but to process security and productivity the potential process controls that may be used and their identifiable effects –these may be technical, but also actions within a business process this formalised content is then encoded in an ontology –e.g., represented in the Web Ontology Language (OWL) 97 © Aad van Moorsel, Newcastle University, 2010

98 © Aad van Moorsel, Newcastle University, 2010 security ontology: relationships Fenz, ASIACCS’09, Formalizing Information Security Knowledge

99 security ontology: concepts © Aad van Moorsel, Newcastle University, 2010 Fenz, ASIACCS’09, Formalizing Information Security Knowledge

100 security ontology: example of fire threat Fenz, ASIACCS’09, Formalizing Information Security Knowledge

101 an information security ontology incorporating human-behavioural implications Simon Parkin, Aad van Moorsel Newcastle University Centre for Cybercrime and Computer Security UK Robert Coles, Bank of America, Merrill Lynch UK

102 trust economics ontology we want to have a set of tools that implement the trust economics methodology needs to work for different case studies need a way to represent, maintain and interrelate relevant information glue between –problem space: technical, human, business –models –interfaces

103 using an ontology We chose to use an ontology to address these requirements, because: –An ontology helps to formally define concepts and taxonomies –An ontology serves as a means to share knowledge Potentially across different disciplines –An ontology can relate fragments of knowledge Identify interdependencies

104 business, behaviour and security Example: Password Management –There is a need to balance security and ease-of-use –A complex password may be hard to crack, but might also be hard to remember Is there a way to: –Identify our choices in these situations? –Consider the potential outcomes of our choices in a reasoned manner?
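The password trade-off above can be made concrete with a rough calculation: brute-force resistance grows exponentially with length, while memorability shrinks. The forgetting model below is a deliberately crude assumption, not an empirical result:

```python
import math

def crack_time_years(alphabet_size, length, guesses_per_sec=1e9):
    """Expected brute-force time to search half the keyspace,
    assuming a hypothetical attacker rate of guesses_per_sec."""
    keyspace = alphabet_size ** length
    return keyspace / 2 / guesses_per_sec / (3600 * 24 * 365)

def forget_probability(length, per_char_risk=0.05):
    """Toy assumption: each character adds 5% risk of forgetting,
    capped at 1.0. Purely illustrative."""
    return min(1.0, per_char_risk * length)

# Mixed-case alphanumeric alphabet (62 symbols), three policy choices:
for length in (6, 8, 12):
    print(length, crack_time_years(62, length), forget_probability(length))
```

The point of such a sketch is the shape of the trade-off, not the numbers: a 12-character password is effectively uncrackable by brute force under these assumptions, but in this toy model it is also the most likely to trigger a forgotten-password support call.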

105 requirements Standards should be represented –Information security mechanisms are guided by policies, which are increasingly informed by standards The usability and security behaviours of staff must be considered –Information assets being accessed; –The vulnerabilities that users create; –The intentional or unintentional threats user actions pose, and; –The potential process controls that may be used and their identifiable effects CISOs must be able to relate ontology content to the security infrastructure they manage –Representation of human factors and external standards should be clear, unambiguous, and illustrate interdependencies

106 information security ontology We created an ontology to represent the human-behavioural implications of information security management decisions –Makes the potential human-behavioural implications visible and comparable Ontology content is aligned with information security management guidelines –We chose the ISO27002: “Code of Practice” standard –Provides a familiar context for information security managers (e.g. CISOs, CIOs, etc.) –Formalised content is encoded in the Web Ontology Language (OWL) Human factors researchers and CISOs can contribute expertise within an ontology framework that connects their respective domains of knowledge –Input from industrial partners and human factors researchers helps to make the ontology relevant and useful to prospective users

107 ontology – overview

108 ontology – password policy example

109 example – password memorisation

110 example – recall methods

111 example – password reset function

112 conclusions CISOs need an awareness of the human-behavioural implications of their security management decisions human factors researchers need a way to contribute their expertise and align it with concepts that are familiar to CISOs –standards –IT infrastructure –business processes we provided an ontology as a solution –serves as a formalised base of knowledge –one piece of the Trust Economics tools

113 an ontology for structured systems economics Adam Beautement UCL, HP Labs David Pym HP Labs, University of Bath

114 ontology to link with the models thus far, the trust economics ontology represents technology and human-behavioural issues –how do we glue this to the mathematical models?

115 ontology

116 example process algebra model

117 conclusion on trust economics ontology trust economics ontology is work in progress –added human-behavioural aspects to IT security concepts –provided an abstraction that allows IT to be represented, tailored to the process algebraic model to do: –complete as well as simplify... –proof is in the pudding: someone needs to use it in a case study

118 user evaluation for trust economics software Simon Parkin Aad van Moorsel Philip Inglesant Angela Sasse UCL

119 participatory design of a trust economics tool assume we have all pieces together: ontology, models, CISO interfaces what should the tool look like? we conduct a participatory design study with CISOs from: ISS, UCL, National Grid method: get wish list from CISOs, show a mock-up tool and collect feedback, improve, add model in background, try it out with CISOs, etc.

120 information security management find out about how users behave, what the business issues are: CISO1: Transport is a big deal. Interviewer1: We’re trying to recognise this in our user classes. CISO1: We have engineers on the road, have lots of access, and are more gifted in IT. Interviewer1: Do you think it would be useful to configure different user classes? CISO1: I think it’s covered. Interviewer1: And different values, different possible consequences if a loss occurs. I’m assuming you would want to be able to configure. CISO1: Yes. E.g. customer list might or might not be very valuable. Interviewer1: And be able to configure links with different user classes and the assets. CISO1: Yes, if you could, absolutely. Interviewer1: We’re going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity CISO1: That’s right.

121 information security management find out about how users behave, what the business issues are: Discussion of "Productivity Losses": CISO2: But it’s proportional to the amount they earn. This is productivity. E.g. $1m salary but bring $20m into the company. There are expense people and productivity people. Interviewer1: We have execs, “road warrior”, office drone. Drones are just a cost. Interviewer2: And the 3 groups have different threat scenarios. CISO2: Risk of over-complicating it, hard to work out who is income-earner and what proportion is income earning. Interviewer2: But this is a good point. CISO2: Make it parameterisable, at choice of CISO. … CISO2: So, need to be able to drill down into productivity, cost, esp. in small company.

122 modelling concepts and model validation Rob Cain (funded by HP) Simon Parkin Aad van Moorsel Doug Eskin (funded by HP) Robin Berthier Bill Sanders University of Illinois at Urbana-Champaign

123 project objectives performance models traditionally have not included human-behavioural aspects we want to have generic modelling constructs to represent human behaviour, tendencies and choices: –compliance budget –risk propensity –impact of training –role-dependent behaviour we want to validate our models with collected data –offline data, such as from interviews –online data, measured ‘live’ we want to optimise the data collection strategy –in some cases, it makes sense to extend our trust economics methodology with a strategy for data collection
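The "compliance budget" construct listed above can be sketched as a simple simulation: a user spends effort on security tasks until the cumulative unrewarded effort exhausts a fixed budget. This is a minimal sketch under invented parameters, not the project's actual model:

```python
import random

def simulate_user(tasks, compliance_budget, perceived_benefit=0.0, seed=1):
    """Toy compliance-budget model: the user complies with each security
    task while the budget lasts. Only effort in excess of the task's
    perceived benefit draws the budget down."""
    rng = random.Random(seed)
    complied = 0
    for _ in range(tasks):
        effort = rng.uniform(0, 1)                    # effort this task demands
        drain = max(0.0, effort - perceived_benefit)  # unrewarded effort
        if drain <= compliance_budget:
            compliance_budget -= drain
            complied += 1
        # else: user bypasses the control (non-compliance)
    return complied

print(simulate_user(tasks=50, compliance_budget=5.0))
print(simulate_user(tasks=50, compliance_budget=5.0, perceived_benefit=0.5))
```

With the same task sequence, raising the perceived benefit (e.g. through training) makes the budget last longer, so compliance rises: the generic construct turns a qualitative human-factors insight into a parameter a performance model can vary.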

124 presentation of Möbius

125 sample Möbius results

126 sample Möbius results (cont.)

127 criticality of using data the goal of using data is to provide credibility to the model: –by defining and tuning input parameters according to individual organization –by assessing the validity of prediction results issues: –numerous data sources –collection and processing phases are expensive and time consuming –no strategy to drive data monitoring –mismatch between model and data that can be collected

128 data collection approach 1. Design specialized model according to requirements 2. Classify potential data sources according to their cost and quality 3. Optimize collection of data according to parameter importance 4. Run data validation and execute model [diagram: stakeholders and data sources feed the model through input parameter definition and output validation]
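Step 3 (optimize collection according to parameter importance) can be sketched as a simple ranking of candidate data sources by the value they deliver per unit cost. The source names echo the budget-parameter slide; all scores are invented placeholders, not the project's calibration:

```python
# Hypothetical sketch: rank data sources by (importance x quality) / cost.
sources = [
    # (name, parameter importance, data quality, collection cost)
    ("IT security survey",           2, 0.6, 1.0),
    ("interview with IT directors",  3, 0.9, 3.0),
    ("public gov. budget data",      1, 0.7, 0.5),
    ("user survey",                  3, 0.5, 2.0),
]

def value_per_cost(src):
    _, importance, quality, cost = src
    return importance * quality / cost

ranked = sorted(sources, key=value_per_cost, reverse=True)
for name, *_ in ranked:
    print(name)
```

Cheap public data can outrank expensive interviews for low-importance parameters, which is exactly the kind of trade-off the classification in the next slide (cost, quality, importance) is meant to expose.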

129 data sources classification Cost: –Cost to obtain –Time to obtain –Transparency –Legislative process Quality: –Accuracy –Applicability Importance: –Influence of parameter value on output

130 organization budget parameters (influence scale: low / medium / high)
in – Budget – Total security investment: IT budget, default is 100 (influence: medium; sources: IT security survey (http://www.gartner.com, http://www.gocsi.com), interview with IT directors, public gov. budget data)
in – Budget – Training investment: training budget, always one-off 100 (variables: USB stick = 100, software = 0, install and maintenance = 0; influence: low; sources: interview with IT directors, public gov. budget data)
in – Budget – Support proportion of budget: experimental value, proportion of Active Security Investment used for support (influence: high; sources: interview with IT directors, public gov. budget data)
in – Budget – Monitoring proportion of budget: experimental value, 1 – (Support proportion of budget) (influence: high; sources: interview with IT directors, public gov. budget data)

131 overall human parameters
in – User behavior – Compliance budget: effort willing to spend conforming with a security policy that doesn’t benefit you
in – User behavior – Perceived benefit of task: effort willing to put in without using the compliance budget; generalised: understanding, investment, incentives (source: user survey)

132 password: probability of break-in
in – Culture of organization – Prob. of leaving default password (variables: organization policy, user training; influence: medium)
in – User behavior – Password strength (variables: organization policy, user training; influence: medium)
in – Attacker determination – Password strength threshold: compromised by brute force attack (variables: password strength, attacker determination; influence: medium)
in – User behavior – Password update frequency (variables: organization policy, user training; influence: medium)
in – User behavior – Prob. of being locked out when password is forgotten (variables: organization policy, user training; influence: medium)
in – User interface – Prob. of finding lost password (variables: efficiency of password recovery tech.; influence: medium)
in – User interface – Prob. of needing support (#support queries / #users) (variables: prob. of forgetting password; influence: medium)
in – User behavior – Management reprimands (influence: medium)
in – User behavior – Negative support experiences (influence: medium)
out – User behavior – Prob. password can be compromised (influence: high)
out – Security – Availability: #successful data transfers (influence: high)
out – Security – Confidentiality: #exposures + #reveals (influence: high)
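To show how inputs like these could combine into the output "prob. password can be compromised", here is an illustrative toy formula. The functional form and every number are assumptions for exposition; this is not the actual Möbius model:

```python
def prob_compromise(p_default, strength, threshold, updates_per_year):
    """Illustrative combination of table inputs into the break-in output.
    - p_default: prob. the default password was never changed
    - strength / threshold: brute force succeeds if strength < threshold
    - updates_per_year: frequent updates shrink the exposure window
    All modelling choices here are invented for the sketch."""
    p_brute = 1.0 if strength < threshold else 0.1
    window = 1.0 / (1.0 + updates_per_year)
    return min(1.0, p_default + (1 - p_default) * p_brute * window)

# Weak vs. strong passwords under the same attacker and update policy:
print(prob_compromise(p_default=0.05, strength=40, threshold=50, updates_per_year=4))
print(prob_compromise(p_default=0.05, strength=60, threshold=50, updates_per_year=4))
```

Even this crude sketch reproduces the table's structure: the output is most sensitive to the high-influence parameters (whether strength clears the attacker's threshold), while medium-influence parameters such as update frequency modulate the result.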

133 data collection research four sub-problems: determine which data is needed to validate the model: –provide input parameter values –validate output parameters technical implementation of the data collection optimize data collection such that cost is within a certain bound: –need to find the important parameters and trade off with the cost of collecting them add data collection to the trust economics methodology: –a data collection strategy will be associated with the use of a model
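The cost-bounded sub-problem above is essentially a budgeted selection problem; a greedy heuristic by importance-to-cost ratio gives the flavour. Parameter names and all scores below are invented placeholders:

```python
# Hypothetical sketch: choose which parameters to collect data for,
# keeping total collection cost within a bound. Greedy by
# importance-to-cost ratio (a knapsack heuristic, not an exact solver).
params = [
    # (parameter, importance, collection cost)
    ("compliance budget",        9, 5.0),
    ("password strength",        7, 1.0),
    ("support queries per user", 5, 2.0),
    ("management reprimands",    2, 4.0),
]

def select(candidates, cost_bound):
    chosen, spent = [], 0.0
    for name, imp, cost in sorted(candidates,
                                  key=lambda p: p[1] / p[2],
                                  reverse=True):
        if spent + cost <= cost_bound:
            chosen.append(name)
            spent += cost
    return chosen

print(select(params, cost_bound=8.0))
```

With a bound of 8, the cheap high-importance parameters are collected and the expensive low-importance one is dropped, which is the trade-off the sub-problem statement calls for.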

134 conclusion trust economics: –ontology for human behavioural aspects, incl. editor and community version –tool design with CISOs –case studies: password, USB, DRM –data collection strategies for validation to be expanded: –generic ontology for trust economics, underlying the tools –actual tool building –evaluation of the methodology

135 trust economics info http://www.trust-economics.org/ Publications: An Information Security Ontology Incorporating Human-Behavioural Implications. Simon Parkin, Aad van Moorsel, Robert Coles. International Conference on Security of Information and Networks, 2009. Risk Modelling of Access Control Policies with Human-Behavioural Factors. Simon Parkin, Aad van Moorsel. International Workshop on Performability Modeling of Computer and Communication Systems, 2009. A Knowledge Base for Justified Information Security Decision-Making. Daria Stepanova, Simon Parkin, Aad van Moorsel. International Conference on Software and Data Technologies, 2009. Architecting Dependable Access Control Systems for Multi-Domain Computing Environments. Maciej Machulak, Simon Parkin, Aad van Moorsel. Architecting Dependable Systems VI, R. de Lemos, J. Fabre, C. Gacek, F. Gadducci and M. ter Beek (Eds.), Springer, LNCS 5835, pp. 49–75, 2009. Trust Economics Feasibility Study. Robert Coles, Jonathan Griffin, Hilary Johnson, Brian Monahan, Simon Parkin, David Pym, Angela Sasse and Aad van Moorsel. Workshop on Resilience Assessment and Dependability Benchmarking, 2008. The Impact of Unavailability on the Effectiveness of Enterprise Information Security Technologies. Simon Parkin, Rouaa Yassin-Kassab, Aad van Moorsel. International Service Availability Symposium, 2008. Technical reports: Architecture and Protocol for User-Controlled Access Management in Web 2.0 Applications. Maciej Machulak, Aad van Moorsel. CS-TR 1191, 2010. Ontology Editing Tool for Information Security and Human Factors Experts. John Mace, Simon Parkin, Aad van Moorsel. CS-TR 1172, 2009. Use Cases for User-Centric Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1165, 2009. A Novel Approach to Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1157, 2009. Proceedings of the First Trust Economics Workshop. Philip Inglesant, Maciej Machulak, Simon Parkin, Aad van Moorsel, Julian Williams (Eds.). CS-TR 1153, 2009. A Trust-economic Perspective on Information Security Technologies. Simon Parkin, Aad van Moorsel. CS-TR 1056, 2007.

136 conclusion state of security metrics: no good practical system metrics have been devised (on par with downtime, throughput, etc.) metrics in abundance (read Jaquith): –macro measures of what’s happening in the world (# worms, etc.) –process metrics, often for compliance –‘derivative’ system properties (# spam messages, viruses detected, etc.)

137 conclusion state of security models: models: –are all based on attack tree ideas: what can happen, an attacker, defence mechanisms, in what order, etc. –often metrics are ‘traditional’: MTTF, success probabilities, etc. –plethora of imaginative modelling approaches, e.g., economics based, psychology ideas, etc. to do: –validation (beyond loose interviews) –convergence to and widespread use of the best approaches

138 conclusion must the lesson be: it’s not about the metric, stupid! it’s about justified decision-making! and models are the way to add rigour?

