1 Total Cost of Cyber (In)security – Integrating operational security metrics into business decision-making

Russell Cameron Thomas, Principal, Meritology
Mini-Metricon, February 5, 2007, San Francisco, CA

In this talk I present an approach to mesh operational security metrics with business decision-making: budget decisions, investment decisions, priority decisions, strategy decisions, and tactical decisions in day-to-day implementation or execution. Drawing an analogy to the Total Quality Management movement, I'm calling the approach "Total Cost of Security (or Insecurity)".

2 Purpose of this Talk

- To introduce a new approach
  - Influence thought leaders, academic research, and professional practice
  - Stimulate your thinking and inspire hope
  - Build productive bridges between business and IT: show how key concepts of each can be made compatible
  - Take a stand on what will work and what won't
- To get your feedback
  - Is this on the right track? Is it worth pursuing? Does it fit with other approaches to security metrics?
- To recruit collaborators and advocates
- Non-purposes
  - Debate the devilish details
  - Debate politics
  - Debate acceptability in "Mainstream" and "Late Adopter" organizations (that will take years, of course!)

Speaker notes: I divide costs into three categories: "Budgeted", "Self-insured", and "Catastrophic". I'll show how operational security metrics can be used in each of these cost estimates, and touch on how the estimates can guide investment and policy decisions. This approach makes the most of existing information, aligns with decision-making processes, and avoids conflating reliable and unreliable estimates. It combines methods from Enterprise Risk Management, Activity-Based Costing, and qualitative reasoning, and is roughly analogous to the Total Cost of Quality concept that helped motivate the Total Quality Management movement (see Quality is Free, by Philip Crosby). In addition to helping with security cost and performance management, the approach highlights the importance of organizational learning and discovery.

3 The Challenge

Problem: a disconnect between business decision-makers and security specialists regarding the value and risk of InfoSec*
- "Security directors appear to be politically isolated within their companies."
- "They face a challenging search for allies when they need to gain support from upper management for new security initiatives."
- "Companies reported less alignment of security with long-range strategic objectives of the firm."
- "The results suggest that security remains a function that is mired in operations in the eyes of senior executives."

Result: under-spending, over-spending, misallocation, burden-dumping, denial, and worse:
- Fighting the last war
- Failures of imagination
- Unintended consequences

Speaker notes: One of the main challenges facing IT managers is how to map security metrics and performance to business metrics and performance. This is necessary to align business goals and investments with security requirements, and to balance risks against costs and rewards.

* Conference Board Survey, Oct. 2006: "Navigating Risk: The Business Case for Security"

4 The Simplistic Approach is a "Blind Alley": ROSI*, ALE**, and variants

The standard formula for expected loss of economic value:

  \( V = \sum_{i=1}^{n} p(L \mid e_i) \, L_i \)

where i indexes incident types, \( p(L \mid e_i) \) is the probability of loss given incident and exposure, \( L_i \) is the expected loss value, and V is the loss of economic value. Then:

  \( \mathrm{ROSI} = \Delta V / I \)

where I is the security "investment".

Why a "blind alley"? Laplace's Dream: "If only we had more data…" (see appendix)

Speaker notes: This formula is a generalization of many treatments of ROSI; not all authors use the same formula or even the same terminology. (Warning: too often, authors misuse terms like "Return on Investment (ROI)", which is a ratio relationship as shown here. Some authors use ROI when they really mean "net benefit", indicating that costs are subtracted from benefits.) On the plus side, ROSI is logically consistent, and it's consistent with insurance industry practice ("annualized loss expectancy"). Some might object that it doesn't include the time value of money, but that can be corrected by using cash flow models with appropriate discount rates for each time period. I call ROSI a "blind alley" because I don't think it is feasible to collect enough reliable historical data to do the necessary calculations, especially for low-probability/high-impact events that haven't happened yet. Furthermore, you MUST have a way to deal with the dependence structure between incident types. If all incident types are statistically independent (not just uncorrelated), then the "tail risk" of extreme losses will be relatively smaller than in cases where some or all incident types are statistically dependent (even a little); statistical dependence can significantly increase tail risk. (For a thorough mathematical discussion, see Iceberg Risk by Kent Osband.) Almost everyone who proposes ROSI assumes statistical independence across incident types with little or no justification. The other problem with this approach is that it tries to shoe-horn security investments into the same model as ordinary capital investments (value creating). I think this is fundamentally wrong and misleading for decision-makers. More on this later.

* "Return on Security Investment"  ** "Annualized Loss Expectancy"
Example reference: "Calculated Risk - Guide to determining security ROI", CSO Magazine, December 2002
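To make the formula concrete, here is a minimal Python sketch of the ALE/ROSI arithmetic. The incident types, probabilities, losses, and the $60,000 control are all invented for illustration, and the sketch deliberately inherits the independence assumption criticized above.

```python
# A minimal sketch of the ALE / ROSI arithmetic criticized on this slide.
# The incident types, probabilities, and loss values are hypothetical.
incidents = {
    "malware_outbreak":   {"p_loss": 0.30,  "loss": 50_000},
    "insider_data_theft": {"p_loss": 0.05,  "loss": 400_000},
    "major_breach":       {"p_loss": 0.001, "loss": 10_000_000},
}

def expected_loss(incidents):
    """V = sum over incident types of p(L|e_i) * L_i (assumes independence!)."""
    return sum(v["p_loss"] * v["loss"] for v in incidents.values())

V_before = expected_loss(incidents)
# Suppose a hypothetical $60k control halves the probability of each incident.
mitigated = {k: {"p_loss": v["p_loss"] / 2, "loss": v["loss"]}
             for k, v in incidents.items()}
V_after = expected_loss(mitigated)

investment = 60_000
rosi = (V_before - V_after) / investment   # ROSI = delta-V / I
print(f"ALE before: ${V_before:,.0f}, after: ${V_after:,.0f}, ROSI: {rosi:.2f}")
```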

5 Two Viewpoints on Economic Risk

#1 "Rational Investor" (Capital Asset Pricing, Discounted Cash Flow)
- What matters: Δ mean, Δ variance; the fat part of the curve
- When: quarterly EPS, earnings volatility, shorter time periods, normal distributions
- [Chart: probability density p(v) of change in value; value follows a random walk over time]

#2 "Insurance Actuary" (Ruin Theory, "Iceberg Risk")
- What matters: extreme events; the tail of the curve
- When: credit rating, solvency, reserve funds, longer time periods, fat-tailed and skewed distributions
- [Chart: skewed density p(v) with a 99th-percentile "Ruin" threshold; value follows a random walk with "avalanches"]

Both are essential, but don't mix the two in the same calculation/metric. Economic theorists haven't yet reconciled these two views; accounting practice and MBA training both focus on #1.

6 The Core Idea: Three Cost Categories

[Chart: an idealized annual probability distribution of the Total Cost of InfoSec, on a log cost scale (1x, 10x, 100x, 1,000x), with the mean and 1σ to 7σ marked, and three regions labeled "Budgeted", "Self-insurance", and "Catastrophic". Borrowed from the "Value at Risk" concept in financial services risk management.]

Speaker notes: Here's the core idea of the Total Cost approach. The chart shows an idealized Total Cost probability distribution, with a log scale of cost on the horizontal axis. In practice, we may have only a hazy idea what this curve looks like; most firms never try to estimate it. For our purposes today, all we care about is the general shape. In general, the total cost of security has its largest peak somewhere close to what a firm spends every quarter or year: the "Budgeted" category. At the other extreme is the cost of bankruptcy (or worse), no doubt at a much lower probability than the budgeted costs: the "Catastrophic" category. In between are all the cost levels that are severe but don't bankrupt the firm: the "Self-insurance" category. The next few slides explain these categories and what to do with them.
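As a rough illustration of the three-way split, here is a sketch that samples a hypothetical heavy-tailed annual-cost distribution and carves it into the three categories with VaR-style percentile thresholds. The lognormal shape and both thresholds are assumptions, not part of the original framework.

```python
import numpy as np

# A sketch of the three-category split, assuming (hypothetically) that annual
# total cost of InfoSec follows a lognormal distribution. The thresholds are
# policy choices, not outputs of the model.
rng = np.random.default_rng(7)
annual_cost = rng.lognormal(mean=np.log(1e6), sigma=1.2, size=100_000)

budget_threshold = np.quantile(annual_cost, 0.80)   # top of "Budgeted"
ruin_threshold   = np.quantile(annual_cost, 0.999)  # bottom of "Catastrophic"

budgeted     = annual_cost[annual_cost <= budget_threshold]
self_insured = annual_cost[(annual_cost > budget_threshold)
                           & (annual_cost <= ruin_threshold)]
catastrophic = annual_cost[annual_cost > ruin_threshold]

print(f"Budgeted (mean annual cost):        ${budgeted.mean():,.0f}")
print(f"Self-insurance band:                ${budget_threshold:,.0f} "
      f"to ${ruin_threshold:,.0f}")
print(f"Catastrophic share of simulations:  "
      f"{len(catastrophic) / len(annual_cost):.4%}")
```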

7 Budgeted Costs

Q: What is the expected (average) impact of security-related costs on EPS and earnings volatility (+/- budget)?

The rule: costs must already be in the budget* somewhere.
- Defined to fit the budget and spending approval processes
- Results in stable ratio-scale values
- Theoretically and practically sound: applies Activity-Based Costing methods; compatible with accounting practice (GAAP); fits discounted cash flow assumptions for multi-year analysis
- Good information available (in principle)
- Simple arithmetic → tractable and simple to understand; composable across organization units and systems

"If you are claiming cost reductions, show me whose budget I should cut. If you are claiming revenue increases, show me whose sales quota I should raise." (Exec VP)

* Includes both operating and capital budgets, but excludes cyber insurance or reserves

8 Calculating Budgeted Costs (1)

- Aggregate direct costs (see the sketch below)
  - Security staff, training, awareness, tools, services, technology, management, threat monitoring, assessments, etc.
  - Direct cost of predictable and expected loss events and remediation, with portfolio effects
- Use cost driver models for indirect costs
  - Patch testing, installation, upgrades, etc.
  - Vendor support costs, 3rd-party support
  - Help desk
  - New-employee screening and hiring process
  - Indirect costs of predictable and expected loss events, with portfolio effects
- Negotiate cost allocation rules for bundled and overhead costs
  - Infrastructure software and hardware costs
  - Application software
  - Internal IT development
  - Legal dept.
- Identify costs from unintended consequences and "business prevention"
  - It's a judgment call how best to account for these, but doing so will win credibility!
- If possible, use incremental cost analysis, not just total costs
  - Compare to a base case (e.g. a "barely legal" budget)
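A minimal sketch of the aggregation these steps describe, in Python. All category names, driver volumes, unit costs, and allocation percentages are hypothetical.

```python
# A sketch of activity-based aggregation for budgeted security costs.
# Every category name and dollar amount is invented for illustration.
direct_costs = {
    "security_staff":     900_000,
    "training_awareness": 120_000,
    "tools_and_services": 450_000,
}
# Indirect costs modeled via cost drivers: (annual driver volume, unit cost).
indirect_drivers = {
    "patch_install":   (2_400, 35.0),    # patches/yr x cost per patch
    "help_desk_calls": (9_000, 18.0),    # security-related calls x cost/call
    "new_hire_screen": (300,   150.0),   # new employees x screening cost
}
# Negotiated allocation rules for bundled/overhead pools: (pool, share).
overhead_allocations = {
    "infrastructure": (2_000_000, 0.10),
    "legal_dept":     (800_000,   0.05),
}

total = sum(direct_costs.values())
total += sum(volume * unit for volume, unit in indirect_drivers.values())
total += sum(pool * share for pool, share in overhead_allocations.values())
print(f"Budgeted security cost: ${total:,.0f}")
```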

9 Calculating Budgeted Costs (2)

Modeling indirect costs using cost drivers, e.g. desktop/laptop incidents and remediation.

[Illustrative diagram: cost drivers (platform policy, # devices/yr., awareness, compliance %) feeding two budget categories: Cost #1 Provisioning and Cost #2 Help Desk.]

Method: identify cost drivers using security metrics combined with business operational metrics (e.g. number of new employees, turnover, etc.). Aggregate and simplify where possible. Only account for budgeted (forward-looking) costs; use historical costs as a guide, if available.

Benefits (see the sketch below):
- Simplicity: many fewer budget categories than incident types, scenarios, etc.
- Effectiveness: puts attention on the right levers
- Focus: most often, a few cost drivers dominate (80/20 rule)
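To illustrate "putting attention on the right levers", here is a sketch of a single cost driver (patch compliance %) propagating into two indirect cost categories. The linear cost coefficients are invented for illustration.

```python
# A sketch of how one cost driver (compliance %) propagates into two
# indirect cost categories. All coefficients are hypothetical.
def indirect_costs(n_devices: int, compliance: float) -> dict:
    non_compliant = n_devices * (1.0 - compliance)
    return {
        # Non-compliant devices generate extra incident/remediation work.
        "provisioning": 120.0 * n_devices + 200.0 * non_compliant,
        "help_desk":     40.0 * n_devices + 350.0 * non_compliant,
    }

baseline = indirect_costs(n_devices=5_000, compliance=0.85)
improved = indirect_costs(n_devices=5_000, compliance=0.95)
for category in baseline:
    delta = baseline[category] - improved[category]
    print(f"{category}: save ${delta:,.0f}/yr at 95% vs 85% compliance")
```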

10 Calculating Budgeted Costs (3)

Modeling indirect costs using cost drivers, e.g. indirect costs of predictable and expected loss events, with portfolio effects.

[Diagram: abstracted and aggregated attacks, breaches, and incidents act on an asset (e.g. a customer DB) through risk drivers (exposure given defenses; damage, violations, etc.; detection, remediation, etc.), which map to cost drivers and then to cost categories: staff (extra headcount), customer service (damage control), etc.]

Benefits:
- Simpler calculations
- More robust to varying assumptions

11 Decision Framework for Budgeted Costs: Differential Analysis

[Diagram: direct and indirect cost components for three budget alternatives ("Barely legal" budget, current budget, "Premium" budget), compared along four analyses: #1 total budgeted costs vs. benchmarks; #2 budget optimization; #3 lifetime costs over time (higher / same / lower than current); #4 self-insurance cost implications.]

12 Self-Insurance Cost

Q: How much money would you put aside each year into a reserve fund* to avoid a serious decline in credit rating due to low-probability/high-impact losses?

The rule: an actuarially sound self-insurance premium, given:
- Budget-busting loss events: severe outage, delay in a key new product, loss of a major sales contract, etc.; material to quarterly EPS (> 1%)
- Extreme loss events (short of bankruptcy) that threaten credit rating, etc.: long-lasting business interruption, executive fraud, earnings restatement, regulatory action, punitive damages, etc.
- Interdependencies, correlations ("avalanche effects"), and portfolio effects

Parameters: maximum risk threshold and time horizon set by top management.

A "mark to model" approach, calibrated by history and the "wisdom of the crowds". A betting man's judgment: "The race doesn't always go to the swiftest, but that's how you bet."

* Analogous to the concept of Economic Capital in financial services

13 Calculating Self-Insurance Cost (1)

Annual premium ≈ Pool ÷ Time Period

[Chart: cost distribution curve (if the time period is long enough), magnitude of costs on the horizontal axis, dominated by the largest losses.]

Estimation parameters: budget threshold; 99th-percentile threshold; time period*; shape of the curve; self-insurance pool ("Value at Risk"); fund solvency*; interest rates.

Modeling: derive distribution curves from the parameters, then use Monte Carlo simulation of the self-insurance pool, with funding parameters, interest rates, etc., to calculate the annual premium.

Speaker notes: While it would be very desirable to calculate self-insurance cost from historical (actuarial) loss information, in practice that won't be possible in most cases. This slide shows a parametric approach that might yield robust, usable estimates.

* Policy decisions by top management
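A sketch of the parametric Monte Carlo approach, assuming a lognormal loss curve. The thresholds, tail shape, solvency target, and time period stand in for the policy parameters listed above; none of the numbers come from the talk.

```python
import numpy as np

# A sketch of the parametric Monte Carlo approach on this slide: assume a
# heavy-tailed severity curve from a few policy parameters, simulate a
# self-insurance pool over the chosen time period, and back out an annual
# premium. Every number below is a hypothetical policy/estimation input.
rng = np.random.default_rng(42)

years = 10                  # time period set by top management
budget_threshold = 250_000  # losses below this are absorbed by the budget
sigma = 1.5                 # shape of the (lognormal) tail

n_sims = 20_000
annual_loss = rng.lognormal(np.log(budget_threshold), sigma, (n_sims, years))
# Only the part above the budget threshold draws on the pool.
pool_draw = np.clip(annual_loss - budget_threshold, 0, None).sum(axis=1)

# Size the pool so it stays solvent in 99% of simulated futures.
pool = np.quantile(pool_draw, 0.99)
annual_premium = pool / years   # annual premium ~ pool / time period
print(f"Pool (99% solvency over {years} yrs): ${pool:,.0f}")
print(f"Annual self-insurance premium:        ${annual_premium:,.0f}")
```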

14 Calculating Self-Insurance Cost (2)

Parameter values change with new information.

[Diagram: a parameter's consensus estimate evolving over time as new information arrives.]

How: a competitive marketplace for models, whose outputs are combined into consensus estimates:
- Prediction markets
- Bayesian networks
- External databases, benchmarks
- Statistical analysis of historical loss data
- Qualitative reasoning (e.g. Inference to the Best Explanation, reasoning about uncertainty, etc.)
- Simulations
- Delphi technique
- Assessments, scorecards

Speaker notes: This is where the "magic" happens. Currently, there is no good theory or method to calculate self-insurance costs from a given set of operational security metrics or operating history. This is where loss events in the "self-insurance" category differ from the "budgeted" category: "budgeted" costs can be reliably forecast from historical data. First, the information security landscape for each firm changes rapidly, which makes it almost impossible to build long time-series data that are statistically stationary. Second, loss scenarios (breach incident plus loss dynamics) are often beyond the reach of purely historical analysis. Just because a loss scenario hasn't happened yet doesn't mean it couldn't happen; many extreme loss events result from an "unthinkable" combination of effects previously considered unrelated or uncorrelated. This is analogous to financial market crashes and other "avalanche" phenomena. Third, no one perspective offers complete information. Cause-effect relationships are best understood through relatively detailed models of business processes, technology architectures, and policies, for example, while loss magnitudes often require competitive analysis of a firm and its supply chain, its market position, and even its position in the public mind (when reputation losses are considered). Finally, over a strategic time horizon, information security can be viewed as a repeating evolutionary game between attackers (incl. nature) and defenders; purely historical treatments will not capture the co-evolutionary nature of these dynamics and their complex implications for information security risk. Putting all this together, we recommend an open-ended "marketplace of estimation models" that might help us triangulate on usable self-insurance estimates. As shown, the parameter values would need to change easily and rapidly as new information comes in. This approach also has the advantage of emphasizing continuous organizational learning, which often goes unrecognized in information security risk management.

15 Ways to Make Self-Insurance Cost "Real"

- Link it to real cyber insurance policies
- Set up a real self-insurance fund via a Finite Risk program* or tradable subordinated debt
- Use it as the "glue" for multi-firm "risk sharing" pools, focused on information sharing and mutual assistance, with incentive instruments
- Link it to performance management and incentive compensation: subdivide Self-Insurance Cost into a "risk budget" for each org. unit, or use it as a "risk adjustment" factor for other performance metrics
- Create incentive instruments tied to self-insurance costs or cost drivers for security outsource vendors, supply chain partners, channel partners, customers, and alliance partners
- Public disclosure: SEC filings, other regulatory filings, stakeholder reports, credit rating agencies
- "Cap and Trade" markets

Speaker notes: One big practical problem with the "self-insurance" concept is that executives may not view it as real: "If it's not in the accounting system, not in the financial statements, not in the share price, and not in shareholder reports, then why should I care about it?" One approach is simply to make it real by measuring it, on the notion that "it's the right thing to do". That approach has been tried for intellectual capital with limited success, and for social responsibility with better results. This slide lists ways of making self-insurance real for hard-nosed, pragmatic business decision-makers. All of the items except the last effectively do the same thing: turn information security risk into a "risk adjustment" that can be factored into ROI or return-on-capital calculations.

* See appendix

16 Catastrophic Costs

Q: How much confidence should we have that the firm can survive InfoSec catastrophes?

The rule: prioritized loss scenarios above a significance threshold that cover the space of possibilities.
- Use for business continuity preparation → agility and robustness
- Avoid failures of imagination and "fighting the last war"; root out unintended consequences
- Categorize and prioritize; don't waste time on precision estimates
- Strategic scenario analysis, "war gaming", etc.
- Focus on discovery, "out of the box" thinking, and reframing
- Challenge conventional wisdom! "It's not what we don't know that will kill us. It's what we know that ain't so."

Speaker notes: The final cost category is "Catastrophic Costs". Due to time constraints, I'll only touch on this briefly. The main point, from the viewpoint of Total Cost, is that there is no need to estimate catastrophic costs with great precision: the data is not adequate, the uncertainties are too great, and model errors and rounding errors can be serious problems as well. Instead, all you need is a rank-ordered list of catastrophic scenarios above some qualitative threshold of likelihood or possibility, which you then use for disaster and business continuity planning exercises. The other main point is that estimates of catastrophic costs should not be rolled into a monolithic cost estimate, as is done in ROSI and similar methods; their magnitude and estimation errors can completely pollute the overall estimate.

17 Risk Management Decisions

[Diagram: the three cost categories, with a "Prudence" bet linking Budgeted Costs to Self-insurance Costs and a "Gambling" bet linking Self-insurance Costs to Catastrophic Costs.]

Speaker notes: You may be wondering whether breaking the Total Cost curve into three pieces will result in three different decision models, with possibly different implications and recommendations. Not so! This slide provides a visual metaphor for the types of analysis and decisions that become possible within the Total Cost framework. First, it's important to understand the relationship between costs in each category. Will more spending in the current budget reduce expected self-insurance cost? This is a bet on "prudence". Likewise, will some technical or management decisions lower self-insurance costs but increase catastrophic costs? This is a "gambling" bet that basically says, "It won't happen to us". Beneath the surface it's a complex analysis, but the benefit is that the shareholder-value implications are relatively simple to explain to executives.

18 A Simple Example: Earthquake Preparation

Spend an extra $1,440 per year over 30 years for earthquake loss reduction?

Assumptions (annual probabilities):

                      Min Prep.   Max Prep.   Benefit of Max Prep.
  Quake                  2%          2%
  No Quake              98%         98%
  Moderate | Quake      88%         94%       46% lower cost of moderate damage
  Severe | Quake        10%          5%       50% reduction in probability of severe damage
  Death | Quake          2%          1%       50% reduction in probability of death (catastrophe)

Outcomes:

  #1 Minimum Preparation    Probability   Cost          ALE
  Preparation costs         98%           $60           $59
  Moderate damage           1.76%         $57,060       $1,004
  Severe damage             0.20%         $500,060      $1,000
  Death + severe            0.04%         $2,500,060    $1,000
  Total ALE                                             $3,063
  Mean*                                                 $2,887

  #2 Maximum Preparation    Probability   Cost          ALE
  Preparation costs         98%           $1,500        $1,470
  Moderate damage           1.88%         $31,500       $592
  Severe damage             0.10%         $501,500      $502
  Death + severe            0.02%         $2,501,500    $500
  Total ALE                                             $3,064
  Mean*                                                 $3,087

ALE is essentially the same for both options, and the simple average says "no" to the extra spending.

Speaker notes: To test the Total Cost framework, I created this toy scenario based on a physical-world risk: earthquakes. The decision is whether it's economical to spend a given amount ($1,440 per year) to reduce the cost of a large earthquake, should one occur. The first table gives the assumed probabilities of incident and loss in the two scenarios ("minimum preparation" vs. "maximum preparation"). In both cases, the probability of an earthquake is the same; the only difference is whether the damage is moderate, severe, or fatal. On the surface, the benefits of the investment are compelling: almost 50% lower cost of moderate damage, and a 50% reduction in the probability of both severe damage and death. But the Annualized Loss Expectancy (ALE, a variant on ROSI) is the same for both, meaning this analysis is indifferent between the alternatives, and the simple average actually indicates that the investment would not be justified. So what does a Total Cost model say?

* From Monte Carlo simulation
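For readers who want to replay the toy example, here is a Monte Carlo sketch using the assumptions above (2% annual quake probability, 30-year horizon, 1,000 runs). The base damage costs are taken from the table and should be treated as illustrative.

```python
import numpy as np

# A sketch reproducing the toy earthquake example: 30-year horizon, 2%/yr
# quake probability, 1,000 simulated histories per option.
rng = np.random.default_rng(1)

def simulate(prep_cost, p_mod, p_sev, p_death, dmg_mod, n_runs=1000, years=30):
    """Return mean annual total cost for each simulated 30-year history."""
    totals = np.zeros(n_runs)
    for r in range(n_runs):
        for _ in range(years):
            cost = prep_cost
            if rng.random() < 0.02:                 # a quake this year
                u = rng.random()
                if u < p_death:
                    cost += 2_500_000               # death + severe
                elif u < p_death + p_sev:
                    cost += 500_000                 # severe damage
                else:
                    cost += dmg_mod                 # moderate damage
            totals[r] += cost
    return totals / years

min_prep = simulate(60,    p_mod=0.88, p_sev=0.10, p_death=0.02, dmg_mod=57_000)
max_prep = simulate(1_500, p_mod=0.94, p_sev=0.05, p_death=0.01, dmg_mod=30_000)
print(f"Mean annual cost, min prep: ${min_prep.mean():,.0f}")
print(f"Mean annual cost, max prep: ${max_prep.mean():,.0f}")
# The expected values are nearly identical (about $3,063 vs $3,064); the real
# difference is in the tails, which is what self-insurance cost captures.
```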

19 Self-insurance Costs (1)

[Chart: simulated cost distribution for the "no investment" option.]

Speaker notes: I ran a Monte Carlo simulation of the two scenarios with 1,000 runs, each a 30-year time series. This chart shows the results for the "no investment" option, i.e. minimum preparation.

20 Self-insurance Costs (2)

[Chart: simulated cost distribution for the "investment" option.]

Total Cost Comparison, Max. Prep. vs. Min. Prep.:

  Budgeted          $(1,440)
  Self-insurance    $2,760
  Annual savings    $1,320

→ Justifies the extra spending on maximum preparation.

Speaker notes: Here is the chart for the "investment" option, i.e. maximum preparation. The table shows the comparison under the Total Cost framework. Budgeted costs, as you'd expect, are higher for the maximum-preparation option, but annual self-insurance costs would be significantly lower. Netting these out, the analysis shows that the investment would be justified. Not shown here: catastrophic costs are essentially the same for both scenarios.

21 Needed: Self-insurance Decision Framework

A. Treat it like other insurance:

  Budgeted          $(1,440)
  Self-insurance    $2,760
  Annual savings    $1,320

B. Treat it as self-borrowing (interest at 10%):

  Budgeted                    $(1,440)
  Self-insurance × 10%        $276
  Annual savings              $(1,164)

Which is more credible? Which leads to better decisions?

Speaker notes: However, we shouldn't jump to conclusion A. A hidden assumption drives that recommendation, namely that self-insurance costs should be treated like other insurance premiums. Is that credible? Does it lead to good decisions? What if we treat self-insurance as a form of self-borrowing, so that the self-insurance cost is really a borrowing cost, not an insurance premium? That dramatically changes the analysis. The answer critically depends on the dynamics of the enterprise in the face of severe crisis: will I have borrowing power when the "Big One" hits? This is an open research question, and also a practical question for business policy. I leave it open because I want to be honest about what we know and don't know about this sort of analysis.
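The two treatments reduce to a few lines of arithmetic. This sketch simply restates the tables, with the 10% rate as a hypothetical cost of capital for self-borrowing.

```python
# A sketch of the two decision treatments, using the slide's numbers.
extra_budget = 1_440        # added annual spending for maximum preparation
self_ins_saving = 2_760     # reduction in the actuarial self-insurance premium
interest_rate = 0.10        # hypothetical cost of capital for self-borrowing

as_insurance = self_ins_saving - extra_budget                   # premium view
as_borrowing = self_ins_saving * interest_rate - extra_budget   # loan view
print(f"A. insurance view: net ${as_insurance:,.0f}/yr")   # +1,320 -> invest
print(f"B. borrowing view: net ${as_borrowing:,.0f}/yr")   # -1,164 -> don't
```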

22 Summary of the Method

- Apply enterprise risk management methods
- Break InfoSec costs into three categories: "Budgeted", "Self-insurance", "Catastrophic"
- Establish methods, targets, and decision processes for each category, appropriate to the information and uncertainty involved and the nature of the decisions that apply
- Link the categories
- Use operational metrics plus inference to model costs in each category, as appropriate
- Focus energy on continuous organizational learning

23 Next Steps

- Need more theoretical development and empirical testing, especially the self-insurance concept, models, and decision rules; factor in impact on revenue, market share, profitability (pricing power), and reputation
- Need to standardize "Budgeted Costs" and map them to InfoSec assessments and frameworks
- Need proofs-of-concept using real companies and real data
- Make it work politically
  - Enterprise risk managers = your new best friends
  - TQM and 6 Sigma specialists = your allies
  - CFOs = Status excelsior sponsors
  - Neutralize or convert opposition (legal department, auditors, etc.)
  - Lead industries = financial services? supply chain? other?
  - Political change role model = Indian Gaming??
- Make it acceptable to mainstream managers

Q: Is it sufficiently promising to continue pursuing?

Speaker notes: As I hope is obvious, this framework is still under development. My question to the audience is at the bottom: should we continue developing this framework? Thank you very much for your time and attention. I'll open the floor to questions.

24 Appendix

Russell Cameron Thomas, Principal, Meritology
Mini-Metricon, February 5, 2007, San Francisco, CA

25 Why Measuring the Value of InfoSec is Hard (1)

- Information security (InfoSec) should be seen* as a component of enterprise risk management. "Risk" is a forward-looking estimate of uncertain loss over a time period (the same timeframe as the return on the assets). It must cope with all forms of uncertainty and ignorance that apply to actors, assets, threats, vulnerabilities, and learning/adaptation over that timeframe.
- InfoSec is a repeating evolutionary game between threatening actors (incl. nature) and protecting actors (incl. nature), each with an evolving capability set, which may be emergent, nascent, and/or tacit. The terrain for the security game is threats, vulnerabilities, assets, etc.
- Thus, "security" is not a state of the system or the assets. It's how the protecting actors define success in the game over time.
- The economics of repeating evolutionary games aren't well understood yet. They don't fit existing static-equilibrium investment models; they require emergent, dynamic models, e.g. agent-based simulation.

Speaker notes: Because the benefits of security are the avoidance of highly uncertain losses, applying traditional cash-flow ROI techniques would be inappropriate and misleading. Furthermore, the domain is rife with "unruly uncertainty", including unknown-unknowns, which makes it difficult or impossible to reliably estimate annualized loss expectancy (ALE) or other probabilistic estimates of expected losses for given incident types.

Characterizing some system or asset as "secure" or "at risk" is really stating a belief in its relative exploitability, given what that person knows. In practice, terms like "secure" and "at risk" are mostly statements about the emotional state of the speaker. Emotions count, since they drive behavior, decisions, and investments.

"Vulnerabilities" are any functionality or condition of a system that can potentially be exploited by a threatening actor with sufficient capability and knowledge. This includes not only the most obvious "holes" but also legitimate functions that can be subverted or repurposed; "holes" are merely easy-to-exploit vulnerabilities. The ultimate vulnerability of an asset is information about its existence and location, since the ultimate threat is destruction by brute force (car bomb, WMD, or comet impact). Since attacker goals, strategies, and capabilities are radically emergent, it doesn't make much sense to try to correlate "known vulnerabilities" (and their severity) directly with some measure of security. It's necessary to add unknown and partially known vulnerabilities, plus the learning, discovery, and adaptation capabilities of both attackers and defenders.

Can "security" be measured by some probability function? In other words, can it be formalized as a random variable on a ratio scale? I believe the answer is no: "probability of loss" will never be a well-formed ratio-scale function, IMHO. This connects to Dan Geer's comment that security will, at best, be measured on an ordinal scale. These points lead me to believe that security will ultimately be measured as multi-dimensional value along several types of scales. (See my earlier proposal of "budgeted losses", "unbudgeted losses", and "catastrophic losses" as one framework.) Ordinal metrics may be plenty good enough to guide investments and decisions in the evolutionary game, assuming that we have sophisticated inference and decision rules rather than the simplistic rules normally applied to ratio-scale metrics (e.g. "if NPV is positive, then invest", "optimize ROI", etc.).

* From the viewpoint of business value

26 Why Measuring the Value of InfoSec is Hard (2)

- InfoSec* is inextricably part of the cyber-trust "fur ball", including privacy; digital rights; intellectual property, brands, reputation, trade secrets; stakeholder disclosure; and physical security
- Historical loss data, even if copious and available, has limited use: the landscape changes too fast; low-frequency/high-impact events matter; unique events matter
- The business value of InfoSec isn't just loss prevention
  - Value comes from the ability to support profitable risk taking (e.g. brakes, condoms)
  - Risk balancing is a reflexive process involving perceptions of risk and reward
  - Varies dramatically by industry and sector (e.g. a bank vs. a rock quarry)
- If that's not enough, there's more…

Speaker notes: When you are considering big loss events, it's hard or impossible to separate the cost of information security breaches from other cyber-trust elements, and even from physical security and business continuity.

* From the viewpoint of business value

27 Blind Alleys and Dirt Roads

"Blind Alleys" look good in concept, but won't work by themselves:
- Return on Investment (ROI), Net Present Value (NPV), payback, etc.
- Annualized Loss Expectancy (ALE)
- Cyber insurance
- Product liability and tort laws ("actual damages")

"Dirt Roads" work, but just barely:
- 2x2 or 3x3 matrix categorization of incident types or risks by frequency vs. severity
- Assessments using scoring and ranking systems
- Balanced scorecards
- Strategic scenario analysis and walkthroughs

Are there any "Autobahn" approaches out there? The null/"realist" hypothesis is "no", assuming insurmountable problems. "Total Cost of (In)security" might be such an approach.

Speaker notes: By calling ROI, ROSI, and ALE "blind alleys", I don't mean that they never apply or are fundamentally wrong. I'm saying that they don't address the main challenges of InfoSec risk management as described on the previous slides, and they don't apply the right decision frame. It's useful to look at the various "dirt roads" that have been created, such as categorization, scoring, and ranking methods. These methods are not as analytically powerful as ROI, but they do seem to work, at least to guide operational decision-making. However, there's a constant challenge to update and tune them to fit different contexts and changing landscapes. The multi-million dollar (euro?) question is: "Are there any Autobahn approaches out there, waiting to be discovered?" I hope so, and I think it's worth looking.

28 Why ALE is Dumb

A simple case of three loss event categories*. Firm equity = $50 million; annual earnings = $5 million; ROE = 10%.

- Category A, "common flood": 50% chance of $10,000 loss = $5,000 ALE
- Category B, "100-year flood": 1.0% chance of $500,000 loss [10% of earnings, 1% of equity] = $5,000 ALE; 26% chance of happening at least once in 30 years
- Category C, "10,000-year flood": 0.01% chance of $50 million loss [100% of equity] = $5,000 ALE

Reason 1: ALE math hides risk drivers
- A+B+C = A+A+A = B+B+B = C+C+C = $15,000 ALE [1.5% of earnings]
- Conflates simple random walks with random walks with avalanches: "three independent common risks = three independent catastrophic risks"

Reason 2: Unreliable estimates of low-probability events dominate
- Lack of data plus psychology means estimation errors for the tail are much higher:
  - 50% → 52.5% chance for A → $5,250 ALE
  - 1.0% → 2.0% chance for B → $10,000 ALE (45% chance in 30 years!)
  - 0.01% → 0.05% chance for C → $25,000 ALE
  - Σ = $40,250 ALE (2.7 times larger!)

Speaker notes: Not to beat a dead horse, but this toy case helps me explain why simplistic Annualized Loss Expectancy (ALE) is dumb, i.e. a poor match to InfoSec risk management. The case uses the language of flood risk to keep things simple and intuitive, but you can use any labels you want. Assume analysis has identified three incident types of increasing severity: a common flood, a 100-year flood, and a 10,000-year flood. All three result in the same ALE. (Yes, we made this happen, but something like it is not uncommon because of the interplay between likelihood and expected severity.) The first reason ALE is "dumb" is that it hides risk drivers and other key aspects of the underlying risk processes. Simple math shows that any combination of these three risks is equivalent from the ALE perspective, but this defies common sense. Even if the incident types are statistically independent, does it make sense that three common risks would be equivalent to three catastrophic risks? If for no other reason, this conflates two different random processes: common "floods" are governed by Gaussian normal distributions, whereas catastrophic "floods" are governed by "avalanche" processes. With ALE, these distinctions are invisible and therefore do not inform decision-making. The second and more important reason is that unreliable estimates of low-probability events can easily dominate ALE calculations. Plausible estimation errors are shown in the sub-bullets; a plausible error in a very small probability can dramatically change the total ALE (in this case, making it 2.7 times greater). The last reason, not shown, is that it's hard or impossible to map ALE to accounting statements or actual budget decisions. The Total Cost approach proposed in this presentation preserves the best aspects of ALE while minimizing its deficiencies.

* Pareto distribution, k=1, min = 5,000
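The slide's arithmetic in executable form: three categories with identical ALE, then the same calculation with plausible errors in the tail probabilities.

```python
# A sketch of the slide's arithmetic: three loss categories with identical
# ALE, then the effect of plausible estimation errors on the tail categories.
categories = {  # name: (annual probability, loss)
    "A common flood":      (0.50,   10_000),
    "B 100-year flood":    (0.01,  500_000),
    "C 10,000-year flood": (0.0001, 50_000_000),
}

def total_ale(cats):
    return sum(p * loss for p, loss in cats.values())

base = total_ale(categories)
print(f"Base ALE, A+B+C: ${base:,.0f}")  # $15,000 -- same as A+A+A, B+B+B, ...

# Plausible probability errors, relatively largest in the tail:
misjudged = {
    "A common flood":      (0.525,  10_000),
    "B 100-year flood":    (0.02,  500_000),
    "C 10,000-year flood": (0.0005, 50_000_000),
}
erred = total_ale(misjudged)
print(f"ALE with small probability errors: ${erred:,.0f} "
      f"({erred / base:.1f}x larger)")   # $40,250, 2.7x
```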

29 Finite Risk Programs

The insurance industry offers multi-year self-insurance plans commonly called finite risk insurance. The name arises from the fact that the risk transfer is very limited; therefore, the insured pays for most (or all) of the losses.

[Diagram: a finite-risk fund over time: the fund is established in Year 1, operational losses ($$$) are paid out of it, interest is paid on the balance, and the balance is carried forward from year to year.]

From: "Applying Insurance Modeling Techniques to Quantify OR", Dr Marcelo Cruz, RiskMaths, presented at GARP OR Seminar, 18-19 October 2001, London
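A sketch of the fund mechanics in the diagram: premiums in, operational losses out, interest on the carried-forward balance. The premium, interest rate, and loss process are all invented for illustration.

```python
import numpy as np

# A sketch of a finite-risk fund's year-by-year mechanics: premiums paid in,
# operational losses drawn out, interest paid on the carried-forward balance.
# All parameters are hypothetical.
rng = np.random.default_rng(3)

balance = 0.0
annual_premium = 400_000
interest_rate = 0.04
for year in range(1, 6):
    balance += annual_premium
    balance *= 1 + interest_rate                  # interest on the balance
    loss = rng.lognormal(np.log(150_000), 1.0)    # operational losses
    balance -= loss                               # insured pays most losses
    print(f"Year {year}: loss ${loss:,.0f}, carry-forward ${balance:,.0f}")
```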

30 Ruin Theory applied to Finite Risk

[Diagram: losses follow a stochastic process; the fund path is determined by the initial Finite Risk capital and the percentage of gross income allocated to Finite Risk; "ruin" occurs when the fund path hits zero, defining the Finite Risk hedging needs.]

From: "Applying Insurance Modeling Techniques to Quantify OR", Dr Marcelo Cruz, RiskMaths, presented at GARP OR Seminar, 18-19 October 2001, London
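A sketch of the ruin-theory question the diagram poses: given initial finite-risk capital and a fixed annual allocation from gross income, how often does the simulated fund hit zero? The lognormal loss process and all parameters are assumptions.

```python
import numpy as np

# A sketch of the ruin-theory question: given initial finite-risk capital and
# a fixed annual allocation, how often does the fund path cross zero? The
# loss process and every parameter are hypothetical.
rng = np.random.default_rng(5)

def ruin_probability(initial_capital, annual_allocation, years=10, sims=10_000):
    capital = np.full(sims, float(initial_capital))
    ruined = np.zeros(sims, dtype=bool)
    for _ in range(years):
        losses = rng.lognormal(np.log(300_000), 1.2, sims)
        capital += annual_allocation - losses
        ruined |= capital < 0          # once flagged ruined, stays ruined
    return ruined.mean()

for alloc in (300_000, 500_000, 700_000):
    p = ruin_probability(initial_capital=1_000_000, annual_allocation=alloc)
    print(f"Allocation ${alloc:,}/yr -> ruin probability {p:.1%}")
```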

