1
Cost Estimating Basics Adapted from CEBoK Modules 1 and 2
Kevin Cincotta, CCEA, PMP Cassandra M. Capots, PCEA Prepared for: 2016 ICEAA Canada Workshop February 22-23, 2016 Ottawa, ON
2
Acknowledgments
ICEAA is indebted to TASC, Inc., for the development and maintenance of the Cost Estimating Body of Knowledge (CEBoK®)
ICEAA is also indebted to Technomics, Inc., for the independent review and maintenance of CEBoK®
ICEAA is also indebted to the following individuals who have made significant contributions to the development, review, and maintenance of CostPROF and CEBoK®

Module 2 Cost Estimating Techniques
Lead authors: Crystal H. Rudloff, Kenneth D. Odom, Colleen M. Craig
Assistant author: Daniel V. Cota
Senior reviewers: Richard L. Coleman, Richard B. Collins II, Fred K. Blackburn, Kevin Cincotta
Reviewers: Laurette Sullivan, Karyn L. Sanders
Managing editor: Peter J. Braxton

This slide lists the organizational and individual contributors to this module of CEBoK® and its predecessor, the erstwhile Cost Programmed Review Of Fundamentals (CostPROF), over the course of more than a decade.
Unit I - Module 2
3
Cost Estimating
Cost Estimating: The process of collecting and analyzing historical data and applying quantitative models, techniques, tools, and databases to predict the future cost of an item, product, program, or task

Purpose of cost estimating
Translate system/functional requirements associated with programs, projects, proposals, or processes into budget requirements
Determine and communicate a realistic view of the likely cost outcome, which can form the basis of the plan for executing the work

“Suppose one of you wants to build a tower. Will he not first sit down and estimate the cost to see if he has enough money to complete it? For if he lays the foundation and is not able to finish it, everyone who sees it will ridicule him, saying, ‘This fellow began to build and was not able to finish.’” -Jesus Christ, Luke 14:28-30

What is cost estimating, and why do we do it? Cost estimating can be defined as the process of collecting and analyzing historical data and applying quantitative models, techniques, tools, and databases to predict the future cost of an item, product, program, or task. Basically, cost estimating is the process of predicting the future based on today’s knowledge to help facilitate the successful completion of a program, project, or process. Often the cost estimator will be put in the position of adjusting for new materials, new technology, a new software programming language, a different team of individuals, etc., but the estimate must never become divorced from historical experience or it can be too easily assailed as mere soothsaying.

The purpose of cost estimating is to translate system/functional requirements associated with programs, projects, proposals, or processes into budget requirements. It determines and communicates a realistic view of the likely cost outcome of a system or program. This outcome can form the basis of the plan for executing the work to develop, field, and support the system or program. For commercial entities (contractors), this plan often begins in the proposal stages, and the cost estimate serves as an important basis for a new business proposal, including the management bid/no-bid decision. For government entities, an independent cost estimate will often serve as the basis for a program’s budget. As we’ll see on the next slide, this is the most commonly cited motivation for cost estimating, and it is the essence of the passage from the Gospels shown here. At your next cocktail party, you can impress your friends that your profession has a Biblical mandate!
Unit I - Module 1
4
Motivations for Cost Estimating
Budgeting: (Independent) Cost Estimates to establish program budgets should give a reasonable chance of “success” – Measured by Track Record
Planning: Bases of Estimate (BOEs) to establish project baselines should allow an accurate measurement of progress – Measured by EVM Performance Indices
Trade-Offs: Parametric Models and Cost Response Curves (CRCs) to give quick and accurate estimates should guide system requirements and design toward affordability – Measured by periodic Life Cycle Cost Estimates (LCCEs)

There are three primary reasons for estimating the cost of a program. Any other motivations for cost estimating can generally be reduced to one of these three cases.

Budgeting: Program budgets should be built using an independent cost estimate (ICE), not only to give that program a chance for success, but also because valuable agency resources need to be used as effectively as possible. If a program runs over budget, resources will need to be pulled from other programs to cover costs. This “robbing Peter to pay Paul” causes a serious ripple effect, with detrimental effects on the other programs as well. Some programs that run far over budget will be canceled, leaving a gap in capabilities. The success of estimating to support the budgeting process should be measured by a track record that shows the “shelf life” of ICEs, to wit, how long each ICE-based budget proves adequate on a year-by-year and cumulative basis. In other words, the track record should answer the questions “In what fiscal year did the program first run a deficit relative to the ICE for that year?” and “In what fiscal year did the program incur a debt (net cumulative shortfall) relative to the ICE?”

Planning: Once budgets are established, planning must occur to lay the foundation for successful project execution. Estimating supports the development of proposal basis of estimate (BOE) documentation, which in turn serves to help establish the integrated baseline. Earned Value Management (EVM) is an approach that tracks cost and performance against this baseline, as measured by performance indices. (More on Contracts and EVM in Modules 14 and 15, respectively.)

Trade-offs: Trade-off studies use a baseline cost estimate and explore the relative cost of changing requirements. The end goal is to achieve the proper balance of maximizing utility and minimizing cost. Often the cost impact of trades is “quick and dirty,” and overall progress toward cost goals must be measured by periodic life cycle cost estimates (LCCEs), preferably based on a “closed” engineering design.

Risk plays a big part in all three. Without risk, budgets are likely to be unrealistically low, resulting in immediate shortfalls; baselines are likely to be unrealistically aggressive, resulting in immediate performance deficiencies; and trade-offs are likely to be unrealistically optimistic, resulting in faulty decisions to pursue ultimately unaffordable options.

Tip: Proper treatment of Risk is crucial to all three!
Unit I - Module 1
5
Applications of Cost Estimating
As part of a total systems analysis, cost estimating helps decision makers to:
Make decisions on program viability, structure, and resource requirements
Establish and defend budgets
Conduct Analysis of Alternatives (AoA)
Create new business proposals and perform source selection
Conduct in-process reviews of major projects
Perform design trade-offs
Assess technology changes
Comply with public law
Satisfy oversight requirements

How is cost estimating used, and what benefits does it provide? Cost estimating is a management tool used to help decision makers evaluate resource requirements at key milestones and decision points. It is not an end in itself, but is part of a total systems analysis process that includes programmatic, technical, and schedule analysis.

Cost estimating supports various decision-making processes, including determining whether a program is viable, how it should be structured, and what resources are required to support it. It is used to help establish and defend budgets for these resources. It can be applied as a tool to evaluate the cost implications of alternative systems when conducting an Analysis of Alternatives (AoA). (AoAs are studies comparing technical, cost, and performance characteristics of multiple approaches and are used to select among alternatives before committing to a particular project.) Costing or pricing is a vital part of the new business proposal development and source selection processes, and other aspects of contracting. It can be used to conduct in-process reviews of major projects. Cost estimating enables design trade-offs by quantifying the cost impacts of different levels of performance. Cost analysis can be used to assess impacts of changing technology, new equipment, or new operating or maintenance concepts.

Finally, cost estimating is part of a sound management approach, and we should do it whether we are required to or not. For government projects, it may be a requirement of public law, and in both the public and private sectors there are oversight organizations whose requirements must be met. The bottom line is that cost estimating helps you and your team make better decisions, and with the appropriate rigor, it also helps you defend those decisions to others outside your program.

More information on AoAs, source selection, in-process reviews, and design trade-offs may be found in Module 13 Economic Analysis, Module 14 Contract Pricing, Module 15 Earned Value Management (EVM), and Module 16 Cost Management, respectively. For more on the laws, policies, and procedures governing the application of cost estimating and analysis, follow the link to the Resources section of this module.
Unit I - Module 1
6
Cost Estimating Process
WBS → Baseline → Data Collection → Data Analysis → Methodology → Model → Results
Work Breakdown Structure (WBS) Development
Program/System Baseline Development
Data Collection and Data Analysis
Cost Element Methodology
Model Development and Validation
Results and Report Generation

This section will address the seven basic steps of the cost estimating process. Specific process steps may vary between different organizations, but general guidelines apply. The baseline of the estimate should be understood by all, as well as a framework for how to estimate costs. Data need to be gathered, normalized, and analyzed, and methods derived. The estimate can be performed using the baseline and the methods. Finally, the results and reports need to be generated to provide information to the decision-maker.

We’ll now take a look at the cost estimating process, which we sketch out as these seven steps:
1. Develop a Work Breakdown Structure (WBS)
2. Develop a program/system baseline
3. Collect data
4. Analyze data
5. Develop cost estimating methodology for each cost element
6. Put together and validate the cost model
7. Generate results and other required output, such as reports

This linear progression is a somewhat simplified view of the real process but is close enough to give you a very good idea of how to go about developing your own cost estimate. An alternate 12-step view from the GAO Guide is shown on the bottom half of the slide. The Defense Acquisition University (DAU) has yet another view, though the steps are all pretty much isomorphic.

Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP, March 2, 2009.
Unit I - Module 1
7
Unit Index
Unit I – Cost Estimating
  Cost Estimating Basics
  Cost Estimating Techniques
  Parametric Estimating
Unit II – Cost Analysis Techniques
Unit III – Analytical Methods
Unit IV – Specialized Costing
Unit V – Management Applications

Unit I provides an introduction to the fundamentals of cost estimating and how to go about developing your own cost estimate. In Module 1, you were shown an overview of cost estimating. Module 2 introduces the basic estimating techniques and tools that are available to a cost estimator. Many of the topics discussed herein are covered in greater detail in other modules. For example, the Parametric estimating technique is the subject of the next module.
Unit I - Module 2
8
Cost Estimating Techniques Overview
Key Ideas
Cost Estimating Techniques: Analogy, Parametric, Build-up, Extrapolation from Actuals
Cost Element Structure (CES)

Practical Applications
Estimate Development
Cross-checks

Analytical Constructs
Basic Mathematical Operations: Addition, Multiplication, Powers
Ratios and Linear Relationships
Curve Fitting
Hierarchical Tree Structure

Related Topics
Below-The-Line (BTL) Factors
Schedule Estimating
Operations and Support (O&S) Estimating

The Key Ideas in this module are the cost estimating techniques themselves – analogy, parametric, build-up, and extrapolation from actuals – and the context within which they are applied, namely a cost element structure (CES), which is related to the work breakdown structure (WBS) presented in the first module.

The most important Practical Application of these techniques is – you guessed it! – developing cost estimates (and as we shall see shortly, schedule estimates). Cost estimators are strong believers in using multiple techniques, one to derive the primary estimate, and one or more others to provide a cross-check to lend confidence that we are “in the ballpark.”

As for Analytical Constructs, cost estimating is unavoidably (and unabashedly!) a mathematical discipline, but the basic estimating techniques themselves seldom involve more than the three most basic mathematical operations: addition (or subtraction), multiplication (or division), and exponentiation, or raising numbers to a power, in some combination. In particular, having a deep understanding of and intuition for ratios and linear relationships is crucial for cost estimators. The parametric technique, which is the hallmark of cost estimating, often involves curve fitting techniques such as regression, which find the “best” equation for a curve which passes through a scatter plot of data. Finally, the CES is a hierarchical tree structure wherein the costs of “children” sum (or “roll up”) to the cost of their “parent,” or conversely the cost of the parent can be allocated to the children. A minimal sketch of such a roll-up appears below.

Related Topics include the use of factors (or CERs) for the estimation of “below-the-line” costs such as systems engineering and program management (SE/PM); use of the same techniques that can estimate cost to estimate schedule durations of tasks, projects, or programs; and use of cost estimating techniques to estimate operations and support (O&S) costs.
Unit I - Module 2
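To make the hierarchical roll-up concrete, here is a minimal sketch in Python. The element names and dollar values are hypothetical, and the nested-dictionary representation is just one convenient way to model a tree where children sum to their parent.

```python
# Minimal sketch of a cost element structure (CES) roll-up, where the
# costs of "children" sum to the cost of their "parent". All element
# names and dollar values here are hypothetical.

def roll_up(element):
    """Return the cost of a CES element: a leaf's own cost,
    or the sum of its children's rolled-up costs."""
    if isinstance(element, dict):
        return sum(roll_up(child) for child in element.values())
    return element  # leaf node: a point-estimated cost

ces = {
    "Air Vehicle": {
        "Airframe": 120.0,          # $M, hypothetical
        "Propulsion": 45.0,
        "Avionics": 60.0,
    },
    "SE/PM": 30.0,                  # a below-the-line element
}

print(roll_up(ces))                 # 255.0 -> total ($M)
print(roll_up(ces["Air Vehicle"]))  # 225.0 -> parent rolled up from children
```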
9
A Bridge to the Future
[Figure: cantilever bridge photo with labels “Historical data,” “Time now,” and “Your estimate”]
Cost estimating always involves going out on a proverbial limb. On this slide, we have chosen a striking illustration of this principle, a photograph of the construction of the Pierre Pflimlin bridge over the river Rhine between Germany and France. This bridge uses a common construction technique called a cantilever. This technique allows the constructed element to extend out over open space without apparent support, but the support comes from the fulcrum of the lever (in this case, the bridge pillar shown) and the point of that lever on the other side of the fulcrum. The great open space into which cost estimates must jut is the unknown future. The required support for estimates is data, and the more data you collect, analyze, and use, the better supported your estimate will be. Having the latest data, right up until the present time, is usually helpful and often vital, but the weight of historical data across similar systems and relevant trends, as complete and accurate as you can get it, is essential. Though it may seem paradoxical to some, a good cost estimator knows that the further you are trying to go out onto that limb, into the future, extending your estimate across multiple years, with new designs and new ways of doing business, the further you have to go into the past to get the historical data needed to support that estimate. This line of thinking is further explained and illustrated on the following slide. Unit I - Module 2
10
The Cost Estimating Framework
Past: Understanding your historical data → Present: Developing estimating tools → Future: Estimating the new system

Identical, off-the-shelf item – Catalog price
Identical items / capabilities – Predicted inflation – recent historical trends
Manufactured items – Learning curve – complete production run
Similar new development items – CERs – historical costs from several programs
Dissimilar new development items – Adjusted CERs – historical costs from several programs + paradigm shift

Here we introduce the Cost Estimating Framework, which we will use in many modules to illustrate the basic application of cost estimating principles. This general framework very simply shows the Past, where historical cost and other data have been collected on analogous systems; the Present, where those data are normalized and analyzed to develop estimating tools; and the Future, for which those tools are applied to estimate the new system under consideration. In this case, we use the Cost Estimating Framework to illustrate the roughly symmetrical balance between the extension of estimates into the future and extension of supporting data into the past.

In the simplest case, if I’m going to buy an off-the-shelf item like a laptop tomorrow, then I only need to go back to yesterday to look up the catalog price (or the online price, for you young whippersnappers who don’t remember what a catalog is!) to know what I should expect to pay. If I’m going to buy identical items, commodities, or capabilities, such as airplane engines, engineering support labor, or hard disk storage, over the next few years, then I need at least a few years’ worth of historical data (and probably more) to be able to estimate the proper inflation rate and other trends, such as Moore’s Law (in the case of IT). (See Module 5 Inflation and Index Numbers for more details.) If I am producing manufactured items, then I need the complete history of my production run(s) on each item to be able to establish the theoretical first unit cost (T1) and learning curve slope (LCS) needed to accurately project future costs for additional items or lots. (See Module 7 Learning Curve Analysis for more details; a short sketch of the learning curve calculation follows this slide.) If I’m doing a parametric estimate of a new development item, such as a new class of ships, then I need cost and other data from several representative ship classes, which may extend back twenty years or more, in order to explore cost drivers and develop cost estimating relationships (CERs). (See Module 3 Parametric Estimating for more details.) Finally, if the new development item incorporates entirely new technologies or features that are not captured in the data on analogous systems, additional historical data will be needed to determine how to properly adjust a standard CER. For example, if your new ship design incorporates stealth, which has never been done for past ship classes, then perhaps you can borrow a factor from when aircraft made the jump to stealth.

This is not a comprehensive list of all the possible situations you will face as a cost estimator, but it is pretty representative and illustrates the point at hand. The real world is of course more complicated, but by breaking down the system to be estimated properly into “chunks,” you’ll find that each element falls into one of these categories, or similar ones.

The further in the future you want to estimate, the further back you need to go into the past!
Unit I - Module 2
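As a hedged illustration of the manufactured-items case, the standard unit learning curve says the cost of unit n is T1 * n^b with b = ln(LCS) / ln(2). The T1 and slope values below are hypothetical, not taken from CEBoK.

```python
import math

def unit_cost(n, t1, lcs):
    """Unit learning curve: cost of unit n, given first-unit cost t1
    and learning curve slope lcs (e.g., 0.90 for a 90% curve)."""
    b = math.log(lcs) / math.log(2)   # b is negative for lcs < 1
    return t1 * n ** b

t1, lcs = 10.0, 0.90                  # hypothetical: $10M T1, 90% slope
print(unit_cost(2, t1, lcs))          # 9.0 -> each doubling costs 90% of the last
print(unit_cost(4, t1, lcs))          # 8.1
```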
11
Cost Estimating Techniques Outline
Core Knowledge Introduction Uncertainty and Risk Cost Estimating Techniques Using Cost Estimating Techniques Comparison of Techniques Summary Resources Related and Advanced Topics In this module, we’ll introduce the cost estimating techniques that you should become familiar with to develop your own estimate, explain how to use these cost estimating techniques, and compare the different techniques. As with all modules, we’ll end the Core Knowledge section with a summary and present resources for reference and further study. The Related and Advanced Topics section is also available for those who wish to explore material in greater depth. Unit I - Module 2
12
Introduction
The four essential cost estimating techniques (or methodologies) are:
Analogy
Parametric
Build-Up
Extrapolation from Actuals
Other topics will be discussed in relation to the four essential techniques
Expert Opinion

The four essential cost estimating techniques are analogy, parametric, build-up, and extrapolation from actuals. Analogy refers to comparing the cost of an item to be estimated to that of a similar item. A Parametric estimate is a mathematical relationship based on historical data that relates cost to one or more technical, performance, cost, or programmatic parameters. Build-Up involves estimating costs at the lowest definable level, frequently making use of industrial engineering techniques such as time standards. Extrapolation from Actuals uses data from prototypes or complete or partially complete units to project the cost of future units, or earned value data to develop an estimate at completion (EAC) for any contract, phase, or program.

These are the four basic methodologies available to you when developing a cost estimate. You must determine which of these techniques is most appropriate for the task at hand. You may use multiple techniques in an estimate, but you must decide how and when each technique should be used. Each technique has strengths and weaknesses, and varying degrees of applicability at different times during a program’s life cycle and for different levels of fidelity required for an estimate.

Other topics will be discussed in conjunction with the four cost estimating techniques. This includes expert opinion. Expert Opinion utilizes the subjective opinion of Subject Matter Experts (SMEs) to corroborate or adjust cost estimates.
Unit I - Module 2
13
Warning: Uncertainty and risk are difficult but essential.
Risk Terminology
Precision vs. accuracy
Precision = narrow range
Accuracy = range centered on “right” answer
Uncertainty vs. risk
Uncertainty = range of possible outcomes – characterization of precision
Risk = shift of range to account for lack of accuracy of unadjusted estimates – correction of bias

Tip: We want estimates to be both precise and accurate, but imprecisely accurate is better than precisely inaccurate!

Cost estimates are never right! An estimate always has some measure of uncertainty, and the best we can hope to do is quantify that uncertainty accurately to support decision making. First we need some definitions. Precision refers to the spread of the range of outcomes our estimate will produce. A narrow range entails greater precision, a wider range less. By contrast, accuracy conveys whether the range is centered on the true value. If the center of the range is close to the true value, it is an accurate estimate; if nowhere near, inaccurate. The standard illustration given in many introductory science textbooks is a dartboard. If your throws produce darts clustered tightly about a single point, you are precise, whether or not that point is the bull’s eye. If your throws are clustered about the bull’s eye, you are accurate, whether they are tightly clustered or not. Of course, we would like to be both precise and accurate in our cost estimates, but when push comes to shove, accuracy is much more important. Misrepresentation of precision often gives a false sense of security, which is shattered when you find out you were precisely wrong!

The ideas of precision and accuracy manifest themselves in cost estimating as uncertainty and risk. Uncertainty captures the range of possible outcomes of the estimate, thereby characterizing its precision. This is best done as a probability distribution, usually an empirical one resulting from a Monte Carlo simulation, which can then be summarized with a confidence interval, such as “one million dollars, plus or minus 15 percent.” Risk, which we shall more fully define in Module 9, refers to the upward shift applied to the cost estimate range to account for the fact that unadjusted estimates tend to be systematically low. This adjustment is intended to eliminate what appears to be an inherent bias in estimates and thereby improve their accuracy.

One possible analogy is that of an alarm clock. If you know you always oversleep 15 minutes past when your alarm clock goes off, you set your alarm 15 minutes early so that you’ll actually get out of bed when you need to. Similarly, if your costs come in on average 20% higher than your estimates, you should add 20% to your estimates, as sketched below. While this may at first seem like a frighteningly high percentage, it is not atypical of historical cost growth on DoD programs.

Because they involve fairly sophisticated probability and statistics, the proper treatment of uncertainty and risk is difficult, but making our best attempt at doing so is essential if we are to produce meaningful and useful estimates. We will delve into these topics in much greater detail in later modules.
Unit I - Module 2
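A back-of-the-envelope sketch of the alarm-clock adjustment described above; the 20% figure is the slide's illustrative example, not a recommended factor.

```python
# Sketch of the bias ("alarm clock") correction: if actuals historically
# come in about 20% above unadjusted estimates, shift the estimate up by
# that factor. Both values here are illustrative.

point_estimate = 1.00        # $M, unadjusted estimate
historical_growth = 0.20     # average cost growth observed in the track record
risk_adjusted = point_estimate * (1 + historical_growth)
print(risk_adjusted)         # 1.2 -> $1.2M risk-adjusted estimate
```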
14
Uncertainty and Risk Example
Cost estimating, like weather prediction, is not a “repeatable” experiment!

Here is a real-world (non-cost-estimating) example of precision and accuracy, uncertainty and risk. The graphic shows, across several years, the preseason prediction of the number of hurricanes in the North Atlantic as a range (the dark rectangular bars) and the actual number (the darker circles). It appears that the forecasters are not putting enough uncertainty in their estimates, because they captured the actual number in their range only three out of eight years, which indicates something on the order of a 37.5% confidence interval. They are giving the illusion, but not delivering the reality, of precision. They are doing a little better on the risk front, as their average prediction understates the actual average by less than one hurricane per year. (Note that the stated average hurricanes per season in the source graphic, 6, appears to be incorrect.)

The high number of storms in 2005, the year that Katrina and Rita ravaged the Gulf Coast of the United States, is conspicuous. You might be tempted to call this an “outlier,” but unless there is an analytical explanation that can attribute this number to a trend (increasing temperatures in the Gulf of Mexico due to global warming?), an error in prediction (the error band should’ve been wider), or an error in data collection (highly unlikely in this case – we assume they can count the number of hurricanes!), you should be loath to remove it from the data set. For more information on the proper treatment of outliers, see Module 6 Basic Data Analysis Principles.

This graphic is an example of a track record, which estimating organizations should keep, with a feedback mechanism so that the data gathered can be used to improve cost estimating and risk analysis. One important thing to note is that cost estimating, like weather forecasting, is not a repeatable experiment! This means that we never build the same system or unit over and over and observe the range of costs, but rather we only build it once and get one possible value which includes random or unknown elements, causing it to be above or below average. Even when we build multiple units, as in a production run, each individual unit has its own random or unknown elements that produce variation around what we hope will be a smooth learning curve.

National Oceanographic and Atmospheric Administration (NOAA)
Unit I - Module 2
15
Uncertainty and Risk Illustration
Estimate Based on an Average

This example graphic will be used throughout to compare and contrast the uncertainty associated with the various cost estimating techniques. It is a typical scatterplot, with cost on the (vertical) y-axis and a cost driver variable such as weight on the (horizontal) x-axis. (For much more on scatterplots, see Module 6 Basic Data Analysis Principles.) The solid gray line shows the “true” underlying cost driver relationship, and the two dotted gray lines show plus or minus one standard deviation (sigma) of the associated error term. (Note that we have made the fundamental assumptions for ordinary least squares (OLS) regression so that we can appropriately compare the Parametric technique later on.)

The red square represents an estimate that is based on an average of the cost data (the vertical value of the blue diamonds), and the red dotted lines represent plus or minus one standard deviation of the associated error in the estimate. You should notice two key facts about estimating with an average: (1) it ignores possible cost driver variables, which is why the red lines are all horizontal; and (2) because of that, its uncertainty is much greater than could be achieved by taking a (true) cost driver into account. A minimal numeric sketch follows.

Tip: Estimating cost as an average of historical data is generally a good starting point
Unit I - Module 2
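For concreteness, here is a minimal numeric sketch of the average-based estimate; the historical costs are hypothetical.

```python
import statistics

# Estimating as an average of historical costs (the red line in the
# figure), with its one-sigma band. The data are hypothetical.
historical_costs = [4.1, 5.0, 5.6, 6.2, 7.3]    # $M, analogous programs

estimate = statistics.mean(historical_costs)     # 5.64
sigma = statistics.stdev(historical_costs)       # sample standard deviation
print(f"{estimate:.2f} +/- {sigma:.2f} ($M)")    # wide band: cost driver ignored
```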
16
Cost Estimating Techniques
Analogy Parametric Build-Up Extrapolation from Actuals The next section covers the four primary cost estimating techniques, also known as the Big Four, in detail. For each of the techniques, we’ll define the method, discuss its application and its strengths and weaknesses, and provide an example. Unit I - Module 2
17
Cost Estimating Techniques Basics
Cost Estimating Techniques provide the structure of your cost estimate They’re what enable you to predict future costs based on historical data Techniques rely on statistical properties, logical relationships, and emotional appeal Four essential types Analogy: “It’s like one of these” Parametric: “This pattern holds” Build-Up: “It’s made up of these” Extrapolation from Actuals: “Stay the course” Cost estimating techniques are the building blocks of a cost estimate. They provide structure and are what you use to predict future costs based on historical data. Cost estimating techniques rely on statistical properties, logical relationships, and emotional appeal. You will find that some analysts prefer a particular technique over others and will battle to the end to defend the use of their techniques. You will also hear cost estimating referred to as both a science and an art. In exploring cost estimating techniques, you should strive for the curiosity, creativity, and quest for truth that are found in both the arts and the sciences while maintaining a firmly analytical mindset. When we say “creativity,” we mean being versatile in applying various established techniques, not “getting creative” with the numbers, which is the kind of dishonesty found in “cooking the books” or backing into a desired answer. As we begin to delve into the four essential cost estimating techniques of analogy, parametric, build-up, and extrapolation from actuals, we try to sum up the underlying ideas in a catchphrase for each of the techniques. Analogy is based on the idea “It’s like one of these.” You are making a comparison to a single similar system. Parametric is based on the idea “This pattern holds.” In this case you are making a comparison to several similar systems, developing a statistical relationship called a cost estimating relationship (CER) based on several data points. Build-Up is based on the idea “It’s made up of these.” In this case you start from smaller elements and add them up until you have a cost for the entire element in question. Extrapolation from Actuals has you “Stay the course,” using costs accumulated to date to estimate the cost of the same system. Unit I - Module 2
18
Analogy - Method: Comparative analysis of similar systems
Adjust costs of an analogous system to estimate the new system, using a numeric ratio based on an intuitive physical or countable metric, e.g., weight, SLOC, number of users
Other adjustments may need to be made for any estimating methodology:
Programmatic information (quantity/schedule)
Government vs. Commercial practices
Contract specifics
Economic trends

“It’s like one of these”

If you’re not convinced of it yet, rest assured that cost estimators think in analogies, forever comparing one thing to another. Already in this module, we have used the analogies of a cantilever bridge, a dartboard, and an alarm clock – and we’re only just now getting to the Analogy section! In layman’s terms, an analogy is a comparison drawn for illustrative purposes, and it is generally a little more precise than a metaphor or a simile in that it seeks to exploit parallel logical (and sometimes even quantitative) relationships. A cost estimating analogy is just what it sounds like – an attempt to estimate costs by drawing a comparison between the item in question and a similar (or analogous) item. An analogy can be done at the system, subsystem, or component level. Multiple analogies can be used at the lower WBS levels to build up to a higher level estimate.

Generally, some adjustments must be made to the costs of the old item to estimate the new item. These adjustments include those based on: programmatic information such as quantity or schedule; physical characteristics such as weight or materials; performance characteristics such as power or pointing accuracy; government or commercial practices; or contract type such as fixed price or cost plus. (Economic adjustments for inflation, such as converting from constant dollars to then-year dollars, are normally considered part of data normalization – see Module 5 Index Numbers and Inflation.) When making an adjustment, try to make it as objective as possible. Identify key cost drivers and then determine how the old item is related to the new and how that cost driver affects the costs. Also remember that all estimates must pass the “reasonable person” test. That is, the source(s) of the analogy and any adjustments thereto must be logical, credible, and acceptable to a “reasonable person.”

AKA Comparison Technique, Ratio, Analysis of Analogues
Unit I - Module 2
19
Analogy - Application Used early in the program life cycle
Data are not available to support using more detailed methods
Not enough data exist for a number of similar systems, but cost data can be found for a single similar system
The best results are achieved when:
Adjustments can be quantified
Subjective adjustments are minimized
Similarities between old and new systems are high
Minimize differences to one or more that can be scaled, then minimize the amount of scaling (size of adjustment factor)
Can be used as a cross-check for other methods

Analogies are generally used early in the program life cycle, when there is neither much definition in the new program nor a pre-existing cost model. Most development programs have some sort of heritage in design. The heritage or legacy system would be used for comparison to the new system to be estimated. One of the first considerations when assessing the cost of a new development program is the percent of new design versus heritage or reuse. This assessment can be performed at system, subsystem, and component levels. An analogy can also be used when there is not enough data or program definition to develop a cost estimate using a more detailed technique.

There should be a strong parallel between the historical system and the item to be estimated. Analogy is a one-for-one comparison. An analogy works best when there are many similarities between the old and new systems. If possible, the adjustments should be quantitative, not qualitative. Subjective adjustments should be minimized or avoided altogether.

An analogy is often useful as a cross-check for other methods. Even when you’re using more detailed cost estimating techniques, an analogy can provide a sanity check for your estimate. In this case the estimates should be of the same order of magnitude.
Unit I - Module 2
20
Analogy – Considerations
Strengths
Can be used early in programs before detailed requirements are known
Difficult to refute if there is strong resemblance
Weaknesses
No objective test of validity
Danger in choice of scaling factor: which variable; functional form (linear vs. non-linear scaling); what slope (through origin or borrowed slope)
Challenges
Difficult to obtain cost/technical data on old/new systems for comparison

There are several strengths of using an analogy. One strength of the analogy is that it can be used before detailed program requirements are known. The more similar the systems, the stronger the analogy and the easier it will be to stand up to review. The analogy is also an easy technique to use if a sufficient database exists on the analogous system.

There are also some weaknesses of using an analogy. One weakness of the analogy technique is that there is a tendency to be too subjective in making an analogy. For analogies that require too many subjective adjustments, this technique is not appropriate. An assessment that a new component will be 20% more complex, without specifying why, will not be acceptable. You should tie the complexity to something less subjective. An appropriate adjustment would be that the new component will have 20% more integrated circuits or will weigh 20% more than the old component. Often it is difficult to find sufficient cost, technical, and programmatic data for drawing the analogies.

To compare (or analogize!) the analogy technique with the parametric technique, which we’ll examine in detail next, an adjusted analogy is just like a linear regression, but instead of basing the slope on a number of data points, it is essentially a guess (one point does not determine a line, so we assume the line goes through the origin). Also, since our analogy is a single data point, it represents a point of departure, and any estimate using an adjusted analogy constitutes by definition “estimating outside the range of the data.”

Warning 1: An adjusted analogy is like a regression, but the slope is just a guess.
Warning 2: An adjusted analogy is, by definition, estimating outside the range of the data.
Unit I - Module 2
21
Analogy - Example

Attribute    Old System    New System
Engine:      F-100         F-200
Thrust:      12,000 lbs    16,000 lbs
Cost:        $5.2M         ?

Q: What is the unit cost of the F-200?
A: $5.2M * (16,000/12,000) = $6.9M, or ($5.2M/12,000) * 16,000 = $6.9M

Tip: The mischief in analogy most often arises in the adjustment. Why do we so readily believe a linear relationship which passes through the origin?

In this example, engine thrust, as a performance parameter, is used to adjust the existing engine (F-100) actual cost to generate an estimate of the new engine (F-200) cost. A linear relationship is used to draw the comparison. To calculate the estimated cost for the F-200, first calculate how much more thrust is required for the new engine as a ratio (16,000/12,000 = 1.33), and then multiply the actual cost of the F-100 by this thrust ratio to determine the estimated cost of the F-200. The calculation is 1.33 * $5.2M = $6.9M. Some analysts may prefer to calculate the “dollars per pound” (of thrust) for the known data point, $5.2M/12,000 ≈ $433/lb, and then multiply it by the thrust of the new system (16,000 lbs) to obtain the same result. These two methods are mathematically equivalent, so feel free to use whichever one is most intuitive to you.

It is important to realize that an adjusted analogy like this, using a ratio, is tantamount to a linear equation whose graph passes through the origin. This is where the mischief of applying the analogy technique most often lies: the adjustment. There should be a mathematical, scientific, or at least logical reason for the ratio used in the adjustment. In the above example, is there a compelling engineering reason that cost should be directly proportional to thrust for an engine? Relationships based on physical properties (as opposed to performance measures) may be easier to discern and justify: for power cabling of the same cross-section and power rating, linear feet would be a good adjustment attribute; for a surface of identical thickness and composition (e.g., for an aircraft wing or ship hull), surface area; for a solid system or component, mass or volume. As a final example, with no other detail available, you might use the cube of the diameter for the adjustment ratio for munitions cost. That is, though a 6” gun is 20% wider than a 5” gun (6/5 = 1.2), comparable munitions for it are likely to cost about 73% more ((6/5)^3 = 1.728). Both calculations are sketched in code below.

The cartoon graph vividly illustrates the two Warnings: when we adjust from X1 to X2, we have gone outside the range of the data, and we are adjusting Y1 along a line through the origin.

Warning 1: An adjusted analogy is like a regression, but the slope is just a guess.
Warning 2: An adjusted analogy is, by definition, estimating outside the range of the data.
Unit I - Module 2
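Here are the slide's two calculations as a short sketch; the helper function and its name are ours, while the F-100/F-200 numbers and the cube-of-diameter exponent come from the slide.

```python
# Adjusted analogy: scale an analogous system's cost by a ratio of a
# physical or performance parameter (linear by default, through the origin).

def adjusted_analogy(old_cost, old_param, new_param, exponent=1.0):
    return old_cost * (new_param / old_param) ** exponent

# F-100 -> F-200, scaled linearly on thrust ($M, lbs):
print(adjusted_analogy(5.2, 12_000, 16_000))    # 6.93 -> ~$6.9M

# Munitions cost scaled by the cube of gun diameter (5" -> 6"):
print(adjusted_analogy(1.0, 5, 6, exponent=3))  # 1.728 -> ~73% more
```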
22
Analogy – Uncertainty and Risk
Uncertainty in point of departure
Uncertainty in slope of adjustment
Risk:
Risks not “included” in analogy system
Historical growth of scaling quantity

As we said before, uncertainty is the range of possible outcomes of the estimate. For estimates based on an analogy, there will be uncertainty in both the point of departure and the slope of the adjustment. The point of departure is the analogous system. There is uncertainty in whether or not the chosen point of departure is truly analogous (i.e., not too different) to the new system. Also, an estimate based on an analogy assumes a linear relationship between the old and the new system. The fact that the true underlying relationship between the two systems is unknown creates uncertainty. Note that if data exist to understand the underlying relationship between the two systems, it is recommended that you use the parametric estimating technique (introduced next and discussed further in Module 3 Parametric Estimating). See the cited paper for more on applying the parametric thought process to the Analogy technique.

We can also generally characterize the risk associated with estimates based on an analogy. By adjusting estimates for risk, we are trying to eliminate what appears to be an inherent bias in estimates and thereby improve their accuracy. For estimates based on an analogy, there will be risk in those risks not “included” in the analogy system. What this means is that the new system may have risks that would not have been captured in the costs associated with the old system. Examples may include risks associated with new technologies, risks associated with economic conditions (i.e., inflation), risks associated with the labor environment, etc. There may also be risk characterized by historical growth in the scaling quantity. As a system or project progresses in the program life cycle and the design matures, the scaling quantity used in the analogy may change. If we have insight into the historical growth of the scaling quantity, we can capture the potential for growth as risk. Examples may include size, power, weight, lines of code, and many others.

Do not worry – the concepts of risk and uncertainty are difficult. The intent here is to introduce the types of uncertainty and risk that may be associated with estimates based on an analogy. How to quantify these risks and incorporate them into your cost estimate is discussed further in Module 9 Cost and Schedule Risk Analysis.

“Analogies: Techniques for Adjusting Them,” R. L. Coleman, J. R. Summerville, S. S. Gupta, SCEA 2004.
Unit I - Module 2
23
Analogy – Uncertainty/Risk Illustration
Estimate Based on an Analogy

Here is the same example graphic that we saw earlier, but now it is used to show the uncertainty associated with the analogy cost estimating technique. The underlying rules still apply – the graph is a typical scatterplot, with cost on the (vertical) y-axis and a cost driver variable such as weight on the (horizontal) x-axis. (For much more on scatterplots, see Module 6 Basic Data Analysis Principles.) The solid gray line shows the “true” underlying cost driver relationship, and the two dotted gray lines show plus or minus one standard deviation (sigma) of the associated error term. (Note that we have made the fundamental assumptions for ordinary least squares (OLS) regression so that we can appropriately compare the Parametric technique later on.) As described in the previous example, the red lines represent an estimate based on an average (solid) and the associated error of the estimate (dashed).

The green square represents an estimate that is based on an analogy. In this example, the analogous point is circled in red. Note that by definition, the line between the estimate and the analogous point is linear and passes through the origin, as shown by the solid green line. To determine the uncertainty surrounding the estimate based on an analogy, we have to consider two possibilities. One is that the scaling is appropriate but the analogy point is not precisely the right point to scale through. Every program has a final cost which is not the “true” average cost for such a program. We call programs “lucky” or “unlucky” depending on whether they ended up below or above the true mean, respectively, which we can never know for sure in practice. To account for this uncertainty, we add to or subtract from the circled analogy point one standard deviation of cost (approximately equal to the distance between the solid and dashed red lines). Scaling through those points creates the upper and lower diagonal dashed green lines. The second possibility is that the scaling is not appropriate in the first place, in which case we’d be better off using the average (red lines). Taking the worse of the two possibilities, we trace out the prediction bands of sorts using the heavy dashed green lines, ending up with a vaguely hyperbolic pair of angles.
Unit I - Module 2
24
Parametric Estimating - Method
A mathematical relationship between a parameter and cost
Parameter may be physical, performance, operational, programmatic, or cost
Uses multiple systems to develop relationship
Allows statistical inferences to be made

Warning: Rates, factors, and ratios in use may not be statistically based.

The parametric cost estimating technique is a mathematical relationship between certain characteristics (such as weight, thrust, or power) as one or more independent variables of a system and the system’s cost as a dependent variable. These relationships are developed using data collected on similar programs. The independent variables are known as cost drivers, and could be physical characteristics, performance or operational parameters, programmatic variables, or even other costs. Developing a parametric relationship uses multiple systems to cover a broader range than an analogy. A parametric also allows statistical inferences to be made. These statistical relationships will be able to tell you how well your parametric equation works. When developing a parametric, the underlying assumption is that the historical framework on which the parametric is based will remain the same for the new system (e.g., the technology, manufacturing processes, etc., are not drastically changing).

A parametric could range in complexity from a simple rule of thumb (such as so many dollars per pound ($/lb)) to a complex regression equation (such as Effort = a * [New SLOC + (0.04 * Reused SLOC)]^0.9766, where a is a calibration coefficient). Parametric relationships are commonly known as Cost Estimating Relationships or CERs, of which particular kinds are Rates, Factors, and Ratios. The topic of Parametrics will be covered in much greater detail in Module 3 Parametric Estimating, and statistical techniques for developing parametric equations will be covered in Module 8 Regression Analysis. It must be stressed that CERs should be developed using regression whenever possible, enabling the statistical inferences mentioned above. Be forewarned that many rates, factors, and ratios in use may not be statistically based. Note that “a parametric” is used as colloquial shorthand for “a parametric relationship” or “a parametric equation,” and likewise “parametrics” for “parametric estimating.”

“This pattern holds”
AKA Cost Estimating Relationships (CERs), Rates, Factors, Ratios
Unit I - Module 2
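A sketch of evaluating a CER of the quoted form. The source does not give the leading coefficient, so a = 2.94 below is a purely hypothetical placeholder, as are the SLOC inputs.

```python
# Hypothetical evaluation of a software-effort CER of the form
# Effort = a * [New SLOC + (0.04 * Reused SLOC)]^0.9766.
# The coefficient a and the inputs are illustrative placeholders.

def effort(new_sloc, reused_sloc, a=2.94):
    return a * (new_sloc + 0.04 * reused_sloc) ** 0.9766

print(effort(new_sloc=50_000, reused_sloc=100_000))  # illustrative only
```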
25
Parametric Estimating - Application
Use of Parametrics
Requires a good database which is relevant to the system being estimated
Excellent for use early in program life cycle before a detailed design exists
Used as the design progresses to capture changes
CAIV trades
Good as a cross-check for other methods

The parametric technique can be used in a wide variety of situations. These situations could range from early planning estimates to detailed contract negotiations. You generally need an adequate number of relevant data points to develop a parametric. Care must be taken to normalize the data so the data used to develop a parametric are consistent and complete. Parametrics are used early in a program when the design is not well defined. In many programs, changes occur frequently. Changes to the design can easily be reflected in your estimate by adjusting the values of the input parameters as the program becomes better defined. The parametric technique is also good as a secondary or cross-check technique. You could develop a very detailed estimate of a spacecraft program (structure, power, propulsion, etc.), then use a top-level total spacecraft CER to validate your detailed estimate.

You should be aware of whether the system you are trying to estimate falls significantly outside the range of the data used to develop the parametric relationship. For example, if your new spacecraft is expected to weigh 250 kg and the CER you are looking at was based on spacecraft that weighed 2000 kg and higher, you should probably find another CER. As cost estimators, however, we are often called on to estimate outside the range of historical data, and one of the strengths of the Parametric technique is quantifying the uncertainty inherent in doing so.
Unit I - Module 2
26
Parametric Estimating – Considerations
Strengths
Can be easily adjusted for changes by modifying input parameters
Sensitivity Analysis – can show how changes to certain parameters impact the cost
Objective measures of validity
Statistical measures for uncertainty
Weaknesses
“Black box syndrome” with pre-existing CERs, commercial models
Challenges
Difficult to ensure consistency and validity of data
Goal is to establish and maintain homogeneous data set
Must constantly review relationships to ensure that relationships reflect current status of relevant programs, technology, and other factors

There are several strengths of the parametric technique. One is that it is versatile. You can develop a parametric at any level when you have enough data (e.g., system, subsystem, component, etc.). Then as the design changes, you can quickly and easily capture the effects on the costs by modifying the input parameters. For example, if during the early stages of a program the weight estimates for a system increase by 25%, you can plug the new weights into a weight-based CER and reflect the increased weight in your cost estimate. Along the same lines, you can easily perform sensitivity analysis by varying your input parameters and recording how cost changes with respect to that parameter, as sketched below. A parametric derived through statistical analysis will generally have both objective measures of validity (statistical significance of each estimated coefficient and the model as a whole) and a calculated standard error which can be used in cost risk analysis. (These statistical properties will be discussed in Module 8 Regression Analysis, and the topic of risk will be covered in much greater detail in Module 9 Cost and Schedule Risk Analysis.)

A challenge in implementing the parametric technique is that the underlying database upon which the CER is based must be consistent and sufficiently robust. Great care needs to be taken to normalize the data or to make sure that whoever developed the CER did a thorough job of normalizing the data. This is a crucial step that can be overlooked if you are using a “canned” CER. A “canned” CER means that someone besides you developed the CER and you are unable to get access to the raw data used to develop it. Without understanding how the data are normalized, you are taking someone else’s word that they did a thorough job of normalizing the data. (These data-related issues will be covered in greater detail in Module 4 Data Collection and Normalization.) This is sometimes called the “black box syndrome,” where you plug in the input(s) and blindly accept the output without understanding how it is generated.

Another weakness is that parametrics must be updated to capture the most current cost, technical, and programmatic data. If a CER is too dated, it will likely not provide an adequate cost estimate for the current state of the art. In fact, as technologies change, a dated CER can be exactly wrong. For example, weight will often move from a direct (more weight = more material = more cost) to an indirect (less weight = more advanced material = more cost) relationship.
Unit I - Module 2
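The sensitivity analysis mentioned above can be as simple as sweeping one input parameter through a range and recording the resulting cost. The weight-based CER form and coefficients below are hypothetical.

```python
# One-way sensitivity analysis on a hypothetical weight-based CER.

def weight_cer(weight_kg, a=100.0, b=0.5):
    """Hypothetical linear CER: cost ($K) = a + b * weight (kg)."""
    return a + b * weight_kg

baseline_weight = 2_000.0                 # kg, current design estimate
for change in (-0.25, 0.0, 0.25):         # vary weight by +/- 25%
    w = baseline_weight * (1 + change)
    print(f"weight {w:6.0f} kg -> cost {weight_cer(w):7.1f} $K")
```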
27
Parametric Estimating - Example
CER for Site Activation as a function of Number of Workstations:
Site Act ($K) = 82.8 + 26.5 * Num Wkstn
Site Activation includes site survey and site installation costs for an Automated Information System (AIS)
Estimated based on 11 data points for installations ranging from 7 to 47 workstations
Example expanded in Module 3

An example of Parametric Estimating is a Cost Estimating Relationship (CER) for the Site Activation cost element for a Major Automated Information System (MAIS) developed from 11 data points corresponding to similar systems where installations ranged from seven to 47 workstations, inclusive. The number of workstations is the cost driver in this case, and the equation shows that the estimated cost for Site Activation is 82.8 thousand dollars plus 26.5 thousand dollars times the number of workstations in the new installation. This CER might give good estimates provided that the number of workstations in the new installation is between about five and 50. You probably would not want to use it for independent variable values much outside the range of the data on which it was based. In an extreme case, you certainly would not claim that the cost of Site Activation for an installation with no workstations is about 82.8 thousand dollars (!), though this is what the equation gives mathematically.

This example will be developed in much greater detail in Module 3 Parametric Estimating, including showing the data and graph on which it is based; the statistical results of the regression used to derive the CER; updating and calibrating the CER; and its use in sensitivity analysis and cost risk.
Unit I - Module 2
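Applying the Site Activation CER above to a hypothetical new installation of 30 workstations (safely inside the 7-to-47 range of the underlying data):

```python
# The Site Activation CER from the slide; the 30-workstation input
# is a hypothetical example.

def site_activation_k(num_workstations):
    return 82.8 + 26.5 * num_workstations   # $K

print(site_activation_k(30))   # 877.8 -> about $878K
```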
28
Parametric Estimating – ERP Example
The graph below shows an example CER for ERP investment as a function of the Number of Interfaces:

This graph shows another example of a parametric Cost Estimating Relationship (CER) for the investment cost of Enterprise Resource Planning (ERP) systems based on the number of interfaces. This CER was developed from 8 data points corresponding to similar systems where the number of interfaces ranged from 22 to 119. As depicted in the graph, the number of interfaces is positively correlated with total investment cost, which makes logical sense as the number of interfaces is a measure of the size of an ERP. The graph shows the raw data points, the CER and corresponding goodness-of-fit statistic R², as well as the prediction intervals.

Note: An Enterprise Resource Planning (ERP) system is defined as a single business support system that provides a variety of business functions. ERPs are described in further detail in Module 12 Software Cost Estimating, including discussion of cost estimating-related considerations for this type of system.

“Enterprise Resource Planning Systems: Sizing Metrics and CER Development”, D. Brown, SCEA National Conference and Training Workshop, 2011
Unit I - Module 2
29
Parametric – Uncertainty and Risk
Uncertainty
Uncertainty in intercept and slope of regression line (“bounce” and “wiggle”) – Standard error – Confidence Interval (CI)
Uncertainty in distribution around regression line (“fuzz” or “noise”) – SEE – Prediction Interval (PI)
Risk
Risks not “included” in historical data set
Historical growth of cost driver(s)

The uncertainty and risk inherent in parametric estimates can be characterized similarly to the uncertainty and risk in estimates based on an analogy. Parametric estimates, however, have the advantage of using statistics to capture the uncertainty in estimating beyond the range of data. The uncertainty in the CER itself is in the intercept and slope of the regression line, measured by their respective standard errors. Taking these into account gives a confidence interval, or CI, for the CER prediction at any given value of the independent variable. There is also uncertainty in the distribution around the regression line, captured by the standard error of the estimate, or SEE, which when included expands the CI to a prediction interval, or PI. A prediction interval is an interval estimate of a dependent variable itself (usually cost). When an estimate is made using a regression line (and the true value of the regression line is unknown), the PI predicts the distribution around the estimate. (A numeric sketch of a PI follows this slide.)

The risks associated with a parametric estimate can also be characterized as those not “included” in the historical data set. What this means is that the new system that you are trying to estimate may have risks that are also new and did not occur to the systems in the historical data. Examples may include risks associated with new technologies, risks associated with economic conditions (i.e., inflation), risks associated with the labor environment, etc. There may also be risk characterized by historical growth of the cost driver. As a system or project progresses in the program life cycle and the design matures, the value of the cost driver may grow or change. If we have insight into the historical growth of the cost driver, we can capture the potential for growth as risk. Examples may include size, power, weight, lines of code, and many others. How to quantify these risks and incorporate them into your cost estimate is discussed further in Module 9 Cost and Schedule Risk Analysis.

Tip: Parametric has the strength of using statistical results to capture the uncertainty in estimating beyond the range of the data
Unit I - Module 2
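To make the CI/PI distinction concrete, here is a sketch that fits a one-variable CER by OLS and computes a 95% prediction interval using the textbook formula; the data points (loosely echoing the ERP interfaces example) are hypothetical.

```python
import numpy as np
from scipy import stats

# 95% prediction interval around a one-variable OLS CER:
#   y0_hat +/- t * SEE * sqrt(1 + 1/n + (x0 - xbar)^2 / Sxx)
# All data below are hypothetical.

x = np.array([22, 35, 48, 60, 75, 90, 105, 119], dtype=float)  # interfaces
y = np.array([10, 14, 20, 22, 30, 33, 41, 45], dtype=float)    # $M, hypothetical

n = len(x)
b, a = np.polyfit(x, y, 1)                  # slope, intercept
resid = y - (a + b * x)
see = np.sqrt(np.sum(resid**2) / (n - 2))   # standard error of the estimate
sxx = np.sum((x - x.mean())**2)

x0 = 80.0                                   # new system's interface count
y0 = a + b * x0
half = stats.t.ppf(0.975, n - 2) * see * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / sxx)
print(f"{y0:.1f} +/- {half:.1f} ($M, 95% PI)")
```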
30
Parametric – Uncertainty/Risk Illustration
Estimate Based on a CER (Parametric) In this case, the example graphic shows the uncertainty associated with the parametric cost estimating technique. As we discussed, the graph is a typical scatterplot, with cost on the (vertical) y-axis and a cost driver variable such as weight on the (horizontal) x-axis. (For much more on scatterplots, see Module 6 Basic Data Analysis Principles.) The solid gray line shows the “true” underlying cost driver relationship, and the two dotted gray lines show plus or minus one standard deviation (sigma) of the associated error term. We have made the fundamental assumptions for ordinary least squares (OLS) regression. The blue square represents an estimate that is based on the parametric. The dotted blue lines show plus or minus one standard deviation of the associated error term. In this example, the uncertainty associated with the parametric estimate is very close to the underlying “true” error of the data. This is not always the case, but in this instance is a function of the fact that the six (dark blue) points on which the CER is based are fairly evenly scattered about the gray line. Unit I - Module 2
31
Parametric – Uncertainty/Risk Illustration Calibrated CER
Estimate Based on a Calibrated CER (Parametric) The purple solid line shows a calibrated CER, vertically shifted to pass through the circled point. (Normally, we would not calibrate a CER through a point in the data set from which it was derived, but we are doing so here for purposes of comparison.) The purple square represents an estimate that is based on the parametric. The red circle highlights the point that was used to calibrate the CER. The dotted purple lines show plus or minus one standard deviation of the associated error term, borrowed from the uncalibrated CER. For more on calibrated CERs, see Module 3 Parametric Estimating. Unit I - Module 2
32
Uncertainty and Risk Illustration
The final slide of our uncertainty and risk illustration is a comparison of the estimates developed using the different cost estimating techniques. The green line represents the estimate based on an analogy. The purple line represents the estimate based on a calibrated parametric. The blue line represents an estimate based on a parametric. The red line represents an estimate based on an average. Note that all of the estimates were based on the same underlying data, but resulted in different values for the cost estimate. Which do you think is the best estimate? Unit I - Module 2
33
Build-Up - Method Estimating is done at lower levels and results rolled up to produce higher-level estimates Often the lowest definable level at which data exist Elements of this method could include Standards Time and Motion Studies Well-defined work flow Variance Factors Parts List Lot Size and Program Schedule Considerations Program Stage Support Labor “It’s made up of these” The Build-Up method builds estimates for higher-level cost elements by summing or “rolling up” detailed estimates for lower-level cost elements. While rollup is common to estimates produced by any combination of the four essential techniques, build-up is characterized by estimating at the lowest definable level at which data exist. There are several elements which can be used to develop estimates at these lower levels using this method. Standards development is the cornerstone of this technique. The standards can be developed by outside sources or internally by a company. Standards are available in industry publications which are used widely. The standards generally reflect an optimal production environment. They capture how long it takes to perform a particular task, based on time and motion studies done in controlled environments. Since no one operates at an optimum level, variance factors (also known as realization factors) are calculated based on measures of a company’s actual experience compared to the standard. The variance factors capture the company’s historical performance against the standard. The estimated labor hours are multiplied by labor rates to determine costs. Also needed is a detailed parts list or material costs. The parts required for a particular task are identified at the lowest level possible, often down to the nuts-and-bolts level. Consideration must be made for quantity and schedule to capture the effects of learning curves and production rate. The stage of the program is also important because the processes and standards that are used in the manufacture of prototypes are generally different from those for full rate production. The non-touch support (production engineering, quality, etc.) labor is often estimated as a factor of the touch labor. The build-up technique is further discussed in Module 11 Manufacturing Cost Estimating. The lower-level estimating associated with the build-up technique uses Industrial Engineering (IE) principles, and so it is sometimes referred to as Engineering Build-Up or IE. Also, this technique may make use of Catalogs to estimate the cost (price) of purchased materials or components and Handbooks which contain standards or other information. Some refer to it as “grass roots” estimating. AKA Engineering Build-Up, Industrial Engineering (IE), Time Standards, Standard Labor Hours, Catalog/Handbook, Detailed Cost Estimating Unit I - Module 2
34
Build-Up - Application
Used when you know detailed product information at the lowest level (i.e., hours, material, etc.) Used in a manufacturing environment where Touch Labor can be accurately estimated Touch Labor = direct work on product As opposed to support or management functions Tip: Engineering drawings (e.g., CAD/CAM) or site surveys are almost always required to do a build-up The Build-Up method is used when you have detailed information at a low level about an item (how many hours, how many parts, etc.). It is also applicable in a touch labor environment where a company is manufacturing a product. Touch labor means that workers (human or robotic!) are actually touching the product and performing some sort of work on it, as opposed to a support function, such as quality assurance (QA) or production engineering, which is not “felt” on the product. This could be building a circuit card or assembling an automobile. In these operations, the process is well known and each step of the work flow can be identified, measured, and tracked for performance. The standards and performance factors developed can be used not only to build a database for estimating costs, but also to manage the work. These metrics are used to identify areas that are performing better or worse than projected. Warning: In application, “engineering judgment” often masquerades as engineering build-up, because they are both bottom-up Unit I - Module 2
35
Build-Up – Considerations
Strengths Easy to see exactly what the estimate includes Can include Time and Motion Study of actual process Variance Factors based on historical data for a given program or a specific manufacturer Weaknesses Omissions are likely Small errors can be magnified Challenges Expensive and requires detailed data to be collected, maintained, and analyzed Detailed specifications required and changes must be reflected There are several strengths to the build-up technique. By developing an estimate at a low level, you can show exactly what your estimate covers, and you will be able to determine later whether anything was overlooked. If time and motion studies are involved, you will get a fairly accurate depiction of the actual process of producing the part or system, because the process is mapped down to a very low level of detail. The variance factors applied to standards are based on verifiable actual cost data. The application of the build-up technique will be unique to a specific program and manufacturer (contractor). A challenge of this technique is that it can be expensive to implement, especially if companies develop their own standards. It requires extensive data collection and monitoring for variance factors. The product specification must be well known and stable, and all product/process changes must be reflected in the estimate. Since many costs (rework, tooling, quality) are calculated as a percentage of touch labor, small errors at the touch labor level can be magnified into much larger errors. Finally, though it is easy to see what your estimate encompasses at any point, omissions are likely. It is very difficult to anticipate all actual costs beforehand. Unit I - Module 2
36
Build-Up - Example Problem: Estimate hours for the sheet metal element of the inlet nacelle for a new aircraft Similar to F/A-18 E/F nacelle which has a 20% variance factor (actuals to standards) and a support labor factor of 48% of the touch labor hours The standard to produce the sheet metal element of the new inlet nacelle is 2,000 touch labor hours Solution: Apply F/A-18 E/F factors to the standard touch labor hours 2,000 hrs x 1.2 = 2,400 touch labor hours Add the support factor of 48% to get the total hours estimate of 2,400 x 1.48 = 3,552 hours In this example problem illustrating the build-up technique, you must develop an estimate for the sheet metal element for the inlet nacelle for a new demonstrator aircraft like the Joint Strike Fighter (JSF). The new inlet nacelle is similar to what is currently being produced on the F/A-18 E/F program. The Contractor has a stringent work measurement program in place to track performance based on Industrial Engineering Standards. The F/A-18 E/F program is currently experiencing a 20% variance (Actuals to Standards) and an indirect or support labor cost factor of 48% of the touch labor hours. The standards developed to produce the sheet metal element of the new inlet nacelle are 2,000 touch labor hours. The estimate is developed using the standard hours and applying the F/A-18 E/F variance factor and support factor. Touch labor hours are estimated to be the standard 2,000 hours times the 1.2 variance factor (100%+20%), or 2,400 hours. Support hours are estimated by applying the 48% factor to the estimated touch labor hours, for 1,152 hours or a total of 3,552 hours. Labor rates could then be used to convert these labor hours into costs. Note that build-up may include elements of the previous two estimating techniques. Inputs, such as the standard and variance factor in this example, may be developed in whole or in part by analogy to a similar system, and the factors used are essentially parametric relationships. Unit I - Module 2
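The example's arithmetic generalizes to a small helper function; this sketch simply packages the slide's calculation, with the parameter names our own.

# Sketch of the build-up arithmetic from the example above.
def build_up_hours(standard_hours, variance_factor, support_factor):
    touch = standard_hours * (1 + variance_factor)   # performance vs. standard
    total = touch * (1 + support_factor)             # add non-touch support labor
    return touch, total

touch, total = build_up_hours(2000, 0.20, 0.48)
print(touch, total)   # 2400.0 3552.0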
37
Build-Up – Uncertainty and Risk
Uncertainty in Design Specs Uncertainty in performance to standards (labor) Uncertainty in unit costs, scrap rates (material) Risk Omissions Historical growth of design specs Difficulty of integration There are several areas in which uncertainty may affect an estimate based on a build-up. There may be uncertainty in the design specifications, in the performance to standards (usually associated with the labor estimate), and in the unit costs and scrap rates of materials. Risk for estimates developed using a build-up may include an adjustment for potential omissions, the historical growth of design specifications, and the difficulty associated with the integration of sub-systems. Unit I - Module 2
38
Extrapolation from Actuals
Extrapolation from actuals overlaps with several other methods Using actual costs to predict the cost of future items of the same system Extrapolation is used in several areas, which include: Averages Learning Curves Estimate at Completion Extrapolation from actuals is an umbrella term covering the application of other principles you’ll learn about later in this course to the estimating of costs. It uses actual costs from past or current items to predict future costs for the same item. There are several variants of extrapolation from actuals, including: Averages: The most basic variant is the use of averages (simple, moving, etc.) to determine the average actual cost of the units produced to date and using the average cost as the prediction of cost for future units. Learning Curves: This is probably the most common method used for extrapolation of actuals. Items to consider for this method include the theory used (Cumulative Average or Unit), learning curve slope (LCS), and the theoretical first unit cost (T1). Learning Curves can also be called Cost Improvement Curves or Cost/Quantity Curves and are discussed in greater detail in Module 7 Learning Curve Analysis. Estimate at Completion (EAC): EACs are a special case of extrapolation from actuals, which uses actual cost and schedule data to develop estimates of costs at completion using Earned Value Management (EVM) techniques. See Module 15 Earned Value Management for more information on EACs. “Stay the course” AKA Averages; Learning Curves, Cost Improvement Curves, Cost/Quantity Curve; Estimate at Completion (EAC), or Earned Value (EV) Unit I - Module 2
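As a sketch of the learning curve variant, the snippet below applies unit theory, cost(n) = T1 * n^b with b = ln(LCS)/ln(2). The T1 and slope values are illustrative only; Module 7 covers the derivation and the choice between unit and cumulative average theory.

# Sketch: extrapolating from actuals with a unit-theory learning curve.
import numpy as np

t1, lcs = 1000.0, 0.90                # first-unit hours, 90% learning curve slope
b = np.log(lcs) / np.log(2)           # unit cost exponent

units = np.arange(1, 11)
fitted = t1 * units**b                # hours for units 1..10 (the "actuals" range)
unit_25 = t1 * 25**b                  # extrapolate to a future unit
print(f"Unit 1: {fitted[0]:.0f} hrs, Unit 10: {fitted[-1]:.0f} hrs, Unit 25: {unit_25:.0f} hrs")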
39
Extrapolation from Actuals - Application
Best application is for follow-on production units/lots Requires accurate cost database At an appropriate level of cost detail Validate and normalize data Once sufficient actuals are accrued, can be used to determine Estimate At Complete (EAC) throughout remainder of current phase [Figure: Earned Value Management “Gold Card” chart, plotting BCWS, BCWP, and ACWP in dollars over time, with Management Reserve, Cost Variance, Schedule Variance, PMB, TAB, BAC, and EAC marked through the completion date.] Extrapolation from actuals is best suited for follow-on units/lots when you have existing data from current and past production lots. There should also be little change in the product design or manufacturing process from the previous units. If large changes exist, careful adjustments may have to be made or some other method chosen. The key to this method is to have accurate data at an appropriate level of cost detail needed to perform analysis at the system, subsystem, and/or component levels. You should also ensure that the data are validated and normalized. Validation of actuals should be performed by a subject matter expert (SME). In many programs, the accounting of actuals is not sufficient to validate the data. For example, a SME that was involved in a particular subsystem development might know that there were “political” corporate pressures to absorb an overrun where difficulties arose in the development of a component of the subsystem. These additional costs that may have been covered by the corporation may not appear in the contract cost data. Tip: Improved integration between the cost estimating and earned value functions has led to increased prevalence of this estimating method Unit I - Module 2
40
Extrapolation from Actuals – Considerations
Strengths Utilizes actual costs to predict future costs Can be applied to hours, materials, total costs Highest credibility and greatest accuracy when properly applied Many government bodies require or encourage the use of this technique Weaknesses: Work to date may not be representative of work to go Extrapolating beyond a reasonable range Challenges: Unknown events affecting bookkeeping of actuals Changes in cost accounting methods Contract changes affecting actuals Configuration changes, process changes all have impacts A strength of extrapolation from actuals is that if a stable production environment exists with accurate, validated cost data, this can be a very accurate method. It uses documented actual costs to predict future costs. It can be applied to hours, material costs, or total costs. Many government bodies, such as the Office of the Secretary of Defense (OSD) Cost Assessment and Program Evaluation (CAPE), encourage the use of this technique. In the case of development and production contracts that meet a minimum cost threshold, Earned Value Management (EVM) may be required. A weakness of this technique is that accurate actual costs are often difficult to obtain. The accounting and bookkeeping of actual costs may be affected by unknown events such as internal management pressures. Companies change their accounting systems frequently, which affects the accumulation of actual costs. Contract changes could also impact the accounting of actuals as certain costs that should be accounted for may be allocated to separate contractual items. Perhaps the most important problem that could arise is changes in configuration and processes which will impact the costs and perhaps cause some “loss of learning.” These changes should be assessed carefully and you should understand how they affect the future units. Finally, as with any extrapolation, it should not be carried beyond a reasonable range. For example, we might be nervous about projecting inflation over 20 or 30 years (though that is precisely what we often do in our LCCEs!). Risk and uncertainty can at least quantify the danger, as the next slide reminds us. Remember, too, that actuals are only “actual” as long as conditions do not change. When conditions change, even actual costs require adjustment. If the adjustment is simple enough (e.g., inflation) no problem ... but if the adjustments become too fundamental, e.g., materials, then we have moved into Parametric or Analogy, depending on how we handle them. Unit I - Module 2
41
Extrapolation from Actuals – Uncertainty and Risk
Uncertainty in Learning Curve Uncertainty in EAC Risk Insufficient cost history Cost history not representative of future work Unrealistic baselines, excessive optimism, and the EAC “tail chase” Depending upon the Extrapolation from Actuals method used, there can be both risk and uncertainty in the estimate. When developing an estimate based on a learning curve, there can be uncertainty in the curve itself. Similarly, there can be uncertainty in an Estimate at Complete (EAC). Statistical methods should be used for both whenever possible. The risks associated with these methods include insufficient cost history, a cost history that is not representative of future work (perhaps due to a process change, etc.), unrealistic baselines, excessive optimism (primarily seen in EAC data), and the EAC “tail chase”. “Do Not Sum Earned-Value-Based WBS-Element Estimates-at-Completion”, S.A. Book, SCEA National Conference and Training Workshop, 2000 Unit I - Module 2
42
Expert Opinion Unit I - Module 2
The next few slides cover the use of expert opinion as an estimating method. This method is covered separately to emphasize the fact that expert opinion is considered to be a subordinate estimating method due to its subjectivity. As is discussed later in this section, expert opinion should be used only when none of the four primary techniques is applicable. Unit I - Module 2
43
Expert Opinion - Method
Uses an expert or a group of experts to estimate the cost of a system One-on-one interviews Round-table discussions Delphi Technique AKA Engineering Judgment, Round Table, Delphi Technique Expert Opinion involves using an expert or group of experts to estimate the cost of a system. This is often known as engineering judgment. Expert opinion is generally looked upon as too subjective. One way to alleviate this concern is to keep delving further into the “opinion” until you can determine if the expert is actually basing that opinion on some real data. Once you identify the data the opinion is based on, obtain copies and document the source. Don’t ask the expert to estimate outside the bounds of their experience. Request copies of any data referenced. Ask for another expert for areas outside the scope of their expertise. Validate credentials of the expert. Also, identify alternate experts for the same scope, then formulate consensus after individual consultation. There are several different approaches to Expert Opinion, which include: One-on-One interviews with experts: Request any documentation available on subject. Iterate if possible. Round-table discussion: Multiple experts present all sides of an issue. All the experts stay in a room until consensus is reached. They document areas of risk or “soft spots” in the estimate. Delphi Technique: A group of experts provide their answers anonymously to avoid a single person influencing the results in a group environment. The results are summarized and sent back out for coordination/comments. The approach generates a range of opinions and generally results in convergence to a single number or at least a tighter range of possible outcomes. This is a specific technique from operations research (OR) and is more than just a BOGSAT (Bunch Of Guys Sitting Around a Table). The bottom line is that using expert opinion or engineering judgment without an accompanying basis is not a valid technique. It is not sound estimating practice and should not be used. Rather, the insights of functional experts should be used in conjunction with one of the four established cost estimating techniques. Tip: Expert Opinion refers to direct assessment of costs. Expert judgment is expected to be applied in any of the previously-described legitimate cost estimating techniques. Warning: Expert Opinion alone is not widely considered to be a valid technique Unit I - Module 2
44
Expert Opinion - Application
Only used when more objective techniques are not applicable Used to corroborate or adjust objective data Cross check historical based estimate Use for high-level, low-fidelity estimating (e.g., sanity check) Last resort Expert opinion should be used only as a last resort, when other methods are not available or not valid for the situation at hand. It is hard to justify as a primary estimating method and it is difficult to run risk around an expert opinion. The good news is that if you push hard enough, and ask enough questions, there is generally additional data that the opinion was based on. An expert may have his own data, collected over many years and many projects, that were used explicitly or implicitly to develop the estimate. The key is to extract this information from the expert and document what you find. Expert opinion may be used to corroborate or adjust (as in the analogy technique) objective data. For example, an expert with many years of experience may be able to explain a data point which appears anomalous and help guide the analyst to appropriate treatment of an apparent outlier. It could also be used early in a program’s life cycle for a quick, high-level, rough order of magnitude estimate. Expert opinion is often used as a secondary method, to offer a cross check for an estimate developed using established techniques. A sanity check with “graybeards” helps solidify a cost estimate. Often at high levels of an organization it lends a level of comfort to executives to know the estimate was “validated” by a long-standing company expert. Tip: Expert Opinion is the least regarded and most dangerous method, but it is seductively easy. Most lexicons do not even admit it as a technique, but it is included here for completeness. Unit I - Module 2
45
Expert Opinion – Considerations
Strengths Good cross check of other estimate from Subject Matter Expert (SME) point of view Provides expert perspective that facilitates understanding Weaknesses Completely subjective without use of other techniques Low-to-nil credibility Difficult to run risk around an expert opinion A strength of Expert Opinion is that it can provide a sanity check or cross check of an estimate produced using a combination of established techniques. Subject Matter Experts (SMEs) will usually provide a different perspective, and may point out things not previously considered. Interaction with experts allows for a better understanding of the product or process whose cost is being estimated. Remember that Expert Opinion is in and of itself completely subjective and can often be easily refuted using objective data. Without use in conjunction with one of the previously discussed cost estimating techniques, it holds little if any credibility. If you ask five experts for their opinion, you will get at least five different answers – and maybe more! Tip: It is preferable to find data to support a credible basis, which may jibe with the expert-based estimate if it is implicitly founded on the same data Unit I - Module 2
46
Expert Opinion – Uncertainty and Risk
Human tendency to (significantly) understate error bands Risk Faulty recollection of “anecdotal actuals” Gaming Excessive optimism (or conservatism) As previously noted, Expert Opinion is prey to the human foibles of memory and judgment to which data-based techniques are largely immune. It has been noted that humans are lousy random number generators, and they tend to understate both variance (spread) and randomness (lack of uniformity). Insofar as Expert Opinion is grounded in actual experience – so-called “anecdotal actuals” or “expert testimony” – faulty recollection of this experience will tend to understate effort. The expert may recall getting a report done in a week and thus put down 40 hours, but they may forget that they had to work overtime or that the report got returned with review comments that took another couple of days to resolve. In extrapolating from past experience, misremembered or not, experts may be excessively optimistic (especially if “selling,” or trying to paint a rosy picture for a competitive proposal) or conservative (especially if “buying,” or trying to fence a budget to which they will be held in execution). Most perniciously, the experts may be consciously gaming the system. Unit I - Module 2
47
Using Cost Estimating Techniques
Estimate Requirements Top Down vs. Bottom Up Cost Element Structure (CES) Technique Selection Checking Results Documentation The next few slides show how to use cost estimating techniques in developing a cost estimate. Unit I - Module 2
48
Estimate Requirements
Why are we developing this estimate? What will it be used for? Milestone A, B, or C decision Developing a budget Developing a “ballpark” or rough order of magnitude (ROM) estimate Comparing alternatives Developing or evaluating proposals Why are you doing this estimate? When any estimate is started, it is important to understand as much about the program, the purpose of the estimate, and the estimate requirements as possible. These details will help you determine what you need to do to develop the estimate. How much detail is required? How soon is the estimate needed? What data are available? Answering these questions and others will help ensure that the task is well understood and enable you to gauge what cost estimating techniques are best to use. Several different types of estimates are listed above. An estimate for a milestone decision, for example, would require much more detail than a Rough Order of Magnitude (ROM). Unit I - Module 2
49
Top Down vs. Bottom Up The below definitions are correct, although in practice many terms are used as if they are interchangeable Top Down vs. Bottom Up refers to the origin of the estimate Top down (note singular) means either a target or a top-level estimate, which is then allocated to lower levels of the WBS Bottom up (note singular) means estimated at a lower level and then rolled up Top-Level vs. Lower-Level (estimate) refers to the level at which an estimate is performed, whether or not it is allocated or rolled up, respectively Build-Up is a specific estimating methodology Usual associations: {Top-Level estimate} or {cost target or Price to Win (PTW)} with {Top Down} {Lower-Level} with {Bottom Up} {Bottom Up} with {Build-Up} How will you develop your estimate? There are two main ways to structure a cost estimate, Top Down and Bottom Up. Although any of the cost estimating techniques may be used with either estimating approach, Top Down is generally associated with the use of the Parametric or Analogy techniques. It involves using known top-level requirements (weight, power, etc.) or parameters to develop an estimate for an entire system, which may then be allocated to lower levels. Bottom Up involves working from information at the lowest level to develop an estimate for an entire system. You will come up with discrete estimates for each element, often by estimating required labor hours, materials, and other costs and applying direct and indirect rates, and then roll up these lower-level elements to arrive at an estimate for the entire system. Bottom Up frequently uses more than one method at the lowest level but is generally associated with the Build-Up technique. Unit I - Module 2
50
Cost Element Structure
Determine what needs to be estimated and develop an appropriate Cost Element Structure (CES) CES Dictionary defines what is included in each element Characteristics associated with cost elements that are routinely used to classify costs Program Phase: Development, Production, O&S “Color of Money”: RDT&E, Procurement, O&M Funding Source Non-Recurring or Recurring Direct or Indirect What do you have to estimate? In conjunction with determining why you are developing an estimate, and how you will develop your estimate, you need to determine what needs to be estimated. A good way to break down your estimate into manageable pieces is to develop a Cost Element Structure, or CES. The CES is the framework for your estimate. The CES Dictionary defines what is included in each element of the CES. There are several characteristics associated with cost elements which are used to classify the costs. These include: Program Phase (i.e., Development, Production, or Operations and Support (O&S)); Appropriation Type or “Color of Money” (primarily Research, Development, Test, and Evaluation (RDT&E), Procurement, or Operations and Maintenance (O&M)); Funding Source; Non-Recurring or Recurring Costs; and Direct or Indirect Costs. (See also the discussion of Work Breakdown Structure (WBS) in Module 1 Cost Estimating Basics.) While the program WBS may be at a very detailed level, be sure to estimate at no lower a level than is supported by your data. If estimates or budgets are needed at lower levels, you can always allocate downward. Tip: Be sure to estimate at a level of the CES that is well supported by defensible data Unit I - Module 2
51
Technique Selection Review available techniques Compare alternatives
Select or develop appropriate technique Identify primary and secondary techniques Each cost estimating technique has strengths and weaknesses and can be applied at different times in the life cycle of a cost estimate Once you have established the purpose, scope, approach, and structure of your estimate, you must select cost estimating techniques at each appropriate level. Research what techniques are available from all potential sources. You may be able to find several alternative techniques for any given element. Some customers may have specific models that they want you to use. You may have a library of cost estimating techniques available that will have several possibilities. Your colleagues may have their own collections of techniques. In addition to existing techniques, consider developing new ones. After you have researched potential techniques, compare the techniques against your requirements and against each other. To compare techniques, look at the underlying data, do statistical comparisons, and make sure that nothing is overlooked or left out. Another thing to watch out for is techniques which are outdated. If a technique was developed based on old data, it may not accurately estimate new technology. Make sure that the technique is for the appropriate program phase. You should select the technique most appropriate for your estimate or develop a technique that fills your needs. Remember that you can use multiple techniques in the same estimate; you are not limited to a single type. Each technique has strengths and weaknesses and can be applied at different times in the life cycle of a cost estimate. In addition to the primary technique you select, you should also choose a secondary technique as a cross-check. This topic will be discussed more on the next slide. Unit I - Module 2
52
Checking Results Cross Checking your results greatly increases credibility Example: A parametric-based estimate can also show an analogy as a “reasonableness test” Doesn’t necessarily result in the exact same number, but should be a similar number (same order of magnitude) An independent* estimate is more detailed than a cross check and attempts to get the same result using a different technique Example: Use the results from one commercial software estimating package to validate the results of another To increase the credibility of an estimate it is wise to check the results using some other method. A cross check is a test of reasonableness. This could involve, for example, using an analogy to compare with the results of a parametric estimate. The cross check does not have to result in the exact same number, but should be a similar number. At a minimum, it should be the same order of magnitude, “in the same ballpark.” Validation of results is more detailed than a cross check. The intent here is to get nearly the same result using different techniques. An example would be using one of the many commercially available software estimating packages to check the results obtained using another software package. Validation in its fullest sense is an Independent Estimate. Note, though, that “independent” has many possible interpretations when referring to cost estimates. The most stringent meaning is in Title 10 USC Section 2434 and involves an organization out of the chain of command of the acquiring agency. A looser meaning is an estimate done by an organization unbeholden to the program manager in funding or accountability. The loosest meaning is simply a separate estimate. Generally cross check, validation, and independent estimate reflect increasing levels of detail, fidelity, and persuasiveness in corroborating a cost estimate. *Note: “Independent” has many meanings. The most stringent meaning is in Title 10 USC Section 2434 and involves an organization out of the chain of command of the acquiring agency. A looser meaning is an estimate done by an organization unbeholden to the program manager in funding or accountability. The loosest meaning is a separate estimate. Unit I - Module 2
53
If You Don’t Show Your Work, You Don’t Get Any Credit!
Documentation Within reason, more information is better than less Any information that is used in the analysis must be included in the documentation Documentation should be adequate for another cost analyst to replicate your technique Like they used to tell you in math class…. Documentation is necessary to show the ground rules, assumptions, data, and calculations used to develop an estimate. If there is any doubt about whether to include something, you should include it. More information is better than less, within reason. All data used in the analysis should be included in the documentation, with sufficient detail that another analyst would be able to recreate your work based solely on the documentation. This replicability is the acid test for documentation. Documentation is another way to lend credibility to an estimate. A well-documented estimate will help convince others that you have done a thorough job in developing your estimate. If You Don’t Show Your Work, You Don’t Get Any Credit! Unit I - Module 2
54
Comparison of Techniques
Thus far, we’ve provided a look at the definition, application, strengths and weaknesses, and examples of the basic cost estimating techniques, and factors to consider in evaluating them for use in developing a cost estimate. The goal is for you to develop the analytic skills and experience to assess potential techniques for yourself. For what it’s worth, this section provides a brief look at the various cost estimating techniques side by side. Unit I - Module 2
55
Hey, it’s a joke, lighten up!
Comparison – Advocacy Advocates of Build-Up drink beer and say: More detailed = more accurate Analogy is prey to invalid comparisons Parametric is too “theoretical” Advocates of Analogy drink bourbon and say: Like things cost like amounts Build-Up is prey to omission and duplication Parametric is “diluted” by less applicable systems Advocates of Parametric drink wine and say: Most thoroughly based on historical data Analogy is just a one-point CER through the origin! As mentioned at the beginning of this module, there are vehement advocates of each cost estimating technique. We recommend being aware that there is predilection among some to get involved in “holy wars” on this subject but not getting involved at a dogmatic level. This slide is a light-hearted look at how advocates of a particular technique might view three of the major techniques. Build-Up advocates say that a more detailed estimate equates to a more accurate estimate. They appeal to the intuition that a conflation of many small elements is likely to cause uncertainties to balance out instead of compound. They argue that Analogy leads to invalid comparisons, and Parametric is too theoretical. Analogy advocates believe that the analogy is the single best basis for cost and rest on the foundation of this technique that like things cost like amounts. Build-Up is seen as not inclusive, with too high a probability of significant omissions or duplication (double-counting). Parametric is seen as diluted by less applicable systems. Parametric advocates say their technique is most thoroughly based on historical data. They argue that Analogy is just a one-point CER through the origin (!) and share the Analogy advocates’ objection that Build-Up is not inclusive. Unit I - Module 2
56
Comparison – Life Cycle Applicability
[Chart: approximate proportional usage of Analogy, Parametric, Engineering (Build-Up), and Extrapolation from Actuals across the program life cycle, from gross estimates in Technology Development (Phase A) and Design (Phase B) to detailed estimates in Production (Phase C) and Operations and Support (O&S).] This chart shows when different cost estimating techniques are commonly applied relative to the Department of Defense (DoD) Program Phase Life Cycle Structure. It shows the approximate proportional usage of cost techniques by phase. It helps give an idea of the appropriate time to apply particular techniques and could be applied to non-DoD programs as well. At the beginning of a program, during the concept and design phases, there is more emphasis on using analogies and parametrics. In these early phases, gross estimates are the norm, as detailed estimates are not usually possible with poor program definition, changing requirements, and scarce cost data. As the program matures, it becomes more defined, additional data are collected, and the estimates get more detailed. Industrial Engineering (Build-Up) and Extrapolation from Actuals are used more frequently as the program transitions to Production and Deployment and Operations and Support (O&S). Keep in mind that the timeline on this chart represents when the estimate is conducted, not the phase being estimated. An LCCE should include all phases, and use of parametrics in estimating O&S, for example, is common in early stages. The graphic shown here is but one layer of the expansive DAU Integrated Life Cycle Chart, The Artist Formerly Known As Defense Systems Management College (DSMC) Chart #3000R4 (2001). The complete chart shows many other related disciplines, such as Budgeting and Systems Engineering, in the Acquisition Management process. Integrated Defense Acquisition, Technology and Logistics Life Cycle Management Chart, Defense Acquisition University (DAU). Unit I - Module 2
57
Cost Estimating Techniques Summary
You need to have all the cost estimating techniques in your repertoire For each, you need to know: What it is When to apply How to execute Strengths and Weaknesses Challenges The supporting data required This module has provided an overview of the cost estimating techniques you should understand. For each technique you should know what it is, when to apply it, how to apply it, strengths, and weaknesses. The most important thing to remember is that there are many different techniques which can be applied in different situations and not to limit your repertoire, but to carefully consider each technique analytically using what you have learned in this module and will learn in the rest of this course. The next module, Module 3 Parametric Estimating, will provide you with much more detail on how to develop and use parametrics. Unit I - Module 2
58
Resources Integrated Defense Acquisition, Technology, and Logistics Life Cycle Management chart, Defense Acquisition University (DAU) International Society of Parametric Analysts (ISPA), Parametric Estimating Handbook, 4th Edition, April 2008 Resources include the famous DAU wall chart, which shows not only Cost Estimating and Analysis but other disciplines relative to the DoD Program Phase Life Cycle Structure; and the seminal handbook on Parametric Estimating, the flagship product of the erstwhile International Society of Parametric Analysts (ISPA), which recently merged with the Society of Cost Estimating and Analysis (SCEA) to form ICEAA. Unit I - Module 2
59
Related and Advanced Topics
Analogies and Rates Below-The-Line (BTL) Factors Schedule Estimating Operations and Support (O&S) estimating The next section of the module covers some related and advanced topics, with a focus on some specific applications of the cost estimating techniques we’ve just learned, including estimating schedules and operating and support (O&S) costs. Unit I - Module 2
60
Analogies and Rates Analogy scaling can be expressed as a rate
Such rates are common in certain circles $/lb, mhrs/ton, mhrs/LOC, etc. Reciprocal = productivity (e.g., LOC/mhr) Mathematically equivalent: $5.2M/12,000 lbs = $433/lb of thrust $433/lb x 16,000 lbs = $6.9M Same concerns as with adjusted analogy Prefer regression-based CER, when possible When estimating using an analogy, the factor used for scaling can also be expressed as a rate. Some rates are commonly used in certain estimating circles. For example, dollars per pound, man hours per ton, man hours per Line of Code (LOC), etc. The reciprocal of the rate can also be used to measure productivity. For example, the number of Lines of Code that were produced in one man hour can provide insight into the productivity of an individual or an overall software development effort. In some cases, the rate may be known for the production or development effort or the rate can be derived from historical data. Continuing the aircraft example from earlier in the module, the historical and analogous system cost $5.2 million, and its engine had 12,000 lbs of thrust. The rate that was derived from this data was that it cost $433 per pound of thrust to build the aircraft. Our new aircraft’s engine will have 16,000 lbs of thrust and by applying our derived rate of $433/lb, we estimate that the new aircraft will cost $6.9 million. Note that this is mathematically equivalent to scaling the cost by the ratio of thrusts, as was shown earlier. The same concerns that we had when we discussed using an adjusted analogy apply when you are using rates - even if the rate is commonly known! Estimating based on an analogy is tantamount to a linear equation whose graph passes through the origin. The “straight scale” represented by this type of approach accounts for neither fixed costs (non-zero intercept) nor (dis)economies of scale (exponent different than one). To ensure that the estimating technique is used correctly, there should be a mathematical, scientific, or at least logical reason for the ratio used in the adjustment. In the above example, is there a compelling scientific or engineering reason that cost should be directly proportional to thrust for an engine? Relationships based on physical properties (as opposed to performance measures) may be easier to discern and justify. When possible, it is preferable to use a regression-based CER to develop a cost estimate. Unit I - Module 2
61
Below-The-Line (BTL) Factors
Typically Systems Engineering, Integration and Test, and Program Management (SEITPM) Often a function of Prime Mission Equipment (PME) Beware non-statistically-based factors Similar in application to burdens like Overhead and G&A, but less “deterministic” Should be modeled using Functional Correlation in the risk model AKA “Cost-on-Cost” CERs Below-The-Line (BTL) factors are also known as “cost-on-cost” CERs. A factor uses the cost of another element to predict cost based on a simple multiplicative relationship. BTL factors are unitless and often expressed as a percentage. BTL factors are typically used to model support functions, such as Systems Engineering, Integration and Test, and Program Management, or SEITPM collectively. SEITPM for a program can be calculated as percentage of the program’s prime mission equipment (hardware and software). Beware that BTL factors in use may not be statistically-based. BTL factors are similar in application to burdens like Overhead and General and Administrative (G&A) costs, but BTL factors are less “deterministic”. BTL factors may vary based on the system or program that is being estimated. These factors should be based on historical information and derived through statistical methods. In order to result in a strict factor, the regression must be “forced” through the origin. In general, it is better to sacrifice the degree of freedom to obtain a better fit by allowing a non-zero y-intercept. The risk associated with costs derived from BTL factors should be modeled using functional correlation in the risk analysis. Because the cost derived from a BTL factor is based on an estimated cost, which has its own risk and uncertainty, that must be taken into account. Module 9 Cost and Schedule Risk Analysis will discuss functional correlation in more detail. For a related discussion on rates, factors, and ratios, see Module 3 Parametric Estimating. Unit I - Module 2
62
Below-The-Line Factor Example
SEITPM for a space system Historically 20% of PME Prime Mission Equipment (PME): Hardware cost + Software cost = $2M The estimate for SEITPM is: 0.2 * $2M = $400K Note: SEITPM may vary based on the historical and current program data In this example, SEITPM is based on the amount of prime mission equipment for an unmanned space system. If the estimated cost of hardware and software for the space system is two million dollars, then the SEITPM for the system is estimated as 20 percent of that, or $400,000. In practice, it is important to analyze the current and historical data associated with the system to know what factors are appropriate to apply. Further analysis may find that SEITPM may vary based on the phase of development of a system or on the work activities. See the cited paper for more detail. “SE/IT/PM Factor Development Using Trend Analysis”, A. Wekluk, N. Menton, ISPA/SCEA, 2007 Unit I - Module 2
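The functional correlation point from the previous slide can be illustrated with a small Monte Carlo sketch: the 20% factor is applied to each PME draw rather than to a fixed PME value, so the SEITPM uncertainty moves with the PME uncertainty. The triangular distribution parameters here are notional.

# Sketch of functional correlation: a BTL factor applied draw-by-draw
# to an uncertain PME cost inside a Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(2016)
pme = rng.triangular(1.8, 2.0, 2.6, size=10_000)   # PME cost draws, $M (notional)
seitpm = 0.20 * pme                                 # factor applied per draw
total = pme + seitpm
print(f"Mean total: ${total.mean():.2f}M, 80th percentile: ${np.percentile(total, 80):.2f}M")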
63
Schedule Estimating Estimating techniques can also be used to estimate a project schedule (duration) Analogy Parametric Build-up/Extrapolation from Actuals = IMS Same considerations hold true for schedule estimating Method Application Strengths, weaknesses, and challenges Uncertainty and risk “Best Practices for Project Schedules” (Exposure Draft), GAO G, 30 May The estimating techniques that we discussed earlier in this module can also be used to predict a project’s schedule (duration). Schedule estimating is important because it can help assess whether a program’s baseline schedule is realistic, which affects both time-phasing of the cost estimate (which costs occur when) and the total cost for time-dependent elements such as level-of-effort “standing armies.” To estimate the schedule of a project based on an analogy, historical information from a similar project will be used to scale its duration to that of the new project. The parametric estimating technique can be used to develop a SER or Schedule Estimating Relationship. In this method, the historical information from multiple projects is used to develop a statistical relationship between the schedule and the schedule-driving parameter, often total effort. The build-up technique can be used when schedule estimates are known for the lower-level work tasks and then summed to estimate the total project schedule. This is done within the framework of an integrated master schedule, or IMS, which in turn supports extrapolation once the project has begun to execute that schedule. This approach tends to be the focus of GAO’s “sequel” to the Cost Assessment Guide, available at the given link. The same strengths, weaknesses, and challenges discussed for each of these estimating methods also apply when using the techniques to estimate schedule, as do other considerations such as uncertainty and risk. Time-phasing is a topic related to schedule estimating and is discussed in more detail in Module 5 Inflation and Index Numbers. Schedule risk is also further discussed in Module 9 Cost and Schedule Risk Analysis. Unit I - Module 2
64
Schedule Estimating - Analogy
Estimate the integration schedule for a system that is made up of 15 components Based on the completion time from the integration of System A System B integration schedule = (40 months/10 components) * (15 components) = 60 months OR = 40 months * (15/10) = 60 months In this example, our analogous system (System A) is made up of 10 components, which took 40 months to integrate. This equates to a rate of 4 months per component. Based on the completion time for the integration of System A, the integration schedule for System B will be 15 components times 4 months per component, or 60 months. That’s the “rate view” of the calculation. The more purist analogy view is to scale the 40-month schedule by the ratio of the number of components (15 to 10), or 40 times 1.5 = 60. While this sort of an approach is intuitive and appealing, we should be worried that a complex task like integration may not scale linearly, as is implicit in an analogy. Maybe system complexity increases exponentially with the number of components, so that we would expect System B integration to take substantially longer than 60 months. Or perhaps instead there is a significant non-recurring component that is fairly insensitive to the number of components, after which the recurring integration tasks can be completed quickly, in which case System B integration would take significantly less than 60 months to complete. Unit I - Module 2
65
Schedule Estimating – Parametric
In this example, the historical information from 10 different efforts can be used to predict the schedule for a project based on the total number of labor hours required. Similar to the “cost-on-cost” BTL factors just discussed, this is essentially a “time-on-time” estimating relationship. It requires you to estimate the total labor hours for the job first, and based on that estimate, you can determine an estimated schedule (duration). Note that the slope coefficient of the SER will allow you to derive an approximate average manning level for the predicted duration. A key strength of the parametric technique is that it establishes a clear pattern across several comparable efforts. It also gives you objective measures of statistical significance and uncertainty. Unit I - Module 2
66
Schedule Estimating – Parametric Constrained Schedule
If the previous slide is a typical example of a parametric SER, or Schedule-Estimating Relationship, then this one illustrates SAIV, or Schedule As an Independent Variable. By reversing the axes, we now wish to predict the total effort that will be needed to complete the project in a certain timeframe, under a constrained schedule. This is an example of schedule compression, where you need to ramp up to finish within a shorter duration. In an unconstrained schedule (as seen on the previous slide), you might have more of a direct relationship between total labor hours and schedule duration. Note that in the example shown, since total effort increases linearly as schedule decreases, the required staffing level will increase more than linearly. For example, 70 person-months divided evenly across 20 calendar months gives an average staffing level of 3.5 full-time equivalents (FTEs). However, if you try to cut your schedule in half to 10 calendar months, you now need 95 person-months, or an average of 9.5 FTEs, considerably more than double. At some point, schedule compression will cease to be effective at all. This is the proverbial “nine women can’t have a baby in one month.” Also beware trying to compress schedule after the initial plan has been established. The very same Frederick Phillips “Fred” Brooks, Jr., gave us Brooks’ Law: “Adding manpower to a late software project makes it later.” “The bearing of a child takes nine months, no matter how many women are assigned.” -Fred Brooks, The Mythical Man-Month: Essays on Software Engineering Unit I - Module 2
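The staffing arithmetic in this example is simple enough to sketch directly; the effort figures are the ones quoted above.

# Sketch: average FTEs rise faster than linearly under schedule compression.
def avg_fte(person_months, calendar_months):
    return person_months / calendar_months

print(avg_fte(70, 20))   # unconstrained: 3.5 FTEs
print(avg_fte(95, 10))   # compressed: 9.5 FTEs, considerably more than double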
67
Schedule Estimating – IMS
Estimate of total schedule by rolling-up lower level elements in an Integrated Master Schedule (IMS) Requires basis for individual durations Beware of deterministic sums for network schedules Schedules can also be estimated using the build-up technique by constructing what is known as an Integrated Master Schedule, or IMS. In this method, the total schedule is estimated by rolling-up the schedules of the lower level estimates. This method assumes that there is a valid basis for the durations of the lower-level elements. In schedule estimates based on this technique, beware of deterministic sums for network schedules. It is important to understand the critical path of the schedule and how it affects the estimate. While the analogy and parametric techniques may lend themselves to a quick and dirty quantification of risk and uncertainty, a build-up schedule estimate will almost certainly require a Monte Carlo simulation to translate individual task duration estimates into a top-level schedule distribution. Module 15 Earned Value Management (EVM) includes more information on schedule analysis and critical paths. Module 9 Cost and Schedule Risk Analysis includes several examples of schedule risk. While estimating total duration from an IMS constructed prior to project execution is tantamount to the build-up technique applied to schedule, once the project begins to execute, extrapolation from actuals can be used to project from performance to date against the planned schedule to estimate a schedule (duration) at completion. This approach is often termed Earned Schedule (ES). INT 02 “Advanced Schedule Analysis,” David T. Hulett, SCEA/ISPA, 2012. Unit I - Module 2
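A minimal Monte Carlo sketch of the “beware deterministic sums” warning: two parallel paths merge, the finish is the maximum of the two, and right-skewed task durations plus the path merge push the simulated mean finish past the value computed from most-likely durations. All task distributions here are notional.

# Sketch: deterministic rollup vs. simulated finish for a tiny task network.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
path_a = rng.triangular(8, 10, 15, n) + rng.triangular(4, 5, 8, n)  # two tasks in series
path_b = rng.triangular(9, 11, 14, n)                                # one parallel task
finish = np.maximum(path_a, path_b)                                  # paths merge

deterministic = max(10 + 5, 11)   # rollup of most-likely durations
print(f"Deterministic finish: {deterministic} months")
print(f"Simulated mean finish: {finish.mean():.1f} months")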
68
Operations and Support (O&S) Cost Estimating
Operations and Support (O&S) costs normally make up a large portion of system life cycle costs and are often overlooked O&S costs are defined as all of the costs associated with operating, maintaining, and supporting a system O&S costs include costs for: Personnel Consumable and repairable materials Organizational, intermediate, and depot maintenance Hardware and Software Facilities Sustainment AKA Operating and Support (O&S), Operations and Sustainment (O&S), Operations and Maintenance (O&M) The cost estimating techniques described earlier in this module are also applicable to Operations and Support or O&S costs. Because O&S costs normally account for a large portion of the total life cycle cost for a system (some sources say 70-80% for certain types of systems), and because they are often excluded or overlooked in the decision making process, we wanted to include some more information regarding O&S cost estimating in this section and stress its importance. The subject could easily be an entire module in and of itself. O&S costs are defined as all of the costs associated with operating, maintaining, and supporting a system. These costs are necessary and important to include in Life Cycle Cost Estimates (LCCEs) to ensure that users and decision makers understand the total cost of a system or program. The acronym O&S may also be used to refer to Operating and Support costs, Operations and Sustainment costs, or Operations and Maintenance (O&M) costs (which is more properly the associated appropriation type or “color of money”). In this module, we will assume the definition for all of these is the same. O&S costs can include costs for: personnel; consumable and repairable materials; organizational (“O-level”), intermediate (“I-level”), and depot (“D-level”) maintenance (to include both Hardware and Software); facilities; sustainment; etc. The erstwhile Office of the Secretary of Defense (OSD) Cost Analysis Improvement Group (CAIG) published a 2007 update to the 1992 Operating and Support Cost Estimating Guide, which can be found along with other O&S estimating resources at the given link. DoD Operating and Support Cost Estimating Guide, OSD CAIG [sic], October 2007. Unit I - Module 2
69
O&S Cost Estimating - Analogy
Annual developed software maintenance cost for the new Ground Processing System (System B) can be estimated based on an analogy to System A Based on the Source Lines of Code (SLOC) from System A, the developed software maintenance cost for System B can be calculated as ($7.5M/2.5M) * 5M = $15M OR ($7.5M) * (5M/2.5M) = $15M An analogy can be used to develop an estimate for Operations and Support costs. In this example, we are trying to estimate the annual developed software maintenance cost for a Ground Processing System. System A is the system that is analogous to System B. Based on the amount of Source Lines of Code for the developed software of both systems, the annual cost can be estimated in one of two mathematically equivalent ways: (1) the "rates view," where the derived rate of $3/SLOC is applied to the new code of 5M SLOC to get an annual cost of $15M; or (2) the "analogy view," where the annual cost of $7.5M is scaled by the ratio of SLOC (5M / 2.5M = 2.0, or twice as much code) to get the same answer (twice the cost). Another common approach to estimating software maintenance is to determine the amount of code that can be maintained by each maintainer on an ongoing basis and divide that ratio into the total code base to determine the size of the maintenance staff required. Of course, that ratio should be based on historical data from comparable programs. Unit I - Module 2
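As a concrete check of the two mathematically equivalent views above, here is a minimal sketch using the slide's figures; the variable names and the maintainable-SLOC-per-maintainer ratio at the end are illustrative assumptions, not from the slide.

```python
# The dollar and SLOC figures come from this slide's analogy example;
# the staff-sizing ratio below is hypothetical.
system_a_cost = 7.5e6   # annual developed software maintenance, System A ($)
system_a_sloc = 2.5e6   # source lines of code, System A
system_b_sloc = 5.0e6   # source lines of code, System B

# (1) Rates view: derive a $/SLOC rate from System A, apply to System B.
rate = system_a_cost / system_a_sloc             # $3 per SLOC
rates_view = rate * system_b_sloc                # $15M

# (2) Analogy view: scale System A's cost by the ratio of code sizes.
analogy_view = system_a_cost * (system_b_sloc / system_a_sloc)  # 2x code, 2x cost

assert rates_view == analogy_view
print(f"Estimated annual maintenance for System B: ${rates_view / 1e6:.1f}M")

# Staff-sizing variant from the notes: if one maintainer can sustain a given
# amount of code (100K SLOC here is an assumed, hypothetical ratio, which in
# practice should come from comparable historical programs), required staff
# is the total code base divided by that ratio.
sloc_per_maintainer = 1.0e5
print(f"Required maintenance staff: {system_b_sloc / sloc_per_maintainer:.0f}")
```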
70
O&S Cost Estimating - Parametric
This example demonstrates how a parametric estimate can be used to predict the O&S costs associated with the overhaul of a ship. In this case, a non-linear relationship appears to exist between the average age of the ships and the overhaul cost. The first cited paper and its predecessors examine the "age effect" associated with the maintenance costs of ships and other platforms. The second cited paper examines O&S costs for aircraft. Module 3 Parametric Estimating provides a more straightforward example of a Purchased Services CER, where operational parameters (crew size and cold iron hours) are the cost drivers. Purchased Services is a standard O&S cost element for naval ships. "How Age Affects Operations and Support Costs Differently Across Platforms," S. Grinnell, J. Summerville, R. Coleman, SCEA, 2006. "O&S Physics Based Modeling," Kevin Cincotta, DoDCAS, 2006. Unit I - Module 2
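Since the slide's data are not reproduced here, the following sketch fits a power-form CER, cost = a * age^b, to invented ship-age/overhaul-cost points purely to illustrate the mechanics of estimating a non-linear relationship; it is not the cited papers' model or data.

```python
# Fit a non-linear (power-form) CER by ordinary least squares in log-log
# space. The age/cost pairs below are invented for illustration only.
import numpy as np

age = np.array([5, 8, 12, 15, 20, 25], dtype=float)     # avg ship age (years)
cost = np.array([12, 16, 24, 27, 38, 45], dtype=float)  # overhaul cost ($M)

# Taking logs linearizes cost = a * age^b into ln(cost) = ln(a) + b*ln(age),
# so a degree-1 polyfit recovers the exponent b and intercept ln(a).
b, ln_a = np.polyfit(np.log(age), np.log(cost), 1)
a = np.exp(ln_a)
print(f"CER: cost = {a:.2f} * age^{b:.2f}")
print(f"Predicted overhaul cost at age 18: ${a * 18**b:.1f}M")
```

An exponent b between 0 and 1 would indicate costs growing with age but at a decreasing rate; b greater than 1 would indicate accelerating growth, which is why understanding the shape of the age effect matters for the estimate.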
71
O&S Cost Estimating – Build-up
Example: reliability-based logistics estimate Logistics costs for a weapons system broken down into 25 categories Bottom-up estimate rolls up costs associated with each of these categories Some elements are true build-ups The build-up cost estimating technique can also be applied to O&S estimates. An example of a build-up cost estimate is a reliability-based logistics estimate. The Cost Analysis Strategy Assessment (CASA) model is a total ownership cost decision support tool used by the United States Army. It is a bottom-up model that provides a total cost estimate based on lower-level logistics costs for a weapons system across 25 categories. Each of the lower-level estimates needs to have a strong basis, and to qualify as a true build-up, they should be derived from standard relationships and reliability data such as mean time between failures (MTBF), mean time to repair (MTTR), etc.; a sketch of this idea follows below. Extrapolation from actuals may be the most common technique for fielded programs, especially non-major ones. The budget for one year may simply be the previous year's budget, with or without incremental adjustments. One hopes that, at a minimum, escalation will be included. Unit I - Module 2
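As an illustration of the "true build-up" idea in the notes, the sketch below rolls up annual corrective-maintenance costs from MTBF/MTTR and material-cost inputs; the item list, labor rate, and operating tempo are all hypothetical assumptions, not CASA model data or categories.

```python
# Reliability-based build-up of annual corrective-maintenance cost.
# All inputs are invented for illustration.
OPERATING_HOURS_PER_YEAR = 2_000
LABOR_RATE = 120.0  # fully burdened $/maintenance-hour (assumed)

# (item name, MTBF hours, MTTR hours, material cost per repair in $)
items = [
    ("radar LRU",    500.0,   4.0, 2_500.0),
    ("power supply", 1_200.0, 2.5, 800.0),
    ("display unit", 3_000.0, 1.5, 1_200.0),
]

total = 0.0
for name, mtbf, mttr, material in items:
    failures = OPERATING_HOURS_PER_YEAR / mtbf          # expected failures/yr
    item_cost = failures * (mttr * LABOR_RATE + material)
    total += item_cost
    print(f"{name:12s}: {failures:5.2f} failures/yr -> ${item_cost:,.0f}/yr")

print(f"Rolled-up annual corrective maintenance: ${total:,.0f}")
```

The strength of such an estimate rests entirely on the basis for each input: MTBF and MTTR should come from demonstrated reliability data or comparable fielded systems, not engineering judgment alone.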