Schedule & effort.

Presentation on theme: "Schedule & effort."— Presentation transcript:

1 Schedule & effort
(Title image: http://www.flickr.com/photos/28481088@N00/315671189/sizes/o/)

2 Problem
Our ability to realistically plan and schedule projects depends on our ability to estimate project costs and development effort.
To produce a reliable cost estimate, we need a firm grasp of the requirements, as well as of our approach to meeting them.
Typically, costs must be estimated before these are fully understood.

3 Planning big projects
1. Figure out what the project entails – requirements, architecture, design
2. Figure out dependencies & priorities – what has to be done in what order?
3. Figure out how much effort it will take
4. Plan, refine, plan, refine, …

4 What are project costs?
For most software projects, costs are:
– Hardware costs
– Travel & training costs
– Effort costs

5 Aggravating & mitigating factors
– Market opportunity
– Uncertainty/risks
– Contractual terms
– Requirements volatility
– Financial health
– Opportunity costs

6 Cost drivers
– Software reliability
– Size of application database
– Complexity
– Analyst capability
– Software engineering capability
– Applications experience
– Programming language expertise
– Performance requirements
– Memory constraints
– Volatility of the virtual machine environment
– Use of software tools
– Application of software engineering methods
– Required development schedule

7 What are effort costs?
Effort costs are typically the largest of the three types of costs (hardware, travel & training, and effort), and the most difficult to estimate. Effort costs include:
– Developer hours
– Heating, power, and space
– Support staff: accountants, administrators, cleaners, management
– Networking and communication infrastructure
– Central facilities such as a rec room and library
– Social security and employee benefits

8 Software cost estimation – Boehm (1981)
– Algorithmic cost modeling – base the estimate on project size (lines of code)
– Expert judgment – ask others
– Estimation by analogy – base the cost on experience with similar projects
– Parkinson's Law – project time will expand to fill the time available
– Pricing to win – the cost will be whatever the customer is willing to pay
– Top-down estimation – estimation based on function/object points
– Bottom-up estimation – estimation based on components

9 Productivity metrics
Lines of code
– Simple, but not a very meaningful metric
– Easy to pad, affected by programming language
– How to count revisions, debugging, etc.?
Function points
– Amount of useful code produced (goals/requirements met)
– Less volatile, more meaningful, but not perfect

10 Function points
Function points are computed by first calculating an unadjusted function point count (UFC). Counts are made for the following categories (Fenton, 1997):
– External inputs – items provided by the user that describe distinct application-oriented data (such as file names and menu selections)
– External outputs – items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these)
– External inquiries – interactive inputs requiring a response
– External files – machine-readable interfaces to other systems
– Internal files – logical master files in the system
Each of these is then assessed for complexity and given a weighting from 3 (for simple external inputs) to 15 (for complex internal files).

11 Unadjusted Function Point Count (UFC)
Weighting factor by item and complexity:
Item                 Simple  Average  Complex
External inputs         3       4        6
External outputs        4       5        7
External inquiries      3       4        6
External files          7      10       15
Internal files          5       7       10
Each count is multiplied by its corresponding complexity weight, and the results are summed to produce the UFC (a small computation sketch follows).
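To make the weighting concrete, here is a minimal sketch (not part of the original slides) that computes a UFC from per-category counts using the weights in the table above; the example counts are invented for illustration.

```python
# Minimal sketch: compute an Unadjusted Function Point Count (UFC)
# from per-category counts, using the weights in the table above.
# The example counts below are made up for illustration.

WEIGHTS = {                    # (simple, average, complex)
    "external inputs":    (3, 4, 6),
    "external outputs":   (4, 5, 7),
    "external inquiries": (3, 4, 6),
    "external files":     (7, 10, 15),
    "internal files":     (5, 7, 10),
}
LEVELS = {"simple": 0, "average": 1, "complex": 2}

def ufc(counts):
    """counts: {(category, level): count}, e.g. {("external inputs", "simple"): 12}."""
    return sum(n * WEIGHTS[cat][LEVELS[lvl]] for (cat, lvl), n in counts.items())

# Hypothetical system: 12 simple inputs, 4 complex outputs, 6 average inquiries,
# 2 average external files, 3 simple internal files.
example = {
    ("external inputs", "simple"): 12,
    ("external outputs", "complex"): 4,
    ("external inquiries", "average"): 6,
    ("external files", "average"): 2,
    ("internal files", "simple"): 3,
}
print(ufc(example))   # 12*3 + 4*7 + 6*4 + 2*10 + 3*5 = 123
```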

12 Object points
Similar to function points (used to estimate projects based heavily on reuse, scripting, and adaptation of existing tools):
– Number of screens (simple ×1, complex ×2, difficult ×3)
– Number of reports (simple ×2, complex ×5, difficult ×8)
– Number of custom modules written in languages like Java/C (×10)

13 COCOMO II model
Supports the spiral model of development; supports component composition, reuse, and customization. Four sub-models:
– Application composition model – assumes the system is built from components; used for prototypes and development using scripts, databases, etc. (object points)
– Early design model – used after requirements, during the early stages of design (function points)
– Reuse model – integrating and adapting reusable components (LOC)
– Post-architecture model – the more accurate method, used once the architecture has been designed (LOC)

14 Application composition model
Used primarily to estimate the cost of prototyping efforts. Productivity estimates range from 4 to 50 object points per month, depending on experience and the availability/maturity of tools.

15 Early design model
Used once requirements are agreed upon, to get started on an architectural design.

16 Reuse model
For code that is adapted or automatically generated:
PM = (LOC needing to be adapted × % auto-generated) / productivity
where productivity ≈ 2,400 LOC per person-month (a worked example follows).
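A quick worked example of the formula above, with the adapted LOC and the auto-generated percentage invented purely for illustration:

```python
# Sketch of the reuse-model estimate for automatically generated code:
# PM = (ASLOC * AT/100) / ATPROD, with ATPROD ~ 2,400 LOC per person-month.
# ASLOC and AT below are made-up example values.

ATPROD = 2400          # LOC handled per person-month (from the slide)
ASLOC = 20000          # adapted source lines of code (assumed example)
AT = 30                # percentage of that code which is auto-generated (assumed)

pm_autogen = (ASLOC * AT / 100) / ATPROD
print(round(pm_autogen, 2))   # (20000 * 0.3) / 2400 = 2.5 person-months
```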

17 Post-architecture model
– Organic projects – relatively small, simple projects with small teams, good application experience, and less-than-rigid requirements
– Semi-detached projects – intermediate (in size and complexity) projects with mixed-experience teams and a mix of rigid and less-than-rigid requirements
– Embedded projects – software projects that must be developed within a set of tight hardware, software, and operational constraints
Based on values for each of these, the effort cost is estimated (a sketch of the calculation follows).
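The three project classes above come from the classic COCOMO formulation. As a hedged illustration only, the sketch below uses the basic-COCOMO (1981) coefficients, which are not quoted on the slide, to show how a size-based effort estimate falls out for each class.

```python
# Sketch: nominal effort for the three project classes named above, using the
# classic basic-COCOMO coefficients (Boehm, 1981). These coefficients are not
# on the slide; they are quoted only to make the calculation concrete.
# Effort (person-months) = a * (KLOC ** b)

COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def effort_pm(kloc, mode):
    a, b = COEFFS[mode]
    return a * kloc ** b

for mode in COEFFS:
    # Hypothetical 32 KLOC system; embedded projects cost the most per KLOC.
    print(mode, round(effort_pm(32, mode), 1))
```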

18 Intermediate COCOMO
Computes software development effort as a function of program size and a set of "cost drivers".
Product attributes
– Required software reliability
– Size of application database
– Complexity of the product
Hardware attributes
– Run-time performance constraints
– Memory constraints

19 Intermediate COCOMO
Personnel attributes
– Analyst capability
– Software engineering capability
– Applications experience
– Virtual machine experience
– Programming language experience
Project attributes
– Use of software tools
– Application of software engineering methods
– Required development schedule

20 Intermediate COCOMO
Each of the 15 attributes receives a rating on a six-point scale ranging from "very low" to "extra high" (in importance or value). An effort multiplier from the published COCOMO table applies to each rating. The product of all effort multipliers is the effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4 (a sketch follows).
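A minimal sketch of how the EAF folds into an estimate; the nominal effort and the individual multiplier values below are invented for illustration (real values come from the published COCOMO tables).

```python
# Sketch: intermediate COCOMO adjusts a nominal, size-based estimate by the
# effort adjustment factor (EAF), the product of the 15 attribute multipliers.
# The nominal effort and multiplier values here are assumed, not from the slide.
from math import prod

nominal_pm = 91.0        # nominal effort from a size-based estimate (assumed)
multipliers = {
    "required reliability": 1.15,   # rated high (assumed value)
    "product complexity":   1.15,   # rated high (assumed value)
    "analyst capability":   0.86,   # rated high (assumed value)
    "language experience":  0.95,   # rated high (assumed value)
    # ... the remaining 11 attributes rated "nominal" contribute 1.0 each
}

eaf = prod(multipliers.values())
print(round(eaf, 2), round(nominal_pm * eaf, 1))   # adjusted effort in person-months
```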

21 Example: Twitter repression report
Use case diagram:
– Actor "Repressed citizen": UC#1 Report repression; UC#2 Clarify tweet
– Actor "Concerned public": UC#3 View reports, with UC#3a View on map and UC#3b View as RSS feed

22 One possible architecture
Architecture diagram components: Twitter façade, Geocoder façade, Tweet processor, Database (MySQL), Apache+PHP, Mapping web site (Google Maps), RSS web service.

23 Activity graph: shows dependencies of a project's activities
[Activity graph with milestones 1a, 1b, 1c, 2, 3, 3a, 3b, 4]
– Milestone 2: DB contains real data
– Milestone 3: DB contains real, reliable data
– Milestone 4: Ready for public use

24 Activity graph: shows dependencies of a project's activities
– Filled circles for start and finish
– One circle for each milestone
– Labeled arrows indicate activities: what activity must be performed to get to a milestone?
– Dashed arrows indicate "null" activities

25 Effort
Ways to figure out effort for activities:
– Expert judgment
– Records of similar tasks
– Effort-estimation models
– Any combination of the above

26 Effort: expert judgment
Not a terrible way to make estimates, but estimates…
– often vary widely
– are often wrong
– can be improved through iteration & discussion
How long would it take to do the following tasks?
– Read tweets from Twitter via the API
– Send tweets to Twitter via the API
– Generate reports with Google Maps

27 Effort: records of similar tasks
Personal Software Process (PSP):
– Record the size of a component (lines of code), broken down into # of lines added, reused, modified, deleted
– Record the time taken, broken down into planning, design, implementation, testing, …
– Refer to this data when making future predictions (a minimal sketch follows)
Can also be done at the team level.
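A minimal sketch of the "refer to this data" step, assuming a simple history of (size, effort) records; the task names and numbers are made up for illustration.

```python
# Sketch of the PSP idea of estimating from your own history: store past tasks'
# size (LOC) and effort (hours), then predict a new task from your historical
# productivity. The records below are made-up examples.

history = [
    {"task": "parser",   "loc": 420, "hours": 21},
    {"task": "reports",  "loc": 310, "hours": 18},
    {"task": "importer", "loc": 550, "hours": 30},
]

loc_per_hour = sum(r["loc"] for r in history) / sum(r["hours"] for r in history)

def predict_hours(estimated_loc):
    """Predict effort for a new task from historical productivity."""
    return estimated_loc / loc_per_hour

print(round(loc_per_hour, 1), round(predict_hours(400), 1))
```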

28 Effort: estimation models
Algorithmic (e.g., COCOMO: COnstructive COst MOdel)
– Inputs = description of project + team
– Outputs = estimate of effort required
Machine learning (e.g., case-based reasoning)
– Gather descriptions of old projects + the time they took
– Run a program that creates a model → you now have a custom algorithmic method, with the same inputs/outputs as an algorithmic estimation method (see the sketch below)
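A hedged sketch of the estimation-by-analogy (case-based reasoning) idea: describe past projects with a few features, find the most similar ones, and borrow their effort. The feature choice, distance weighting, and numbers are all assumptions for illustration.

```python
# Sketch of estimation by analogy (case-based reasoning): describe past projects
# with a few numeric features, find the nearest past projects to the new one,
# and reuse their recorded effort. Features and numbers are invented.

past = [
    # (function points, team size, effort in person-months)
    (120, 3, 10),
    (300, 5, 28),
    (450, 8, 47),
]

def estimate(new_fp, new_team, k=2):
    """Average the effort of the k most similar past projects."""
    ranked = sorted(past, key=lambda p: abs(p[0] - new_fp) + 10 * abs(p[1] - new_team))
    return sum(p[2] for p in ranked[:k]) / k

print(estimate(280, 5))   # averages the two most similar past projects -> 19.0
```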

29 Using COCOMO-like models
1. Assess the system's complexity
2. Compute the # of application points
3. Assess the team's productivity
4. Compute the effort

30 Assessing complexity
e.g.: A screen for editing the database involves 6 database tables and has 4 views. This would be a "medium complexity screen." This assessment calls for a lot of judgment. (Pfleeger & Atlee)

31 Computing application points (a.p.)
e.g.: A medium complexity screen costs 2 application points.
3GL component = a reusable programmatic component that you create. (Pfleeger & Atlee)

32 Assessing team capabilities
e.g.: Productivity with low experience + nominal CASE tools: productivity = (7 + 13)/2 = 10 application points per person-month (assuming NO vacation or weekends!). (Pfleeger & Atlee)

33 CASE (computer-aided software engineering) tools
CASE tools offer many benefits for developers building large-scale systems. As spiraling user requirements continue to drive system complexity to new levels, CASE tools let engineers abstract away from the entanglement of source code to a level where architecture & design become apparent and easier to understand and modify. The larger a project, the more important it is to use a CASE tool in software development.

34 CASE tools
As developers interact with portions of a system designed by their colleagues, they must quickly locate the relevant subset of classes and methods and come to understand how to interface with them. Similarly, management must be able, in a timely fashion and from a high level, to look at a representation of a design and understand what's going on. Hence CASE tools are used.

35 Identify screens, reports, components
From the architecture (Twitter façade, Geocoder façade, Tweet processor, MySQL database, Apache+PHP, mapping web site with Google Maps, RSS web service):
3GL components
– Tweet processor
– Twitter façade
– Geocoder façade
Reports
– Mapping web site
– RSS web service

36 Use complexity to compute application points
3GL components (Tweet processor, Twitter façade, Geocoder façade): the simple model assumes every 3GL component is 10 application points → 3 × 10 = 30 a.p.
Reports (Mapping web site, RSS web service): each displays data from only a few database tables (3? 4?) and neither has multiple sections, so each is probably a "simple" report at 2 application points → 2 × 2 = 4 a.p.
Total: 30 + 4 = 34 a.p.

37 Assess the team's productivity & compute effort
Assume that at your company the team has…
– Extensive experience with websites and XML
– But no experience with Twitter or geocoders
– Since 30 of the 34 a.p. are on this new stuff, assume very low experience
– Virtually no CASE support… very low
→ therefore use the "very low" productivity rating (in application points per person-month) to compute the effort; a sketch follows. Note: this assumes no vacation or weekends.
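Putting slides 35–37 together as a sketch. The application-point totals come from the slides; the productivity of 4 application points per person-month for "very low" ratings is the standard COCOMO II application-composition table value and is an assumption here, since the slide does not quote the number.

```python
# Sketch combining slides 35-37. Application-point weights come from the slides;
# the productivity of 4 a.p. per person-month for "very low" experience and CASE
# maturity is assumed from the standard COCOMO II table, not stated on the slide.

components_3gl = ["tweet processor", "twitter facade", "geocoder facade"]
simple_reports = ["mapping web site", "RSS web service"]

app_points = 10 * len(components_3gl) + 2 * len(simple_reports)   # 30 + 4 = 34

PRODUCTIVITY = 4   # application points per person-month ("very low", assumed)

effort_pm = app_points / PRODUCTIVITY
print(app_points, round(effort_pm, 1))   # 34 a.p. -> about 8.5 person-months
```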

38 Distribute the person-months over the activity graph
[Activity graph with milestones 1a, 1b, 1c, 2, 3, 3a, 3b, 4]

39 The magic behind distributing person-months
Divide person-months between implementation and other activities (design, testing, debugging).
– Oops, forgot to include an activity for testing and debugging the components… revise the activity graph.
Notice that some activities aren't covered.
– E.g.: advertising; either remove it from the diagram or use other methods of estimation.

40 Do you believe those numbers?
Ways to get more accurate numbers:
– Revise numbers based on expert judgment or the other methods mentioned
– Perform a "spike": try something out and actually see how long it takes
– Use more sophisticated models to analyze how long components will really take
– Use several models and compare
Expect to revise estimates as the project proceeds.

41 Further analysis may give revised estimates…
[Revised activity graph with milestones 1a, 1b, 1c, 2, 3, 3a, 3b]

42 Critical path: the longest route through the activity graph
Sort all the milestones in "topological order" – i.e., sort milestones in terms of their dependencies. For each milestone (in order), compute the earliest that the milestone can be reached from its immediate dependencies (a sketch follows).
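A minimal sketch of that procedure, with milestone names echoing the earlier diagrams but with activity durations invented for illustration.

```python
# Sketch of the critical-path computation described above: process milestones in
# topological (dependency) order and keep the *longest* arrival time at each one.
# Milestone names echo the slides; activity durations are invented.
from collections import defaultdict

# activities: (from_milestone, to_milestone, duration in person-weeks)
activities = [
    ("start", "1a", 2), ("start", "1b", 1), ("1a", "1c", 3),
    ("1b", "2", 2), ("1c", "2", 1), ("2", "3", 4), ("3", "finish", 2),
]

order = ["start", "1a", "1b", "1c", "2", "3", "finish"]   # a topological order
earliest = defaultdict(float)

# Processing edges in order of their source milestone guarantees every edge into
# a milestone is handled before any edge out of it.
for frm, to, dur in sorted(activities, key=lambda a: order.index(a[0])):
    earliest[to] = max(earliest[to], earliest[frm] + dur)

print(dict(earliest))   # earliest["finish"] is the critical-path length (12)
```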

43 Example: computing the critical path
[Activity graph with milestones 1a, 1b, 1c, 2, 3, 3a, 3b]

44 Example: tightening the critical path
[Activity graph with milestones 1a, 1b, 1c, 2, 3, 3a, 3b]
What if we get started on the reports as soon as we have a (buggy) version of the database and components?

45 Gantt chart
Shows activities on a calendar:
– Useful for visualizing the ordering of tasks & slack
– Useful for deciding how many people to hire
– One bar per activity
– Arrows show dependencies between activities
– Milestones appear as diamonds

46 Example Gantt chart
The Gantt chart quickly reveals that we only need to hire two people (blue & green).

47 Two ways of scheduling
Suppose you are scheduling with a set of requirements and an architecture already in hand. In contrast, suppose you are scheduling before you have requirements and an architecture. How different would that be? What are the pros and cons of each approach?

48 What's next for you?
Updated vision statement – your chance for extra credit!
– Thursday presentation: each team gets 15 minutes to present how their vision has become clearer over time (PowerPoint presentation)
– You can include your requirements gathering, constraints, and other details of your work so far
– What are your future plans?
You will receive your midterms back tomorrow.

