Slide 1: What Is Software Requirements Engineering and Why Is It So Hard?
A Very Brief Look at a (Very) Few of the Many Issues and Some (But Even Fewer) of the Answers (some of which I'm pretty sure of; some I'm less so)

Slide 2: Requirements: The Most Critical and Least Well Understood Phase in Software Engineering
- Software errors found in field operations can be up to several hundred times more expensive to fix than if they were found in the requirements phase
- Requirements errors are responsible for a disproportionate share of fielded software problems; published results range from over 30% up to over 60%
- For safety-critical systems, requirements errors can be a lot more distressing than merely $$$

Slide 3: The Background and Motivation
- Current software engineering life cycle models and consensus documentation standards are inadequate guides to actually doing requirements engineering
- Newer OOA techniques such as UML tend to focus on requirements elicitation and high-level information portrayal
[Diagram: the standard waterfall (requirements analysis, design, code/implementation, test, maintenance), as in ANSI/IEEE Std 830, MIL-STD-2167, etc.]

Slide 4: Some of the Key Issues With Software Requirements Engineering
- Little or no agreement as to:
  - Are there really different "types" of requirements? If so, what are they?
  - What is "a" (single) requirement? How much information is really required to specify "a" requirement?
  - How many different levels of abstraction are possible? Useful? How many are appropriate for a given project? A given requirement?
  - What downstream activities (design or requirements analyses) are dependent on which levels of abstraction?
- Poor definitions for some (not all) of the key quality factors for requirements specifications:
  - Completeness?
  - Consistency? (Of what with what?)
  - Traceability? (Of what to what, for what purpose?)

Slide 5: Example: Are These All Well Defined, Distinguishable Types of Requirements?
Functional requirements, performance requirements, high-level requirements, detailed requirements, derived requirements, interface requirements, output requirements, input requirements, user requirements, design requirements, operational requirements, principal requirements, parasitic requirements, behavioral requirements...
And what's really the difference between a requirement and a constraint, anyway?

Slide 6: Functional Requirements: The Starting Point
- Generally, no two engineers will ever totally agree on exactly how many types of requirements there are
- But they probably will both agree that "functional" requirements need to be at the core of the requirements engineering process
- According to Webster's, a function is "the action for which a person or thing is specially fitted or used or for which a thing exists: purpose"
- All software, of whatever type, always has but a single purpose: provide acceptable outputs
- Functional requirements are thus statements about the acceptable, observable characteristics of outputs
  - Acceptable: what good are outputs with unacceptable characteristics?
  - Observable: what good is it to document unobservable characteristics?

Slide 7: Function vs. Performance: A Misleading Distinction
- What is observable in an output? Its value (bit pattern) and the time of its initial availability for observation, nothing else
- Function vs. performance does not split cleanly on value vs. time: value accuracy (e.g., ±¼ mile) is often, but not universally, considered a performance requirement, yet it does not involve time
- An output that comes out at the wrong time is not fulfilling the purpose of the software; hence the software is, at least for that instant, non-functional
- Conversely, when all the outputs do come out at the right time, the software can be said to be (at least partially) functional
- Thus statements about the acceptable observation times for outputs help determine whether or not the software is functional; turning around and stating that descriptions of acceptable observation times are not functional requirements (but performance) seems to be asking for confusion

Slide 8: The Real Point
- Better, perhaps, to speak of acceptable behavioral characteristics rather than getting overly hung up on the distinction (if any) between functional and performance requirements
- Most other common "types" of requirements appear to be either:
  - Waypoints (possibly fictitious) along the process of stepwise refinement of abstraction in the development of behavioral characteristics, or
  - Behavioral characteristics derived from other characteristics via various (too often implicit and imperfectly understood) closure criteria, or
  - Constraints on the development process or the design space
- The real need is to understand all the types of, and relationships among, the information that must be developed, specified, and analyzed prior to (various stages of) design

Slide 9: Is Behavioral Analysis for Software Really Different Than for Other Types of Engineered Artifacts?
- Other artifacts:
  - Many (most? all?) characteristics are specifiable as a (relatively) small set of numbers (e.g., the required payload of an airplane) or a set of equations and some boundary conditions
  - Always include, and are (usually? always?) driven by, functional requirements that are not simply stimulus/response behavior
- Software artifacts:
  - The number of discrete behavioral cases to be individually engineered is typically amazingly large
  - Stimulus/response behavior is the only type of functional requirement for software in any type of system

Slide 10: The Significance of Behavioral Complexity
- The domain-dependent set of conceptual abstractions by which software requirements are initially expressed is much larger and more diverse than the largely standardized basic conceptual vocabulary of other engineering projects
- "Design a bridge to span the Straits of Gibraltar that will carry 6 lanes of highway traffic in each direction" vs. "Design an air traffic control system for the United States that will ... will ... what?"
- As a result, the initial requirements are usually much less well understood at the beginning of a new software project than for other types of engineered artifacts
- Often, the very vocabulary used to characterize the behavior of a large software-intensive system may not even be known initially

Slide 11: The Significance of Behavioral Complexity (cont'd)
- For other engineered artifacts, figuring out the requirements is often not even really considered part of the engineering process, or at least not one that requires special tools, techniques, and expertise (separate from design expertise): the design of a bridge is hard; understanding its requirements, much less so
- For software, the emphasis is reversed: a much larger portion of the overall project's time and effort is spent in the "softest" of engineering phases (requirements specification). Trying to understand and document the requirements is hard; design, much less so
- Note: good design is still not trivial, but compliant and theoretically workable designs are almost always a dime a dozen; it's the immaturity of our measures of effectiveness for designs that makes coming up with good ones hard, not any difficulty in synthesizing something that will (probably) work

Slide 12: My Conclusion?
Software requirements engineering needs a much more detailed process model.
[Diagram: a five-step process model; later steps feed additional derived outputs back into the earlier steps]
1. Initial outputs, boundaries, and constraints
2. Output characteristics (and input references)
3. Standard robustness
4. Logical completeness and consistency
5. Output hazard analyses

Slide 13: Initial Outputs, Boundaries, Safety Requirements and Constraints
[Diagram: step 1 of the process model, producing principal outputs and initial derived outputs from sources such as semantic HMI design, use-case analysis, narrative specifications, existing interface documentation, and a preliminary hazard analysis]
1.1 Initial outputs
1.2 Black-box boundary identification
1.3 Constraints

Slide 14: Output Characteristics, and the Identification and Characterization of the Inputs Necessary to Specify Them
[Diagram: step 2 of the process model: output characteristics and their referenced inputs (and then their characteristics, and eventually, more outputs). It draws on the 1.1 initial outputs, the 1.2 black-box boundary, and additional derived outputs; it feeds steps 3, 4, and 5, as well as a coupling and cohesion analysis leading to initial modularization (top-level design). Stepwise refinement is appropriate here.]
2.1 Output fields: delineation and classification; basic abstraction(s); reference definition, a.k.a. initial algorithm definition
2.2 Output timing: proximate triggers
2.3 Preconditions (a.k.a. states)
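By way of illustration (this is not from the slides), here is a minimal sketch of how much information step 2 implies for a single output; every name, field, and value below is invented:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class OutputRequirement:
    """Illustrative record of the information step 2 asks for, per output."""
    name: str                        # 2.1 output field (delineated and classified)
    value_type: str                  # basic abstraction, e.g. a range in miles
    reference: Callable              # reference (initial algorithm) definition
    tolerance: float                 # acceptable deviation from the reference
    max_latency_s: float             # 2.2 output timing
    trigger: str                     # proximate trigger (the stimulus)
    preconditions: List[str] = field(default_factory=list)       # 2.3 states
    referenced_inputs: List[str] = field(default_factory=list)   # inputs this output depends on

# Hypothetical instance for the aircraft-range output used later in the talk.
aircraft_range = OutputRequirement(
    name="aircraft_range",
    value_type="miles",
    reference=lambda track: track["true_range"],   # stand-in reference definition
    tolerance=0.5,
    max_latency_s=0.25,
    trigger="new position report received",
    preconditions=["track is active"],
    referenced_inputs=["position reports"],
)
```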

Slide 15: Where Do Inputs Fit in This Picture?
- In the requirements phase, inputs are references used to help specify how the software should behave:
  - Output X must appear within 0.25 seconds after the occurrence of input Z
  - The output value of X must be within ±½ mile of the average of the last two position inputs of type Z
- The possibility of stepwise refinement of some such references has caused some confusion in the past
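As an illustration of inputs serving purely as references, here is a sketch of how those two example statements might be checked black-box against observed behavior; the function names and observed values are invented:

```python
def within_latency(input_time_s: float, output_time_s: float,
                   max_latency_s: float = 0.25) -> bool:
    """Output X must appear within 0.25 s after the occurrence of input Z."""
    return 0.0 <= output_time_s - input_time_s <= max_latency_s

def within_accuracy(output_value: float, position_inputs: list,
                    tolerance_miles: float = 0.5) -> bool:
    """The value of X must be within +/- 0.5 mile of the average of the
    last two position inputs of type Z (the inputs are only a reference)."""
    reference = sum(position_inputs[-2:]) / 2.0
    return abs(output_value - reference) <= tolerance_miles

# Example observations from a test run (values are made up):
print(within_latency(input_time_s=10.00, output_time_s=10.18))            # True
print(within_accuracy(output_value=12.4, position_inputs=[12.0, 12.6]))   # True
```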

Slide 16: Abstraction and Stepwise Refinement
- There is no easy mapping of levels of abstraction to "stages" of systems or requirements engineering or to standard engineering specification levels (if such really existed, which they don't, despite many managers' religious belief that they do)
- Example (a ladder of stepwise refinement):
  - The system shall be cost effective
  - The survival likelihood over a 2-hour mission shall exceed 98%
  - The individual target Pk shall exceed 99%
  - The single-shot Pk shall exceed 95%
  - The output a/c range shall be sufficiently accurate to permit intercept guidance to acquire the target 98% of the time
  - The output a/c range shall be accurate to within ±½ mile of the actual range of the actual aircraft at the time the range is output
  - The output a/c range shall be accurate to within ±¼ mile of the reference value computed by the following reference algorithm: [20 pages of math]

Slide 17: Accuracy References, Algorithms, and Requirements
- In the past, that last/lowest level of requirement was often written: "The software shall compute aircraft position using the following algorithm:"
- There are at least two problems with that language:
  - It is not a black-box testable requirement: you can't see what algorithm has actually been implemented without looking inside the box
  - It has also, at least in the past, led to some rather pointless arguments:
    - Between systems engineering (who wrote the requirement) and software engineering, who wanted to use "an equivalent" algorithm
    - Between software engineering and perhaps overly literal-minded QA types who wanted to see the implementation exactly matching the specified requirement, e.g., "the spec says 'compute using X=Y+Z' but you coded X=Z+Y" ...

Slide 18: Accuracy References and Algorithms (cont'd)
- By noting that the algorithm itself is not actually the requirement but only the definition of a reference against which the observable behavior will be tested, we can have our cake and eat it too:
  - Analysis, derivation, and specification of reference algorithms is still appropriately considered a requirements engineering activity (you can't write the requirements spec without a reference for the accuracy requirement on each approximate field, and, for that matter, for many definitions of acceptable values in exact fields)
  - Downstream design activities may choose to implement alternative but equivalent algorithms, but the notion of equivalence is now well defined (equivalent within the specified accuracy) and the burden of showing equivalence is where it should be: on the design team
  - After completion of refinement of abstraction and shrinkage of the black-box boundary, the reference algorithms themselves will refer to actual inputs, whose characteristics are then a source of additional (derived) requirements
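A toy illustration (not from the talk) of "equivalent within the specified accuracy": the spec's reference algorithm, a differently coded implementation, and the equivalence check that replaces textual matching. The algorithms and the tolerance are stand-ins:

```python
def reference_position(y: float, z: float) -> float:
    """Reference algorithm as written in the spec: X = Y + Z."""
    return y + z

def implemented_position(y: float, z: float) -> float:
    """What the design team actually coded: X = Z + Y."""
    return z + y

def equivalent_within_accuracy(y: float, z: float,
                               tolerance: float = 0.25) -> bool:
    """Equivalence is judged against the reference's output, not its source
    text: the implementation passes if its result stays within the specified
    accuracy of the reference algorithm's result."""
    return abs(implemented_position(y, z) - reference_position(y, z)) <= tolerance

print(equivalent_within_accuracy(3.0, 4.0))  # True: the literal-matching objection dissolves
```

Framed this way, the "X=Y+Z vs. X=Z+Y" dispute from the previous slide never arises: the spec constrains observable behavior, and the design team carries the burden of demonstrating that its chosen algorithm stays within tolerance.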

Slide 19: Problems with Abstraction References
- One contributor to some of the historic confusion in this area (e.g., is an algorithm in a requirements specification really a requirement?) has been that not all outputs permit meaningful specification at every level of abstraction: there may not be any externally observable reference to use as an abstract accuracy reference
- Look at the difference between:
  - "The output aircraft range shall be accurate to within ±½ mile of the actual range of the actual aircraft at the time the range is output", and
  - "The output of recommended course to intercept shall be accurate to within ±3° of ..." ... of what?
- There is no observable phenomenon to use as a (more abstract) reference for that latter requirement

Slide 20: Algorithms and Requirements: Conclusion
- Algorithms belong in a requirements specification (or an appendix published at the same time), but they are not in and of themselves requirements; they are definitions in terms of which requirements are stated
- Such an algorithm (in the requirements specification) is also not design: programmers are not required to use it, although they often will, as they are unlikely to want to duplicate the years of labor needed to come up with a different algorithm and prove its equivalence

Slide 21: Developing Robustness: Anticipating Unexpected, Undesired, or Even Downright Impossible Events as Seen in Referenced Inputs
[Diagram: step 3 of the process model (standard robustness), which also yields additional derived outputs]
3.1 Input validity definition: input fields (delineation & classification, validity definition); input timing; state predictability; assumptions about the environment's behavior
3.2 Responses to invalid inputs
3.3 Semi-final triggers and state preconditions
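A sketch of what step 3 might look like for one referenced input; the input, its validity limits, and the responses are all invented for illustration:

```python
from enum import Enum

class InputClass(Enum):
    VALID = "valid"
    OUT_OF_RANGE = "out of range"    # undesired but anticipated
    IMPOSSIBLE = "impossible"        # violates assumptions about the environment's behavior

def classify_position_report(range_miles: float, age_s: float) -> InputClass:
    """3.1 Input validity definition (the limits here are illustrative)."""
    if range_miles < 0 or age_s < 0:
        return InputClass.IMPOSSIBLE            # negative range or age cannot physically occur
    if range_miles > 250 or age_s > 5.0:
        return InputClass.OUT_OF_RANGE
    return InputClass.VALID

def response_to_report(range_miles: float, age_s: float) -> str:
    """3.2 Responses to invalid inputs must be specified, not left implicit."""
    cls = classify_position_report(range_miles, age_s)
    if cls is InputClass.VALID:
        return "update track"
    if cls is InputClass.OUT_OF_RANGE:
        return "discard report, log anomaly"
    return "discard report, raise integrity alarm"

print(response_to_report(range_miles=300.0, age_s=1.0))  # discard report, log anomaly
```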

Slide 22: Logical Completeness and Consistency
[Diagram: step 4 of the process model (logical completeness & consistency), which also yields additional derived outputs]
4.1 Individual requirements completeness
    4.1.1 Stimulus: events, conditions, and states; proximate triggers (positive and negative)
    Response: value, timing, uniqueness, STMO
4.2 Set completeness
4.3 Consistency: determinacy (consistency among output requirements); consistency between requirements and safety constraints
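A toy determinacy and set-completeness check in the spirit of step 4; the output requirements, trigger predicates, state variables, and thresholds are all invented:

```python
from itertools import product

# Invented stimulus predicates over a toy state space, one per output requirement.
triggers = {
    "issue_advisory":   lambda alt_ft, closing: closing and alt_ft < 1000,
    "issue_resolution": lambda alt_ft, closing: closing and alt_ft >= 900,
    "stay_silent":      lambda alt_ft, closing: not closing,
}

def check_determinacy_and_completeness(states):
    """4.3 determinacy: no state may trigger two different output requirements;
    4.2 set completeness: every state must trigger at least one."""
    for alt_ft, closing in states:
        firing = [name for name, pred in triggers.items() if pred(alt_ft, closing)]
        if len(firing) > 1:
            print(f"indeterminate: {firing} all fire at alt={alt_ft}, closing={closing}")
        elif not firing:
            print(f"incomplete: nothing covers alt={alt_ft}, closing={closing}")

# Enumerate a (toy) state space; the 950 ft closing case exposes the
# overlap between the advisory and resolution triggers.
check_determinacy_and_completeness(product([500, 950, 1500], [True, False]))
```

Real specifications have far too many discrete cases to enumerate by hand, which is exactly why the talk treats completeness and consistency as an explicit analysis step rather than something to eyeball.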

Slide 23: Summary of the Key Messages for Today
- Requirements engineering for software-intensive artifacts is (much) more complicated than for other types of artifacts
- Requirements engineering for software-intensive artifacts is much more complicated than most textbooks and practitioners know or admit
- There is a great deal of cant in modern software engineering, often all too effectively disguising the extent of our ignorance
- Which is not to say that we don't know anything; just that you should take everything you don't fully understand (and a lot of what you think you do) with a grain of salt
- Don't accept techno-babble definitions and process descriptions: if we don't know, we don't know; but deceiving ourselves about our ignorance is no way to make progress

