Software Engineering Dr. K. T. Tsang


Software Engineering Dr. K. T. Tsang Lecture 2 Socio-technical systems http://www.uic.edu.hk/~kentsang/SWE/SWE.htm

This lecture is based on chapter 2 in Sommerville

System – a purposeful collection of interrelated components that work together to achieve some objective.
Technical computer-based systems – include hardware and software, but not procedures and processes; e.g. TVs, mobile phones.
Socio-technical systems – systems with defined operational procedures; e.g. payroll accounting systems.

Socio-technical systems A system that includes people, software and hardware E.g. a publishing system

2.1 Emergent system properties
Properties attributed to the system as a whole, not to any specific part of it.
Functional emergent properties: related to the system’s overall purpose; e.g. a car’s ability to transport people emerges only from its assembled parts.
Non-functional emergent properties: e.g. performance, reliability, repairability, safety, security, usability, volume/space occupied.

Reliability of a system
Hardware reliability – the probability of a hardware component failing, and how long it takes to repair.
Software reliability – the probability of the software producing incorrect output (software failure).
Operator reliability – the probability of human error.
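As an illustrative sketch (not from the lecture): if hardware, software and operator failures are assumed to be independent, the probability that the whole system operates correctly is the product of the individual reliabilities. The example numbers are hypothetical.

```python
def system_reliability(hw: float, sw: float, operator: float) -> float:
    """Each argument is the probability that the component does NOT fail.
    Assumes the three failure modes are independent."""
    return hw * sw * operator

# Hypothetical figures: even fairly reliable components combine into a
# noticeably less reliable system.
print(system_reliability(0.99, 0.95, 0.90))  # ~0.846
```

Note how the weakest component (here the operator, at 0.90) dominates the overall result, which is why operator reliability appears alongside hardware and software reliability on the slide.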

2.2 System engineering
The activity of specifying, designing, implementing, validating, deploying and maintaining socio-technical systems.
It involves hardware, software, human users and the system’s operating environment.
Many engineering disciplines may be involved.
Design decisions are difficult to change once they are made.

Phases of system engineering Requirement definition System design Sub-system development System integration System installation System testing System evolution System decommission

2.2.1 System requirement definition
Specify what the system should do (its functions) and its essential/desirable properties:
Abstract functional requirements
System properties (non-functional requirements)
Characteristics the system must not exhibit

2.2.2 System design process Partition requirements Identify sub-systems Assign requirements to sub-systems Specify sub-system functionality Define sub-system interfaces

2.2.3 System modeling During the analysis and design phase, systems may be modeled as a set of components & relationships between them. This model can be represented as a block diagram showing sub-systems and connections among them.

Simple burglar alarm system as a block diagram. Components: movement sensors, door sensors, alarm controller, telephone caller, siren, voice synthesizer.
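A block diagram like this can be captured as a mapping from each sub-system to the sub-systems it connects to. The component names come from the slide; the exact wiring (sensors feeding the controller, which drives the output devices) is an assumption for illustration.

```python
# Block diagram as an adjacency mapping: component -> downstream components.
# Connections are assumed, not taken from the slide.
burglar_alarm = {
    "movement sensors": ["alarm controller"],
    "door sensors": ["alarm controller"],
    "alarm controller": ["siren", "voice synthesizer", "telephone caller"],
    "telephone caller": [],
    "voice synthesizer": [],
    "siren": [],
}

# Sanity check: every connection target must itself be a declared sub-system.
for src, targets in burglar_alarm.items():
    for t in targets:
        assert t in burglar_alarm, f"undeclared sub-system: {t}"
```

Representing the model as data rather than a picture makes simple consistency checks (like the one above) mechanical, which is one benefit of system modelling during analysis and design.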

An architectural model: air traffic control system (see the figure in Sommerville).

2.2.4 Subsystem development
Subsystem development takes on a life of its own; it may involve starting another system engineering process from scratch.
Alternatively, some subsystems may be commercial off-the-shelf (COTS) systems that can be bought and integrated.

System integration
When all subsystems are developed and tested, they are put together to make up the complete system.
This can be done in a “big bang” fashion, but it is more prudent to integrate them one at a time, because:
Subsystems are rarely all finished at the same time
Incremental integration reduces the cost of locating errors

System installation

System testing

2.2.6 System evolution: reasons
Large, complex systems often have long lifetimes.
System requirements may change because business practices change, new functions are added, or software/hardware technology changes.
To keep up with new situations or new hardware, the system must evolve accordingly.

System evolution is often costly because:
The original design must be re-examined in light of the new requirements
Changes in one subsystem may affect the performance and behaviour of other subsystems
If the reasons for the original design decisions are undocumented, it is difficult to modify the design soundly
As the system ages, previous changes add to the cost of further changes

2.2.7 System decommissioning
Taking the system out of service after its useful lifetime:
Disassembling and recycling hardware and materials
Saving data that may still be valuable to the organization

Software Engineering Dr. K. T. Tsang Lecture 3 Critical systems

Types of critical systems
Safety-critical systems – failure may result in injury or damage
Mission-critical systems – failure may result in the failure of a goal-directed activity
Business-critical systems – failure may result in high costs to the business

This lecture is based on chapter 3 in Sommerville.

Dependability of critical systems
The most important emergent property of critical systems, because:
Unreliable critical systems are rejected by users
System failure costs are often enormous
Untrustworthy systems may cause the loss of valuable data/information

Types of system failure
Hardware failure – due to bad design or bad components
Software failure – due to a bad specification, design or implementation
Human failure – failure to operate the system correctly

Examples of safety-critical systems
Insulin pump system (p. 46, Sommerville)
Radiotherapy system with a software controller

Major aspects of system dependability
Availability – able to deliver service at any given time when requested
Reliability – able to deliver service correctly over a period of time
Safety – able to deliver service without causing damage
Security – able to protect itself against accidental or deliberate intrusion during operation

Other aspects of dependability
Repairability – how quickly the system can recover from a failure; this includes how easy it is to diagnose the problem and replace the failing components
Maintainability – whether the system can easily be changed to accommodate new requirements without introducing errors
Survivability – the ability to continue delivering service while the system is under attack or partly disabled
Error tolerance – whether the system can recover from user errors

It all depends on the system
Not all aspects of dependability are important or applicable to every system.
For a medical treatment system (radiotherapy machine, insulin pump, …), availability (available when needed) and safety (delivering a safe dose of treatment) are the most important considerations, while other aspects are less important or not applicable.

Performance & dependability
Generally, a high level of dependability can only be achieved at the expense of system performance.
Increasing dependability can also greatly increase development costs.

3.3 Availability & reliability
Availability – the probability that a system will be operational and able to deliver the requested service at a given point in time.
Reliability – the probability of trouble-free operation, as required, in a given environment over a specified time period.
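The slide defines availability as a probability at a point in time. A standard way to estimate steady-state availability (not stated on the slide, but widely used) is uptime over total time: availability = MTTF / (MTTF + MTTR), where MTTF is mean time to failure and MTTR is mean time to repair.

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is up.
    mttf_hours: mean time to failure; mttr_hours: mean time to repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that runs 999 hours between failures and takes 1 hour to
# repair is available 99.9% of the time ("three nines").
print(availability(999.0, 1.0))  # 0.999
```

This formulation also makes the repairability connection explicit: shrinking MTTR raises availability just as surely as extending MTTF does.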

Types of system problem
System failure – the system does not deliver the expected service at some point in time
System error – an erroneous system state that leads to unexpected behaviour
System fault – a software condition (e.g. a defect in the code) that can lead to a system error
Human error – e.g. input error, operational error
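The fault, error and failure terms form a chain, which a tiny (hypothetical, deliberately buggy) example makes concrete: a fault in the code produces an erroneous internal state, which in turn produces a failure visible to the user.

```python
def mean(values):
    """Deliberately buggy mean, for illustrating fault -> error -> failure."""
    total = 0
    for i in range(len(values) - 1):   # FAULT: off-by-one skips the last element
        total += values[i]             # ERROR: 'total' holds an erroneous state
    return total / len(values)         # FAILURE: wrong result is delivered

print(mean([2, 4, 6]))  # prints 2.0, not the correct 4.0
```

Note that a fault need not cause a failure on every run: `mean([5])` happens to return 0.0 only because the single element is skipped, and an input like `[4, 4, 4, 4]` would mask the bug less obviously than it crashes it, which is why testing alone cannot guarantee fault removal.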

Approaches to improving reliability
Fault avoidance – minimize the chance of mistakes, or trap mistakes before they cause faults; e.g. avoiding error-prone constructs such as pointers
Fault detection & removal – detect and remove faults before the system is used; e.g. systematic testing and debugging
Fault tolerance – ensure faults do not result in system errors or failures; e.g. system self-checking, redundant modules
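A classic form of fault tolerance through redundant modules is triple modular redundancy (TMR): run three independent versions of a computation and take the majority result, so a single faulty module is out-voted. The sketch below is illustrative; the three "versions" are trivial stand-ins.

```python
from collections import Counter

def tmr(version1, version2, version3, x):
    """Run three redundant versions and return the majority result.
    Raises if no majority exists (the fault could not be masked)."""
    results = [version1(x), version2(x), version3(x)]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: fault not masked")
    return winner

def correct(x):
    return x * x

def faulty(x):
    return x * x + 1   # a single faulty version is out-voted

print(tmr(correct, correct, faulty, 5))  # 25
```

TMR only helps if the versions fail independently, which is why real systems use separately developed implementations (or diverse hardware) rather than three copies of the same code.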

A system as an input/output mapping (figure: the system software maps an input set to an output set; some inputs cause erroneous outputs).

Software usage patterns (figure: different users exercise different subsets of the possible inputs, so only some users ever encounter the erroneous inputs).

Safety-critical systems
These systems must not damage people or the environment, even when they fail.
Most safety-critical systems are controlled by software.
Examples: air traffic control systems, autopilot systems for aircraft or automobiles, process control systems in chemical plants.

Types of safety-critical software
Primary: embedded as a controller in a system whose failure can directly cause human injury or environmental damage.
Secondary: can indirectly cause injury; e.g. computer-aided engineering design software, or a medical database holding information on drugs prescribed to patients.

Reliability & safety
These are different attributes of dependability. Reliable software systems are not necessarily safe, because:
The specification may be incomplete, with no description of system behaviour in critical situations
A hardware failure may throw the software into an unanticipated situation
Operator input may be correct only under specific conditions which are not met

Terminology concerning safety
Accident/mishap – an unplanned event or sequence of events which causes human injury or damage to property or the environment
Hazard – a condition with the potential to cause an accident
Damage – a measure of the loss resulting from a mishap
Hazard severity – an assessment of the worst possible damage resulting from a hazard
Hazard probability – the probability of the events which create a hazard
Risk – the probability that the system will cause an accident
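These terms can be combined numerically: a common quantification (not given on the slide) is risk exposure, the hazard probability multiplied by the damage expressed as a cost, which lets hazards be ranked for attention. The hazard names and figures below are hypothetical.

```python
def risk_exposure(hazard_probability: float, damage_cost: float) -> float:
    """Expected loss from a hazard: probability times cost of the damage."""
    return hazard_probability * damage_cost

# Hypothetical hazards for a radiotherapy controller: (probability, cost).
hazards = {
    "overdose delivered": (1e-6, 5_000_000.0),
    "session aborted":    (1e-3, 2_000.0),
}

# Rank hazards by expected loss, highest first.
for name, (p, cost) in sorted(hazards.items(),
                              key=lambda kv: -risk_exposure(*kv[1])):
    print(f"{name}: {risk_exposure(p, cost):.2f}")
```

The ranking shows why hazard probability and hazard severity are assessed separately on the slide: a rare but severe hazard (the overdose) can still dominate a frequent but mild one.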

How to assure system safety?
Hazard avoidance – in the system design
Hazard detection & removal before an accident occurs – in the system design
Damage limitation/control – the system may include features that minimize the damage from an accident

Contribution of software control to safety
System complexity contributes to a higher probability of accidents, and software control increases system complexity, so software control may increase the probability of an accident.
On the other hand, a software-controlled system can monitor a wider range of conditions and provide sophisticated safety interlocks, so software control may also improve system safety.