
Developing Artificial Neural Networks for Safety Critical Systems


1 Developing Artificial Neural Networks for Safety Critical Systems
Zeshan Kurd, Department of Computer Science
Supervisor: Tim Kelly

2 Outline
- The problem
- Current approaches
- Safety-critical systems
- Safety argumentation
- Suitable ANN model
- Safety lifecycle
- Feasibility issues

NCAF 2003

3 Introduction
- ANNs are used in many areas of industry, including defence and medical applications
- Attractive features:
  - Usable when there is little understanding of the relationship between inputs and outputs
  - Ability to learn or evolve
  - Generalisation, and efficiency in terms of computational resources
- Commonly used in advisory roles
  - IEC C.3.4 – Safety bag: independent external monitors ensure the system does not enter an unsafe state
- Why only advisory roles? The absence of acceptable analytical certification methods
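The safety-bag pattern mentioned above can be sketched as a simple wrapper: an independent monitor vetoes the ANN's advisory output whenever it leaves a pre-defined safe envelope. The function name, envelope and fallback value are all illustrative assumptions, not any particular certified implementation.

```python
def safety_bag(ann_output, safe_range, fallback):
    """Safety-bag sketch: pass the ANN's advisory output through only
    if it lies inside a pre-analysed safe envelope; otherwise revert
    to a conservative, independently verified fallback action."""
    lo, hi = safe_range
    if lo <= ann_output <= hi:
        return ann_output
    return fallback

# Hypothetical advisory controller output with an assumed safe envelope [0, 100]:
print(safety_bag(120.0, (0.0, 100.0), fallback=0.0))  # vetoed -> 0.0
print(safety_bag(42.0, (0.0, 100.0), fallback=0.0))   # passed -> 42.0
```

The point of the pattern is that the monitor, not the network, carries the safety argument, which is why ANNs guarded this way remain confined to advisory roles.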

4 The Problem
- Justify the use of neural networks in safety-critical systems, in highly dependable roles
- Derive satisfactory safety arguments
- Argue safety using a Safety Case:
  - "A safety case should present a clear, comprehensive and defensible argument that a system is acceptably safe to operate within a particular context" [Def Stan 00-55]
  - Proof obligations shall be constructed to verify that the code is a correct refinement of the Software Design and does nothing that is not specified

5 Current Approaches
- Diverse Neural Networks
  - Choose a single net that covers the whole target function, or a set of nets that together cover it
  - Overall generalisation performance has been shown to improve
  - A black-box approach
- Fault Tolerance
  - [IEC-50] The attribute of an entity that makes it able to perform a required function in the presence of certain given sub-entity faults
  - Fault tolerance is typically demonstrated by injecting weight faults
  - The fault hypothesis is unrealistic – it does not deal with major potential faults
- ANN Development and Safety Lifecycles
  - No provision for dealing with safety concerns
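The weight-fault-injection style of assessment criticised above can be illustrated with a small sketch. The two-layer net, the stuck-at-0 fault model and the fault count are all illustrative assumptions; the slide's criticism is precisely that such a narrow fault hypothesis says little about the faults that matter.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    """Tiny feed-forward net used only as a fault-injection target."""
    return np.tanh(x @ W1) @ W2

def inject_weight_faults(W, n_faults, rng):
    """Set n randomly chosen weights to zero (a stuck-at-0 fault model)."""
    Wf = W.copy()
    idx = rng.choice(W.size, size=n_faults, replace=False)
    Wf.flat[idx] = 0.0
    return Wf

x = rng.normal(size=(100, 4))
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
baseline = mlp(x, W1, W2)
faulty = mlp(x, inject_weight_faults(W1, n_faults=3, rng=rng), W2)
print(float(np.abs(baseline - faulty).max()))  # worst-case output deviation
```

A small deviation under this experiment shows tolerance only to *this* fault model — it is not evidence about systematic faults in the learnt function itself.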

6 Safety Critical Systems
- Incorrect operation may lead to fatal or severe consequences
- A safety-critical system directly or indirectly contributes to the occurrence of a hazardous system state
- A system-level hazard is a condition that is potentially dangerous to man, society or the environment
- Safety process & techniques: identify, analyse and mitigate hazards
- 'Acceptably' safe: risk of failure assured to a tolerable level (ALARP)
- Software Safety Lifecycle
  - Software "hazard" – a software-level condition that could give rise to a system-level hazard
  - Hazard Identification
  - Functional Hazard Analysis
  - Preliminary System Safety Analysis (potential to influence design)
  - System Safety Analysis (confirming causes of hazards)
  - Safety Case

7 Types of Safety Arguments
- Process-based vs. product-based arguments
- Process-based: safety is assumed given that certain processes have been performed
  - Process-based arguments for ANNs in 'Process Certification Requirements' (York)
  - Implementation issues (formal methods)
  - Team management and other process-based issues
- Product-based: evidence-based arguments about the system itself, such as its functional behaviour, identified potential hazards, etc.
- Current standards and practices are working towards removing process-based arguments
- Solution: use product-based arguments, and process-based ones only where improvement can be demonstrated

8 Safety Criteria
- Argue functional properties or behaviour
- Represented as a set of high-level goals
- Derived by analysing aspects of current safety standards
- Key criteria argued in terms of failure modes
- Need for more white-box style arguments
- Apply to most types of networks
- Leave open alternative means of compliance

Z. Kurd and T. P. Kelly, "Establishing Safety Criteria for Artificial Neural Networks", to appear in 7th International Conference on Knowledge-Based Intelligent Information & Engineering Systems, Oxford, UK, 2003.

9 Safety Criteria
[Diagram: goal structure for the safety criteria, showing Goal, Strategy and Context elements]

10 Suitable ANN Model
Current ANN models have many problems:
- Determining a suitable ANN structure and training and test sets influences functional behaviour (dealing with systematic faults)
- 'Forgetting' of previously learnt samples, and noisy data
- Introducing new 'faults' during training
- Pedagogical approaches to analysing behaviour yield only black-box style safety arguments

Objectives:
- Preserve the ability to learn or evolve given input-output samples
- Control the learning (refinement) process

11 ‘Hybrid’ ANNs
- ‘Hybrid’ – representing symbolic information in ANN frameworks
- Knowledge is represented by the internal structure of the ANN (initial conditions)
- Translation algorithms
- Working towards a specification
- Outperforms many ‘all-symbolic’ systems

Taken from: J. Shavlik, "Combining symbolic and neural learning", 1992.
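A minimal sketch of the kind of translation algorithm meant here, loosely in the style of knowledge-based neural networks (Towell & Shavlik): a conjunctive rule is compiled into a unit's initial weights and bias. The weight magnitude `OMEGA` and the example rule are illustrative assumptions.

```python
import numpy as np

OMEGA = 4.0  # assumed magnitude for rule-derived weights

def rule_to_unit(n_inputs, positive, negative):
    """Translate a conjunctive rule into perceptron weights and bias:
    the unit fires only when all positive antecedents are on and all
    negative antecedents are off."""
    w = np.zeros(n_inputs)
    w[positive] = OMEGA
    w[negative] = -OMEGA
    # Threshold set just below the total weight of the positive antecedents.
    b = -OMEGA * (len(positive) - 0.5)
    return w, b

def fire(x, w, b):
    return 1 if w @ x + b > 0 else 0

# Example rule: C <- A AND B AND NOT D, over inputs [A, B, D]
w, b = rule_to_unit(3, positive=[0, 1], negative=[2])
print(fire(np.array([1, 1, 0]), w, b))  # 1: rule satisfied
print(fire(np.array([1, 1, 1]), w, b))  # 0: D blocks the rule
```

The resulting weights are the network's initial conditions; subsequent training refines them rather than starting from random values, which is what lets the prior knowledge shape the learnt function.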

12 ‘Hybrid’ ANNs
- Decompositional approach to analysis
- Potential for ‘transparency’: white-box style analysis focusing on the internal structure of the ANN
- Potentially results in strong arguments about the knowledge represented by the network
- Potential to control learning
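The decompositional approach can be sketched as subset-style rule extraction over a single unit's weights: search for minimal sets of positive-weight inputs whose combined weight exceeds the unit's threshold. This is a simplified illustration of the general idea, not the specific analysis method proposed in the talk.

```python
from itertools import combinations

def extract_rules(w, b, names):
    """Decompositional rule extraction (subset-style sketch): return
    minimal conjunctions of positive-weight inputs sufficient to
    activate a unit with weights w and bias b."""
    pos = [i for i, wi in enumerate(w) if wi > 0]
    rules = []
    for r in range(1, len(pos) + 1):           # smallest subsets first
        for subset in combinations(pos, r):
            sufficient = sum(w[i] for i in subset) + b > 0
            minimal = not any(set(rule) <= set(subset) for rule in rules)
            if sufficient and minimal:
                rules.append(subset)
    return [" AND ".join(names[i] for i in s) for s in rules]

# Unit weights as produced by a rule "C <- A AND B AND NOT D":
print(extract_rules([4.0, 4.0, -4.0], -6.0, ["A", "B", "D"]))  # ['A AND B']
```

Extracted rules of this form are what make white-box arguments possible: the safety analysis is performed over the symbolic rules rather than over opaque weight values.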

13 Safety Lifecycle
- The current software safety lifecycle is inadequate for ANNs
  - It relies on the conventional software development lifecycle
  - Its safety processes are not suitable for ANNs
- Existing development & safety lifecycles for ANNs are also inadequate
  - They focus too much on process-based arguments
  - No argumentation on how a particular ANN configuration may (or may not) contribute to safety
  - Some models assume an ‘intentionally’ complete specification
  - No attempt to identify, analyse, control and mitigate potential hazards
- Need a lifecycle for the ‘hybrid’ ANN model

Z. Kurd and T. P. Kelly, "Safety Lifecycle for Developing Safety-critical Artificial Neural Networks", to appear in 22nd International Conference on Computer Safety, Reliability and Security, 2003.

14 Safety Lifecycle
[Diagram: the proposed safety lifecycle for the ‘hybrid’ ANN]

15 Safety Lifecycle
- Adapts techniques used for conventional software
- Focuses on hazard identification, analysis and mitigation
- Two-tier learning process: dynamic learning and static learning
- Safety processes are performed over a meaningful representation
  - Things that can go wrong in real-world terms
- Preliminary Hazard Identification (PHI)
  - Used to determine the initial conditions of the network (weights)
  - Consideration of possible system-level hazards (black-box approach)
  - Result: a set of rules partially fulfilling the desired function

16 Safety Lifecycle
- Functional Hazard Analysis (FHA)
  - Generates the final set of rules before insertion
  - Predictive, white-box style analysis of how symbolic knowledge could lead to potential hazards
  - Assertions to prevent specific hazards; rule importance
  - Performed during dynamic learning (guiding learning); targeted training
- FHA at the post-dynamic-learning stage
  - Exploratory safety analysis
  - Identifies how rules may be refined during ‘static’ learning
  - Implements constraints – to prevent hazardous states during learning
- The safety lifecycle
  - Respects development methodologies for ‘hybrid’ ANNs
  - Makes provision for the safety process
  - Has the potential to generate acceptable product-based analytical arguments for certification
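One way such learning constraints could look is projecting each weight update back into bounds fixed by the preceding safety analysis, so refinement can never move the network into a state analysed as hazardous. The bounds, gradient and learning rate below are hypothetical values, chosen only to illustrate the mechanism.

```python
import numpy as np

def constrained_update(w, grad, lr, w_min, w_max):
    """'Static learning' step sketch: apply a gradient update, then
    project the weights back into pre-analysed safe bounds so that
    online refinement cannot leave the analysed region."""
    w_new = w - lr * grad
    return np.clip(w_new, w_min, w_max)

w = np.array([4.0, 4.0, -4.0])
bounds_lo = np.array([3.0, 3.0, -5.0])   # assumed bounds from the FHA
bounds_hi = np.array([5.0, 5.0, -3.0])
w = constrained_update(w, grad=np.array([2.0, -2.0, 0.5]), lr=1.0,
                       w_min=bounds_lo, w_max=bounds_hi)
print(w)  # each weight clipped back inside its safe interval
```

Because the bounds are derived before deployment, the safety argument covers every state the constrained learner can reach, not just the states observed during testing.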

17 Feasibility Issues
- Consider the performance vs. safety trade-off
- Any model must have sufficient performance
  - Not just output error, but characteristics: learning and generalisation
  - Learning is permissible during development but not whilst deployed
- No compromise in safety
  - Controls or features added to ensure strong safety arguments
- The SCANN model preserves the ability to learn
  - Some analysable representation
  - Provides safety assurances in terms of knowledge evolution
- Dealing with large input spaces
  - For problems whose complete algorithmic specification is not available (at the start of development)
  - Involves the two-tier learning process

18 Summary
- Need for analytical safety arguments for certification
- Current position of ANNs in safety-related systems
- Safety criteria set out for functional behaviour
- Use of ‘hybrid’ ANNs
- A safety lifecycle for the ‘hybrid’ ANN
- Challenge: the performance vs. safety trade-off

