Symbolic Encoding of Neural Networks using Communicating Automata with Applications to Verification of Neural Network Based Controllers* Li Su, Howard Bowman and Brad Wyble.


Symbolic Encoding of Neural Networks using Communicating Automata with Applications to Verification of Neural Network Based Controllers* Li Su, Howard Bowman and Brad Wyble Centre for Cognitive Neuroscience and Cognitive Systems, University of Kent, Canterbury, Kent, CT2 7NF, UK *To Appear in Neural-Symbolic Learning and Reasoning Workshop at the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, 2005.

Outline
Background:
–Symbolic Computation
–Sub-symbolic Computation
Motivation for integrating Symbolic and Sub-symbolic Computation:
–Cognitive Viewpoint
–Application Viewpoint
Formal Methods:
–Model Checking
–Specification
–Properties
–Result
Summary

Background 1: Symbolic Computation
Traditional symbolic computation:
–Systems have explicit elements that correspond to symbols, organised in systematic ways, representing information in the external world.
–Programmes or rules can manipulate these symbolic representations.
–Key characteristic: symbol manipulation.

Background 2: Sub-symbolic Computation
Connectionism/neural networks are computational models inspired by neuron physiology, and can be regarded as sub-symbolic computation:
–They consist of massively parallel, simple and uniform processing elements, which are interconnected.
–Representations are distributed throughout the processing elements.
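Such a processing element can be sketched in a few lines: it computes a weighted sum of its inputs and squashes the result through a sigmoid. This is a generic illustration, not a model taken from the paper:

```python
import math

def sigmoid(net):
    """Squash the net input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron(inputs, weights, bias=0.0):
    """A single processing element: weighted sum of inputs, then a sigmoid."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(net)

# With zero net input the neuron sits at the midpoint of its range.
print(neuron([1.0, -1.0], [0.5, 0.5]))  # 0.5
```

The "distributed representation" point is that no single such unit carries a symbol; information lives in the pattern of activations across many of them.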

Motivation 1: Cognitive Viewpoint
It has been argued that cognition/mind can be regarded as symbolic computation (e.g. SOAR, ACT-R and EPIC).
Sub-symbolic (neural network) architectures constitute abstract models of the human brain.

Motivation 1: Cognitive Viewpoint (cont.)
Combining symbolic and sub-symbolic techniques allows the behaviour of complex cognitive architectures to be specified and justified in an abstract and suitable form:
–Concurrency, distributed control, hierarchical decomposition.
–How do high-level cognitive properties emerge from interactions between low-level neuron components?
Our approach is to encode and reason about cognitive systems or neural networks in symbolic form:
–E.g. Formal Methods.
–Automatic mathematical analysis can be applied.

Motivation 2: Application Viewpoint
Connectionist networks can extend traditional controllers in order to handle:
–Catastrophic changes
–Gradual degradation
–Complex and highly non-linear systems (e.g. aircraft, spacecraft or robots)
The reliability/stability of adaptive systems (neural networks) must be guaranteed in safety/mission-critical domains. However, connectionist models rarely provide an indication of the accuracy or reliability of their predictions.

Formal Methods: Model Checking
An automatic analysis technique, which can be applied at the system design stage.
It checks whether a formal specification satisfies a set of properties, which are expressed in a requirements language.
[Diagram: the model checker takes two inputs, the specification and the properties, and outputs either "Yes" plus a witness or "No" plus a counter-example.]
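The input/output behaviour just described can be sketched with a toy explicit-state checker; real tools use far more sophisticated symbolic algorithms, so this is illustrative only:

```python
from collections import deque

def check(init, successors, bad):
    """Breadth-first exploration of the state space.

    Returns (True, None) if the safety property "never bad" holds over
    all reachable states, or (False, path) where path is a
    counter-example trace leading to a bad state.
    """
    queue = deque([(init, [init])])
    visited = {init}
    while queue:
        state, path = queue.popleft()
        if bad(state):
            return False, path  # counter-example: a run reaching a bad state
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return True, None  # the property holds for every reachable state

# Toy transition system: a counter that wraps around modulo 4.
succ = lambda s: [(s + 1) % 4]
ok, trace = check(0, succ, bad=lambda s: s == 5)
print(ok)         # True: state 5 is unreachable
ok, trace = check(0, succ, bad=lambda s: s == 3)
print(ok, trace)  # False [0, 1, 2, 3]
```

The counter-example trace is exactly the kind of diagnostic output that makes model checking attractive compared with testing: a failed check comes with a concrete run that violates the property.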

An Example of a Neural Network Specification
[Diagram: a feed-forward network (NeuralNet) with input layer I1, I2, hidden layer H1, H2 and output layer O1, connected to an Environment and a Tester.]
Note: this is not a realistic model of a controller, but a "toy" model used to evaluate the feasibility of model checking neural networks.
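The 2-2-1 topology in the diagram can be sketched as a plain forward pass. The weights below are hand-picked to realise XOR and are purely illustrative; in the paper the behaviour is derived from the communicating automata, not from these numbers:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(x1, x2):
    """Forward pass through the I1,I2 -> H1,H2 -> O1 network."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # roughly OR of the inputs
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)    # roughly AND of the inputs
    return sigmoid(20 * h1 - 20 * h2 - 10)  # OR and not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # outputs 0, 1, 1, 0
```

The Environment and Tester processes in the diagram play the roles of supplying input patterns and observing outputs, respectively.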

Neuron Automaton
[Diagram: the neuron automaton, with Input, Middle and Output locations.]
Legend: k, the identity of the neuron; t, its local clock; i, the pre-synaptic neuron identity; j, the post-synaptic neuron identity; together with symbols for the activations of neurons i and k, the speed of update, the sigmoid function, the error, the net input, the weight, and the learning rate.
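The update that such a neuron automaton performs during back-propagation learning can be sketched with standard notation (activation a, error term delta, learning rate eta); the exact symbols on the slide were lost in transcription, so these names are assumptions:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def output_delta(a_k, target):
    """Error term for an output neuron: (target - a_k) * sigma'(net_k),
    where sigma'(net_k) = a_k * (1 - a_k) for the sigmoid."""
    return (target - a_k) * a_k * (1.0 - a_k)

def weight_update(w, eta, delta_k, a_i):
    """Delta rule: adjust the weight from pre-synaptic neuron i to neuron k."""
    return w + eta * delta_k * a_i

# One learning step on a single weight.
a_i, w, target = 1.0, 0.5, 1.0
a_k = sigmoid(w * a_i)
w_new = weight_update(w, 0.5, output_delta(a_k, target), a_i)
print(abs(target - sigmoid(w_new * a_i)) < abs(target - a_k))  # True: error shrank
```

In the paper's encoding this arithmetic is performed inside the automaton, paced by the local clock t and the speed-of-update parameter, rather than in a global training loop.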

Requirements Language

Requirements Language (cont.)
–Reachability properties
–Safety properties
–Liveness properties
Note: the state formula success is true when the SSE (sum of squared errors) is less than a specified value.
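The concrete formulae on the slide were images and are not reproduced here. Assuming an UPPAAL-style timed-CTL query syntax, the three classes of property might look like the following; these queries are illustrative assumptions, not the paper's actual formulae:

```
E<> success       -- reachability: some run eventually reaches a success state
A[] not deadlock  -- safety: no reachable state is deadlocked
A<> success       -- liveness: every run eventually reaches a success state
```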

Result
The network satisfies the required properties: it is guaranteed to learn XOR within the required timing constraints using BP (back-propagation) learning, and the learning process is guaranteed to eventually stabilise. The verified properties are expressed over the deadline and the success state formula.
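For intuition, the verified behaviour can also be observed (though, as the summary notes, never proven) by simulation: training a 2-2-1 network on XOR with plain back-propagation and watching the SSE fall. The architecture, learning rate, seed and epoch budget here are illustrative assumptions:

```python
import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_xor(epochs=5000, eta=0.5, seed=0):
    """Train a 2-2-1 network on XOR by back-propagation.
    Returns (initial_sse, final_sse)."""
    rng = random.Random(seed)
    # Hidden units h1, h2: two input weights + bias each; output: two weights + bias.
    wh = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    wo = [rng.uniform(-1, 1) for _ in range(3)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def sse():
        total = 0.0
        for (x1, x2), t in data:
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in wh]
            o = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
            total += (t - o) ** 2
        return total

    initial = sse()
    for _ in range(epochs):
        for (x1, x2), t in data:
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in wh]
            o = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
            delta_o = (t - o) * o * (1 - o)
            for j in range(2):
                # Hidden-layer error term, then the weight updates.
                delta_h = delta_o * wo[j] * h[j] * (1 - h[j])
                wo[j] += eta * delta_o * h[j]
                wh[j][0] += eta * delta_h * x1
                wh[j][1] += eta * delta_h * x2
                wh[j][2] += eta * delta_h
            wo[2] += eta * delta_o
    return initial, sse()

before, after = train_xor()
print(after < before)  # True: training reduced the error
```

A single run like this is a witness that learning *can* succeed under one schedule; only the model-checking analysis covers every interleaving and timing allowed by the specification.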

Summary
Formal methods are justifiable techniques for representing low-level neural networks. They can also help us understand how high-level cognitive properties emerge from interactions between low-level neuron components.
Formal methods may allow neural networks within engineering applications to be specified and justified at the system design stage.
Verification may give theoretically well-founded ways to evaluate and justify learning methods. Some properties are hard to justify by simulation:
–Simulations can only test that something occurs; they cannot test that something can never occur without explicit mathematical analysis. (An open issue.)