
Computational Intelligence Winter Term 2013/14 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS 11) Fakultät für Informatik TU Dortmund

Plan for Today
- Organization (lectures / tutorials)
- Overview CI
- Introduction to ANN
- McCulloch-Pitts neuron (MCP)
- Minsky/Papert perceptron (MPP)

Organizational Issues

Who are you? You are either
- studying Automation and Robotics (Master of Science), Module Optimization, or
- studying Informatik: BSc-Modul "Einführung in die Computational Intelligence" or Hauptdiplom-Wahlvorlesung (SPG 6 & 7).

Organizational Issues

Who am I? Günter Rudolph, Fakultät für Informatik, LS 11.
- best way to contact me / if you want to see me: OH-14, R. 232
- office hours: Tuesday, 10:30-11:30 am, and by appointment

Organizational Issues

Lectures: Wednesday, 10:15-11:45, SRG 1, weekly
Tutorials: either Thursday, 16:30-17:30, OH-14, R. 1.04, bi-weekly, or Friday, 14:15-15:45, OH-14, R. 1.04, bi-weekly
Tutor: Dipl.-Inf. Simon Wessing, LS 11
Information: teaching/lectures/CI/WS /lecture.jsp
Slides: see web page
Literature: see web page

Prerequisites

Knowledge about mathematics, programming, and logic is helpful. But what if something is unknown to me?
- it is covered in the lecture
- pointers to the literature are given
... and don't hesitate to ask!

Overview Computational Intelligence

What is CI? ⇒ an umbrella term for computational methods inspired by nature:
- artificial neural networks, evolutionary algorithms, fuzzy systems (the backbone)
- swarm intelligence, artificial immune systems, growth processes in trees, ... (new developments)

Overview Computational Intelligence

The term "computational intelligence" was coined by John Bezdek (FL, USA). It was originally intended as a demarcation line ⇒ establish a border between artificial and computational intelligence. Nowadays the border is blurring.

Our goals:
1. know what CI methods are good for!
2. know when to refrain from CI methods!
3. know why they work at all!
4. know how to apply and adjust CI methods to your problem!

Introduction to Artificial Neural Networks

Biological prototype: the neuron, with cell body (C), dendrites (D), nucleus, axon (A), and synapses (S).
- information gathering: dendrites (D)
- information processing: cell body (C)
- information propagation: axon (A) / synapses (S)

Human being: signals are electrical, in the mV range; propagation speed: 120 m/s.

Introduction to Artificial Neural Networks

Abstraction:
- dendrites → signal input
- nucleus / cell body → signal processing
- axon / synapse → signal output

Introduction to Artificial Neural Networks

Model: a unit that applies a function f to its inputs x_1, x_2, ..., x_n and outputs f(x_1, x_2, ..., x_n).

McCulloch-Pitts neuron (1943): x_i ∈ {0, 1} =: B and f: B^n → B.

Introduction to Artificial Neural Networks

1943: Warren McCulloch / Walter Pitts
- description of neurological networks
- model: McCulloch-Pitts neuron (MCP)

Basic idea:
- a neuron is either active or inactive
- skills result from connecting neurons

They considered static networks, i.e. the connections had been constructed, not learnt.

Introduction to Artificial Neural Networks

McCulloch-Pitts neuron: n binary input signals x_1, ..., x_n and a threshold θ > 0; the neuron fires (output 1) iff x_1 + ... + x_n ≥ θ.

⇒ boolean gates can be realized:
- boolean OR: θ = 1
- boolean AND: θ = n

Introduction to Artificial Neural Networks

McCulloch-Pitts neuron, continued: n binary input signals x_1, ..., x_n with threshold θ > 0, and in addition m binary inhibitory signals y_1, ..., y_m:
- if at least one y_j = 1, then output = 0;
- otherwise: if the sum of the inputs ≥ threshold, then output = 1, else output = 0.

Example: NOT is realized by a neuron with θ = 0 whose single input y_1 is inhibitory.
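A minimal Python sketch of this unit may help; the function name mcp and its interface are illustrative choices of mine, not from the lecture:

def mcp(inputs, theta, inhibitory=()):
    # MCP rule: any active inhibitory signal forces output 0;
    # otherwise fire (output 1) iff the input sum reaches the threshold.
    if any(inhibitory):
        return 0
    return 1 if sum(inputs) >= theta else 0

# the gates from the slides: OR with theta = 1, AND with theta = n, NOT via inhibition
assert mcp([0, 1], theta=1) == 1               # OR(0, 1)
assert mcp([1, 0], theta=2) == 0               # AND(1, 0)
assert mcp([], theta=0, inhibitory=[1]) == 0   # NOT(1)
assert mcp([], theta=0, inhibitory=[0]) == 1   # NOT(0)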

Introduction to Artificial Neural Networks

Theorem: Every logical function F: B^n → B can be simulated with a two-layered McCulloch-Pitts net.

Assumption: the inputs are also available in inverted form, i.e. inverted inputs exist.

Example: (slide diagram: decoding neurons over x_1, x_2, x_3 and their inversions feed an OR neuron.)

Introduction to Artificial Neural Networks

Proof (by construction): Every boolean function F can be transformed into disjunctive normal form ⇒ 2 layers (an AND layer followed by an OR layer).
1. Every clause gets a decoding neuron with θ = n ⇒ output = 1 only if the clause is satisfied (AND gate).
2. All outputs of the decoding neurons are inputs of a neuron with θ = 1 (OR gate).
q.e.d.
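The construction can be executed mechanically. The sketch below (my own naming; it assumes an inverted input is available as 1 - x_i) builds such a two-layered net from any truth table:

from itertools import product

def two_layer_mcp(F, n):
    # one decoding neuron (theta = n) per satisfying assignment of F,
    # followed by an OR neuron (theta = 1)
    clauses = [c for c in product([0, 1], repeat=n) if F(*c)]
    def net(*x):
        # the decoding neuron for clause c fires iff all n literals are 1, i.e. x == c
        ands = [1 if sum(xi if ci else 1 - xi for xi, ci in zip(x, c)) >= n else 0
                for c in clauses]
        return 1 if sum(ands) >= 1 else 0
    return net

xor = two_layer_mcp(lambda a, b: a ^ b, n=2)
assert [xor(a, b) for a, b in product([0, 1], repeat=2)] == [0, 1, 1, 0]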

Introduction to Artificial Neural Networks

Generalization: inputs with weights. A neuron with weights 0.2, 0.4, 0.3 and threshold 0.7 fires iff

0.2 x_1 + 0.4 x_2 + 0.3 x_3 ≥ 0.7.

Multiplication by 10 yields the equivalent condition 2 x_1 + 4 x_2 + 3 x_3 ≥ 7 ⇒ duplicate the inputs (2 copies of x_1, 4 of x_2, 3 of x_3) and use threshold θ = 7 ⇒ an equivalent unweighted neuron!

Introduction to Artificial Neural Networks

Theorem: Weighted and unweighted MCP-nets are equivalent for weights ∈ Q^+.

Proof:
"⇒": Let the weights be a_i / b_i with a_i, b_i ∈ N and let the threshold be a_0 ∈ N. Multiplication with b_1 b_2 ... b_n yields an inequality with coefficients in N. Duplicate input x_i such that we get a_i b_1 b_2 ... b_{i-1} b_{i+1} ... b_n copies of it; the resulting net uses only unit weights and threshold θ = a_0 b_1 ... b_n.
"⇐": Set all weights to 1: an unweighted net is the special case of a weighted net.
q.e.d.
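Replaying the proof on the weighted example from the previous slide gives a concrete check; this is a sketch of mine, under the assumption that duplicating an input simply repeats it in the input list:

from itertools import product

def weighted(x, w=(0.2, 0.4, 0.3), theta=0.7):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def unweighted(x, copies=(2, 4, 3), theta=7):
    # duplicate input x_i copies[i] times, all weights equal to 1
    duplicated = [xi for xi, c in zip(x, copies) for _ in range(c)]
    return 1 if sum(duplicated) >= theta else 0

# both neurons compute the same boolean function
assert all(weighted(x) == unweighted(x) for x in product([0, 1], repeat=3))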

Introduction to Artificial Neural Networks

Conclusion for MCP nets:
+ feed-forward: able to compute any boolean function
+ recursive: able to simulate a DFA
- very similar to conventional logical circuits
- difficult to construct
- no good learning algorithm available

Introduction to Artificial Neural Networks

Perceptron (Rosenblatt 1958): a complex model, reduced by Minsky & Papert to what is necessary ⇒ the Minsky-Papert perceptron (MPP), 1969. Essential difference: x_i ∈ [0,1] ⊂ R.

What can a single MPP do? It fires iff w_1 x_1 + w_2 x_2 ≥ θ; isolating x_2 yields the line x_2 = (θ - w_1 x_1) / w_2 (for w_2 ≠ 0): a separating line that separates R^2 into two classes (Y and N).

Introduction to Artificial Neural Networks

A single MPP realizes OR, AND, NAND, and NOR. But XOR? Requiring w_1 x_1 + w_2 x_2 ≥ θ exactly for the positive rows of the truth table gives:

x_1  x_2  xor   condition
0    0    0     ⇒ 0 < θ
0    1    1     ⇒ w_2 ≥ θ
1    0    1     ⇒ w_1 ≥ θ
1    1    0     ⇒ w_1 + w_2 < θ

From θ > 0 and w_1, w_2 ≥ θ we get w_1, w_2 > 0 and w_1 + w_2 ≥ 2θ > θ: contradiction!
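The contradiction can also be made tangible numerically. This brute-force grid search (an illustration of mine, not a proof) finds no single linear threshold unit that computes XOR:

from itertools import product

def mpp(x1, x2, w1, w2, theta):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

grid = [i / 4 for i in range(-8, 9)]  # candidate values -2.0, -1.75, ..., 2.0
hits = [(w1, w2, t) for w1, w2, t in product(grid, repeat=3)
        if all(mpp(a, b, w1, w2, t) == (a ^ b) for a, b in product([0, 1], repeat=2))]
assert hits == []  # no parameter setting realizes XOR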

Introduction to Artificial Neural Networks

1969: Marvin Minsky / Seymour Papert publish the book "Perceptrons", an analysis of the mathematical properties of perceptrons.

Disillusioning result: perceptrons fail to solve a number of trivial problems!
- the XOR problem
- the parity problem
- the connectivity problem

"Conclusion" drawn at the time: all artificial neurons have this kind of weakness! Research in this field is a scientific dead end!

Consequence: research funding for ANN was cut down extremely (for about 15 years).

Introduction to Artificial Neural Networks

How to leave the dead end:
1. Multilayer perceptrons: a two-layered net of MPPs realizes XOR.
2. Nonlinear separating functions, e.g. g(x_1, x_2) = 2 x_1 + 2 x_2 - 4 x_1 x_2 - 1 with θ = 0:
   g(0,0) = -1, g(0,1) = +1, g(1,0) = +1, g(1,1) = -1.
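Both escape routes are easy to verify in code. The function g is taken from the slide; the two-layer weights below are one standard choice of my own, since the slide's network diagram is not reproduced in this transcript:

def g(x1, x2):
    # nonlinear separating function from the slide
    return 2 * x1 + 2 * x2 - 4 * x1 * x2 - 1

def xor_two_layer(x1, x2):
    h1 = 1 if x1 - x2 >= 1 else 0    # hidden unit firing only for (1, 0)
    h2 = 1 if x2 - x1 >= 1 else 0    # hidden unit firing only for (0, 1)
    return 1 if h1 + h2 >= 1 else 0  # output unit: OR of the hidden units

for a in (0, 1):
    for b in (0, 1):
        assert (g(a, b) >= 0) == bool(a ^ b) == bool(xor_two_layer(a, b))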

Introduction to Artificial Neural Networks

How to obtain the weights w_i and the threshold θ? As yet: by construction.

Example: a NAND gate requires

x_1  x_2  NAND   condition
0    0    1      ⇒ 0 ≥ θ
0    1    1      ⇒ w_2 ≥ θ
1    0    1      ⇒ w_1 ≥ θ
1    1    0      ⇒ w_1 + w_2 < θ

⇒ requires the solution of a system of linear inequalities (∈ P); e.g. w_1 = w_2 = -2, θ = -3.

Now: by learning / training.
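The quoted solution indeed satisfies all four inequalities, as a quick check confirms:

w1, w2, theta = -2, -2, -3

def nand(x1, x2):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# 0 >= -3, -2 >= -3, -2 >= -3, but -4 < -3: exactly the NAND truth table
assert [nand(0, 0), nand(0, 1), nand(1, 0), nand(1, 1)] == [1, 1, 1, 0]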

Introduction to Artificial Neural Networks

Perceptron learning. Assumption: test examples with correct I/O behavior are available.

Principle:
(1) choose initial weights in an arbitrary manner
(2) feed in a test pattern
(3) if the output of the perceptron is wrong, then change the weights
(4) goto (2) until the output is correct for all test patterns

Graphically: translation and rotation of the separating line.

Introduction to Artificial Neural Networks

Example: the threshold as a weight. Augment the input with a constant 1 and set w = (-θ, w_1, w_2), x = (1, x_1, x_2); then the condition w_1 x_1 + w_2 x_2 ≥ θ becomes w · x ≥ 0.

Suppose the initial vector of weights is w^(0) = (1, -1, 1).

Introduction to Artificial Neural Networks

Perceptron learning (threshold θ integrated in the weights):
P: set of positive examples (output 1)
N: set of negative examples (output 0)

1. choose w_0 at random, t = 0
2. choose an arbitrary x ∈ P ∪ N
3. if x ∈ P and w_t · x > 0 then goto 2;
   if x ∈ N and w_t · x ≤ 0 then goto 2
4. if x ∈ P and w_t · x ≤ 0 then w_{t+1} = w_t + x; t++; goto 2
5. if x ∈ N and w_t · x > 0 then w_{t+1} = w_t - x; t++; goto 2
6. stop? if the I/O behavior is correct for all examples!

Why the updates help:
- let w · x > 0 although it should be ≤ 0: then (w - x) · x = w · x - x · x < w · x
- let w · x ≤ 0 although it should be > 0: then (w + x) · x = w · x + x · x > w · x

Remark: the algorithm converges and is finite; worst case: exponential runtime.
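The algorithm translates almost line by line into Python. The sketch below (my own interface; max_steps is a safety cap not present in the original algorithm) folds the threshold into the weight vector by augmenting each example with a leading 1, as on the previous slide:

import random

def perceptron_learning(P, N, n, max_steps=10_000):
    # w = (-theta, w_1, ..., w_n); examples are augmented with a leading 1
    w = [random.uniform(-1, 1) for _ in range(n + 1)]
    examples = [((1, *x), 1) for x in P] + [((1, *x), 0) for x in N]
    for _ in range(max_steps):
        converged = True
        for x, target in examples:
            out = sum(wi * xi for wi, xi in zip(w, x))
            if target == 1 and out <= 0:    # step 4: w <- w + x
                w = [wi + xi for wi, xi in zip(w, x)]
                converged = False
            elif target == 0 and out > 0:   # step 5: w <- w - x
                w = [wi - xi for wi, xi in zip(w, x)]
                converged = False
        if converged:                       # step 6: I/O correct for all examples
            return w
    raise RuntimeError("no separating weights found within max_steps")

# NAND is linearly separable, so the algorithm terminates:
w = perceptron_learning(P=[(0, 0), (0, 1), (1, 0)], N=[(1, 1)], n=2)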

Introduction to Artificial Neural Networks

We know what a single MPP can do. What can be achieved with many MPPs?
- a single MPP ⇒ separates the plane into two half-planes
- many MPPs in 2 layers ⇒ can identify convex sets

1. How? ⇒ 2 layers!
2. Convex? X is convex ⇔ ∀ a, b ∈ X: λ a + (1 - λ) b ∈ X for λ ∈ (0,1).

Introduction to Artificial Neural Networks

- a single MPP ⇒ separates the plane into two half-planes
- many MPPs in 2 layers ⇒ can identify convex sets
- many MPPs in 3 layers ⇒ can identify arbitrary sets
- many MPPs in more than 3 layers ⇒ not really necessary!

Arbitrary sets:
1. partition the nonconvex set into several convex sets
2. build a two-layered subnet for each convex set
3. feed the outputs of the two-layered subnets into an OR gate (third layer)
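The layering argument can be sketched directly: an AND gate over half-planes (2 layers) recognizes a convex set, and an OR gate over several such subnets (3rd layer) handles a nonconvex union. The two squares below are a toy example of mine, not from the slides:

def halfplane(w1, w2, theta):
    return lambda x, y: 1 if w1 * x + w2 * y >= theta else 0

def convex_and(hs):       # layer 2: AND gate, theta = number of half-planes
    return lambda x, y: 1 if sum(h(x, y) for h in hs) >= len(hs) else 0

def union_or(subnets):    # layer 3: OR gate, theta = 1
    return lambda x, y: 1 if sum(s(x, y) for s in subnets) >= 1 else 0

left = convex_and([halfplane(1, 0, 0), halfplane(-1, 0, -1),
                   halfplane(0, 1, 0), halfplane(0, -1, -1)])   # square [0,1] x [0,1]
right = convex_and([halfplane(1, 0, 2), halfplane(-1, 0, -3),
                    halfplane(0, 1, 0), halfplane(0, -1, -1)])  # square [2,3] x [0,1]
shape = union_or([left, right])  # nonconvex union of the two squares
assert shape(0.5, 0.5) == 1 and shape(2.5, 0.5) == 1 and shape(1.5, 0.5) == 0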