Computational Intelligence Winter Term 2009/10 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS 11) Fakultät für Informatik TU Dortmund

Plan for Today
- Organization (lectures / tutorials)
- Overview CI
- Introduction to ANN
- McCulloch-Pitts neuron (MCP)
- Minsky / Papert perceptron (MPP)

Organizational Issues: Who are you?
You are either studying Automation and Robotics (Master of Science), Module Optimization, or studying Informatik:
- BA-Modul "Einführung in die Computational Intelligence"
- Hauptdiplom-Wahlvorlesung (SPG 6 & 7)

Organizational Issues: Who am I?
Günter Rudolph, Fakultät für Informatik, LS 11
- office (if you want to see me): OH-14, R. 232
- office hours: Tuesday, 10:30-11:30 am, and by appointment

Organizational Issues
- Lectures: Wednesday, 10:15-11:45, OH-14, R. E23
- Tutorials: Wednesday, 16:15-17:00, OH-14, R. 304 (group 1); Thursday, 16:15-17:00, OH-14, R. 304 (group 2)
- Tutor: Nicola Beume, LS 11
- Information: teaching/lectures/CI/WS /lecture.jsp
- Slides: see web
- Literature: see web

Prerequisites
Knowledge about mathematics, programming, and logic is helpful.
But what if something is unknown to me? It is covered in the lecture, or there are pointers to the literature ... and don't hesitate to ask!

Overview Computational Intelligence
What is CI? ⇒ umbrella term for computational methods inspired by nature:
- artificial neural networks, evolutionary algorithms, fuzzy systems (the backbone)
- swarm intelligence, artificial immune systems, growth processes in trees, ... (new developments)

Overview Computational Intelligence
The term "computational intelligence" was coined by James Bezdek (FL, USA), originally intended as a demarcation line ⇒ establish a border between artificial and computational intelligence; nowadays the border is blurring.
Our goals:
1. know what CI methods are good for!
2. know when to refrain from CI methods!
3. know why they work at all!
4. know how to apply and adjust CI methods to your problem!

Introduction to Artificial Neural Networks: Biological Prototype
Neuron with cell body (C), dendrites (D), nucleus, axon (A), synapses (S):
- information gathering (D)
- information processing (C)
- information propagation (A / S)
In human beings, neurons signal electrically in the mV range at speeds of up to 120 m/s.

Introduction to Artificial Neural Networks: Abstraction
- dendrites → signal input
- nucleus / cell body → signal processing
- axon / synapse → signal output

Introduction to Artificial Neural Networks: Model
A neuron computes a function f(x_1, x_2, ..., x_n) of its inputs x_1, ..., x_n.
McCulloch-Pitts neuron (1943): x_i ∈ {0, 1} =: B and f: B^n → B.

1943: Warren McCulloch / Walter Pitts
- description of neurological networks → model: McCulloch-Pitts neuron (MCP)
- basic idea: a neuron is either active or inactive; skills result from connecting neurons
- considered static networks (i.e. connections had been constructed, not learnt)

McCulloch-Pitts Neuron
- n binary input signals x_1, ..., x_n
- threshold θ > 0
- fires (output 1) iff x_1 + ... + x_n ≥ θ
⇒ boolean OR (θ = 1) and boolean AND (θ = n) can be realized.
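As an illustration (not from the slides), a minimal Python sketch of the unweighted MCP neuron; the function name mcp is made up:

def mcp(inputs, theta):
    # MCP rule: fire iff the number of active binary inputs reaches the threshold
    return 1 if sum(inputs) >= theta else 0

assert mcp([0, 1, 0], theta=1) == 1   # boolean OR: theta = 1
assert mcp([1, 1, 1], theta=3) == 1   # boolean AND: theta = n = 3
assert mcp([1, 1, 0], theta=3) == 0   # AND stays silent below the threshold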

McCulloch-Pitts Neuron
In addition: m binary inhibitory signals y_1, ..., y_m.
- If at least one y_j = 1, then output = 0.
- Otherwise: if the sum of the inputs ≥ threshold, then output = 1, else output = 0.
Example: NOT is realized by an inhibitory input y_1 and threshold θ = 0.
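Continuing the sketch above, the inhibitory extension and the NOT gate (illustrative code, not from the slides):

def mcp_inhibit(inputs, inhibitors, theta):
    # any active inhibitory signal y_j forces the output to 0
    if any(inhibitors):
        return 0
    return 1 if sum(inputs) >= theta else 0

# NOT gate: threshold 0, one inhibitory input y_1; output = NOT y_1
for x1 in (0, 1):                     # an ordinary input x_1 is irrelevant here
    assert mcp_inhibit([x1], [0], theta=0) == 1
    assert mcp_inhibit([x1], [1], theta=0) == 0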

Analogons
- Neuron: simple MISO processor (with parameters: e.g. threshold)
- Synapse: connection between neurons (with parameter: synaptic weight)
- Topology: interconnection structure of the net
- Propagation: working phase of the ANN, processes input to output
- Training / learning: adaptation of the ANN to certain data

Theorem: Every logical function F: B^n → B can be simulated with a two-layered McCulloch-Pitts net.
Assumption: inputs are also available in inverted form, i.e. ∃ inverted inputs.
[Figure: an example formula and the two-layered MCP net realizing it; clause-decoding neurons in the first layer feed an OR neuron in the second.]

Proof (by construction):
Every boolean function F can be transformed into disjunctive normal form ⇒ 2 layers (AND - OR).
1. Every clause gets a decoding neuron with θ = n ⇒ output = 1 only if the clause is satisfied (AND gate).
2. All outputs of the decoding neurons are inputs of a neuron with θ = 1 (OR gate).
q.e.d.
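The construction can be written down directly; a sketch (assuming, as the theorem does, that inverted inputs are available, here simulated by computing 1 - x_i):

def dnf_net(clauses, x):
    # clauses: list of clauses, each a list of (input index, negated?) literals
    # layer 1: one decoding neuron per clause, theta = clause length (AND gate)
    decoders = []
    for clause in clauses:
        literals = [(1 - x[i]) if neg else x[i] for i, neg in clause]
        decoders.append(1 if sum(literals) >= len(clause) else 0)
    # layer 2: OR gate with theta = 1 over the decoder outputs
    return 1 if sum(decoders) >= 1 else 0

# XOR in DNF: (x_1 AND NOT x_2) OR (NOT x_1 AND x_2)
xor_clauses = [[(0, False), (1, True)], [(0, True), (1, False)]]
for x1 in (0, 1):
    for x2 in (0, 1):
        assert dnf_net(xor_clauses, [x1, x2]) == (x1 ^ x2)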

Generalization: inputs with weights
A weighted neuron that fires (1) if 0.2 x_1 + 0.4 x_2 + 0.3 x_3 ≥ 0.7 can be multiplied by 10: 2 x_1 + 4 x_2 + 3 x_3 ≥ 7 ⇒ duplicate the inputs (2 copies of x_1, 4 of x_2, 3 of x_3) and use threshold θ = 7 ⇒ equivalent!

Theorem: Weighted and unweighted MCP nets are equivalent for weights ∈ Q⁺.
Proof: Let the weighted neuron fire iff a_1/b_1 x_1 + ... + a_n/b_n x_n ≥ a_0/b_0 with a_i, b_i ∈ N. Multiplication with b_0 b_1 ... b_n yields an inequality with coefficients in N: input x_i gets coefficient a_i b_0 b_1 ... b_{i-1} b_{i+1} ... b_n, and the threshold becomes θ = a_0 b_1 ... b_n. Duplicate input x_i so that we get that many copies, and set all weights to 1.
q.e.d.
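A sketch of the proof's construction in Python (illustrative; unweight is a made-up name), applied to the previous slide's neuron 0.2 x_1 + 0.4 x_2 + 0.3 x_3 ≥ 0.7:

from fractions import Fraction
from math import prod

def unweight(weights, theta):
    # multiply the inequality by the product of all denominators
    d = prod(f.denominator for f in weights + [theta])
    counts = [int(w * d) for w in weights]   # copies of each input x_i
    return counts, int(theta * d)            # integer threshold, all weights 1

counts, t = unweight([Fraction(1, 5), Fraction(2, 5), Fraction(3, 10)],
                     Fraction(7, 10))
print(counts, t)   # [500, 1000, 750] and 1750: the ratio 2 : 4 : 3 >= 7 again

The proof multiplies by the full product of denominators rather than their least common multiple, so the counts come out larger than the slide's (2, 4, 3 with θ = 7) but equivalent.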

Conclusion for MCP nets
+ feed-forward: able to compute any Boolean function
+ recursive: able to simulate a DFA
- very similar to conventional logical circuits
- difficult to construct
- no good learning algorithm available

Perceptron (Rosenblatt 1958)
The complex model was reduced by Minsky & Papert to what is necessary: the Minsky-Papert perceptron (MPP), 1969.
What can a single MPP do? It outputs 1 iff w_1 x_1 + w_2 x_2 ≥ θ; isolating x_2 (for w_2 > 0) yields x_2 ≥ (θ - w_1 x_1) / w_2, i.e. a separating line that separates R^2 into 2 classes (the "yes" and "no" regions in the slide's figures).

AND, OR, NAND, NOR are linearly separable (a single line splits the inputs mapped to 0 from those mapped to 1), but XOR?

x_1  x_2  xor
0    0    0
0    1    1
1    0    1
1    1    0

For w_1 x_1 + w_2 x_2 ≥ θ this requires:
(0,0) ⇒ 0 < θ
(0,1) ⇒ w_2 ≥ θ
(1,0) ⇒ w_1 ≥ θ
(1,1) ⇒ w_1 + w_2 < θ
But w_1, w_2 ≥ θ > 0 implies w_1 + w_2 ≥ 2θ > θ: contradiction!
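This can also be checked by brute force; a sketch that scans a grid of weights and thresholds and records which of the 16 boolean functions of two inputs a single MPP realizes (the grid bounds are arbitrary but suffice here):

import itertools

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
realizable = set()
grid = [k / 2 for k in range(-8, 9)]    # w_1, w_2, theta from -4 to 4 in steps of 0.5
for w1, w2, theta in itertools.product(grid, repeat=3):
    table = tuple(int(w1 * x1 + w2 * x2 >= theta) for x1, x2 in inputs)
    realizable.add(table)

print(len(realizable))              # 14: all functions except XOR and XNOR
print((0, 1, 1, 0) in realizable)   # XOR  -> False
print((1, 0, 0, 1) in realizable)   # XNOR -> False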

1969: Marvin Minsky / Seymour Papert
Their book "Perceptrons" analyses the mathematical properties of perceptrons, with a disillusioning result: perceptrons fail to solve a number of trivial problems!
- XOR problem
- parity problem
- connectivity problem
"Conclusion": All artificial neurons have this kind of weakness! Research in this field is a scientific dead end!
Consequence: research funding for ANN cut down extremely (~ 15 years).

How to leave the "dead end":
1. Multilayer perceptrons: a two-layered net of MPPs realizes XOR (e.g. first-layer neurons for x_1 ∧ ¬x_2 and ¬x_1 ∧ x_2, fed into an OR neuron).
2. Nonlinear separating functions: g(x_1, x_2) = 2 x_1 + 2 x_2 - 4 x_1 x_2 - 1 with θ = 0 realizes XOR: g(0,0) = -1, g(0,1) = +1, g(1,0) = +1, g(1,1) = -1.
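A quick check (sketch) that the nonlinear function indeed separates XOR with θ = 0:

g = lambda x1, x2: 2 * x1 + 2 * x2 - 4 * x1 * x2 - 1
for x1 in (0, 1):
    for x2 in (0, 1):
        # fires (g >= theta = 0) exactly on the XOR-true inputs
        assert (g(x1, x2) >= 0) == bool(x1 ^ x2)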

How to obtain weights w_i and threshold θ?
As yet: by construction.
Example: NAND gate. w_1 x_1 + w_2 x_2 ≥ θ must satisfy
(0,0) ⇒ 0 ≥ θ
(0,1) ⇒ w_2 ≥ θ
(1,0) ⇒ w_1 ≥ θ
(1,1) ⇒ w_1 + w_2 < θ
⇒ requires the solution of a system of linear inequalities (∈ P) (e.g. w_1 = w_2 = -2, θ = -3).
Now: by learning / training.

Perceptron Learning
Assumption: test examples with correct I/O behavior are available.
Principle:
(1) choose initial weights in an arbitrary manner
(2) feed in a test pattern
(3) if the output of the perceptron is wrong, then change the weights
(4) goto (2) until the output is correct for all test patterns
Graphically: translation and rotation of the separating line.

Perceptron Learning
P: set of positive examples, N: set of negative examples
1. choose w_0 at random, t = 0
2. choose an arbitrary x ∈ P ∪ N
3. if x ∈ P and w_t · x > 0 then goto 2; if x ∈ N and w_t · x ≤ 0 then goto 2
4. if x ∈ P and w_t · x ≤ 0 then w_{t+1} = w_t + x; t++; goto 2
5. if x ∈ N and w_t · x > 0 then w_{t+1} = w_t - x; t++; goto 2
6. stop? If I/O correct for all examples!
Why the corrections help: if w · x > 0 but should be ≤ 0, then (w - x) · x = w · x - x · x < w · x; if w · x ≤ 0 but should be > 0, then (w + x) · x = w · x + x · x > w · x.
Remark: the algorithm converges and is finite; worst case: exponential runtime.
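A direct Python transcription of the algorithm (an illustrative sketch, not the lecture's code; dot and perceptron_learn are made-up names, and the loop is restructured to pick directly among the currently misclassified examples):

import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def perceptron_learn(P, N, w0, max_steps=10_000):
    # P: positive examples (want w·x > 0), N: negative examples (want w·x <= 0)
    w = list(w0)
    for _ in range(max_steps):
        wrong  = [(x, +1) for x in P if dot(w, x) <= 0]   # step 4 candidates
        wrong += [(x, -1) for x in N if dot(w, x) > 0]    # step 5 candidates
        if not wrong:                                     # step 6: I/O correct
            return w
        x, sign = random.choice(wrong)                    # step 2: arbitrary example
        w = [wi + sign * xi for wi, xi in zip(w, x)]      # w_{t+1} = w_t ± x
    raise RuntimeError("no separating weights found within max_steps")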

Example
Treat the threshold as a weight: augment each input to (1, x_1, x_2) and set w = (-θ, w_1, w_2), so that w_1 x_1 + w_2 x_2 ≥ θ becomes w · (1, x_1, x_2) ≥ 0.
Suppose the initial vector of weights is w(0) = (1, -1, 1).
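Reusing the perceptron_learn sketch above with the threshold-as-weight encoding; the boolean OR data below is a hypothetical stand-in, since the slide's concrete training set is not recoverable from this transcript:

P = [(1, 0, 1), (1, 1, 0), (1, 1, 1)]       # OR = 1: want w·x > 0
N = [(1, 0, 0)]                              # OR = 0: want w·x <= 0
w = perceptron_learn(P, N, w0=(1, -1, 1))    # the slide's initial vector w(0)
# the learned w = (-theta, w_1, w_2) encodes the threshold as the first weight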

We know what a single MPP can do. What can be achieved with many MPPs?
- A single MPP separates the plane into two half-planes.
- Many MPPs in 2 layers can identify convex sets.
1. How? ⇒ 2 layers: an AND neuron in the second layer intersects the half-planes of the first!
2. Convex? A set X is convex iff ∀ a, b ∈ X: λ a + (1 - λ) b ∈ X for λ ∈ (0, 1).

- A single MPP ⇒ separates the plane into two half-planes.
- Many MPPs in 2 layers ⇒ can identify convex sets.
- Many MPPs in 3 layers ⇒ can identify arbitrary sets.
- Many MPPs in more than 3 layers ⇒ not really necessary!
Arbitrary sets (see the sketch below):
1. partition the nonconvex set into several convex sets
2. build a two-layered subnet for each convex set
3. feed the outputs of the two-layered subnets into an OR gate (third layer)
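A sketch of this three-layer construction in Python (illustrative; the L-shaped region is a made-up example):

def halfplane(w1, w2, theta):                 # layer 1: a single MPP
    return lambda x, y: 1 if w1 * x + w2 * y >= theta else 0

def convex_cell(halfplanes):                  # layer 2: AND of half-planes
    n = len(halfplanes)
    return lambda x, y: 1 if sum(h(x, y) for h in halfplanes) >= n else 0

def union(cells):                             # layer 3: OR of convex cells
    return lambda x, y: 1 if sum(c(x, y) for c in cells) >= 1 else 0

# nonconvex L-shape = union of two rectangles, each the AND of four half-planes
rect1 = convex_cell([halfplane(1, 0, 0), halfplane(-1, 0, -2),
                     halfplane(0, 1, 0), halfplane(0, -1, -1)])  # [0,2] x [0,1]
rect2 = convex_cell([halfplane(1, 0, 0), halfplane(-1, 0, -1),
                     halfplane(0, 1, 0), halfplane(0, -1, -2)])  # [0,1] x [0,2]
l_shape = union([rect1, rect2])
assert l_shape(1.5, 0.5) == 1 and l_shape(1.5, 1.5) == 0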