COGNITIVE MEMORY HUMAN AND MACHINE


COGNITIVE MEMORY: HUMAN AND MACHINE by BERNARD WIDROW and JUAN CARLOS ARAGON, INFORMATION SYSTEMS LABORATORY, DEPT. OF ELECTRICAL ENGINEERING, STANFORD UNIVERSITY, June 2008

THE 3 HOWS: How does human memory work? How could I build a memory like that? How could I use it to solve practical problems?

What would we like to do? Design a memory system that is as simple as possible, but behaves like and emulates human memory. Why would we like to do this? To develop a new kind of memory for computers, adjunct to existing forms of memory, that facilitates solutions to problems in artificial intelligence, pattern recognition, speech recognition, control systems, etc., and to advance cognitive science with new insight into the workings of human memory.

SALIENT FEATURES OF COGNITIVE MEMORY Stores sensory patterns (visual, auditory, tactile; radar, sonar, etc.). Stores patterns wherever space is available, not in specified memory locations. Stores simultaneously sensed input patterns in the same folder (e.g., simultaneous visual and auditory patterns are stored together). Data recovery is in response to “prompt” input patterns (e.g., a visual or auditory input pattern would trigger recall). Autoassociative neural networks are used in the data retrieval system.
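The storage-and-retrieval behavior above can be sketched as a toy data structure (a minimal sketch; the class and method names are illustrative, not from the talk): patterns sensed together go into one folder, and a prompt pattern recalls the folder whose contents best match it.

```python
import numpy as np

class CognitiveMemory:
    """Toy sketch: simultaneously sensed patterns are stored together in a
    'folder'; a single prompt pattern can recall the whole folder."""

    def __init__(self, hit_threshold=0.1):
        self.folders = []                # stored wherever space is available
        self.hit_threshold = hit_threshold

    def store(self, *patterns):
        # Simultaneously sensed input patterns go into the same folder.
        self.folders.append([np.asarray(p, dtype=float) for p in patterns])

    def retrieve(self, prompt):
        # Scan stored patterns; a sufficiently low mean-squared error is a
        # "hit" and recalls everything stored in that folder.
        prompt = np.asarray(prompt, dtype=float)
        best_err, best_folder = float("inf"), None
        for folder in self.folders:
            for pattern in folder:
                if pattern.shape != prompt.shape:
                    continue
                err = float(np.mean((pattern - prompt) ** 2))
                if err < best_err:
                    best_err, best_folder = err, folder
        return best_folder if best_err < self.hit_threshold else None

mem = CognitiveMemory()
visual = [1.0, 0.0, 1.0]
audio = [0.5, 0.5]
mem.store(visual, audio)                  # visual and auditory stored together
recalled = mem.retrieve([0.9, 0.1, 1.0])  # a noisy visual prompt recalls both
```

The point of the sketch is the folder semantics: recalling via the visual prompt returns the auditory pattern as well, since they were stored together.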

Satellite photo of Diego Garcia Island showing U.S. Air Force base

Aircraft parked in an area near the main runway

Scanning, looking for a hit

This is a hit; the object is a KC-135

A SIMPLE COGNITIVE MEMORY FOR PATTERN RECOGNITION [Block diagram: a camera feeds a visual cortex (VC) front end; a MUX routes patterns either to pattern storage (training) or, via a buffer, to the autoassociative neural network (NN) for pattern retrieval (sensing), which produces output patterns and a HIT? decision.]

THREE PHOTOS OF BERNARD WIDROW USED FOR TRAINING; A PHOTO OF JUAN CARLOS ARAGON, VICTOR ELIASHBERG, AND BERNARD WIDROW USED FOR SENSING

FACE DETECTION
Training (low resolution, 20x20-pixel images)
One image of a person's face was trained in. The image was adjusted by:
Rotation (2° increments, 7 angles)
Translation (left/right, up/down, 1-pixel increments, 9 positions)
Brightness (5 levels of intensity)
Total number of training patterns = 7 × 9 × 5 = 315
Training time: 12 minutes on an AMD 64-bit Athlon 2.6 GHz computer for 0.25% MSE
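The augmentation arithmetic on this slide can be checked directly (a trivial sketch; the variable names are mine, not from the slides):

```python
# Augmentation grid for face-detection training, per the slide:
rotations = 7      # 2-degree increments, 7 angles
translations = 9   # left/right, up/down, 1-pixel increments, 9 positions
brightness = 5     # 5 levels of intensity

# Every combination of adjustments yields one training pattern.
training_patterns = rotations * translations * brightness
print(training_patterns)  # 315, matching the slide
```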

FACE DETECTION
Sensing (low resolution, 20x20-pixel images)
Each input pattern was adjusted by:
Scaling (6 window sizes)
Translation (90-pixel increments)
Errors with background were ~8X greater than with a person's face
60 patterns per second through the neural network
The autoassociative neural network has a total of 1100 neurons distributed over 3 layers:
400 neurons, 400 weights per neuron, first layer
300 neurons, 400 weights per neuron, second layer
400 neurons, 300 weights per neuron, third layer
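The ~8X error gap between faces and background suggests a simple thresholding rule for the HIT? decision. A hedged sketch of that idea (the function names, threshold value, and stand-in "networks" are assumptions for illustration, not the talk's implementation): an autoassociative network reconstructs its input, and a window is declared a face when the reconstruction MSE falls below a threshold set between the face-level and background-level errors.

```python
import numpy as np

def reconstruction_mse(autoassoc, pattern):
    # autoassoc: any callable mapping a pattern to its reconstruction.
    out = autoassoc(pattern)
    return float(np.mean((out - pattern) ** 2))

def is_hit(autoassoc, pattern, threshold):
    # Trained-in faces reconstruct well (low MSE); background patterns
    # reportedly give ~8X larger error, so a threshold between the two
    # levels separates hits from misses.
    return reconstruction_mse(autoassoc, pattern) < threshold

# Illustration with stand-in "networks":
rng = np.random.default_rng(0)
face = rng.random(400)                    # a 20x20 pattern, flattened
good_net = lambda p: p + 0.01 * rng.standard_normal(p.shape)  # near-perfect recall
bad_net = lambda p: rng.random(p.shape)                       # reconstructs nothing

assert is_hit(good_net, face, threshold=0.01)       # low MSE: hit
assert not is_hit(bad_net, face, threshold=0.01)    # high MSE: no hit
```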

FACE RECOGNITION
Training (high resolution, 50x50-pixel images)
All 3 images of Widrow's face were trained in. Each image was adjusted by:
Rotation (2° increments, 7 angles)
Translation (left/right, up/down, 1-pixel increments, 25 positions)
Scaling (3 window sizes)
Total number of training patterns = 3 × 7 × 25 × 3 = 1575
Training time: 2.6 hours on an AMD 64-bit Athlon 2.6 GHz computer for 0.25% MSE

FACE RECOGNITION
Sensing (high resolution, 50x50-pixel images)
Each input pattern was adjusted by:
Scaling (6 window sizes)
Translation (2-pixel increments, 25 positions)
Brightness (6 levels of intensity)
Optimization was done for each detected face
Errors with unidentified faces were ~4X greater than with Widrow's face
5 patterns per second through the neural network
Autoassociative neural network:
1800 neurons, 2500 weights per neuron, first layer
1500 neurons, 1800 weights per neuron, second layer
2500 neurons, 1500 weights per neuron, third layer
Total: 5800 neurons, 10,950,000 weights
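The layer sizes on this slide imply the quoted totals; a quick check:

```python
# (neurons, weights per neuron) for the three layers of the
# face-recognition autoassociative network, per the slide:
layers = [(1800, 2500), (1500, 1800), (2500, 1500)]

total_neurons = sum(n for n, _ in layers)
total_weights = sum(n * w for n, w in layers)
print(total_neurons, total_weights)  # 5800 neurons, 10950000 weights
```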

SENSING PATTERNS OBTAINED FROM WIDROW'S FACE WITH TWO WINDOW SIZES: (a) STRAIGHT UP, AND (b) ROTATED

Cognitive Memory Challenged
Photographs distributed by NIST for the Face Recognition Grand Challenge version 1 were used for training and testing.
Photographs of 75 persons were selected for training.
75 photographs of those same persons, NOT trained in, were selected for sensing.
300 photographs of persons NOT trained in were selected for sensing.
In total, 75 photographs were used for training and 375 for sensing.
The autoassociative neural network had 3 layers, distributed as follows:
2000 neurons in the first layer
1500 neurons in the second layer
10000 neurons in the last layer
Total number of weights: 38 million. Retina size: 100 × 100 pixels.
Results: the 75 people trained in were recognized and identified without error, while the 300 people not trained in were rejected by the Cognitive Memory system.
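The 38-million weight total follows if each neuron's fan-in equals the size of the previous layer (the 100 × 100 retina for the first layer). That fan-in assumption is an inference on my part; the slide gives only the layer sizes and the total:

```python
retina = 100 * 100                 # 10000 retina pixels
layer_sizes = [2000, 1500, 10000]  # neurons per layer, per the slide

# Assumed fan-in per neuron: the previous layer's size (retina for layer 1).
fan_ins = [retina] + layer_sizes[:-1]
total_weights = sum(n * f for n, f in zip(layer_sizes, fan_ins))
print(total_weights)  # 38000000, matching the slide's "38 million"
```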

VIDEO ON AIRCRAFT IDENTIFICATION

FACE RECOGNITION VIDEO