Hierarchical Neural Network for Text Based Learning
Janusz A. Starzyk, Basawaraj
Ohio University, Athens, OH

Introduction
- The traditional approach is to describe the semantic network structure and/or the transition probabilities of an associated Markov model; biological networks, by contrast, learn.
- Different neural network structures share a common goal: solving the given problem simply and efficiently.
- Sparsity is essential; for large data sets, the size of the network and the time to train it are important.
- A hierarchical structure of identical processing units was proposed. Its layered organization and sparse structure are biologically inspired, and neurons on different layers interact through trained links.

Hierarchical Network
- A hierarchical neural network structure for text learning is obtained through self-organization; a similar representation was used for a text-based semantic network.
- An input layer takes in characters, then learns and activates words stored in memory.
- Direct activation of words carries a large computational cost for large dictionaries; extending the network to phrases, sentences, or paragraphs would render it impractical for the same reason, and the computer memory required would also be tremendously large. This leads to a sparse hierarchical structure in which higher layers represent more complex concepts.
- The basic nodes of the network are capable of differentiating input sequences. Sequence learning, a prerequisite to building spatio-temporal memories, is performed using laminar minicolumn LTM cells (Fig. 1).
- In such networks the interconnection scheme is obtained naturally through sequence learning and structural self-organization; no prior assumption is made about the locality of connections or the sparsity of the structure.
- The machine learns only the inputs useful to its objectives, a process regulated by reinforcement signals and self-organization.

References
Mountcastle, V. B., et al., Response Properties of Neurons of Cat’s Somatic Sensory Cortex to Peripheral Stimuli, J. Neurophysiology, vol. 20, 1957.
Rogers, T. T., McClelland, J. L., Semantic Cognition: A Parallel Distributed Processing Approach, MIT Press, 2004.
Grossberg, S., How does the cerebral cortex work? Learning, attention and grouping by the laminar circuits of visual cortex. Spatial Vision, vol. 12, 1999.
Starzyk, J. A., Liu, Y., Hierarchical spatio-temporal memory for machine learning based on laminar minicolumn structure, 11th ICCNS, Boston.

Network Simplification
- The proposed approach uses intermediate neurons to lower the computational cost: intermediate neurons decrease the number of activations associated with higher-level neurons. The concept can be extended to associations of words.
- A small number of rules for concurrent processing is used, and the network can arrive at a local optimum of structure and performance.
- The network topology is self-organizing through the addition and removal of neurons and the redirection of neuron connections.
- Neurons are described by their sets of input and output neurons. Local optimization criteria are checked by searching the set SL_A before the structure is updated when neurons are created or merged, where IL_X is the input list of neuron X and OL_A is the output list of neuron A.
- Fig. 3: If …, a new node C is created.
- Fig. 4: A neuron A with a single output is merged with neuron B, and A is removed.
- Fig. 5: A neuron B with a single input is merged with neuron A, and B is removed.

Implementation
- Batch mode: all training words are available at initiation, and network simplification and optimization are done by processing all the words in the training set. The total number of neurons is 23% higher than the reference (6000 words).
- Dynamic mode: the training words are added incrementally, and simplification and optimization are done by processing one word at a time.
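The two training regimes described above can be contrasted with a minimal sketch. Here the network is modeled as a plain set of learned words and `simplify` is a placeholder for the rule-based optimization pass; both are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of batch vs. dynamic training (names are assumptions,
# not from the poster). The "network" is a plain set of learned words and
# simplify() stands in for the rule-based merge/create optimization pass.

def simplify(network):
    # Placeholder for the self-organization rules (merging/creating neurons).
    return network

def train_batch(network, words):
    # Batch mode: all words are available at initiation;
    # the simplification pass runs once over the whole training set.
    for w in words:
        network.add(w)
    simplify(network)
    return network

def train_dynamic(network, words):
    # Dynamic mode: words arrive one at a time; the network is simplified
    # after every insertion, paying extra bookkeeping overhead.
    for w in words:
        network.add(w)
        simplify(network)
    return network

print(sorted(train_batch(set(), ["card", "care", "cart"])))
```

Both loops end with the same learned vocabulary; the difference is how often the simplification pass runs, which is consistent with the extra overhead reported for the dynamic mode.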
- In dynamic mode, the total number of neurons is 68% higher than the reference (6000 words).

Results and Conclusion
- Tests were run with dictionaries of up to 6000 words.
- The percentage reduction in the number of interconnections increases (by up to 65-70%) as the number of words grows.
- The time required to process network activation for all the words used decreases as the number of words increases (a reduction by a factor of 55 in batch mode and 35 in dynamic mode for 6000 words).
- The dynamic implementation takes longer than the batch implementation, mainly due to the additional bookkeeping overhead, and its savings in connections and activations are smaller.
- A combination of both methods is advisable for continuous learning and self-organization.

Rules for Self-Organization
- A few simple rules are used for self-organization (Figs. 2-5 show nodes A, B, C with inputs X1-X4).
- Fig. 2: If …, a new node C is created.
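As a rough illustration of the single-output merge rule (Fig. 4, under Network Simplification), the following sketch uses the poster's IL/OL notation; the dict-based network representation is an assumption made for the example, not the authors' data structure.

```python
# Illustrative sketch of the Fig. 4 rule: a neuron A whose output list (OL)
# contains exactly one neuron B is merged into B; A's inputs are redirected
# to B and A is removed. The dict-of-{IL, OL} representation is an
# assumption for illustration only.

def merge_single_output(net, a):
    """net maps neuron name -> {'IL': set of inputs, 'OL': set of outputs}."""
    if len(net[a]['OL']) != 1:
        return False  # the rule does not apply
    (b,) = net[a]['OL']
    for x in net[a]['IL']:
        net[x]['OL'].discard(a)   # redirect the edge x -> A to x -> B
        net[x]['OL'].add(b)
        net[b]['IL'].add(x)
    net[b]['IL'].discard(a)
    del net[a]                    # A is removed from the network
    return True

net = {
    'x1': {'IL': set(), 'OL': {'A'}},
    'x2': {'IL': set(), 'OL': {'A'}},
    'A':  {'IL': {'x1', 'x2'}, 'OL': {'B'}},
    'B':  {'IL': {'A'}, 'OL': set()},
}
merge_single_output(net, 'A')
print(sorted(net['B']['IL']))  # ['x1', 'x2']
```

The Fig. 5 rule (a neuron with a single input merged into that input) would be the symmetric operation on the IL side.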