

ICCINC'2003 Janusz Starzyk, Yongtao Guo and Zhineng Zhu, Ohio University, Athens, OH 45701, U.S.A.
6th International Conference on Computational Intelligence and Neural Computing, Cary, NC, September 30th, 2003

OUTLINE
 Neural Networks
 Traditional Hardware Implementation
 Principle of Self-Organizing Learning
 Advantages & Simulation Algorithm
 Hardware Architecture
 Hardware/Software Codesign
 Routing and Interface
 PCB SOLAR
 Future Work
 Conclusion

Traditional ANN Hardware
– Limited routing resources.
– The quadratic relationship between routing and the number of neurons makes classical ANNs wire dominated: interconnect is 70% of chip area.
[Figure: layered network with input, hidden, and output layers showing information flow]
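The wiring argument on this slide can be made concrete with a small sketch (a Python stand-in; the function names and example sizes are illustrative, not from the paper):

```python
# Illustrative comparison of interconnect counts: a classical fully
# connected ANN wires every neuron to every neuron in the next layer,
# while a lattice where each neuron has a fixed fan-in drawn from
# routing channels grows only linearly.

def fully_connected_wires(neurons_per_layer: int, layers: int) -> int:
    """Wires between adjacent fully connected layers: quadratic in layer width."""
    return (layers - 1) * neurons_per_layer ** 2

def fixed_fanin_wires(total_neurons: int, fan_in: int) -> int:
    """Wires when every neuron selects a fixed number of inputs: linear growth."""
    return total_neurons * fan_in

print(fully_connected_wires(100, 3))  # 20000
print(fixed_fanin_wires(300, 4))      # 1200
```

Doubling the neuron count quadruples the first figure but only doubles the second, which is why classical ANNs become wire dominated.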

Biological Neural Networks
[Figure: neuron cell body, from IFC's webpage; Dowling, 1998, p. 17]

What is SOLAR?
SOLAR (Self-Organizing Learning Array) is a new biologically inspired learning network organization.
Basic fabric: a fixed lattice of distributed, parallel processing units (neurons).
Self-organization:
 Neurons choose inputs adaptively from the routing channels.
 Neurons adaptively reconfigure themselves.
 Neurons send output signals to the routing channels.
 The number of neurons results automatically from problem complexity.

Self Organizing Learning Array (SOLAR): Organization
 Neurons organized in a cell array
 Sparse randomized connections
 Local self-organization
 Data driven
 Entropy-based learning
 Regular structure
 Suitable for large-scale circuit implementation

Neuron's Simulation Structure
Neuron inputs (from the nearest-neighbor neuron or remote neurons):
– System clock
– Data input
– Control input (TCI)
– Information deficiency (ID)
Neuron outputs (to other neurons):
– Data output
– Control output
– Information deficiency

Self-Organizing Process

Self-organizing Principle
Information index: a neuron self-organizes by maximizing the information index.
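The slide's formula for the information index did not survive transcription. As a sketch, one common entropy-based formulation of such an index is the normalized reduction of entropy (this is an assumption and may differ from the paper's exact definition):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_index(class_probs):
    """Sketch of an entropy-based information index in [0, 1]:
    1 - H/H_max, so a pure class split scores 1 and a uniform split scores 0.
    Assumed formulation, not necessarily SOLAR's exact definition."""
    h_max = math.log2(len(class_probs))
    return 1.0 - entropy(class_probs) / h_max if h_max > 0 else 1.0
```

A neuron maximizing this quantity prefers input transformations that separate the classes as cleanly as possible.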

Self-organizing Principle
Information deficiency (helps to organize SOLAR learning): each neuron passes its output information deficiency to subsequent neurons. The learning array grows by adding more neurons until the input information deficiency of a subsequent neuron falls below a threshold.
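The growth rule can be sketched as a simple loop (illustrative dynamics only; in SOLAR the deficiency is computed from the training data, not from a fixed per-neuron reduction factor as assumed here):

```python
def grow_array(initial_deficiency, reduction_per_neuron, threshold):
    """Add neurons while the input information deficiency of the next
    neuron stays above the threshold. `reduction_per_neuron` is an
    assumed constant factor standing in for data-driven learning."""
    deficiency = initial_deficiency
    neurons = 0
    while deficiency > threshold:
        neurons += 1
        deficiency *= reduction_per_neuron  # each neuron resolves a fraction
    return neurons, deficiency
```

Under these assumptions, the array size adapts automatically: harder problems (slower deficiency reduction or lower threshold) yield more neurons, matching the slide's claim that network size follows problem complexity.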

Self-organizing Process: Matlab Simulation
[Figures: initial interconnection; learning process]

Software Simulation
Training data: credit card approval data (ftp:cs.uci.edu)
SOLAR & other classifiers (simulation), miss detection probability:

Method    Miss Det. Prob.    Method    Miss Det. Prob.
CAL5      .131               Naivebay  .151
DIPOL92   .141               CASTLE    .148
Logdisc   .141               ALLOC
SMART     .158               CART      .145
C                            NewID     .181
IndCART   .152               CN2       .204
Bprop     .154               LVQ       .197
RBF       .145               Quadisc   .207
Baytree   .171               Default   .440
ITule     .137               k-NN      .181
AC2       .181               SOLAR     .135

Structure of a Single Neuron
– RPU: reconfigurable processing unit
– CU: control unit
– DPE: dynamic probability estimator
– EBE: entropy-based evaluator
– DSRU: dynamic self-reconfiguration memory
– NI/NO: data input/output
– CI/CO: control input/output

Routing Structure
– CSU: configurable switching unit
– BRU: bidirectional routing unit

Configurable Switching Unit (CSU)
The CSU realizes flexible connections among neurons:
– Butterfly structure
– A CSU can take any number of inputs, even or odd
[Figures: CSU with an even number of inputs; CSU with an odd number of inputs]
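A butterfly structure can be modeled in a few lines (a toy sketch of how a CSU might permute lines; the switch encoding is invented here, not taken from the RTL):

```python
# Toy model of a butterfly switching network: for n = 2^k lines there are
# k stages, and stage s pairs the lines whose indices differ in bit s.
# Each pair sits on a 2x2 switch that either passes or exchanges its lines.

def butterfly_route(inputs, swaps):
    """Route len(inputs) lines (a power of two) through log2(n) stages.
    `swaps[s]` is the set of lower pair indices whose switch in stage s
    is set to "exchange"; all other switches pass straight through."""
    data = list(inputs)
    n = len(data)
    stride, s = 1, 0
    while stride < n:
        for i in range(n):
            j = i ^ stride          # partner line in this stage
            if i < j and s < len(swaps) and i in swaps[s]:
                data[i], data[j] = data[j], data[i]
        stride <<= 1
        s += 1
    return data

print(butterfly_route(['a', 'b', 'c', 'd'], [{0, 2}, {0}]))  # ['d', 'a', 'b', 'c']
```

With log2(n) stages of 2x2 switches the network reaches many permutations using only n/2 switches per stage, which is what makes the structure attractive for flexible, low-cost neuron interconnect.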

Configurable Switching Unit (cont'd)
Random connections of neurons with a branching ratio of 50%, for 3x6 and 6x15 neuron arrays.
[Figures: 3x6 array, routing resources used 62.7%; 6x15 array, routing resources used 85.3%]

Configurable Switching Unit (cont'd)
Random connections of a 4x7 neuron array with branching ratios of 10% and 90%.
[Figures: branching ratio of 10%; branching ratio of 90%]
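The random-connection experiments above can be sketched as follows (an illustrative model, not the actual generator: here "branching ratio" is assumed to be the probability that any candidate feed-forward link is realized):

```python
import random

def random_connections(rows, cols, branching_ratio, seed=0):
    """Generate random feed-forward links in a rows x cols neuron array.
    Each link from a neuron to a neuron in a later column is realized
    with probability `branching_ratio` (assumed interpretation)."""
    rng = random.Random(seed)  # fixed seed for a reproducible wiring
    neurons = [(r, c) for r in range(rows) for c in range(cols)]
    return [(src, dst)
            for src in neurons for dst in neurons
            if dst[1] > src[1] and rng.random() < branching_ratio]
```

Raising the branching ratio or the array size increases the expected number of links and hence the fraction of routing resources consumed, consistent with the utilization figures quoted on the slides.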

HW/SW Codesign
Partition of system, co-simulation:
 Neuron's architecture
 System initialization, organization and management
 Interface
Software runs on a PC and communicates over the PCI bus with the hardware board (Virtex XCV800 FPGA with dynamic configuration; JTAG programming).

SW/HW Co-simulation
 A software process: written in behavioral VHDL.
 A hardware process: written in RTL VHDL, which is synthesizable.
 HW/SW communication: FSM and FIFOs.
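The FSM-and-FIFO handshake between the two processes can be sketched in a few lines (a Python stand-in for the VHDL; all class and function names are invented for illustration):

```python
from collections import deque

class FifoLink:
    """Pair of FIFOs connecting the software and hardware processes."""
    def __init__(self):
        self.sw_to_hw = deque()   # software writes, hardware reads
        self.hw_to_sw = deque()   # hardware writes, software reads

def software_process(link, samples):
    """Behavioral side: push stimulus data into the outgoing FIFO."""
    for s in samples:
        link.sw_to_hw.append(s)

def hardware_process(link):
    """RTL-side FSM sketch: IDLE while its input FIFO is empty,
    otherwise PROCESS one word per step and emit a result."""
    while link.sw_to_hw:
        word = link.sw_to_hw.popleft()
        link.hw_to_sw.append(word * 2)  # placeholder for neuron computation
```

Decoupling the two sides through FIFOs lets the behavioral model and the synthesizable model advance at different rates while still exchanging data deterministically.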

Hardware Architecture

Software Architecture
Layers: system design, Data I/O API and Ctrl I/O API, PCI functions, kernel driver, system functions, and hardware access functions over the PCI bus.
Data I/O: matDIME_DMARead.dll, matDIME_DMAWrite.dll, matviDIME_ReadRegister.dll, matviDIME_WriteRegister.dll, …
Ctrl I/O: matCloseDIMEBoard.dll, matConfigDIMEBoard.dll, matOpenDIMEBoard.dll, …

PCB Design
A single SOLAR PCB contains 2x2 VIRTEX XCV1000 chips.

SOLAR PCB Design: Boards
[Figures: interface board; SOLAR board]

Neurons Prototyping
Problem: neurons need to be carefully placed, otherwise some resources are lost. Neuron memory needs to be optimized for best resource utilization.

Future Work: System SOLAR

SOLAR is different from traditional neural networks:
 Expandable modular architecture
 Dynamically reconfigurable hardware structure
 Interconnection count grows linearly with the number of neurons
 Data-driven self-organizing learning hardware
 Learning and organization based on local information

Why focus on networks of neurons?
 Increases computational speed
 Improves fault tolerance
 Constrains us to use distributed solutions
 The brain does it 

Can we set milestones in developing intelligent networks of neurons?
 How to represent distributed cognition?
 How to model a machine's will to learn and act?
 How to introduce association between patterns?
 How shall a machine implement temporal learning?
 How shall a machine block repetitive information from being processed over and over again?
 How shall a machine evaluate its state with respect to set objectives and plan its actions?
 How to implement elements of reinforcement learning in distributed networks?

Questions