PANN Testing

Training Time and Error Benchmarking

Target error: 0.02 (Mean Squared Error)

Training Time and Error Benchmarking

Progress ANN reduces its training error to the desired minimum in less than one second.
Perceptron and Back Propagation ANNs retain substantial error, which decreases only slowly.
Perceptron and Back Propagation ANNs show no tendency to reach the target error.
Tested training set: 30,000 images.

[Chart: Mean Squared Error (x 100) versus training time (milliseconds, log scale from 50 to 781,250) for Perceptron, Back Propagation, and Progress ANN; target error 0.02.]
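For reference, mean squared error, the metric used throughout this benchmark, is just the average of the squared differences between target and actual outputs. A minimal sketch of that calculation (the NumPy helper below is illustrative, not the benchmark's actual code):

```python
import numpy as np

def mean_squared_error(targets: np.ndarray, outputs: np.ndarray) -> float:
    """Average of squared differences between target and actual outputs."""
    return float(np.mean((targets - outputs) ** 2))

# Training stops once the error drops below the benchmark's target:
TARGET_ERROR = 0.02
targets = np.array([1.0, 0.0, 1.0])   # hypothetical target outputs
outputs = np.array([0.9, 0.1, 0.95])  # hypothetical network outputs
mse = mean_squared_error(targets, outputs)
print(mse, mse <= TARGET_ERROR)       # 0.0075 True
```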

Additional Comparison to Alternatives

Progress P-Net Creator vs. NeuroSolution Data Manager, on the same task:

Parameter            Progress P-Net Creator    NeuroSolution Data Manager
Standard deviation   0.0035                    0.011
Working time         3.297 sec                 1938 sec (32 min 18 sec)
Number of epochs     8                         44,000

Additional Comparison to Alternatives

Note: "records", "images", "lines", "samples", "data set", and "sample data" are used synonymously.

Comparison of training time between IBM SPSS Statistics 22 and the P-network on the same problem, tested on an Apple iMac 27" (3.5 GHz quad-core Intel Core i7, 8 GB of 1600 MHz DDR3 memory, SSD):

Network                   Images   Training time
Progress P-network        7,000    4 sec
IBM SPSS Statistics 22    7,000    13,400 sec (3 h 43 min)

Training advantage factor = 13,400 sec / 4 sec = 3,350

[Charts: training time versus number of images (1,000 to 7,000). IBM SPSS Statistics 22 shows exponential growth of training time, reaching roughly 13,400 seconds; the Progress P-network shows linear growth, staying near 4 seconds.]

© Progress, Inc. 2013
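The training advantage factor quoted above is a plain ratio of the two measured training times; a minimal sketch of the arithmetic, using the values from this slide:

```python
# Training times from the slide, for the same 7,000-image problem.
pann_time_sec = 4.0        # Progress P-network
spss_time_sec = 13_400.0   # IBM SPSS Statistics 22 (3 h 43 min)

training_advantage_factor = spss_time_sec / pann_time_sec
print(training_advantage_factor)  # 3350.0
```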

PANN vs. Classical Neural Network

Classical neural network: theoretically unlimited intelligence, but long training time.
PANN: practically unlimited intelligence and quick training.

Network intelligence is proportional to the number of elements and to the problems at hand.

[Chart: training time (seconds, log scale from 10 to 10^14, annotated from 1 minute up to 1 million years) versus network intelligence, comparing the classical neural network, PANN Software, PANN Hardware, and the human brain.]

PANN requires minutes (at most, hours) to reach a level of network intelligence that is unreachable for classical neural networks even after thousands of years.

© Progress, Inc. 2013

Test on Image Compression

Files with images from the CIFAR-10 test set.

[Chart: file size on disk (MB, 50 to 300) versus number of color images in a file (1 to 20,000), for no compression, PANN with 50 weights on a synapse, and PANN with 100 weights on a synapse; reconstruction MSE ranges from 0.004 to 0.013. PANN compression economizes disk space.]

#2 GPU Breakthrough: Amdahl's Law and P-net

PANN's simple matrix-algebra mathematics allows 100% parallel processing, so speed increases linearly with additional GPUs and CPUs. Progress US patent application 15/449,614, covering the matrix algebra application with PANN, was filed on March 3, 2017.

This allows building computers and other electronics with:
- very high processing speed, and
- a reduced number of GPUs and CPUs.
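The linear-scaling claim follows from Amdahl's Law, which bounds the speedup on N processors by 1 / ((1 - p) + p / N), where p is the parallelizable fraction of the work; when p = 1, as asserted for PANN's matrix algebra, the bound reduces to N. A minimal illustrative sketch (not the P-net implementation):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: speedup on n processors when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A workload with a 5% serial fraction scales sublinearly...
print(amdahl_speedup(p=0.95, n=100))  # ~16.8: capped by the serial part
# ...whereas a fully parallel workload (p = 1), as claimed for PANN's
# matrix algebra, scales linearly with the number of processors.
print(amdahl_speedup(p=1.0, n=100))   # 100.0
```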

#2 GPU Breakthrough: Training Speed of PANN vs. NVIDIA cuDNN

PANN training speed on CPUs and GPUs is thousands of times higher than that of existing ANNs. This makes it possible to:
- improve ANN training speed by thousands of times,
- build a supercomputer on GPUs, and
- build a hypercomputer on GPUs.

PANN provides 3,350 (training advantage factor) x 60 (threads on a GPU) = 201,000x. Acceleration is proportional to the number N of GPUs: 201,000 x N.
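Taken together, the figures on this slide make the claimed acceleration the product of the training advantage factor, the per-GPU thread count, and the number of GPUs; a minimal sketch of that arithmetic (the function name is an illustrative assumption):

```python
TRAINING_ADVANTAGE = 3_350  # P-network vs. IBM SPSS, from the earlier slide
GPU_THREADS = 60            # threads on one GPU, from this slide

def pann_acceleration(n_gpus: int) -> int:
    """Claimed acceleration: 3,350 x 60 = 201,000 per GPU, proportional to N GPUs."""
    return TRAINING_ADVANTAGE * GPU_THREADS * n_gpus

print(pann_acceleration(1))  # 201000
print(pann_acceleration(4))  # 804000
```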

Comparison of P-net Training Time on CPU vs. GPU

Network: 10 inputs, 3 outputs. Training time in msec (1 sec = 1000 msec).

Images   CPU (ms)   GPU (ms)   log CPU   log GPU   Difference (ms)   Speedup (x)
100      2.00       3.10       0.30      0.49      -1.10             0.65
1,000    321.00     19.90      2.51      1.30      301.10            16.13
5,000    7,872.00   271.00     3.90      2.43      7,601.00          29.05

CPU = central processing unit; GPU = graphics processing unit.
Testing computer with CPU speed / GPU speed = 4.
[Chart: training time (log scale) versus number of images.]

Comparison of P-net Training Time on CPU vs. GPU

Network: 100 inputs, 10 outputs. Training time in msec.

Images    CPU (ms)    GPU (ms)   log CPU   log GPU   Difference (ms)   Speedup (x)
100       2.00        69.10      0.30      1.84      -67.10            0.03
1,000     28.00       69.30      1.45      1.84      -41.30            0.40
100,000   3,719.00    86.40      3.57      1.94      3,632.60          43.04
500,000   18,440.00   125.30     4.27      2.10      18,314.70         147.17

CPU = central processing unit; GPU = graphics processing unit.
[Chart: training time (log scale) versus number of images.]
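The derived columns in both tables follow mechanically from the raw CPU and GPU timings; a minimal sketch that reproduces the last row of the second table (names are illustrative):

```python
import math

def derive_row(images: int, cpu_ms: float, gpu_ms: float) -> dict:
    """Compute the derived columns used in the tables above."""
    return {
        "images": images,
        "log CPU": round(math.log10(cpu_ms), 2),
        "log GPU": round(math.log10(gpu_ms), 2),
        "difference (ms)": round(cpu_ms - gpu_ms, 2),
        "speedup (x)": round(cpu_ms / gpu_ms, 2),
    }

# Last row of the second table: 500,000 images.
print(derive_row(500_000, cpu_ms=18_440.0, gpu_ms=125.3))
# {'images': 500000, 'log CPU': 4.27, 'log GPU': 2.1,
#  'difference (ms)': 18314.7, 'speedup (x)': 147.17}
```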