A Perspective on the Future of Massively Parallel Computing Presented by: Cerise Wuthrich June 23, 2005.

1 A Perspective on the Future of Massively Parallel Computing Presented by: Cerise Wuthrich June 23, 2005

2 A Perspective on the Future of Massively Parallel Computing: Fine-Grain vs. Coarse-Grain Parallel Models Predrag T. Tosic Proceedings of the 1st Conference on Computing Frontiers April 2004 p-tosic@cs.uiuc.edu

3 Outline  Intro & Background of Current Models –Limits of Sequential Models –Tightly Coupled MP –Loosely Coupled DS  Fine-Grain Parallel Models –ANN –Cellular Automata  Fine-Grain vs Coarse-Grain –Architecture –Functions –Potential Advantages  Summary and Conclusions

4 Introduction and Background of Current Models  Hardware limitations  There are physical limits to how fast we can compute –Limits to increasing the densities and decreasing the size of basic microcomponents –No signal can propagate faster than the speed of light

5 Introduction and Background of Current Models  Limitations of the von Neumann Model –There is a clear distinction (both physical and logical) between where data and programs are stored (memory) and where computation is executed (processor) –Execution is sequential

6 Parallel Processing  Realization that parallel processing was a necessity  Classical Models –Multiprocessing Supercomputers –Networked Distributed Systems –Both models are actually coarse-grain  Proposal –“Truly fine-grain connectionist massively parallel model”

7 Characteristics of Multiprocessing Supercomputers  Communication Medium –Shared, Distributed, Hybrid  Nature of Memory Access –Uniform vs. NUMA  Granularity  Instruction Streams (single or multiple)  Data Streams (single or multiple)

8 Characteristics of Distributed Systems  Loosely Coupled  Heterogeneous collection  Networked by middleware  Scalable  Flexible  Energy dissipation not an issue  Harder to program, control, detect errors and failures

9 The model we should really consider  Current supercomputers use thousands of processors  Current distributed systems (like the WWW) can involve hundreds of millions of computers  We shouldn't base parallel computing solely on computer science or engineering principles  Instead, look at the most sophisticated information-processing device known – the human brain

10 Human Brain  Tens of billions (10^10) of processors (neurons)  Highly interconnected, with about 10^15 interconnections  Each neuron is a very simple basic information-processing unit

11 Artificial Neural Networks  Best known and most studied class of connectionist models  1943 – Linear threshold neuron (McCulloch–Pitts), forerunner of the perceptron  Multi-Layer Perceptron  Radial Basis Function NN  Hopfield NN

12 Neural Network Diagrams http://www.nd.com/neurosolutions/products/ns/nnandnsvideo.html

13 Artificial Neural Networks  Each neuron (processor) computes a single pre-determined function of its inputs  A neuron is similar to a logic gate  Neurons are connected by synapses  Each synapse stores about 10 bits of info  Each synapse fires about 10 times/sec  Receptors are input devices  Effectors are output devices
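The slide's core idea – each neuron computes one predetermined function of its inputs, much like a logic gate – can be sketched as a threshold unit. The weights and threshold below are illustrative values chosen for the example, not figures from the talk:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2, this single neuron behaves
# exactly like an AND gate, illustrating the "neuron as logic gate" analogy.
print(neuron([1, 1], [1, 1], 2))  # 1
print(neuron([1, 0], [1, 1], 2))  # 0
```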

14 Artificial Neural Networks  Just as the brain grows, changes, and adapts, ANNs allow for –creation of new synapses –Dynamic modification of already existing synapses  ANNs – Memory –No separate place for memory –All info stored in nodes and edges –Dynamic changes in edge weights
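The "dynamic modification of already existing synapses" above is how such a network stores memory without separate RAM: information lives in the edge weights. A minimal sketch, assuming a simple Hebbian-style rule (the update rule and learning rate are illustrative assumptions, not part of the talk):

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen a synapse when its pre- and post-synaptic neurons fire together.
    The rule and rate=0.1 are illustrative, not from the source material."""
    return weight + rate * pre * post

w = 0.5
for _ in range(3):                 # three co-activations of the two neurons
    w = hebbian_update(w, pre=1, post=1)
print(round(w, 2))  # 0.8 -- the "memory" is the changed edge weight itself
```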

15 Cellular Automata – Another Connectionist Model  The state of a cell at a given time depends only on its own state and the states of its nearby neighbors at the previous time step. All cells on the lattice are updated synchronously.

16 Cellular Automata  Model inspired by physics  A grid where each node is a Finite State Machine –Edge-labeled directed graphs –Each vertex represents one of n states –Each edge represents a transition from one state to another on receipt of the alphabet symbol that labels the edge
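The edge-labeled directed graph described above maps directly onto a transition table: vertices are states, and each (state, symbol) pair has an outgoing edge. The states q0/q1 and symbols 'a'/'b' below are made up for illustration:

```python
# Each key (state, symbol) is an edge of the labeled directed graph;
# the value is the vertex the edge points to.
transitions = {
    ('q0', 'a'): 'q1',
    ('q0', 'b'): 'q0',
    ('q1', 'a'): 'q0',
    ('q1', 'b'): 'q1',
}

def run(start, symbols):
    """Follow one edge per input symbol and return the final state."""
    state = start
    for s in symbols:
        state = transitions[(state, s)]
    return state

print(run('q0', 'aab'))  # q0
```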

17 Cellular Automata  Only 2 possible states –0 is quiescent –1 is active  Only input is current states of its neighbors  All nodes execute in unison  A one-dimensional infinite CA is a “countably infinite set of nodes capable of universal computation”
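The two-state, synchronous CA described above can be sketched in a few lines. The talk's CA is infinite; the finite wrap-around lattice and the majority local rule used here are simplifying assumptions for illustration:

```python
def step(cells):
    """Synchronously update every cell from its own state and its two neighbors.
    Local rule (illustrative): become the majority of {left, self, right}."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

lattice = [0, 1, 1, 0, 1, 0, 0, 0]   # 1 = active, 0 = quiescent
for _ in range(3):
    lattice = step(lattice)          # all cells update in unison each step
print(lattice)  # [0, 1, 1, 1, 0, 0, 0, 0]
```

Note that the new lattice is built in full before replacing the old one, which is exactly what "all nodes execute in unison" requires; updating cells in place would let early updates leak into later ones within the same step.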

18 Connectionist Models  Appear to be a legitimate model of a universal massively parallel computer  ANNs are suitable for learning; Cellular Automata are not  CA find most of their applications in studying the dynamics of complex systems

19 Coarse-Grain vs. Fine-Grain Architectures

                 Coarse                        Fine
# of Proc        Thousands                     Billions
Type of proc     Powerful, expensive,          Simple, cheap,
                 dissipate energy              energy efficient
Capabilities     Complex                       Single, predefined function
Memory           Separated from processor      Virtually no distinction between
                                               memory and processor

20 Coarse-Grain vs. Fine-Grain Functions  At the very core level, connectionist models are different in how they: –Receive information –Process information –Store information

21 Limitations of ANNs  ANNs aren't necessary in all domains –An ANN can't compute more or faster than the human brain –The power of a human brain is an asymptotic upper bound on a connectionist ANN model –Not needed for: Computation tasks Searching large databases

22 Suitable domains for ANNs  Pattern Recognition –“No computer can get anywhere close to the speed and accuracy with which humans recognize and distinguish between, for example, different human faces or other similar context-sensitive, highly structured visual images.”  Problem domains where the computing agent has ongoing, dynamic interaction with its environment, or where computations may have fuzzy components

23 Potential Advantages of Connectionist Fine-Grain Models  Scalability  Avoid the slow-storage bottleneck, since there is no physically separated memory  Flexibility (not necessary to re-wire or re-program when components are added)  Graceful Degradation – neurons keep dying in our brains, and yet we continue to function reasonably well

24 Potential Advantages of Connectionist Fine-Grain Models  Robustness – If one component of a tightly coupled supercomputer fails, the whole system can fall apart  Energy consumption – dissipate much less heat

25 Summary and Conclusions  Connectionist models such as ANNs or CA are capable of massively parallel information processing  They are legitimate candidates for an alternative approach to the design of highly parallel computers of the future  These models are conceptually, architecturally and functionally very different from traditional models

26 Summary and Conclusions  Connectionist models are: –Very fine-grained –Basic operations are much simpler –Several orders of magnitude more processors –Memory concept is totally different

27 Summary and Conclusions  Connectionist models are: –Yet to be built –Idea is in its infancy –Currently still too far-fetched an endeavor –Promising future as the underlying abstract model of the general-purpose massively parallel computers of tomorrow

28 Questions

