Parallel Processing
Group Members: PJ Kulick, Jon Robb, Brian Tobin

Topics
- Theory of parallel computers
- Supercomputers
- Distributed computing

What Is Parallelism?
Definition: Parallelism is the process of performing tasks concurrently.
Real-life examples:
- A pack of wolves hunting its prey
- An orchestra
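The wolf-pack analogy can be sketched in a few lines of Python: several threads each handle their own piece of a task at the same time. (The `simulate_hunt` helper below is made up purely for illustration.)

```python
import threading

def simulate_hunt(results, wolf_id, prey_position):
    # Each "wolf" (thread) works on its own part of the task concurrently;
    # here the "work" is just computing a distance to the prey.
    results[wolf_id] = abs(prey_position - wolf_id)

results = [None] * 4
threads = [threading.Thread(target=simulate_hunt, args=(results, i, 10))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [10, 9, 8, 7] -- each entry computed by a separate thread
```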

Flynn’s Hardware Taxonomy
Processor organizations:
- Single instruction, single data (SISD) stream: uniprocessor
- Single instruction, multiple data (SIMD) stream: vector processor, array processor
- Multiple instruction, single data (MISD) stream
- Multiple instruction, multiple data (MIMD) stream
  - Shared memory: symmetric multiprocessor (SMP), nonuniform memory access (NUMA)
  - Distributed memory: clusters
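The four categories follow mechanically from counting instruction streams and data streams, which a minimal Python sketch makes explicit (the helper name `flynn_class` is made up for illustration):

```python
def flynn_class(instruction_streams, data_streams):
    """Classify an architecture under Flynn's taxonomy (1966)."""
    i = "S" if instruction_streams == 1 else "M"
    d = "S" if data_streams == 1 else "M"
    return f"{i}I{d}D"

# A conventional uniprocessor: one instruction stream, one data stream.
print(flynn_class(1, 1))   # SISD
# An array processor: one instruction stream applied to many data elements.
print(flynn_class(1, 64))  # SIMD
# A shared-memory multiprocessor or a cluster.
print(flynn_class(8, 8))   # MIMD
```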

Taxonomy of Parallel Computing Paradigms
- Parallel computer
  - Synchronous: vector/array, SIMD, systolic
  - Asynchronous: MIMD

Interconnection Networks (IN)
IN topology:
- Distributed memory
  - Static: 1-dimensional, 2-dimensional, hypercube
  - Dynamic: single-stage, multi-stage, cross-bar
- Shared memory: vector, MIMD

Distributed Memory – Static Networks
- Linear array (1-d)
- 2-dimensional networks: ring, star, tree, mesh
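As a sketch of what two of these static topologies look like in code (the helper names are hypothetical), here are adjacency lists for a ring and a 2-D mesh; the node degrees show the connectivity/cost trade-off between topologies:

```python
def ring(n):
    """Adjacency list of an n-node ring: each node links to two neighbors."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh(rows, cols):
    """Adjacency list of a rows x cols 2-D mesh (no wraparound links)."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            node = r * cols + c
            nbrs = []
            if r > 0:
                nbrs.append(node - cols)  # up
            if r < rows - 1:
                nbrs.append(node + cols)  # down
            if c > 0:
                nbrs.append(node - 1)     # left
            if c < cols - 1:
                nbrs.append(node + 1)     # right
            adj[node] = nbrs
    return adj

print(ring(4))     # every node has degree 2
print(mesh(3, 3))  # corners have degree 2, the center has degree 4
```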

Distributed Memory – Static Networks (cont’d) Fully connected network

Hypercube
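A d-dimensional hypercube connects each of its 2^d nodes to the d nodes whose binary labels differ in exactly one bit, so any message needs at most d hops. A small Python sketch (the function names are illustrative):

```python
def hypercube_neighbors(node, d):
    """Neighbors of a node in a d-dimensional hypercube: flip each bit."""
    return [node ^ (1 << k) for k in range(d)]

def hypercube_route(src, dst, d):
    """Dimension-order routing: correct the differing bits one at a time."""
    path = [src]
    cur = src
    for k in range(d):
        if (cur ^ dst) & (1 << k):  # bit k differs -> hop along dimension k
            cur ^= 1 << k
            path.append(cur)
    return path

print(hypercube_neighbors(0, 3))  # [1, 2, 4]
print(hypercube_route(0, 7, 3))   # [0, 1, 3, 7] -- at most d = 3 hops
```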

Distributed Memory – Dynamic Configurations
- Single-stage
- Multi-stage
- Cross-bar
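As one concrete example of a multi-stage dynamic network, an omega network builds each stage from a perfect shuffle followed by 2x2 exchange switches. The sketch below (the function name is made up, and it assumes destination-tag routing, where each switch sets its output from one bit of the destination address) traces a single packet through the stages:

```python
def omega_route(src, dst, n_bits):
    """Trace a packet through an omega (multi-stage shuffle-exchange) network.

    Each of the n_bits stages performs a perfect shuffle (rotate the address
    left by one bit), then a 2x2 exchange switch sets the low bit from the
    next bit of the destination address (destination-tag routing).
    """
    size = 1 << n_bits
    addr = src
    trace = [addr]
    for stage in range(n_bits):
        addr = ((addr << 1) | (addr >> (n_bits - 1))) & (size - 1)  # shuffle
        dest_bit = (dst >> (n_bits - 1 - stage)) & 1                # tag bit
        addr = (addr & ~1) | dest_bit                               # exchange
        trace.append(addr)
    return trace

route = omega_route(3, 6, 3)
print(route)          # the final position equals the destination
assert route[-1] == 6
```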

Deep Blue
- First computer to defeat a world chess champion
- 32-node IBM Power Parallel SP2
- 6-move look-ahead capability
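An n-move look-ahead is the territory of depth-limited game-tree search. The sketch below is a generic depth-limited minimax over a toy game tree; it illustrates the idea only and is not Deep Blue's actual (massively parallel, chess-specific) search:

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Depth-limited minimax: the idea behind an n-move look-ahead.

    `children(state)` and `evaluate(state)` are caller-supplied; the toy
    stand-ins below are hypothetical, not a chess evaluation.
    """
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)  # look-ahead horizon reached: estimate
    scores = [minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy game: states are integers, each state under 8 offers two moves.
children = lambda s: [2 * s, 2 * s + 1] if s < 8 else []
evaluate = lambda s: s  # higher state number = better for the maximizer

print(minimax(1, 3, True, children, evaluate))  # 13
```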

SP2 Architecture
“The IBM SP2 is a general-purpose scalable parallel system based on a distributed memory message passing architecture.”
- 2 to 128 nodes
- POWER2 technology
- RISC System/6000 processor
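In a distributed-memory message-passing design, each node computes only on its own local memory and exchanges results as explicit messages. A toy Python sketch of that pattern (real SP2 programs would use a message-passing library such as MPI; here plain threads and a queue stand in for the nodes and the switch):

```python
import queue
import threading

def worker(node_id, data, outbox):
    # Compute on node-local data only...
    partial = sum(data)
    # ...then communicate the result as an explicit message.
    outbox.put((node_id, partial))

outbox = queue.Queue()
chunks = [[1, 2], [3, 4], [5, 6], [7, 8]]  # data partitioned across "nodes"
threads = [threading.Thread(target=worker, args=(i, chunk, outbox))
           for i, chunk in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Gather the partial results from the message queue and reduce them.
total = sum(partial for _, partial in (outbox.get() for _ in threads))
print(total)  # 36
```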

SP2 Architecture

Supercomputers – “Real World”
- RISC System technology
  - Running a high-volume scalable WWW server
  - Forecasting the weather
  - Designing cars
- Compaq AlphaServer technology
  - Human Genome Project

Sun Microsystems MAJC Chip

MAJC
- Implements parallel processing on a single chip
- Can operate standalone or with up to several hundred other chips in parallel
- First version contains two separate processors
- Over time, many more processors will be included on one chip

Features
- Four function units per processor
- Each function unit contains local registers
- Global registers can be accessed by all function units
- Operates as SIMD
- Multiple function units allow multiple instructions to execute simultaneously
- Each function unit can act as a RISC/DSP processor in its own right

Architecture

Instruction Word

SGI Onyx 3000

Onyx 3000 Series
- Developed for visualization and supercomputing
- Modular design allows for ease of scaling (“snap-together” approach)
- Growth in multiple dimensions
- NUMAflex architecture
- Designed so that different generations work together

Road Map

Available configurations

Applications of Onyx 3000
- High-speed processing
- Real-time graphics-to-video
- High-definition editing
- Integral support for virtual reality, real-time six-degrees-of-freedom (6DOF) interaction, and sensory immersion

Real-World Examples
- The Cave (Iowa State University)
  - Recreation of the Forbidden City
  - John Deere factory
  - Molecular structuring

References
- Stallings, William. Computer Organization and Architecture, 5th Edition. Upper Saddle River, New Jersey: Prentice Hall, 2000.
- Lewis, Ted G. Introduction to Parallel Computing. Englewood Cliffs, New Jersey: Prentice Hall, 1992.
- Kumar, Vipin. Introduction to Parallel Computing. Redwood City, California: The Benjamin/Cummings Publishing Company, 1994.
- Moldovan, Dan I. Parallel Processing: From Applications to Systems. San Mateo, California: Morgan Kaufmann, 1993.