Part 2: Parallel Models (I)


Part 2: Parallel Models (I)
# What is the meaning of "model"?
# The sequential model
# Classification of computers according to instructions and data

Introduction
A model is a physical, mathematical, or logical representation of a real-world entity. In computer science, models of computation are used:
# to describe real entities, namely computers.
# as a tool for thinking about problems and expressing algorithms.

In sequential (serial) computation
[Diagram: a sequence of instructions flows from the control unit to the processor; a sequence of data flows through the input unit, memory, processor, and output unit.]

In sequential (serial) computation, software has been written for serial execution:
# A problem is broken into a discrete series of instructions.
# Instructions are executed one after another.
# Only one instruction may execute at any moment in time.
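The three points above can be sketched in a minimal Python example (the function name and data are illustrative assumptions, not from the slides): a problem is broken into discrete instructions that execute strictly one after another.

```python
# Serial computation: a problem broken into a discrete series of
# instructions, executed one after another on a single processor.

def sum_of_squares(values):
    total = 0
    for v in values:          # only one instruction executes at a time
        square = v * v        # step 1: compute the square
        total += square       # step 2: accumulate the result
    return total

print(sum_of_squares([1, 2, 3, 4]))  # 30
```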

The universal model for sequential computation is the RAM (Random Access Machine), which consists of (1) a memory, (2) a processor, and (3) a memory access unit.
Memory
# A memory with M locations, where M is a (large) finite number.
# Each memory location can store a piece of data.
# Each memory location has a unique address.
Processor
# A single processor operates under the control of a sequential algorithm.
# The processor can load/store data from/to memory and can perform basic arithmetic and logical operations.

Memory Access Unit
# Creates a path from the processor to the memory, establishing a direct connection between them.
Operations
Each step of a RAM algorithm consists of three phases:
# Read phase - the processor reads data from memory into a register.
# Compute phase - the processor performs a basic operation on register contents.
# Write phase - the processor writes the contents of a register back into memory.
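One RAM step can be simulated directly; this is a minimal sketch (memory size and contents are illustrative assumptions) showing the read, compute, and write phases in order.

```python
# Minimal sketch of one RAM algorithm step (read / compute / write),
# assuming a memory of M locations and a single processor register.

M = 8
memory = [0] * M
memory[0] = 5
memory[1] = 7

# Read phase: load data from memory into a register
register = memory[0]

# Compute phase: perform a basic arithmetic operation on the register
register = register + memory[1]

# Write phase: store the register contents back into memory
memory[2] = register

print(memory[2])  # 12
```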

Different models of parallel computation
A parallel computer is a computer consisting of:
# two or more processors that can cooperate and communicate to solve large problems fast;
# one or more memory modules;
# an interconnection network that connects the processors with each other and/or with the memory modules.
There are different models of parallel computation.

Michael Flynn's Classification (1966) categorizes computers by the number of (1) instruction streams and (2) data streams.
SISD: single instruction & single data
[Diagram: control unit → processor → memory]
# A serial (non-parallel) computer.
# Single instruction: only one instruction stream is acted on by the CPU during any one clock cycle.
# Single data: only one data stream is used as input during any one clock cycle.
# Examples: most PCs, single-CPU workstations, and mainframes.

Michael Flynn's Classification
SIMD: single instruction & multiple data
[Diagram: one control unit drives processors P1, P2, …, Pn, connected through an interconnection network to a shared memory.]
# A type of parallel computer.
# Single instruction: all processing units execute the same instruction at any given clock cycle.
# Multiple data: each processing unit can operate on a different data element.
# Examples: processor arrays (Connection Machine CM-2, MasPar MP-1, MP-2) and vector pipelines (IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820).
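The SIMD idea, one instruction applied to many data elements in the same step, can be sketched in plain Python as an analogy only (real SIMD hardware executes the lock-step operation in parallel; this loop merely models it):

```python
# SIMD sketch: every "processing unit" applies the same instruction
# (here: add 1) to a different data element in one conceptual step.
# The lock-step parallelism is only simulated, not real.

data = [10, 20, 30, 40]           # one element per processing unit
result = [x + 1 for x in data]    # same instruction, multiple data
print(result)  # [11, 21, 31, 41]
```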

Dr. Hazem M. Bahig

Michael Flynn's Classification
MISD: multiple instruction & single data
[Diagram: control units control1, control2, …, controln drive processors P1, P2, …, Pn, all sharing a single memory.]
# A single data stream is fed into multiple processing units.
# Each processing unit operates on the data independently via an independent instruction stream.
# Example: the Carnegie-Mellon C.mmp computer (1971).
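MISD can be sketched as several independent instruction streams all consuming the same single data item (the functions chosen here are arbitrary illustrations, not from the slides):

```python
# MISD sketch: multiple independent instruction streams applied to
# the same single data stream. Each "processor" runs a different
# operation on the same input value.

instructions = [
    lambda x: x + 1,    # processor 1's instruction stream
    lambda x: x * 2,    # processor 2's instruction stream
    lambda x: x ** 2,   # processor 3's instruction stream
]
datum = 3               # the single data stream
outputs = [f(datum) for f in instructions]
print(outputs)  # [4, 6, 9]
```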

Michael Flynn's Classification
MIMD: multiple instruction & multiple data
[Diagram: control units control1, control2, …, controln drive processors P1, P2, …, Pn, connected through an interconnection network (IN) to shared memory (SM).]
# Currently the most common type of parallel computer.
# Multiple instruction: every processor may be executing a different instruction stream.
# Multiple data: every processor may be working with a different data stream.
# Execution can be synchronous or asynchronous, deterministic or non-deterministic.
# Examples: most current supercomputers, networked parallel computer "grids", and multi-processor SMP computers, including some types of PCs.
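MIMD execution, different instruction streams on different data streams running concurrently, can be sketched with Python threads (the worker functions and data are illustrative assumptions; threads only approximate true multiprocessor parallelism):

```python
# MIMD sketch: each "processor" runs its own instruction stream on
# its own data stream, asynchronously. Simulated here with threads.
import threading

results = {}

def worker_sum(name, data):      # one instruction stream
    results[name] = sum(data)

def worker_max(name, data):      # a different instruction stream
    results[name] = max(data)

t1 = threading.Thread(target=worker_sum, args=("sum", [1, 2, 3]))
t2 = threading.Thread(target=worker_max, args=("max", [7, 4, 9]))
t1.start(); t2.start()           # both streams run concurrently
t1.join(); t2.join()
print(results["sum"], results["max"])  # 6 9
```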

Homework is posted ….