Multiprocessor systems

Objective
- the organization and implementation of multiprocessors
- shared memory in multiprocessors
- static and dynamic connection networks

Structure of multiprocessor systems
- A multiprocessor consists of N processors plus interconnections for passing data and control information among them.
- Up to N different instruction streams can be active concurrently.
- The challenge is to put the N processors to work on different parts of a single computation so that they make progress concurrently (see the sketch below).
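To make the idea concrete, here is a minimal sketch (not from the original slides) of one computation split across N worker processes, the way an MIMD machine spreads work over N processors; the choice N = 4 and the summation task are illustrative assumptions.

    # Minimal sketch: divide one computation across N workers,
    # as an MIMD multiprocessor divides it across N processors.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # each "processor" works on its own part of the data
        return sum(chunk)

    if __name__ == "__main__":
        N = 4                                    # assumed number of processors
        data = list(range(1_000_000))
        chunks = [data[i::N] for i in range(N)]  # split the work N ways
        with Pool(N) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total)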

Structure of MIMD

The fastest supercomputer
- Located at the Alberta National Laboratory in the United States.
- Equivalent to about 9000 Pentium processors.
- Can handle 21,000 x 10^8 (about 2.1 x 10^12) instructions per second.

Terminology
- PE-to-PE: each processor (PE) has its own memory and uses a common bus for intercommunication.

CRAY T3D
- Number of processors: 128 Processing Elements (PEs)
- Performance per PE: 150 MFlops
- Main memory: 8 GB (64 MB per PE)
- Hard disk: 80 GB of temporary disk space
- Swap area: 35 GB of permanent user disk space
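As a quick sanity check (not part of the original slides), the per-PE figures imply the aggregate ones:

    # Back-of-the-envelope check of the T3D figures above.
    pes = 128
    mflops_per_pe = 150
    mb_per_pe = 64

    peak_gflops = pes * mflops_per_pe / 1000   # 19.2 GFlops aggregate peak
    total_gb = pes * mb_per_pe / 1024          # 8.0 GB total memory
    print(peak_gflops, total_gb)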

PE-to-PE SIMD machine configuration
- Same structure as an array computer.
- Each local memory is attached to its own processor, with an interconnection network for exchanging data between PEs.
- The number of arithmetic processors equals the number of memory modules.
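The configuration can be pictured with a small simulation sketch (illustrative only): each PE keeps its data in local memory, and a single SIMD step moves data to the neighbouring PE over the interconnection network. The eight-PE ring and the shift operation are assumptions for the example.

    # Sketch of the PE-to-PE idea: every PE keeps data in its own local
    # memory; one SIMD "shift" step sends it to the right-hand neighbour.
    N = 8
    local_mem = [pe * 10 for pe in range(N)]     # one value per PE

    def simd_shift_right(mem):
        # all PEs execute the same data-movement step in lockstep
        return [mem[(pe - 1) % N] for pe in range(N)]

    local_mem = simd_shift_right(local_mem)
    print(local_mem)   # each PE now holds its left neighbour's value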

PE to PE SIMD

Processor-to-memory configuration with N processors and N memories
- Same structure as an array computer, but without a control unit.
- The memories are separated from the processors; processors and memory modules are linked by an interconnection network.
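A minimal sketch of this organisation, assuming N = M = 4 memory modules and low-order interleaving (module = address mod M); the helper names are made up for illustration.

    # Sketch of the processor-to-memory organisation: every access travels
    # through the network to whichever module owns the address.
    N = M = 4
    memory_modules = [dict() for _ in range(M)]   # M separate memory banks

    def write(address, value):
        # the interconnection network routes the request to one module
        memory_modules[address % M][address] = value

    def read(address):
        return memory_modules[address % M].get(address)

    write(13, "hello")
    print(read(13), 13 % M)   # the value lands in module 1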

PE to Memory

Differences between multiprocessor classes
- MSIMD (multiple-SIMD): can be reconfigured into a number of smaller, independent SIMD machines.
- Partitionable SIMD/MIMD: can be partitioned into smaller independent machines of different sizes, each working in SIMD or MIMD mode.

Simple Uni-processor

Memory access
- Access to memory may be blocked temporarily because of conflicts in the network or at the accessed memory module.

Methods to resolve memory-access conflicts
- Buffering in network elements: queue a request in a buffer when a conflict occurs.
- Deletion: the processors delete all but one of the conflicting requests, and the deleted requests must be reissued.
- Either of the above schemes reduces CPU performance (a small comparison sketch follows).
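The following sketch contrasts the two schemes for one cycle of requests; the request list and the module count are illustrative assumptions, not figures from the slides.

    # Each memory module serves one request per cycle; the remaining
    # conflicting requests either wait in a buffer or are deleted.
    from collections import defaultdict

    requests = [(0, 2), (1, 2), (2, 2), (3, 0)]   # (processor, module) pairs

    by_module = defaultdict(list)
    for proc, mod in requests:
        by_module[mod].append(proc)

    served   = {mod: procs[0]  for mod, procs in by_module.items()}
    buffered = {mod: procs[1:] for mod, procs in by_module.items() if len(procs) > 1}
    deleted  = [p for procs in by_module.values() for p in procs[1:]]

    print(served)    # one winner per module
    print(buffered)  # buffering: losers queue and are retried next cycle
    print(deleted)   # deletion: losers are dropped and must reissue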

Multiprocessor interconnection networks
- A basic parallel processing system consists of multiple processors linked by an interconnection network that provides interprocessor communication.

Examples of static network topologies
- The connection paths between processors are fixed (predefined).
- Classified by dimension: one-dimensional (linear array), two-dimensional (ring, star, tree), and so on (see the sketch below).
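A small illustrative sketch of how two static topologies are fixed by wiring alone, expressed as adjacency lists; the node counts are arbitrary.

    # Two fixed (static) topologies described by their adjacency lists.
    def linear_array(n):
        return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

    def ring(n):
        return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

    print(linear_array(4))   # {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(ring(4))           # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}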

Static network (fixed)

Two-way switching box
- A dynamic network allows processors to re-route paths through switching elements (that is why it is called dynamic).
- A single-stage network is also called a recirculating network, because data may pass through the stage several times before reaching its destination.
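Here is a sketch of the two legal settings of a 2x2 switching box, together with the perfect-shuffle wiring typically placed in front of a single-stage (recirculating) network; the helper names are illustrative assumptions.

    # A 2x2 switching box has two settings; in a recirculating network the
    # data may pass through the same stage repeatedly until it arrives.
    def switch_2x2(a, b, setting):
        if setting == "straight":
            return a, b
        if setting == "exchange":
            return b, a
        raise ValueError("unknown setting")

    def perfect_shuffle(items):
        # the fixed wiring in front of the switch stage (length must be even)
        half = len(items) // 2
        shuffled = []
        for i in range(half):
            shuffled += [items[i], items[half + i]]
        return shuffled

    print(switch_2x2("x", "y", "exchange"))            # ('y', 'x')
    print(perfect_shuffle([0, 1, 2, 3, 4, 5, 6, 7]))   # [0, 4, 1, 5, 2, 6, 3, 7]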

Switching box

4x4 switching network (Blocking)

Non-blocking network
- A non-blocking network can connect any idle input to any idle output, regardless of the connections already established.
- The crossbar is an example.
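A minimal crossbar sketch (illustrative, not the slides' design): a new connection is refused only when the requested output is already in use, so idle inputs and outputs can always be paired.

    # Crossbar sketch: existing connections never block a new one unless
    # the requested output port itself is busy.
    class Crossbar:
        def __init__(self, n):
            self.out_busy = [False] * n
            self.routes = {}

        def connect(self, inp, out):
            if self.out_busy[out]:
                return False            # output already taken
            self.out_busy[out] = True
            self.routes[inp] = out
            return True

    xbar = Crossbar(4)
    print(xbar.connect(0, 2))   # True
    print(xbar.connect(1, 3))   # True  -- does not disturb the first connection
    print(xbar.connect(2, 2))   # False -- output 2 is busy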

Crossbar

Summary
- Introduced multiprocessors with N processors and M (or N) memory modules, interconnected by a network that can be static or dynamic.
- A dynamic network can be classified as blocking or non-blocking.
- A blocking network can be implemented with switching boxes, while a non-blocking network can be implemented with a crossbar.