Parallel Processing Systems, Fall 2005 (2nd semester), Instructor: 채수환 (Chae Soo-hwan). 2019-02-25.



Textbook & Evaluation Policy Text: Pipelined and Parallel Computer Architectures by Sajjan G. Shiva, HarperCollins, 1996. Reference: Introduction to Parallel Computing by Ananth Grama et al., Pearson, 2003. Evaluation policy: Mid-term exam 40%, Final exam 40%, Report 10%, Attendance 10%.

How to Prepare Your Future Do not hurry for success in your life. Make a plan for your future. Use your time well. Become an expert in your field. Set aside some time for meditation each day.

Chapter 0 Introduction Performance and cost are the two major parameters for evaluating architectures. The waves of computing: The 1st wave, in the 1960s, was dominated by mainframes. The 2nd wave, in the 1970s, belonged to minicomputer systems. The 3rd wave, in the 1980s, was that of microcomputers. These three waves are considered evolutionary approaches. Parallel processing is considered the 4th wave of computing, and a revolutionary approach.

I.1 Computing Paradigms Three paradigms of computing structures (Figure I.1): serial, pipelined, parallel.

Three Paradigms

Example I.1 Let T be the time to complete a subtask and N the number of tasks, where each task consists of 4 subtasks. Time under the serial paradigm: 4NT. Speedup achieved by the pipelined paradigm: 4NT / {4T + (N-1)T}. Time under the parallel paradigm: 4NT / P, where P is the number of processors, giving a speedup of P.
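The formulas in Example I.1 can be checked numerically. A minimal sketch, assuming 4 subtasks per task and that the N tasks divide evenly across the P processors (the function names are mine, not the textbook's):

```python
def serial_time(N, T, k=4):
    # N tasks, each with k subtasks, executed one after another
    return k * N * T

def pipelined_time(N, T, k=4):
    # fill the k-stage pipeline once (k*T), then one task finishes every T
    return k * T + (N - 1) * T

def parallel_time(N, T, P, k=4):
    # N tasks divided evenly across P processors
    return k * N * T / P

N, T, P = 100, 1.0, 4
print(serial_time(N, T) / pipelined_time(N, T))    # pipelined speedup, approaches k = 4 for large N
print(serial_time(N, T) / parallel_time(N, T, P))  # parallel speedup = P
```

Note that the pipelined speedup 4NT / {4T + (N-1)T} tends to 4 (the number of stages) as N grows, while the parallel speedup is simply P under this idealized, overhead-free model.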

I.1 Computing Paradigms (continued) Trivial parallelism: a job can be partitioned into independent tasks. In practice, clean partitioning is not possible; parallel processing incurs considerable communication and task-coordination overhead.

I.2 The Need for Pipelined and Parallel Processing Figure I.2 summarizes the fields requiring high-performance computation.

Grand Challenges

I.2 The Need for Pipelined and Parallel Processing (continued) The overall processing speed depends on two factors: the computing speed of the processors and the communication speed of their interconnection structure. The communication overhead is a function of the bandwidth and latency of the interconnection structure. Because of the low cost of microprocessors, it is possible to build low-cost, high-performance systems using a large number of them.
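The interplay between compute speed and communication overhead can be made concrete with a toy model. Everything below (function names, link parameters, the 1-second workload) is illustrative and not from the text:

```python
def comm_overhead(message_bytes, latency_s, bandwidth_Bps):
    # communication cost = fixed latency + transfer time over the link
    return latency_s + message_bytes / bandwidth_Bps

def parallel_run_time(compute_s, P, message_bytes, latency_s, bandwidth_Bps):
    # computation splits across P processors; communication overhead does not shrink
    return compute_s / P + comm_overhead(message_bytes, latency_s, bandwidth_Bps)

# 1 second of serial work, exchanging 1 MB per run over a 1 GB/s link with 1 us latency
for P in (1, 4, 16, 64):
    t = parallel_run_time(1.0, P, 1_000_000, 1e-6, 1e9)
    print(P, round(1.0 / t, 2))  # speedup flattens as communication starts to dominate
```

The model shows why both bandwidth and latency matter: the latency term is paid regardless of message size, while the transfer term shrinks only with a faster link, so adding processors alone yields diminishing returns.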

I.2 The Need for Pipelined and Parallel Processing (continued) MPP (Massively Parallel Processor) architectures consist of a large number of processors interconnected by a high-bandwidth, low-latency network. The major issues in pipelined and parallel computing architectures: hardware: scale-up; algorithms: efficient parallel algorithms; languages: programming languages for expressing parallel algorithms; compilers and other programming tools: compilers to translate parallel programming languages into optimized object code, simulators, debuggers; operating systems: an OS that controls a parallel processing architecture efficiently; performance evaluation: methods to evaluate speedup based on various metrics for parallel processing systems.

I.3 Overview Part 1: Preliminaries (Chapters 1 and 2): uniprocessor architecture overview, styles of architecture. Part 2: Pipelined Architectures (Chapters 3 and 4): pipelining, vector processors. Part 3: Parallel Architectures (Chapters 5 and 6): array processors, multiprocessor systems. Part 4: Experimental Architectures (Chapters 7 and 8): dataflow architectures, distributed processing systems, the 5th- and 6th-generation computers.