Distributed Programming
CA107 Topics in Computing Series
Martin Crane and Karl Podesta

The Basics...
What is a Distributed System (DS)? How does it differ from a Parallel Computer (MPP)?
  – the differences have become fuzzy; both are now called Supercomputers or High Performance Computers (HPC)
Supercomputers and Supermodels:
  – both expensive
  – both hard to deal with / prone to tantrums
  – both look glamorous, but...
  – both spend lots of time doing tedious tasks for others:
      mostly matrix-vector products for Supercomputers
      being live mannequins for Supermodels

Why High Performance Computing?
Solve larger and larger scientific problems:
  – advanced product design
  – economic analysis
  – weather prediction / climate modelling
Store and process huge amounts of data:
  – data mining and knowledge discovery
  – image processing, multimedia information
  – internet information storage and search (e.g. Google)

Different Supercomputers (MPPs) in Your Neighbourhood
Single Instruction, Multiple Data (SIMD):
  – as seen on the PlayStation 2
  – very useful for processing large arrays, e.g. a(i) = b(i) + c(i)*d(i), as found in games (see the sketch below)
Multiple Instruction, Multiple Data (MIMD):
  – as seen in Deep Blue
But these are dinosaurs; we want something more flexible.
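The SIMD idea in code: one instruction pattern (a multiply-add) is applied across many array elements at once. Below is a minimal C sketch of the slide's formula; the function name and types are illustrative, not from the slides.

  #include <stddef.h>

  /* Single instruction, multiple data: a SIMD machine (or a
     vectorising compiler) applies this one multiply-add to whole
     chunks of the arrays in parallel. */
  void update_arrays(size_t n, float a[], const float b[],
                     const float c[], const float d[])
  {
      for (size_t i = 0; i < n; i++)
          a[i] = b[i] + c[i] * d[i];
  }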

Problems with Traditional Supercomputers (i.e. MPPs)
Expensive:
  – very high starting cost ($10,000s per node)
  – expensive software
  – high maintenance costs
  – costly to upgrade
Vendor dependent:
  – lots of companies have come and gone (Datacube, Connection Machines, etc.)
So, real/poor people cannot do HPC!

PC Cluster: a poor man's supercomputer!
  – built from high-end PCs and a high-speed communications network
  – supports standard parallel programming based on the message-passing model (the MPI library; see the sketch below)
  – cheap (a 16-node cluster can cost less than $10k)
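To make the message-passing model concrete, here is a minimal MPI sketch in C (illustrative only; compile with mpicc and launch with mpirun). One copy of the program runs on each node; node 0 sends an integer to node 1 over the cluster network.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);                /* join the parallel job  */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which node am I?       */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many nodes in all? */

      if (rank == 0 && size > 1) {
          int msg = 42;
          MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          int msg;
          MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("node %d of %d got %d from node 0\n", rank, size, msg);
      }
      MPI_Finalize();
      return 0;
  }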

[Figure: cluster diagram]

DCU CA Cluster Resources
"John the Baptist" Cluster:
  – built by Redbrick using old CA machines
  – 24 individual 450 MHz machines
  – connected by a Fast Ethernet switch
  – a harbinger of better things...
"The one that is to come"...
  – 24 SMP machines
  – each with 2 GHz processors
  – plus loadsa memory!
  – arrives about Xmas time, appropriately enough

What are the issues in HPC?
Communication vs. Computation (see the toy model after this list):
  – size/nature of the problem
  – interconnect speed vs. processor speed
Fault tolerance:
  – quality of the hardware
  – nature of the problem
Load balancing:
  – nature of the problem / quality of the programmer
  – even an easy problem can be made difficult and slow by a bad implementation
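As a toy illustration of the communication/computation trade-off (the model and its numbers are my own, not from the slides): splitting the compute time over n nodes helps, but a fixed communication overhead eventually dominates and the speed-up flattens.

  #include <stdio.h>

  int main(void)
  {
      const double t_comp = 100.0;  /* compute time on 1 node (seconds) */
      const double t_comm = 2.0;    /* communication overhead (seconds) */
      for (int n = 1; n <= 32; n *= 2) {
          /* each node does 1/n of the work, plus the cost of talking */
          double t_n = t_comp / n + (n > 1 ? t_comm : 0.0);
          printf("nodes=%2d  time=%6.2f s  speed-up=%5.2f\n",
                 n, t_n, t_comp / t_n);
      }
      return 0;
  }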

Influence of the Nature of the Problem on Speed
What is speed?
  – speed-up is a better measure: (time on 1 node) / (time on n nodes)
Speed-up and problems:
  – very good: embarrassingly parallel problems
  – fair to middling: regular and synchronous problems
      a bit of cross-talk between nodes
  – bad: irregular/asynchronous problems
      lots of cross-talk between nodes
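A worked example with illustrative numbers: a job taking 120 s on 1 node and 20 s on 8 nodes has a speed-up of 120/20 = 6, i.e. 75% of the ideal 8. An embarrassingly parallel job would get close to 8; an irregular, chatty one might manage only 2 or 3.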