Lecture 8: Design of Parallel Programs Part III. Lecturer: Simon Winberg. Licence: Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

- Step 4: Communications (cont.)
- Cloud computing
- Step 5: Identify data dependencies

Steps in designing parallel programs (the hardware may come first or later). The main steps:
1. Understand the problem
2. Partitioning (separation into main tasks)
3. Decomposition & granularity
4. Communications
5. Identify data dependencies
6. Synchronization
7. Load balancing
8. Performance analysis and tuning

Step 4: Communications (CONT) EEE4084F

Communication latency: the time it takes to send a minimal-length (e.g., zero-byte) message from one task to another; usually expressed in microseconds.
Bandwidth: the amount of data that can be sent per unit of time; usually expressed in megabytes/sec or gigabytes/sec.

Many small messages can result in latency dominating the communication overhead. If many small messages are needed, it can be more efficient to package them into larger ones, increasing the effective bandwidth of communications (see the batching sketch below).
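To make the batching idea concrete, here is a minimal sketch, assuming an MPI-based message-passing setup (the program and its names are illustrative, not from the lecture): instead of issuing many tiny sends, rank 0 packs its results into one buffer and sends them in a single message, so the per-message latency is paid only once.

```c
/* Illustrative sketch (assumed MPI setup, not from the lecture slides).
 * Run with at least two tasks, e.g.:  mpirun -np 2 ./batch_demo
 * Rank 0 packs many small results into one buffer and sends them in a
 * single message, so the per-message latency is paid only once. */
#include <mpi.h>

#define N_RESULTS 1000

int main(int argc, char *argv[])
{
    int rank;
    double results[N_RESULTS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < N_RESULTS; i++)
            results[i] = i * 0.5;   /* stand-in for real computed values */

        /* One large send instead of N_RESULTS tiny ones: the latency term
         * is incurred once, while the total transmission time is similar. */
        MPI_Send(results, N_RESULTS, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(results, N_RESULTS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

With separate sends the latency term would be paid N_RESULTS times; batched, it is paid once while the transmission-time term stays roughly the same.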

Effective bandwidth

Total latency = Sending overhead + Transmission time + Time of flight + Receive overhead
Effective bandwidth = Message size / Total latency

[Timeline diagram on the original slide: sending overhead, then transmission time plus time of flight (together the transport latency), then receive overhead, making up the total latency.]

Time of flight is also referred to as 'propagation delay'. It may depend on how many channels are used, e.g. a two-channel path gives an effectively lower propagation delay. With switching circuitry, the propagation delay can increase significantly.

Effective bandwidth calculation

Example: distance 100 m, raw bandwidth 10 Mbit/s, message 10,000 bytes (taken as 100,000 bits in the calculation), sending overhead 200 µs, receiving overhead 300 µs.

Solution:
Transmission time = 100,000 bits / 10 Mbit/s = 100,000 bits / (10 bits/µs) = 10,000 µs
Time of flight = 100 m / (3 x 10^8 m/s) = 0.33 µs
Total latency = Sending overhead + Transmission time + Time of flight + Receive overhead
= 200 µs + 10,000 µs + 0.33 µs + 300 µs = 10,500.33 µs
Effective bandwidth = Message size / Total latency = 100,000 bits / 10,500 µs = 9.52 Mbit/s
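For reference, a small C program (an illustrative sketch, not course code) that reproduces the calculation above:

```c
/* Sketch of the effective-bandwidth worked example; all figures are the
 * example values from the slide. */
#include <stdio.h>

int main(void)
{
    double send_overhead_us  = 200.0;   /* sending overhead            */
    double recv_overhead_us  = 300.0;   /* receiving overhead          */
    double raw_bandwidth_bps = 10e6;    /* 10 Mbit/s                   */
    double message_bits      = 100e3;   /* 100,000 bits                */
    double distance_m        = 100.0;   /* 100 m                       */
    double signal_speed_mps  = 3e8;     /* propagation speed of signal */

    double transmission_us   = message_bits / raw_bandwidth_bps * 1e6;
    double time_of_flight_us = distance_m / signal_speed_mps * 1e6;
    double total_latency_us  = send_overhead_us + transmission_us +
                               time_of_flight_us + recv_overhead_us;

    /* bits per microsecond is numerically equal to Mbit/s */
    double eff_bw_mbps = message_bits / total_latency_us;

    printf("Total latency       = %.2f us\n", total_latency_us);
    printf("Effective bandwidth = %.2f Mbit/s\n", eff_bw_mbps);
    return 0;
}
```

Compiled and run, it should print a total latency of about 10,500.33 µs and an effective bandwidth of about 9.52 Mbit/s, matching the worked example.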

Communication is usually both explicit and highly visible when using the message-passing programming model. Communication may be poorly visible when using the data-parallel programming model. For a data-parallel design on a distributed system, communication may be entirely invisible, in that the programmer may have no understanding of (and no easily obtainable means to accurately determine) what inter-task communication is happening.

Synchronous communications require some kind of handshaking between tasks that share data or results. They may be explicitly structured in the code, under the control of the programmer, or they may happen at a lower level that is not under the programmer's control. Synchronous communications are also referred to as blocking communications, because other work must wait until the communication has finished (a blocking send/receive sketch follows below).
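A minimal sketch of blocking (synchronous) message passing, assuming MPI (illustrative only, not taken from the lecture): MPI_Recv blocks until the message has arrived, and MPI_Send may block until the data has been safely buffered or delivered.

```c
/* Illustrative sketch of blocking message passing with MPI.
 * Run with at least two tasks, e.g.:  mpirun -np 2 ./blocking_demo */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send: may wait until the message is buffered/received */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: rank 1 cannot proceed until the data arrives */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```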

Asynchronous communications allow tasks to transfer data between one another independently. E.g.: task A sends a message to task B, and task A immediately continues with other work; the point at which task B actually receives, and starts working on, the sent data doesn't matter. Asynchronous communications are often referred to as non-blocking communications. They allow computation and communication to be interleaved, potentially giving less overhead than the synchronous case (see the non-blocking sketch below).
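A corresponding non-blocking sketch, again assuming MPI (illustrative only): the sender posts MPI_Isend, carries on with other work, and only calls MPI_Wait when the send must be complete (e.g. before reusing the buffer).

```c
/* Illustrative sketch of non-blocking (asynchronous) communication with MPI.
 * Run with at least two tasks, e.g.:  mpirun -np 2 ./nonblocking_demo */
#include <mpi.h>

static void do_other_work(void)
{
    /* Placeholder for computation overlapped with the transfer */
}

int main(int argc, char *argv[])
{
    int rank;
    double data[1024] = {0};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Start the transfer and return immediately */
        MPI_Isend(data, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        do_other_work();                    /* overlap computation           */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* ensure the send has completed */
    } else if (rank == 1) {
        MPI_Recv(data, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```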

Scope of communications: knowing which tasks must communicate with each other can be crucial to an effective design of a parallel program. There are two general types of scope: point-to-point (P2P) and collective / broadcasting.

Point-to-point (P2P): involves only two tasks, one acting as the sender/producer of the data and the other as the receiver/consumer.
Collective: data sharing between more than two tasks (sometimes specified as a common group or collective).
Both P2P and collective communications can be synchronous or asynchronous.

Collective communications. Typical techniques used for collective communications:
- BROADCAST: the same message is sent from the initiator task to all tasks.
- SCATTER: a different message is sent from the initiator task to each task.
- GATHER: messages from the tasks are combined together at one task.
- REDUCE: only parts, or a reduced form, of the messages are worked on.
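These four patterns map directly onto MPI's standard collective operations. The sketch below is illustrative (not course code) and assumes it is run with exactly four tasks:

```c
/* Illustrative sketch of the four collective patterns using MPI collectives.
 * Buffer sizes assume exactly four tasks:  mpirun -np 4 ./collectives_demo */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nprocs;
    int param, part, parts[4], piece, all_pieces[4], local_sum, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* BROADCAST: the same value goes from the root (rank 0) to every task */
    param = (rank == 0) ? 123 : 0;
    MPI_Bcast(&param, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* SCATTER: a different element of parts[] goes to each task */
    if (rank == 0)
        for (int i = 0; i < nprocs; i++) parts[i] = i * 10;
    MPI_Scatter(parts, 1, MPI_INT, &part, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* GATHER: each task's piece is collected into all_pieces[] on the root */
    piece = rank * rank;
    MPI_Gather(&piece, 1, MPI_INT, all_pieces, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* REDUCE: each task's local_sum is combined into one total on the root */
    local_sum = rank + 1;
    MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```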

There may be a choice of different communication techniques, both in terms of hardware (e.g., fibre optics, wireless, bus systems) and in terms of the software / protocol used. The programmer may need to use a combination of techniques and technologies to establish the most efficient choice (in terms of speed, power, size, etc.).

Cloud Computing EEE4084F

Cloud computing: a short but informative marketing clip… "Salesforce: What is cloud computing"

Cloud computing is a style of computing in which dynamically scalable and usually virtualized computing resources are provided as a service over the internet. Using cloud computing, you request resources or services over the internet (or an intranet), and it provides the scalability and reliability of a data center.

On-demand scalability: add or remove processors, memory, and network bandwidth in your cluster (and be billed for the QoS / resource usage).
Virtualization and virtual systems & services: request operating systems, storage, databases, and other services.

Traditional computing stack: App on top of the Operating System, on top of the Hardware.
Virtualized computing stack: App on top of a (guest) OS, on top of a Hypervisor, on top of the Hardware.

Utility computing: rent cycles. Examples: Amazon's EC2, GoGrid, AppNexus.
Platform as a Service (PaaS): provides a user-friendly API to aid implementation. Example: Google App Engine.
Software as a Service (SaaS): "just run it for me". Example: Gmail.
Driving philosophy: why buy the equipment and do the configuration and maintenance yourself if you can contract it out? It can work out much more cost-effective.

Elastic Compute Cloud (EC2): rent computing resources by the hour; the basic unit of accounting is the instance-hour; additional cost for bandwidth usage.
Simple Storage Service (S3): persistent storage; charged by the GB/month; additional costs for bandwidth.
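As a rough illustration of this billing model, the sketch below uses entirely hypothetical rates and usage figures (not real AWS pricing) to estimate a monthly bill:

```c
/* Illustrative sketch only: all rates and usage figures are hypothetical,
 * chosen simply to show how instance-hours, GB-months and bandwidth add up. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical example rates (NOT real AWS pricing) */
    double ec2_rate_per_instance_hour = 0.10;   /* $ per instance-hour       */
    double s3_rate_per_gb_month       = 0.05;   /* $ per GB stored per month */
    double bw_rate_per_gb             = 0.08;   /* $ per GB transferred      */

    /* Hypothetical usage for one month */
    double instance_hours = 2 * 24 * 30;        /* two instances, full month */
    double storage_gb     = 50.0;
    double transfer_gb    = 20.0;

    double cost = instance_hours * ec2_rate_per_instance_hour
                + storage_gb     * s3_rate_per_gb_month
                + transfer_gb    * bw_rate_per_gb;

    printf("Estimated monthly cost: $%.2f\n", cost);
    return 0;
}
```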

Using MP and the Chimera cloud computing management system. The cloud system is hosted by Chemical Engineering; Electrical Engineering is essentially renting cycles… for free (thanks Chem Eng!). Scheduled times for use: 8am–12pm Monday and 12pm–5pm Thursday. (You can run at any time you like, but the system is likely to be less loaded by users outside EEE4084F at the above times.)

Next lecture:
- GPUs and CUDA
- Data dependencies (step 5)
Later lectures:
- Synchronization (step 6)
- Load balancing (step 7)
- Performance analysis (step 8)
End of the parallel programming series.