CS61C L42 Software Parallel Computing (1) Matt Johnson, Spring 2007 © UCB
inst.eecs.berkeley.edu/~cs61c
UC Berkeley CS61C : Machine Structures
Lecture 42 – Software Parallel Computing
TA Matt Johnson, inst.eecs.berkeley.edu/~cs61c-tf
PS3 Folding@home at 600 TFLOPS! (distributed computing)
- Playstation 3 now contributes 2/3rds of the project's total power
- Only accounts for 110K of the 2M CPUs in the project
Thanks to Prof. Demmel for his CS267 slides & Andy Carle for the 1st CS61C draft

CS61C L42 Software Parallel Computing (2) Matt Johnson, Spring 2007 © UCB
Today's Outline
- Motivation for Parallelism
- Software-Managed Parallelism
  - The idea: software vs. hardware parallelism
  - Problems SW parallelism can solve
  - Fundamental issues
- Programming for computer clusters
  - The lower-level model – Message Passing Interface (MPI)
  - Abstractions – MapReduce
- Programming Challenges

CS61C L42 Software Parallel Computing (3) Matt Johnson, Spring 2007 © UCB
Big Problems
- Simulation: the Third Pillar of Science
  - Traditionally we perform experiments or build systems
  - Limitations of the standard approach:
    - Too difficult – build large wind tunnels
    - Too expensive – build a disposable jet
    - Too slow – wait for climate or galactic evolution
    - Too dangerous – weapons, drug design
  - Computational Science:
    - Simulate the phenomenon on computers
    - Based on physical laws and efficient numerical methods
- Search engines need to build an index for the entire Internet
- Pixar needs to render movies

CS61C L42 Software Parallel Computing (4) Matt Johnson, Spring 2007 © UCB
Performance Requirements
- Performance terminology
  - the FLOP: FLoating point OPeration
  - Computing power measured in FLOPS (FLOPs per Second)
- Example: Global Climate Modeling
  - Divide the world into a grid (e.g. 10 km spacing)
  - Solve fluid dynamics equations for each point & minute
    - Requires about 100 FLOPs per grid point per minute
  - Weather Prediction (7 days in 24 hours): 56 Gflops
  - Climate Prediction (50 years in 30 days): 4.8 Tflops
- Perspective
  - Pentium 4 3GHz Desktop Processor: ~6-12 Gflops
  - At that rate the Climate Prediction run would take decades (roughly 30-70 years; see the sketch below)
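
As a back-of-the-envelope check (the "roughly 30-70 years" figure above is derived here from the slide's own numbers, not taken from the original deck), the snippet below divides the total work of the 30-day, 4.8 Tflop/s climate run by a 6-12 Gflop/s desktop's sustained rate:

#include <stdio.h>

int main(void) {
    /* total work in the 30-day run at a sustained 4.8 Tflop/s */
    double total_flops = 4.8e12 * 30 * 24 * 3600;
    double desktop[2]  = {6e9, 12e9};   /* assumed sustained desktop rates, flop/s */
    for (int i = 0; i < 2; i++) {
        double years = total_flops / desktop[i] / (365.0 * 24 * 3600);
        printf("at %.0f Gflop/s: ~%.0f years\n", desktop[i] / 1e9, years);
    }
    return 0;   /* prints roughly 66 and 33 years */
}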

CS61C L42 Software Parallel Computing (5) Matt Johnson, Spring 2007 © UCB
What Can We Do?
- Wait for our machines to get faster?
  - Moore's law tells us things are getting better; why not stall for the moment?
  - Moore's law on its last legs? Many believe so … thus the push for multi-core (Friday)!
- Chart from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, October 2006

CS61C L42 Software Parallel Computing (6) Matt Johnson, Spring 2007 © UCB
Let's Put Many CPUs Together!
- Distributed computing (SW parallelism)
  - Many separate computers (each with independent CPU, RAM, HD, NIC) that communicate through a network
    - Grids (home computers across the Internet) and Clusters (all in one room)
  - Can be "commodity" clusters, 100K+ nodes
  - About being able to solve "big" problems, not "small" problems faster
- Multiprocessing (HW parallelism)
  - Multiple processors "all in one box" that often communicate through shared memory
  - Includes multicore (many new CPUs)

CS61C L42 Software Parallel Computing (7) Matt Johnson, Spring 2007 © UCB
The Future of Parallelism
"Parallelism is the biggest challenge since high level programming languages. It's the biggest thing in 50 years because industry is betting its future that parallel programming will be useful." – David Patterson

CS61C L42 Software Parallel Computing (8) Matt Johnson, Spring 2007 © UCB
Administrivia
- Dan's OH moved to 3pm Friday
- Performance competition submissions due May 8th
  - No slip days can be used!
- The final is Sat 5/12, 12:30-3:30pm
  - Review session on Weds 5/9 at 2pm

CS61C L42 Software Parallel Computing (9) Matt Johnson, Spring 2007 © UCB
Distributed Computing Themes
- Let's network many disparate machines into one compute cluster
  - These could all be the same (easier) or very different machines (harder)
- Common themes
  - "Dispatcher" gives jobs & collects results
  - "Workers" (get, process, return) until done
- Examples
  - BOINC, render farms
  - Google clusters running MapReduce

CS61C L42 Software Parallel Computing (10) Matt Johnson, Spring 2007 © UCB
Distributed Computing Challenges
- Communication is the fundamental difficulty
  - Distributing data, updating shared resources, communicating results
  - Machines have separate memories, so the usual inter-process communication doesn't apply; we need the network
  - Introduces inefficiencies: overhead, waiting, etc.
- Need to parallelize algorithms
  - Must look at problems from a parallel standpoint
  - Tightly coupled problems require frequent communication (more of the slow part!)
  - We want to decouple the problem
    - Increase data locality
    - Balance the workload

CS61C L42 Software Parallel Computing (11) Matt Johnson, Spring 2007 © UCB
Programming Models: What is MPI?
- Message Passing Interface (MPI)
  - World's most popular distributed API
  - MPI is the "de facto standard" in scientific computing
  - C and FORTRAN bindings; version 2 in 1997
- What is MPI good for?
  - Abstracts away common network communication
  - Allows lots of control without bookkeeping
  - Freedom and flexibility come with complexity
    - 300 subroutines, but serious programs can be written with fewer than 10
- Basics:
  - One executable is run on every node
  - Each node's process is assigned a rank ID number
  - Call API functions to send messages

CS61C L42 Software Parallel Computing (12) Matt Johnson, Spring 2007 © UCB
Basic MPI Functions (1)
- MPI_Send() and MPI_Recv()
  - Basic API calls to send and receive data point-to-point, addressed by rank (the runtime node ID #)
  - We don't have to worry about networking details
  - Several variants are available: blocking and non-blocking
- MPI_Bcast()
  - One-to-many communication of data
  - Everyone calls it: one sends, the others block to receive
- MPI_Barrier()
  - Blocks when called, waits for everyone to call it (i.e., to arrive at some determined point in the code)
  - Synchronization
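
Below is a minimal point-to-point sketch, not from the lecture: rank 1 sends one int to rank 0. The two-rank setup, tag value 0, and payload 42 are made up for illustration; it assumes a working MPI installation, compiled with mpicc and launched with something like mpirun -np 2 ./a.out.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, data = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        data = 42;                                                   /* hypothetical payload */
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);           /* dest = 0, tag = 0 */
    } else if (rank == 0) {
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);  /* source = 1, tag = 0 */
        printf("rank 0 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}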

CS61C L42 Software Parallel Computing (13) Matt Johnson, Spring 2007 © UCB
Basic MPI Functions (2)
- MPI_Scatter()
  - Partitions an array that exists on a single node
  - Distributes the partitions to the other nodes in rank order
- MPI_Gather()
  - Collects the array pieces back onto a single node (in rank order)
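
As a hedged illustration of the scatter/gather pattern (again not from the original slides): the root distributes one integer per rank, each rank doubles its piece, and the root collects the results in rank order. The "double the value" work and the array contents are made up.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, mine, i;
    int *data = NULL, *result = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* only the root owns the full arrays */
        data   = malloc(size * sizeof(int));
        result = malloc(size * sizeof(int));
        for (i = 0; i < size; i++) data[i] = i;
    }

    /* each rank receives one element of data[] */
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine *= 2;                              /* local "computation" */
    /* collect one element from each rank back onto the root, in rank order */
    MPI_Gather(&mine, 1, MPI_INT, result, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < size; i++) printf("%d ", result[i]);
        printf("\n");
        free(data);
        free(result);
    }
    MPI_Finalize();
    return 0;
}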

CS61C L42 Software Parallel Computing (14) Matt Johnson, Spring 2007 © UCB
Basic MPI Functions (3)
- MPI_Reduce()
  - Performs a "reduction operation" across nodes to yield a value on a single node
  - Similar to accumulate in Scheme: (accumulate + '(…))
  - MPI can be clever about the reduction
  - Pre-defined reduction operations, or make your own (and abstract datatypes) with MPI_Op_create()
- MPI_Alltoall()
  - Updates a shared data resource
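
A minimal MPI_Reduce sketch (illustrative only): every rank contributes one integer, and the pre-defined MPI_SUM operation combines them onto rank 0, much like the Scheme accumulate above applied to one value per node.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* combine every rank's 'rank' value with MPI_SUM; the result lands only on root 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum of ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}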

CS61C L42 Software Parallel Computing (15) Matt Johnson, Spring 2007 © UCB
MPI Program Template
- Communicators – set up node groups
- Startup/Shutdown functions
  - Set up rank and size, pass argc and argv
- "Real" code segment

#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;                       /* rank = this node's ID, size = number of nodes */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* Data distribution */ ...
    /* Computation & Communication */ ...
    /* Result gathering */ ...
    MPI_Finalize();
}

CS61C L42 Software Parallel Computing (16) Matt Johnson, Spring 2007 © UCB
Challenges with MPI
- Deadlock is possible…
  - Seen in CS61A – a state of no progress
  - Blocking communication can cause deadlock
    - "Crossed" calls when trading information
    - Example:
      - Proc1: MPI_Recv(Proc2, A); MPI_Send(Proc2, B);
      - Proc2: MPI_Recv(Proc1, B); MPI_Send(Proc1, A);
    - There are some solutions, e.g. MPI_Sendrecv() (see the sketch below)
- Large overhead from communication mismanagement
  - Time spent blocking is wasted cycles
  - Can overlap computation with non-blocking communication
- Load imbalance is possible…
- Things are starting to look hard to code!
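
The fragment below is an illustration (it assumes MPI has already been initialized and that 'partner' is the other rank): the crossed-call pattern appears as comments, and MPI_Sendrecv performs the same exchange without risking deadlock.

#include <mpi.h>

void swap_values(int *mine, int *theirs, int partner) {
    MPI_Status status;
    /* DEADLOCK-PRONE version: if both ranks post the blocking receive first,
       neither ever reaches its send:
       MPI_Recv(theirs, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
       MPI_Send(mine,   1, MPI_INT, partner, 0, MPI_COMM_WORLD);          */

    /* Safe version: the send and receive are paired in one call */
    MPI_Sendrecv(mine,   1, MPI_INT, partner, 0,
                 theirs, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, &status);
}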

CS61C L42 Software Parallel Computing (17) Matt Johnson, Spring 2007 © UCB
A New Hope: Google's MapReduce
- Remember CS61A?
    (reduce + (map square '(1 2 3)))
  → (reduce + '(1 4 9))
  → 14
- We told you "the beauty of pure functional programming is that it's easily parallelizable"
  - Do you see how you could parallelize this?
  - What if the reduce function argument were associative? Would that help? (See the sketch below.)
- Imagine 10,000 machines ready to help you compute anything you could cast as a MapReduce problem!
  - This is the abstraction Google is famous for authoring (but their reduce is not the same as CS61A's or MPI's reduce)
    - It builds a reverse-lookup table
  - It hides LOTS of the difficulty of writing parallel code!
  - The system takes care of load balancing, dead machines, etc.
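
One way to see why associativity matters, as a single-machine sketch (illustration only, not Google's code): because + is associative, each chunk of the mapped data can be reduced independently and the partial results combined afterwards in any grouping. The chunk count and input length here are arbitrary.

#include <stdio.h>

#define N 12
#define CHUNKS 3

int main(void) {
    int squares[N], partial[CHUNKS] = {0}, total = 0;
    for (int i = 0; i < N; i++) squares[i] = (i + 1) * (i + 1);   /* the "map" step */

    /* each chunk could live on a separate machine; since + is associative,
       the grouping of the additions does not change the answer */
    for (int c = 0; c < CHUNKS; c++)
        for (int i = c * (N / CHUNKS); i < (c + 1) * (N / CHUNKS); i++)
            partial[c] += squares[i];

    for (int c = 0; c < CHUNKS; c++) total += partial[c];          /* combine partial results */
    printf("sum of squares 1..%d = %d\n", N, total);               /* 650 */
    return 0;
}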

CS61C L42 Software Parallel Computing (18) Matt Johnson, Spring 2007 © UCB
MapReduce Programming Model
- Input & Output: each a set of key/value pairs
- Programmer specifies two functions:
  map(in_key, in_value) → list(out_key, intermediate_value)
  - Processes an input key/value pair
  - Produces a set of intermediate pairs
  reduce(out_key, list(intermediate_value)) → list(out_value)
  - Combines all intermediate values for a particular key
  - Produces a set of merged output values (usually just one)
- code.google.com/edu/parallel/mapreduce-tutorial.html

CS61C L42 Software Parallel Computing (19) Matt Johnson, Spring 2007 © UCB
MapReduce Code Example

map(String input_key, String input_value):
    // input_key : document name
    // input_value: document contents
    for each word w in input_value:
        EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
    // output_key : a word
    // output_values: a list of counts
    int result = 0;
    for each v in intermediate_values:
        result += ParseInt(v);
    Emit(AsString(result));

- "Mapper" nodes are responsible for the map function
- "Reducer" nodes are responsible for the reduce function
- Data lives on a distributed file system (DFS)

CS61C L42 Software Parallel Computing (20) Matt Johnson, Spring 2007 © UCB
MapReduce Example Diagram
[Diagram: word-count walkthrough. Input files containing the words ah, er, if, or, uh are split across mappers, which emit (word, "1") pairs; the pairs are grouped by key (ah:1,1,1,1 / er:1 / if:1,1 / or:1,1,1 / uh:1) and the reducers sum each group. The map and reduce pseudocode is the same as on the previous slide.]
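
To make the diagram concrete, here is a single-machine sketch (illustrative only, no framework involved) of the same word-count pipeline: a map step emits (word, 1) pairs, a sort stands in for the shuffle/group-by-key step, and a reduce step sums each group. The input string mirrors the words in the diagram.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PAIRS 256

static const char *pairs[MAX_PAIRS];   /* intermediate keys; each entry stands for (word, "1") */
static int npairs = 0;

static void emit_intermediate(const char *word) {
    if (npairs < MAX_PAIRS) pairs[npairs++] = word;
}

/* map: split the document into words and emit (word, 1) for each */
static void map(char *doc) {
    for (char *w = strtok(doc, " "); w != NULL; w = strtok(NULL, " "))
        emit_intermediate(w);
}

static int cmp(const void *a, const void *b) {
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void) {
    char doc[] = "ah ah er ah if or or uh or ah if";   /* stand-in for the input files */
    map(doc);

    qsort(pairs, npairs, sizeof(pairs[0]), cmp);       /* the "shuffle": group pairs by key */

    /* reduce: sum the 1s for each run of equal keys */
    for (int i = 0; i < npairs; ) {
        int j = i, count = 0;
        while (j < npairs && strcmp(pairs[j], pairs[i]) == 0) { count++; j++; }
        printf("%s: %d\n", pairs[i], count);           /* ah:4 er:1 if:2 or:3 uh:1 */
        i = j;
    }
    return 0;
}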

CS61C L42 Software Parallel Computing (21) Matt Johnson, Spring 2007 © UCB
MapReduce Advantages/Disadvantages
- Now it's easy to program for many CPUs
  - Communication management effectively gone
    - I/O scheduling done for us
  - Fault tolerance and monitoring
    - Machine failures, suddenly-slow machines, and other issues are handled
  - Can be much easier to design and program!
- But… it further restricts the solvable problems
  - Might be hard to express some problems in a MapReduce framework
  - Data parallelism is key
    - Need to be able to break the problem up into data chunks
  - Google's MapReduce is closed-source – use Hadoop!

CS61C L42 Software Parallel Computing (22) Matt Johnson, Spring 2007 © UCB
Things to Worry About: Parallelizing Code
- Applications can almost never be completely parallelized; some serial code remains
- Let s be the serial fraction of the program and P the number of processors
- Amdahl's law:
    Speedup(P) = Time(1) / Time(P) ≤ 1 / (s + (1 - s)/P)
  and as P → ∞, Speedup(P) ≤ 1/s
- Even if the parallel portion of your application speeds up perfectly, your performance may be limited by the sequential portion
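
A quick worked example (the numbers are chosen here for illustration, not taken from the slide): with s = 0.05 (5% serial code) and P = 100 processors,

    Speedup(100) ≤ 1 / (0.05 + 0.95/100) = 1 / 0.0595 ≈ 16.8
    as P → ∞:   Speedup ≤ 1/s = 1/0.05 = 20

So even with unlimited processors, that 5% of serial code caps the speedup at 20x.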

CS61C L42 Software Parallel Computing (23) Matt Johnson, Spring 2007 © UCB
But… What About Overhead?
- Amdahl's law ignores overhead
  - e.g. from communication and synchronization
- Amdahl's law is useful for bounding a program's speedup, but it cannot predict speedup

CS61C L42 Software Parallel Computing (24) Matt Johnson, Spring 2007 © UCB
Summary
- Parallelism is necessary
  - It looks like the future of computing…
  - It is unlikely that serial computing will ever catch up with parallel computing
- Software parallelism
  - Grids and clusters: networked computers
  - Two common ways to program:
    - Message Passing Interface (lower level)
    - MapReduce (higher level, more constrained)
- Parallelism is often difficult
  - Speedup is limited by the serial portion of the code and by communication overhead

CS61C L42 Software Parallel Computing (25) Matt Johnson, Spring 2007 © UCB
To Learn More…
- About MPI…
  - Parallel Programming in C with MPI and OpenMP by Michael J. Quinn
- About MapReduce…
  - code.google.com/edu/parallel/mapreduce-tutorial.html
  - labs.google.com/papers/mapreduce.html
  - lucene.apache.org/hadoop/index.html
- Try the lab, and come talk to me!

CS61C L42 Software Parallel Computing (26) Matt Johnson, Spring 2007 © UCB
Bonus Slides
These are extra slides that used to be included in the lecture notes but have been moved to this "bonus" area to serve as a supplement. The slides appear in the order they would have in the normal presentation.

CS61C L42 Software Parallel Computing (27) Matt Johnson, Spring 2007 © UCB
How Do I Experiment w/ SW Parallelism?
1. Work for Google. ;-)
2. Use open-source MapReduce and VMWare
   - Hadoop: map/reduce & distributed file system: lucene.apache.org/hadoop/
   - Nutch: crawler, parsers, index: lucene.apache.org/nutch/
   - Lucene Java: text search engine library: lucene.apache.org/java/docs/
3. Wait until tomorrow!!! (lab)
   - Google is donating a cluster for instruction!
     - Will be built for Fall 2007
     - We're developing MapReduce bindings for Scheme for CS61A
     - New MPI lab for CS61C
     - It will be available to students & researchers

CS61C L42 Software Parallel Computing (28) Matt Johnson, Spring 2007 © UCB
Example Applications
- Science
  - Global climate modeling
  - Biology: genomics; protein folding; drug design; malaria simulations
  - Astrophysical modeling
  - Computational Chemistry, Material Sciences and Nanosciences
  - SETI@home: Search for Extra-Terrestrial Intelligence
- Engineering
  - Semiconductor design
  - Earthquake and structural modeling
  - Fluid dynamics (airplane design)
  - Combustion (engine design)
  - Crash simulation
  - Computational Game Theory (e.g., Chess Databases)
- Business
  - Rendering computer-generated imagery (CGI), a la Pixar and ILM
  - Financial and economic modeling
  - Transaction processing, web services and search engines
- Defense
  - Nuclear weapons – test by simulations
  - Cryptography