Distributed Computing Seminar Lecture 1: Introduction to Distributed Computing & Systems Background Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet Summer 2007 Except where otherwise noted, the contents of this presentation are © Copyright 2007 University of Washington and are licensed under the Creative Commons Attribution 2.5 License.

Course Overview
5 lectures:
  1 Introduction
  2 Technical Side: MapReduce & GFS
  2 Theoretical: Algorithms for distributed computing
Readings + Questions nightly
  Readings:
  Questions: minilecture/MapReduceMiniSeriesReadingQuestions.doc

Outline
  Introduction to Distributed Computing
  Parallel vs. Distributed Computing
  History of Distributed Computing
  Parallelization and Synchronization
  Networking Basics

Computer Speedup Moore’s Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965) Image: Tom’s Hardware; not subject to the Creative Commons license applicable to the rest of this work.

Scope of problems What can you do with 1 computer? What can you do with 100 computers? What can you do with an entire data center?

Distributed problems Rendering multiple frames of high-quality animation Image: DreamWorks Animation; not subject to the Creative Commons license applicable to the rest of this work.

Distributed problems Simulating several hundred or thousand characters Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema, neither image is subject to the Creative Commons license applicable to the rest of the work.

Distributed problems Indexing the web (Google) Simulating an Internet-sized network for networking experiments (PlanetLab) Speeding up content delivery (Akamai) What is the key attribute that all these examples have in common?

Parallel vs. Distributed
Parallel computing can mean:
  Vector processing of data
  Multiple CPUs in a single computer
Distributed computing is multiple CPUs across many computers over the network.

A Brief History… Parallel computing was favored in the early years, primarily vector-based at first; gradually more thread-based parallelism was introduced. Image: Computer Pictures Database and Cray Research Corp; not subject to the Creative Commons license applicable to the rest of this work.

A Brief History… “Massively parallel architectures” start rising in prominence. Message Passing Interface (MPI) and other libraries developed. Bandwidth was a big problem.

A Brief History… 1995-Today Cluster/grid architecture increasingly dominant. Special node machines eschewed in favor of COTS technologies. Web-wide cluster software. Companies like Google take this to the extreme.

Parallelization & Synchronization

Parallelization Idea Parallelization is “easy” if processing can be cleanly split into n units.

Parallelization Idea (2) In a parallel computation, we would like to have as many threads as we have processors. e.g., a four-processor computer would be able to run four threads at the same time.
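As a concrete sketch of this idea (an addition to the original slides; the array contents and the doubling operation are placeholder work), a Java program can spawn one thread per processor and hand each thread a disjoint slice:

    import java.util.Arrays;

    public class SliceWork {
        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1_000_000];
            Arrays.fill(data, 1);
            // As many threads as we have processors:
            int n = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[n];
            int chunk = (data.length + n - 1) / n;
            for (int t = 0; t < n; t++) {
                final int lo = t * chunk;
                final int hi = Math.min(data.length, lo + chunk);
                // Each worker touches only its own slice, so no coordination is needed.
                workers[t] = new Thread(() -> {
                    for (int i = lo; i < hi; i++) data[i] *= 2;
                });
                workers[t].start();
            }
            for (Thread w : workers) w.join(); // wait for every unit to finish
        }
    }

This is the clean case the slide describes: the units share nothing, so the threads never need to communicate.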

Parallelization Idea (3)

Parallelization Idea (4)

Parallelization Pitfalls But this model is too simple! How do we assign work units to worker threads? What if we have more work units than threads? How do we aggregate the results at the end? How do we know all the workers have finished? What if the work cannot be divided into completely separate tasks? What is the common theme of all of these problems?

Parallelization Pitfalls (2) Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource. Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!

What is Wrong With This?
Thread 1:
    void foo() {
      x++;
      y = x;
    }
Thread 2:
    void bar() {
      y++;
      x += 3;
    }
If the initial state is y = 0, x = 6, what happens after these threads finish running?

Multithreaded = Unpredictability
When we run a multithreaded program, we don’t know what order threads run in, nor do we know when they will interrupt one another. Many things that look like “one step” operations actually take several steps under the hood:
Thread 1:
    void foo() {
      eax = mem[x];
      inc eax;
      mem[x] = eax;
      ebx = mem[x];
      mem[y] = ebx;
    }
Thread 2:
    void bar() {
      eax = mem[y];
      inc eax;
      mem[y] = eax;
      eax = mem[x];
      add eax, 3;
      mem[x] = eax;
    }

Multithreaded = Unpredictability
This applies to more than just integers:
  Pulling work units from a queue
  Reporting work back to the master unit
  Telling another thread that it can begin the “next phase” of processing
  …
All require synchronization!
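For the queue case, for example, Java’s standard library packages the needed synchronization into java.util.concurrent.BlockingQueue; a minimal sketch (the integer work units and the -1 stop sentinel are placeholders, not from the slides):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class WorkQueue {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        int unit = queue.take(); // blocks until a work unit arrives
                        if (unit < 0) break;     // hypothetical sentinel: stop signal
                        System.out.println("processed " + unit);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();
            for (int i = 0; i < 5; i++) queue.put(i); // producer side
            queue.put(-1); // send the stop sentinel
            worker.join();
        }
    }

The queue itself mediates every handoff between producer and worker, so neither thread ever reads a half-written work unit.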

Synchronization Primitives A synchronization primitive is a special shared variable that is guaranteed to be accessed atomically. Hardware support ensures that operations on synchronization primitives only ever take one step.
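As an illustration (a Java-specific aside, not from the original slides), the java.util.concurrent.atomic classes expose such hardware-backed one-step operations; the x++ race from the earlier slide disappears when the increment really is atomic:

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicCounter {
        private static final AtomicInteger counter = new AtomicInteger(0);

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter.incrementAndGet(); // one atomic step: no lost updates
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(counter.get()); // always 200000
        }
    }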

Semaphores A semaphore is a flag that can be raised or lowered in one step. Semaphores were flags that railroad engineers would use when entering a shared track. Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores set() and reset() can be thought of as lock() and unlock(). Calls to lock() when the semaphore is already locked cause the thread to block. Pitfalls: must “bind” semaphores to particular objects, and must remember to unlock correctly.

The “corrected” example
Global var Semaphore sem = new Semaphore(); guards access to x & y:
Thread 1:
    void foo() {
      sem.lock();
      x++;
      y = x;
      sem.unlock();
    }
Thread 2:
    void bar() {
      sem.lock();
      y++;
      x += 3;
      sem.unlock();
    }
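In real Java, one way to render this sketch runnable (an assumption of this translation: the slide’s pseudocode Semaphore maps onto java.util.concurrent.Semaphore, whose lock()/unlock() are spelled acquire()/release()) looks like the following; the try/finally is an addition that guarantees the unlock even if the critical section throws:

    import java.util.concurrent.Semaphore;

    public class CorrectedExample {
        static final Semaphore sem = new Semaphore(1); // one permit = binary lock
        static int x = 6, y = 0;

        static void foo() throws InterruptedException {
            sem.acquire();         // lock(); blocks if already held
            try {
                x++;
                y = x;
            } finally {
                sem.release();     // unlock(); never forget this
            }
        }

        static void bar() throws InterruptedException {
            sem.acquire();
            try {
                y++;
                x += 3;
            } finally {
                sem.release();
            }
        }
    }

Whichever thread acquires the permit first runs its whole critical section before the other can interleave, so the x and y updates can no longer be torn apart.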

Condition Variables A condition variable notifies threads that a particular condition has been met, e.g., informing another thread that a queue now contains elements to pull from (or that it’s empty – request more elements!). Pitfall: what if nobody’s listening?

The final example
Global vars:
    Semaphore sem = new Semaphore();
    ConditionVar fooFinishedCV = new ConditionVar();
    boolean fooDone = false;
Thread 1:
    void foo() {
      sem.lock();
      x++;
      y = x;
      fooDone = true;
      sem.unlock();
      fooFinishedCV.notify();
    }
Thread 2:
    void bar() {
      sem.lock();
      if(!fooDone) fooFinishedCV.wait(sem);
      y++;
      x += 3;
      sem.unlock();
    }
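A hedged Java translation of this slide, using java.util.concurrent.locks rather than the slide’s pseudocode types, needs two real-world adjustments: Java requires holding the lock while signalling, and the wait must sit in a while loop (not an if) because waits can wake up spuriously:

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class FinalExample {
        static final ReentrantLock lock = new ReentrantLock();
        static final Condition fooFinishedCV = lock.newCondition();
        static boolean fooDone = false;
        static int x = 6, y = 0;

        static void foo() {
            lock.lock();
            try {
                x++;
                y = x;
                fooDone = true;
                fooFinishedCV.signal(); // must hold the lock to signal in Java
            } finally {
                lock.unlock();
            }
        }

        static void bar() throws InterruptedException {
            lock.lock();
            try {
                while (!fooDone) {         // loop, not if: guards against spurious wakeups
                    fooFinishedCV.await(); // atomically releases the lock while waiting
                }
                y++;
                x += 3;
            } finally {
                lock.unlock();
            }
        }
    }

The boolean fooDone is still essential: it is what bar() actually checks, while the condition variable merely wakes it up to re-check.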

Too Much Synchronization? Deadlock
Synchronization becomes even more complicated when multiple locks can be used. Can cause entire system to “get stuck”.
Thread A:
    semaphore1.lock();
    semaphore2.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();
Thread B:
    semaphore2.lock();
    semaphore1.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();
(Image: RPI CSCI.4210 Operating Systems notes)
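A standard remedy, not shown on the slide, is to impose a single global lock order: if every thread acquires semaphore1 before semaphore2, the circular wait above cannot form. A minimal Java sketch of that discipline:

    import java.util.concurrent.Semaphore;

    public class LockOrdering {
        static final Semaphore semaphore1 = new Semaphore(1);
        static final Semaphore semaphore2 = new Semaphore(1);

        // Every thread, A or B alike, acquires in the same order: semaphore1, then semaphore2.
        static void criticalSection() throws InterruptedException {
            semaphore1.acquire();
            try {
                semaphore2.acquire();
                try {
                    /* use data guarded by both semaphores */
                } finally {
                    semaphore2.release();
                }
            } finally {
                semaphore1.release(); // released in reverse acquisition order
            }
        }
    }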

The Moral: Be Careful! Synchronization is hard  Need to consider all possible shared state  Must keep locks organized and use them consistently and correctly Knowing there are bugs may be tricky; fixing them can be even worse! Keeping shared state to a minimum reduces total system complexity

Fundamentals of Networking

Sockets: The Internet = tubes? A socket is the basic network interface. It provides a two-way “pipe” abstraction between two applications. The client creates a socket and connects to the server, which receives a socket representing the other side.

Ports Within an IP address, a port is a sub-address identifying a listening program. Ports allow multiple clients to connect to a server at once.
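Java’s standard library, for instance, exposes exactly this socket-and-port model; a minimal echo sketch (the host "localhost" and port 9000 are arbitrary choices for the example, not from the slides):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class EchoDemo {
        // Server: listen on a port; each accept() yields a socket for one client.
        static void server() throws IOException {
            try (ServerSocket listener = new ServerSocket(9000);
                 Socket conn = listener.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                out.println("echo: " + in.readLine()); // the two-way "pipe" in action
            }
        }

        // Client: connect to the server's address and port.
        static void client() throws IOException {
            try (Socket sock = new Socket("localhost", 9000);
                 PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(sock.getInputStream()))) {
                out.println("hello");
                System.out.println(in.readLine());
            }
        }
    }

The port is what lets the operating system route an incoming connection to this particular listening program; a second server on the same machine would simply listen on a different port.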

What makes this work? Underneath the socket layer are several more protocols. Most important are TCP and IP, which are used hand-in-hand so often that they’re usually spoken of as one protocol: TCP/IP. Even lower-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using wireless…

Why is This Necessary? The Internet is not actually tube-like “under the hood”. Unlike the phone system (circuit-switched), the packet-switched Internet uses many routes at once.

Networking Issues If a party to a socket disconnects, how much data did they receive? Did they crash, or did a machine in the middle fail? Can someone in the middle intercept or modify our data? Traffic congestion makes switch/router topology important for efficient throughput.

Conclusions Processing more data means using more machines at the same time. Cooperation between processes requires synchronization. Designing real distributed systems requires consideration of networking topology. Next time: How MapReduce works