
1 Introduction to Distributed Systems http://net.pku.edu.cn/~course/cs402 闫宏飞 (Hongfei Yan), School of Electronics Engineering and Computer Science, Peking University, 7/22/2010

2 Introduction to Distributed Systems: Parallelization & Synchronization

3 Parallelization Idea (2)

4 Parallelization Idea (3)

5 Parallelization Idea (4)

6 Process synchronization refers to the coordination of simultaneous threads or processes to complete a task, so that operations happen in the correct runtime order and unexpected race conditions are avoided.

7 And if you thought I was joking …

8 What is Wrong With This?

    Thread 1:
    void foo() {
        x++;
        y = x;
    }

    Thread 2:
    void bar() {
        y++;
        x++;
    }

If the initial state is y = 0, x = 6, what happens after these threads finish running?

9 Multithreaded = Unpredictability. When we run a multithreaded program, we don't know what order threads run in, nor do we know when they will interrupt one another. Many things that look like "one step" operations actually take several steps under the hood:

    Thread 1: void foo() { x++; y = x; }
        eax = mem[x];
        inc eax;
        mem[x] = eax;
        ebx = mem[x];
        mem[y] = ebx;

    Thread 2: void bar() { y++; x++; }
        eax = mem[y];
        inc eax;
        mem[y] = eax;
        eax = mem[x];
        inc eax;
        mem[x] = eax;
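To make the unpredictability concrete, here is a minimal C++ sketch (my illustration, not from the slides) that races two unsynchronized threads over the shared integers; because the increments interleave at the machine level, the printed values can differ from run to run:

    #include <iostream>
    #include <thread>

    int x = 6, y = 0;  // initial state from the previous slide

    void foo() { x++; y = x; }  // Thread 1
    void bar() { y++; x++; }    // Thread 2

    int main() {
        std::thread t1(foo);
        std::thread t2(bar);
        t1.join();
        t2.join();
        // Unsynchronized shared access is a data race (undefined behavior
        // in C++); in practice the result depends on the interleaving.
        std::cout << "x=" << x << " y=" << y << "\n";
        return 0;
    }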

10 Multithreaded = Unpredictability. This applies to more than just integers: pulling work units from a queue, reporting work back to the master unit, telling another thread that it can begin the "next phase" of processing… All require synchronization!

11 Synchronization Primitives: semaphores, barriers, locks.

    Thread 1:
    void foo() {
        sem.lock();
        x++;
        y = x;
        sem.unlock();
    }

    Thread 2:
    void bar() {
        sem.lock();
        y++;
        x++;
        sem.unlock();
    }
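In standard C++ the same guarded-update pattern is usually written with std::mutex and a scoped guard; a minimal sketch under that assumption (the slides' sem object is pseudocode):

    #include <mutex>
    #include <thread>

    int x = 6, y = 0;
    std::mutex m;

    void foo() {
        std::lock_guard<std::mutex> guard(m);  // lock; auto-unlock at scope exit
        x++;
        y = x;
    }

    void bar() {
        std::lock_guard<std::mutex> guard(m);
        y++;
        x++;
    }

    int main() {
        std::thread t1(foo), t2(bar);
        t1.join();
        t2.join();
        // Each function's updates are now atomic with respect to the other;
        // the only remaining nondeterminism is which thread locks first.
        return 0;
    }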

12 Too Much Synchronization? Deadlock

    Thread A:
    semaphore1.lock();
    semaphore2.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();

    Thread B:
    semaphore2.lock();
    semaphore1.lock();
    /* use data guarded by semaphores */
    semaphore1.unlock();
    semaphore2.unlock();

(Image: RPI CSCI.4210 Operating Systems notes)
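Thread A takes semaphore1 then semaphore2 while Thread B takes them in the opposite order, so each can end up holding one lock and waiting forever for the other. One standard fix is to acquire all locks in a single global order, or to let the library acquire them atomically; a hypothetical C++ sketch using std::scoped_lock (C++17):

    #include <mutex>
    #include <thread>

    std::mutex m1, m2;

    void threadA() {
        std::scoped_lock both(m1, m2);  // acquires both without deadlock
        /* use data guarded by both mutexes */
    }                                   // both released at scope exit

    void threadB() {
        std::scoped_lock both(m1, m2);  // same call is safe in any order
        /* use data guarded by both mutexes */
    }

    int main() {
        std::thread a(threadA), b(threadB);
        a.join();
        b.join();
        return 0;
    }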

13 And if you thought I was joking …

14 The Moral: Be Careful! Synchronization is hard: you need to consider all possible shared state, and you must keep locks organized and use them consistently and correctly. Knowing there are bugs may be tricky; fixing them can be even worse! Keeping shared state to a minimum reduces total system complexity.

15 Introduction to Distributed Systems: Fundamentals of Networking

16 TCP, UDP, IP, router, switch, HTTP, FTP, port, socket, protocol, firewall, gateway

17 What makes this work? Underneath the socket layer are several more protocols. Most important are TCP and IP, which are used hand-in-hand so often that they're often spoken of as one protocol: TCP/IP.
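For concreteness, here is a minimal, hypothetical C++ client using the POSIX socket API (the host example.com and port 80 are stand-ins); the application only reads and writes bytes, while TCP supplies ordering and retransmission and IP supplies routing:

    #include <cstdio>
    #include <cstring>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        // Resolve the host/port to a TCP endpoint.
        addrinfo hints{}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;  // SOCK_STREAM selects TCP
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

        // Write a request and read back whatever the server sends.
        const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        send(fd, req, sizeof req - 1, 0);

        char buf[512];
        ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
        if (n > 0) { buf[n] = '\0'; std::puts(buf); }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }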

18 TCP/IP inventors Vinton G. Cerf and Robert E. Kahn received the 2004 Turing Award.

19 Why is This Necessary? The Internet is not actually tube-like "underneath the hood." Unlike the phone system (circuit switched), the packet-switched Internet uses many routes at once.

20 Networking Issues If a party to a socket disconnects, how much data did they receive? Did they crash? Or did a machine in the middle? Can someone in the middle intercept/modify our data? Traffic congestion makes switch/router topology important for efficient throughput

21 Introduction to Distributed Systems: Distributed Systems

22 Outline Models of computation Connecting distributed modules Failure & reliability

23 Models of Computation: Flynn's Taxonomy

                           Single Instruction (SI)          Multiple Instruction (MI)
    Single Data (SD)       SISD: single-threaded process    MISD: pipeline architecture
    Multiple Data (MD)     SIMD: vector processing          MIMD: multi-threaded processes

24 SISD [diagram: a single processor consuming one instruction stream and one stream of data items]

25 SIMD [diagram: a single processor applying each instruction to many data items D0…Dn in lockstep]
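As a rough illustration of SIMD, one instruction can operate on several data elements at once. A hypothetical C++ sketch using x86 SSE intrinsics (assumes an SSE-capable CPU; any vector ISA would do):

    #include <immintrin.h>
    #include <cstdio>

    int main() {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float out[4];

        __m128 va = _mm_loadu_ps(a);     // load four floats
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);  // one instruction, four additions (SIMD)
        _mm_storeu_ps(out, vc);

        // The SISD equivalent is a scalar loop:
        //   for (int i = 0; i < 4; i++) out[i] = a[i] + b[i];
        for (float f : out) std::printf("%g ", f);  // prints: 11 22 33 44
        std::printf("\n");
        return 0;
    }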

26 MIMD [diagram: multiple processors, each consuming its own instruction stream over its own data]

27 [diagram: a single von Neumann machine: one processor fetching instructions and data from memory, with an interface to the external world]

28 [diagram: a shared-memory multiprocessor: several processors, each with its own interface to the external world, fetching instructions and data from one shared memory]

29 [diagram: shared-memory multiprocessors whose memories are joined by a network]

30 [diagram: a distributed-memory system: each node pairs a processor with its own memory and external interface, and nodes exchange data over a network]

31 Outline Models of computation Connecting distributed modules Failure & reliability

32 System Organization: Having one big memory would make it a huge bottleneck; it eliminates all of the parallelism.

33 CTA (Candidate Type Architecture): Memory is Distributed

34 Interconnect Networks: The bottleneck in the CTA is transferring values from one local memory to another. Interconnect network design is very important, and several options are available. Design constraint: how to minimize interconnect network usage?

35 A Brief History… 1985-95: "Massively parallel architectures" start rising in prominence. Message Passing Interface (MPI) and other libraries are developed. Bandwidth was a big problem, for external interconnect networks in particular.

36 A Brief History … 1995-Today Cluster/grid architecture increasingly dominant Special node machines eschewed in favor of COTS technologies Web-wide cluster software Companies like Google take this to the extreme (10,000 node clusters)

37 More About Interconnects: Several types of interconnect are possible: bus, crossbar, torus, tree.

38 Interconnect: Bus. The simplest possible layout, but not realistically practical: too much contention, and little better than "one big memory."

39 Crossbar: All processors have "input" and "output" lines; the crossbar connects any input to any output. Allows for very low contention, but lots of wires and complexity. Will not scale to many nodes.

40 Toroidal networks: Nodes are connected to their logical neighbors; node-to-node transfer may include intermediaries. A reasonable trade-off for space/scalability.

41 Tree: Switch nodes transfer data "up" or "down" the tree. The hierarchical design keeps "short" transfers fast, with incremental cost for longer transfers. Aggregate bandwidth demands are often very large at the top. The most natural layout for most cluster networks today.

42 Outline Models of computation Connecting distributed modules Failure & reliability

43 "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." -- Leslie Lamport. Lamport's research contributions laid the foundations of the theory of distributed systems: logical clocks, Byzantine failures, Paxos for consensus, …
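Of those contributions, the logical clock is easy to sketch: every process keeps a counter that ticks on each local event, is attached to outgoing messages, and jumps forward on receipt. A minimal, hypothetical C++ version (my illustration, not Lamport's original notation):

    #include <algorithm>
    #include <cstdint>

    // Lamport logical clock: gives events a partial "happened-before"
    // order across processes without synchronized physical clocks.
    struct LamportClock {
        std::uint64_t time = 0;

        std::uint64_t localEvent() { return ++time; }

        // Attach the returned timestamp to an outgoing message.
        std::uint64_t send() { return ++time; }

        // On receipt, advance past the sender's timestamp.
        std::uint64_t receive(std::uint64_t msgTime) {
            time = std::max(time, msgTime) + 1;
            return time;
        }
    };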

44 Reliability Demands
- Support partial failure: the total system must support graceful decline in application performance rather than a full halt.
- Data recoverability: if components fail, their workload must be picked up by still-functioning units.
- Individual recoverability: nodes that fail and restart must be able to rejoin the group activity without a full group restart.

45 Reliability Demands
- Consistency: concurrent operations or partial internal failures should not cause externally visible nondeterminism.
- Scalability: adding increased load to a system should not cause outright failure, but a graceful decline; increasing resources should support a proportional increase in load capacity.
- Security: the entire system should be impervious to unauthorized access, which requires considering many more attack vectors than single-machine systems.

46 Ken Arnold, CORBA designer: "Failure is the defining difference between distributed and local programming."

47 Component Failure Individual nodes simply stop

48 Data Failure: Packets omitted by an overtaxed router, or dropped by a full receive buffer in the kernel; corrupt data retrieved from disk or net.

49 Network Failure: External and internal links can die. Some failures can be routed around in a ring or mesh topology; a star topology may cause individual nodes to appear to halt; a tree topology may cause a "split." Messages may be sent multiple times, not at all, or in corrupted form…

50 Timing Failure: Temporal properties may be violated. Lack of a "heartbeat" message may be interpreted as a component halt; clock skew between nodes may confuse version-aware data readers.

51 Byzantine Failure: Difficult-to-reason-about circumstances arise. If commands sent to a foreign node are not confirmed, what can we conclude about the state of the system?

52 Malicious Failure: A malicious (or maybe naïve) operator injects invalid or harmful commands into the system.

53 Correlated Failures: Multiple CPUs or hard drives from the same manufacturer lot may fail together. A power outage at one data center may cause demand overload at the failover center.

54 Preparing for Failure Distributed systems must be robust to these failure conditions But there are lots of pitfalls …

55 The Eight Design Fallacies The network is reliable. Latency is zero. Bandwidth is infinite. The network is secure. Topology doesn't change. There is one administrator. Transport cost is zero. The network is homogeneous. -- Peter Deutsch and James Gosling, Sun Microsystems

56 Dealing With Component Failure: Use heartbeats to monitor component availability. A "buddy" or "parent" node is aware of the desired computation and can restart it elsewhere if needed. Individual storage nodes should not be the sole owner of data. Pitfall: how do you keep replicas consistent?
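A heartbeat monitor can be as simple as remembering when each node last checked in and flagging any node whose last heartbeat is older than a timeout. A hypothetical C++ sketch (node IDs and the 3-second timeout are invented for the example):

    #include <chrono>
    #include <string>
    #include <unordered_map>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    class HeartbeatMonitor {
        std::unordered_map<std::string, Clock::time_point> lastSeen_;
        std::chrono::seconds timeout_;
    public:
        explicit HeartbeatMonitor(std::chrono::seconds timeout) : timeout_(timeout) {}

        // Called whenever a heartbeat message arrives from a node.
        void heartbeat(const std::string& nodeId) { lastSeen_[nodeId] = Clock::now(); }

        // Nodes whose last heartbeat is older than the timeout are only
        // *suspected* dead; a late heartbeat may still prove otherwise.
        std::vector<std::string> suspectedDead() const {
            std::vector<std::string> dead;
            auto now = Clock::now();
            for (const auto& [node, seen] : lastSeen_)
                if (now - seen > timeout_) dead.push_back(node);
            return dead;
        }
    };

    // Usage: HeartbeatMonitor mon(std::chrono::seconds(3));
    //        mon.heartbeat("node-7");  ...later...  mon.suspectedDead();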

57 Dealing With Data Failure: Data should be check-summed and verified at several points. Never trust another machine to do your data validation! Sequence identifiers can be used to ensure commands and packets are not lost.
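As one concrete (and hypothetical, chosen only for illustration) way to checksum a message, here is the simple Fletcher-16 algorithm in C++; the sender appends the sum, and the receiver recomputes it over the bytes it actually got and compares:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Fletcher-16: a cheap checksum that catches most corruption; it is
    // not cryptographic, so it does not protect against tampering.
    std::uint16_t fletcher16(const std::uint8_t* data, std::size_t len) {
        std::uint32_t sum1 = 0, sum2 = 0;
        for (std::size_t i = 0; i < len; i++) {
            sum1 = (sum1 + data[i]) % 255;
            sum2 = (sum2 + sum1) % 255;
        }
        return static_cast<std::uint16_t>((sum2 << 8) | sum1);
    }

    int main() {
        const char msg[] = "work-unit 42 complete";  // hypothetical message
        std::uint16_t sum = fletcher16(
            reinterpret_cast<const std::uint8_t*>(msg), std::strlen(msg));
        std::printf("checksum: 0x%04x\n", sum);
        return 0;
    }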

58 Dealing With Network Failure: Have a well-defined split policy. Networks should routinely self-discover topology. Well-defined arbitration/leader-election protocols determine authoritative components. Inactive components should gracefully clean up and wait for network rejoin.

59 Dealing With Other Failures Individual application-specific problems can be difficult to envision Make as few assumptions about foreign machines as possible Design for security at each step

60 TPS: Definition. A system that handles transactions coming from several sources concurrently. Transactions are "events that generate and modify data stored in an information system for later retrieval."* To be considered a transaction processing system the computer must pass the ACID test. (* http://en.wikipedia.org/wiki/Transaction_Processing_System)

61 Key Features of TPS: ACID. "ACID" is the acronym for the features a TPS must support:
- Atomicity: a set of changes must all succeed or all fail.
- Consistency: changes to data must leave the data in a valid state when the full change set is applied.
- Isolation: the effects of a transaction must not be visible until the entire transaction is complete.
- Durability: after a transaction has been committed successfully, the state change must be permanent.

62 Atomicity & Durability What happens if we write half of a transaction to disk and the power goes out?

63 Logging: The Undo Buffer
1. The database writes to the log the current values of all cells it is going to overwrite.
2. The database overwrites the cells with the new values.
3. The database marks the log entry as committed.
If the database crashes during step 2, we use the log to roll the tables back to their prior state.
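A toy, in-memory version of that undo protocol in C++ (invented for illustration; a real database writes its log to stable storage, not to a vector):

    #include <unordered_map>
    #include <vector>

    struct UndoRecord { int cell; int oldValue; };

    std::unordered_map<int, int> table;  // cell id -> value
    std::vector<UndoRecord> undoLog;
    bool committed = false;

    void write(int cell, int newValue) {
        undoLog.push_back({cell, table[cell]});  // step 1: log the old value
        table[cell] = newValue;                  // step 2: overwrite
    }

    void commit() { committed = true; undoLog.clear(); }  // step 3

    // Crash recovery: if we died before commit, undo in reverse order.
    void recover() {
        if (committed) return;
        for (auto it = undoLog.rbegin(); it != undoLog.rend(); ++it)
            table[it->cell] = it->oldValue;
        undoLog.clear();
    }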

64 Consistency: Data Types. Data entered in databases have rigorous data types associated with them, and explicit ranges. This does not protect against all errors (entering a date in the past is still a valid date, etc.), but it eliminates tedious programmer concerns.

65 Consistency: Foreign Keys. Database designers declare that fields are indices into the keys of another table. The database ensures that the target key exists before allowing a value in the source field.

66 Isolation Using mutual-exclusion locks, we can prevent other processes from reading data we are in the process of writing When a database is prepared to commit a set of changes, it locks any records it is going to update before making the changes

67 Faulty Locking: Locking alone does not ensure isolation! If changes to table A become visible before changes to table B, the transaction is not isolated.

68 Two-Phase Locking After a transaction has released any locks, it may not acquire any new locks Effect: The lock set owned by a transaction has a “growing” phase and a “shrinking” phase
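A hypothetical C++ sketch of the two phases for the faulty example above: the transaction acquires the locks for both tables up front (growing phase), applies both changes, and only then releases them together (shrinking phase), so a reader that also takes the locks can never observe A updated but not B:

    #include <mutex>

    std::mutex lockA, lockB;  // one lock per table, for illustration
    int tableA = 0, tableB = 0;

    void transaction() {
        // Growing phase: acquire every lock the transaction will need.
        std::scoped_lock both(lockA, lockB);

        tableA += 1;  // a reader that also locks the tables can
        tableB += 1;  // never see A updated but not B

        // Shrinking phase: both locks are released together at scope
        // exit; after releasing, the transaction acquires no new locks.
    }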

69 Relationship to Distributed Computing: At the heart of a TPS is usually a large database server. Several distributed clients may connect to this server at points in time. The database may be spread across multiple servers, but must still maintain ACID.

70 Conclusions: Parallel systems evolved toward current distributed systems usage. Failure is hard to avoid: determine what is reasonable to plan for, keep protocols as simple as possible, and be mindful of common pitfalls.

