
1 Distributed System Building Blocks

2 Outline
Distributed Programming Paradigm
◦Shared Memory Programming
◦Message Passing Interface
Networking
Remote Procedure Call

3 Distributed Programming Paradigm Based on the memory architecture, programming paradigms can be roughly categorized into different classes. Shared Memory Programming ◦The processing units share the same memory space. Message Passing Interface Programming ◦There is no shared memory among the processing units, so they can only communicate by sending and receiving messages.

4 Parallel Processing Idea Serialized processing with context switch vs. parallel processing. [Diagram: Task 1 and Task 2 interleaved via context switches on one processor vs. Task 1 and Task 2 running simultaneously]

5 Shared Memory Programming Model Multiple processing units connect to the shared memory and have the same memory address space; all the processing units see virtually the same memory. [Diagram: processing units connected over memory buses to memory units, forming a single shared memory address space]

6 Multi-thread Programming Shared memory multi-thread programming is the standard for single-machine programming. It can harness the full power of a multicore architecture (with careful programming). The programming model is quite simple, but it is hard to program correctly and efficiently. ◦Windows: WinThread ◦Linux: pthread ◦Scientific Computing: OpenMP We will see some examples of pthread and OpenMP programming and use them to show some important concepts for building our distributed systems.

7 Process and Thread

8 Thread creation and termination Create a thread by providing the entry of the thread (a function) ◦pthread_create(thread, attr, start_routine, arg) Wait for a thread to finish; this is a special kind of thread synchronization. ◦pthread_join Quit the execution of a thread ◦pthread_exit Once created, threads are peers and independent. pthreadcreate.c
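The slide's pthreadcreate.c is not reproduced here; the following is a minimal sketch of the same idea, assuming four worker threads (the count and the message are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    void *start_routine(void *arg) {        /* the entry of the thread */
        printf("hello from thread %ld\n", (long)arg);
        pthread_exit(NULL);                 /* quit the execution of this thread */
    }

    int main(void) {
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, start_routine, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);       /* wait for each peer to finish */
        return 0;
    }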

9 Thread synchronization As threads share the same memory address space, it is dangerous if shared resources are accessed simultaneously: if this happens, the behavior of the program is undefined. Thus, we need mechanisms to synchronize access to shared resources; commonly, this is achieved through locks. Another kind of synchronization happens when you want to define the order of instruction flow across different threads. As any two threads are independent, some synchronization must be used to implement this.

10 Mutual Exclusion Providing mutually exclusive access to shared resources. [Diagram: multiple threads contending for access to the same shared resource]

11 pthread Mutual Exclusion Mutex is an abbreviation for "mutual exclusion". Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. Only one thread can lock (or own) a mutex variable at any given time. pthread_mutex_init pthread_mutex_destroy pthread_mutex_lock pthread_mutex_unlock pthreadmutex.c
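A minimal sketch of the lock/unlock pattern (not the actual pthreadmutex.c from the slide): several workers increment a shared counter, and the mutex makes each increment atomic with respect to the other threads.

    #include <pthread.h>

    long counter = 0;
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER; /* static form of pthread_mutex_init */

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&m);     /* only one thread can own m at a time */
            counter++;                  /* the protected critical section */
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

Without the lock/unlock pair, this is exactly the race condition shown on a later slide.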

12 Defining execution order among different threads It is quite common that events are used as a mechanism for defining the execution order among portions of code located in multiple threads. An event means that the execution of some thread will not continue until something has happened; another thread makes that thing happen. [Diagram: thread1 waiting until thread2, which is still working, sends a notify]

13 pthread Condition Variables Condition variables allow threads to synchronize based upon the actual value of data. pthread_cond_init(condition, attr) pthread_cond_destroy(condition) pthread_cond_wait(condition, mutex) pthread_cond_signal(condition) pthread_cond_broadcast(condition) pthreadcondition.c
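A sketch of the usual wait/signal pattern (pthreadcondition.c itself is not shown on the slide; the names ready, wait_for_event, and raise_event are illustrative):

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    int ready = 0;

    void wait_for_event(void) {         /* thread 1: block until ready */
        pthread_mutex_lock(&m);
        while (!ready)                  /* loop guards against spurious wakeups */
            pthread_cond_wait(&c, &m);  /* atomically releases m and sleeps */
        pthread_mutex_unlock(&m);
    }

    void raise_event(void) {            /* thread 2: make it happen, then notify */
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
    }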

14 Race condition Two threads access a shared resource without synchronization. The behavior of a race condition is undefined and might produce undesirable results.

    int global_counter = 0;

    //thread 1
    for (int i = 0; i < 50; i++)
        global_counter += i;

    //thread 2
    for (int i = 50; i <= 100; i++)
        global_counter += i;

What will be the final value of global_counter after these two code blocks finish?

15 Deadlock

    //thread 1
    lock(A)
    lock(B)
    do_something()
    unlock(B)
    unlock(A)

    //thread 2
    lock(B)
    lock(A)
    do_someotherthings()
    unlock(A)
    unlock(B)
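The standard remedy, which the slide implies but does not show: make every thread acquire the locks in one agreed global order, so the circular wait cannot form. In the slide's pseudocode style:

    //both threads
    lock(A)        // always A before B
    lock(B)
    do_work()      // do_something() or do_someotherthings()
    unlock(B)
    unlock(A)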

16 Deadlock on the road

17 Livelock The threads only request locks but do nothing useful: each one grabs its first lock, fails to take the second, releases, and retries, possibly forever.

    //thread 1
    while (true) {
        Lock(L1)
        if (!Lock(L2))
            Release(L1)
        else
            break
    }
    do something useful here

    //thread 2
    while (true) {
        Lock(L2)
        if (!Lock(L1))
            Release(L2)
        else
            break
    }
    do something useful here

18 Message Passing Interface With a large number of computing nodes, it is very difficult to build a single shared memory space for all the processing units. Thus, processes exchange information by sending/receiving messages. MPI is the de facto standard for programming in a cluster environment for scientific computing.

19 MPI Programs Each process has its own stack and code segment. Processes exchange information by passing messages. MPI supports both SPMD and MPMD computing. [Figure: an SPMD program, the same binary running on every node]

20 MPI supports MPMD [Diagram of three MPMD patterns across Nodes 1-3: (a) Master/Worker, prog_a on Node 1 with prog_b on Nodes 2 and 3; (b) Coupled Analysis, prog_a, prog_b, and prog_c each on its own node; (c) Streamline, prog_a → prog_b → prog_c across the three nodes]

21 Create the MPI world

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank;
        int size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello world from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Hello world from process 0 of 4
Hello world from process 1 of 4
Hello world from process 2 of 4
Hello world from process 3 of 4
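Assuming a standard MPI toolchain, this program would typically be built and launched with something like mpicc hello.c -o hello followed by mpirun -np 4 ./hello (the file name hello.c is hypothetical); each of the four ranks prints its own line, in no guaranteed order.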

22 MPI Basics: Point-to-Point Communication int MPI_Send(buf, count, datatype, dest, tag, comm) int MPI_Recv(buf, count, datatype, source, tag, comm, status) What parameters make the communication happen? ◦the buffer of the sender or receiver ◦the quantity of data, count ◦the data type ◦the source and destination ◦the tag ◦the communicators and groups
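A sketch of how these parameters fit together, assuming at least two ranks and placed between MPI_Init and MPI_Finalize in the hello-world program above: rank 0 sends one integer to rank 1.

    int value = 42;
    MPI_Status status;
    if (rank == 0)
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);          /* dest=1, tag=0 */
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status); /* source=0, tag=0 */

The send and receive match because the communicator, tag, and (source, dest) pair line up; count and datatype describe the buffer on each side.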

23 Group Synchronization MPI_Barrier(comm) ◦Creates a barrier synchronization in a group. Each task, when reaching the MPI_Barrier call, blocks until all tasks in the group reach the same MPI_Barrier call.

24 Broadcast

25 Scatter and Gather Gather: many to one. Scatter: one to many. [Diagram: process vs. data axes, showing one buffer split across processes and collected back]

26 Allgather

27 Alltoall [Diagram: process vs. data axes; every process sends a distinct block of data to every other process]
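A sketch tying the collective slides together, again assumed to run between MPI_Init and MPI_Finalize with size processes and root rank 0 (the buffer size is an illustrative assumption):

    int x;                      /* Broadcast: root's x reaches every process */
    MPI_Bcast(&x, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int mine, all[64];          /* assume size <= 64 for this sketch */
    MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD); /* one to many */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* many to one */
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);  /* gather, result on all ranks */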

28 Network basics IP, TCP, DNS; sockets; protocols; programming structures

29 TCP/IP, DNS IP: The Internet Protocol (IP) is the principal communications protocol used for relaying datagrams (also known as network packets) across an internetwork using the Internet Protocol Suite; it is responsible for routing packets across network boundaries (routing: finding a specified destination on the Internet). TCP: TCP provides reliable, ordered delivery of a stream of octets from a program on one computer to a program on another computer. DNS: You cannot easily remember a numeric IP address, but you can easily remember a name like www.google.com. DNS is simply the system that translates the name of a machine into its IP address.

30 What makes two processes talk? The addresses of the two machines and the identification of the process on each machine. Source IP, Destination IP: the addresses of the two machines. Source Port, Destination Port: the identification of the two processes on the two machines. So, a connection is identified by the following four parameters: ◦source IP ◦destination IP ◦source port ◦destination port

31 Socket The socket is the fundamental programming abstraction for communication between two processes on two different machines. (It is fine to use sockets for communication on the same machine, although that is not the typical mechanism for inter-process communication between two processes on one machine.) Client and server use two different types of sockets: ◦The client creates a client-side socket and makes a connect call to the server socket. After the connection succeeds, the client can start sending data to the server. ◦The server creates a server-side socket and listens on it, waiting for clients. When a connection request packet is received, the server makes a new socket to accept the connection and starts communication; the original socket keeps waiting for other connections.

32 Ports As mentioned before, ports are used to identify a specific process within a machine (with an IP address). Using different source ports allows multiple clients to connect to a server at once.

33 Example: Web Server (1/3) The server creates a listener socket attached to a specific port. 80 is the agreed-upon port number for web traffic.

34 Example: Web Server (2/3) The client-side socket has to use a source port, but the OS chooses a random unused port number. When the client requests a URL (e.g., "www.google.com"), its OS uses the DNS system to find the server's IP address.

35 Example: Web Server (3/3) The listener is ready for more incoming connections, while the current connection can be processed in parallel.

36 Example: Web Server

37 The network packet Data is transferred over the Internet using packets. Packets wrap various pieces of information used for different purposes: for example, addresses are used for routing, and sequence numbers and sizes are used for stream control. Your data can be considered the payload of the packet, like a letter inside an envelope. You should know that there are some lower-level protocols for interoperating with physical devices, such as the MAC layer for Ethernet or wireless.

38 IP: the Internet Protocol IP mainly focuses on how to find a machine on the Internet; thus, IP defines the addressing scheme for machines. An IP packet encapsulates the upper-layer protocol information as well as the data provided by the application. The IP protocol does not provide reliability; it just includes enough information to tell the routers the destination of the data carried in the packet.

39 TCP: Transmission Control Protocol TCP is built on top of IP. TCP provides a virtual line between two ends; the data is stream oriented instead of message (packet) oriented. TCP provides reliability and ordering of messages. TCP is a very important building block for upper-layer protocols; for example, HTTP is built on top of TCP.

40 You and the web you want to access The Internet is not actually tube-like "underneath the hood". Unlike the phone system (circuit switched), the packet-switched Internet uses many routes at once.

41 It is difficult to handle network problems If you cannot receive a message from a specific machine, it is quite difficult, even impossible, to identify whether the node crashed or the network failed. If you send some data to a machine and a party to the socket disconnects, how can we identify how much data the other side received? Security problems: during the data transfer, can someone in the middle intercept/modify our data? Performance problems: traffic congestion makes switch/router topology important for efficient throughput.

42 Programming structures for processing network information
fork() based server data processing
multiple threads based
select() based
poll() based
See the bible: UNIX Network Programming.

43 Before you do the data transfer

    CLIENT
    fd = socket()
    setsockopt(fd)
    r = connect(fd, destination)
    read(fd)/write(fd)
    send(fd)/recv(fd)
    sendto(fd)/recvfrom(fd)
    close(fd)

    SERVER
    listenfd = socket()
    setsockopt(listenfd)
    bind(listenfd)
    listen(listenfd)
    acceptedfd = accept(listenfd)
    do_various_work_with_acceptedfd()  // see following slides
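A minimal runnable sketch of the server-side sequence (error checking mostly omitted; port 8080 and the echo behavior are arbitrary choices for illustration):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local interface */
        addr.sin_port = htons(8080);               /* arbitrary example port */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 16);

        for (;;) {                                 /* go back to accept */
            int acceptedfd = accept(listenfd, NULL, NULL);
            char buf[1024];
            ssize_t n = read(acceptedfd, buf, sizeof(buf));
            if (n > 0)
                write(acceptedfd, buf, n);         /* echo the data back */
            close(acceptedfd);
        }
    }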

44 fork based server data processing

    acceptedfd = accept(listenfd);
    pid = fork();
    assert(pid >= 0);
    if (pid == 0) {            // child process
        close(listenfd);
        do_some_thing_with_acceptedfd();
    } else {                   // parent process
        close(acceptedfd);
        go_back_to_accept();
    }

45 threads code

    acceptedfd = accept(listenfd);
    thread = get_free_thread_from_pool();
    set_thread_data(thread, acceptedfd);
    activate_thread(thread);
    go_back_to_accept();

    // in the worker thread
    do_something(acceptedfd);
    close(acceptedfd);

46 select code for multiple sockets Why? You want to use the power of a single thread to process multiple sockets, and you want to stay in the same thread so you can keep state more conveniently (you don't want to do synchronization among threads).

    FD_ZERO, FD_SET, fd_set (readfds, writefds), maxfd  // set the fds you want to monitor
    switch (select(...)) {
    case -1: something is wrong; break;
    case 0:  rarely happens (timeout); do the select again; break;
    default:
        for each fd you want to monitor
            if (FD_ISSET(fd, &fdset))
                do data transfer with the fd
    }

A fleshed-out sketch follows.
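A self-contained sketch of one round of that structure (the clients array and the echo behavior are assumptions for illustration):

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* One iteration over a listening socket plus connected client fds
       (nclients slots, -1 marks an empty slot). */
    void serve_once(int listenfd, int *clients, int nclients) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listenfd, &readfds);
        int maxfd = listenfd;
        for (int i = 0; i < nclients; i++)
            if (clients[i] >= 0) {
                FD_SET(clients[i], &readfds);
                if (clients[i] > maxfd) maxfd = clients[i];
            }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
            return;                              /* -1: check errno and retry */

        if (FD_ISSET(listenfd, &readfds)) {      /* new connection arrived */
            int fd = accept(listenfd, NULL, NULL);
            for (int i = 0; i < nclients; i++)
                if (clients[i] < 0) { clients[i] = fd; break; }
        }
        for (int i = 0; i < nclients; i++)       /* data transfer on ready fds */
            if (clients[i] >= 0 && FD_ISSET(clients[i], &readfds)) {
                char buf[512];
                ssize_t n = read(clients[i], buf, sizeof(buf));
                if (n <= 0) { close(clients[i]); clients[i] = -1; }
                else write(clients[i], buf, n);  /* echo back */
            }
    }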

47 poll code for multiple sockets Another group proposed the poll function, which uses a similar but different programming interface; the programming structure can be the same as with select. int poll(struct pollfd *ufds, unsigned int nfds, int timeout); POLLIN, POLLOUT, POLLPRI

48 If you are using the poll or select version, how can you notify the working thread? Use a FIFO (pipe) and put one fd of the pair in the poll or select list; otherwise, you can use eventfd(), which uses only one fd instead of two.
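A sketch of the eventfd() variant (Linux-specific; the function names notify and drain are illustrative):

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int efd;                            /* created once with: efd = eventfd(0, 0); */

    void notify(void) {                 /* notifier thread: wake the poller */
        uint64_t one = 1;
        write(efd, &one, sizeof(one));  /* adds 1 to the eventfd counter */
    }

    void drain(void) {                  /* poller, once efd is reported readable */
        uint64_t val;
        read(efd, &val, sizeof(val));   /* returns and resets the counter */
    }

The poller simply puts efd in its select/poll fd list alongside the sockets.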

49 RPC What is remote procedure call? Why RPC? The types of RPC How can we implement RPC (RPC internals)

50 RPC A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. In object-oriented terms, RPC is called remote invocation or remote method invocation.

51 Request/Response over the Internet Regular client-server protocols involve sending data back and forth according to a shared state.

    Client: GET index.html HTTP/1.0
    Server: 200 OK Length: 2400 (file data)
    Client: GET hello.gif HTTP/1.0
    Server: 200 OK Length: …

This is the straightforward way to use the network facilities.

52 Call a function in another process on another machine RPC servers will call arbitrary functions in their address space, with arguments passed over the network, and send return values back over the network.

    Client: foo.dll, bar(4, 10, "hello")
    Server: "returned_string"
    Client: foo.dll, baz(42)
    Server: err: no such function
    …

53 Possible modes of RPC Synchronous RPC: the client calls an RPC function, and then waits until the return value is sent back from the server. Asynchronous RPC: the client calls an RPC function, and then can continue with other work; after a while, the client can check a handle to find out whether the return value of the RPC call is ready. Callback-supported RPC: the client calls an RPC function and then can continue with other work; when the execution of the function finishes on the server, the server notifies the client to call a registered callback function. Similar concepts exist in many areas of computer science, including networking (different types of sockets) and operating systems (think about the system calls provided).

54 Synchronous RPC

55 Asynchronous RPC

56 Callbacks

57 So, how can we implement RPC? From the client side: 1. wrap the arguments 2. wrap the function id 3. wrap the server:port So, translate the function call bar(arg0, arg1) into some underlying mechanism like rpc_call(foo.dll, bar, arg0, arg1). Programmers want bar(arg0, arg1), but the RPC designer has to implement rpc_call, as sketched below.
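A hedged sketch of what the client-side translation could look like; everything here (rpc_call, FUNC_BAR, the wire layout) is an illustrative assumption, not a real RPC library:

    #include <stdint.h>
    #include <string.h>

    enum { FUNC_BAR = 1 };      /* function id agreed with the server (step 2) */

    /* Sends func_id plus the marshaled args to the server bound to sockfd
       (step 3) and blocks for the reply; its body is the transport layer,
       omitted in this sketch. */
    int rpc_call(int sockfd, uint32_t func_id, const void *args, size_t len);

    /* The stub programmers actually call as if it were local. */
    int bar(int sockfd, int32_t arg0, int32_t arg1) {
        unsigned char buf[8];
        memcpy(buf, &arg0, 4);                   /* step 1: wrap the arguments */
        memcpy(buf + 4, &arg1, 4);
        return rpc_call(sockfd, FUNC_BAR, buf, sizeof(buf));
    }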

58 Design Considerations Protocol choices: UDP? TCP? Fault tolerance: what if the network is broken? The call might be sent 0, 1, 2, … times; a client sent just one call, but the server might receive multiple invocations. What should the server do? (at-most-once semantics vs. multiple-invocation semantics.) Security: can anyone call RPC functions? Since calls come over the network, a malicious user might send a lot of invocations. Compatibility: how do you handle multiple versions of a function? Error conditions: the function call itself might return an error, and the RPC framework might raise errors; so, how do we handle the various error conditions? Object-oriented support: we need to marshal/unmarshal objects. There are a lot of RPC protocols: DCOM, CORBA, Java RMI, …

59 Go to Lab1 Continue the code guide for lab1 and help students understand various aspects of the source code in yfs. RPC FUSE

60 Thank you! Any Questions?

