2002 Networking Operating Systems (CO32010)


3. Distributed processing

Networking Operating Systems (CO32010): 1. Operating Systems; 2. Processes and scheduling; 3. Distributed processing; 4. Distributed file systems; 5. Routing protocols; 6. Routers; 7. Encryption; 8. NT, UNIX and NetWare.

Contents: 3.1 Introduction; 3.2 Interprocess communication; 3.3 Flags and semaphores; 3.4 RPC; 3.5 Multi-processor systems; 3.6 Exercises.

Objectives:
- To define the concept of distributed processing, and contrast centralised systems with distributed ones.
- To define mechanisms of interprocess control, such as pipes, semaphores, flags and message queues.
- To define, in detail, how semaphores are used, and how they can prevent deadlock.
- To define the conditions for deadlock.
- To outline algorithms to prevent deadlock, such as the Banker's Algorithm.
- To outline practical interprocess control protocols, especially RPC.

Centralised v. distributed

[Diagram: a centralised organisation (Head Office, Regional Office, Local Office, ATM, Customers, Staff, Logistics), in which decisions flow through the head office, contrasted with a distributed one, where decision making, account management and logistics are devolved to the local sites.]

3.2 Client/server architecture

Over the network, the client requests a remote process and passes the process parameters; the server runs the process and returns the results to the client.

3.3 IPC methods

1. Socket. Process A and Process B communicate over a connection, either over a network or locally.
2. Semaphores. A process gets access to the resource and decrements the semaphore (wait); another process sleeps until the resource is ready, waking when the semaphore is signalled.
3. Shared memory. Process A and Process B exchange data through a common area of memory.
4. Pipe. The output of Process A feeds the input of Process B (Process A | Process B).
5. Message queue. Process A posts messages to a queue, from which Process B reads them.

3.4 Semaphore usage in a program

Process A:                              Process B:
    wait ();                                wait ();
    code that must be mutually exclusive    code that must be mutually exclusive
    signal ();                              signal ();

Wait decrements the semaphore; signal increments it. Process B will go to sleep while the semaphore has a zero value, and will wake up when the semaphore value becomes non-zero.

3.5 Consumer-producer example

#define MAX_BUFF 100                    /* maximum items in buffer            */
int buffer_count=0;                     /* current number of items in buffer  */

void producer_buffer(void)
{
  while (TRUE) {                        /* infinite loop                      */
    put_item();                         /* produce item                       */
    if (buffer_count==MAX_BUFF)
      sleep();                          /* sleep, if buffer full              */
    enter_item();                       /* add item to buffer                 */
    buffer_count = buffer_count + 1;    /* increment items in the buffer      */
    if (buffer_count==1)
      wakeup(consumer_buffer);          /* was buffer empty?                  */
  }
}

void consumer_buffer(void)
{
  while (TRUE) {                        /* infinite loop                      */
    if (buffer_count==0)
      sleep();                          /* sleep, if buffer empty             */
    get_item();                         /* get item                           */
    buffer_count = buffer_count - 1;    /* decrement items in the buffer      */
    if (buffer_count==MAX_BUFF-1)
      wakeup(producer_buffer);          /* buffer no longer full: wake producer */
    consume_item();                     /* consume item                       */
  }
}

Deadlock

Resource locking. This is where a process is waiting for a resource which will never become available. Some resources are pre-emptive: processes can release their access to them and give other processes a chance to use them. Others are non-pre-emptive: a process is given full rights to them, and no other process can access them until the currently assigned process has finished. An example is the transmission and reception of data on a communications system: a process that has sent data and is waiting for the reply should not yield the channel to another process which also wants to send and receive data.

Starvation. This is where other processes are run, and the deadlocked process is not given enough time to catch the required event. This can occur when a process has a low priority compared with others, as higher-priority tasks tend to have a better chance of accessing the required resources.

Analogy to deadlock

[Diagram: six cars, A to F, gridlocked at a road junction, each blocking the next.]

Four conditions for deadlock

- Mutual exclusion condition. Processes get exclusive control of required resources, and will not yield a resource to any other process.
- Wait-for condition. Processes keep exclusive control of acquired resources while waiting for additional resources.
- No pre-emption condition. Resources cannot be removed from the processes which have gained them until those processes have completed their access.
- Circular wait condition. A circular chain of processes exists in which each process holds one or more resources that are requested by the next process in the chain.

Analogy to deadlock

[Diagram: cars A to F gridlocked at a junction.] The gridlock illustrates the circular wait condition, together with mutual exclusion and no pre-emption: none of the cars will give up its exclusive access to the junction.

Banker's Algorithm (safe condition)

Process A requires a maximum of 50 MB, Process B a maximum of 40 MB, Process C a maximum of 60 MB and Process D a maximum of 40 MB.

  Process                 Current allocation    Maximum allocation required
  A                       40                    50
  B                       20                    40
  C                       20                    60
  D                       10                    40
  Resource unallocated    10

The current state is safe, as Process A can complete (its remaining need of 10 MB fits within the 10 MB unallocated), which releases 50 MB and allows the other processes to complete.

Banker's Algorithm (unsafe condition)

Process A requires a maximum of 50 MB, Process B a maximum of 40 MB, Process C a maximum of 60 MB and Process D a maximum of 40 MB.

  Process                 Current allocation    Maximum allocation required
  A                       15                    50
  B                       30                    40
  C                       45                    60
  D                       0                     40
  Resource unallocated    5

The current state is unsafe, as no process can complete: every process still needs more than the 5 MB unallocated.

Banker's Algorithm

Each process has exclusive access to the resources that have been granted to it. An allocation is only granted if enough allocation is left for at least one process to complete and release its allocated resources. A process whose request is rejected must wait until some resources have been released; every grant must keep the allocation in the safe region.

Problems:
- Requires processes to define their maximum resource requirement.
- Requires the system to define the maximum amount of a resource.
- Requires a maximum number of processes.
- Requires that processes return their resources in a finite time.
- Processes must wait for allocations to become available.
- A slow process may stop many other processes from running, as it hogs the allocation.

RPC

3.13 RPC operation

1. The caller process sends a call message to the server, with all the procedure's parameters, then waits for a response.
2. The server process waits for a call; on receiving one, it reads the parameters and runs the procedure.
3. The server sends the results back to the client, then returns to waiting for the next call.

RPC

RPC provides:
- A unique specification of the called procedure.
- A mechanism for matching response parameters with request messages.
- Authentication of both callers and servers. The call message has two authentication fields (the credentials and the verifier); the reply message has one (the response verifier).
- Protocol errors/messages (such as incorrect versions, errors in procedure parameters, indication of why a process failed, and reasons for incorrect authentication).

RPC

RPC provides three fields which define the called procedure:
- Remote program number. These are numbers which are defined by a central authority (such as Sun Microsystems).
- Remote program version number. This defines the version number, and allows for migration of the protocol, where older versions are still supported. Different versions can support different message calls, and the server must be able to cope with this.
- Remote procedure number. This identifies the called procedure, and is defined in the specification of the specific program's protocol. For example, a file service may define that an 8 identifies a read operation and a 10 a write operation.

RPC

RPC call message format:
- Message type. This is either CALL (0) or REPLY (1).
- Message status. There are two different message status fields, depending on whether it is a CALL or a REPLY.
- Rpcvers. RPC version number (unsigned integer).
- Prog, vers and proc. Specify the remote program, its version number and the procedure within the remote program (all unsigned integers).
- Cred. Authentication credentials.
- Verf. Authentication verifier.
- Procedure-specific parameters.

RPC authentications

- No authentication (AUTH_NULL). No authentication is made when callers do not know who they are, or when the server does not care who the caller is. This method would be used on a system that has no external network connections, and assumes that all callers are valid.
- Unix authentication (AUTH_UNIX). This uses the Unix authentication system, which generates a data structure with a stamp (an arbitrary ID which the caller machine may generate), machine name (such as 'Apollo'), UID (the caller's effective user ID), GID (the caller's effective group ID) and GIDS (an array of groups which contain the caller as a member).
- Short authentication (AUTH_SHORT).
- DES authentication (AUTH_DES). Unix authentication suffers from two problems: the naming is too Unix-oriented, and there is no verifier (so credentials can easily be faked). DES overcomes this by addressing the caller by its network name (such as unix.111@mycomputer.net) instead of by an operating-system-specific integer. These network names are unique on the Internet; for example, unix.111@mycomputer.net identifies user ID number 111 on the mycomputer.net system.

RPC programming

RPC programming levels:
- Highest layer. At this level the calls are totally transparent to the operating system, the computer type and the network. The programmer simply calls the required library routine, and does not have to worry about any of the underlying computer type, operating system or networking. For example, the rnusers routine returns the number of users on a remote computer (as given in Program 3.2).
- Middle layer. At this level the programmer does not have to worry about the network connection (such as the TCP sockets), the Unix system, or other low-level implementation mechanisms; the program just makes a remote procedure call to routines on other computers. This is the most common implementation, as it gives an increased amount of control over the RPC call. These calls are made with: registerrpc (which obtains a unique system-wide procedure identification number); callrpc (which executes a remote procedure call); and svc_run. In some more complex applications, the middle layer does not allow for timeout specifications, choice of transport, Unix process control, or flexibility in case of errors. If these are required, the lowest layer is used.
- Lowest layer. At this level there is full control over the RPC call, which can be used to create robust and efficient connections.

RPC highest level programming

#include <stdio.h>
#include <stdlib.h>
#include <rpc/rpc.h>
#include <rpcsvc/rusers.h>              /* declares rnusers() */

int main(int argc, char *argv[])
{
  int users;

  if (argc != 2) {
    fprintf(stderr, "Use: rnusers hostname\n");
    return(1);
  }
  if ((users = rnusers(argv[1])) < 0) {
    fprintf(stderr, "Error: rnusers\n");
    exit(-1);
  }
  printf("There are %d users on %s\n", users, argv[1]);
  return(0);
}

RPC middle level programming

#include <stdio.h>
#include <stdlib.h>
#include <rpc/rpc.h>

#define RUSERSPROG    100002            /* program number (see /etc/rpc) */
#define RUSERSVERSION 2                 /* version number                */
#define RUSERSPROCVAL 1                 /* procedure number              */

int main(int argc, char *argv[])
{
  unsigned long users;
  int rtn;

  if (argc != 2) {
    fprintf(stderr, "Use: nusers hostname\n");
    exit(-1);
  }
  if ((rtn = callrpc(argv[1], RUSERSPROG, RUSERSVERSION, RUSERSPROCVAL,
                     xdr_void, 0, xdr_u_long, (char *) &users)) != 0) {
    clnt_perrno(rtn);                   /* report the RPC failure */
    return(1);
  }
  printf("There are %lu users on %s\n", users, argv[1]);
  return(0);
}

RPC lowest level programming

#include <stdio.h>
#include <rpc/rpc.h>

#define RUSERSPROG    100002            /* program number (see /etc/rpc) */
#define RUSERSVERSION 2                 /* version number                */
#define RUSERSPROCVAL 1                 /* procedure number              */

char *nuser();                          /* service routine for the call  */

int main(void)
{
  registerrpc(RUSERSPROG, RUSERSVERSION, RUSERSPROCVAL,
              nuser, xdr_void, xdr_u_long);
  svc_run();                            /* never returns */
  fprintf(stderr, "Error: server terminated\n");
  return(1);
}

RPC lowest level programming

Sample contents of the /etc/rpc file, showing the RPC service name, program number and aliases:

  portmapper   100000   portmap sunrpc
  rstatd       100001   rstat rstat_svc rup perfmeter
  rusersd      100002   rusers
  nfs          100003   nfsprog
  ypserv       100004   ypprog