1 What we will cover…
- Processes
  - Process Concept
  - Process Scheduling
  - Operations on Processes
  - Interprocess Communication
  - Communication in Client-Server Systems (Reading Materials)
- Threads
  - Overview
  - Multithreading Models
  - Threading Issues

2 What is a Process?
- An operating system executes a variety of programs:
  - Batch systems – jobs
  - Time-shared systems – user programs or tasks
  - Single-user Microsoft Windows or Macintosh OS – the user runs many programs: word processor, web browser, email
- Informally, a process is simply one such program in execution, making progress in a sequential fashion
- The program itself is like any high-level language code (C/C++/Java, etc.) written by users
- Formally, however, a process is more than just the program code (the text section)!

3 Process in Memory
- In addition to the text section, a process includes:
  - the program counter
  - the contents of the processor’s registers
  - the stack – contains temporary data such as method parameters, return addresses, and local variables
  - the data section
- While a program is a passive entity, a process is an active entity
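As a rough, assumed illustration (not taken from the slides), the following C fragment notes which region of a process's memory each kind of object typically lives in:

    #include <stdio.h>
    #include <stdlib.h>

    int counter = 1;                 /* data section: initialized global */
    static int limit;                /* data section (BSS): uninitialized global */

    int square(int x) {              /* the function's code lives in the text section */
        int result = x * x;          /* stack: local variable in this stack frame */
        return result;               /* the return address was pushed on the stack */
    }

    int main(void) {
        int local = square(7);       /* stack: local variable of main */
        int *dynamic = malloc(sizeof *dynamic);  /* heap (not listed on this slide) */
        *dynamic = local + counter + limit;
        printf("%d\n", *dynamic);
        free(dynamic);
        return 0;
    }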

4 Process State
- As a process executes, it goes from creation to termination, passing through various “states”:
  - new: the process is being created
  - running: instructions are being executed
  - waiting: the process is waiting for some event to occur
  - ready: the process is waiting to be assigned to a processor
  - terminated: the process has finished execution

5 Diagram of Process State (figure)

6 Process Control Block (PCB)
- A process carries a large amount of information, and a system has many processes
- How does the OS manage all of this information?
- Each process is represented by a Process Control Block – a record of information about that process:
  - Process state
  - Program counter
  - CPU registers
  - CPU-scheduling information
  - Memory-management information
  - I/O status information
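A hedged sketch of what a PCB might look like as a C structure; the field names and sizes are illustrative only (a real kernel's PCB, such as Linux's struct task_struct, holds far more than this):

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* process identifier */
        enum proc_state state;           /* current process state */
        uint64_t        program_counter; /* address of the next instruction */
        uint64_t        registers[16];   /* saved general-purpose registers */
        int             priority;        /* CPU-scheduling information */
        void           *page_table;      /* memory-management information */
        int             open_files[16];  /* I/O status: open file descriptors */
        struct pcb     *next;            /* link for queueing (ready/device queues) */
    };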

7 Process Control Block (PCB) (figure)

8 CPU Switch From Process to Process (figure)

9 Process Scheduling
- In a multiprogramming environment there are many processes:
  - many of them ready to run
  - many of them waiting for some event to occur
- How does the OS organize them? Queueing:
  - Job queue – the set of all processes in the system
  - Ready queue – the set of all processes residing in main memory, ready and waiting to execute
  - Device queues – the sets of processes waiting for an I/O device
- Processes migrate among these various queues

10 A Representation of Process Scheduling (figure)

11 OS Queue Structure (implemented with a linked list) (figure)
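A minimal sketch, under the assumption of the linked-list implementation named above, of how a ready queue of PCBs could be maintained; the struct and function names are made up for illustration:

    #include <stddef.h>

    struct pcb {
        int         pid;
        struct pcb *next;   /* link to the next PCB in the same queue */
    };

    struct queue {
        struct pcb *head;   /* next process to dispatch */
        struct pcb *tail;   /* where newly ready processes are appended */
    };

    /* Append a PCB to the tail of a queue (e.g., when a process becomes ready). */
    static void enqueue(struct queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
    }

    /* Remove and return the PCB at the head (e.g., when the CPU scheduler
     * dispatches the next process); returns NULL if the queue is empty. */
    static struct pcb *dequeue(struct queue *q) {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (!q->head)
                q->tail = NULL;
        }
        return p;
    }

Device queues and the job queue could reuse the same structure, with processes migrating between queues by being dequeued from one and enqueued on another.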

12 Schedulers
- A process migrates among the various queues
- There are often more processes than can be executed immediately
  - They are stored on mass-storage devices (typically disk)
  - They must be brought into main memory for execution
- The OS selects processes in some fashion; this selection is carried out by a scheduler
- Two schedulers are in effect:
  - Long-term scheduler (or job scheduler) – selects which processes should be brought into memory
  - Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU

13 Schedulers (Cont.)
- The short-term scheduler is invoked very frequently (milliseconds), so it must be fast
- The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow
- The long-term scheduler controls the degree of multiprogramming
- The long-term scheduler has another big responsibility: processes can be described as either
  - I/O-bound – spends more time doing I/O than computation; many short CPU bursts
  - CPU-bound – spends more time doing computation; few very long CPU bursts
- The long-term scheduler should maintain a balance between the two types of processes

14 Addition of Medium-Term Scheduling (figure)

15 Context Switch
- All of the process scheduling described above comes with a trade-off
- When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process; this is a context switch
- Context-switch time depends on hardware support
- Context-switch time is pure overhead; the system does no useful work while switching

16 Interprocess Communication
- Concurrent processes within a system may be independent or cooperating
- A cooperating process can affect or be affected by other processes, including by sharing data
- Reasons for cooperating processes:
  - Information sharing – several users may be interested in the same shared file
  - Computation speedup – break a task into subtasks that run in parallel
  - Convenience
- Cooperating processes need InterProcess Communication (IPC)
- Two models of IPC:
  - Shared memory
  - Message passing

17 Communications Models: message passing vs. shared memory (figure)

18 Shared Memory: the Producer-Consumer Problem
- A common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process
- IPC is implemented through a shared buffer
  - unbounded buffer – places no practical limit on the size of the buffer
  - bounded buffer – assumes a fixed buffer size (more practical)
- Let’s design it!

19 Bounded Buffer – Shared-Memory Solution Design
- Three steps in the design problem:
  1. Design the buffer
  2. Design the producer process
  3. Design the consumer process
- 1. Shared buffer (implemented as a circular array with two logical pointers, in and out):

    #define BUFFER_SIZE 10

    typedef struct {
        ...                /* fields of an item (left unspecified on the slide) */
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;            /* next free position */
    int out = 0;           /* first full position */

20 Bounded Buffer – Producer & Consumer Process Design
- 2. Producer design:

    item nextProduced;

    while (true) {
        /* produce an item in nextProduced */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;   /* do nothing -- no free buffers */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
    }

- 3. Consumer design:

    item nextConsumed;

    while (true) {
        while (in == out)
            ;   /* do nothing -- nothing to consume */
        /* remove an item from the buffer */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
    }
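To see the two fragments working together, here is a hedged, illustrative sketch (not part of the slides) that places the buffer in a shared anonymous mapping and lets a forked child act as the producer while the parent consumes. The volatile qualifiers and busy-waiting stand in for the proper synchronization covered later:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define BUFFER_SIZE 10

    /* Circular buffer plus the two logical pointers, in one struct so a single
     * shared mapping covers everything both processes touch. */
    struct shared {
        volatile int buffer[BUFFER_SIZE];
        volatile int in;    /* next free slot (written by the producer) */
        volatile int out;   /* next full slot (written by the consumer) */
    };

    int main(void) {
        /* Anonymous shared mapping: visible to both parent and child after fork. */
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (s == MAP_FAILED)
            return 1;
        s->in = s->out = 0;

        if (fork() == 0) {                       /* child: producer */
            for (int i = 1; i <= 20; i++) {
                while (((s->in + 1) % BUFFER_SIZE) == s->out)
                    ;                            /* busy-wait: buffer full */
                s->buffer[s->in] = i;            /* "produce" the integer i */
                s->in = (s->in + 1) % BUFFER_SIZE;
            }
            _exit(0);
        }

        for (int i = 1; i <= 20; i++) {          /* parent: consumer */
            while (s->in == s->out)
                ;                                /* busy-wait: buffer empty */
            printf("consumed %d\n", s->buffer[s->out]);
            s->out = (s->out + 1) % BUFFER_SIZE;
        }
        wait(NULL);
        return 0;
    }

Running it should print the twenty consumed values in order; as the next slide notes, the scheme always leaves one buffer slot unused.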

21 Shared-Memory Design
- The previous design is correct, but it can only use BUFFER_SIZE-1 elements!
- Exercise for you: design a solution where BUFFER_SIZE items can be in the buffer at the same time
  - Part of Assignment 1

22 Interprocess Communication – Message Passing
- Processes communicate with each other without resorting to shared memory
- The IPC facility provides two operations:
  - send(message) – the message size can be fixed or variable
  - receive(message)
- If P and Q wish to communicate, they need to:
  - establish a communication link between them
  - exchange messages via send/receive
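As one concrete (and assumed, not slide-prescribed) realization of send/receive between two related processes, a POSIX pipe can serve as the communication link, with write() playing the role of send(message) and read() the role of receive(message):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                       /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {               /* child acts as the sender, P */
            close(fd[0]);                /* the sender does not read */
            const char *msg = "hello from P";
            write(fd[1], msg, strlen(msg) + 1);   /* "send(message)" */
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                    /* parent, Q, does not write */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf); /* "receive(message)" */
        if (n > 0)
            printf("Q received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }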

23 Direct Communication
- Processes must name each other explicitly:
  - send(P, message) – send a message to process P
  - receive(Q, message) – receive a message from process Q
- Properties of the communication link:
  - A link is associated with exactly one pair of communicating processes
  - Between each pair there exists exactly one link
  - Symmetric – both sender and receiver must name the other to communicate
  - Asymmetric – the receiver is not required to name the sender

24 Indirect Communication
- Messages are directed to and received from mailboxes (also referred to as ports)
  - Each mailbox has a unique id
  - Processes can communicate only if they share a mailbox
- Properties of the communication link:
  - A link is established only if the processes share a common mailbox
  - A link may be associated with many processes
  - Each pair of processes may share several communication links
  - A link may be unidirectional or bidirectional
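POSIX message queues behave much like the mailboxes described here. In the hedged sketch below, the queue name /mailbox-demo and the message text are invented for illustration, and on older Linux systems the program may need to be linked with -lrt:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAILBOX "/mailbox-demo"   /* hypothetical mailbox name */

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        /* Both processes open the same named mailbox; sharing the mailbox is
         * what lets them communicate indirectly, as the slide describes. */
        mqd_t mq = mq_open(MAILBOX, O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1)
            return 1;

        if (fork() == 0) {                        /* sender */
            const char *msg = "message via mailbox";
            mq_send(mq, msg, strlen(msg) + 1, 0); /* priority 0 */
            _exit(0);
        }

        char buf[64];
        if (mq_receive(mq, buf, sizeof buf, NULL) > 0)  /* blocks until a message arrives */
            printf("received: %s\n", buf);

        wait(NULL);
        mq_close(mq);
        mq_unlink(MAILBOX);                       /* remove the mailbox name */
        return 0;
    }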

25 Communications in Client-Server Systems
- Socket connection

26 Sockets
- A socket is defined as an endpoint for communication
- It is identified by the concatenation of an IP address and a port number
  - The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
- Communication takes place between a pair of sockets
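A minimal sketch of one end of such a pair: a TCP client that creates a socket and tries to connect to the slide's example address 161.25.19.8:1625 (an illustrative address, so the connect will normally just fail and report an error). Assumes a POSIX sockets API:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Create the local endpoint: an IPv4, stream (TCP) socket. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1)
            return 1;

        /* Name the remote endpoint: host 161.25.19.8, port 1625. */
        struct sockaddr_in server;
        memset(&server, 0, sizeof server);
        server.sin_family = AF_INET;
        server.sin_port = htons(1625);
        inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

        /* Connecting pairs this socket with the server's socket. */
        if (connect(fd, (struct sockaddr *)&server, sizeof server) == -1) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char *req = "hello\n";
        write(fd, req, strlen(req));                /* send over the connection */

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf - 1);  /* receive the reply */
        if (n > 0) {
            buf[n] = '\0';
            printf("server said: %s", buf);
        }
        close(fd);
        return 0;
    }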

27 Socket Communication (figure)

28 Threads
- The process model discussed so far assumed that a process was a sequentially executed program with a single thread of control
- The increased scale of computing puts pressure on programmers; challenges include:
  - Dividing activities
  - Balance
  - Data splitting
  - Data dependency
  - Testing and debugging
- Think of a busy web server!

29 Single and Multithreaded Processes (figure)

30 Benefits
- Responsiveness
- Resource sharing
- Economy
- Scalability

31 Multithreaded Server Architecture (figure)

32 Concurrent Execution on a Single-core System (figure)

33 Parallel Execution on a Multicore System (figure)

34 User and Kernel Threads
- User threads – thread management is done by a user-level threads library
- Kernel threads – supported directly by the kernel, for example in:
  - Windows XP
  - Solaris
  - Linux
  - Mac OS X
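For illustration (assuming a POSIX system, where each pthread is typically backed by a kernel thread on Linux), creating and joining a thread looks like this; compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    /* Function executed by the new thread; the argument is an arbitrary pointer. */
    static void *worker(void *arg) {
        const char *name = arg;
        printf("hello from thread %s\n", name);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        /* Ask the threads library to create a new thread of control that
         * shares this process's address space. */
        if (pthread_create(&tid, NULL, worker, "A") != 0)
            return 1;
        printf("hello from the main thread\n");
        pthread_join(tid, NULL);    /* wait for the worker to finish */
        return 0;
    }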

35 Multithreading Models
- Many-to-One
- One-to-One
- Many-to-Many

36 Many-to-One
- Many user-level threads are mapped to a single kernel thread
- Examples:
  - Solaris Green Threads
  - GNU Portable Threads

37 One-to-One
- Each user-level thread maps to a kernel thread
- Examples:
  - Windows NT/XP/2000
  - Linux

38 Many-to-Many Model
- Allows many user-level threads to be mapped to many kernel threads
- Allows the operating system to create a sufficient number of kernel threads
- Example: Solaris prior to version 9

39 Many-to-Many Model (figure)

40 Threading Issues
- Thread cancellation of a target thread
- Dynamic, unbounded use of threads

41 Thread Cancellation
- Terminating a thread before it has finished
- General approaches:
  - Asynchronous cancellation – terminates the target thread immediately (problems?)
  - Deferred cancellation – allows the target thread to periodically check whether it should be cancelled
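With pthreads (an assumed API choice, not something the slide prescribes), deferred cancellation is the default: a cancel request takes effect only at cancellation points, or wherever the target explicitly calls pthread_testcancel():

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Target thread: loops forever, but checks for a pending cancellation
     * request at each pthread_testcancel() call (deferred cancellation). */
    static void *target(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do a small unit of work ... */
            pthread_testcancel();   /* safe point to honor a cancel request */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, target, NULL) != 0)
            return 1;

        sleep(1);                   /* let the target run for a moment */
        pthread_cancel(tid);        /* request (deferred) cancellation */
        pthread_join(tid, NULL);    /* reap the cancelled thread */
        puts("target thread cancelled");
        return 0;
    }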

42 Dynamic Use of Threads
- Create a thread as and when it is needed
- Disadvantages:
  - The time it takes to create the thread
  - The thread is discarded once it has completed its work; no reuse
  - With no bound on the total number of threads created, the system may run into severe resource scarcity

43 Solution: Thread Pools
- Create a number of threads in a pool, where they await work
- Advantages:
  - It is usually faster to service a request with an existing thread than to create a new thread
  - The number of threads in the application(s) is bounded by the size of the pool
- Almost all modern OSes provide kernel support for threads: Windows XP, Mac OS X, Linux, …
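A minimal sketch of the thread-pool idea using pthreads; the names pool_submit and NUM_WORKERS and the fixed-size queue are invented for illustration, and a production pool would also handle shutdown, full queues, and error checking:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_WORKERS 4
    #define QUEUE_SIZE  16

    /* A very small fixed-size task queue protected by a mutex/condvar pair. */
    static int tasks[QUEUE_SIZE];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    /* Hand a "request" (here just an int) to the pool. */
    static void pool_submit(int task) {
        pthread_mutex_lock(&lock);
        if (count < QUEUE_SIZE) {               /* drop the task if the queue is full */
            tasks[tail] = task;
            tail = (tail + 1) % QUEUE_SIZE;
            count++;
            pthread_cond_signal(&not_empty);    /* wake one waiting worker */
        }
        pthread_mutex_unlock(&lock);
    }

    /* Each worker sleeps until a task arrives, services it, then waits again:
     * threads are reused instead of being created for every request. */
    static void *worker(void *arg) {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            int task = tasks[head];
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_mutex_unlock(&lock);
            printf("worker %ld servicing request %d\n", id, task);
        }
        return NULL;
    }

    int main(void) {
        pthread_t workers[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)   /* create the pool once */
            pthread_create(&workers[i], NULL, worker, (void *)i);

        for (int r = 1; r <= 10; r++)            /* ten incoming "requests" */
            pool_submit(r);

        sleep(1);        /* crude: give the workers time to drain the queue */
        return 0;        /* process exit terminates the worker threads */
    }

Note that the task queue here is just another bounded buffer, the same pattern as the producer-consumer design earlier in the lecture.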

