1 Midterm Preparation Summary DEEDS Group
Operating Systems, Midterm Preparation Summary, DEEDS Group. Important Note: These slides cover only a selected set of topics for midterm review. The exam covers all slides/topics from the class lectures, exercises, and labs.

2 Processes
Process: a program in execution.
A process includes: a PC, registers, a stack (function parameters, return addresses, local variables), and may include a heap (memory that is dynamically allocated during process runtime).
Program: passive entity, a file containing a list of instructions.
Process: active entity, with a PC specifying the next instruction to execute and a set of associated resources.
A program becomes a process when an executable file is loaded into memory.

3 Process States
As a process executes, it changes state; the state of a process is defined in part by the current activity of that process. Each process may be in one of these states:
new: the process is being created
running: instructions are being executed
waiting: the process is waiting for an event (I/O completion or reception of a signal)
ready: the process is waiting to be assigned to a processor
terminated: the process has finished execution
Only one process can be running on any processor at any instant.

4 Process Control Block (PCB)
Each process is represented in the OS by a process control block (PCB). It contains:
Process state => new, ready, running, waiting, halted, etc.
Program counter => the address of the next instruction to be executed
CPU registers => accumulators, index registers, stack pointers, general-purpose registers, plus any condition-code information
CPU scheduling information => process priority, pointers to scheduling queues, and any other scheduling parameters
Memory management information => value of the base registers, page tables, or segment tables, depending on the memory system used by the OS
Accounting information => amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on
I/O status information => list of I/O devices allocated to the process
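As a rough illustration only, a PCB can be pictured as a C struct. The field names and types below are hypothetical, not taken from any particular kernel (Linux's real task_struct, for instance, is far more elaborate):

```c
/* A simplified, hypothetical PCB layout for illustration only. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier            */
    enum proc_state state;            /* new, ready, running, ...      */
    void           *program_counter;  /* next instruction to execute   */
    unsigned long   registers[16];    /* saved CPU registers           */
    int             priority;         /* CPU-scheduling information    */
    void           *page_table;       /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information        */
    int             open_files[16];   /* I/O status: allocated devices */
};
```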

5 Process Scheduling
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for program execution on the CPU.
Scheduling queues: the processes residing in main memory that are ready and waiting to execute are kept on a list called the ready queue; the list of processes waiting for a particular I/O device is called a device queue.
Selection of a process is carried out by the appropriate scheduler. The long-term scheduler selects processes from the pool of newly created processes; the short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them.

6 Context Switch
Interrupts cause the OS to change a CPU from its current task and run a kernel routine. When an interrupt occurs, the system needs to save the current context of the process running on the CPU, so that it can restore that context when its processing is done.
The "context" is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information.
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is called a context switch.

7 Process Creation
A process may create several new processes via a create-process system call during the course of execution. Each of these processes may in turn create other processes, forming a tree of processes. The creating process is the parent process; the new processes are its children. Most OSs (including the Unix and Windows families) identify processes by a unique process identifier (pid), which is typically an integer.
When a process is created, initialization data may be passed along by the parent process to the child process. For example, consider a process that displays the contents of the file img.jpg on the screen: when it is created, it will get the filename from the parent, because it will use that filename to open the file, write the contents out, and so on.
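On Unix, the create-process call is fork(). A minimal sketch (error handling trimmed for brevity):

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();   /* create a child process; returns twice */
    if (pid == 0)
        printf("child: my pid is %d\n", getpid());
    else
        printf("parent: created child with pid %d\n", pid);
    return 0;
}
```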

8 Process Creation & Termination
When a process creates a new process, two possibilities exist in terms of execution:
- The parent executes concurrently with its children
- The parent waits until some or all of its children have terminated
Two possibilities also exist in terms of the address space of the new process:
- The child process is a duplicate of the parent process (same program and data as the parent)
- The child process has a new program loaded into it (as in the sketch below)
Process Termination: all the resources of the process (physical and virtual memory, open files, and I/O buffers) are deallocated by the operating system.
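Both possibilities appear in the classic Unix fork/exec/wait pattern, sketched below. The choice of /bin/ls as the new program is just an arbitrary example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                            /* child: load a new program */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("exec failed");                 /* only reached on error     */
        exit(1);
    }
    wait(NULL);                                /* parent: wait for the child */
    printf("child finished; parent exiting\n");
    return 0;
}
```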

9 Interprocess Communication
Processes executing concurrently in the operating system may be either independent processes or cooperating processes. Cooperating processes require an interprocess communication (IPC) mechanism that allows them to exchange data and information. There are two fundamental models of interprocess communication: shared memory and message passing.

10 Interprocess Communication
In the shared-memory model, a region of memory that is shared by the cooperating processes is established; processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. (Figure: message passing vs. shared memory.)
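A tiny message-passing sketch using a Unix pipe between parent and child (one of several possible IPC mechanisms; a shared-memory variant would instead use something like shmget or mmap):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];
    pipe(fd);                           /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                  /* child: sends a message            */
        write(fd[1], "hello", 6);
        _exit(0);
    }
    read(fd[0], buf, sizeof buf);       /* parent: receives the message      */
    printf("parent received: %s\n", buf);
    return 0;
}
```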

11 Threads
Thread: the basic unit of CPU utilization.
Each thread has its own: thread ID, PC, register set, and stack.
It shares with the other threads belonging to the same process: the code section, data section, and other OS resources, such as open files and signals.
Because all threads share the exact same address space, each thread has access to all global variables within the process and to all files within the shared address space, and can read, write, or delete another thread's variables, files, and stacks.
If a process has multiple threads of control, it can perform more than one task at a time.

12 Threads
Benefits of multithreaded programming:
Responsiveness => allows a program to continue even if parts of it are blocked (e.g., tabs in Firefox or Opera, text/image web-server streams). A multithreaded web browser can still allow user interaction in one thread while an image is being loaded in another.
Resource sharing (but also less protection!) => threads share the memory and resources of their process, allowing an application to perform several different activities within the same address space.
Efficiency/Performance => allocating memory and resources for process creation is costly; because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. In Solaris, for example, thread creation is roughly 30 times faster than process creation, and context switching is about 5 times faster for threads than for processes.
Utilization of multiprocessor architectures => worker threads can be dispatched to different processors (see the sketch below). The benefits of multithreading are greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors.

13 Multithreading Models
Support for threads may be provided either at the user level, for user threads (supported above the kernel and managed without kernel support), or by the kernel, for kernel threads (supported and managed by the OS). There are three common ways of establishing a relationship between user and kernel threads: the many-to-one model, the one-to-one model, and the many-to-many model.

14 1. Many-to-one Model
Maps many user-level threads to one kernel thread.
(+) Thread management is done in user space.
(-) The entire process blocks if a thread makes a blocking system call.
(-) Multiple threads are unable to run in parallel on multiprocessors (only one thread can access the kernel at a time).
Examples: Green threads in Solaris, GNU Portable Threads.

15 2. One-to-one model Maps each user thread to a kernel thread
Provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call, and allows multiple threads to run in parallel on multiprocessors.
Drawback: creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.
Examples: Windows, Linux, Solaris 9 and newer.

16 3. Many-to-many Model
Multiplexes many user-level threads onto a smaller or equal number of kernel threads. Less concurrency than one-to-one, but an easier scheduler, and different kernel threads can serve different server types. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. When a user thread blocks, the kernel schedules another for execution. (Contrast with many-to-one, where true concurrency is never gained because the kernel can schedule only one thread at a time.)
Examples: Solaris prior to version 9, the Windows NT family with the ThreadFiber package.

17 4. Two-level Model
A popular variation on the many-to-many model that additionally allows a user thread to be bound to a kernel thread.
Examples: HP-UX, 64-bit UNIX, Solaris 8 and earlier.

18 Memory Management Swapping
Where in memory should we place our programs? There is a limited amount of memory, more than one program, programs have different sizes, and a program's size might grow (or shrink)! Memory allocation changes as processes come into memory and leave memory (swapping).
We want: a lot of memory (more than RAM), transparency (relocation), protection (against rogue processes), and speed (cache is fast, RAM is OK, disk is slow).
(Partial) solution: Virtual Memory.

19 Virtual Memory
Separates virtual (logical) addresses from physical addresses. This requires a translation at run time (virtual => physical), handled in hardware by the MMU.
The basic idea behind virtual memory is that each program has its own address space, which is broken up into chunks called pages. Each page is a contiguous range of addresses. These pages are mapped onto physical memory.

20 Paging

21 Paging
The relation between virtual addresses and physical memory addresses is given by the page table. One page table per process is needed, and the page table must be reloaded at each context switch.
In the example, we have a computer that generates 16-bit addresses, from 0 up to 64K; these are the virtual addresses. This computer, however, has only 32 KB of physical memory, so although 64 KB programs can be written, they cannot be loaded into memory in their entirety and run. In this example, pages are 4 KB; the corresponding units in physical memory are called page frames and are also 4 KB.
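With 4 KB pages, the translation itself is simple bit arithmetic: the low 12 bits are the offset and the rest select a page. A sketch (the page-table contents and the example address are made up):

```c
#include <stdio.h>

#define PAGE_SIZE 4096   /* 4 KB pages: low 12 bits are the offset */

int main(void) {
    /* hypothetical page table: virtual page number -> physical frame */
    int page_table[16] = { 2, 1, 6, 0, 4, 3 };  /* rest default to 0  */

    unsigned int vaddr  = 0x20AC;                /* example virtual address */
    unsigned int page   = vaddr / PAGE_SIZE;     /* = vaddr >> 12           */
    unsigned int offset = vaddr % PAGE_SIZE;     /* = vaddr & 0xFFF         */
    unsigned int paddr  = page_table[page] * PAGE_SIZE + offset;

    printf("virtual 0x%04X -> page %u, offset 0x%03X -> physical 0x%04X\n",
           vaddr, page, offset, paddr);
    return 0;
}
```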

22 Paging
Every memory lookup must: find the page in the page table, then find the (physical) memory location. Now we have two memory accesses per reference! Solution: the Translation Lookaside Buffer (TLB). Again, a cache…

23 TLBs – Translation Lookaside Buffers
The TLB is a small hardware device that maps virtual addresses to physical addresses without going through the page table. It usually sits inside the MMU and consists of a small number of entries. When a virtual address is presented to the MMU for translation, the hardware first checks whether its virtual page number is present in the TLB by comparing it to all the entries simultaneously. If a valid match is found and the access does not violate the protection bits, the page frame is taken directly from the TLB, without going to the page table. When the virtual page number is not in the TLB, the MMU detects the miss and does an ordinary page-table lookup. It then evicts one of the entries from the TLB and replaces it with the page-table entry just looked up.

24 TLBs – Translation Lookaside Buffers
Memory lookup:
1. Look for the page in the TLB (fast). If hit, fine, go ahead!
2. If miss, find the page in the page table and put the entry in the TLB.
3. If it is not in the page table either (page fault), reload the page from disk.
What if physical memory is full? Throw some page out! (Remember cache replacement policies?) The sketch below walks through this flow.
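The same flow in C-shaped pseudocode; every helper name here (tlb_lookup, page_table_lookup, load_page_from_disk, memory_full, evict_some_page, tlb_insert) is a hypothetical placeholder for a hardware or OS step:

```c
/* Hypothetical helpers standing in for hardware/OS mechanisms. */
int  tlb_lookup(unsigned vpage);          /* >= 0 on hit, -1 on miss  */
int  page_table_lookup(unsigned vpage);   /* >= 0 if resident         */
int  load_page_from_disk(unsigned vpage);
int  memory_full(void);
void evict_some_page(void);               /* replacement policy here  */
void tlb_insert(unsigned vpage, int frame);

unsigned translate(unsigned vpage) {
    int frame = tlb_lookup(vpage);        /* fast hardware check      */
    if (frame >= 0)
        return frame;                     /* TLB hit: done            */
    frame = page_table_lookup(vpage);     /* miss: walk the page table */
    if (frame < 0) {                      /* page fault               */
        if (memory_full())
            evict_some_page();
        frame = load_page_from_disk(vpage);
    }
    tlb_insert(vpage, frame);             /* evicts one TLB entry     */
    return frame;
}
```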

25 Page Replacement Algorithms
Optimal Page Replacement Algorithm
Replace the page needed at the farthest point in the future. Optimal, but unrealizable: at the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next. It is only possible to implement this algorithm on a second run, using the page-reference information collected during the first run.
Not Recently Used (NRU)
Each page has a Reference bit and a Modified bit. The bits are set by hardware when the page is referenced or modified, and the Reference bit is periodically cleared (at clock ticks). Pages are classified as:
Class 0: not referenced, not modified
Class 1: not referenced, modified
Class 2: referenced, not modified
Class 3: referenced, modified
NRU removes a page at random from the lowest-numbered nonempty class. The idea is that it is better to remove a modified page that has not been recently referenced. NRU is simple and gives decent performance.
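Note that the class number is just 2·R + M, which makes NRU easy to sketch in code (the page array and the 256-candidate cap are assumptions of this sketch):

```c
#include <stdlib.h>

struct page { int r_bit, m_bit; };

/* NRU: pick a random page from the lowest-numbered nonempty class,
 * where class = 2*R + M (0..3). */
int nru_victim(struct page *pages, int n) {
    for (int cls = 0; cls < 4; cls++) {
        int candidates[256], count = 0;
        for (int i = 0; i < n && count < 256; i++)
            if (2 * pages[i].r_bit + pages[i].m_bit == cls)
                candidates[count++] = i;
        if (count > 0)
            return candidates[rand() % count];  /* random within class */
    }
    return -1;  /* no pages at all */
}
```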

26 Page Replacement Algorithms
FIFO Page Replacement Algorithm
Maintain a linked list of all pages, in the order they came into memory; the page at the beginning of the list is replaced. Disadvantage: the page that has been in memory the longest may still be used often.
Second Chance Page Replacement Algorithm
Keeps pages sorted in FIFO order but avoids throwing out a heavily used page by inspecting the R bit of the oldest page. If R=1, the bit is cleared, the page is put onto the end of the list of pages, and its load time is updated as though it had just arrived in memory; if R=0, the page is replaced.

27 Page Replacement Algorithms
The Clock Page Replacement Algorithm
Although second chance is a reasonable algorithm, it is unnecessarily inefficient because it is constantly moving pages around on its list. A better approach is to keep all the page frames on a circular list in the form of a clock, with a hand pointing to the oldest page. If R=0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position. If R=1, the bit is cleared and the hand is advanced to the next page.
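A minimal sketch of the clock hand, assuming a circular array of frames with R bits (the frame struct is hypothetical):

```c
struct frame { int r_bit; int page; };

/* Clock replacement: advance the hand until a frame with R == 0 is found.
 * Frames with R == 1 get their bit cleared (a "second chance") in passing.
 * Terminates because clearing bits guarantees an R == 0 frame eventually. */
int clock_evict(struct frame *frames, int nframes, int *hand) {
    for (;;) {
        if (frames[*hand].r_bit == 0) {        /* victim found        */
            int victim = *hand;
            *hand = (*hand + 1) % nframes;     /* advance past it     */
            return victim;
        }
        frames[*hand].r_bit = 0;               /* clear R, move on    */
        *hand = (*hand + 1) % nframes;
    }
}
```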

28 Page Replacement Algorithms
Least Recently Used (LRU)
Locality: pages used recently will likely be used again soon, so throw out the page that has been unused the longest. LRU must keep a linked list of pages, most recently used at the front, least recently used at the rear, and update this list on every memory reference! An alternative is a (64-bit) counter in each page table entry, updated on every memory reference; choose the page with the lowest counter value. Both solutions (counter, matrix) require extra hardware.
Not Frequently Used (NFU): simulating LRU in software
A counter is associated with each page; at each clock interrupt, R is added to the counter. The main problem with NFU is that it never forgets anything. For example, in a multipass compiler, pages that were heavily used during pass 1 may still have a high count well into later passes. In fact, if pass 1 happens to have the longest execution time of all the passes, the pages containing the code for subsequent passes may always have lower counts than the pass-1 pages, and the OS will remove useful pages instead of pages no longer in use.
A small modification to NFU makes it simulate LRU quite well. The modification has two parts: first, the counters are each shifted right 1 bit before the R bit is added in; second, the R bit is added to the leftmost rather than the rightmost bit.

29 Small Modification to NFU: Aging
1) The counters are each shifted right 1 bit before the R bit is added in 2) R bit is added to the leftmost rather than the rightmost bit.
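The per-tick aging update, sketched here with 8-bit counters (the array layout is an assumption of this sketch):

```c
#include <stdint.h>

/* Aging: on each clock tick, shift each counter right one bit and put
 * the R bit into the most significant bit. The page with the lowest
 * counter value is the LRU-ish eviction candidate. */
void age_counters(uint8_t *counter, int *r_bit, int npages) {
    for (int i = 0; i < npages; i++) {
        counter[i] = (uint8_t)((counter[i] >> 1) | (r_bit[i] << 7));
        r_bit[i] = 0;                          /* R is reset each tick */
    }
}
```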

30 Segmentation
In a one-dimensional address space with growing tables, one table may bump into another. For many problems, having two or more separate virtual address spaces may be much better than having only one. For example, a compiler has many tables that are built up as compilation proceeds, some of which grow continuously. Consider what happens if a program has a much larger than usual number of variables but a normal amount of everything else: the chunk of address space allocated for the symbol table may fill up, even though there is lots of room in the other tables. What is really needed is a way of freeing the programmer from having to manage the expanding and contracting tables. The solution is to provide the machine with many completely independent address spaces, called segments. Each segment consists of a linear sequence of addresses, from 0 to some maximum. Different segments may have different lengths, and segment lengths can change during execution.

31 Segmentation
The figure illustrates a segmented memory being used for the compiler tables discussed earlier.

32 Segmentation
After the system has been running for a while, memory will be divided up into a number of chunks, some containing segments and some containing holes. This phenomenon, called checkerboarding or external fragmentation, wastes memory in the holes. It can be dealt with by compaction (shown in part (e) of the figure).

33 I/O
Goals for I/O handling:
Enable use of peripheral devices
Present a uniform interface for users (files etc.) and for devices (their respective drivers)
Hide the details of devices from users (and from the rest of the OS)

34 I/O
Most device controllers provide buffers (in/out), control registers, and status registers. These are accessed from the OS/applications via I/O ports, memory-mapped I/O, or a hybrid. The issue thus arises of how the CPU communicates with the control registers and the device data buffers. The alternatives:
(a) In the first approach, each control register is assigned an I/O port number. The set of all I/O ports forms the I/O port space, which is protected so that ordinary user programs cannot access it. In this scheme the address spaces for memory and I/O are different.
(b) The second approach is to map all the control registers into the memory space. Each control register is assigned a unique memory address to which no physical memory is assigned. This is called memory-mapped I/O.
(c) A hybrid scheme, with memory-mapped I/O data buffers and separate I/O ports for the control registers.

35 Direct Memory Access (DMA)
No matter whether a CPU does or does not have memory-mapped I/O, it needs to address the device controllers to exchange data with them. The CPU can request data from an I/O controller one byte at a time, but doing so wastes the CPU's time, so DMA is used. The OS can use DMA only if the hardware has a DMA controller, which most systems do. No matter where it is physically located, the DMA controller has access to the system bus independent of the CPU, and it contains several registers that can be written and read by the CPU.
1. The CPU programs the DMA controller by setting its registers, so the controller knows what to transfer where.
2. The DMA controller initiates the transfer by issuing a read request over the bus to the disk controller. This read request looks like any other read request, and the disk controller does not know whether it came from the CPU or from a DMA controller.
3. The memory address to write to is on the bus's address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it. The write to memory is another bus cycle.
4. When the write is complete, the disk controller sends an acknowledgement signal to the DMA controller, also over the bus. The DMA controller then increments the memory address to use and decrements the byte count. If the byte count is still greater than 0, steps 2 through 4 are repeated until the count reaches 0. At that time, the DMA controller interrupts the CPU to let it know that the transfer is now complete: when the OS starts up, it does not have to copy the disk block to memory, because it is already there.

36 I/O Handling
Three kinds of I/O handling: programmed I/O, interrupt-driven I/O, and DMA-based I/O. The simplest form of I/O is to have the CPU do all the work; this method is called programmed I/O.

37 Programmed I/O Programmed I/O is simple but has the disadvantage of tying up the CPU full time until all the I/O is done.
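A Tanenbaum-style sketch of programmed I/O for printing a string. The register names printer_status_reg and printer_data_reg are hypothetical stand-ins for real device registers:

```c
/* Programmed I/O: the CPU busy-waits on the device between characters,
 * tying itself up for the entire duration of the transfer. */
extern volatile int printer_status_reg;   /* nonzero when printer ready */
extern volatile int printer_data_reg;

void print_string(const char *p, int count) {
    for (int i = 0; i < count; i++) {
        while (printer_status_reg == 0)   /* poll until ready           */
            ;
        printer_data_reg = p[i];          /* output one character       */
    }
}
```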

38 Interrupt-driven I/O
(a) code for the print system call, (b) code for the interrupt handler.
Consider printing on a printer that does not buffer characters but prints each one as it arrives. If the printer can print, say, 100 characters/sec, each character takes 10 msec to print. This means that after every character is written to the printer's data register, the CPU will sit in an idle loop for 10 msec, waiting to be allowed to output the next character. That is more than enough time to do a context switch and run some other process for the 10 msec that would otherwise be wasted. The way to allow the CPU to do something else while waiting for the printer to become ready is to use interrupts.

39 I/O Using DMA
Printing a string using DMA: (a) code executed when the print system call is made, (b) interrupt service procedure.
An obvious disadvantage of interrupt-driven I/O is that an interrupt occurs on every character, which wastes time. The idea here is to let the DMA controller feed the characters to the printer one at a time, without the CPU being bothered.

40 Deadlock
A set of processes, each holding a resource and waiting to acquire a resource held by another. Deadlock => none of the processes can run, release resources, or be awakened. (Figure: processes P1 and P2 each hold one of the resources A and B while needing the other.)

41 Deadlock Modeling
Resource-allocation graphs:
Process A holds resource R (arrow from the resource into the process).
Process B is waiting for (requesting) resource S (arrow from the process out to the resource).
Processes C and D are in deadlock over resources T and U: C holds U and wants T, while D holds T and wants U.

42 Deadlock Detection 1. Detection with One Resource of Each Type
The simplest case: only one resource of each type exists (one scanner, one printer, one drive); in other words, we exclude systems with two printers for the moment. Develop the resource ownership and request graph: if a cycle can be found within the graph => deadlock. Consider a system with 7 processes, A through G, and 6 resources, R through W. It is easily seen that there is a cycle, and as a result a deadlock: D, E, and G are all deadlocked.

43 Deadlock Detection 2. Detection with Multiple Resources of Each Type
When multiple copies of some of the resources exist, a different approach is needed to detect deadlocks.

44 2. Detection with Multiple Resources of Each Type

45 Deadlock Avoidance Safe and Unsafe States
We can avoid deadlocks, but only if certain information is available in advance. A state is said to be safe if there is some scheduling order in which every process can run to completion, even if all of them suddenly request their maximum number of resources immediately. (Keep in mind that only one process can execute at a given time!) In the figure, Available Resources = 10, and panels (a) through (e) demonstrate one such safe scheduling order.

46 Safe and Unsafe States
(Figure: panels (a) through (d).) A "potential" deadlock state arises once either A or C can ask for 5 more resources while only 4 are currently free! Note: this is not a deadlock; only the potential for a deadlock exists IF A or C asks for the maximum. If they ask for less than the maximum, the system works just fine.

47 Banker's (State) Algorithm for a Single Resource
What the algorithm does is check whether granting the request leads to an unsafe state. If it does, the request is denied; if it leads to a safe state, the request is granted. In the figure, states (a) and (b) are safe, but (c) is not.

48 Banker’s Algorithm for Multiple Resources
1. Look for a row R whose unmet resource needs are all smaller than or equal to A. If no such row exists, the system will eventually deadlock, since no process can run to completion.
2. Assume the process of the row chosen requests all the resources it needs and finishes. Mark that process as terminated and add all of its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated (the initial state was safe) or no process is left whose resource needs can be met (in which case there is a deadlock).
A sketch of this safety check in code follows.
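The three steps translate almost directly into code. In this sketch, N, M, and the matrix/vector names are assumptions: need holds each process's unmet needs, alloc its current holdings, and A the available vector from the slide:

```c
#include <stdbool.h>

#define N 5   /* processes      */
#define M 3   /* resource types */

/* Banker's safety check: true if some completion order exists. */
bool is_safe(int need[N][M], int alloc[N][M], int A[M]) {
    int  avail[M];
    bool done[N] = { false };
    for (int j = 0; j < M; j++) avail[j] = A[j];

    for (int finished = 0; finished < N; finished++) {
        int r, j;
        for (r = 0; r < N; r++) {            /* step 1: find a row <= avail */
            if (done[r]) continue;
            for (j = 0; j < M; j++)
                if (need[r][j] > avail[j]) break;
            if (j == M) break;               /* row r can run to completion */
        }
        if (r == N) return false;            /* no runnable row => deadlock */
        for (j = 0; j < M; j++)              /* step 2: it finishes and     */
            avail[j] += alloc[r][j];         /* releases its resources      */
        done[r] = true;                      /* step 3: repeat              */
    }
    return true;                             /* all terminated: state safe  */
}
```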

49 Scheduling
Task/Process/Thread types:
Non-preemptive (NP): an ongoing task cannot be displaced.
Preemptive: ongoing tasks can be switched in/out as needed.
Scheduling algorithms: CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many CPU scheduling algorithms; several of them are First Come, First Served (FCFS), Shortest Job First (SJF), Round Robin (RR), and Priority Based (PB).

50 First-Come, First-Served (FCFS) Scheduling (Non-Preemptive)
Process / Length (CPU burst time): P1 = 24, P2 = 3, P3 = 3
Processes arrive (and get executed) in their arrival order: P1, P2, P3
Gantt chart: P1 (0-24) | P2 (24-27) | P3 (27-30)
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

51 Shortest-Job-First (SJF) Scheduling (Non-preemptive)
Process / Arrival time / Burst time: P1: 0, 7; P2: 2, 4; P3: 4, 1; P4: 5, 4
Once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
SJF (non-preemptive) order: P1, then P3, then P2, then P4
Gantt chart: P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16)
Average waiting time = (0 + [8-2] + [7-4] + [12-5])/4 = 4.00

52 Shortest-Job-First (SJF) Scheduling (Preemptive)
Process / Arrival time / Burst time: P1: 0, 7; P2: 2, 4; P3: 4, 1; P4: 5, 4
If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt.
Gantt chart: P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16)
(At t=2, P1 has 5 left; at t=4, P2 has 2 left; the ready queue then holds P2: 2, P4: 4, P1: 5.)
Waiting times: P1 = 11-2 = 9; P2 = 5-4 = 1; P3 = 0; P4 = 7-5 = 2
Average waiting time = (9 + 1 + 0 + 2)/4 = 3 (compare the earlier results: 6.75 and 4.00)

53 Round Robin
Each process gets a fixed slice of CPU time (time quantum q, in ms). After this time has elapsed, the process is "forcefully" preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance? If q is too large, RR degenerates to FIFO (poor response time and poor CPU utilization). If q is too small, context switching dominates: q must be large with respect to the context-switch overhead, or the switching overhead reduces efficiency.
(Figure: ready queue cycling a, b, c, a, b, c, a, b.)

54 Dynamic RR with Quantum = 20
Process / Burst time: P1 = 53, P2 = 17, P3 = 68, P4 = 24
Schedule: P1 (0-20) | P2 (20-37) | P3 (37-57) | P4 (57-77) | P1 (77-97) | P3 (97-117) | P4 (117-121) | P1 (121-134) | P3 (134-154) | P3 (154-162)
Typically, RR gives higher average turnaround than SJF, but better response.
Turnaround time: the amount of time to execute a particular process (minimize).
Response time: the amount of time from request submission until the first response (minimize).

55 Priority Scheduling
A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority.
Preemptive: priority and order changes are possible at runtime.
Non-preemptive: the initial execution order is chosen by static priorities.
Problem => Starvation: low-priority processes may never execute.
Solution => Aging: as time progresses, increase the priority of the process.

56 Race Condition
With shared memory, shared files, and shared address spaces, how do we prohibit more than one process from accessing shared data at the same time and provide ordered access? When the outcome of concurrent access to shared data (the critical section, CS) by more than one process can depend solely on the order of accesses as seen by the resource, instead of on the proper request order, we have a Race Condition.

57 Mutual Exclusion
ME solution basis => exclusive CS access.

58 ME: Lock Variables
Use a flag (lock) as a global variable controlling access to the shared section:
lock = 0 ; resource_free
lock = 1 ; resource_in_use
Check the lock; if free (0), set the lock to 1 and then access the CS:
unlocked? => acquire lock, enter CS; done => release lock, do non-CS
But: A reads lock (0) and initiates set_to_1; B comes in before lock(1) finishes, sees lock (0), and sets lock(1). Both A and B now have access rights => race condition. This happens because "locking" (writing the global variable) is not an atomic action.
"Atomic": all sub-actions finish for the action to finish, or nothing does (all or nothing). A sketch of an atomic fix follows.
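One way to make the test-and-set step atomic is a C11 atomic exchange; this is a sketch of the general idea, not the slide's own solution:

```c
#include <stdatomic.h>

atomic_int lock = 0;                 /* 0 = resource_free, 1 = in use */

void acquire(void) {
    /* atomic_exchange reads the old value and writes 1 in a single
     * indivisible step, so two callers can no longer both observe 0. */
    while (atomic_exchange(&lock, 1) == 1)
        ;                            /* busy-wait until we got it      */
}

void release(void) {
    atomic_store(&lock, 0);
}
```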

59 ME with "Busy Waiting" (strict alternation)
A sees turn=0 and enters the CS; B sees turn=0 and busy-waits (CPU waste). A exits the CS and sets turn=1; B sees turn=1 and enters the CS. B finishes the CS and sets turn=0; A enters the CS, finishes it quickly, and sets turn=1. Now B is in its non-CS and A is in its non-CS; A finishes its non-CS and wants the CS again, BUT turn=1, so A waits. (This violates Condition 3 of ME: a process seeking the CS should not be blocked by a process not using the CS! But there is no race condition, given the strict alternation.)

60 Peterson's ME
    int turn;            /* whose "turn" it is to enter the CS   */
    boolean flag[2];     /* TRUE indicates ready to enter the CS */

    do {
        flag[i] = TRUE;
        turn = j;                        /* set access for the next CS access        */
        while (flag[j] && turn == j);    /* CS only if flag[j] == FALSE or turn == i */
        /* critical section */
        flag[i] = FALSE;
        /* non-CS */
    } while (TRUE);

If turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate whether a process is ready to enter its critical section. To enter the critical section, process Pi first sets flag[i] to true and sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time; only one of these assignments lasts, the other occurs but is overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first.

61 Sleep and Wakeup
Shared fixed-size buffer: the producer puts info IN, the consumer takes info OUT.

62 Semaphores
Semaphores support both mutual exclusion and synchronization. A sketch of the classic producer-consumer solution follows.
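The bounded-buffer producer-consumer with semaphores, sketched with POSIX unnamed semaphores (assumes Linux; compile with -pthread; buffer size and item counts are arbitrary):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 8
int buffer[SIZE], in = 0, out = 0;
sem_t empty, full, mutex;   /* counting: free/used slots; binary: ME */

void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);              /* sleep if no free slot      */
        sem_wait(&mutex);              /* mutual exclusion on buffer */
        buffer[in] = item; in = (in + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&full);               /* wake a waiting consumer    */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < 32; k++) {
        sem_wait(&full);               /* sleep if buffer empty      */
        sem_wait(&mutex);
        int item = buffer[out]; out = (out + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&empty);              /* free one slot              */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, SIZE);
    sem_init(&full,  0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```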

