Operating Systems Chapter 2: Processes and Threads




1 Operating Systems Chapter 2: Processes and Threads
Instr: Yusuf Altunel IKU Department of Computer Engineering (212)

2 Content
2.1 Processes
2.2 Threads
2.3 Interprocess communication
2.4 Classical IPC problems
2.5 Scheduling

3 The Process Model
Software is organized into a number of sequential processes. Conceptually, each process has its own CPU; in reality, the CPU switches back and forth from process to process while each process performs its computation. A timing mechanism is needed to keep any one process from occupying the CPU for too long, since the time needed by each process is not uniform.

4 Processes: The Process Model
Multiprogramming of four programs: a conceptual model of four independent, sequential processes, with only one program active at any instant.

5 Process Creation
Principal events that cause process creation:
- System initialization
- A user request to create a new process
- Initiation of a batch job

6 Process Termination
Conditions that terminate a process:
- Normal exit (voluntary)
- Error exit (voluntary)
- Fatal error (involuntary)
- Killed by another process (involuntary)

7 Process Hierarchies
A parent creates a child process; child processes can create their own processes, forming a hierarchy. UNIX calls this a "process group". Windows has no process-hierarchy concept: all processes are treated equally.

8 Process States
- Running: using the CPU
- Ready: runnable, but temporarily stopped to let another process run
- Blocked: cannot run even if the CPU is available; unable to run until some external event happens

9 Scheduler
The lowest layer of a process-structured OS: it handles interrupts and scheduling, and starts and stops processes when necessary.

10 Process Implementation
Process table: maintained by the OS to implement processes. Each entry is reserved for one process and contains information about its:
- Process state
- Program counter
- Stack pointer
- Memory allocation
- Status of its open files
- Accounting and scheduling information, etc.
The exact fields vary from system to system.

11 Fields of a process table entry

Process management: Registers, Program counter, Program status word, Stack pointer, Process state, Priority, Scheduling parameters, Process ID, Parent process, Process group, Signals, Time when process started, CPU time used, Children's CPU time, Time of next alarm
Memory management: Pointer to text segment, Pointer to data segment
File management: Root directory, Working directory, File descriptors, User ID, Group ID

12 Interrupt Handling
Skeleton of what the lowest level of the OS does when an interrupt occurs.

13 Threads: Comparing Threads vs. Processes
(a) Three processes, each with one thread. (b) One process with three threads.

14 Processes vs. Threads
Process items are shared by all threads in a process; thread items are private to each thread.

15 Thread Usage: Word Processor
A word processor with three threads

16 Strategies to Implement Threads
Managing in user space:
- The OS is not aware of threads; a run-time threads library creates and manages them
- When a thread is about to block, it chooses and starts its successor before stopping
Managing in kernel space:
- The operating system is aware of the existence of multiple threads per process
- When a thread blocks, the operating system chooses the next one to run, either from the same process or from a different one

17 Implementing Threads in User Space
A user-level threads package

18 Implementing Threads in the Kernel
A threads package managed by the kernel

19 Pros and Cons
User space:
- Pros: switching threads is much faster
- Cons: when a thread blocks, e.g., waiting for I/O or for a page fault to be handled, the kernel blocks the entire process
Kernel space:
- Pros: when a thread blocks, the kernel can choose another thread to execute next
- Cons: switching is slower

20 Interprocess Communication
Processes need to communicate with each other. Example: a UNIX command pipeline that concatenates three files (Process 1) and selects all lines containing the word "tree" (Process 2):
cat chapter1 chapter2 chapter3 | grep tree
The first process concatenates the three files and sends the result to the second process; the second process receives the concatenated text and finds the lines that include the word "tree".
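The pipeline above can be sketched with two cooperating processes connected by a pipe. This is a Python sketch, not from the slides: the chapter contents and function names are made up for illustration, and `multiprocessing` stands in for the shell's process machinery.

```python
# Sketch of  cat chapter1 chapter2 chapter3 | grep tree  as two processes
# joined by a pipe.  The CHAPTERS strings are made-up stand-ins for files.
from multiprocessing import Process, Pipe

CHAPTERS = [
    "the oak tree stood tall\nit was raining\n",
    "a tree fell in the forest\n",
    "nothing to see here\n",
]

def concatenate(conn, chunks):
    """Process 1: concatenate the inputs and send the result down the pipe."""
    conn.send("".join(chunks))
    conn.close()

def grep(conn, word):
    """Process 2: keep only the lines containing `word`."""
    text = conn.recv()
    return [line for line in text.splitlines() if word in line]

def run_pipeline(word="tree"):
    parent_conn, child_conn = Pipe()
    producer = Process(target=concatenate, args=(child_conn, CHAPTERS))
    producer.start()
    matches = grep(parent_conn, word)   # the consumer runs in this process
    producer.join()
    return matches

if __name__ == "__main__":
    print(run_pipeline())
```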

21 Race Conditions
Processes often share common storage, in main memory or in a shared file. A race condition arises when two or more processes are ready to read or write shared data and the final result depends on who runs precisely when.
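A minimal Python sketch of such a read-modify-write race, with threads standing in for processes: four threads increment a shared counter, and the lock makes each increment atomic so no update is lost. The thread and iteration counts are arbitrary example values.

```python
import threading

N_THREADS, N_INCREMENTS = 4, 25_000
counter = 0
lock = threading.Lock()

def worker():
    """Each increment is a read-modify-write on shared state.  Without the
    lock, two threads could read the same old value and one update would be
    lost -- the race condition.  The lock makes the sequence atomic."""
    global counter
    for _ in range(N_INCREMENTS):
        with lock:              # enter critical region
            counter += 1        # read, modify, write
                                # leave critical region

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 100000 with the lock; without it the result can be smaller
```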

22 Mutual Exclusion
Critical region: the part of a program where a shared area is accessed. The shared area may be a shared file, shared memory, a variable, etc. To avoid race conditions, no two processes may ever be in the critical region at the same time: mutual exclusion means the shared area must be protected against access by more than one process at the same time.
(Figure: processes A and B accessing a shared buffer of slots, with in=4 and out=7.)

23 Mutual Exclusion Conditions
Conditions for a good mutual-exclusion solution:
- No two processes may be simultaneously in their critical regions
- No assumptions may be made about speeds or the number of CPUs
- No process running outside its critical region may block another process
- No process should wait forever to enter its critical region

24 Mutual exclusion using critical regions

25 Solving Race Conditions
Busy waiting:
- Disabling interrupts
- Lock variables
- Strict alternation
- Peterson's solution
- The TSL (Test and Set Lock) instruction
Sleep and wakeup:
- Semaphores
- Mutexes

26 Busy Waiting (Disabling Interrupts)
Disable all interrupts just after entering the critical region and re-enable them just before leaving it. Problems: the user gets the power to turn system interrupts off, and the system halts if a process forgets to turn them back on. On a multiprocessor system, disabling interrupts affects only one processor; the other processors can still access the shared area.

27 Busy Waiting (Lock Variables)
Use a lock variable: set it to 1 when a process enters the critical region; if it is already 1, the process waits until it becomes 0; when the process exits the critical region, it resets the lock to 0. Problem: the lock variable itself is shared, so access to the lock variable creates another race condition.

28 Busy Waiting (Strict Alternation)
Algorithm: a shared turn variable says whose turn it is to enter the critical region. A process that wants to enter keeps checking the variable until it is its turn; when it exits the critical region, it hands the turn to the other process. Not good: it shares the problems of the previous solutions, it keeps the CPU busy while waiting, and if the process whose turn it is runs slowly, it can keep the other process from entering the critical region even though nobody is inside.
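The algorithm above can be sketched with two Python threads standing in for processes; the `turn` variable and the busy wait follow the description directly. (This sketch relies on CPython's global interpreter lock to make the shared reads and writes effectively sequential.)

```python
import threading

turn = 0        # whose turn it is to enter the critical region
order = []      # records who entered, to show the strict alternation

def process(pid, n=5):
    global turn
    for _ in range(n):
        while turn != pid:      # busy wait: burns CPU until it is our turn
            pass
        order.append(pid)       # critical region
        turn = 1 - pid          # hand the turn to the other process

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(order)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1] -- strictly alternating
```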

29 Busy Waiting (Peterson's Solution)
Keep an array to show which processes want to enter, and a variable (turn) to show whose turn it is. Algorithm: a process entering the critical region sets its array element (process 0 sets element 0, process 1 sets element 1, etc.) and sets turn to its own process number; when it leaves the critical region, it resets its array element. A second process trying to enter is blocked until the first one's array element is reset.
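A Python sketch of Peterson's algorithm as described above, with threads standing in for processes. It relies on CPython's GIL for the memory ordering that real hardware would need barriers for; `sys.setswitchinterval` is only tuned so the busy waits resolve quickly.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # shorter GIL slices so busy waits resolve fast

interested = [False, False]   # the per-process array elements
turn = 0                      # whose turn it is on a tie
counter = 0

def enter_region(pid):
    """Entry protocol: announce interest, set turn to our own number, then
    busy-wait while the other process is interested and we set turn last."""
    global turn
    other = 1 - pid
    interested[pid] = True
    turn = pid
    while turn == pid and interested[other]:
        pass                  # busy wait

def leave_region(pid):
    interested[pid] = False   # reset our array element

def worker(pid, n=200):
    global counter
    for _ in range(n):
        enter_region(pid)
        counter += 1          # critical region
        leave_region(pid)

threads = [threading.Thread(target=worker, args=(pid,)) for pid in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400
```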

30 Busy Waiting (TSL Instruction)
The TSL (Test and Set Lock) instruction, TSL RX,LOCK, is an indivisible operation that reads the lock into register RX and stores a nonzero value at the lock; help is taken from the hardware. Algorithm: use a shared flag variable. When the flag is 0, a process may set it to 1 with the TSL instruction and enter the critical region; after finishing execution, it resets the flag. When the flag is 1, any process that wants to enter must wait.
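Python has no TSL instruction, so this sketch simulates one: a hidden lock plays the role of the hardware's bus locking, purely to make `test_and_set` indivisible. Everything else follows the flag algorithm above.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # shorter GIL slices so the busy wait resolves fast

_bus = threading.Lock()      # stands in for hardware bus locking; used only
                             # to make the simulated TSL indivisible
flag = 0
counter = 0

def test_and_set():
    """Simulated TSL RX,LOCK: atomically read the flag and store 1 in it."""
    global flag
    with _bus:
        old = flag           # "read the lock into register RX"
        flag = 1             # "store a nonzero value at the lock"
        return old

def acquire():
    while test_and_set() == 1:
        pass                 # flag was already 1: busy-wait and retry

def release():
    global flag
    flag = 0                 # reset the flag after leaving the critical region

def worker(n=500):
    global counter
    for _ in range(n):
        acquire()
        counter += 1         # critical region
        release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000
```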

31 Busy Waiting: Disadvantages
Continuously checking the status wastes CPU time and can have unexpected results, such as the priority inversion problem: a process L with lower priority enters its critical region; a process H with higher priority preempts L and starts executing; at some point H needs to enter the critical region, but L is already inside; H keeps the CPU continuously checking, and L, having lower priority, never gets a chance to run and leave the critical region.

32 Sleep and Wakeup
SLEEP: a system call that causes the calling process to be blocked. WAKEUP: a system call that causes a blocked process to be awakened and resume execution. If a WAKEUP signal is lost (sent when the target process is not asleep yet), processes might go to sleep and never wake up.

33 Semaphores
A semaphore counts the number of wakeups saved for future use, solving the lost-wakeup problem. It has the value 0 when no wakeups are saved, or some positive value when one or more wakeups are pending. Checking the value, changing it, and possibly going to sleep are done as a single atomic action. Two atomic operations:
- down: if the semaphore's value is greater than 0, decrement it; if the value is 0, put the process to sleep
- up: increment the semaphore's value; if one or more processes were sleeping on that semaphore, one of them is chosen by the system (e.g., at random) and allowed to complete its down
After an up on a semaphore with processes sleeping on it, the semaphore's value is still 0, but there is one fewer process sleeping on it.
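The down/up operations can be sketched on top of a condition variable. This is a teaching sketch whose names mirror the slide's terminology rather than Python's own `threading.Semaphore`; the demo at the end shows that a wakeup sent before the other side sleeps is not lost.

```python
import threading

class Semaphore:
    """Counting-semaphore sketch: the value counts saved wakeups.  Checking
    the value, changing it, and going to sleep all happen while holding the
    condition variable, so together they form one atomic action."""
    def __init__(self, value=0):
        self._value = value
        self._cond = threading.Condition()

    def down(self):
        with self._cond:
            while self._value == 0:
                self._cond.wait()     # no saved wakeups: go to sleep
            self._value -= 1          # consume one saved wakeup

    def up(self):
        with self._cond:
            self._value += 1          # save a wakeup
            self._cond.notify()       # let one sleeper complete its down

# The lost-wakeup problem does not occur: an up before the down is saved.
sem = Semaphore(0)
done = []

waiter = threading.Thread(target=lambda: (sem.down(), done.append("woke")))
sem.up()          # wakeup is saved in the count even though nobody sleeps yet
waiter.start()
waiter.join()
print(done)  # ['woke']
```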

34 Solving Race Conditions with Semaphores
The producer-consumer solution uses three semaphores:
- full, counting the number of slots that are full; initially 0
- empty, counting the number of slots that are empty; initially equal to the number of slots in the shared area
- mutex, making sure the producer and consumer do not access the buffer at the same time; initially 1
Each process does a down on mutex just before entering its critical region and an up just after leaving it; full and empty are used for synchronization.
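The three-semaphore scheme can be sketched as follows, using Python's `threading.Semaphore` and a `Lock` as the mutex; the buffer size and item count are arbitrary example values.

```python
import threading
from collections import deque

N_SLOTS, N_ITEMS = 4, 20
buffer = deque()
empty = threading.Semaphore(N_SLOTS)   # counts empty slots; initially all empty
full = threading.Semaphore(0)          # counts full slots; initially none
mutex = threading.Lock()               # protects the buffer; initially 1
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                # down(empty): wait for a free slot
        with mutex:                    # down(mutex) ... up(mutex)
            buffer.append(item)
        full.release()                 # up(full): one more item is available

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                 # down(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # up(empty): one more slot is free

workers = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in workers: t.start()
for t in workers: t.join()
print(consumed)  # [0, 1, 2, ..., 19], in order, with no lost or duplicated items
```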

35 Mutexes
A mutex is a variable that can be in one of two states: unlocked (0) or locked (1). Only one bit is required to represent it, but in practice an integer is often used. When a thread (or process) needs access to a critical region, it calls mutex_lock. If the mutex is currently unlocked, meaning the critical region is available, the call succeeds and the calling thread is free to enter the critical region. If the mutex is already locked, the calling thread is blocked until the thread in the critical region finishes and calls mutex_unlock. If multiple threads are blocked on the mutex, one of them is chosen at random and allowed to acquire the lock.

36 Process Scheduling
When more than one process is runnable, the operating system must decide which one to run. The scheduler is the part of the OS that makes this decision; the scheduling algorithm is the algorithm it uses to decide which process runs first. A good algorithm should meet these criteria:
- Fairness: each process gets its fair share of the CPU
- Efficiency: keep the CPU as busy as possible
- Response time: minimize the response time for interactive users
- Turnaround: minimize the time batch users must wait for output
- Throughput: maximize the number of jobs processed per hour

37 Scheduling
Bursts of CPU usage alternate with periods of I/O wait. A CPU-bound process has long CPU bursts; an I/O-bound process has short CPU bursts and frequent I/O waits.

38 Scheduling Algorithms
- First-Come First-Served
- Shortest Job First
- Shortest Remaining Time Next
- Three-level scheduling
- Round Robin Scheduling
- Priority Scheduling
- Multiple Queues
- Shortest Process Next
- Guaranteed Scheduling
- Lottery Scheduling
- Fair-Share Scheduling

39 First-Come First-Served
The simplest algorithm: processes use the CPU in the order they request it. There is a single queue of ready processes. When the first job enters the system, it is started immediately and allowed to run as long as it wants. New jobs are put at the end of the queue. When the running process blocks, the first process on the queue runs next; when a blocked process becomes ready, it is placed at the end of the queue.
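A toy sketch of FCFS turnaround times, assuming all jobs arrive at time 0; the burst times are made-up example numbers.

```python
def fcfs(bursts):
    """First-come first-served: jobs run to completion in arrival order.
    All jobs are assumed to arrive at time 0.  Returns each job's
    turnaround time and the mean turnaround."""
    t, turnaround = 0, []
    for burst in bursts:
        t += burst                 # the job runs as long as it wants
        turnaround.append(t)       # it finishes at the current time
    return turnaround, sum(turnaround) / len(turnaround)

print(fcfs([8, 4, 4, 4]))  # ([8, 12, 16, 20], 14.0)
```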

40 Shortest Job First
The process execution times must be known in advance; this is especially useful in batch processing systems. Run the process that will take the shortest time first, then continue with the next shortest, and so on. (An example of shortest-job-first scheduling.)
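The same idea as a sketch, again assuming all jobs are present at time 0. With bursts of 8, 4, 4, and 4, running the short jobs first lowers the mean turnaround from 14 (in arrival order) to 11.

```python
def sjf(bursts):
    """Shortest job first: among jobs all present at time 0, always run the
    shortest burst first.  Returns the run order (as indices into `bursts`)
    and the mean turnaround time."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, turnaround = 0, [0] * len(bursts)
    for i in order:
        t += bursts[i]             # run the next-shortest job to completion
        turnaround[i] = t
    return order, sum(turnaround) / len(turnaround)

print(sjf([8, 4, 4, 4]))  # ([1, 2, 3, 0], 11.0)
```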

41 Shortest Remaining Time Next
The scheduler chooses the process whose remaining run time is the shortest; the remaining time must be known. When a new job arrives, its total time is compared to the current process's remaining time; if the new job needs less time, the current process is suspended and the new job is started. This scheme allows new short jobs to get good service.

42 Three-Level Scheduling
First level, the admission scheduler: jobs arriving at the system are initially placed in an input queue stored on disk; the admission scheduler decides which jobs to admit to the system. A typical algorithm might look for a mix of compute-bound and I/O-bound jobs; alternatively, short jobs could be admitted quickly.
Second level, the memory scheduler: determines which processes are kept in memory and which on disk.
Third level, the CPU scheduler: picks one of the ready processes in main memory to run next; any suitable algorithm can be used here.

43 Three level scheduling

44 Round Robin Scheduling
One of the oldest, simplest, fairest, and most widely used algorithms. Each process is assigned a time interval called a quantum; when the quantum ends, the process is preempted and the next process is switched in.
(Figure: the run queue B, F, D, G, A before and after the current process B uses up its quantum and moves to the back: F, D, G, A, B.)
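A sketch of the quantum-and-requeue mechanism, ignoring I/O blocking; the process names match the figure's B, F, D, G, A, and the quantum is an example value.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin: the process at the head of the queue runs for at most
    one quantum; if it still needs CPU time it goes to the back of the
    queue.  Returns the order in which the processes finish."""
    queue = deque(bursts)
    finished = []
    while queue:
        name, left = queue.popleft()
        left -= quantum                  # run for (at most) one quantum
        if left > 0:
            queue.append((name, left))   # quantum used up: back of the queue
        else:
            finished.append(name)        # done within this quantum
    return finished

print(round_robin([("B", 6), ("F", 2), ("D", 3), ("G", 5), ("A", 8)], quantum=4))
# ['F', 'D', 'B', 'G', 'A']
```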

45 Priority Scheduling
Each process is assigned a priority, and the process with the highest priority runs first. To prevent high-priority processes from running forever, the scheduler may decrease the running process's priority at each clock tick. It is often convenient to group processes into priority classes and use priority scheduling among the classes but round robin within each class.

46 Multiple Queues
To reduce process swapping, create priority queues and assign:
- 1 quantum to processes in the 1st (highest) priority queue
- 2 quanta to processes in the 2nd priority queue
- 4 quanta to processes in the 3rd priority queue
- 8 quanta to processes in the 4th priority queue
- ...
Whenever a process uses up all the quanta allocated to it, it is moved to the next priority queue.
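A simplified sketch of the quanta-doubling queues. For brevity it drains the queues level by level instead of interleaving them, which is enough to show how a long job sinks to the lower queues while a short one finishes at the top.

```python
from collections import deque

def multilevel(bursts, levels=4):
    """Multiple-queue sketch: queue i grants 2**i quanta per turn; a process
    that uses up its allocation drops to the next queue.  Returns the queue
    level in which each process finished."""
    queues = [deque() for _ in range(levels)]
    for job in bursts:
        queues[0].append(job)            # everyone starts in the top queue
    finished = {}
    for level, q in enumerate(queues):
        quanta = 2 ** level              # 1, 2, 4, 8, ... quanta per turn
        while q:
            name, left = q.popleft()
            left -= quanta
            if left > 0 and level + 1 < levels:
                queues[level + 1].append((name, left))  # demote
            else:
                finished[name] = level
    return finished

print(multilevel([("short", 1), ("long", 10)]))  # {'short': 0, 'long': 3}
```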

47 Guaranteed Scheduling
If there are n processes, each should receive about 1/n of the CPU power. Keep track of how much CPU time each process has had since its creation, compute the amount of CPU time each process is entitled to, and compute the ratio of actual CPU time consumed to CPU time entitled. A ratio of 0.5 means a process has had only half of what it should have had; a ratio of 2.0 means it has had twice as much as it was entitled to. Run the process with the lowest ratio until its ratio has moved above that of its closest competitor.
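The ratio computation can be sketched directly; the CPU-time figures below are made-up examples.

```python
def pick_next(used, entitled):
    """Guaranteed scheduling: compute consumed/entitled for each process and
    run the one with the lowest ratio (the one furthest behind)."""
    ratio = {p: used[p] / entitled[p] for p in used}
    return min(ratio, key=ratio.get), ratio

# A has used 10 of the 20 units it is entitled to (ratio 0.5);
# B has used 40 of its 20 (ratio 2.0) -- so A runs next.
choice, ratios = pick_next({"A": 10, "B": 40}, {"A": 20, "B": 20})
print(choice, ratios)  # A {'A': 0.5, 'B': 2.0}
```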

48 Lottery Scheduling
Give processes lottery tickets; to schedule, a lottery ticket is chosen at random and the process holding that ticket runs. Example: with 50 drawings per second, each winner gets 20 msec of CPU time. Lottery scheduling can be used to solve problems that are difficult to handle with other methods. One example is a video server in which several processes are feeding video streams to their clients, but at different frame rates. Suppose the processes need frames at 10, 20, and 25 frames/sec. By allocating these processes 10, 20, and 25 tickets, respectively, they will automatically divide the CPU in approximately the correct proportion, that is, 10:20:25.
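A sketch of the ticket drawing with a seeded random generator, using the 10:20:25 video-server example; names like `10fps` are just labels.

```python
import random

def lottery(tickets, draws, rng):
    """Lottery scheduling: each draw picks a winning ticket uniformly at
    random, so a process's share of wins tracks its share of tickets."""
    names = list(tickets)
    weights = [tickets[n] for n in names]
    wins = {n: 0 for n in names}
    for _ in range(draws):
        winner = rng.choices(names, weights=weights)[0]
        wins[winner] += 1
    return wins

rng = random.Random(42)                  # fixed seed for reproducibility
wins = lottery({"10fps": 10, "20fps": 20, "25fps": 25}, draws=5500, rng=rng)
print(wins)  # roughly 1000, 2000, and 2500 wins -- a 10:20:25 split
```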

49 Fair-Share Scheduling
Take into account who owns each process. Example: with round robin and equal priorities, if user 1 has 9 processes and user 2 has 1 process, user 1 will get 90% of the CPU and user 2 only 10%. Instead, allocate each user a fraction of the CPU and pick processes so as to equalize the CPU time used by the users: if two users are using the system, each gets 50% of the CPU no matter how many processes they have in existence.

50 Scheduling Threads
When several processes each have multiple threads, there are two levels of parallelism present: processes and threads. Scheduling in such systems differs depending on whether user-level threads, kernel-level threads, or both are supported.

51 User-Level Threads
A possible scheduling of user-level threads: 50-msec process quantum, threads run 5 msec per CPU burst.

52 Kernel-Level Threads
A possible scheduling of kernel-level threads: 50-msec process quantum, threads run 5 msec per CPU burst.

53 End of Chapter 2 Processes and Threads

