
1 Operating Systems 9 – scheduling
PIETER HARTEL

2 Types of scheduling
Short-term: which runnable process to run on which CPU
Medium-term: which processes to swap in (challenge?)
Long-term: which processes to accept in a batch environment
I/O scheduling: which pending I/O request to handle on which I/O device
More processes means more scheduling opportunities, but also more resources in use

3 Scheduling is managing queues to minimise delays
User-oriented criteria: response time, deadlines, predictability
System-oriented criteria: throughput, resource utilisation
Tension? Good response time for one user probably means poor throughput
More processes means more scheduling opportunities, but also more resources in use
Events include? Interrupts, signals, system calls, etc.

4 Common scheduling policies
Round robin requires pre-emption; the quantum (Q) can be varied
Example arrival & service times: A: 0 & 3; B: 2 & 6; C: 4 & 4; D: 6 & 5; E: 8 & 2
Feedback: each time the process is pre-empted, it gets double the quantum at the next lower priority for the next time it runs (see the simulation sketch below)
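
The round-robin schedule for this example can be reproduced with a small simulation. The following C sketch is not from the slides; the process data comes from the example above, and the tie-breaking rule (a newly arrived process is queued ahead of the pre-empted one) is an assumption.

  /* Hypothetical round-robin simulation (assumption, not the slides' code):
   * processes A..E with the arrival and service times from the example. */
  #include <stdio.h>

  #define N 5
  #define Q 1   /* quantum; vary it to see the effect on the schedule */

  int main(void) {
      const char *name[N] = { "A", "B", "C", "D", "E" };
      int arrival[N]      = { 0, 2, 4, 6, 8 };
      int remaining[N]    = { 3, 6, 4, 5, 2 };
      int queue[64], head = 0, tail = 0, in_queue[N] = { 0 };
      int time = 0, done = 0;

      while (done < N) {
          for (int i = 0; i < N; i++)               /* admit arrived processes */
              if (!in_queue[i] && arrival[i] <= time) {
                  queue[tail++] = i;
                  in_queue[i] = 1;
              }
          if (head == tail) { time++; continue; }   /* no process ready: idle */

          int p = queue[head++];                    /* dispatch head of ready queue */
          int slice = remaining[p] < Q ? remaining[p] : Q;
          printf("t=%2d..%2d  %s\n", time, time + slice, name[p]);
          time += slice;
          remaining[p] -= slice;

          for (int i = 0; i < N; i++)               /* arrivals during the slice queue first */
              if (!in_queue[i] && arrival[i] <= time) {
                  queue[tail++] = i;
                  in_queue[i] = 1;
              }
          if (remaining[p] > 0) queue[tail++] = p;  /* pre-empted: back of the queue */
          else done++;
      }
      return 0;
  }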

5 Multi-level feedback queue: past behaviour predicts future
Round robin in RQi with a quantum of 2^i time units
Promote waiting processes
Demote running processes
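
A minimal sketch of the two rules, not from the slides: level i uses a quantum of 2^i, a process that uses its whole quantum is demoted, and a process that has waited too long is promoted; the starvation threshold is an assumed parameter.

  /* Hypothetical sketch of the feedback rules (assumption, not the slides' code):
   * ready queue RQi uses a quantum of 2^i time units. */
  #include <stdio.h>

  #define LEVELS 4

  typedef struct {
      int level;       /* current ready queue, RQ0 (highest) .. RQ{LEVELS-1} */
      int wait_time;   /* time spent waiting since last dispatch */
  } task_t;

  static int quantum(int level) { return 1 << level; }   /* 2^i time units */

  /* demote a running process that used its whole quantum */
  static void demote(task_t *t) {
      if (t->level < LEVELS - 1) t->level++;
  }

  /* promote a waiting process that has waited longer than some threshold
   * (the threshold is an assumed anti-starvation parameter) */
  static void promote_if_starving(task_t *t, int threshold) {
      if (t->wait_time > threshold && t->level > 0) {
          t->level--;
          t->wait_time = 0;
      }
  }

  int main(void) {
      task_t t = { .level = 0, .wait_time = 0 };
      demote(&t);                     /* used its full quantum in RQ0 */
      printf("level=%d quantum=%d\n", t.level, quantum(t.level));
      t.wait_time = 100;
      promote_if_starving(&t, 50);    /* waited long enough to move back up */
      printf("level=%d quantum=%d\n", t.level, quantum(t.level));
      return 0;
  }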

6 Linux scheduling (section 10.3)
Three levels:
Real-time FIFO: pre-empted only by a higher-priority real-time FIFO thread
Real-time round robin: as above, but also pre-empted by the clock when its quantum expires
Time sharing: lower priority, otherwise as above
Run queue per CPU with two arrays of 140 queue heads each: active and expired (i.e. out of quantum)
Dynamic priority rewards interactivity and punishes CPU hogging
Wait queue for threads waiting for events
Completely Fair Scheduling (CFS)
"Real-time" round robin is a misnomer: no deadlines can be specified, these are simply high-priority threads
CFS basically means that interactive tasks get their fair share of the CPU
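
As a small illustration (not from the slides), the POSIX scheduling API can query the static priority range each class accepts; on Linux the two real-time classes typically allow 1..99 and SCHED_OTHER only 0.

  /* Hypothetical illustration (not the slides' code): query the static
   * priority range of each Linux scheduling class via the POSIX API. */
  #include <stdio.h>
  #include <sched.h>

  static void show(const char *name, int policy) {
      printf("%-12s priorities %d..%d\n", name,
             sched_get_priority_min(policy),
             sched_get_priority_max(policy));
  }

  int main(void) {
      show("SCHED_FIFO",  SCHED_FIFO);   /* real-time FIFO, typically 1..99 */
      show("SCHED_RR",    SCHED_RR);     /* real-time round robin, typically 1..99 */
      show("SCHED_OTHER", SCHED_OTHER);  /* time sharing (CFS), always 0..0 */
      return 0;
  }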

7 loop
loop.h:
  #ifndef _loop_h
  #define _loop_h 1
  extern void loop(int N);
  #endif

loop.c:
  #include "loop.h"
  #define M 1690   /* Burn about N * 10 ms CPU time */
  void loop(int N) {
    int i, j, k;
    for(i = 0; i < N; i++) {
      for(j = 0; j < M; j++) {
        for(k = 0; k < M; k++) {
        }
      }
    }
  }

Compile: gcc -c loop.c -o loop.o

Write a test program:
  #include <stdio.h>
  #include "loop.h"
  int main(int argc, char *argv[]) {
    int i;
    for(i = 0; i < 10; i++) {
      loop(100);
      printf("%d\n", i);
    }
    return 0;
  }
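
The constant M = 1690 is machine specific. A hypothetical calibration sketch (not from the slides; the file name calibrate.c is an assumption) measures how much CPU time loop(100) actually burns so M can be tuned; link it against loop.o, e.g. gcc calibrate.c loop.o.

  /* Hypothetical calibration sketch (assumption, not the slides' code): measure
   * how much CPU time loop(100) burns, so M can be tuned per machine. */
  #include <stdio.h>
  #include <time.h>
  #include "loop.h"

  int main(void) {
      struct timespec t0, t1;
      clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
      loop(100);                        /* should burn roughly 1 s of CPU time */
      clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
      double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("loop(100) used %.3f s of CPU time\n", s);
      return 0;
  }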

8 SchedXY
SchedXY.c:
  #include <stdio.h>
  #include <sched.h>
  #include <unistd.h>
  int main(int argc, char *argv[]) {
    pid_t pid = getpid();
    struct sched_param param;
    param.sched_priority = Y;
    if( sched_setscheduler(pid, X, &param) != 0 ) {
      printf("cannot setscheduler\n");
    } else {
      for(;;);
    }
    return 0;
  }

Compile and run: gcc -o RR80 -DX=SCHED_RR -DY=80 SchedXY.c
                 sudo ./RR80 &
                 top
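
To confirm what sched_setscheduler installed, a hypothetical check (not from the slides) can read the policy and priority back with sched_getscheduler and sched_getparam:

  /* Hypothetical check (not the slides' code): read back the policy and
   * priority that sched_setscheduler installed for the calling process. */
  #include <stdio.h>
  #include <sched.h>

  int main(void) {
      struct sched_param param;
      int policy = sched_getscheduler(0);          /* 0 = calling process */
      sched_getparam(0, &param);
      printf("policy=%d (OTHER=%d FIFO=%d RR=%d) priority=%d\n",
             policy, SCHED_OTHER, SCHED_FIFO, SCHED_RR, param.sched_priority);
      return 0;
  }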

9 ThreadSched
ThreadSched.c (thread body; the main that creates the threads is not shown, a sketch follows below):
  #define _GNU_SOURCE      /* for sched_getcpu() */
  #include <stdio.h>
  #include <pthread.h>
  #include <sched.h>
  #define N 8
  #define M 1000000
  void *tproc(void *ptr) {
    int k, i = *((int *) ptr);
    int bgn = sched_getcpu();
    printf("thread %d on CPU %d\n", i, bgn);
    for(k = 0; k < M; k++) {
      int now = sched_getcpu();
      if( bgn != now ) {
        printf("thread %d to CPU %d\n", i, now);
        break;
      }
      sched_yield();
    }
    pthread_exit(0);
  }

Compile and run: gcc ThreadSched.c -lpthread
                 ./a.out
                 ./a.out xx

Output?
  + ./a.out
  thread 1 on CPU 0
  thread 2 on CPU 2
  thread 0 on CPU 3
  thread 3 on CPU 3
  thread 4 on CPU 0
  thread 5 on CPU 3
  thread 6 on CPU 2
  thread 7 on CPU 3
  thread 7 to CPU 7
  thread 3 to CPU 7
  thread 5 to CPU 1
  thread 4 to CPU 4
  thread 6 to CPU 4
  + ./a.out xx
  thread 0 on CPU 0
  thread 4 on CPU 4
  thread 1 on CPU 1
  thread 5 on CPU 5
  thread 6 on CPU 6
  thread 7 on CPU 7
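
The slide only shows the thread body; a hypothetical main (an assumption, not the slide's code) would start the N threads and join them, roughly as follows. Handling of the "xx" argument for the pinned run is omitted here.

  /* Hypothetical main (assumption, not shown on the slide): start N threads
   * running tproc and wait for them; the "xx" pinning argument is omitted. */
  #include <pthread.h>

  #define N 8
  extern void *tproc(void *ptr);        /* thread body from the slide */

  int main(int argc, char *argv[]) {
      pthread_t tid[N];
      int arg[N];
      for (int i = 0; i < N; i++) {
          arg[i] = i;                               /* each thread gets its index */
          pthread_create(&tid[i], NULL, tproc, &arg[i]);
      }
      for (int i = 0; i < N; i++)
          pthread_join(tid[i], NULL);               /* wait for all threads */
      return 0;
  }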

10 Nice
Nice.c (fragment; the parent's wait loop is not shown, a sketch follows below):
  int main(int argc, char *argv[]) {
    int p, q, r;
    pid_t parent = getpid();
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(1, &cpuset);
    sched_setaffinity(parent, sizeof(cpu_set_t), &cpuset);
    for( p = 0; p < P; p++ ) {
      for( q = 0; q < Q; q++ ) {
        pid_t child = fork();
        if (child == 0) {
          setpriority(PRIO_PROCESS, getpid(), p);
          for(r = 0; r < R; r++) {
            loop(100);
          }
          exit(0);
        }
      }
    }
    /* ... parent continues here: wait for the children ... */
  }

Output?
  gcc Nice.c loop.o
  ./a.out >junk&
  top
  ./a.out
  ./a.out xx

  $ top
  PID    USER    S  TIME+    COMMAND
  18503  pieter  R  :03.06   a.out
  18507  pieter  R  :02.16   a.out
  18508  pieter  R  :02.10   a.out
  18509  pieter  R  :01.66   a.out
  18510  pieter  R  :01.62   a.out
  18511  pieter  R  :01.60   a.out
  18512  pieter  R  :01.28   a.out
  18513  pieter  R  :01.26   a.out
  18514  pieter  R  :01.24   a.out

  Program output:
  cpu=1 parent=9829 policy=0 loop= us
  cpu=1 child=9847 prio=0 started.
  cpu=1 child=9847 r=0.
  cpu=1 child=9848 prio=0 started.
  cpu=1 child=9849 prio=0 started.
  cpu=1 child=9847 r=1.
  cpu=1 child=9850 prio=1 started.
  cpu=1 child=9848 r=0.
  cpu=1 child=9851 prio=1 started.
  cpu=1 child=9853 prio=1 started.
  cpu=1 child=9854 prio=2 started.
  cpu=1 child=9849 r=0.
  cpu=1 child=9855 prio=2 started.
  cpu=1 child=9847 r=2.
  cpu=1 child=9850 r=0.
  cpu=1 child=9848 r=1.
  cpu=1 child=9856 prio=2 started.
  cpu=1 child=9851 r=0.
  cpu=1 child=9849 r=1.
  cpu=1 child=9850 finished.
  cpu=1 child=9856 r=4.
  cpu=1 child=9851 finished.
  cpu=1 child=9853 r=6.
  cpu=1 child=9853 finished.
  cpu=1 child=9854 r=5.
  cpu=1 child=9855 r=5.
  cpu=1 child=9856 r=5.
  cpu=1 child=9854 r=6.
  cpu=1 child=9854 finished.
  cpu=1 child=9856 r=6.
  cpu=1 child=9856 finished.
  cpu=1 parent=9829 finished.
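
The fragment ends inside the child branch; a hypothetical continuation (not shown on the slide) has the parent reap all P*Q children and then print the final line of the output. The values P = 3 and Q = 3 are assumptions read off the sample output (nine children, three per nice level).

  /* Hypothetical continuation (assumption, not shown on the slide): the parent
   * reaps all P*Q children and prints the final line of the output.
   * P = 3 and Q = 3 are guesses based on the nine children in the sample run. */
  #define _GNU_SOURCE                    /* for sched_getcpu() */
  #include <stdio.h>
  #include <sched.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #define P 3
  #define Q 3

  int main(void) {
      /* ... fork P*Q children as in the fragment above ... */
      for (int n = 0; n < P * Q; n++)
          wait(NULL);                    /* reap each child in turn */
      printf("cpu=%d parent=%d finished.\n", sched_getcpu(), (int) getpid());
      return 0;
  }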

11 Summary
Maximising resource usage while minimising delays
Decisions:
Long term: admission of processes
Medium term: swapping
Short term: CPU assignment to a ready process
Criteria:
Response time: users
Throughput: system

