
1 Computer System Overview Chapter 1

2 Interrupts An interruption of the normal sequence of execution Improves processing efficiency Allows the processor to execute other instructions while an I/O operation is in progress A suspension of a process caused by an event external to that process and performed in such a way that the process can be resumed

3 Classes of Interrupts Program – arithmetic overflow – division by zero – execute illegal instruction – reference outside user’s memory space Timer I/O Hardware failure

4 Interrupt Handler A program that determines nature of the interrupt and performs whatever actions are needed Control is transferred to this program Generally part of the operating system

5 Multiple Interrupts Disable interrupts while an interrupt is being processed – Processor ignores any new interrupt request signals
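
By way of a user-space analogy (a sketch, not kernel code): blocking a POSIX signal around a sensitive region behaves much like disabling interrupts, in that the pending signal is held until it is re-enabled:

    #include <signal.h>
    #include <stdio.h>

    int main(void) {
        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGINT);

        sigprocmask(SIG_BLOCK, &block, &old);  /* "disable": SIGINT is now held pending  */
        puts("critical work; a Ctrl-C here is deferred, not lost");
        sigprocmask(SIG_SETMASK, &old, NULL);  /* "re-enable": a pending SIGINT arrives now */
        return 0;
    }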

6 Multiprogramming Processor has more than one program to execute The sequence in which the programs are executed depends on their relative priority and on whether they are waiting for I/O After an interrupt handler completes, control may not return to the program that was executing at the time of the interrupt

7 Cache Memory

8 Layers of Computer System

9 Services Provided by the Operating System Program development – Editors and debuggers Program execution Access to I/O devices Controlled access to files System access

10 Operating System Functions in the same way as ordinary computer software – It is a program that is executed The operating system relinquishes control of the processor so that other programs can execute


12 Kernel Portion of operating system that is in main memory Contains most-frequently used functions Also called the nucleus

13 Multiprogramming When one job needs to wait for I/O, the processor can switch to the other job

14 Batch Multiprogramming versus Time Sharing
Principal objective – Batch multiprogramming: Maximize processor use – Time sharing: Minimize response time
Source of directives to operating system – Batch multiprogramming: Job control language commands provided with the job – Time sharing: Commands entered at the terminal

15 Major Achievements Processes Memory Management Information protection and security Scheduling and resource management System structure

16 Processes A program in execution An instance of a program running on a computer The entity that can be assigned to and executed on a processor A unit of activity characterized by a single sequential thread of execution, a current state, and an associated set of system resources

17 Difficulties with Designing System Software Improper synchronization – ensure that a process waiting for an I/O device receives the signal Failed mutual exclusion Nondeterminate program operation – a program's results should depend only on its input, not on the contents of common memory areas Deadlocks

18 Process Consists of three components – An executable program – Associated data needed by the program – Execution context of the program All information the operating system needs to manage the process

19 Memory Management Process isolation Automatic allocation and management Support for modular programming Protection and access control Long-term storage

20 File System Implements long-term store Information stored in named objects called files

21 Major Elements of Operating System

22 Operating System Design Hierarchy
Level 13 – Shell – Objects: user programming environment – Example operations: statements in shell language
Level 12 – User processes – Objects: user processes – Example operations: quit, kill, suspend, resume
Level 11 – Directories – Objects: directories – Example operations: create, destroy, attach, detach, search, list
Level 10 – Devices – Objects: external devices, such as printers, displays and keyboards – Example operations: open, close, read, write
Level 9 – File system – Objects: files – Example operations: create, destroy, open, close, read, write
Level 8 – Communications – Objects: pipes – Example operations: create, destroy, open, close, read, write

23 Operating System Design Hierarchy
Level 7 – Virtual memory – Objects: segments, pages – Example operations: read, write, fetch
Level 6 – Local secondary store – Objects: blocks of data, device channels – Example operations: read, write, allocate, free
Level 5 – Primitive processes – Objects: primitive processes, semaphores, ready list – Example operations: suspend, resume, wait, signal

24 Characteristics of Modern Operating Systems Multithreading – process is divided into threads that can run concurrently Thread – dispatchable unit of work – executes sequentially and is interruptible Process is a collection of one or more threads

25 Characteristics of Modern Operating Systems Symmetric multiprocessing – there are multiple processors – these processors share same main memory and I/O facilities – All processors can perform the same functions

26 Characteristics of Modern Operating Systems Distributed operating systems – provide the illusion of a single main memory space and a single secondary memory space – used for distributed file systems

27 Client/Server Model Simplifies the Executive – possible to construct a variety of APIs Improves reliability – each service runs as a separate process with its own partition of memory – clients cannot directly access hardware Provides a uniform means for applications to communicate via LPC Provides a base for distributed computing

28 Threads and SMP Different routines can execute simultaneously on different processors Multiple threads of execution within a single process may execute on different processors simultaneously Server processes may use multiple threads Data and resources are shared between processes

29 Two Suspend States

30 Operating System Control Structures Information about the current status of each process and resource Tables are constructed for each entity the operating system manages

31 Memory Tables Allocation of main memory to processes Allocation of secondary memory to processes Protection attributes for access to shared memory regions Information needed to manage virtual memory

32 I/O Tables I/O device is available or assigned Status of I/O operation Location in main memory being used as the source or destination of the I/O transfer

33 File Tables Existence of files Location on secondary memory Current Status Attributes Sometimes this information is maintained by a file-management system

34 Process Table Where process is located Attributes necessary for its management – Process ID – Process state – Location in memory
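
As a rough sketch (the field names below are illustrative, not taken from any particular operating system), a process table entry could be declared as:

    enum proc_state { READY, RUNNING, BLOCKED, SUSPENDED };

    /* One entry per process; real kernels record far more than this. */
    struct process_entry {
        int             pid;          /* process ID                    */
        enum proc_state state;        /* current process state         */
        void           *image_base;   /* location of the process image */
        unsigned long   image_size;   /* size of the image in memory   */
    };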



37 User-Level Threads All thread management is done by the application The kernel is not aware of the existence of threads

38 Kernel-Level Threads W2K, Linux, and OS/2 are examples of this approach Kernel maintains context information for the process and the threads Scheduling is done on a thread basis
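
A minimal POSIX threads example, assuming a system such as Linux where each pthread is a kernel-scheduled entity, so the kernel may run the two workers on different processors:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        printf("thread %ld running\n", (long) arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *) 1L);  /* scheduled by the kernel */
        pthread_create(&t2, NULL, worker, (void *) 2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }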

39 Categories of Computer Systems Single Instruction Single Data (SISD) – single processor executes a single instruction stream to operate on data stored in a single memory Single Instruction Multiple Data (SIMD) – each instruction is executed on a different set of data by the different processors

40 Categories of Computer Systems Multiple Instruction Single Data (MISD) – a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence. Never implemented Multiple Instruction Multiple Data (MIMD) – a set of processors simultaneously execute different instruction sequences on different data sets


42 Concurrency Communication among processes Sharing resources Synchronization of multiple processes Allocation of processor time

43 Difficulties with Concurrency Sharing global resources Management of allocation of resources Programming errors difficult to locate

44 Competition Among Processes for Resources Mutual Exclusion – Critical sections Only one program at a time is allowed in its critical section Example: only one process at a time is allowed to send a command to the printer Deadlock Starvation

45 Cooperation Among Processes by Communication Messages are passed – Mutual exclusion is not a control requirement Possible to have deadlock – Each process waiting for a message from the other process Possible to have starvation – Two processes sending messages to each other while another process waits for a message

46 Mutual Exclusion: Hardware Support Special Machine Instructions – Performed in a single instruction cycle – Not subject to interference from other instructions – Reading and writing – Reading and testing

47 Mutual Exclusion: Hardware Support Test and Set Instruction
    /* Tests the word pointed to by i: if it is 0, sets it to 1 and returns
       true (lock acquired); otherwise returns false. The whole routine is
       assumed to execute as a single, uninterruptible hardware instruction. */
    boolean testset (int *i) {
        if (*i == 0) {
            *i = 1;
            return true;
        }
        else {
            return false;
        }
    }
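
For illustration, a busy-waiting lock built on the testset routine above (assumed atomic) could be used as follows; lock is a shared word initialized to 0:

    int lock = 0;                 /* 0 = free, 1 = held */

    void enter_critical(void) {
        while (!testset(&lock))
            ;                     /* spin until the lock is acquired */
    }

    void leave_critical(void) {
        lock = 0;                 /* release the lock */
    }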

48 Mutual Exclusion: Hardware Support Exchange Instruction
    /* Exchanges the contents of a register with a word in memory as one
       atomic hardware operation. (register is a reserved word in C, so the
       parameters are renamed and passed by pointer so the swap is visible
       to the caller.) */
    void exchange (int *reg, int *memory) {
        int temp;
        temp = *memory;
        *memory = *reg;
        *reg = temp;
    }
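
Mutual exclusion with the exchange instruction is usually sketched the same way: each process swaps a local key of 1 with a shared lock word and enters only when it gets 0 back:

    int bolt = 0;                     /* shared lock word: 0 = free */

    void enter_critical_exchange(void) {
        int key = 1;
        do {
            exchange(&key, &bolt);    /* atomic swap of key and bolt */
        } while (key != 0);           /* key becomes 0 only if bolt was free */
    }

    void leave_critical_exchange(void) {
        int key = 0;
        exchange(&key, &bolt);        /* store 0 back into bolt */
    }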

49 Mutual Exclusion Machine Instructions Advantages – Applicable to any number of processes on either a single processor or multiple processors sharing main memory – It is simple and therefore easy to verify – It can be used to support multiple critical sections

50 Mutual Exclusion Machine Instructions Disadvantages – Busy-waiting consumes processor time – Starvation is possible when a process leaves a critical section and more than one process is waiting – Deadlock is possible: if a low-priority process holds the critical region and a higher-priority process needs it, the higher-priority process will obtain the processor and busy-wait for the critical region, while the low-priority process never runs to release it

51 Semaphores Special variable called a semaphore is used for signaling If a process is waiting for a signal, it is suspended until that signal is sent Wait and signal operations cannot be interrupted Queue is used to hold processes waiting on the semaphore

52 Semaphores Semaphore is a variable that has an integer value – May be initialized to a nonnegative number – Wait operation decrements the semaphore value – Signal operation increments semaphore value
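
A sketch of the wait and signal operations described above; the queue type and the block/unblock helpers are assumed here, and a real implementation must make each operation atomic (for example with a hardware instruction or by disabling interrupts):

    struct queue;                                 /* assumed queue of blocked processes */
    void block_current_process(struct queue *q);  /* assumed helper: suspend the caller */
    void unblock_one_process(struct queue *q);    /* assumed helper: wake one waiter    */

    struct semaphore {
        int           count;    /* initialized to a nonnegative number */
        struct queue *waiting;  /* processes blocked on this semaphore */
    };

    void sem_wait(struct semaphore *s) {
        s->count--;                               /* wait decrements the value */
        if (s->count < 0)
            block_current_process(s->waiting);
    }

    void sem_signal(struct semaphore *s) {
        s->count++;                               /* signal increments the value */
        if (s->count <= 0)
            unblock_one_process(s->waiting);
    }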

53 Monitors Monitor is a software module Chief characteristics – Local data variables are accessible only by the monitor – Process enters monitor by invoking one of its procedures – Only one process may be executing in the monitor at a time
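
A monitor-like module can be approximated in C with POSIX threads (a sketch, not a true language-level monitor): the data is local to the module and every entry procedure takes the same mutex, so at most one thread is inside at a time:

    #include <pthread.h>

    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static int counter = 0;                   /* local data of the "monitor" */

    void monitor_increment(void) {
        pthread_mutex_lock(&monitor_lock);    /* enter the monitor */
        counter++;
        pthread_mutex_unlock(&monitor_lock);  /* leave the monitor */
    }

    int monitor_read(void) {
        pthread_mutex_lock(&monitor_lock);
        int value = counter;
        pthread_mutex_unlock(&monitor_lock);
        return value;
    }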

54 Message Passing Enforce mutual exclusion Exchange information send (destination, message) receive (source, message)

55 Synchronization Sender and receiver may or may not be blocking (waiting for message) Blocking send, blocking receive – Both sender and receiver are blocked until message is delivered – Called a rendezvous

56 Synchronization Nonblocking send, blocking receive – Sender continues processing such as sending messages as quickly as possible – Receiver is blocked until the requested message arrives Nonblocking send, nonblocking receive – Neither party is required to wait
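
One concrete way to experiment with these combinations is POSIX message queues (the queue name below is made up; link with -lrt on Linux). As opened here, mq_receive blocks until a message arrives; adding O_NONBLOCK to the open flags would make mq_send return immediately rather than block on a full queue:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(q, "hello", 6, 0);             /* send (destination, message)  */

        char buf[64];
        mq_receive(q, buf, sizeof buf, NULL);  /* receive (source, message):
                                                  blocks until one is available */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }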

57 Deadlock Permanent blocking of a set of processes that either compete for system resources or communicate with each other No efficient solution Involve conflicting needs for resources by two or more processes

58 Reusable Resources Used by one process at a time and not depleted by that use Processes obtain resources that they later release for reuse by other processes Processors, I/O channels, main and secondary memory, files, databases, and semaphores Deadlock occurs if each process holds one resource and requests the other

59 Example of Deadlock

60 Conditions for Deadlock Mutual exclusion – only one process may use a resource at a time Hold-and-wait – a process may hold allocated resources while awaiting assignment of others – prevented by requiring a process to request all of its required resources at one time

61 Conditions for Deadlock No preemption – If a process holding certain resources is denied a further request, that process must release its original resources – If a process requests a resource that is currently held by another process, the operating system may preempt the second process and require it to release its resources

62 Conditions for Deadlock Circular wait – Prevented by defining a linear ordering of resource types
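
A small POSIX-threads sketch of how circular wait arises: each thread holds one mutex and waits for the other, so all four conditions above can hold at once. Making both threads acquire A before B (a linear ordering) removes the possibility:

    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);      /* may wait forever if t2 holds B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&B);
        pthread_mutex_lock(&A);      /* may wait forever if t1 holds A */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }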

63 Deadlock Avoidance A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock Requires knowledge of future process requests

64 Two Approaches to Deadlock Avoidance Do not start a process if its demands might lead to deadlock Do not grant an incremental resource request to a process if this allocation might lead to deadlock

65 Resource Allocation Denial Referred to as the banker’s algorithm State of the system is the current allocation of resources to processes Safe state is one in which there is at least one sequence that does not result in deadlock Unsafe state is a state that is not safe

66 Determination of a Safe State Initial State
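
A minimal sketch of the safety check at the heart of the banker's algorithm (array sizes and names are chosen for this example only): the state is safe if the processes can be ordered so that each, in turn, can obtain its remaining claim from what is available and then release everything it holds.

    #include <stdbool.h>

    #define NPROC 4   /* number of processes (example size)      */
    #define NRES  3   /* number of resource types (example size) */

    /* available[r] – units of resource r not yet allocated
       claim[p][r]  – maximum demand of process p for resource r
       alloc[p][r]  – units currently allocated to process p      */
    bool is_safe(int available[NRES],
                 int claim[NPROC][NRES],
                 int alloc[NPROC][NRES]) {
        int  work[NRES];
        bool finished[NPROC] = { false };

        for (int r = 0; r < NRES; r++)
            work[r] = available[r];

        for (int done = 0; done < NPROC; ) {
            bool progress = false;
            for (int p = 0; p < NPROC; p++) {
                if (finished[p])
                    continue;
                bool can_finish = true;
                for (int r = 0; r < NRES; r++)
                    if (claim[p][r] - alloc[p][r] > work[r]) {
                        can_finish = false;
                        break;
                    }
                if (can_finish) {                 /* run p to completion ...   */
                    for (int r = 0; r < NRES; r++)
                        work[r] += alloc[p][r];   /* ... and reclaim its units */
                    finished[p] = true;
                    done++;
                    progress = true;
                }
            }
            if (!progress)
                return false;    /* the remaining processes can never finish */
        }
        return true;             /* a safe sequence exists */
    }

A request is then granted only if the state that would result still passes this check; otherwise the requesting process is suspended.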

67 Deadlock Avoidance Restrictions Maximum resource requirement must be stated in advance Processes under consideration must be independent; no synchronization requirements There must be a fixed number of resources to allocate No process may exit while holding resources

68 Deadlock Detection

69 Strategies Once Deadlock Detected Abort all deadlocked processes Back up each deadlocked process to some previously defined checkpoint, and restart all processes – the original deadlock may occur again Successively abort deadlocked processes until deadlock no longer exists Successively preempt resources until deadlock no longer exists

70 Selection Criteria for Deadlocked Processes Least amount of processor time consumed so far Least number of lines of output produced so far Most estimated time remaining Least total resources allocated so far Lowest priority

71 UNIX Concurrency Mechanisms Pipes Messages Shared memory Semaphores Signals

72 Paging Each process has its own page table Each page table entry contains the frame number of the corresponding page in main memory A bit is needed to indicate whether the page is in main memory or not
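
An illustrative page-table entry and lookup (field widths, the 4 KB page size, and the handle_page_fault helper are all assumptions of this sketch; real formats are architecture-specific):

    #include <stdint.h>

    struct page_table_entry {
        uint32_t present : 1;    /* 1 if the page is currently in main memory */
        uint32_t frame   : 20;   /* frame number of the page, when present    */
    };

    uint32_t handle_page_fault(uint32_t page);   /* assumed helper, defined elsewhere */

    /* Each process has its own array of these, indexed by page number. */
    uint32_t translate(struct page_table_entry *pt,
                       uint32_t page, uint32_t offset) {
        if (!pt[page].present)
            return handle_page_fault(page);      /* bring the page into memory first */
        return (pt[page].frame << 12) | offset;  /* assuming 4 KB pages */
    }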


74 Aim of Scheduling Response time Throughput Processor efficiency

75 Types of Scheduling

76 Decision Mode Nonpreemptive – Once a process is in the running state, it will continue until it terminates or blocks itself for I/O Preemptive – Currently running process may be interrupted and moved to the Ready state by the operating system – Allows for better service since any one process cannot monopolize the processor for very long

77 Classifications of Multiprocessor Systems Loosely coupled multiprocessor – Each processor has its own memory and I/O channels Functionally specialized processors – Such as I/O processor – Controlled by a master processor Tightly coupled multiprocessing – Processors share main memory – Controlled by operating system

78 Assignment of Processes to Processors Treat processors as a pooled resource and assign processes to processors on demand Permanently assign a process to a processor – Dedicated short-term queue for each processor – Less overhead – A processor could be idle while another processor has a backlog

79 Assignment of Processes to Processors Global queue – Schedule to any available processor Master/slave architecture – Key kernel functions always run on a particular processor – Master is responsible for scheduling – Slave sends service request to the master – Disadvantages Failure of master brings down whole system Master can become a performance bottleneck

80 Process Scheduling Single queue for all processes Multiple queues are used for priorities All queues feed the common pool of processors The specific scheduling discipline is less important with more than one processor

81 Deadline Scheduling Information used – Ready time – Starting deadline – Completion deadline – Processing time – Resource requirements – Priority – Subtask scheduler


