1
Introduction to computing and the Internet
Revision – Part A
2
Computer Memory
Main Memory: stores data currently being used; it is made of semiconductor chips. Secondary Memory: magnetic (floppy disc, hard disc) or optical (CD-ROM, DVD).
3
Arrangement of Memory Cells
Each cell has a unique address. Longer strings are stored using consecutive cells. Memory of this kind is RAM (random access memory): any cell can be accessed directly by its address.
4
Memory cells
A memory of m cells, each an n-bit cell, can hold m*n bits. In reality, most electronic memories have 8-bit cells.
5
Accessing Data in the Main Memory
Instructions and data are stored in the main memory in serial order. The CPU executes instructions one by one, from the top down.
6
CPU–Main Memory Buses
The CPU is connected to the main memory by three buses: the address bus, to identify each memory cell; the data bus, to read data from (or write data to) each cell; and the control bus, to issue read or write signals.
7
Address Decoder
Each cell has a unique address. The decoder sits between the CPU's address bus and the main memory: it takes the address of a cell and activates that cell.
8
Decoder with N Address Lines
With N address lines a0, a1, …, aN-1, the decoder can generate 2^N addresses, from 0000…0000 up to 1111…1111, and can therefore address 2^N cells.
9
Multiplexer
Cells form rows and columns, so each cell can be identified by a row address and a column address. Each of these addresses uses only N/2 address lines; this is done using multiplexed addresses.
10
Example 2: Suppose that a computer's main memory has 1013 cells. How many address lines are needed in order for all the cells to be usable? Explain your answer. Answer: with N address lines a computer can have a maximum of 2^N usable cells. 2^9 = 512 and 2^10 = 1024, so 9 address lines would not generate enough addresses, while 10 would; more than 10 address lines would generate far more addresses than needed. So the desired number of address lines is 10. In general, N = ⌈log2(1013)⌉ gives the number of address lines. If multiplexed addressing is used, then 5 address lines (carrying first the row address and then the column address) are sufficient for 1013 cells to be usable.
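As a quick check of this reasoning, here is a minimal Python sketch (the function name is just illustrative) that computes the smallest N with 2^N at least equal to the number of cells:

    import math

    def address_lines(num_cells):
        # Smallest N such that 2**N >= num_cells
        return math.ceil(math.log2(num_cells))

    lines = address_lines(1013)
    print(lines)                 # 10 address lines (2**10 = 1024 >= 1013)
    print(math.ceil(lines / 2))  # 5 lines if row/column addresses are multiplexed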
11
Summary
Main memory (RAM): low storage capacity, fast (electrical signals), volatile. Magnetic memory: floppy disk, hard disk, magnetic tape. Optical memory: CD-ROM disk, DVD.
12
Central Processing Unit - CPU
A program is a sequence of machine-code instructions. The CPU executes these instructions using the fetch-execute cycle: start, fetch the next instruction, execute it, and repeat until a halt.
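The cycle can be sketched as a small interpreter loop. The two-field instruction layout and the HALT/LOAD/ADD op-codes below are invented purely for illustration and are not a real instruction set:

    # Minimal fetch-execute sketch with a made-up op-code set.
    HALT, LOAD, ADD = 0, 1, 2

    memory = [(LOAD, 5), (ADD, 6), (HALT, 0), 0, 0, 10, 32]  # instructions, then data
    pc, acc = 0, 0                                           # program counter, accumulator

    while True:
        opcode, operand = memory[pc]          # fetch the next instruction
        pc += 1
        if opcode == HALT:                    # execute it
            break
        elif opcode == LOAD:
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]

    print(acc)  # 42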
13
Components of the CPU
Registers (program counter, accumulator, MAR, MBR, IR), the Arithmetic Logic Unit and the Control Unit.
14
Registers: the MAR stores the address of the memory cell the CPU is about to access. The MBR contains the instruction or data just read from memory, or data that is about to be written to memory. The IR holds the instruction just fetched from the main memory.
15
Program Counter - PC Holds the address of the next instruction.
16
Arithmetic Logic Unit - ALU
The ALU performs all arithmetic operations and Boolean logical operations.
17
Control Unit Controls all operations
18
ALU, Registers and Control Unit Relationships
Data are presented to the ALU in registers. The ALU performs operations and puts the results back into registers. The control unit controls these operations.
19
CPU and System Bus
Inside the CPU, the registers and the ALU operate under the control unit; the MAR connects to the address bus, the MBR connects to the data bus, and the control unit connects to the control bus.
20
Instruction Format Op-code Operands
The op-code indicates the kind of operation to be performed. An operand specifies what is to be operated on; typically it is the address of a cell where some data are stored. An instruction therefore consists of an op-code field followed by an operand field.
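For example, assuming a hypothetical 16-bit instruction with a 4-bit op-code and a 12-bit operand address (this layout is an assumption for illustration, not one defined in these notes), the two fields can be separated with a shift and a mask:

    def decode(instruction):
        # Hypothetical format: top 4 bits = op-code, low 12 bits = operand address.
        opcode = (instruction >> 12) & 0xF
        operand = instruction & 0xFFF
        return opcode, operand

    print(decode(0x3A7F))  # (3, 2687): op-code 3, operand address 0xA7F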
21
Enhancing Computer Performance
It is desirable to make computers run faster. How can this be achieved? In a computer, all information processing is done in the CPU. The speed of the CPU is the number of micro-operations it can perform per second.
22
CPU Speed
The CPU consists of a set of registers, an ALU and a control unit. CPU micro-operations are controlled by the control unit, which issues a sequence of control signals at a fixed frequency; it can do this because it is connected to a clock.
23
Clock: a clock is a microchip that regulates the timing and speed of all computer functions. It includes a crystal that vibrates at a certain frequency when electricity is applied to it. The clock transmits a regular sequence of alternating 1s and 0s; each repetition of the pattern is one clock cycle.
24
Clock speed Also called clock rate, the speed at which a microprocessor executes instructions. Every computer contains an internal clock that regulates the rate at which instructions are executed and synchronizes all the various computer components. The CPU requires a fixed number of clock cycles to execute each instruction. The faster the clock, the more instructions the CPU can execute per second. Clock speeds are expressed in Megahertz (MHz) or Gigahertz (GHz).
25
Control Unit - Clock: the control unit can issue one or more control signals in one clock cycle. This enables the CPU to perform one micro-operation per cycle, or a number of micro-operations simultaneously. Recent processors have a clock frequency of 2 GHz (2*2^30 Hz), i.e. about 2*2^30 micro-operations per second.
26
Cache and Main Memory
Words are transferred between the CPU and the cache, and blocks of words between the cache and the main memory. The CPU repeatedly accesses a particular small part of the main memory, so in a short time a copy of this portion of the main memory is kept in the cache.
27
Read and Write with Cache
Read a word from the main memory? The CPU checks whether the word is in the cache. If yes, the word is delivered to the CPU. If not, a block of the main memory containing the desired word is read into the cache and then passed to the CPU. Write data to the main memory? The CPU writes the data to the cache. Then, the cache writes the data to the main memory.
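A minimal sketch of that read path, assuming a block size of 4 words and using a dictionary to stand in for the cache (all names and values are illustrative):

    BLOCK_SIZE = 4
    main_memory = {addr: addr * 10 for addr in range(64)}  # made-up contents
    cache = {}                                             # address -> word

    def read(address):
        if address in cache:                   # cache hit: deliver the word to the CPU
            return cache[address]
        # Cache miss: copy the whole block containing the word into the cache first.
        start = (address // BLOCK_SIZE) * BLOCK_SIZE
        for a in range(start, start + BLOCK_SIZE):
            cache[a] = main_memory[a]
        return cache[address]

    print(read(13))  # miss: block 12-15 is loaded, then 130 is returned
    print(read(14))  # hit: the word is already in the cache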
28
Pipelining introduces parallelism into the sequential machine-instruction program: a number of instructions can be executed in parallel by dividing each instruction into stages.
29
Six-Stage Instruction Cycle – without pipelining
With 5 instructions A, B, C, D, E and stages S1 to S6, each instruction passes through all six stages before the next one starts: A occupies time units 1 to 6, B occupies 7 to 12, and so on, with E finishing at time unit 30.
30
Six-Stage Instruction Cycle – with pipelining
With 5 instructions A, B, C, D, E and stages S1 to S6, a new instruction enters stage S1 in each successive time unit. It takes 6 time units to finish instruction A, and each of the other 4 instructions requires 1 more time unit to finish its execution. Therefore the time required is 6 + 4 = 10 time units.
31
n-Stage Instruction Cycle
Suppose we have m instructions. Without pipelining, execution takes n*m time units; with pipelining, it takes n + m - 1 time units. Explanation of the formulas: the first instruction takes n time units to be executed completely, and each of the other (m - 1) instructions requires one further time unit to complete. Therefore the time required to execute m instructions in an n-stage cycle is n + m - 1.
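The two formulas, as a short sketch reproducing the six-stage example above:

    def time_without_pipelining(n_stages, m_instructions):
        return n_stages * m_instructions

    def time_with_pipelining(n_stages, m_instructions):
        # The first instruction takes n time units; each of the rest finishes 1 unit later.
        return n_stages + m_instructions - 1

    print(time_without_pipelining(6, 5))  # 30
    print(time_with_pipelining(6, 5))     # 10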
32
Disadvantages of pipelining (*)
Data hazards occur when data are modified, for example when an operand is modified and then read soon after: because the first instruction may not have finished writing to the operand, the second instruction may use incorrect data. Structural hazards occur when a part of the processor's hardware is needed by two or more instructions at the same time. Control hazards occur when the processor is told to branch, i.e. if a certain condition is true, jump from one part of the instruction stream to another, not necessarily the next one sequentially. In such a case the processor cannot tell in advance whether it should process the next instruction, which can result in the processor doing unwanted work.
33
Aims of RISC
Reduce the number of instructions to simplify the control unit. The chip area freed is used to provide a large number of CPU registers. A small instruction format allows fast decoding. Addressing refers to internal registers, not to the main memory, so fetching operands is faster, and the compiler generates better machine code. However, RISC programs have more instructions.
34
Aims of CISC
A large number of complex instructions. Decoding is slower, and instructions have different addressing modes, so fetching operands is complicated. However, instructions are more expressive than in RISC, and programming at assembly level is simpler. CISC programs have fewer instructions than RISC programs.
35
RISC vs CISC computers: RISC has fixed-length instructions, whereas CISC normally has variable-length instructions. RISC has more registers than CISC. RISC uses register-to-register operations for computation, and only LOAD and STORE can access memory, whereas CISC uses memory-to-memory operations. RISC usually has far fewer instructions than CISC. Unlike CISC, where more transistors are used for complex instructions, RISC uses more transistors on registers. The CISC approach attempts to minimise the number of instructions per program, sacrificing the number of cycles per instruction; RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
36
Data Representation
Integers: unsigned notation, signed magnitude notation, excess notation, two's complement notation. Fractions: IEEE floating-point notation. Characters. Colours, images and sound.
37
Unsigned notation: it represents only non-negative integers.
38
Signed magnitude representation
The most significant bit is used to represent the sign: 0 for positive integers, 1 for negative integers. The unsigned value of the remaining bits represents the magnitude. Disadvantages: two representations of 0, and arithmetic operations are difficult.
39
Excess notation: the value represented is the value of the bits read in unsigned notation with a fixed value subtracted from it. For an n-bit binary sequence the value subtracted is 2^(n-1). There is only one representation of 0, and comparison is easy.
40
Excess Notation with n bits
Decimal value in excess notation = decimal value in unsigned notation - 2^(n-1).
41
Two’s complement notation
In two's complement notation the most significant bit of an n-bit number has a contribution of -2^(n-1). There is one representation of zero, all arithmetic operations can be performed using addition and inversion, and the most significant bit is 0 for positive numbers and 1 for negative numbers.
42
Properties of Two’s Complement Notation
Positive numbers begin with 0, negative numbers begin with 1. There is only one representation of 0, e.g. 0000. There is a simple relationship between +V and -V: invert all the bits and add 1.
43
Example – 10001110
Unsigned: 10001110 = 128 + 8 + 4 + 2 = 142. Signed magnitude: the sign bit is 1, hence it is a negative number; the unsigned value of the remaining bits 0001110 is 14, so the value represented is -14. Excess notation: value = unsigned value - 2^(8-1) = 142 - 128 = 14. Two's complement: the value represented is -128 + 14 = -114. Another way is to find the positive value of its complement, 01110010 = 114; the value represented is then -114.
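The same 8-bit pattern can be interpreted under each notation directly from the definitions on the previous slides; a minimal sketch:

    bits = "10001110"
    n = len(bits)
    u = int(bits, 2)                                        # value read as unsigned

    unsigned = u                                            # 142
    signed_magnitude = (-1 if bits[0] == "1" else 1) * int(bits[1:], 2)  # -14
    excess = u - 2 ** (n - 1)                               # 142 - 128 = 14
    twos_complement = u - 2 ** n if bits[0] == "1" else u   # 142 - 256 = -114

    print(unsigned, signed_magnitude, excess, twos_complement)  # 142 -14 14 -114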
44
Floating-point representation: IEEE 754 Standard
The most important floating-point representation is defined in IEEE Standard 754. It was developed to facilitate portability from one processor to another. The IEEE Standard defines both a 32-bit and a 64-bit format.
45
Representation in IEEE 754 single precision (*)
1-bit sign, 8-bit biased exponent (bias is 127), 23-bit normalised mantissa, laid out as sign | exponent | mantissa.
46
Example 2: Represent the decimal value -6.75 in IEEE single precision
-6.75 is a negative number, so the sign bit is 1. 6.75 = 110.11 x 2^0 = 11.011 x 2^1 = 1.1011 x 2^2. The real mantissa is 1.1011, so the normalised mantissa is 1011 followed by zeros. The real exponent is 2 and the bias is 127, so the stored exponent is 2 + 127 = 129 = 1000 0001. The representation of -6.75 in IEEE single precision is therefore 1 1000 0001 1011 0000 0000 0000 0000 000 = C0D80000.
47
Example 3: Which number does the following IEEE single precision notation represent?
The sign bit is 0, hence it is a positive number. The exponent field is 1000 0010 = 130; the bias is 127, hence the real exponent is 130 - 127 = 3. The mantissa field is 0100...0; it is normalised, hence the true mantissa is 1.01 (binary) = 1.25. Finally, the number is 1.25 x 2^3 = 10.
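These decodings can be checked with Python's struct module, which interprets four bytes as an IEEE 754 single-precision value (the hex strings below are the patterns worked out in the two examples above):

    import struct

    def decode_ieee_single(hex_string):
        return struct.unpack(">f", bytes.fromhex(hex_string))[0]

    print(decode_ieee_single("41200000"))  # 10.0  (Example 3)
    print(decode_ieee_single("C0D80000"))  # -6.75 (Example 2)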
48
Exercise Assume that the hexadecimal value 80D00000 represents a floating point number in IEEE-standard single precision format. What decimal number does it represent? Show your work.
49
Representation in IEEE 754 double precision format
It uses 64 bits: a 1-bit sign, an 11-bit biased exponent (bias is 1023) and a 52-bit normalised mantissa, laid out as sign | exponent | mantissa.
50
Floating Point Representation format (summary) (*)
The sign bit represents the sign: 0 for positive numbers, 1 for negative numbers. The exponent is biased by a fixed value b, called the bias. The mantissa should be normalised, e.g. if the real mantissa is of the form 1.f then the normalised mantissa should be f, where f is a binary sequence. The layout is sign | exponent | mantissa.
51
Colour representation
Colours can be represented using a sequence of bits.
52
Image representation: an image can be divided into many tiny squares, called pixels. Each pixel has a particular colour. The quality of the picture depends on two factors: the density of pixels and the length of the word representing colours. The resolution of an image is the density of its pixels; the higher the resolution, the more information the image contains.
53
Representing Sound Graphically
X axis: time. Y axis: pressure. A: amplitude (volume). λ: wavelength (the inverse of the frequency, 1/f).
54
Sampling Sampling is a method used to digitise sound waves.
A sample is the measurement of the amplitude at a point in time. The quality of the sound depends on: the sampling rate (the faster the better) and the size of the word used to represent a sample.
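A small sketch of sampling and quantising a sine wave; the 440 Hz tone, 8000 Hz sampling rate and 8-bit sample word are arbitrary illustrative choices:

    import math

    FREQ = 440          # tone frequency in Hz (illustrative)
    SAMPLE_RATE = 8000  # samples per second
    BITS = 8            # word size used for each sample

    def sample(duration_s):
        levels = 2 ** (BITS - 1) - 1
        samples = []
        for i in range(round(duration_s * SAMPLE_RATE)):
            t = i / SAMPLE_RATE
            amplitude = math.sin(2 * math.pi * FREQ * t)   # value in [-1, 1]
            samples.append(round(amplitude * levels))      # quantise to an 8-bit word
        return samples

    print(sample(0.001))  # the first 8 samples of the digitised wave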
55
Digitizing Sound
Capture the amplitude at the sample points; all variation between the data points is lost. (Figure: a zoomed low-frequency signal.)
56
Operating System
Components: user command interface, process manager, memory manager, device manager, file manager, network manager. The operating system is responsible for resource management.
57
Operating system as a resource manager
Process manager: which program should be executed next, and how much time should be given to each program? Memory manager: how to make the best use of the available memory so as to run as many programs as possible? I/O device (e.g. printer) manager: which program should use a particular I/O device? Network manager: which computer should execute a particular program?
58
Types of operating systems
Multi-programming: the operating system can handle several programs at once. All the programs to be run are loaded into the main memory; the operating system picks one program and executes it, and once it is finished it picks another one. A program might involve an I/O operation, which is usually slow; if that happens, instead of waiting for the I/O operation to complete, the operating system starts executing another program. Time-sharing: the operating system allows many users to share the same computer and interact with it through terminals. The OS allocates a very short time to each user program and switches rapidly from one user program to another, so each user has the impression that the entire computer is dedicated to him or her. Time-sharing can also be used on a single-user computer (e.g. a PC), in which case the user can work on several programs at the same time (e.g. printing, editing, …) and the CPU switches rapidly between the programs. A time-sharing operating system is an extension of the multi-programming operating system.
59
Operating system as a process manager
The process manager coordinates the occupation of the main memory by different processes and their data. At any time the operating system may be dealing with many processes; e.g. a process may be being executed, allowed to wait in main memory, or swapped out of the main memory.
60
Processes Definition of a process Process Scheduling
Operations on Processes Cooperating Processes
61
What is a process Process – a program in execution; process execution must progress in sequential fashion. A process includes: program counter Stack data section heap
62
Process State As a process executes, it changes state
new: the process is being created. running: instructions are being executed. waiting: the process is waiting for some event to occur. ready: the process is waiting to be assigned to a processor. terminated: the process has finished execution.
63
Threads Many software packages are multi-threaded
Web browser: one thread displays images while another thread retrieves data from the network. Word processor: threads for displaying graphics, reading keystrokes from the user, and performing spelling and grammar checking in the background. A thread is sometimes called a lightweight process. It comprises a thread ID, a program counter, a register set and a stack, and it shares with the other threads belonging to the same process its code section, data section and other OS resources (e.g. open files). A process that has multiple threads can do more than one task at a time.
64
Single and Multithreaded Processes
65
Benefits Responsiveness Resource Sharing Economy
Responsiveness: one part of a program can continue running even if another part is blocked. Resource sharing: threads of the same process share the same memory space and resources. Economy: creating and managing threads is much less time-consuming than creating and managing processes.
66
Java Thread States
67
Process Control Block (PCB)
Information associated with each process. Identifier Process state Program counter CPU registers CPU scheduling information Memory-management information Accounting information I/O status information
68
CPU Switch From Process to Process
The PCB is saved when a process is removed from the CPU and another process takes its place (context switch).
69
Process Scheduling Queues
Job queue – set of all processes in the system. Ready queue – set of all processes residing in main memory, ready and waiting to execute. Device queues – set of processes waiting for an I/O device. Process migration between the various queues.
70
Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
71
Medium Term Scheduling
Time sharing Operating systems may introduce a medium term scheduler Removes processes from memory (and thus CPU contention) to reduce the degree of multiprogramming – swapping Swapping may be needed to improve the process mix or to free up memory if it has become overcommitted
72
Process Queues
This figure illustrates the job queue, the ready queue, the I/O device queues and the intermediate queue, and the interaction between them. A process is first added to the job queue. If the resources are available, it is put into main memory and joins the ready queue; it is then executed by the CPU. If it requires an I/O operation, it is put into the I/O queue of the relevant I/O device. A process requiring I/O may be temporarily moved out of the main memory onto the hard disk (joining the intermediate queue), in order to free some space in main memory for other processes to come in. When a process is completely executed, it is removed from all the queues.
73
Scheduling Criteria CPU utilization – keep the CPU as busy as possible
Throughput – the number of processes that complete their execution per time unit. Turnaround time – the amount of time to execute a particular process: waiting to get into memory + waiting in the ready queue + executing on the CPU + doing I/O. Waiting time – the amount of time a process has been waiting in the ready queue. Response time – the amount of time from when a request was submitted until the first response is produced.
74
Optimization Criteria
Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time In most cases we optimize the average measure
75
Scheduling Algorithms First-Come, First-Served (FCFS)
CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait. Example burst times: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is P1 (0–24), P2 (24–27), P3 (27–30). Waiting time for P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17.
76
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is P2 (0–3), P3 (3–6), P1 (6–30). Waiting time for P1 = 6, P2 = 0, P3 = 3; average waiting time = (6 + 0 + 3)/3 = 3, much better than the previous case. The average waiting time is generally not minimal and may vary substantially if the process CPU-burst times vary greatly.
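A sketch of the FCFS waiting-time calculation behind both orderings, assuming all processes arrive together at time 0:

    def fcfs_waiting_times(burst_times):
        # Each process waits for the total burst time of everything ahead of it.
        waits, elapsed = [], 0
        for burst in burst_times:
            waits.append(elapsed)
            elapsed += burst
        return waits

    print(fcfs_waiting_times([24, 3, 3]))  # order P1, P2, P3 -> [0, 24, 27], average 17
    print(fcfs_waiting_times([3, 3, 24]))  # order P2, P3, P1 -> [0, 3, 6],   average 3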
77
FCFS Scheduling (Cont.)
FCFS is non-preemptive, which is not good for time-sharing systems where each user needs to get a share of the CPU at regular intervals. Short (I/O-bound) processes wait for one long CPU-bound process to complete a CPU burst before they get a turn, which lowers CPU and device utilisation: the I/O-bound processes complete their bursts and enter the ready queue, leaving the I/O devices idle; the CPU-bound process completes its CPU burst and moves to an I/O device; the I/O-bound processes all quickly complete their CPU bursts and enter the I/O queue, so now the CPU is idle; the CPU-bound process completes its I/O and executes on the CPU, and the cycle repeats.
78
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time (on a tie, use FCFS). Two schemes: non-preemptive – once the CPU is given to a process it cannot be preempted until it completes its CPU burst; preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. The preemptive scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal: it gives the minimum average waiting time for a given set of processes.
79
Example of Non-Preemptive SJF
Process (arrival time, burst time): P1 (0, 7), P2 (2, 4), P3 (4, 1), P4 (5, 4). With SJF (non-preemptive) the Gantt chart is P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16), and the average waiting time = (0 + 6 + 3 + 7)/4 = 4.
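A minimal sketch of non-preemptive SJF on the arrival/burst values above, reproducing the average waiting time of 4 (ties on burst length fall back to arrival order, i.e. FCFS):

    def sjf_nonpreemptive(processes):
        # processes: dict of name -> (arrival_time, burst_time), in arrival order
        time, waits, remaining = 0, {}, dict(processes)
        while remaining:
            ready = {p: ab for p, ab in remaining.items() if ab[0] <= time}
            if not ready:                                 # CPU idle until the next arrival
                time = min(ab[0] for ab in remaining.values())
                continue
            p = min(ready, key=lambda q: ready[q][1])     # shortest next burst first
            arrival, burst = remaining.pop(p)
            waits[p] = time - arrival
            time += burst
        return waits

    procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
    print(sjf_nonpreemptive(procs))  # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} -> average 4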
80
Example of Preemptive SJF
Using the same processes, P1 (arrival 0, burst 7), P2 (2, 4), P3 (4, 1), P4 (5, 4), SJF (preemptive) gives the Gantt chart P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16), and the average waiting time = (9 + 1 + 0 + 2)/4 = 3.
81
Priority Scheduling: a priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). It can be preemptive (the priority of a process arriving at the ready queue is compared with the priority of the currently running process) or non-preemptive (the new process is put at the head of the ready queue). SJF is priority scheduling where the priority is the predicted next CPU burst time. Problem: starvation – low-priority processes may never execute. Solution: aging – as time progresses, increase the priority of the process.
82
Round Robin (RR) Each process gets a small unit of CPU time (time quantum), usually milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
83
Example of RR with Time Quantum = 20
Burst times: P1 = 53, P2 = 17, P3 = 68, P4 = 24. The Gantt chart is P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97), P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162). Typically RR gives a higher average turnaround time than SJF, but better response.
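A sketch of round robin with q = 20 on the burst times above, reproducing the Gantt chart boundaries (all processes are assumed to arrive at time 0):

    from collections import deque

    def round_robin(bursts, quantum):
        # bursts: list of (name, burst_time) in arrival order
        queue, time, schedule = deque(bursts), 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            time += run
            schedule.append((name, time))              # process and the time it leaves the CPU
            if remaining > run:
                queue.append((name, remaining - run))  # unfinished: back to the end of the queue
        return schedule

    print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20))
    # [('P1', 20), ('P2', 37), ('P3', 57), ('P4', 77), ('P1', 97),
    #  ('P3', 117), ('P4', 121), ('P1', 134), ('P3', 154), ('P3', 162)]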
84
Memory Management: when a process is executed it has to be in main memory, as main memory can be accessed more quickly. Efficient use of the main memory is an important task of the operating system, and different memory management techniques are used for this purpose.
85
Memory partition: how are processes arranged in the main memory before being executed? Fixed-sized partitions or variable-sized partitions.
86
Fixed-sized partitions
The main memory is divided into equal-sized chunks (e.g. 8M each, after the space reserved for the OS). The size of the chunks is predetermined, fixed before the process is loaded. Fixed-sized partitioning is not an efficient way of managing main memory: only a few processes will require exactly the amount of memory provided by a partition. A process of 3M will occupy one 8M chunk, and therefore 5M is wasted.
87
Variable-sized partitions
A better way of partitioning the main memory is into variable-sized partitions: each process is allocated exactly the amount of memory it requires.
88
Swapping
I/O operations are slow. If a running process requires an I/O operation, the CPU moves on to another process in the main memory. But suppose the main memory is full of processes that are all waiting on I/O: the CPU becomes idle. To solve this problem the swapping technique is used: some processes are moved temporarily out of the main memory onto the disk, to free space for a new process to come in and be executed by the CPU.
89
Without swapping, processes move from the long-term queue into main memory, run under the operating system, and leave when completed. With swapping, a medium-term queue on disk additionally holds processes that have been temporarily moved out of main memory.
90
Swapping example (memory snapshots a–h)
Processes P1, P2 and P3 are loaded (b, c, d) and each is given the exact amount of memory it requires. In (e–f), P4 needs to be loaded but there is not enough space left to accommodate it; P4 is smaller than P2, and P2 requires an I/O operation, so P2 is swapped out of memory and P4 is loaded. In (g–h), P2 becomes ready again after completing its I/O operation; P1, which now requires an I/O operation, is swapped out of memory and P2 is swapped back in. Variable-sized partitioning works well initially but gets worse quickly: the main memory becomes more and more fragmented. To solve this problem we use the paging technique.
91
Fragmentation Memory is divided into partitions
Each partition has a different size; processes are allocated space and later freed. After a while, memory will be full of small holes: there may be no free space large enough for a new process even though there is enough free memory in total. External fragmentation is unused space between partitions; if we allow free space within a partition, we have internal fragmentation, i.e. unused space within partitions.
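A small sketch of external fragmentation using a first-fit placement strategy (first-fit and the hole sizes are illustrative assumptions, not something specified in these notes): the total free space is large enough, yet no single hole can hold the new process.

    def first_fit(holes, size):
        # holes: list of free hole sizes (in MB); returns the index of the hole used, or None.
        for i, hole in enumerate(holes):
            if hole >= size:
                holes[i] -= size
                return i
        return None

    holes = [3, 2, 4, 3]               # 12 MB free in total, scattered between partitions
    print(first_fit(holes, 6))         # None: externally fragmented, no single 6 MB hole
    print(first_fit(holes, 4), holes)  # 2 [3, 2, 0, 3]: a 4 MB request fits in the third hole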
92
Problems with swapping
Swapped processes are I/O-bound processes, and I/O is slow; the swapping itself is slow as well, and internal fragmentation remains. Solution: reduce the amount of code that needs to be swapped – paging.
93
Paging: a program is divided into small fixed-sized chunks (pages).
Main memory is divided into small fixed-sized chunks (frames). A page is stored in one frame, so a program is stored in a set of frames, and these frames do not need to be contiguous.
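A sketch of the page-to-frame translation, assuming a page size of 1024 words and a hypothetical page table (both are illustrative values): a logical address is split into a page number and an offset, and the page number is replaced by the frame number.

    PAGE_SIZE = 1024

    # Hypothetical page table: page number -> frame number (frames need not be contiguous).
    page_table = {0: 5, 1: 9, 2: 2, 3: 7}

    def translate(logical_address):
        page, offset = divmod(logical_address, PAGE_SIZE)
        frame = page_table[page]          # raises KeyError if the page is not loaded
        return frame * PAGE_SIZE + offset

    print(translate(3100))  # page 3, offset 28 -> frame 7 -> physical address 7196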
94
Simple paging is not efficient
It is better than fixed- and variable-sized partitions, but the OS loads all pages of a process into the main memory. However, not all pages of a process need to be in main memory for it to be executed: the OS can still execute a process if only some of its pages are loaded – demand paging.
95
Demand paging Operating system – loads a page only when it is required
No swapping in or out of unused pages is needed, giving better use of memory. The CPU can access only a number of pages of a process at one time; it then asks for more pages to be loaded.
96
Small vs large pages
Large pages – advantages: a smaller page table, fewer page faults, less overhead in reading/writing pages; disadvantages: more internal fragmentation and worse locality of reference. Small pages – advantages: less internal fragmentation and better locality of reference; disadvantages: a bigger page table, more page faults, and more overhead in reading/writing pages.
97
OS - I/O management There are four main I/O operations.
Control: tell the device to perform some action (e.g. rewind a tape). Test: check the status of the device. Read: read data from the device. Write: write data to the device.
98
I/O Modules
The CPU and main memory are linked to I/O devices through the system bus, via I/O modules. An I/O module is just an interface between the CPU and main memory on one side and an I/O device on the other. The data transfer rate of I/O devices is much lower than that of the CPU and main memory, and different I/O devices use different data formats and word lengths; I/O modules facilitate the communication between the CPU and an I/O device. They can be used in three ways: programmed I/O, interrupt-driven I/O and direct memory access.
99
Advantages of I/O modules
They allow the CPU to view a wide range of devices in a simple, uniform way: the CPU does not need to know the details of timing, formats or electro-mechanics, and only needs to issue simple read and write commands. They also help the CPU to work more efficiently. There are three ways in which I/O modules can work: programmed I/O, interrupt-driven I/O and direct memory access.
100
Programmed I/O
The CPU sends an I/O command to the I/O module and then waits, repeatedly checking the status, until the I/O operation is completed – poor performance. For example, consider using programmed I/O to read in a block of data consisting of many words, where each instruction reads one word and writes it to the main memory. The CPU first issues a read command to the I/O module. The I/O module then obtains a word from the I/O device; this takes some time, and when it is done the word is put in a register in the I/O module that is linked to the data bus. The CPU reads the word from the I/O module, writes it to memory, and moves on to the next instruction. This procedure continues until the whole block of data has been transferred to the main memory. A great deal of CPU time is wasted when a large block of data is transferred.
101
Interrupt-driven I/O: the CPU issues a command to the I/O module and then gets on with executing other instructions. The I/O module interrupts the CPU when it is ready to exchange data, and the CPU then executes the data transfer. Most computers have interrupt lines to detect and record the arrival of an interrupt request.
102
Interrupt-driven I/O (continued)
When the CPU wants to read a word from an I/O device, it issues a read command to the I/O module and then goes on doing other things. When the I/O module finishes reading the word from the device, it sends an interrupt signal to the CPU. The CPU then comes back, gets the word from the I/O module and writes it to the main memory. In this way the CPU does other work while the I/O operation is taking place, which is more efficient than programmed I/O. However, the CPU still has to perform the data transfer itself, which is slow when transferring a large block of data.
103
Direct-memory-access - DMA
A special-purpose processor that handles data transfer. The CPU issues to the DMA: the starting address in main memory to read from or write to, the starting address in the I/O device, and the number of words to be transferred. The DMA then transfers the data without intervention from the CPU and sends an interrupt to the CPU when the transfer is completed.
104
DMA/CPU - bus system
The DMA takes care of the data transfer, leaving the CPU free to do other jobs. However, they cannot use the bus at the same time: the DMA can use the bus only when the CPU is not using it, and sometimes it has to force the CPU to release the bus – this is called cycle stealing.
105
DMA/CPU: the DMA controller, the CPU, the main memory and the I/O modules (with their I/O devices) all share the system bus.
106
Summary: OS memory manager and OS I/O manager
Memory manager – fixed-sized partitions: waste of memory; variable-sized partitions: fragmentation; swapping: time wasted swapping whole processes; simple paging: the process is divided into pages and loaded into main memory (which is divided into frames); demand paging: only the required pages are loaded into main memory. I/O manager – programmed I/O: the CPU wastes time waiting for the I/O operation; interrupt-driven I/O: the CPU is still responsible for the data transfer; DMA: takes care of the data transfer instead of the CPU.