
1 Component 2 6L, M, N, O, P

2 Assessment Outcomes
6L - Explain the reasons for, and possible consequences of, partitioning of main memory
6M - Describe methods of data transfer including the use of buffers to allow for differences in speed of devices
6N - Describe the principles of high level scheduling: processor allocation, allocation of devices and the significance of job priorities
6O - Explain the three basic states of a process: running, ready and blocked
6P - Explain the role of time-slicing, polling and threading

3 Main memory
Main memory is the memory that is directly accessible to the processor. This means that it needs to be very fast so that it does not slow down the processing of the computer.
ROM - is a permanent storage area for special programs and data that have been installed during the process of computer manufacture. The contents cannot be altered by software: once data has been written onto a ROM chip it cannot be removed and can only be read. For example, a pocket calculator ROM contains 1,024 bits of permanently stored instructions for operating the calculator.
BIOS - is a part of ROM that stores critical programs such as the program that boots (starts) the computer.
RAM - constitutes the working area of the computer and is used for storage of the program(s) and data currently in use. RAM is volatile, which means the current contents are lost when power is removed or when different programs and data are entered. RAM is directly written to and read by the processor.
Cache - holds frequently used code and data. It is very fast and is accessed directly by the processor, so that the processor does not have to waste time waiting for the main memory to respond to a request for data.

4 Partitioning Main Memory
Main memory typically is split into two partitions. One, called Low Memory, is where the operating system is placed. The other, called High Memory, is where the processes that need to be run are put. Processes are simply the jobs that have to be carried out by the Central Processing Unit. These will include running applications but also jobs that have to be done in the background, such as managing the input and output to a printer or speakers. All the processes that need to be run are put in a queue.

5 Partitioning Main Memory
Operating systems can manage the main memory in a number of ways, including using partitions, paging and segmentation. With partitions, the main memory is divided up into areas, which can be a fixed size or a variable size. Each process in the queue is allocated to one partition, assuming that the process can fit in the partition. If it can't, then parts of the process have to be switched in and out of the partition as and when necessary. When a partition is free, another process from the queue is passed to the partition, until the process is finished and the partition is freed up again. 
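The allocation scheme above can be sketched in a few lines of Python. This is an illustrative model only, not how a real operating system stores its partition table; the partition sizes, process sizes and the best-fit choice are assumptions for the example.

```python
# Toy model of fixed-size partitions: each entry is (size, occupant).
# Best fit: pick the smallest free partition the process fits in.

def allocate(partitions, process_size):
    """Return the index of the chosen partition, or None if nothing fits."""
    best = None
    for i, (size, occupant) in enumerate(partitions):
        if occupant is None and size >= process_size:
            if best is None or size < partitions[best][0]:
                best = i
    if best is not None:
        partitions[best] = (partitions[best][0], process_size)
    return best

partitions = [(100, None), (500, None), (200, None)]  # sizes in KB, invented
idx = allocate(partitions, 150)   # fits the 200 KB partition best
assert idx == 2
```

Best fit keeps the larger partitions free for larger processes; other simple strategies such as first fit also exist, with different fragmentation trade-offs.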

6 Partition inefficiencies
A process will use up an entire partition, whether it needs all of it or not. This is one inefficiency of this method, although having a number of different-sized partitions for the operating system to select from can reduce the problem, because a process can then be assigned to the smallest partition it will fit in. As processes are loaded into and removed from partitions, the memory can become fragmented and some areas of memory will be left unused. This is an inefficient use of resources.

7 Page Tables Another method of managing the memory is to use paging:
A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Each application will have associated with it a page table. This will identify each page for each application and will contain information about where to find that page if it is on the hard disk in virtual memory. If a page is in RAM then the entry for that particular page will be blank. When a page is needed, the operating system looks up the page reference in the page table, finds out where it is in virtual memory and then transfers it from the hard disk to the primary memory.
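The lookup described above can be modelled as a small table. This is a toy sketch following the slide's simplified model (a blank entry means the page is already in RAM); the page numbers and disk block numbers are invented for illustration.

```python
# Toy page-table lookup: an entry holds the page's location on disk,
# and a blank (None) entry means the page is already in RAM.

def fetch_page(page_table, ram, page):
    """Return True if a disk transfer was needed to bring the page in."""
    location = page_table[page]
    if location is None:          # blank entry: page already resident
        return False
    ram[page] = f"contents loaded from disk block {location}"
    page_table[page] = None       # now in RAM, so blank the entry
    return True

page_table = {0: None, 1: 37, 2: 90}   # pages 1 and 2 live on disk
ram = {0: "resident"}
assert fetch_page(page_table, ram, 1) is True
assert fetch_page(page_table, ram, 1) is False  # second access: no transfer
```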

8 Page Table Inefficiencies
Paging is a simple system to implement but may not be the most efficient, because the pages are of a fixed size. A page may only hold a small amount of data but will still take up a whole page of memory. Pages will also contain code that may not form a logical group (for example, a logical block of code such as a function might run across 6 different pages). Loading that particular function then requires 6 pages to be loaded, which is less efficient than loading, say, a single page holding the whole function. Page tables are also typically large, and accessing them slows down the computer.

9 Segmentation An alternative approach is to use segmentation. Instead of pages being a fixed size, they are a variable size. Because their size can vary, code can be logically grouped into one segment. You could have a segment per procedure, for example, and this would reduce the size of the tables needed to store the segment information.

10 Segmentation To summarize:
Code segment - where the program code is stored.
Data segment - where variables defined by the code are stored whilst the application is running.
Stack segment - starts from the top of the segment and grows downwards. The stack is used by sub-routines and interrupt service routines to hold temporary data and addresses.
There can be many segments in memory at the same time. Each one is a separate process or application, and each may be a different size.

11 Segmentation Drawback
The downside to segmentation is that retrieving a segment is more complicated. You need to store both the start address of a segment and the size of the segment. Start addresses have to be calculated with care because the sizes of each segment varies. With paging, it is entirely predictable where each page starts and finishes.
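Looking up an address under segmentation can be sketched as follows, assuming each table entry holds a start address and a size, as the slide describes. The segment names and values are invented for illustration.

```python
# Segment address translation: each segment stores a start address (base)
# and a size (limit). An offset beyond the limit is a protection fault.

def translate(segment_table, segment, offset):
    base, size = segment_table[segment]
    if offset >= size:
        raise ValueError("offset outside segment")  # protection fault
    return base + offset

# Invented layout: segments of different sizes packed one after another.
segment_table = {"code": (0, 400), "data": (400, 250), "stack": (650, 100)}
assert translate(segment_table, "data", 10) == 410
```

Notice that the base of each segment depends on the sizes of those before it, which is exactly why start addresses must be calculated with care.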

12 Virtual Memory
RAM is the primary memory of your computer, but RAM is a fixed size and sometimes your computer needs more RAM than is available. When a program is loaded, a piece of software called the 'loader' has to:
Transfer the application into the primary memory (RAM)
Adjust RAM references (i.e. RAM addresses and locations)
If you are multi-tasking, however, then you may have a number of applications and a number of files open (i.e. in the RAM) at the same time…

13 Virtual Memory How can more than one program be running in memory at the same time, even though there isn't enough primary memory to run both programs? The operating system relies on ‘virtual memory’. It relies on the fact that, even if you have 5 applications open at the same time, only one of them is ‘active’, or being serviced by the CPU, at any one time. In addition, you often don’t actually need the entire code for an application in RAM at any one moment. For example, if you are not using some clipart that comes with a word processing application, then the clipart doesn’t need to be in RAM, although if you do start using it then it must be moved into RAM.

14 Virtual Memory – A Process Line
All programs are split up into equal lengths by the operating system. These equal lengths are known as pages. A page might be 32 Kbytes long, for example. The RAM is also split up into pages. The computer keeps as many pages as it can in RAM, especially those needed to run the active application.
Suppose, for example, that the word processing software is the active application. The operating system, the virus checker, all of the word processing application and 3 Mbytes of a 5 Mbyte spreadsheet application are kept in RAM. The remaining 2 Mbytes of the spreadsheet software can't fit into RAM, so they are stored, in page-size blocks, on backing storage (the hard disk) in an area called the page file. In Windows, this is also known as the swap file.
What happens when you switch from the word processor to the spreadsheet, so that the spreadsheet becomes the active application? Some of the pages that aren't needed for a while (in this case some of the word processing pages) are moved out of RAM and onto the hard disk, and the spreadsheet pages that are now needed are moved from backing storage into RAM. This takes a little time, because the hard disk is very slow compared to the CPU. Have you ever noticed when multitasking that it can take a while (sometimes ages) before you can actually use your computer after switching between applications? Once the pages have been swapped, you can start using the spreadsheet. If you swap back again, the process is repeated.
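The swapping just described can be modelled with a small least-recently-used sketch. Real operating systems use more sophisticated replacement policies; the page names and RAM capacity here are illustrative assumptions.

```python
from collections import OrderedDict

# Toy swapping model: RAM holds a fixed number of pages; touching a page
# that is not resident evicts the least recently used page to the page file.

def touch(ram, page_file, page, capacity):
    """Access a page; return the page swapped out to disk, or None."""
    evicted = None
    if page in ram:
        ram.move_to_end(page)                    # mark as recently used
    else:
        if len(ram) >= capacity:
            evicted, data = ram.popitem(last=False)  # oldest page out
            page_file[evicted] = data
        ram[page] = page_file.pop(page, f"page {page}")
    return evicted

ram, page_file = OrderedDict(), {}
for p in ["wp1", "wp2", "ss1"]:                  # word processor + spreadsheet
    touch(ram, page_file, p, capacity=3)
assert touch(ram, page_file, "ss2", capacity=3) == "wp1"  # wp1 swapped out
```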

15 Virtual Memory Drawbacks
As has already been highlighted, swapping pages in and out of the hard disk takes time! 'Disk thrashing', as this constant swapping is known, causes the computer to run slowly. The more RAM you have, however, the more pages you can keep in it and the less swapping you need to do. Ideally, you want enough RAM to run your applications without using virtual memory at all. In the example above, only 2 Mbytes had to be held in virtual memory, so the computer would be unlikely to slow down too much. Imagine, though, if the open applications needed far more memory than the 32 Mbytes of RAM available! You would recommend a user in this situation to go out and buy some more RAM, quickly!
You should also run a 'defragmentation utility' regularly on your computer's hard disk. As you use a hard disk, files get split up and stored all over the disk, which can eventually mean that the pages used in virtual memory can't be stored together. Accessing pages scattered across a hard disk is slower than accessing pages stored together, so running the defragmenter regularly keeps your hard disk working at its optimum performance.

16 Buffering When two devices working at different speeds try to communicate, they have to do so at the speed of the slowest device. This is not good because the CPU gets tied up managing the transfer of a constant stream of data to a device, a printer for example. That means it can't work on other tasks that may be more urgent than mere printing! However, by using a 'buffer', the problem of working at the speed of the slowest device and slowing the CPU down can largely be overcome. A buffer is simply some memory that improves the efficiency of data transfer between two devices working at different speeds by allowing big blocks of data to be collected together and then sent at once rather than as a stream of data that needs constant CPU management time.

17 Printer buffer example
Take a non-buffered setup:
The CPU sends data to the printer (quickly)
The printer receives data (slowly)
The CPU waits for the printer to catch up
The printer is still receiving data
The CPU is still waiting
The printer processes the data (slowly)
The printer needs more information, so asks the CPU for something else
The CPU sends it (quickly)
Etc. Until the end of time.

18 Printer buffer example
But with a buffer…
The CPU sends data to the buffer
The buffer stores the data
The CPU does ANOTHER TASK
The printer fetches data from the buffer
The CPU is still doing OTHER TASKS
The printer can take as long as it likes…
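The buffered steps above can be sketched as follows. This is a deliberately simplified, single-threaded model: in reality the CPU and printer work concurrently, and the data blocks here are invented.

```python
from collections import deque

# Minimal buffered-transfer sketch: the CPU deposits all its data in the
# buffer in one quick burst and moves on, while the printer drains the
# buffer at its own (slower) pace.

def buffered_print(data_blocks):
    buffer, log = deque(), []
    for block in data_blocks:          # CPU fills the buffer quickly...
        buffer.append(block)
    log.append("CPU free for other tasks")
    while buffer:                      # ...printer empties it slowly
        log.append(f"printed {buffer.popleft()}")
    return log

log = buffered_print(["page 1", "page 2"])
assert log[0] == "CPU free for other tasks"
```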

19 Buffers and interrupts
A complication can arise, however, if you need to send a file to a printer but the whole file won't fit into the buffer because the file is too big for it. To deal with this situation, we use interrupts. These are simply signals sent to the CPU, telling it to stop what it is doing and give some time to the device that sent the interrupt.

20 Double Buffering The above model can be improved using double buffering. This simply means having two buffers rather than one, and whilst one is emptying, the other is filling up. When the first one empties, the second one starts to empty and the first one starts to fill up. This is used anywhere where a lot of data has to be transferred and it is important to do this as quickly as possible. Typical examples apart from printing would include media streaming, where one video stream is being sent to the graphics and sound card from one buffer whilst another stream is filling up a second buffer, ready to be used when the first buffer is empty, and transferring data from camcorders to a computer.
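Double buffering can be sketched as a pair of buffers that swap roles whenever the one being filled becomes full. This is an illustrative model; the chunk values and buffer size are assumptions.

```python
# Double-buffering sketch: the producer fills the back buffer while the
# consumer drains the front buffer; when the back buffer is full, the two
# swap roles.

def double_buffer(chunks, size=2):
    front, back, out = [], [], []
    for chunk in chunks:
        back.append(chunk)             # producer fills the back buffer
        if len(back) == size:
            front, back = back, []     # swap: the full buffer becomes front
            out.extend(front)          # consumer drains the front buffer
    out.extend(back)                   # flush any partly filled final buffer
    return out

assert double_buffer([1, 2, 3, 4, 5]) == [1, 2, 3, 4, 5]
```

The consumer never waits for the producer as long as one buffer can be filled in the time it takes to empty the other, which is the point of the technique in media streaming.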

21 Scheduling
CPUs can carry out instructions very quickly, although it is important to recognise that they can only ever work on one task, or process, at a time. The organisation of CPU time is another responsibility of the operating system, and more specifically of a piece of software within the operating system known as the 'scheduler'. A computer that is capable of working on more than one application at apparently the same time is also known as a 'multiprogramming environment'. Scheduling is the name given to the process of managing CPU time. The aims of scheduling are to:
Make sure the CPU is working as hard and as efficiently as possible
Make sure resources such as printing are used as efficiently as possible
Ensure that users on a network think they are the only ones on the system

22 Purpose of Scheduling
The scheduling part of the operating system has the following duties:
Process as many jobs as possible
Make maximum use of CPU time
Be responsive to the user, so they are unaware of any delay to their process
Make maximum use of resources such as input-output devices
Be fair to all jobs - none left stranded for too long
Be able to prioritise jobs
Be able to alter priorities according to rules built into the scheduler
Avoid 'deadlock'
As you can see from the list above, the scheduler has many duties to fulfil. The ideal scheduler would make 100% use of the CPU, leave no jobs hanging around, never let the user become aware of a time delay, and leave no resource idle while a process is waiting to use it.

23 High Level Scheduler The high-level scheduler is responsible for organising all of the processes that need servicing into an order. As each job enters the system, the high-level scheduler places it in the right place in a queue known as the READY TO PROCESS queue. The actual order is based on a 'scheduling algorithm' - in other words, some rules about how to order the processes. Some different scheduling algorithms are discussed later in this section. The high-level scheduler then moves new processes from the READY TO PROCESS queue into the primary memory if it can.

24 Medium Level Scheduler
If a process is in the RUNNING state (it is in the middle of being serviced by the CPU) and it suddenly needs to use a peripheral device, it is taken out of the RUNNING state and put into the BLOCKED queue. If that process stayed in the running state while waiting for I/O, the CPU would sit around wasting time, which isn't an efficient use of the CPU! A process in the blocked state can be moved out of the IAS (Immediate Access Store - the main memory) and into backing storage if necessary, to free up memory. When the resources are available to service a process held in the backing store, it can be moved back into the IAS. It is the job of the medium-level scheduler to move processes into and out of the IAS as necessary.

25 Low Level Scheduler Once the processes are in the IAS, the low-level scheduler takes control. This piece of software is responsible for actually selecting and running a process. It looks at the order the high-level scheduler has put the processes in by looking at the READY TO PROCESS queue. It then checks that any necessary resources are available for the most important process in the queue and runs it using the CPU - it takes the process from the READY TO PROCESS queue and puts it into the RUNNING state. The process will stay in the running state until an interrupt happens, the process finishes, or the process needs resources that are unavailable, for example a printer. The low-level scheduler will then select the next most appropriate process to give to the CPU.
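The movements between the READY, RUNNING and BLOCKED states described above can be sketched as a small table of legal transitions. The state names follow the slides; the code itself is an illustrative model, not a real scheduler.

```python
# Legal transitions in the three-state process model.
VALID = {
    ("ready", "running"),    # low-level scheduler dispatches the process
    ("running", "ready"),    # time slice used up or an interrupt occurred
    ("running", "blocked"),  # process must wait for a device
    ("blocked", "ready"),    # the awaited resource became available
}

def move(state, new_state):
    """Apply a transition, rejecting anything the model forbids."""
    if (state, new_state) not in VALID:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "ready"
for step in ["running", "blocked", "ready", "running"]:
    s = move(s, step)
assert s == "running"
```

Note that a blocked process can never go straight back to running: it must rejoin the READY queue and be dispatched again by the low-level scheduler.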

26 Processor Allocation

27 Allocation of Devices As well as allocating segments or pages within the primary memory, the devices that are connected to the computer need to be managed - for example, the graphics card, sound card or speakers. The computer needs to determine which process requires each device and schedule things so that the device is free at the same time as the process is being run. This can mean that device drivers also need to be loaded into primary memory to facilitate the running of the device, and this may cause the original process to be blocked temporarily whilst the other process runs.

28 Processing State: Blocked
The task of the CPU is to process data and follow instructions. It must be able to react to events as they occur, regardless of what it is doing at the time. For instance, input-output devices need to be handled as and when they need attention: hard disks, optical drives, mice, keyboards, printers, monitors and other I/O hardware. One method of doing this is for the CPU to keep checking each device to see if it needs attention. This method is called 'polling'. Imagine the CPU as an annoying little child sitting in the back seat of the car, who keeps asking "Are we there yet? Are we there yet? Are we ..." You get the idea: the child is 'polling' the input-output device (the parent). This works, but it is horribly inefficient and wastes a lot of time and effort. A much better (and more peaceful) method is for the child to issue an instruction: "Please tell me when we have arrived; meanwhile I can play with my toys". The parent and child are now using the 'interrupt' method. When they arrive, a message (an interrupt) is sent to the child (the CPU) to inform him that the event has happened; he then puts away his toys (saves his current state) so that he can return to them later, and prepares to leave the car (the new task).

29 Time-Slicing This approach makes use of a special form of queue called a FIFO, meaning 'First In, First Out'. This is quite a fair approach - the process that has been waiting the longest gets the next turn. Each process is allowed to run for a fixed amount of CPU time (a time slice) and then goes to the back of the queue once again. This arrangement is known as Round Robin scheduling, and it is a good algorithm if every job is more or less equally important. The downside of the Round Robin approach is that it takes no account of job priority. For example, a top-priority process might depend on reacting to some timed event, such as data logging, and yet it is made to wait its turn behind less important processes. So for urgent jobs, Round Robin is not the best way to go, although it does have the advantage of being very simple to implement.
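A Round Robin time-slicer can be sketched with a FIFO queue, as described above. The process names, burst times and quantum are invented for illustration.

```python
from collections import deque

# Round Robin sketch: each process runs for a fixed quantum, then rejoins
# the back of the FIFO queue until its remaining time reaches zero.

def round_robin(bursts, quantum):
    """Return the order in which processes receive the CPU."""
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # back of the queue again
    return order

# Invented burst times (units of CPU time) and a quantum of 2 units.
order = round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=2)
assert order == ["P1", "P2", "P3", "P1"]
```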

30 Priority Scheduling Unlike the Round Robin approach, priority scheduling tries to take account of the relative priority of the jobs in the queue. Consider two of the jobs sitting in the READY queue: Process 1 and Process 4. With a priority scheduling algorithm, Process 1 would be the next to run because it has a higher priority than Process 4. This approach is very good for making sure that the most vital jobs get to run first. However, it must also deal with low-priority jobs, otherwise they may never get a look-in. To prevent this, the scheduler may gradually bump up the priority of the lower-priority jobs to ensure that they will eventually run.
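Priority scheduling with the 'bump up' (aging) idea mentioned above can be sketched with a heap. Lower numbers mean higher priority here; the job names, priority values and aging rule are illustrative assumptions.

```python
import heapq

# Priority-scheduling sketch with simple aging: the highest-priority job
# runs next, and every job still waiting gets its priority bumped up
# (number decreased) so that low-priority jobs cannot starve.

def run_all(jobs):
    """Return the order in which the jobs run."""
    heap = [(priority, name) for name, priority in jobs.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
        heap = [(p - 1, n) for p, n in heap]   # age the waiting jobs
        heapq.heapify(heap)
    return order

assert run_all({"P1": 1, "P4": 5, "P2": 3}) == ["P1", "P2", "P4"]
```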

31 Threading A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within one process: the CPU switches rapidly back and forth among the threads, giving the illusion that they are running in parallel. Like a traditional process (i.e. a process with one thread), a thread can be in any of several states: Running, Blocked, Ready or Terminated. Each thread has its own stack, because each thread will generally call different procedures and so have a different execution history. In an operating system with a thread facility, the basic unit of CPU utilisation is a thread, which consists of a program counter (PC), a register set and a stack space. Threads are not independent of one another in the way that processes are: the threads within a process (also known as a task) share its code section, its data section and OS resources such as open files and signals.
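The point that threads within one process share the same data section can be demonstrated with Python's standard threading module. The counter and the increment counts are invented; the lock shows why shared data needs synchronisation.

```python
import threading

# Two threads in the same process update one shared counter. Each thread
# has its own stack, but they share the data section, so a lock protects
# the shared variable while it is being updated.

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:                 # shared data needs synchronisation
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # wait for both threads to finish
assert counter == 2000             # both threads saw the same variable
```

Two separate processes running this code would each have their own copy of `counter`; only threads see one another's updates directly, which is exactly the sharing the slide describes.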

