Operating System, Session 1
Hakim Sabzevari University, Dr. Malekzadeh
Grading:
Midterm exam: 35% (1/9/1394)
Final exam: 35%
Quiz & assignments: 10%
Project: 30%
Textbooks:
Operating System Concepts, Abraham Silberschatz (9th Edition)
Operating Systems Design & Implementation, Andrew Tanenbaum (3rd Edition)
Operating Systems Internals and Design Principles, William Stallings (6th Edition)
The Parseh OS book by Dr. Haghighat (in Persian)
Fundamentals of operating systems
Types of computers
Computers can be broadly classified by their speed, use, size, and computing power:
Microcomputers/PCs
Minicomputers
Mainframes/macrocomputers
Supercomputers
The overlap between two adjoining classes reflects the fact that the high end of a smaller class of computer may have capacity equivalent to the low end of the next bigger class. For example, a highly configured microcomputer may be as good as a small minicomputer.
Micro-computers
A microcomputer is used by one user at a time (a single-user computer system) and has a moderately powerful microprocessor. Microcomputers can be classified as:
Desktop computers
Laptops
Personal digital assistants (PDAs)
Tablets
Smartphones
Calculators
Gaming consoles
Car navigation systems
Mini-computers
A minicomputer is a multi-user, multi-processing system that can serve up to 200 users simultaneously. A minicomputer is larger than a PC but generally smaller than a mainframe. Individual departments of a large company or organization use minicomputers for specific purposes. For example, a production department can use a minicomputer to monitor a production process.
Minicomputers cont…
Previously, minicomputers were considered superior to personal systems, and they were also used as main servers for local area networks of microcomputers. These days, however, advances in technology have made minicomputers almost obsolete, because today's PCs are highly advanced. Examples of minicomputers are:
K-202
Texas Instruments TI-990
SDS-92
Mainframe
Mainframe computers are large, powerful computers capable of handling processing requests from large numbers of users simultaneously. For this reason, banks, educational institutions, and insurance companies use mainframe computers to store data about their customers, students, and policy holders. Traditionally, users would connect to a mainframe using a terminal: a device with a screen and keyboard for input and output that does no processing of its own (terminals are also called dumb terminals since they cannot process data on their own).
Mainframe cont…
A simple example of a mainframe application is the automated teller machine (ATM), which interacts with the corresponding bank account. Another example is the point of sale (POS): when we swipe a credit card, the information is passed via the POS terminal to a mainframe for further processing. Mainframes are used mainly for their capacity to handle millions of transactions per second. Examples of mainframe computers are:
Fujitsu's ICL VME
Hitachi's Z800
Supercomputer
Supercomputers are very large computers with enormous computing power and huge RAM. What distinguishes a supercomputer from the others is its use of massively parallel processing. For example, one supercomputer contains 5,800 processors and more than a terabyte of RAM. Supercomputers are extremely fast, performing hundreds of millions of instructions per second. They are very expensive and are employed for specialized applications that require enormous amounts of mathematical calculation.
Supercomputer cont…
Examples of such applications: weather forecasting, scientific simulations (molecular modeling), animated graphics, petroleum research, nuclear energy research, cryptanalysis, electronic design, and analysis of geological data. Examples of supercomputers are:
IBM's Sequoia, in the United States
Fujitsu's K Computer, in Japan
IBM's Mira, in the United States
IBM's SuperMUC, in Germany
NUDT Tianhe-1A, in China
Performance Metrics of Speed
Rate/data rate/speed: units of work per unit of time. We need to use a metric appropriate to the workload. For example:
Frames/second
Samples/second
Images/second
Bits/second
Bytes/second
Millions of bytes/second
Instructions/second
Millions of instructions/second
For example, for image processing, instructions/second is not an appropriate metric; frames/second is better.
CPU Performance Metrics: IPS
Instructions per second (IPS) is a measure of CPU speed:
KIPS: thousand instructions per second
MIPS: million instructions per second
GIPS: giga (billion) instructions per second
CPU instruction rates are different from clock frequencies (usually reported in Hz), since each instruction may require several clock cycles to complete, or the processor may be capable of executing multiple independent instructions at once.
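As a rough illustration of that last point, the instruction rate can be estimated from the clock frequency and the average cycles per instruction (CPI). A minimal sketch; the clock rate, CPI, and IPC values below are made-up numbers for illustration only:

```python
# Rough relationship between clock rate and instruction rate:
# IPS = clock_rate / average_cycles_per_instruction (CPI).
clock_hz = 3.0e9   # a hypothetical 3 GHz clock
cpi = 1.5          # assumed average cycles per instruction

ips = clock_hz / cpi
print(f"{ips / 1e6:.0f} MIPS")               # -> 2000 MIPS

# A superscalar CPU retiring 4 instructions per cycle instead:
ipc = 4.0
print(f"{clock_hz * ipc / 1e9:.0f} GIPS")    # -> 12 GIPS
```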
CPU Performance Metrics: OPS
MOPS: million operations per second
FLOPS: floating-point operations per second
MFLOPS: megaFLOPS (million floating-point ops per second)
GFLOPS: gigaFLOPS
TFLOPS: teraFLOPS
PFLOPS: petaFLOPS
EFLOPS: exaFLOPS (expected by 2020)
PC performance is measured in GFLOPS; supercomputer performance is measured in TFLOPS.
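A crude way to make these units concrete is to time a batch of floating-point operations. This toy measurement (assuming numpy is available) includes interpreter overhead, so it lands far below the hardware's peak FLOPS; it is only a sketch of what the unit means:

```python
# Toy FLOPS estimate: time a vectorized multiply-add over a large array.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a * b + a          # 2 floating-point operations per element
elapsed = time.perf_counter() - start

flops = 2 * n / elapsed
print(f"~{flops / 1e9:.2f} GFLOPS")
```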
Know more! KiB, MiB
KiB: kibibyte (kilo binary byte)
MiB: mebibyte (mega binary byte)
These show the difference between power-of-2 and power-of-10 units:
MiB = mebibyte = 1,024 KiB
KiB = kibibyte = 1,024 bytes
MB = megabyte = 1,000 KB
KB = kilobyte = 1,000 bytes
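A few lines of arithmetic make the gap between the two unit systems visible:

```python
# Power-of-2 (IEC) vs power-of-10 (SI) size units.
KB, MB = 10**3, 10**6        # decimal units
KiB, MiB = 2**10, 2**20      # binary units

print(MiB - MB)              # 48576 bytes of difference per "mega"
print(f"{MB / MiB:.3f}")     # 0.954: a 1 MB file is ~0.954 MiB
# This is why a "500 GB" disk shows up as roughly 465 GiB in the OS.
```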
Layers of a computer system
A computer system consists of several layers:
Hardware: provides the basic computing resources (CPU, memory, I/O devices).
Users: people, other computers.
Application software (e.g., a web browser) and system software/utilities/tools (e.g., compiler, linker): define how the hardware is used to solve the users' computing problems.
Operating system: controls and coordinates the use of the hardware among the various application programs for the various users.
Layers of a computer system cont…
From top to bottom: Application Programs, System Programs, Operating System, Machine Language, Hardware.
Operating System definition: user view
An operating system is an interface between the user and the hardware of a computer system. We need an OS for:
Convenient and efficient use of the computer's hardware resources (e.g., through a GUI)
Sharing of limited or expensive physical resources
Operating System definition: system view
The operating system controls and coordinates the use of resources among various programs for various users. Resources can be:
Physical: CPU, memory, I/O devices (storage, modem, monitor, etc.)
Logical: files, data
Control (control program): the OS controls the execution of programs to ensure proper use of the hardware resources and to prevent errors.
Coordination (resource allocator): the OS acts as a manager of the underlying resources and allocates/de-allocates them efficiently and fairly.
Evolution of Operating Systems
A computer generation, in computer terminology, is a change in the technology with which computers are built. Initially, the term generation was used to distinguish between varying hardware technologies, but nowadays a generation includes both hardware and software, which together make up an entire computer system. In total there are five computer generations:
First Generation (1945–55); Vacuum Tubes and Plug-boards
Vacuum tubes were used as the primary electronic components.
Magnetic drums were used as memory units.
Magnetic tapes were used as secondary storage media.
First Generation cont…
These computers were programmed manually by setting switches and plugging and unplugging cables; you had to interact with the hardware directly. Operating systems and programming languages were unknown (even assembly language was unknown). All programming was done in absolute machine language (the lowest-level programming language understood by computers), often by wiring up plugboards. The usual mode of operation was for the programmer to sign up for a block of time on the signup sheet on the wall, then come down to the machine room, insert his plugboard into the computer, and wait for output.
First Generation cont…
By the early 1950s, the routine had improved somewhat with the introduction of punched cards (also called IBM or Hollerith cards). It was now possible to write programs on cards and read them in instead of using plugboards. The rest of the procedure was the same.
Problems of the First Generation
These machines were enormous, filling entire rooms with tens of thousands of vacuum tubes. They were very expensive to operate and could be afforded only by very large organizations. They used a lot of electricity and generated a lot of heat, which was often the cause of malfunctions. Some computers of this generation were:
ENIAC
EDVAC
UNIVAC
IBM-701
IBM-650
Second Generation (1955–1965); Simple Batch, Spooling
Using transistors as a replacement for vacuum tubes, this generation was cheaper, consumed less power, and was smaller, more reliable, and faster than the first-generation machines.
Magnetic cores were used as primary memory.
Magnetic tapes and magnetic disks were used as secondary storage devices.
Second Generation cont…
These machines still relied on punched cards for input and output. Programming moved from binary machine language to assembly languages, which allowed programmers to specify instructions in words. Machine-independent languages like COBOL, FORTRAN, and ALGOL were developed.
Second Generation cont…
Some magnetic storage devices are:
Magnetic tape (computer, audio, and video tapes)
Magnetic drum
Magnetic core
Floppy disk
Magnetic disk
Second Generation cont…
Magnetic drum: a cylinder with a magnetic surface on which data is stored.
Magnetic core: tiny rings of magnetic material strung at the intersection of a vertical and a horizontal wire.
Magnetic disk.
Second Generation cont…
To run a job (consisting of the program, the data, and some control cards), a programmer would first write the program on paper (in FORTRAN), then punch it onto cards. He would then bring the card deck down to the input room and wait until the output was ready. An operator would take one of the card decks that had been brought from the input room and read it in. He would then go over to the printer, take away the output (usually printed paper), and carry it to the output room, so that the programmer could collect it later. The OS here was called a monitor; it was very simple and only moved automatically from the finished job to the next one in the batch (uni-programmed).
Second Generation cont…
As we can see, much computer time was wasted while operators loaded input tapes or took away output papers, and the CPU sat idle. Therefore, people looked for ways to reduce the wasted time. The solution generally adopted was the batch system. The CPU is fast, but I/O devices are slow: when the current job waits for an I/O operation to complete, the CPU sits idle and must wait until the I/O operation finishes.
Second Generation cont…
All the programmers leave their programs with the operator, who sorts programs with the same requirements into groups called batches. In batch processing (unlike transaction processing) there is no interaction between the user and the job while the job is executing, from start to completion. For example, during a database run we cannot open, edit, and save it. The job is prepared and submitted, and at some later time the output appears. The delay between job submission and job completion is called the turnaround time.
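A toy calculation makes turnaround time concrete. The job times below are made up; jobs run one at a time, in submission order, as on a uni-programmed batch machine:

```python
# Turnaround time = completion time - submission time.
# Made-up jobs: (submission_time, run_time) in minutes.
jobs = [(0, 30), (5, 10), (8, 45)]

clock = 0
for i, (submitted, run) in enumerate(jobs):
    start = max(clock, submitted)     # a job may wait for the machine
    clock = start + run               # completion time
    print(f"job {i}: turnaround = {clock - submitted} min")
# job 0: 30, job 1: 35, job 2: 77 -- later jobs wait behind the batch.
```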
Second Generation cont…
This type of processing is suitable for programs with large computation times and no need for user interaction/involvement. Users are not required to wait while the job is being processed; they can submit their programs to the operators and return later to collect the results. Examples of such programs include payroll, billing, forecasting, statistical analysis, and large scientific number-crunching programs.
Second Generation cont…
Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have two computers:
A less powerful computer: the IBM 1401
A powerful main computer: the IBM 7094
Second Generation cont…
(a) Programmers bring punched cards to the IBM 1401. The 1401 was very good at reading cards, copying tapes, and printing output, but not at all good at numerical calculations.
(b) The 1401 reads a batch of jobs onto tape.
(c) The tape is then rewound and brought into the machine room, where it is mounted on a tape drive on the IBM 7094.
(d) The operator then loads a special program (the ancestor of today's operating system), which reads the first job from tape and runs it. The output is written onto another tape (instead of being printed). After finishing each job, it automatically runs the next one.
(e) After the whole batch is finished, the operator removes the input and output tapes, replaces the input tape with the next batch, and carries the output tape to the IBM 1401 for printing off line (i.e., not connected to the main computer).
(f) The 1401 prints the output. This is called off-line spooling.
Spool
Spooling means all devices (the CPU and the I/O devices) are working at the same time and none is idle. While the output is being printed in the last room, the next input tape is being prepared in the first room, and in the second room the OS is running a job. All devices working together like this is what spooling achieves. Since the CPU is much faster than the I/O devices, spooling is made possible by a buffer. SPOOL = Simultaneous Peripheral Operations On-Line.
Spool cont…
Since the CPU is much faster than the I/O devices, the CPU puts data destined for an I/O device into a buffer, which the I/O device can access directly without requiring CPU involvement. To spool a job means to store it in a buffer (on tape or disk) so that it can be printed or processed by another program later, at a more convenient time. Spooling helps solve the problem of speed mismatch among devices; for example, the CPU operates at a much higher speed than the printer.
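The essence of spooling is a buffer that decouples a fast producer from a slow consumer. Below is a minimal producer-consumer sketch of that idea; the "CPU" and "printer" speeds are made-up values, not a real print spooler:

```python
# Minimal spooling sketch: a fast "CPU" deposits print jobs into a buffer;
# a slow "printer" drains the buffer at its own pace.
import queue
import threading
import time

spool_buffer = queue.Queue()

def cpu():
    for i in range(5):
        spool_buffer.put(f"job-{i}")   # fast: hand off and keep computing
        print(f"CPU queued job-{i}")
        time.sleep(0.01)

def printer():
    for _ in range(5):
        job = spool_buffer.get()       # slow device drains the buffer
        time.sleep(0.5)                # pretend printing takes a while
        print(f"printer finished {job}")

threading.Thread(target=cpu).start()
threading.Thread(target=printer).start()
```

The CPU thread finishes almost immediately while the printer keeps draining the queue, which is exactly the decoupling spooling provides.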
Offline spooling
When the I/O devices are not directly connected to the computer, it is called offline spooling.
Second Generation cont…
A CPU burst is a period of the CPU's operation on the process. An I/O burst is a period in which the process interacts with I/O operations.
Second Generation cont…
In general, we can divide programs into two groups (illustrated in the sketch below):
CPU-bound/limited: programs that perform lots of computation and do little I/O; they tend to have a few long CPU bursts.
I/O-bound/limited: programs that perform lots of I/O operations; each I/O operation is followed by a short CPU burst to process the I/O, then more I/O happens.
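A toy illustration of the two behaviors; the loop sizes and the use of a temporary file are arbitrary choices for the sketch:

```python
# Two toy workloads: a CPU-bound loop (one long CPU burst) and an
# I/O-bound loop (many short CPU bursts separated by I/O operations).
import tempfile

def cpu_bound(n=1_000_000):
    total = 0
    for i in range(n):          # one long CPU burst, no I/O
        total += i * i
    return total

def io_bound(lines=1000):
    with tempfile.TemporaryFile("w+") as f:
        for i in range(lines):  # short CPU burst...
            f.write(f"record {i}\n")
            f.flush()           # ...then an I/O operation, repeatedly
        f.seek(0)
        return sum(1 for _ in f)

print(cpu_bound(), io_bound())
```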
Problems of the Second Generation
Problem 1: In batch processing, the system first prepares a batch and then executes all the jobs in that batch. The main problem is that if a job requires an input or output interaction, it is not possible: jobs were generally submitted on punched cards and magnetic tapes, and users were not present to interact with their jobs while they ran. With CPU-bound jobs, I/O is infrequent, so this wasted time is not significant. However, with I/O-bound data-processing jobs, the I/O wait time is significant.
Problems of the Second Generation cont…
Problem 2: Time was wasted while the CPU remained idle during:
preparing the batch on the 1401;
replacing the input tape with the next batch on the 7094.
This wasted time caused poor utilization of the CPU, resulting in high turnaround times and hence low throughput (turnaround time is the time between submitting a job and receiving the output).
Problem 3: Developing and maintaining two completely different product lines (the 1401 and the 7094) was expensive for the manufacturers. In addition, many new computer customers initially needed only a small machine.
Third Generation (1965–1980); ICs, multiprogramming, online spooling, timesharing
The third generation of computers is marked by the use of integrated circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computers smaller, more reliable, and more efficient. Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors, and interfaced with an operating system. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors. (Pictured: Commodore PET, 1977.)
Third Generation cont…
IBM attempted to solve Problem 3 by introducing the 360 series, which handled both kinds of jobs (I/O-bound and CPU-bound). For the other two problems, something had to be done to keep the (expensive) CPU from sitting idle so much. The solutions were:
Batch multiprogramming / cooperative multitasking / non-preemptive multitasking / multiprogramming without timesharing
Online spooling / spooling
Time-sharing multiprogramming / preemptive multitasking
Preemptive means the CPU can be taken back from a running task; non-preemptive means it cannot.
Uni-Programmed OS
Uni-programmed systems run just one program at a time, sharing the memory between that program and the operating system. A typical example is DOS.
Uni-Programmed OS cont…
There are three variations:
(a) The OS is at the bottom of RAM.
(b) The OS is in ROM.
(c) The device drivers are in ROM and the rest of the system is in RAM below them.
Uni-Programmed OS cont…
Disadvantages of a uni-programmed OS:
Only one program runs at a time.
A process can destroy the OS.
Not cost-effective: when the single process is doing I/O, the CPU is idle. Multiprogramming makes more effective use of the CPU.
Uni-programming does not support time-sharing and provides poor CPU utilization when the workload includes many I/O-bound processes.
Then multiprogramming came. Multiprogrammed systems support multiple batch jobs (batch multiprogramming) or multiple interactive jobs (time-sharing) in memory at once, and thus need memory management.
Multiprogramming
Multiprogramming means sharing resources among more than one process. Memory is divided into several partitions, and each job is given a partition. A job is then given the CPU for execution. When that job needs an I/O operation and leaves the CPU, the CPU switches to another job kept in memory (the CPU is allocated to a job only if the job is in memory) until that job also needs the services of an I/O device. Therefore, the CPU is always busy. For example, a multiprogramming system with three jobs in memory is shown below:
Multiprogramming cont…
Multiprogramming makes sure that the CPU always has something to execute, thus increasing CPU utilization.
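How much multiprogramming helps can be estimated with a classic back-of-the-envelope model found in Tanenbaum's textbooks: if each job waits for I/O a fraction p of the time, and n independent jobs are in memory, the CPU is idle only when all n jobs wait at once, so utilization is approximately 1 - p^n. A minimal sketch:

```python
# Approximate CPU utilization under multiprogramming:
# the CPU is idle only when ALL n jobs are waiting for I/O at once.
def utilization(p, n):
    return 1 - p ** n

for n in (1, 2, 4, 8):
    print(f"{n} jobs: {utilization(0.8, n):.0%}")
# With 80% I/O wait: 1 job -> 20%, 2 -> 36%, 4 -> 59%, 8 -> 83%
```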
Multiprogramming cont…
The act of reassigning the CPU from one task to another is called a context switch (context means state). When context switches occur frequently enough, the illusion of parallelism is achieved.
Multiprogramming cont…
Since several jobs are kept in main memory at the same time and the CPU is multiplexed among them, multiprogramming requires memory management and protection, along with the necessary hardware and software support:
Resource allocation: the OS manages different types of resources such as main memory, CPU cycles, and file storage. If more than one job is running at the same time, resources must be allocated to each of them. Resource allocation means deciding which process gets which resource, and when.
Multiprogramming cont…
Job scheduler: if several jobs are ready to be brought from disk into memory and there is not enough room for all of them, the system must choose among them (job scheduling).
CPU scheduler: if several jobs in memory are ready to run, the system must choose among them (CPU scheduling).
Memory manager: having several jobs in memory at the same time requires memory management to allocate memory to them.
Online spooling
With the invention of hard disks, spooling stores jobs in a buffer on disk instead of on tape. In online spooling, the disks are directly connected to the computer (online), so the output of data processing is available immediately. Disks are faster than tapes.
The first hard disk (IBM, 1956): about 1,000 kg, with 5 MB of capacity (≈ 64,000 punched cards)
Online spooling cont…
In spooling, a high-speed device like a disk is placed between a running program and a low-speed I/O device.
Input: jobs are read from cards onto the disk as soon as they are brought to the computer room. The input is thus available quickly when needed, since reading from disk is much faster than reading from tape, which had to be rewound or wound forward to retrieve data.
Online spooling cont…
Output: instead of writing directly to a printer, outputs are written to the disk. This way, other programs can be initiated sooner; when the printer becomes available, the output goes from the buffer to the printer. With spooling, the 1401 was no longer needed, and tapes disappeared.
Spooling Today
The spoolsv.exe file is described as the Spooler SubSystem App or Windows Print Spooler Service and is the main component of the printing interfaces. The spoolsv.exe file is initialized when the computer starts, and it runs in the background until the computer is turned off. The spoolsv.exe process transfers the data into a buffer; when the printer needs the data, it retrieves it from the buffer. While spoolsv.exe is storing the data in the buffer, the user can run other operations. The spoolsv.exe process is also responsible for queuing printing tasks: thanks to this function, the user does not need to wait for each printing task to complete one after the other.
Time-sharing / preemptive multitasking
A batch multiprogramming system improves utilization but does not support interaction with users (the machine waits for I/O operations issued by the job itself, not by the user). Time-sharing extends multiprogramming to handle multiple interactive jobs (interactive processing means the user has to be present, and the program cannot proceed until there is some input from the user). Time-sharing is a variant of the multiprogramming technique in which each user has an online (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the system must respond quickly to user requests; otherwise user productivity will decrease.
Time-sharing cont…
Terminals are connected to the main computer and are used for input and output; they do no processing and have no CPUs. The main computer has a CPU that executes processes under the control of the OS (e.g., UNIX). Time-sharing thus means multiple users have terminals (not computers) connected to one main computer and execute their tasks on that main computer.
How Time-sharing works
It is called time-sharing because the processor's time is shared among multiple users. A timesharing system allows many users to share the computer's resources. Each user is allocated a tiny, fixed slice of time (e.g., two milliseconds). The computer performs whatever operations it can for that user until the time slice ends, and then uses the next time slice for another user.
How Time-sharing works cont…
Time-sharing works because of the gap of at least a few milliseconds between a user's keystrokes, and the great difference in speed between the terminal and the computer. Since each action or command in a time-shared system takes a very small fraction of time, only a little CPU time is needed for each user. As the CPU switches rapidly from one user to another, each user is given the impression of having a dedicated computer, while in reality one computer is shared among many users. A time-sharing system runs several programs at the same time, so it is also a multiprogramming system; but a multiprogramming OS is not necessarily a time-sharing system. A round-robin sketch follows below.
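The scheduling discipline behind this description is round-robin. A minimal sketch, with made-up job lengths and an arbitrary 2 ms quantum:

```python
# Minimal round-robin time-sharing sketch: each "user job" gets a fixed
# time slice (quantum); unfinished jobs go to the back of the queue.
from collections import deque

QUANTUM = 2  # milliseconds, illustrative

ready = deque([("user1", 5), ("user2", 3), ("user3", 4)])  # (name, remaining_ms)

clock = 0
while ready:
    name, remaining = ready.popleft()
    ran = min(QUANTUM, remaining)
    clock += ran                          # CPU runs this job for one slice
    remaining -= ran
    if remaining > 0:
        ready.append((name, remaining))   # preempted: back of the queue
    else:
        print(f"{name} finished at t={clock} ms")
# user2 finishes at t=9, user3 at t=11, user1 at t=12.
```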
Time-sharing example
In the figure below, user5 is active, user6 is in the ready state, and the other users are in the waiting state. As soon as user5's time slice is completed, control moves on to the next ready user, i.e., user6. Preemptive multitasking operating systems include Linux and other Unix-like systems, Microsoft Windows NT/2000/XP, Mac OS X, and OS/2.
Fourth Generation (1980–present); PC
The microprocessor brought about the fourth generation of computers, as thousands of ICs were built onto a single silicon chip. The fourth generation is marked, from 1980, by the use of Very Large Scale Integration (VLSI) circuits. VLSI circuits, with about 5,000 transistors and other circuit elements and their associated circuitry on a single chip, made the microcomputers of the fourth generation possible. Fourth-generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution.
Fifth Generation
In the fifth generation, VLSI technology became ULSI (Ultra Large Scale Integration) technology, resulting in the production of microprocessor chips with ten million electronic components. This generation is based on parallel-processing hardware and AI (Artificial Intelligence) software.
Other types of operating system
Real-Time Operating System (RTOS)
Network Operating System (NOS)
Distributed Operating System (DOS)
Multi-processors
Multi-computers
Multithreading Operating System
Real-Time Operating System (RTOS)
A real-time system has to respond to input within a finite, specified time, often referred to as the deadline. Correctness depends not only on the logical result but also on the time at which it is delivered. An RTOS manages resources so that a particular operation executes in precisely the same amount of time every time it occurs. These systems are required in special applications where an immediate response is needed, such as industrial control systems, weapon systems, and medical products.
RTOS cont…
RTOS systems can be hard or soft real-time.
Hard real-time: a purely deterministic, time-constrained system. For example, if users expect the output for a given input in 10 seconds, the system must process the input data and produce the output exactly by the 10th second; in this example, 10 seconds is the deadline for processing the given data. It should not produce the output by the 11th second or the 9th second, but exactly by the 10th. In a hard real-time system, meeting the deadline is critical: if the system fails to meet the deadline even once, the entire system is considered worthless and to have failed.
RTOS cont…
Soft real-time: even if the system fails to meet the deadline, possibly more than once, it is not considered to have failed. The results of a request are not worthless after its deadline; rather, their value degrades as time passes after the deadline (e.g., streaming audio/video, MP3 players). A sketch of the difference follows below.
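One way to picture the hard/soft distinction is as a value-versus-lateness function, a common textbook illustration; the shapes and numbers below are arbitrary:

```python
# Value of a result as a function of how late it is, for hard vs. soft
# real-time tasks (illustrative shapes, arbitrary numbers).
def hard_value(lateness_ms):
    # Any lateness at all -> the result (and the system) is worthless.
    return 1.0 if lateness_ms <= 0 else 0.0

def soft_value(lateness_ms, half_life_ms=100):
    # Value decays gradually after the deadline instead of dropping to 0.
    if lateness_ms <= 0:
        return 1.0
    return 0.5 ** (lateness_ms / half_life_ms)

for late in (0, 50, 100, 200):
    print(late, hard_value(late), round(soft_value(late), 2))
# 0 -> 1.0/1.0, 50 -> 0.0/0.71, 100 -> 0.0/0.5, 200 -> 0.0/0.25
```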
Network Operating System (NOS)
A NOS runs on a server machine and enables the server to manage users, their data, and other networking functionality. The main purpose of this type of operating system is to allow shared resources to be accessed by all the computers in a network. Examples of NOSes: Microsoft Windows Server 2003, 2008, and 2012, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
NOS cont…
In a network operating system, the users are fully aware of the existence of multiple computers on the network. Access to the resources of various machines is done by:
Remote login into the remote machine. Each computer runs its own operating system and has its own local users; when a user wants to access another machine, some kind of remote login to that machine is required.
Transferring data from remote machines to local machines, via the File Transfer Protocol (FTP) mechanism.
Assignment: Remote Access Software (RAS) allows a user to remotely administer another computer through a GUI:
TeamViewer
Windows Remote Desktop Connection
Mikogo
TightVNC
LogMeIn
Taxonomy of Parallel Processor Architectures
Distributed means data is stored and processed in multiple locations. There are a number of ways of classifying systems with more than one CPU:
Shared vs. distributed memory
Tightly vs. loosely coupled systems
SIMD vs. MIMD
Shared vs. Distributed memory
Systems with more than one processor are classified as:
Shared memory: all processors share the same memory; such systems are usually called multiprocessors.
Distributed memory: each processor has its own memory; such systems are usually called multicomputers.
Loose vs. Tight coupling
Coupling, or dependency, is the degree to which a system relies on each of the other systems. Systems with more than one processor can be classified by coupling in two ways:
Loosely coupled / distributed processing / multicomputer
Tightly coupled / parallel processing / multiprocessor
Shared-memory systems are always tightly coupled; distributed-memory systems are always loosely coupled.
Loosely coupled
Each processor has its own bus, memory, clock, and I/O subsystem. Each processor communicates with the other processors through a network medium (e.g., LAN, phone lines, WAN). Each processor runs its own independent local OS.
Tightly coupled
(Tightly coupled = high bandwidth between processors; loosely coupled = low bandwidth between processors.)
Processors share the clock, memory, bus, devices, and sometimes cache. The processors run a single instance of the OS and communicate frequently through a common memory. Tightly coupled systems can be classified into symmetric and asymmetric systems:
Tightly coupled: SIMD vs. MIMD
Tightly coupled systems (with either distributed or shared memory) can be further classified by their control stream. There are two ways to classify parallel computers (see the sketch after this list):
Single instruction/multiple data (SIMD): all processors execute the same instruction at the same time, while each processing unit can operate on a different data element.
Multiple instruction/multiple data (MIMD): each processor executes different instructions independently, each working with a different data stream.
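As a loose software analogy (not real hardware SIMD; the use of numpy here is an assumption), a vectorized array operation applies one instruction across many data elements, while independent threads running different functions on different data behave MIMD-style:

```python
# Loose software analogy for SIMD vs. MIMD.
import threading
import numpy as np

data = np.arange(8)

# SIMD-style: ONE operation applied to MANY data elements at once.
print(data * 2)          # [ 0  2  4  6  8 10 12 14]

# MIMD-style: DIFFERENT instruction streams on DIFFERENT data.
results = {}
def worker_a(): results["a"] = sum(x * x for x in range(4))
def worker_b(): results["b"] = "".join(chr(65 + i) for i in range(4))

threads = [threading.Thread(target=worker_a), threading.Thread(target=worker_b)]
for t in threads: t.start()
for t in threads: t.join()
print(results)           # {'a': 14, 'b': 'ABCD'}
```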
Asymmetric Multiprocessing
Asymmetric multiprocessing (ASMP/AMP) has one master CPU; the remaining CPUs are slaves. The master distributes tasks among the slaves, and I/O is usually done by the master only. The operating system typically sets aside one or more processors for its exclusive use, while the remainder run user applications. As a result, the single processor running the operating system can fall behind the processors running user applications, forcing the applications to wait while the operating system catches up, which reduces the overall throughput of the system. Furthermore, if the processor that fails is an operating-system processor, the whole computer can go down.
Symmetric Multiprocessing
Symmetric multiprocessing (SMP) treats all processors as equals, and I/O can be processed on any CPU. Any processor can run any type of thread. Because the operating system's threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. All processors are allowed to run a mixture of application and operating system code. A processor failure in the SMP model only reduces the computing capacity of the system.
Tightly coupled cont…
In symmetric multiprocessing, each CPU runs the same copy of the OS, while in asymmetric multiprocessing, the CPUs split responsibilities.
Closely coupled
The closely coupled configuration is an extension of the loosely coupled configuration. It adds global storage: a storage unit that can be accessed by all processors. Global storage requires its own processor. An example of such a system is a datacenter with multiple supercomputers clustered together, all working to run tasks like weather prediction or stock-market trend regression.
Types of computing
It is becoming more common to use multiple processors to solve large-scale, complex computational tasks, improving the execution time of computationally intensive programs. When we want to solve a computing problem that requires more resources than one computer can handle, we need to connect it with other computers to get the job done. The main approaches are:
Grid computing
Cluster computing
Cloud computing
Grid computing (= Distributed)
Grid computing is a method of computer processing in which different parts of a program run simultaneously on two or more autonomous (independent, loosely coupled) computers that communicate with each other over a network. In distributed computing, users are not aware of the multiplicity of machines: access to remote resources looks the same as access to local resources. Distributed system is a general term for network-based systems.
Grid computing cont…
Thousands of computers are employed in the process. All devices that have computing power, such as desktops, laptops, tablets, mobiles, supercomputers, mainframes, servers, and meteorological sensors, are connected together to form a single network, using the Internet. Software that is capable of dividing the program across many computers is used for this purpose. This entire infrastructure of connected computers is called a grid.
Grid Computing cont…
The term grid computing frequently describes heterogeneous nodes, distributed across the globe, with different OSes and different hardware, working together over a LAN, MAN, WAN, or the Internet. A grid is a loosely coupled system. An individual user gets access to the resources (processors, storage, data, etc.) on demand, with little or no knowledge of where those resources are physically located; much as we use electricity for air-conditioners and televisions through wall sockets without being concerned with where that electricity comes from and how it is generated. Every node in a grid behaves like an independent entity, i.e., it manages its resources by itself.
Cluster computing (= Parallel)
Distributed computing is a type of parallel computing. Cluster computing is a method of computer processing in which different parts of a program run simultaneously on two or more processors that are part of the same system (tightly coupled). In order to work together, the processors need to be able to share information with each other; this is accomplished using a shared-memory environment.
Distributed vs. Parallel processing
(a)–(b) A distributed system. (c) A parallel system.
Cluster Computing cont…
Clusters are generally restricted to computers on the same subnetwork or LAN. The aim is to combine the resources of several computers so that they function as a single unit. Cluster computing is a type of computing in which several nodes run the same task as a single entity. The various homogeneous nodes in a cluster have the same OS and the same hardware, and they are normally connected to each other using a fast LAN.
Cluster Computing cont…
Unlike grids, a cluster is seen from the outside as a single, powerful unit. An outside client therefore cannot access the individual systems in the cluster and can only submit a program to the cluster head to use its services. The cluster head receives the programs, distributes them among the member systems, manages the execution of the parallel work, and finally returns the results to the outside client.
Cluster vs. grid: Differences
Grid computing is similar to cluster computing, but with a big difference: a cluster is homogeneous while a grid is heterogeneous. The computers that are part of a grid can run different operating systems and have different hardware, whereas cluster computers all have the same hardware and OS. Another difference lies in the way resources are handled. In a cluster, the whole system (all nodes) behaves like a single system view, and resources are managed by a centralized resource manager. In a grid, every node is autonomous: it has its own resource manager and behaves like an independent entity.
Cluster vs. grid: Similarities
Both techniques involve solving computing problems that are beyond the scope of a single computer by connecting computers together, and both aim to increase efficiency and throughput through the networking of computers.
Multithreading Operating System
Multithreading is the ability of an operating system to execute different parts of one program, called threads, simultaneously. In multitasking, the OS executes more than one program simultaneously, while in multithreading, the OS executes different parts of one program simultaneously.
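A minimal sketch of the idea in Python; the function name and the number of threads are arbitrary, and the actual interleaving is left to the OS scheduler:

```python
# Multithreading: several threads of ONE program make progress
# concurrently; the OS schedules them across the CPU(s).
import threading

def download(part):
    print(f"downloading part {part}")   # one logical part of the program

threads = [threading.Thread(target=download, args=(i,)) for i in range(4)]
for t in threads:
    t.start()       # all four parts now run concurrently
for t in threads:
    t.join()        # wait for every thread to finish
print("all parts done")
```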
Cloud computing
"Cloud" is a metaphor for the Internet, and "cloud computing" means using the Internet to access applications, data, or services that are stored or running on remote servers. A cloud computing company is any company that provides its services over the Internet. These services fall into three categories, or layers, which sit on top of one another: Infrastructure/Hardware-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Infrastructure sits at the bottom, Platform in the middle, and Software on top. Other "soft" layers can be added on top of these as well, with elements like cost and security extending the size and flexibility of the cloud.
Layers of the cloud
SPI is an acronym for the most common cloud computing service models: Software, Platform, and Infrastructure (SaaS, PaaS, IaaS).
Assignment: Cloud Computing
Project: Build a cluster or a grid using two computers, in a Windows or Linux environment.