System software – history, review and basic problems.

1 System software – history, review and basic problems.
Introduction to Computer Systems (6): System software – history, review and basic problems. Piotr Mielecki, Ph.D.

2 Operating systems. 1.1. Definitions.
The operating system (OS) is the software that provides the basic interface between the user of a computer system and its hardware (and other resources, such as the communication system). In other words, it manages the sharing of the different resources of a computer system and provides programmers with an interface used to access those resources. In fact there is no universally accepted definition of an operating system. A good approximation is "everything a vendor ships when you order an operating system", but that varies widely. Another approximation is "the one program that is running at all times on the computer", but this describes only the most basic module of the OS – the kernel. From that point of view, everything else is either a system program (when shipped with the operating system) or an application program.

4 More precisely, we can say what the operating system is:
The resource allocator – it manages all resources and decides between conflicting requests so that resources are used efficiently and safely. The control program – it controls the execution of programs (a running program is treated as an object called a process) to prevent errors and improper use of the computer.

5 1.2. Basic functions and services of the operating system.
Computer startup and I/O access. The first function the computer system needs implemented in software is support for its own startup. The software used for this purpose is usually called the bootstrap loader and in most cases is stored in ROM or EPROM memory. In PC-like computers the bootstrap loader looks for the first available sector (the first 512 bytes – the boot sector) on the first mass-storage device, loads its contents into RAM and starts it. All further operations (i.e. loading the OS kernel and so on) are initiated by this start-up code. The ROM memory (the firmware of the PC's mainboard and other components) holds not only the bootstrap loader, but also the simple, basic functions provided to make use of physical devices such as disks, display adapters, the keyboard, serial and parallel ports etc. The programmer's interface to these functions is organized as a set of interrupt handlers (called by the INT <number_of_interrupt> assembly-level instruction or invoked by hardware interrupts).

6 Using them, the programmer can perform operations such as reading one character from the keyboard, sending one or more characters to the text display, or reading/writing a sector or set of sectors from/to a disk device. These functions, together with the utility that makes it possible to set the basic hardware parameters (access modes for hard disks, boot device priority, system bus clock frequency and many others), are included in the Basic Input / Output System (BIOS). To make use of the enhanced features of some devices (like hardware 2D / 3D graphics acceleration in display adapters, for example), a particular operating system usually does not rely on the BIOS functions. It has its own drivers (procedures included in the kernel) instead. Theoretically the drivers should be compiled and linked directly into the kernel binary code, but this method is not flexible enough – we would have to recompile the kernel each time a new device driver was installed. The kernels of most of today's operating systems (like Windows or Linux, for example) are able to link the drivers dynamically as external modules.

7 Process management. A process is "a program that is running". This kind of object is much more sophisticated than just the binary code or the disk file containing that code. We can imagine the process as an object with a set of features (attributes) whose values change dynamically during execution. The most important attributes are: the value stored in the CPU's Program Counter register; the values in the other internal CPU registers, first of all the Status Register (flags); the contents of the memory blocks associated with the process (code, data, stack); and the status of the Input / Output operations performed by the process. In very simple, non-multitasking operating systems (like CP/M or MS-DOS, for example) only one program could be executed at a time (from beginning to end), so the operating system did not have to keep track of the process's status or record the values of its attributes at any moment.

8 Most of today's operating systems can run more than one program using the time-sharing multitasking method. That means each process is periodically interrupted ("frozen") for a while to give CPU time to another process. So the OS has to record the values of all attributes of the stopped process (in a structure called the Process Control Block – PCB) in order to resume that process after some time. In general, a process changes state during its lifetime: NEW -> (admission) -> READY -> (scheduler dispatch) -> RUNNING -> (exit) -> TERMINATED; a RUNNING process returns to READY when it is interrupted, or moves to WAITING when it starts an I/O event wait and goes back to READY on I/O event completion.
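
To make the idea of the PCB more concrete, the sketch below shows what such a record might look like as a C structure. The field names and sizes are purely illustrative assumptions (real kernels, e.g. the Linux task_struct, keep far more information); the sketch only mirrors the attributes listed above: the saved CPU registers, the memory map and the I/O status.

```c
/* Hypothetical, minimal Process Control Block (PCB) sketch.
   Field names are illustrative only, not taken from any real kernel. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* unique process identifier             */
    proc_state_t  state;           /* current position in the life cycle    */
    uint64_t      program_counter; /* saved Program Counter                 */
    uint64_t      registers[16];   /* saved general-purpose CPU registers   */
    uint64_t      flags;           /* saved Status Register                 */
    void         *page_table;      /* memory map (code, data, stack)        */
    int           open_files[16];  /* descriptors / status of pending I/O   */
    struct pcb   *next;            /* link in the ready or I/O queue        */
} pcb_t;
```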

9 As the process executes, it passes through the following states:
New – the process is being created: the structure which defines the process (the PCB) is created, and the OS module called the Long-Term Scheduler checks the resources needed and admits the newly created process to the queue of processes ready for execution (but none of the instructions belonging to this process has been executed yet). Ready – the process waits for the CPU in the queue of ready processes and is ready to run. The OS module called the Short-Term Scheduler decides when to interrupt the currently executing process and start the next process from the queue. The module called the Dispatcher is responsible for performing all the operations needed to freeze one process and start the other one. Running – the instructions of the process are being executed, until an interrupt caused by the Short-Term Scheduler or an I/O request raised by the process itself. Waiting – the process is waiting for a particular I/O event (in an I/O queue). After the I/O operation is serviced, the process goes back to the queue of ready processes. Terminated – the process has finished execution. The OS frees the resources used by the process, deletes its PCB and any other system variables associated with the process, and finally the process ends its life cycle (disappears from the system).

10 Two main classes of processes can be distinguished as far as their behavior during the life cycle is concerned: CPU-oriented (CPU-bound) – processes which perform lots of mathematical (and other) CPU operations and do not need frequent I/O access (interaction with the user, for example). The Short-Term Scheduler is responsible first of all for fair switching of the CPU between processes of this kind, to achieve smooth multitasking. I/O-oriented (I/O-bound) – processes which perform lots of I/O operations and usually do not need the Short-Term Scheduler to decide to interrupt them. They spend most of their time in I/O queues waiting for events raised by the user (a mouse move or click, keyboard input etc.) or by other processes (data transmission, mass-storage operations etc.). This is typical of common applications like word processors, graphical editors, games, web browsers etc.

11 Memory management. The virtual memory used in today's operating systems provides a logical RAM space much larger than the physical memory installed in the computer. The mechanisms used to implement virtual memory (segmentation, protection, paging, swapping) need support from both hardware and system software. We can state that: the CPU hardware supports translation from the logical (virtual) address used by a program to the physical address sent to the system Address Bus; the appropriate data structures (page tables, first of all) have to be initialized and serviced by operating system routines; and the CPU hardware can detect the situations when the logical address is not correct or cannot be translated to a physical address at the moment.
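
As an illustration of the translation idea only (not of the real x86 mechanism, which uses multi-level page tables walked by the MMU in hardware), the sketch below splits a 32-bit virtual address into a page number and an offset and looks the page up in a single-level page table. All names and sizes are assumptions made for the example.

```c
/* Sketch of single-level page-table translation: 32-bit addresses,
   4 KiB pages. Purely illustrative; real CPUs do this in hardware. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12
#define NUM_PAGES   (1u << 20)          /* 2^20 pages of 4 KiB = 4 GiB    */

typedef struct {
    uint32_t frame   : 20;              /* physical frame number          */
    uint32_t present : 1;               /* page currently in RAM?         */
    uint32_t rw      : 1;               /* writable?                      */
} pte_t;

static pte_t page_table[NUM_PAGES];     /* filled in and serviced by the OS */

/* Translate a virtual address; returns false when the frame is "missed". */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    if (!page_table[page].present)
        return false;                   /* would raise a page-fault exception */
    *paddr = ((uint32_t)page_table[page].frame << PAGE_SHIFT) | offset;
    return true;
}
```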

12 These events, for the Intel x86 family of processors, for example, can be divided into three groups:
The virtual address points outside the memory space (segment) allocated to the process – this is called a segmentation fault or protection fault. In this way the memory is protected against illegal use by an unauthorized process, so one process cannot access a memory block which belongs to another process. The OS has to decide what to do with the process which caused this kind of error (in fact the only sound decision is to abort it). The address is within the allowed range (segment), but points to a page not present in the page table(s). This event is called a pagination fault here and is also a serious system error (the page table is not being serviced correctly), so the process should be aborted. The logical address is correct, but the appropriate page is not loaded into physical memory right now (it has no physical frame associated with it – the virtual address has "missed" the physical frame). This is not a critical error. In this case the OS performs the swapping routine, in most cases freeing a frame occupied by another page and filling it with the contents of the requested page (read from mass storage).
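
A hedged sketch of how an OS might dispatch these three cases is shown below. The helper functions abort_process() and swap_in() are hypothetical stubs introduced only for illustration, not a real kernel API.

```c
/* Illustrative dispatch of the three memory-exception cases above. */
#include <stdio.h>
#include <stdint.h>

typedef enum { SEGMENT_VIOLATION, PAGE_TABLE_ERROR, FRAME_MISS } mem_fault_t;

/* Hypothetical kernel helpers, stubbed out for the example. */
static void abort_process(int pid)           { printf("abort pid %d\n", pid); }
static void swap_in(int pid, uint32_t vaddr) { printf("swap in page for pid %d, addr 0x%x\n", pid, vaddr); }

void handle_memory_exception(mem_fault_t fault, int pid, uint32_t vaddr)
{
    switch (fault) {
    case SEGMENT_VIOLATION:     /* address outside the process's segment        */
    case PAGE_TABLE_ERROR:      /* page table itself is not serviced correctly  */
        abort_process(pid);     /* the only reasonable reaction                 */
        break;
    case FRAME_MISS:            /* page is valid but has no physical frame      */
        swap_in(pid, vaddr);    /* free a frame and load the page from swap     */
        break;                  /* then return to the faulting instruction      */
    }
}
```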

13 The OS routines implemented in the kernel to support the virtual memory subsystem are usually triggered by signals (internally generated interrupts) raised by the CPU and called exceptions. Of course the CPU itself cannot manage the swap area on mass storage to exchange the pages. The routine invoked by the CPU (implemented by the authors of the particular OS for the "missed frame" exception) finally returns to the instruction which caused the exception – this time the page is mapped to a physical frame, so the process can go on.

14 1.2.4. Mass storage and file system management.
A more detailed characterization of mass storage and on-line file systems was given in Lecture 5. The BIOS and the drivers in the OS kernel are responsible for the low-level interface to the different mass-storage devices (ATAPI/SATA or SCSI/SAS hard disks and CD/DVD drives, flash drives etc.). Upper-level mass-storage access is possible with file systems (like NTFS for Windows hard disks, FAT-12 for floppies, ISO-9660 or UDF for CDs and DVD-ROMs etc.) serviced by other kernel modules. File systems may provide journaling, which supports safe recovery in the event of a system crash. A journaled file system (like ext3 in Linux, for example) writes information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. In the event of a crash, the system can recover to a consistent state (roll back the transaction) by replaying a portion of the journal. In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck (UNIX / Linux) or chkdsk (Windows).
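
The write-ahead idea behind journaling can be illustrated with a small user-space sketch: the intended change is forced to a journal file (with fsync()) before the ordinary data file is touched, and a commit record is written afterwards. The file names and record format are assumptions made for the example only; real journaled file systems do this inside the kernel at the block level.

```c
/* Minimal user-space illustration of "write to the journal first". */
#include <stdio.h>
#include <unistd.h>

int journaled_append(const char *data)
{
    FILE *journal = fopen("journal.log", "a");      /* example file names */
    if (!journal) return -1;
    fprintf(journal, "BEGIN append: %s\n", data);
    fflush(journal);
    fsync(fileno(journal));                 /* journal entry is on disk   */

    FILE *file = fopen("data.txt", "a");    /* now update the real file   */
    if (!file) { fclose(journal); return -1; }
    fprintf(file, "%s\n", data);
    fflush(file);
    fsync(fileno(file));
    fclose(file);

    fprintf(journal, "COMMIT\n");           /* mark the transaction done  */
    fflush(journal);
    fsync(fileno(journal));
    fclose(journal);
    return 0;
}
```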

15 Some features implemented in the file system may support security management in the operating system. For example, all UNIX / Linux compatible file systems (like ext3) recognize the owner of each file (more precisely: the single user who owns the file and the group which can also own it) and the rights which the individual owner, the group and any other user have to this file.
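
On a UNIX / Linux system this ownership information can be read by any program with the standard stat() call, as in the minimal example below (the path /etc/passwd is just a convenient default for the demonstration).

```c
/* Print the owner, group and permission bits of a file. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;
    const char *path = (argc > 1) ? argv[1] : "/etc/passwd";
    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%s: owner uid=%d gid=%d mode=%o\n",
           path, (int)st.st_uid, (int)st.st_gid,
           (unsigned)(st.st_mode & 07777));   /* rwx bits for user/group/other */
    return 0;
}
```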

16 1.2.5. I/O resources management.
Although disk drives are examples of I/O devices, their importance for the operating system (as the secondary-level storage system) gives them a privileged position. The other I/O devices (like communication ports, display adapters, printers, multimedia etc.) can be considered general I/O resources of the system. Processes, especially the I/O-oriented (interactive) ones, frequently request different operations from these resources. Some of them are very simple events (like moving the mouse, pressing a key on the keyboard etc.), some are more complicated tasks (like printing a document, sending a message to another process, recording a CD-RW disc etc.).

17 The OS kernel, managing these requests, uses the following schema:
The request of the process is registered and queued in the I/O queue (a FIFO, for example) mapped to the particular device. The process is interrupted (which means the CPU can be given to another process), but it does not free the other resources it possesses (memory blocks first of all). So the process is in the Waiting state. When the device becomes accessible to the particular process (the event the process was waiting for occurs), the request of the process is serviced by the system I/O routine (some data is read from disk to a memory buffer, for example). After completion of the I/O operation the process is moved to the queue of Ready processes, where it must wait for its turn to access the CPU (appointed by the Short-Term Scheduler). Waiting processes, in most cases, temporarily lose their resources (memory pages mainly) to make them accessible to running processes.
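
A minimal sketch of such a per-device request queue is given below; the structure and field names are assumptions made for illustration, not a real driver interface. A process enqueues its request and moves to the Waiting state; the device driver dequeues and services requests in FIFO order.

```c
/* Illustrative FIFO queue of I/O requests attached to one device. */
#include <stdlib.h>

typedef struct io_request {
    int pid;                        /* process that issued the request   */
    int block;                      /* e.g. disk block to read or write  */
    struct io_request *next;
} io_request_t;

typedef struct { io_request_t *head, *tail; } io_queue_t;

void enqueue(io_queue_t *q, int pid, int block)
{
    io_request_t *r = malloc(sizeof *r);
    if (!r) return;
    r->pid = pid; r->block = block; r->next = NULL;
    if (q->tail) q->tail->next = r; else q->head = r;
    q->tail = r;                    /* the requester now enters Waiting  */
}

io_request_t *dequeue(io_queue_t *q)    /* serviced by the device driver */
{
    io_request_t *r = q->head;
    if (r) { q->head = r->next; if (!q->head) q->tail = NULL; }
    return r;
}
```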

18 Some I/O devices can be accessed by many processes simultaneously. When reading files from a hard disk or CD-ROM, for example, several processes can request different allocation blocks mapped to different files. The OS queues and services each allocation-block request (not the entire file) separately, so we can consider many file I/O operations as being performed simultaneously. Some other devices have to be accessed in exclusive mode only. When printing a document or burning a CD or DVD, for example, the entire job (not just printing one character) must be finished before the device can start another one.

19 User interface. Since computers became very popular and easily accessible devices, used every day by millions of people, the problem of an intuitive, easy-to-use environment for the user has become one of the most important. Simple and inexpensive text-oriented user interfaces (based on a set of commands interpreted by the shell, or Command Line Interpreter – CLI) are still frequently used by administrators of UNIX-based systems, for example. The main advantage of text shells is that they do not require fast network connections to work with remote systems, sometimes located very far from the administrator's office. Typical computer users today do not accept this kind of interface. They are more familiar with graphical environments (Graphical User Interface – GUI), developed at Xerox PARC in the 1970s and first introduced to home and desktop computers in the mid-1980s by Apple (the first Macintosh), Commodore (the Amiga) and Atari (the ST series).

20 The most popular interface of this kind today belongs to the family of Microsoft Windows operating systems:
The history of these systems started with a graphical overlay for MS-DOS, which also supported virtual memory and rudimentary multitasking by replacing some functions of the underlying MS-DOS system. The first stable and popular versions of Windows were 3.10 and 3.11 (with network support). Starting with Windows 95, MS-DOS as a separate OS was no longer needed (it was embedded in Windows 95 as "MS-DOS 7.0"). But up to the versions 98 and ME this line of Microsoft systems suffered from many disadvantages coming from the underlying 16-bit MS-DOS kernel. The other family of Windows systems, quite different as far as kernel architecture is concerned (but with a nearly identical graphical layout), are those based on the NT (New Technology) architecture. Today's systems like Windows XP, Windows Vista or Windows Server 2003 are all based on this kernel. The manufacturer is developing new concepts mostly concerning the GUI itself and some other aspects (security, for example).

21 A different approach to the implementation of a GUI is the client-server X11 protocol, used in the UNIX / Linux family of operating systems (including Mac OS X, which is based on the BSD UNIX operating system), providing the modular X Window graphical environment. In this scheme the machine which displays the graphics and services all of the user's I/O (mouse, keyboard) is the server (X server), while the machine (sometimes remote) on which the applications are running is the client. The program which can attach our machine to a remote UNIX host (with X11 services installed) and provide us with a graphical environment is called an X terminal. The X terminal can be an MS-Windows application, for example. So we can say that the UNIX GUI concept is much more flexible (and complicated) than the one used in Windows. The different graphical environments based on this concept (like GNOME or KDE), as well as Mac OS X's Aqua, are in most cases very "Windows-like" today.

22 Advanced, sophisticated GUI environments also have many additional functions. For example:
Support for scalable fonts, installed at the system level and accessible to all applications (like TrueType in Windows). Support for different national languages (localization). System-level implementation of multimedia (graphics and sound) support (virtual devices called "codecs" – coder / decoder). System-level mechanisms like the Trash Can and the Clipboard. Mechanisms for interchanging data objects between different applications, like OLE (Object Linking and Embedding), for example.

23 Communication. Today the best-known method of communication between two different processes (in most cases running on two different machines) is using a network protocol (TCP/IP, for example). The communication itself consists of two basic operations: sending a message to the desired destination address (a process identified by its unique number/identifier, or a "mailbox" which is visible to all the processes allowed to communicate); and receiving a message from an explicitly indicated process or from the common "mailbox". In the TCP/IP stack of protocols the "mailbox" is implemented by a combination of the network address of the machine (the IP number, for example) and the number of an internal port mapped exactly to a particular "mailbox" on this host (80 for the HTTP service, for example). This combination is called a socket. The sending and receiving operations (after initialization of the sockets) are very similar to common file writes and reads.
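
The example below illustrates the socket-as-mailbox idea with the standard BSD socket API: after connect() the process simply uses write() and read(), just as it would on an ordinary file. The address 127.0.0.1 and port 80 are example values only.

```c
/* Minimal TCP client: connect to a "mailbox" (IP address + port),
   then send and receive data like file writes and reads. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);    /* TCP socket            */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);                 /* "mailbox" number      */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* example address  */

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        return 1;
    }
    const char *req = "GET / HTTP/1.0\r\n\r\n";
    write(fd, req, strlen(req));                 /* send = file write     */
    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf - 1);   /* receive = file read   */
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }
    close(fd);
    return 0;
}
```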

24 Although TCP/IP is a very flexible method of communication, there were many different ideas before it was designed and implemented. The simplest way we can imagine to pass some data (a message) from one process to another is to use the same disk file as a common area of communication between two or more running programs. We can only do so on a local system, unless the shared file is located on a shareable or remote file system (on a file server's volume, for example). The other disadvantage is the need for mass-storage access, which slows down the communication. Another "local" method is using the same block of operational memory as a common area of communication. In UNIX-like operating systems this method is one of the most popular and is very fast. The OS provides programmers with all the Application Programming Interface (API) functions (accessible in the C language) needed to attach and detach a Shared Memory block to/from a process. This technique is one of the methods included in the Inter-Process Communication (IPC) subsystem in UNIX / Linux systems.
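
A minimal sketch of the Shared Memory part of the System V IPC interface mentioned above is shown below: one process creates a segment, attaches it to its address space, leaves a message in it and detaches. The key value and the message are arbitrary; a second process using the same key could attach the same block and read the data (synchronization is discussed on the next slide).

```c
/* System V Shared Memory: create, attach, write, detach. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                          /* agreed-upon example key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    char *block = shmat(shmid, NULL, 0);         /* attach to our address space */
    if (block == (char *)-1) { perror("shmat"); return 1; }

    strcpy(block, "message left in shared memory");
    shmdt(block);                                /* detach                  */
    /* shmctl(shmid, IPC_RMID, NULL); would remove the segment later */
    return 0;
}
```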

25 The disadvantage of a common area of communication (like a shared file or memory block) is the synchronization problem: if more than one process wants to write data to the same place, the result may not be predictable (a race condition). This problem can be solved with the help of semaphores (the "locks" which can be closed or opened by a process before or after any operation on the shared area). Semaphores are also implemented in the standard IPC subsystem of UNIX / Linux and can be used together with Shared Memory to establish stable, synchronized and fast communication between many processes. An easier to use (simply synchronized) way of passing messages is to put the message into a pipe: once one process drops the entire message into the pipe, it can be caught by the other process. The sender can send many messages and does not need to wait for the receiver (the messages are queued). In the simplest version in UNIX-like systems (unnamed pipes) only related processes (sharing some common variable values – the handles to the pipe) can use the same pipe.
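
The unnamed-pipe case can be sketched as follows: a parent creates the pipe, fork() gives the child copies of the two descriptors (the shared "handles to the pipe" mentioned above), the child writes a message and the parent reads it.

```c
/* Unnamed pipe between related processes (parent and child). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {               /* child: sender                       */
        close(fd[0]);
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                    /* parent: receiver                    */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```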

26 The more flexible technique called named pipes (or FIFO queues in UNIX-like systems) works the same way, but the pipe can be recognized "outside" the process which created it – as an object similar to a file, for example. In MS-Windows, named pipes can work over a local network, not only on the same machine, but it must be the same physical network segment (without a router between the two machines using the named pipe). One more very simple and not very flexible, but still useful, method of communication is sending and receiving signals between processes (a process can also send a signal to itself). All the information the sender can pass to the directly indicated receiver (identified by its unique Process ID – PID number, for example) is the number of the signal. The receiver must have an appropriate routine to run if the particular signal is received. In UNIX-like systems one signal number (9, SIGKILL) is reserved and cannot be reprogrammed. The routine which is obligatorily called after receiving this signal aborts the process immediately. A user can send signals to his own processes using the shell command kill -<signal_number> <PID>.
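
A small example of signal handling is given below: the process installs a handler for SIGUSR1 and waits; the user (or another process) can then trigger it with kill -USR1 <PID>. Signal 9 (SIGKILL) could not be caught this way.

```c
/* Install a handler for SIGUSR1 and wait for signals. */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static void on_usr1(int signum)
{
    (void)signum;
    /* only async-signal-safe calls should really be made in a handler */
    write(STDOUT_FILENO, "got SIGUSR1\n", 12);
}

int main(void)
{
    signal(SIGUSR1, on_usr1);        /* install the handler              */
    printf("send me a signal: kill -USR1 %d\n", (int)getpid());
    for (;;)
        pause();                     /* sleep until a signal arrives     */
}
```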

27 Security. The term "security" in computer systems (not only operating systems) has two different meanings: Reliability – fault resistance, stability. In general all the hardware and software components should be well designed (and tested) and free from faults (like software bugs, for example), as far as possible. In critical situations, not intended or caused by conscious, malicious human activity, the system should be able to keep working despite faults (like problems with the power supply voltage, failures of disks in RAID arrays etc.). Redundancy (as in RAID disk arrays or redundant power supplies) is a frequently used technique for achieving good reliability of hardware constructions. The methods of reliability theory (a branch of mathematics) are used in studies of these problems. Metrics like Mean Time to First Failure (MTFF) or Mean Time Between Failures (MTBF) are used to express the parameters of components tested in laboratories. A well-designed and carefully maintained backup strategy is important for restoring the entire system (or some parts of it, sometimes a single disk file) in the case of a critical failure.

28 Security – the strategy and set of methods used to avoid problems caused by unauthorized and unwanted (in many cases malicious) human activities. Access to the resources of the system (disk files, I/O devices etc.) should be limited to the real needs of well-known (registered) users – the access should be authorized. The user should be authenticated (recognized) as precisely as possible. In most cases the authentication process consists of checking only the user name and password. More sophisticated methods employ hardware keys (chip cards, for example) or biometrics (fingerprint or eye scanners, for example). In distributed (networked) systems the machine used by the authenticated user should also be registered and checked for better security (Virtual Private Networks – VPN). Large and widely distributed information systems (corporate networks, e-commerce, e-banking etc.) need advanced methods for managing lots of different resources and users and for defining access rights. Directory Services (based on distributed databases) are used to register each class of resources, as well as any single resource (including users), and to set relations between them. Most implementations of Directory Services are based on the Lightweight Directory Access Protocol (LDAP). Two well-known implementations of this protocol are Active Directory (Microsoft) and eDirectory (Novell).

29 A serious problem is protecting the system (a private network or a single computer, for example) from crackers' attacks, worms, viruses etc. Some types of malicious software can be filtered by specialized anti-virus applications running on mail or file servers or on desktop computers, and some attacks can be eliminated using software or hardware firewalls. But one of the most important problems is the user himself/herself and his/her resistance to social-engineering attacks.

30 2. Virtualization. Virtualization basically lets one computer do the job of multiple computers at the same time by sharing the resources of a single machine across multiple system environments. A single virtual server or virtual desktop (workstation) can be a host for multiple operating systems and multiple applications. By building virtual infrastructures composed of multiple servers, storage systems, networks and other resources we can get high availability of resources, better desktop management, increased security, and improved disaster recovery processes. Today virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people compute. There are several different approaches to virtualization and many different software tools (commercial and free) that can be used to implement both single virtual hosts and entire virtual infrastructures.

31 [Figure] Virtualization based on a single physical machine (left) vs. a virtual infrastructure (right), according to VMware.

32 One of the commonly used solutions is the open-source project Xen (designed at Cambridge University, now developed by XenSource). It is based on functionality built into the OS kernel (options available in Linux kernels of the 2.6 family, for example). This kind of functionality is called a hypervisor and manages (monitors) the different virtual machines. First we have to install an OS with a kernel supporting Xen on the host machine (the physical computer which is the real host – domain0), then we can define several virtual machines (guests, partitions, domains), and finally install an individual operating system on each of them.

33 The technique used by most Xen users, called para-virtualization, assumes that some of the I/O operations requested by a guest (those which could disturb other guest systems) are served by drivers installed in the host OS, so the virtual machines are not completely isolated from the host OS. Another disadvantage of para-virtualization is that the OS for each virtual machine has to be modified / adjusted to run with the Xen hypervisor, so it is used for open-source operating systems like Linux, NetBSD, OpenSolaris etc. Full virtualization with Xen is possible using 64-bit Intel x86 processors with the VT (Virtualization Technology) extension or 64-bit AMD processors with the AMD-V extension. With Xen 3.x it is possible to manage multi-processor architectures (up to 32 CPUs). More flexible and more advanced solutions are implemented in the VMware family of products (some of them free). The basic OS-independent hypervisor for x86-based computers from this vendor is the (freeware) VMware ESX / ESXi. The VMware approach to virtualization inserts a thin layer of software directly on the computer hardware or on a host operating system. This software layer creates virtual machines and contains a virtual machine monitor (hypervisor) that allocates hardware resources dynamically and transparently, so that multiple operating systems can run concurrently on a single physical computer without even knowing it.

34 3. History of the system software – milestones.
Mid-1940s: buffered input from punched cards and tapes – a queue of loaded program code and data on input, so the CPU does not have to wait for slow devices. User = Operator.
Early 1950s: first command line interpreters (Job Control Language – JCL), batch processing – a sequence of jobs from the input queue is processed according to some strategy (FIFO or more sophisticated). User ≠ Operator.
1960s: multitasking with batch processing (time-sharing operating systems) on mainframe computers, virtual memory, advanced I/O resource management, research work on graphical interfaces, first virtualization on large mainframe hardware structures.
1970s: interactive, multitasking operating systems (like today's), mainframes and minicomputers, first desktop computers with simple, non-multitasking OSs, the TCP/IP network protocol (DARPA / Berkeley) as a flexible standard.
1980s: distributed resources (mass storage and printers first of all), parallel and distributed processing in advanced (high-end) constructions, the GUI as a standard environment for popular OSs (Apple), cheaper computer networks, the first commercial "network OS" for PCs (Novell)…

