IT 344: Operating Systems, Winter 2010
Module 2: Operating Systems Overview
Chia-Chi Teng, email@example.com, 265G CTB
Architectural Support for OS
IT 251: how much do you remember?
–x86 assembly?
–compiler/assembler/linker?
–stack frames?
–registers?
–caches?
–virtual memory?
–TLBs?
–interrupts?
–system calls?
IT 344
In this class we will learn:
–what are the major components of most OS's?
–how are the components structured?
–what are the most important (common?) interfaces?
–what policies are typically used in an OS?
–what algorithms are used to implement policies?
Philosophy
–you may not ever build an OS
–but as a computer engineer you need to understand the foundations
–most importantly, operating systems exemplify the sorts of engineering design tradeoffs that you'll need to make throughout your careers: compromises among and within cost, performance, functionality, complexity, schedule, …
Moore's Law
In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time).
Amazingly visionary: the million-transistor/chip barrier was crossed in the 1980s.
–2,300 transistors, 1 MHz clock (Intel 4004) - 1971
–16 million transistors (UltraSPARC III)
–42 million transistors, 2 GHz clock (Intel Xeon) - 2001
–55 million transistors, 3 GHz, 130nm technology, 250 mm² die (Intel Pentium 4) - 2004
–140 million transistors (HP PA-8500)
Illustration of Moore’s Law
Processing power
–doubling every 18 months
–60% improvement each year
–factor of 100 every decade
–1980: 1 MHz Apple II+ == $2,000
  (1980 also: 1 MIPS VAX-11/780 == $120,000)
–2006: 3.0 GHz Pentium D == $800
–What about today?
Even coarse architectural trends tremendously impact the design of systems
Primary memory capacity
–same story, same reason (Moore's Law)
–1972: 1 MB = $1,000,000
–1982: 512 KB of VAX-11/780 memory for $30,000
–2005: 4 GB vs. 2 GB (@400 MHz) = $800
–2009: 4 GB vs. 2 GB (@800 MHz) = $100
Aside: Where does it all go?
–Facetiously: "What Gordon giveth, Bill taketh away"
–Realistically: our expectations for what the system will do increase relentlessly
  e.g., GUI, VR, games
–"Software is like a gas – it expands to fill the available space" – Nathan Myhrvold (1960-)
[chart: Microsoft stock price]
Disk capacity, 1975-1989
–doubled every 3+ years
–25% improvement each year
–factor of 10 every decade
–still exponential, but far less rapid than processor performance
Disk capacity since 1990
–doubling every 12 months
–100% improvement each year
–factor of 1000 every decade
–10x as fast as processor performance!
Only a few years ago, we purchased disks by the megabyte (and it hurt!)
Today, 1 GB (a billion bytes) costs $1 → $0.50 → $0.25 from Dell (except you have to buy in increments of 40 → 80 → 250 GB)
–=> 1 TB costs $1K → $500 → $250; 1 PB costs $1M → $500K → $250K
Optical bandwidth today
–doubling every 9 months
–150% improvement each year
–factor of 10,000 every decade
–10x as fast as disk capacity!
–100x as fast as processor performance!!
What are some of the implications of these trends?
–Just one example: we have always designed systems so that they "spend" processing power in order to save "scarce" storage and bandwidth!
Lower-level architecture affects the OS even more dramatically
The operating system supports sharing and protection
–multiple applications can run concurrently, sharing resources
–a buggy or malicious application can't nail other applications or the system
There are many approaches to achieving this
The architecture determines which approaches are viable (reasonably efficient, or even possible)
–includes the instruction set (synchronization, I/O, …)
–also hardware components like the MMU or DMA controllers
Architectural support can vastly simplify (or complicate!) OS tasks
–e.g.: early PC operating systems (DOS, MacOS) lacked support for virtual memory, in part because at that time PCs lacked the necessary hardware support
  the Apollo workstation used two CPUs as a band-aid for non-restartable instructions!
–until recent years, Intel-based PCs still lacked support for 64-bit addressing (which had been available for a decade on other platforms: MIPS, Alpha, IBM, etc.)
  changing rapidly due to AMD's 64-bit architecture
Architectural features affecting OS's
These features were built primarily to support OS's:
–timer (clock) operation
–synchronization instructions (e.g., atomic test-and-set)
–memory protection
–I/O control operations
–interrupts and exceptions
–protected modes of execution (kernel vs. user mode)
–privileged instructions
–system calls (and software interrupts)
Virtualization architectures
–Intel: http://www.intel.com/technology/itj/2006/v10i3/1-hardware/1-abstract.htm
–AMD: http://enterprise.amd.com/us-en/AMD-Business/Business-Solutions/Consolidation/Virtualization.aspx
Privileged instructions
Some instructions are restricted to the OS
–known as protected or privileged instructions
e.g., only the OS can:
–directly access I/O devices (disks, network cards)
  why?
–manipulate memory management state
  page table pointers, TLB loads, etc.
  why?
–manipulate special 'mode bits'
  interrupt priority level
  why?
–execute the halt instruction
  why?
OS protection
So how does the processor know if a privileged instruction should be executed?
–the architecture must support at least two modes of operation: kernel mode and user mode
  (VAX and x86 support 4 protection modes)
–the mode is set by a status bit in a protected processor register
  user programs execute in user mode; the OS executes in kernel mode (OS == kernel)
Privileged instructions can only be executed in kernel mode
–what happens if user mode attempts to execute a privileged instruction?
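The mode-bit check above can be sketched in a few lines. This is a toy simulation, not a real CPU model; the class and exception names are made up for illustration:

```python
# Toy model: a mode bit gates privileged instructions.
KERNEL, USER = 0, 1

class ProtectionFault(Exception):
    """Raised when user mode attempts a privileged instruction."""

class CPU:
    def __init__(self):
        self.mode = USER  # the mode bit lives in a protected register

    def execute(self, instruction, privileged=False):
        if privileged and self.mode != KERNEL:
            # user-mode attempt at a privileged instruction traps to the OS
            raise ProtectionFault(instruction)
        return f"executed {instruction}"

cpu = CPU()
print(cpu.execute("add"))              # ordinary instruction: fine in user mode
try:
    cpu.execute("halt", privileged=True)
except ProtectionFault as fault:
    print("trap to OS:", fault)        # the OS now decides what to do
```

The real answer to the question on the slide is exactly this trap: the hardware raises an exception and the OS takes over.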
Crossing protection boundaries
So how do user programs do something privileged?
–e.g., how can you write to a disk if you can't execute I/O instructions?
User programs must call an OS procedure
–the OS defines a set of system calls
–how does the user-mode to kernel-mode transition happen?
There must be a system call instruction, which:
–causes an exception (throws a software interrupt), which vectors to a kernel handler
–passes a parameter indicating which system call to invoke
–saves the caller's state (registers, mode bit) so it can be restored
–the OS must verify the caller's parameters (e.g., pointers)
–there must be a way to return to user mode once done
A kernel crossing illustrated
Firefox calls read(int fileDescriptor, void *buffer, int numBytes):
–user mode: package arguments, trap to kernel mode
–trap handler: save registers, find the sys_read( ) handler in the vector table
–kernel mode: the sys_read( ) kernel routine runs
–trap handler: restore app state, return to user mode, resume
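The crossing above can be mimicked in miniature. This is a toy sketch: the syscall table, the `trap` function, and the `sys_read` behavior here are illustrative stand-ins, not a real kernel API:

```python
# Toy model of a system call crossing: a numbered table of kernel
# handlers, and a "trap" that dispatches through it.
SYSCALL_TABLE = {}
SYS_READ = 0  # illustrative syscall number

def syscall(number):
    """Register a kernel handler under a syscall number."""
    def register(handler):
        SYSCALL_TABLE[number] = handler
        return handler
    return register

@syscall(SYS_READ)
def sys_read(fd, buffer, nbytes):
    # a real kernel would first verify fd and the user's buffer pointer
    buffer[:nbytes] = b"x" * nbytes   # pretend data arrived from the device
    return nbytes

def trap(number, *args):
    # models the system call instruction: save state, enter "kernel mode",
    # vector to the handler, then restore state and return to "user mode"
    saved_state = {"mode": "user", "regs": "..."}   # saved on entry
    handler = SYSCALL_TABLE[number]                 # vector table lookup
    result = handler(*args)                         # runs in kernel mode
    return result                                   # state restored on return

buf = bytearray(16)
print(trap(SYS_READ, 3, buf, 8))  # -> 8 (bytes "read")
```

Note how the user side never touches the device: it only supplies a syscall number and arguments, and the kernel handler does the privileged work.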
System call issues
–What would happen if the kernel didn't save state?
–Why must the kernel verify arguments?
–How can you reference kernel objects as arguments or results to/from system calls?
Memory protection
The OS must protect user programs from each other
–maliciousness, ineptitude
The OS must also protect itself from user programs
–integrity and security
–what about protecting user programs from the OS?
Simplest scheme: base and limit registers
–are these protected?
[diagram: Prog A, Prog B, Prog C laid out in memory; a base register and a limit register delimit the running program's region]
–base and limit registers are loaded by the OS before starting a program
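The base-and-limit check is simple enough to sketch directly (the function and exception names are illustrative; the addresses are made-up examples):

```python
# Toy base-and-limit protection: every address a user program issues is
# checked against the limit, then relocated by the base.
class AddressFault(Exception):
    """Raised when an access falls outside the program's region."""

def translate(virtual_addr, base, limit):
    if not (0 <= virtual_addr < limit):
        raise AddressFault(hex(virtual_addr))  # trap to the OS
    return base + virtual_addr                 # physical address

# Suppose Prog B is loaded at physical 0x4000 with a 0x1000-byte region:
print(hex(translate(0x0FFF, base=0x4000, limit=0x1000)))  # -> 0x4fff
try:
    translate(0x1000, base=0x4000, limit=0x1000)          # one byte too far
except AddressFault:
    print("protection fault")
```

This also shows why the registers themselves must be privileged: if a user program could reload `base` and `limit`, the check would protect nothing.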
More sophisticated memory protection
Coming later in the course:
–paging, segmentation, virtual memory
–page tables, page table pointers
–translation lookaside buffers (TLBs)
–page fault handling
OS control flow
After the OS has booted, all entry to the kernel happens as the result of an event
–an event immediately stops current execution
–the mode changes to kernel mode, and an event handler is called
The kernel defines handlers for each event type
–specific types are defined by the architecture
  e.g.: timer event, I/O interrupt, system call trap
–when the processor receives an event of a given type, it transfers control to the handler within the OS
  the handler saves program state (PC, regs, etc.)
  the handler functionality is invoked
  the handler restores program state, returns to the program
Interrupts and exceptions
Two main types of events: interrupts and exceptions
–exceptions are caused by software executing instructions
  e.g., the x86 'int' instruction
  e.g., a page fault, or an attempted write to a read-only page
  an expected exception is a "trap"; an unexpected one is a "fault"
–interrupts are caused by hardware devices
  e.g., a device starts or finishes I/O
  e.g., the timer fires
I/O control
Issues:
–how does the kernel start an I/O?
  special I/O instructions
  memory-mapped I/O
–how does the kernel notice an I/O has finished?
  polling
  interrupts
Interrupts are the basis for asynchronous I/O
–the device performs an operation asynchronously to the CPU
–the device sends an interrupt signal on the bus when done
–in memory, an interrupt vector table contains the addresses of the kernel routines that handle the various interrupt types
  who populates the vector table, and when?
–the CPU switches to the address indicated by the vector index specified by the interrupt signal
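The vector-table dispatch described above can be sketched as follows (the vector numbers and handler behavior are made up for illustration; a real table holds kernel routine addresses, populated at boot):

```python
# Toy interrupt vector table: vector number -> kernel handler routine.
interrupt_vector = {}

def register_handler(vector, handler):
    """Done by the kernel at boot time (answering 'who, and when?')."""
    interrupt_vector[vector] = handler

def deliver_interrupt(vector):
    """Models the CPU: index the table with the device's vector number
    and transfer control to the routine found there."""
    interrupt_vector[vector]()

log = []
register_handler(14, lambda: log.append("disk I/O complete"))
register_handler(32, lambda: log.append("timer tick"))

deliver_interrupt(14)   # disk controller signals vector 14
deliver_interrupt(32)   # timer fires on vector 32
print(log)              # -> ['disk I/O complete', 'timer tick']
```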
Timers
How can the OS prevent runaway user programs from hogging the CPU (infinite loops)?
–use a hardware timer that generates a periodic interrupt
–before it transfers to a user program, the OS loads the timer with a time at which to interrupt
  the "quantum" – how big should it be?
–when the timer fires, an interrupt transfers control back to the OS
  at which point the OS must decide which program to schedule next
  very interesting policy question: we'll dedicate a class to it
Should the timer be privileged?
–for reading or for writing?
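The quantum mechanism above can be modeled as a toy round-robin loop. This is a sketch, not a real scheduler; the job tuples and the quantum value are illustrative, and "time" is counted in abstract work units:

```python
# Toy preemption model: each job runs until the "timer fires" (quantum
# used up) or it finishes; a preempted job goes to the back of the queue.
from collections import deque

def run(jobs, quantum=2):
    ready = deque(jobs)          # each entry: (name, remaining work units)
    trace = []                   # which job ran, and for how long
    while ready:
        name, left = ready.popleft()
        ran = min(quantum, left) # timer interrupt fires after `quantum`
        trace.append((name, ran))
        if left - ran > 0:
            ready.append((name, left - ran))  # preempted, not finished
    return trace

print(run([("A", 3), ("B", 1)]))
# -> [('A', 2), ('B', 1), ('A', 1)]
# A is preempted after its quantum, B runs to completion, A finishes.
```

Even this toy shows the policy question: a larger quantum means fewer crossings into the kernel, but an infinite loop in A would make B wait longer for its turn.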
Synchronization
Interrupts cause a wrinkle:
–they may occur at any time, causing code to execute that interferes with the code that was interrupted
–the OS must be able to synchronize concurrent processes
Synchronization:
–guarantee that short instruction sequences (e.g., read-modify-write) execute atomically
–one method: turn off interrupts before the sequence, execute it, then re-enable interrupts
  the architecture must support disabling interrupts
–another method: have special complex atomic instructions
  read-modify-write
  test-and-set
  load-linked/store-conditional
"Concurrent programming"
Management of concurrency and asynchronous events is the biggest difference between "systems programming" and "traditional application programming"
–modern "event-oriented" application programming is a middle ground
It arises from the architecture
It can be sugar-coated, but cannot be totally abstracted away
It is a huge intellectual challenge
–unlike vulnerabilities due to buffer overruns, which are just sloppy programming
Architectures are still evolving
New features are still being introduced to meet modern demands, e.g.:
–support for virtual machine monitors
–support for multi-core and parallel programming
–support for security (encryption, trusted modes)
–increasingly sophisticated video/graphics
–mobile devices
–other stuff that hasn't been invented yet …
Intel's big challenge (Microsoft's too): finding applications that require new hardware (OS) support, so that you will want to upgrade to a new computer to run them.
Review questions
–Why wouldn't you want a user program to be able to access an I/O device (e.g., the disk) directly?
–OK, so what keeps this from happening – what prevents user programs from directly accessing the disk?
–So, how does a user program cause disk I/O to occur?
–What prevents a user program from scribbling on the memory of another user program?
–What prevents a user program from scribbling on the memory of the operating system?
–What prevents a user program from running away with the CPU?
Summary
Protection
–user/kernel modes
–protected instructions
System calls
–used by a user-level process to access OS functions
–access what is "in" the OS
Exceptions
–unexpected events during execution (e.g., divide by zero)
Interrupts
–timer, I/O
OS history
In the very beginning…
–the OS was just a library of code that you linked into your program; programs were loaded in their entirety into memory, and executed
–interfaces were literally switches and blinking lights
And then came batch systems
–the OS was stored in a portion of primary memory
–the OS loaded the next job into memory from the card reader
  the job gets executed
  output is printed, including a dump of memory
  repeat…
–card readers and line printers were very slow, so the CPU was idle much of the time (wastes $$)
Spooling
Disks were much faster than card readers and printers
Spool (Simultaneous Peripheral Operations On-Line)
–while one job is executing, spool the next job from the card reader onto disk
  slow card reader I/O is overlapped with the CPU
–can even spool multiple programs onto disk
  the OS must choose which to run next: job scheduling
–but the CPU is still idle when a program interacts with a peripheral during execution
–buffering, double-buffering
Multiprogramming
To increase system utilization, multiprogramming OS's were invented
–keep multiple runnable jobs loaded in memory at once
–overlap the I/O of one job with the computing of another
  while one job waits for I/O completion, the OS runs instructions from another job
–to benefit, need asynchronous I/O devices
  need some way to know when devices are done: interrupts or polling
–goal: optimize system throughput, perhaps at the cost of response time…
Timesharing
To support interactive use, create a timesharing OS:
–multiple terminals into one machine
–each user has the illusion of the entire machine to him/herself
–optimize response time, perhaps at the cost of throughput
Timeslicing
–divide the CPU equally among the users
–if a job is truly interactive (e.g., an editor), then the system can jump between programs and users faster than users can generate load
–permits users to interactively view, edit, and debug running programs (why does this matter?)
The MIT CTSS system (operational 1961) was among the first timesharing systems
–only one user memory-resident at a time (32 KB of memory!)
The MIT Multics system (operational 1968) was the first large timeshared system
–nearly all OS concepts can be traced back to Multics!
–"second system syndrome": Frederick Brooks, The Mythical Man-Month, 1975
CTSS as an illustration of architectural and OS functionality requirements
[diagram: OS and user program resident in memory]
Parallel systems
Some applications can be written as multiple parallel threads or processes
–can speed up execution by running multiple threads/processes simultaneously on multiple CPUs [Burroughs D825, 1962]
–need OS and language primitives for dividing a program into multiple parallel activities
–need OS primitives for fast communication among activities
  the degree of speedup is dictated by the communication/computation ratio
–many flavors of parallel computers today
  multi-core
  SMPs (symmetric multiprocessors)
  MPPs (massively parallel processors)
  NOWs (networks of workstations)
  computational grids (SETI@home)
Personal computing
The primary goal was to enable new kinds of applications
Bit-mapped display [Xerox Alto, 1973]
–new classes of applications
–new input device (the mouse)
Move computing near the display
–why?
Window systems
–the display as a managed resource
Local area networks [Ethernet]
–why? Effect on the OS?
Distributed OS
Distributed systems facilitate use of geographically distributed resources
–workstations on a LAN
–servers across the Internet
Supports communication between programs
–interprocess communication: message passing, shared memory
–networking stacks
Sharing of distributed resources (hardware, software)
–load balancing, authentication and access control, …
Speedup isn't the issue
–access to a diversity of resources is the goal
Client/server computing
–mail server/service
–file server/service
–print server/service
–compute server/service
–game server/service
–music server/service
–web server/service
–etc.
Peer-to-peer (p2p) systems
–Napster
–Gnutella (LimeWire)
  example technical challenge: self-organizing overlay network
  technical advantage of Gnutella?
  er … legal advantage of Gnutella?
[chart: data source: Digital Music News Research Group]
Embedded/mobile/pervasive computing
Pervasive computing
–cheap processors embedded everywhere
–how many are on your body now? in your car?
–cell phones, PDAs, network computers, …
Typically very constrained hardware resources
–slow processors
–very small amounts of memory (e.g., 8 MB)
–no disk
–typically only one dedicated application
–limited power
But this is changing rapidly!
Ad hoc networking
IT 344
In this class we will learn:
–what are the major components of most OS's?
–how are the components structured?
–what are the most important (common?) interfaces?
–what policies are typically used in an OS?
–what algorithms are used to implement policies?
Philosophy
–you may not ever build an OS
–but as a computer engineer you need to understand the foundations
–most importantly, operating systems exemplify the sorts of engineering design tradeoffs that you'll need to make throughout your careers: compromises among and within cost, performance, functionality, complexity, schedule, …