Lecture 1: Introduction


1 Lecture 1: Introduction
SE350: Operating Systems Lecture 1: Introduction OS is one of the most complex topics in all of CS. It is also one of the most important courses for your career. You get to poke under the hood to see how things really work. And a lot of what you’ll find in an OS has an analogue in real life. You’ll find you won’t be able to look at a traffic jam or a supermarket line the same way again.

2 Course Mechanics All information, lecture notes, and announcements will be posted on the course website. The course has been added to Piazza and LEARN; links are available on the course website.

3 Teaching Team Instructor Lab instructor Teaching assistants
Seyed Majid Zahedi Office hours Tuesdays from 12:30 to 13:30 Lab instructor Rollen DSouza Teaching assistants Andrew Gapic Guang Zhi Xiao Liuyang Ren Waleed Qadir Khan Please stop by! To motivate you, I have candy and chocolate in my office that you can help yourself to. Our email addresses are available on the course website. TAs will be helping you during the lab sessions.

4 Readings Textbook Optional references
Operating Systems: Principles and Practice (2nd Edition) Optional references Operating Systems: Three Easy Pieces (freely available online) Operating System Concepts (10th Edition) All the books (and two other OS books) are on the course reserve list and available at the Davis Centre Library. An electronic version of the textbook is available. Read the book before and after the lecture.

5 Evaluation Midterm exam: 15% Lab project: 35% Final exam: 50%
Feb. 12th from 18:30 to 20:20 Lab project: 35% Group sign-up deadline: Jan. 18th Lab due dates: Feb. 4th, Mar. 4th, and Mar. 18th Final report: Apr. 5th Final exam: 50% To be announced There will be no quizzes so you don’t really have to attend the lectures. Please come to the lectures if you want to learn.

6 RTX Project Overview Project description Logistics About the Lab
What is this project about? What are the deliverables? Logistics How to form a group? Help! My group sucks! Lab times About the Lab What lab facilities are provided? How to seek help outside lab?

7 RTX Project You will design, implement and test a real-time kernel
A basic multiprogramming environment Five priority queues and preemption Simple memory management Message-based Inter-Process Communication (IPC) System console Input/Output (UART 0/1) Debugging support (UART 1/0) Your RTX operates on Keil MCB1700 (ARM Cortex-M3) Boards

8 Deliverables Four deliverables in total Important Policies
RTX Implementations (P1, P2 and P3) RTX Demonstrations (Weeks 5, 9 and 11) RTX P4: Final project report (30-40 pages) + Code Important Policies Three grace days without penalty 10% per day late submission penalty afterwards Zero tolerance policy for plagiarism We use Moss We follow UW Policy 71 for any single incident

9 Project Groups Forming a project group Leaving a project group
Four members (three is acceptable in special cases) Within the same lab section as much as possible Email your group to Rollen to sign up by 23:59 on Jan. 18th Leaving a project group One week's notice in writing before the nearest deadline Only one split-up is allowed All students involved lose one grace day Everyone in the same group gets the same grade

10 Scheduled Labs Scheduled lab sessions are in odd weeks
Mon., Wed., and Fri. (excluding reading week) Help sessions: Weeks 1, 3 and 7 Demo sessions: Weeks 5, 9 and 11

11 Lab Facilities Lab is located in E2-2363
Nexus computers Keil MCB1700 LPC1768 (ARM Cortex-M3) boards MDK-ARM MDK-Lite ed. V4.60 (32KB code size limit) RealView Compilation Tools are included The simulator is good for development work outside the lab* SE350 shares the lab with ECE222 students ECE222 has scheduled labs during even weeks Tue., Wed., and Thu. at the same time slot * It is my understanding that the simulator does not perfectly match behaviour on hardware, so be forewarned: test your code on the hardware well before the deadline!

12 Seeking Help Outside Lab
Piazza Looking for group partners Lab/project administration, Keil IDE, and project Q&A Target response time: one business day Do not wait till the last minute to ask questions Please avoid individual emails; use private Piazza posts Reserve emails for directed, confidential conversations Office hours during even weeks (every week for me) By appointment

13 Your First Task Form a group of 4
Recall how to work with MCB 1700 and Keil

14 Main Points (for today)
Operating system definition Software to manage a computer’s resources for its users and applications OS challenges Reliability, security, responsiveness, portability, etc. OS history How did we get here? Device I/O How does I/O work?

15 What is an Operating System?
In some sense, the OS is just a software engineering problem: how do you convert what the hardware gives you into something that application programmers want? For any OS area (file systems, virtual memory, networking, CPU scheduling), begin by asking two questions: what’s the hardware interface (the physical reality), and what’s the application interface (the nicer abstraction)? We’ve broken it down a bit: we have users and their applications. You probably already know about libraries that applications can be linked with, code that helps them do their job, e.g., the C library, with malloc and free and string operations. But when you write an app, you write it as if it has the entire machine; you don’t need to worry about the fact that there are multiple other apps running at the same time, and it doesn’t crash just because one of those other apps has a bug. This interface is an abstract virtual machine, an abstraction that allows programmers to ignore the OS. Much of the OS runs in “kernel mode”; we’ll describe that in the next lecture. That code provides applications the abstraction of their own dedicated hardware. And under all of that is another abstraction that allows the OS to run on a variety of different hardware: the hardware abstraction layer (HAL). That way, you can change the underlying hardware without changing (much of) the OS. The various layers need to coordinate: the library makes calls into the kernel to do things like write file data to the disk or get a network packet, but they run in separate domains. If you look inside Linux (or any other OS you might find), you’ll see these three categories: kern (kernel code), userland (system libraries), and kern/arch (machine-dependent routines, specific to each type of CPU). Software to manage a computer’s resources for its users and applications

16 Operating System Roles
Referee Resource allocation among users, applications Isolation of different users, applications from each other Communication between users, applications Illusionist Each application appears to have the entire machine to itself Infinite number of processors, (near) infinite amount of memory, reliable storage, reliable network transport Glue Libraries, user interface widgets, etc. Resource allocation: all apps share the same set of hardware, so the OS needs to decide which app gets what resources and when. Isolation: the OS should protect applications from each other; if one app crashes, you don’t want it to require a system reboot. The OS should also protect itself from apps. Illusion: when you write a “hello world!” program, you don’t really think about how much memory the system has or how many other apps are running. Glue: the OS provides a set of common tools and services that facilitates sharing between apps, e.g., common UI routines so apps can have the same “look and feel.” Each of these points is pretty complex! But we’ll see a lot of examples. 90% of the code in an OS is in the glue, but it’s mostly easy to understand, so we won’t spend any time on it. Consider the illusion, though: you buy a new computer, with more processors than the last one. But you don’t have to get all new software; the OS is the same, the apps are the same, but the system runs faster. How does it do that? You buy more memory; you don’t change the OS, you don’t change the apps, but the system runs faster. How does it do that? Part of the answer is that the OS serves as the referee between applications. How much memory should everyone have? How much of the CPU?

17 Example: File Systems Referee Illusionist Glue
Prevent users from accessing each other’s files without permission Even after a file is deleted and its space re-used Illusionist Files can grow (nearly) arbitrarily large Files persist even when the machine crashes in the middle of a save Glue Named directories, printf, etc.

18 OS Challenges: Web Service Example
How does the server manage many simultaneous client requests? How do we keep the client safe from spyware embedded in scripts on a web site? How do we make updates to the web site so that clients always see a consistent view? What are the challenges that an OS has to address? To answer that question let’s consider a simple example. Other challenges: Server invokes helper apps. How does the operating system enable multiple applications to communicate with each other? Synchronize access by multiple requests to shared state

19 OS Challenges (cont.) Reliability Availability Security Privacy
Does the system do what it was designed to do? Availability What portion of the time is the system working? Mean Time To Failure (MTTF), Mean Time To Repair (MTTR) Security Can the system be compromised by an attacker? Privacy Data is accessible only to authorized users An excuse to define some terms!

20 OS Challenges (cont.) Performance Latency/response time Throughput
How long does an operation take to complete? Throughput How many operations can be done per unit of time? Overhead How much extra work is done by the OS? Fairness How equal is the performance received by different users? Predictability How consistent is the performance over time?

21 OS Challenges (cont.) Portability For programs:
Abstract virtual machine (AVM) Application programming interface (API) For the operating system Hardware abstraction layer (HAL) Is the operating system easy to move to new hardware platforms? Portability also applies to the OS itself.

22 Portability

23 OS History

24 Microprocessor Trends
End of Dennard Scaling [R. Dennard et al. 1974] [Moore’s Law 1965] Dark Silicon [Esmaeilzadeh et al. 2011] Every two years we have doubled the number of transistors on our chips. This is what Moore predicted in 1965, and is known as Moore’s law. Dennard scaling gave us a guideline for making our transistors faster as we made them smaller. For years, we scaled down the voltage and current of our transistors as we scaled down their feature size. This way, we were able to improve performance while staying within a fixed power budget. Unfortunately, Dennard scaling stopped around 2005. What happened was, we hit the power wall. We could not scale down voltage and current because leakage current started to go up exponentially. As a result, our processors are not getting faster at the rate we want. The architects’ solution was to put multiple cores on the same chip. But this also didn’t work perfectly, because we hit the dark silicon era: we cannot turn on all our cores and run them at full speed for a long period of time, because this could melt our chips.

25 Computing Trends If the OS code base lasts for 20 years, it has to survive between at least two of these columns – that’s a huge amount of change; You often hear about Moore’s Law – how CPU has sped up a huge amount. But memory and disk capacity have gone up by even more! Its not listed here, but disks have become much bigger, but they haven’t gotten much faster – a disk access still takes almost as long as it did in 1981, so the relative speed of the CPU and the disk has become enormous. Perhaps most interesting is the # of users per machine – that ratio has changed dramatically, and it’s a consequence of the 500K times change in cost/performance of computing

26 Early Operating Systems: Very Expensive Computers
One user/application at a time Had complete control of hardware OS was a runtime library Users would stand in line to use the computer Batch systems Keep CPU busy by having a queue of jobs OS would load next job while current one runs Users would submit jobs, and wait, and wait, and wait … For single-user systems, the processor remained idle while the user loaded the program. For batch systems, upgrading the OS would turn the computer back into a single-user system.

27 Time-Sharing Operating Systems
Multiple users on computer at the same time Multiprogramming: run multiple programs at same time Interactive performance: try to complete everyone’s tasks quickly As computers became cheaper, more important to optimize for user time, not computer time 

28 Today’s Operating Systems
Smartphones Embedded systems Laptops Tablets Virtual machines Data center servers

29 Tomorrow’s Operating Systems
Giant-scale data centers Increasing numbers of processors per computer Increasing numbers of computers per user Very large scale storage

30 Device I/O OS needs to communicate with devices
Network, disk, video, keyboard, mouse, etc. There are two methods to access I/O devices Memory-mapped I/O (MMIO) Port-mapped I/O (PMIO) Memory-mapped I/O is the dominant method Port-mapped I/O can be useful in architectures with a constrained physical address space

31 Memory Mapped vs Port Mapped I/O
Address space MMIO: Physical address space is shared between DRAM and I/O PMIO: Address space for I/O is distinct from main memory I/O-access Instructions MMIO: Same as memory-access MOV register, memAddr // To read MOV memAddr, register // To write PMIO: Distinct access instructions IN register, portAddr // To read OUT portAddr, register // To write

32 Programmed I/O I/O operations take time (physical limits)
OS pokes I/O memory on device to issue request  OS polls I/O memory to wait until I/O is done  Device completes, stores data in its buffers  Kernel copies data from device into memory 

33 Interrupt-Driven I/O OS pokes I/O memory on device to issue request
CPU goes back to work on some other task Device completes, stores data in its buffers Device triggers interrupt to signal I/O completion Device-specific handler code runs  When done, resume previous work  If the data transfer rate is high, this could have more overhead than programmed I/O because of the context-switch cost

34 Direct Memory Access (DMA) I/O
OS sets up DMA controller with # words to transfer OS commands I/O device to initiate data transfer DMA controller provides address and r/w control Device reads/writes data directly to main memory DMA controller interrupts CPU on I/O completion This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.

35 Programmed I/O vs DMA Programmed (and interrupt-driven) I/O
I/O results stored in the device  CPU reads and writes to device memory  Each CPU instr. is an uncached read/write (over I/O bus)  Direct memory access (DMA)  I/O device reads/writes into the computer’s memory  After I/O interrupt, CPU can access results in memory 

36 Faster DMA I/O: Buffer Descriptors
OS sets up a queue of I/O requests Each entry in the queue is a buffer descriptor It contains the I/O operation and the buffer to hold the data  Buffer descriptor itself is DMA’ed!  CPU and device share the queue I/O device reads from front  CPU fills at tail  Interrupt only if buffer empties/fills; while the processor is handling the interrupt, the device is idle An implication is that each logical I/O operation can involve several DMA requests: one to download the buffer descriptor from memory into the device, then to copy the data in or out, and then to store the success/failure of the operation back into the buffer descriptor.

37 Device Interrupts How do device interrupts work?
Where does the CPU run after an interrupt? What is the interrupt handler written in? C? Java? What stack does it use? Is the work the CPU had been doing before the interrupt lost forever? If not, how does CPU know how to resume that work? I’m going to answer these questions, kind of as a side light to the rest of the discussion. But they are pretty important!

38 Challenge: Saving/Restoring State
We need to be able to interrupt and transparently resume execution  I/O device signals I/O completion  Periodic hardware timer to check if app is hung  Multiplexing multiple apps on a single CPU  Code unaware it has been interrupted!  Not just the program counter  Condition codes, registers used by interrupt handler, etc. 

39 Question What (hardware, software) do you need to be able to run an untrustworthy application? Answer: memory protection (range limits), the ability to change that memory protection, system calls, the ability to interrupt a running job, a privileged mode, and a way to keep apps from changing their own permissions

40 Another Question How should an operating system allocate processing time between competing uses? Give the CPU to the first to arrive? To the one that needs the least resources to complete? To the one that needs the most resources?

41 Acknowledgment This lecture is a slightly modified version of the one prepared by Tom Anderson. RTX Project slides are adapted from Irene Huang’s slide deck for the course.

