OS Concepts - Overview
Memory
- OS provides memory space
- Applications can address the entire address space
  - All 2^64 possible locations
  - OS restricts access to most of it in some way
    - E.g. non-executable, read-only, kernel-only
- Address space is organized into segments
  - Each segment contains data of a specific type
  - .text = program instructions (code)
  - .rodata = read-only data
  - .data = initialized global variables
    - Also: string constants
  - .bss = uninitialized global variables
  - Heap
  - Stack
Memory layout
- Traditional Unix (32-bit)
- Program contents on the bottom, kernel memory on top
- Dynamic memory is in the middle
  - Heap grows up, stack grows down
- Bottom to top:
  - program text (.text)
  - initialized data (.data)
  - uninitialized data (.bss)
  - run-time heap (via malloc), bounded above by the "brk" pointer
  - memory-mapped region for shared libraries
  - stack
  - kernel virtual memory (invisible to user code)
Memory layout
- Modern Linux (64-bit): the same picture, plus much more
- Many more addresses
- Kernel is no longer the top 1 GB
  - Sparsely mapped in at various addresses
  - Kernel virtual memory, kernel physical memory map, memory-mapped devices
- Balancing address use between stack and heap is no longer an issue
  - Heap allocated using mmap(); brk can still be used
- VDSO region
  - User-executable kernel code
  - User-accessible kernel data (e.g. the current time)
Memory management
- Address space of a process is virtual memory
  - What the process sees
  - Virtual memory may or may not be backed by physical memory
    - Actual byte-addressable memory devices on the motherboard (DRAM, NVM, etc.)
- OS manages the mapping of virtual memory to physical memory
  - Memory is grouped together as pages
    - Typically 4 KB of physically contiguous memory
  - OS allocates pages for each process
  - OS maps allocated pages into the virtual address space of each process
  - OS tracks the current mapping of all processes
    - What memory is assigned to whom
  - OS can change the mapping at any time
    - Move memory around
    - Move memory to disk (swapping)
Static vs. dynamic memory
- Each process has static and dynamic memory
  - Static memory: created at process initialization
  - Dynamic memory: assigned as the process runs
- This is only a user-level distinction
  - No difference to the OS
- Dynamic memory is expanded based on user-level allocation requests
  - Multiple interfaces to request more memory
  - Ultimately the same OS mechanism
Processes/Threads
- User-level execution takes place in a process context
  - OS allocates CPU time to processes/threads
  - Process context: the CPU and OS state necessary to represent a thread of execution
- Processes and threads are conceptually different
  - Reality: processes == threads + some extra OS state
- Linux: everything is managed as a kernel thread
  - Kernel threads are mostly what you would think of as a process from Intro to OS
  - User-level threads are really just processes with a shared address space
    - But different stacks
- As you can see, the terminology becomes blurry
Processes/Threads
- New processes/threads are created via system call
  - Linux: clone()
    - Creates a new kernel thread
    - Used for both processes and user-level threads
    - What about fork()?
  - FreeBSD: fork() + thr_create()
  - Windows: CreateProcess() + CreateThread()
- "Addressing" processes and threads
  - Processes are assigned a pid
  - Threads share the pid, but are assigned unique tids
- Processes can be externally controlled using signals
  - Signals are a fundamental Unix mechanism designed for processes (they operate on PIDs)
  - Combining signals and threads can be really scary
Linux Example
- Linux clone() takes a lot of arguments
- fork() is a wrapper with a fixed set of args to clone()
- man clone:

    /* Prototype for the raw system call */
    long clone(unsigned long flags, void *child_stack,
               void *ptid, void *ctid, struct pt_regs *regs);

- Typical fork() usage:

    pid_t pid = fork();
    if (pid == 0) {
        // child code
    } else if (pid > 0) {
        // parent code
    } else {
        // error (pid == -1)
    }
Scheduling threads
- OS selects which thread to run
  - Scheduler policy
- OS activates a thread
  - Context switch mechanism
- When to schedule
  - Cooperative scheduling
    - Threads explicitly yield the CPU
  - Preemptive scheduling
    - OS preempts threads at any time, based on policy
Launching Applications
- Request the OS to set up the execution context and memory
  - Allocate and initialize memory contents
  - Initialize execution state
    - Registers, stack pointer, etc.
    - Set %rip (instruction pointer) to the application entry point
- Memory layout
  - Process memory is organized into segments
  - Segments are stored in the program executable file
    - Binary format organizing the data to load into memory
    - Linux + most Unices: ELF
    - Windows: PE
  - OS just copies segments from the executable file into the correct memory locations
Launching applications
- Linux: exec*() system calls
  - All take a path to an executable file
  - Replaces (overwrites) the current process state
- But what about scripts?
  - Scripts are executed by an interpreter
    - A binary executable program
  - OS launches the interpreter and passes the script as argv[1]
- OS scans the file passed to exec*() to determine how to launch it
  - ELF binaries: a binary header at the start of the file specifies the format
  - Scripts: #!/path/to/interpreter
I/O Devices
- Two primary aspects of a computer system
  - Processing (CPU + memory)
  - Input/output
- Role of the operating system
  - Provide a consistent interface
    - Simplify access to hardware devices
    - Implement mechanisms for interacting with devices
  - Allocate and manage resources
    - Protection
    - Fairness
  - Obtain efficient performance
    - Understand the performance characteristics of the device
    - Develop policies
I/O Subsystem
- User process
- Kernel (software)
  - Kernel I/O subsystem
  - Device drivers
- Hardware
  - Device controllers
  - Devices: SCSI bus, keyboard, mouse, PCI bus, GPU, hard disk
User View of I/O
- User processes cannot have direct access to devices
  - Manage resources fairly
  - Protect data from access-control violations
  - Protect the system from crashing
- OS exports higher-level functions
  - User process performs system calls (e.g. read() and write())
- Blocking vs. nonblocking I/O
  - Blocking: suspends execution of the process until the I/O completes
    - Simple and easy to understand
    - Inefficient
  - Nonblocking: returns from system calls immediately
    - Process is notified when the I/O completes
    - Complex, but better performance
User view: types of devices
- Character-stream
  - Transfers one byte (character) at a time
  - Interface: get() or put()
    - Implemented as restricted forms of read()/write()
  - Examples: keyboard, mouse, modem, console
- Block
  - Transfers blocks of bytes as a unit (defined by the hardware)
  - Interface: read() and write()
    - Random access: seek() specifies which bytes to transfer next
  - Examples: disks and tapes
Kernel I/O Subsystem
- I/O is scheduled from a pool of requests
  - Requests are rearranged to optimize efficiency
  - Example: disk requests are reordered to reduce head seeks
- Buffering
  - Deals with different transfer rates
  - Adjustable transfer sizes
    - Fragmentation and reassembly
  - Copy semantics
    - Can the calling process reuse the buffer immediately?
- Caching: avoid device accesses as much as possible
  - I/O is SLOW
  - Block devices can read ahead
Device Drivers
- Encapsulate the details of the device
  - Wide variety of I/O devices (different manufacturers and features)
  - Kernel I/O subsystem is not aware of hardware details
- Loaded at boot time or on demand
- IOCTLs: a special Unix system call (I/O control)
  - Alternative to adding a new system call
  - Interface between user processes and device drivers
  - Device-specific operations
  - Looks like a system call, but also takes a file descriptor argument
    - Why?
Device Driver: Device Configuration
- Interacts directly with the device controller
- Special instructions
  - Valid only in kernel mode
  - x86: in/out instructions
  - No longer popular
- Memory-mapped
  - Read and write operations on special memory regions
  - How are memory operations delivered to the controller?
  - OS protects the interfaces by not mapping the memory into user processes
  - Some devices can map subsets of their I/O space into processes
    - Buffer queues (i.e. network cards)
Interacting with Device Controllers
- How to know when I/O is complete?
- Polling
  - Disadvantage: busy waiting
    - CPU cycles are wasted when I/O is slow
    - Often need to be careful with timing
- Interrupts
  - Goal: enable asynchronous events
  - Device signals the CPU by asserting the interrupt request line
  - CPU automatically jumps to the interrupt service routine (ISR)
    - Interrupt vector: a table of ISR addresses, indexed by interrupt number
  - Lower-priority interrupts are postponed until higher-priority ones finish
    - Interrupts can nest
  - Disadvantage: interrupts "interrupt" processing
    - Interrupt storms
Device Driver: Data Transfer
- Programmed I/O (PIO)
  - CPU initiates the operation and reads in every byte/word of data
- Direct memory access (DMA)
  - Offloads the data-transfer work to a special-purpose processor
  - CPU configures the DMA transfer
    - Writes a DMA command block into main memory
      - Target addresses and transfer sizes
    - Gives the command block address to the DMA engine
  - DMA engine transfers data from the device into the memory specified in the command block
  - DMA engine raises an interrupt when the entire transfer is complete
  - Virtual or physical address?