1 Improving IPC by Kernel Design
Jochen Liedtke
Slides based on a presentation by Rebekah Leslie

2 Microkernels and IPC:
- Microkernel architectures introduce a heavy reliance on IPC
  - more modularity implies more IPC
- Mach pioneered microkernel architectures, but had poor IPC performance
  - poor performance leads people to avoid microkernels entirely
  - or to architect their designs to reduce IPC (combine or co-locate modules)
- This paper explores a performance-oriented design approach and discusses specific IPC optimizations

3 Performance vs. Protection:
- Mach provided strong isolation between tasks
  - indirect IPC transfer via ports
  - access limited by capabilities (port rights)
- L3 removes indirect transfer, capabilities, and RPC validation
  - achieves better performance
  - provides basic address space protection
- Recent L4 designs reintroduce capabilities to achieve isolation in security-critical systems

4 L3 System Architecture:
- Mach-like design for modular operating systems
  - minimal kernel
  - user-level servers for "traditional" OS features: page-fault handling, exception handling, device drivers
- System organized into tasks and threads
- IPC: direct data transfer between threads, addressed by thread ID

5 Design Philosophy:
- Focus on IPC
  - any feature that will increase IPC cost must be closely evaluated
  - when in doubt, design in favor of IPC
- Design for performance
  - a poorly performing technique is unacceptable
  - evaluate feature cost against a concrete baseline
  - aim for a concrete performance goal
- Comprehensive design
  - consider synergistic effects of all methods and techniques
  - cover all levels of implementation, from design to code

6 Performance Baseline:
- The cost of each feature must be evaluated relative to a concrete performance baseline
- For IPC, the theoretical minimum is an empty message: this measures the overhead without data transfer cost
  - 127 cycles without prefetching delays or cache misses
  - + 45 cycles for TLB misses
  - = 172-cycle minimum time
- GOAL: 350 cycles for short messages (7 µs on the 50 MHz 486 used in the paper)

7 Messages in L3:
[Message layout: tag | direct string | indirect strings | memory objects]
- Tag: description of message contents
- Direct string: data to be transferred directly from send buffer to receive buffer
- Indirect string: location and size of data to be transferred by reference
- Memory object: description of a region of memory to be mapped into the receiver's address space (shared memory)
- System calls: send, receive, call (send and receive), reply/wait (receive and send)
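Pictured as a C structure, a message might look like the sketch below; the field names and sizes are hypothetical, since the slides do not give the exact L3 layout.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of an L3-style message. */
typedef struct {
    void  *base;                 /* location of out-of-line data */
    size_t len;                  /* size of data sent by reference */
} indirect_string_t;

typedef struct {
    uintptr_t base;              /* region to map into the receiver */
    size_t    size;
} memory_object_t;

typedef struct {
    uint32_t          tag;           /* describes the message contents */
    uint8_t           direct[64];    /* copied send buffer -> receive buffer */
    indirect_string_t indirect[2];   /* copied by the kernel, by reference */
    memory_object_t   object;        /* mapped lazily (shared memory) */
} l3_msg_t;
```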

8 Basic Message Optimizations:
- Ability to transfer long, complex messages reduces the number of messages that need to be sent (system calls)
- Indirect strings avoid copy operations at user level
  - sender specifies the data location, rather than copying data to a buffer
  - receiver specifies the destination, rather than copying from a buffer
- Memory objects are transferred lazily, i.e., the page table is not modified until access is required
- Combined send/receive calls reduce the number of traps (see the sketch below)
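A hypothetical set of C bindings illustrates how the combined calls fold two traps into one; the names below are illustrative, not the actual L3 system call interface.

```c
#include <stdint.h>

typedef uint32_t tid_t;   /* thread ID (assumed width) */

int l3_send(tid_t dst, const l3_msg_t *msg);
int l3_receive(tid_t src, l3_msg_t *msg);

/* One trap instead of two: */
int l3_call(tid_t server, const l3_msg_t *req, l3_msg_t *reply);           /* send, then receive */
int l3_reply_and_wait(tid_t client, const l3_msg_t *reply, l3_msg_t *req); /* send reply, then receive next request */
```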

9 Optimization - Direct Transfer via Temporary Mapping:
[Diagram: sender address space A, kernel communication window, receiver address space B]
- A two-copy message transfer costs 20 + 0.75n cycles for n bytes
- L3 copies the data once, through a special communication window in kernel space
- The window is mapped onto the receiver's destination region for the duration of the call by copying a page directory entry, with kernel-only permission
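A rough sketch of the idea, assuming 486-style two-level paging, a single 4 MB destination region, and hypothetical helper names (tlb_flush_window); the real L3 code path is more involved.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define COMM_WINDOW 0xE0400000UL   /* assumed kernel-only window address */

extern uint32_t kernel_pgdir[1024];
extern void tlb_flush_window(uintptr_t va);   /* drop stale window translations */

static inline unsigned pde_index(uintptr_t va) { return (unsigned)(va >> 22); }

/* Alias the receiver's buffer into the kernel-only communication window
 * by copying one page directory entry, then copy the data exactly once. */
void ipc_copy_once(const uint32_t *recv_pgdir, uintptr_t dst,
                   const void *src, size_t n)
{
    /* Map the 4 MB region containing the destination at the window,
     * clearing the U/S bit so the mapping is kernel-only. */
    uint32_t pde = recv_pgdir[pde_index(dst)];
    kernel_pgdir[pde_index(COMM_WINDOW)] = pde & ~0x4u;
    tlb_flush_window(COMM_WINDOW);

    /* One copy: sender buffer -> receiver buffer via the window.
     * (Assumes the buffer does not cross the 4 MB boundary; a real
     * kernel would handle crossings.) */
    memcpy((void *)(COMM_WINDOW + (dst & 0x3FFFFFu)), src, n);
}
```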

10 Optimization - Transfer Short Messages in Registers:
- IPC messages are often very short
  - example: device driver acks or error replies
  - on average, between 50% and 80% of L3 messages are less than eight bytes long
- Even on the register-poor x86, two registers can be set aside for short message transfer
- The register transfer implementation saved 2.4 µs, more than the entire overhead of temporary mapping (1.2 µs)
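A sketch of the dispatch decision, with hypothetical names throughout; on the 486 the payload would sit in two reserved general registers (the slides do not say which) and survive the switch untouched.

```c
#include <stdint.h>

typedef struct thread thread_t;

extern int  msg_fits_in_registers(uint32_t tag);  /* payload < 8 bytes? */
extern void transfer_long(thread_t *snd, thread_t *rcv);
extern void switch_to(thread_t *next);            /* must not clobber the
                                                     two payload registers */

void ipc_deliver(thread_t *snd, thread_t *rcv, uint32_t tag)
{
    if (msg_fits_in_registers(tag)) {
        /* Payload stays in the two reserved registers across the
         * context switch; it never touches a memory buffer. */
        switch_to(rcv);
    } else {
        transfer_long(snd, rcv);   /* direct/indirect copy, temp mapping */
        switch_to(rcv);
    }
}
```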

11 Thread Scheduling in L3:
- Scheduler maintains several queues to keep track of relevant thread-state information
  - ready queue stores threads that are able to run
  - wakeup queues store threads that are blocked waiting for an IPC operation to complete or time out (organized by wakeup-time region)
  - polling-me queues store threads waiting to send to some thread
- Efficient representation of data structures
  - queues are stored as doubly-linked lists distributed across TCBs
  - scheduling never causes page faults

12 Optimization - Lazy Dequeueing:
- Scheduler overhead (queue manipulation) is a significant component of IPC cost
- Threads doing IPC are often removed from a queue only to be inserted again a short while later
  - why not avoid the queue manipulation altogether and simply flag the thread control block?
  - i.e., move the overhead from the IPC path to other scheduling paths
- Invariants on scheduling queues (see the sketch below)
  - the ready queue contains at least all ready threads
  - a wakeup queue contains at least all waiting threads
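A minimal sketch of the idea, assuming a circular doubly-linked ready queue with a sentinel and hypothetical TCB fields.

```c
/* Lazy dequeueing: blocking only flags the TCB; stale entries are
 * unlinked later, when the scheduler walks the queue. */
typedef struct tcb {
    struct tcb *next, *prev;   /* ready-queue links, embedded in the TCB */
    int         runnable;      /* authoritative state; the queue may lag */
} tcb_t;

/* IPC path: cheap, no pointer surgery. */
static void block_for_ipc(tcb_t *t)
{
    t->runnable = 0;
}

/* Scheduler path: restore the invariant "ready queue contains at
 * least all ready threads" by unlinking stale entries as we pass. */
static tcb_t *pick_next(tcb_t *sentinel)
{
    tcb_t *t = sentinel->next;
    while (t != sentinel && !t->runnable) {
        tcb_t *stale = t;
        t = t->next;
        stale->prev->next = stale->next;   /* lazy removal */
        stale->next->prev = stale->prev;
    }
    return (t == sentinel) ? NULL : t;     /* NULL: no runnable thread */
}
```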

13 Optimization - Store Thread Control Blocks in Virtual Arrays:
- A thread control block (TCB) stores kernel data for a particular thread
- Every operation on a thread requires lookup, and possibly modification, of that thread's TCB
- Storing TCBs in a virtual array provides fast access to TCB structures
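A sketch of the lookup, with assumed constants; the point is that the MMU resolves the virtual address, so software needs only shift-and-mask arithmetic, and unused slots simply stay unmapped.

```c
#include <stdint.h>

typedef struct tcb tcb_t;

#define TCB_AREA  0xE0000000UL   /* assumed virtual base of the TCB array */
#define TCB_BITS  10             /* assumed: 1 KB reserved per TCB */
#define TID_MASK  0x1FFFFUL      /* assumed thread-number field of a thread ID */

static inline tcb_t *tcb_of(uint32_t tid)
{
    /* No table lookup in software: the thread number indexes the array. */
    return (tcb_t *)(TCB_AREA + ((uintptr_t)(tid & TID_MASK) << TCB_BITS));
}
```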

14 Optimization - Compact Structures with Good Locality:
- Access TCBs through a pointer to the center of the structure, so that short displacements can be used
  - one-byte displacements reach twice as much TCB data as with a pointer to the start of the structure
- Group related TCB information on cache line boundaries to minimize cache misses
- Store frequently accessed kernel data in the same page as the hardware tables (IDT, GDT, TSS)
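The trick exploits the x86's signed 8-bit displacement addressing mode; a sketch with an assumed layout:

```c
/* x86 base+disp8 addressing encodes displacements in -128..+127.
 * Pointing at the middle of the hot part of the TCB lets the compact
 * one-byte encoding reach 256 bytes of data instead of only 128. */
typedef struct tcb tcb_t;

#define TCB_HOT_OFFSET 128   /* assumed: hot fields span bytes 0..255 */

static inline char *tcb_center(tcb_t *t)
{
    return (char *)t + TCB_HOT_OFFSET;
}
/* Kernel code then addresses hot fields as center[-128] .. center[+127],
 * all reachable with short instruction encodings. */
```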

15 Performance Impact of Specific Optimizations:
- Large messages are dominated by copy overhead
- Small messages get the benefit of faster context switching, fewer system calls, and fast access to kernel structures

16 IPC Performance Compared to Mach (Short Messages):
- Measured using a ping-pong micro-benchmark that makes use of the combined send/receive calls
- For an n-byte message, the cost is 7 + 0.02n µs in L3

17 IPC Performance Compared to Mach (Long Messages):
- Same benchmark with larger messages; for n-byte messages larger than 2 KB, cache misses increase and the IPC time is 10 + 0.04n µs
  - slightly higher base cost
  - higher per-byte cost
- By comparison, Mach takes 120 + 0.08n µs
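Plugging concrete sizes into the slides' formulas: at n = 8 bytes, L3 takes about 7.2 µs versus roughly 120.6 µs for Mach, a factor of about 17; at n = 4096 bytes, L3 takes about 174 µs versus roughly 448 µs, a factor of about 2.6, since copy cost dominates both systems at that size.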

18 Comparison of L3 RPC to Previous Systems:
[Comparison table not reproduced in the transcript]

19 Conclusions:
- Poor IPC performance was a major limitation of Mach; well-performing IPC is essential for microkernels to gain wide adoption
- L3 demonstrates that good performance is attainable in a microkernel system, with IPC performance 10 to 22 times better than Mach's
- The performance-centric design approach demonstrated in the paper can be employed in any system, even if the specific optimizations cannot

20 Spare Slides

21 Optimization - Reduce Segment Register Loads:
- Loading segment registers is expensive (9 cycles per register), so many systems use a single, flat segment
- Kernel preservation of the segment registers requires 66 cycles with the naive approach (always reload the registers)
- L3 instead checks whether the flat value is still intact, and only reloads if not
  - the check alone costs 10 cycles
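A sketch of the check in GCC-style inline assembly; the selector value and the exact instruction sequence are assumptions, not the actual L3 code.

```c
#include <stdint.h>

#define FLAT_SEL 0x23   /* assumed flat user data-segment selector */

/* On the kernel exit path: reload DS/ES (66 cycles naively) only when
 * user code actually changed them; the check itself is ~10 cycles. */
static inline void restore_flat_segments(void)
{
    uint16_t ds;
    __asm__ volatile ("mov %%ds, %0" : "=r"(ds));
    if (ds != FLAT_SEL) {
        __asm__ volatile ("mov %0, %%ds\n\t"
                          "mov %0, %%es"
                          : : "r"((uint16_t)FLAT_SEL));
    }
}
```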

