Why Events Are a Bad Idea (for high-concurrency servers)
Authors: Rob von Behren, Jeremy Condit, and Eric Brewer
Presenter: Wenhao Xu
Other references:
[1] Some slides from Rania Elnaggar
[2] Questions from you
[3] Hugh C. Lauer et al. On the Duality of Operating System Structures.
[4] A. Adya et al. Cooperative Task Management Without Manual Stack Management.
Agenda
– Debate (threads vs. events): overview
– A closer look at threads' problems
– Threads vs. events for programming high-concurrency servers
– Evaluation (Knot vs. Haboob)
– Conclusion & discussion
Debate between threads and events
Camp Threads ("Events are WORSE!") vs. Camp Events ("Threads are BAD!")
– 1978: Lauer and Needham, "On the Duality of Operating System Structures"
– 1996: John K. Ousterhout, "Why Threads Are a Bad Idea (for most purposes)"
– SOSP 2001: Eric Brewer's group, "SEDA"
– SOSP 2003: Eric Brewer's group, "Capriccio"; HotOS 2003: "Why Events Are a Bad Idea (for high-concurrency servers)"
– USENIX ATC 2004: "Lazy AIO for event-driven servers"
(Picture from Rania Elnaggar.)
Lauer and Needham: "The principal conclusion is that neither model is inherently preferable, and the main consideration for choosing between them is the nature of the machine architecture upon which the system is being built ..."
Is it a dichotomy?
Criticism of Threads 1: Poor Performance
– O(n) operations, where n is the number of threads
– Relatively high context-switching overhead compared to events
Argument: these are artifacts of poor threading implementations, not intrinsic properties of threads.
Questions/Discussion: The authors' threaded server collapses after 100,000 concurrent tasks, while the event-based server maintains its performance level. Is this still just an "artifact of a poor thread implementation," or do event-based systems handle load better? (A thread-per-connection sketch follows below.)
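For concreteness, a minimal sketch (plain POSIX sockets and pthreads; this is not the paper's Knot server) of the thread-per-connection style under discussion. Each concurrent task holds a kernel thread and its stack, which is the cost the critics point to and which the authors attribute to poor threading implementations rather than to threads themselves.

```c
#include <pthread.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* One kernel thread (and stack) per connection -- the cost being debated. */
static void *handle_conn(void *arg) {
    int fd = (int)(long)arg;                      /* fd passed through the void* */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)   /* echo until EOF */
        write(fd, buf, (size_t)n);
    close(fd);
    return NULL;
}

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1024);
    for (;;) {
        int fd = accept(srv, NULL, NULL);
        if (fd < 0)
            continue;
        pthread_t t;
        pthread_create(&t, NULL, handle_conn, (void *)(long)fd);
        pthread_detach(t);                        /* fire-and-forget worker */
    }
}
```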
Criticism of Threads 2: Restrictive Control Flow
Claim: thread control flow is too "linear" and therefore restrictive.
– Argument 1: the control-flow patterns used by Flash and by applications in Ninja, SEDA, and TinyOS fall into three simple categories: call/return, parallel calls, and pipelines. All of them can easily be implemented with threads (see the sketch below).
– Argument 2: more complex patterns are difficult to use and more likely to lead to errors.
– Argument 3: dynamic fan-in and fan-out is less graceful with threads, but it is not used by high-concurrency servers.
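A sketch of how the three patterns map onto threads, in plain C with pthreads; the worker names are hypothetical stubs, not code from Flash, Ninja, SEDA, or TinyOS.

```c
#include <pthread.h>
#include <stdio.h>

/* Illustrative sub-tasks (hypothetical names, trivial stubs). */
static void *fetch_header(void *arg) { (void)arg; return NULL; }
static void *fetch_body(void *arg)   { (void)arg; return NULL; }

/* 1. Call/return: an ordinary sequential call. */
static void serve_sequential(void *req) {
    fetch_header(req);
    fetch_body(req);
}

/* 2. Parallel calls: fork the sub-tasks, then join them. */
static void serve_parallel(void *req) {
    pthread_t a, b;
    pthread_create(&a, NULL, fetch_header, req);
    pthread_create(&b, NULL, fetch_body, req);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
}

/* 3. Pipeline: each stage would be a thread consuming from the previous
 *    stage's queue and feeding the next one (queues omitted for brevity). */

int main(void) {
    serve_sequential(NULL);
    serve_parallel(NULL);
    puts("done");
    return 0;
}
```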
Criticism of Threads 3: Synchronization
Claim: thread synchronization is too heavyweight, while event systems use cooperative multitasking and get synchronization "for free."
Argument: cooperative multitasking can also be used in thread systems (see the sketch below).
Questions/Discussion: Isn't synchronization, together with the true concurrency it enables, both the single largest advantage and the single largest difficulty of threads? Cooperative multitasking appears to negate a large part of why we use threads; in a multiprocessor world, is it a feasible solution?
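A minimal sketch of the "free" synchronization argument, assuming a hypothetical Capriccio-style user-level thread package whose only preemption points are explicit yields and blocking I/O. The thread_yield() primitive is an assumption, stubbed out here so the sketch compiles; it does not speak to the multiprocessor question raised above.

```c
#include <stdio.h>

/* Assumed cooperative-yield primitive of a user-level thread package
 * (Capriccio-style).  Stubbed out so the sketch compiles; a real
 * package would switch to another ready thread at this point. */
static void thread_yield(void) { }

static long hits;                  /* shared state, deliberately unlocked */

static void handle_request(int fd) {
    (void)fd;
    hits++;                        /* safe without a lock: no yield point here,
                                      so no other thread can interleave */
    thread_yield();                /* other threads may run only at explicit
                                      yields and blocking I/O calls */
}

int main(void) {
    handle_request(0);
    printf("hits = %ld\n", hits);
    return 0;
}
```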
Criticism of Threads 4: State Management
Claim: thread stacks are an ineffective way to manage live state:
– a large stack wastes virtual address space;
– a small stack risks stack overflow.
Rescue: propose a mechanism that enables dynamic stack growth (Capriccio). (A stack-size sketch follows below.)
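A sketch of the trade-off with ordinary POSIX threads: the stack size must be fixed at creation time, so the programmer chooses between wasted virtual address space and overflow risk. Capriccio's dynamic stack growth removes that choice; the code below is standard pthreads, not Capriccio.

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* Fixed 64 KB stack: cheap enough to run very many threads, but it
     * overflows on deep recursion or large stack-allocated buffers.
     * The default (often several MB) wastes virtual address space instead. */
    pthread_attr_setstacksize(&attr, 64 * 1024);
    pthread_t t;
    if (pthread_create(&t, &attr, worker, NULL) == 0)
        pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    puts("done");
    return 0;
}
```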
Criticism of Threads 5: Scheduling
– Event systems can schedule event deliveries at the application level, which enables efficient scheduling.
– Event systems allow better code locality by running several events of the same kind in a row (see the sketch below).
Argument: a thread system can apply the same scheduling tricks to cooperatively scheduled threads.
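A sketch of the batching idea with hypothetical event and queue types: the loop drains all pending events of one type before moving to the next, which is the code-locality trick the slide describes. The paper's claim is that a cooperative thread scheduler can batch runnable threads by stage in the same way.

```c
#include <stddef.h>
#include <stdio.h>

struct event { struct event *next; void *data; };
struct queue { struct event *head; };            /* one queue per event type */

static struct event *pop(struct queue *q) {
    struct event *e = q->head;
    if (e)
        q->head = e->next;
    return e;
}

/* Drain each queue completely before moving on, so many events of the
 * same kind run back to back and share warm instruction/data caches. */
static void run_batched(struct queue *queues, size_t ntypes,
                        void (*const handlers[])(void *)) {
    for (size_t i = 0; i < ntypes; i++) {
        struct event *e;
        while ((e = pop(&queues[i])) != NULL)
            handlers[i](e->data);
    }
}

static void on_read_ev(void *d)  { printf("read  %s\n", (char *)d); }
static void on_write_ev(void *d) { printf("write %s\n", (char *)d); }

int main(void) {
    struct event r1 = { NULL, "r1" };
    struct event r2 = { &r1, "r2" };             /* two "read" events queued */
    struct event w1 = { NULL, "w1" };            /* one "write" event queued */
    struct queue queues[2] = { { &r2 }, { &w1 } };
    void (*const handlers[2])(void *) = { on_read_ev, on_write_ev };
    run_batched(queues, 2, handlers);
    return 0;
}
```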
Discussion
– This paper hints, though does not explicitly state, that a user-level threading package is the only practical way to achieve its scheduling goals: to avoid difficult synchronization and locking, threads should yield only at specific, known locations, such as explicit yields and blocking system calls. Could kernel-level threading accomplish the same thing?
– I think letting the user manage task state is better than leaving it to the OS, even if the OS can use a heuristic algorithm.
High-concurrency servers: threads vs. events

Threads:
– Control flow: call/return is more natural.
– Task state: the run-time call stack encapsulates all live state for a task, which makes debugging easy and makes it easy to clean up task state after exceptions and after normal termination.

Events:
– Control flow: obfuscated; requires stack ripping; leads to subtle race conditions and logic errors due to unexpected message arrivals.
– Task state: usually heap-allocated; difficult to clean up because of the many branches in the program; garbage collection is inappropriate for high-performance systems.

Fixing the problems of events is tantamount to switching to threads.

Questions/Discussion:
– What is stack ripping? (See the sketch below.)
– How complex is debugging in thread-based systems? The paper simply claims that threads are a natural way to program while ignoring the difficulty of thread debugging.
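Stack ripping, illustrated with a sketch in the spirit of Adya et al. [4]: the single readable loop in the threaded version must be "ripped" into callbacks in the event-driven version, and its live state moves from the stack into a heap-allocated continuation. The threaded half uses real POSIX calls; read_async and struct cont are assumed, illustrative names for an event-driven I/O API, not a real library.

```c
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

/* --- Threaded version: one readable function; live state (fd, buf,
 *     total, n) simply lives on the stack across the blocking read. --- */
long count_bytes_threaded(int fd) {
    char buf[4096];
    long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)   /* blocks; thread sleeps */
        total += n;
    return total;
}

/* --- Event-driven version: the same loop "ripped" across callbacks.
 *     read_async() is an assumed, illustrative async-read API; the live
 *     state must be bundled into a heap-allocated continuation. --- */
struct cont { int fd; long total; void (*done)(long); };

void read_async(int fd, void *buf, size_t len,
                void (*cb)(void *st, ssize_t n), void *st);

static char iobuf[4096];                          /* shared I/O buffer */
static void on_read(void *st, ssize_t n);

void count_bytes_evented(int fd, void (*done)(long)) {
    struct cont *c = malloc(sizeof *c);
    c->fd = fd;
    c->total = 0;
    c->done = done;
    read_async(fd, iobuf, sizeof iobuf, on_read, c);     /* kick off */
}

static void on_read(void *st, ssize_t n) {
    struct cont *c = st;
    if (n > 0) {
        c->total += n;
        read_async(c->fd, iobuf, sizeof iobuf, on_read, c);  /* re-arm */
    } else {
        c->done(c->total);                        /* finished (EOF/error) */
        free(c);
    }
}
```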
Compiler Support
– Dynamic stack growth: the stack can be adjusted at run time (see Capriccio).
– Live state management.
– Synchronization: warn the programmer about data races. Example: nesC, which supports atomic sections, understands the concurrency model, and can pass information to the runtime system to allow safe execution on multiprocessors. (A plain-C sketch of an atomic section follows below.)
Questions/Discussion: Besides nesC, does such a powerful compiler really exist, or will it show up in the future? In what other ways can current compilers help thread programming?
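The slide's concrete example is nesC's atomic sections. Below is a rough plain-C analogue of what such a section guarantees, using C11 atomics; it illustrates the semantics only and is not nesC code, where the compiler would enforce the atomicity itself.

```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_long packets;           /* shared counter, updated concurrently */

void on_packet(void) {
    /* nesC would write:  atomic { packets++; }  and the compiler would
     * enforce it; here an explicit C11 atomic increment plays that role. */
    atomic_fetch_add_explicit(&packets, 1, memory_order_relaxed);
}

int main(void) {
    on_packet();
    printf("packets = %ld\n", (long)atomic_load(&packets));
    return 0;
}
```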
Evaluation
– Knot: a 700-line web server implemented using the Coro coroutine library, which (1) minimizes context switches and (2) translates blocking I/O requests into asynchronous requests internally (/dev/poll); see the sketch below.
– Uses the same test suite that was used to evaluate SEDA.
Questions: It seems that the high performance of Knot-C comes from its policy of serving existing connections and rejecting new ones. Could we apply such a policy to Haboob so that it achieves performance comparable to Knot-C?
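A sketch of the second point, assuming a hypothetical coroutine primitive (coro_wait_readable is not the real Coro API): a read wrapper that looks blocking to the caller but makes the fd non-blocking and yields to the scheduler/poller until the fd is ready. The paper mentions /dev/poll; plain poll() or epoll would serve the same role underneath.

```c
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/types.h>

/* Assumed primitive: park the current coroutine, let the scheduler run
 * others, and resume this one when the poller reports fd readable. */
void coro_wait_readable(int fd);

/* Looks like a blocking read to the caller, but never blocks the process. */
ssize_t knot_style_read(int fd, void *buf, size_t len) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;                             /* data or EOF */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                            /* real error */
        coro_wait_readable(fd);                   /* yield until readable */
    }
}
```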
Conclusion
From this paper:
– A simple programming model
– A wealth of compiler analyses
– Tight integration between the compiler and the thread system
What do you think? Which one is better, threads or events? Or does it depend?
Discussion
With the prevalence of multi-core:
– Threads or the event programming model?
– Other programming models?
– Changes to the OS?
– New programming languages?
– New compilers?
A good opportunity to revise the whole software stack!
Thanks for your attention!
Backup slides
Event-based model (from Rania Elnaggar)
"Classic" definition: the "message-passing model"
– Small number of static processes
– Specific communication paths (channels, ports)
– Divided address space
– Synchronization and cooperation through message passing (hence "message-passing model")
– No "mutual" sharing of data
– Blocked I/O is complex to handle (stack ripping)
E.g., SEDA. (A minimal message-passing sketch follows below.)
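A minimal message-passing sketch under these assumptions: two processes with separate address spaces cooperate only through an explicit channel (a pipe here), with no shared data. Purely illustrative, not SEDA code.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int ch[2];
    pipe(ch);                                     /* the communication channel */
    if (fork() == 0) {                            /* handler: separate process */
        char msg[64];
        ssize_t n = read(ch[0], msg, sizeof msg - 1);
        if (n > 0) {
            msg[n] = '\0';
            printf("handled: %s\n", msg);
        }
        _exit(0);
    }
    const char *req = "request 1";
    write(ch[1], req, strlen(req));               /* send a message; no memory
                                                     is ever shared directly */
    close(ch[1]);
    wait(NULL);
    return 0;
}
```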
The pseudo-concurrent model (from Rania Elnaggar)
(Diagram: a single scheduler dispatching to Handler 1 through Handler 4.)
Thread-based model (from Rania Elnaggar)
"Classic" definition:
– Large number of dynamic, light-weight processes (i.e., threads)
– Classically procedure-oriented, through fork and join
– Synchronization via shared data and interlocking
– System resources encoded as global data structures and shared via locks
– Blocked I/O is just a blocked thread; state is saved in the thread context
(A minimal fork/join-with-locks sketch follows below.)
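A minimal sketch of this classic thread model in C with pthreads: many light-weight threads forked and joined, sharing a global data structure guarded by a lock. Purely illustrative.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8

static long counter;                              /* shared global state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);                /* synchronization via    */
        counter++;                                /* shared data + locking  */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)            /* "fork" */
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)            /* "join" */
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);           /* always 8000 */
    return 0;
}
```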
The quasi-concurrent model (from Rania Elnaggar)
(Diagram: threads 1, 2, 3, ..., n running over shared resources and a shared address space.)