SEDA: An Architecture for Well-Conditioned, Scalable Internet Services


1 SEDA: An Architecture for Well-Conditioned, Scalable Internet Services
Matt Welsh, David Culler, and Eric Brewer
Computer Science Division, University of California, Berkeley
Presented by Oindrila Mukherjee

2 Agenda
What is SEDA?
Why SEDA?
Thread-Based Concurrency
Event-Driven Concurrency
SEDA Goals
Stages
Applications
Dynamic Resource Controllers
Implementing SEDA concepts using existing OS primitives
Asynchronous Socket I/O
Asynchronous File I/O
Conclusion

3 What is SEDA?
SEDA – Staged Event-Driven Architecture
Enables high concurrency and load conditioning
Simplifies the construction of well-conditioned services
Combines the thread-based and event-based programming models

4 Why SEDA?
Internet services must be responsive, robust, and always available
Services must handle requests for dynamic content
Load fluctuates widely
SEDA combines the thread-based and event-driven programming models

5 Thread-Based Concurrency
Relatively easy to program
Associated overheads:
  Cache and TLB misses
  Scheduling overhead
  Lock contention

6 Thread-Based Concurrency (contd.)
Can be achieved in two ways (both sketched below):
  Unbounded thread allocation – the thread-per-task model
    Eats up memory
    Too many context switches
  Bounded thread pools – a fixed limit on the number of threads allocated
    Avoids throughput degradation
    Can be unfair to clients, causing large waiting times
    The common approach for servicing Internet services
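For concreteness, here is a minimal Java sketch (not from the paper or these slides) of the two approaches using the standard java.util.concurrent API; the handle() method and the server setup are placeholders.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadModels {
    // Unbounded thread allocation: one new thread per accepted connection.
    // Simple to write, but memory use and context-switch overhead grow with load.
    static void threadPerTask(ServerSocket server) throws IOException {
        while (true) {
            Socket client = server.accept();
            new Thread(() -> handle(client)).start();
        }
    }

    // Bounded thread pool: at most n requests are serviced concurrently.
    // Protects throughput, but excess requests queue up and may wait a long time.
    static void boundedPool(ServerSocket server, int n) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        while (true) {
            Socket client = server.accept();
            pool.execute(() -> handle(client));
        }
    }

    private static void handle(Socket client) {
        // Placeholder request handler: read the request, write a response, close.
        try { client.close(); } catch (IOException ignored) {}
    }
}
```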

7 Event-Driven Concurrency
Small number of threads (a minimal event-loop sketch follows)
Each request is a finite state machine (FSM); events trigger state transitions
Few context switches
Little degradation in throughput under heavy load
Designing the event scheduler is complex
Difficult to achieve modularity
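As a rough illustration of this style (again, not from the slides), a single-threaded event loop over java.nio's Selector might look like the following; the port, buffer size, and the request-parsing FSM are placeholder assumptions.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));   // example port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                      // wait for readiness events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New connection: register for reads (FSM state "reading request").
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // A readiness event drives the per-connection state machine forward.
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) {
                        client.close();             // peer closed: terminal state
                    }
                    // ... parse request bytes, transition state, register OP_WRITE, etc.
                }
            }
        }
    }
}
```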

8 Threads vs. Events

9 SEDA
An application is a network of stages
Stages are connected by event queues
Uses dynamic resource controllers

10 SEDA - Goals
Support massive concurrency
Simplify the construction of well-conditioned services
Enable introspection
Support self-tuning resource management

11 SEDA - Stages
Event Handler – provides the application logic of the stage
Incoming Event Queue – holds the events to be processed by that stage's event handler
Thread Pool – threads dynamically allocated to run the event handler
Controller – manages the stage, adjusting scheduling and thread allocation (all four components are sketched below)
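A minimal sketch of how these pieces fit together, assuming a hypothetical Stage/EventHandler API rather than the actual Sandstorm interfaces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One SEDA stage: an incoming event queue, a stage-local thread pool,
// and an event handler. Names and structure are illustrative only.
interface EventHandler<E> {
    void handleEvents(List<E> batch);   // application logic of the stage
}

class Stage<E> {
    private final BlockingQueue<E> queue = new LinkedBlockingQueue<>(); // incoming event queue
    private final EventHandler<E> handler;
    private final List<Thread> threads = new ArrayList<>();             // stage-local thread pool
    private volatile int batchingFactor = 16;                           // tuned by the batching controller

    Stage(String name, EventHandler<E> handler, int initialThreads) {
        this.handler = handler;
        for (int i = 0; i < initialThreads; i++) addThread(name + "-" + i);
    }

    // Other stages (or event sources) enqueue events here.
    void enqueue(E event) { queue.add(event); }

    // Controller hook: grow the pool when the queue backs up.
    void addThread(String name) {
        Thread t = new Thread(() -> {
            List<E> batch = new ArrayList<>();
            while (true) {
                try {
                    batch.add(queue.take());                  // block for at least one event
                    queue.drainTo(batch, batchingFactor - 1); // then grab up to a batch of events
                    handler.handleEvents(batch);
                    batch.clear();
                } catch (InterruptedException e) {
                    return;                                   // controller removed this thread
                }
            }
        }, name);
        threads.add(t);
        t.start();
    }

    int queueLength() { return queue.size(); }
}
```

In the full architecture, the controllers described on the following slides would adjust the pool size and the batching factor at run time.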

12 SEDA - Applications
An application is a network of stages (a two-stage example follows)
Queues decouple the execution of stages from one another
Each stage manages its load independently
Improves debugging and performance analysis
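A toy two-stage pipeline, with hypothetical stage names, showing how a queue decouples a producer stage from a consumer stage:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two stages connected by an event queue: "parse" produces events that
// "respond" consumes. Each stage runs in its own thread, so a slow consumer
// simply lets its queue grow instead of stalling the producer's logic.
public class TwoStagePipeline {
    public static void main(String[] args) {
        BlockingQueue<String> rawRequests = new LinkedBlockingQueue<>();
        BlockingQueue<String> parsedRequests = new LinkedBlockingQueue<>();

        Thread parseStage = new Thread(() -> {
            try {
                while (true) {
                    String raw = rawRequests.take();
                    parsedRequests.add("parsed:" + raw);   // hand off to the next stage
                }
            } catch (InterruptedException ignored) {}
        }, "parse-stage");

        Thread respondStage = new Thread(() -> {
            try {
                while (true) {
                    System.out.println("responding to " + parsedRequests.take());
                }
            } catch (InterruptedException ignored) {}
        }, "respond-stage");

        parseStage.start();
        respondStage.start();
        rawRequests.add("GET /index.html");                // inject a sample event
    }
}
```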

13 SEDA - Dynamic Resource Controllers
Thread Pool Controller (sketched below)
  Avoids allocating too many threads
  When the queue length exceeds a threshold, the controller adds a thread
  When threads sit idle, the controller removes them
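A simplified sketch of such a controller; the sampling interval, thresholds, and the Pool interface are assumptions, not values or APIs from the paper:

```java
import java.util.concurrent.TimeUnit;

// Periodically samples a stage's queue length and grows or shrinks its thread
// pool: grow when the queue backs up, shrink when the pool has been idle.
class ThreadPoolController implements Runnable {
    interface Pool {                       // minimal view of a stage's thread pool
        int queueLength();
        int threadCount();
        void addThread();
        void removeIdleThread();
    }

    private final Pool stage;
    private final int queueThreshold;      // e.g. 100 pending events
    private final int maxThreads;          // hard cap on pool size
    private final long idleMillis;         // how long the pool may idle before shrinking

    ThreadPoolController(Pool stage, int queueThreshold, int maxThreads, long idleMillis) {
        this.stage = stage;
        this.queueThreshold = queueThreshold;
        this.maxThreads = maxThreads;
        this.idleMillis = idleMillis;
    }

    @Override public void run() {
        long lastBusy = System.currentTimeMillis();
        while (true) {
            if (stage.queueLength() > queueThreshold && stage.threadCount() < maxThreads) {
                stage.addThread();                       // queue is backing up: add capacity
                lastBusy = System.currentTimeMillis();
            } else if (stage.queueLength() == 0
                    && System.currentTimeMillis() - lastBusy > idleMillis
                    && stage.threadCount() > 1) {
                stage.removeIdleThread();                // pool has been idle: reclaim a thread
            } else if (stage.queueLength() > 0) {
                lastBusy = System.currentTimeMillis();
            }
            try { TimeUnit.MILLISECONDS.sleep(100); }    // sampling interval (arbitrary)
            catch (InterruptedException e) { return; }
        }
    }
}
```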

14 SEDA - Dynamic Resource Controllers (contd.)
Batching Controller (sketched below)
  Adjusts the batching factor: the number of events processed by each invocation of the stage's event handler
  Observes the stage's output rate of events
  Decreases the batching factor until throughput begins to degrade
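A simplified sketch of the batching-factor adjustment; the constants and the adjust() signature are assumptions:

```java
// Adjusts a stage's batching factor based on its measured event output rate:
// shrink the batch while throughput holds up (for lower response time), and
// back off when throughput starts to drop.
class BatchingController {
    private int batchingFactor = 64;       // events handed to the handler per invocation
    private double bestRate = 0.0;         // highest output rate seen so far

    // Called periodically with the stage's recent output rate (events/sec).
    int adjust(double observedRate) {
        if (observedRate >= 0.9 * bestRate) {
            bestRate = Math.max(bestRate, observedRate);
            batchingFactor = Math.max(1, batchingFactor - 1);   // throughput fine: shrink batch
        } else {
            batchingFactor = Math.min(256, batchingFactor + 8); // throughput degraded: grow batch
            bestRate = observedRate;                            // reset the baseline
        }
        return batchingFactor;
    }
}
```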

15 Asynchronous I/O Primitives
Asynchronous Socket I/O
  Provides a non-blocking sockets interface for services
  Outperforms the thread-based compatibility layer
Asynchronous File I/O
  Implemented using blocking I/O and a bounded thread pool (sketched below)
  Only one thread processes events for a particular file at a time
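A rough sketch of the file I/O idea: blocking reads are pushed onto a worker thread and completions are delivered as callback events. For simplicity this uses one single-threaded executor per file (which trivially guarantees one thread per file at a time), whereas the real layer shares a single bounded pool across files; the AsyncFile class and its methods are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Emulates asynchronous file I/O using blocking I/O on a dedicated worker thread.
class AsyncFile implements AutoCloseable {
    private final Path path;
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    AsyncFile(Path path) { this.path = path; }

    // Enqueue a read; the callback fires later with the file contents.
    void readAll(Consumer<byte[]> onComplete) {
        worker.execute(() -> {
            try {
                onComplete.accept(Files.readAllBytes(path));  // blocking read, off the caller's thread
            } catch (IOException e) {
                e.printStackTrace();                          // real code would enqueue an error event
            }
        });
    }

    @Override public void close() { worker.shutdown(); }
}
```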

16 Conclusion
Internet services experience huge variations in load (e.g. the Slashdot effect)
Resources must be applied where they are needed most; dynamic resource allocation enables this
SEDA uses dynamic controllers for resource management and scheduling within each stage
SEDA isolates threads within each stage and uses an event-driven design to maximize throughput at high load

