
1 OSMOSIS Final Presentation

2 Introduction Osmosis System Scalable, distributed system. Many-to-many publisher-subscriber delivery of real-time sensor data streams, with QoS-constrained routing. Ability to perform distributed processing on stream data. Processing threads can migrate between hosts.
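As a rough sketch of the many-to-many publish/subscribe model with QoS-constrained subscriptions described on this slide (all names here are illustrative assumptions, not the actual OSMOSIS API):

```java
// Hypothetical sketch of the publish/subscribe surface; these names
// are illustrative assumptions, not the actual OSMOSIS API.
interface StreamBus {
    void publish(String streamId, byte[] sample);
    void subscribe(String streamId, QoS constraint, StreamHandler handler);
}

// The constraints the QoS-aware routing layer tries to satisfy.
record QoS(long minBandwidthBps, long maxLatencyMs) {}

interface StreamHandler {
    void onSample(String streamId, byte[] sample);
}
```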

3 Introduction Osmosis System (cont.) Distributed resource management: maintain a balanced load and maximize the number of QoS constraints met. Cross-platform implementation.

4 Motivation Possible Systems A distributed video delivery system. Multiple subscribers with different bandwidth requirements. Streams are compressed en route within the Pastry network for lower-bandwidth subscribers. A car traffic management system. Cameras at each traffic light, connected in a large distributed network. Different systems can subscribe to different streams to determine traffic in specific areas, allowing for re-routing of traffic, statistics gathering, etc.

5 Motivation Possible Systems A generalized SETI@home-style distributed system. Clients can join and leave the Osmosis network. Once part of the network, they can receive content and participate in the processing of jobs within the system.

6 Related Work Jessica Project A distributed system with thread migration, but it uses a centralized server for load balancing, which limits scalability. End System Multicast End systems implement all multicast-related functionality, including membership management and packet replication. Builds a mesh of all nodes in the network to construct the tree topology, which does not scale. Pastry/Scribe Application-level multicast and anycast on a generic, scalable, self-organizing substrate for peer-to-peer applications. Extremely scalable, but pays no attention to QoS.

7 Related Work Osmosis Goal: Find a middle ground between the optimal, yet non-scalable, performance of End System Multicast and the scalable, yet sub-optimal, performance of Pastry/Scribe.

13 System Overview [component diagram: Transport, Resource Management, Thread Migration, and Migration Policy, linked by flows of network & CPU utilization, routing information, network utilization, and where/when-to-migrate decisions]

14 Thread Migration Provides a means of transporting a thread from one machine to another. It has no knowledge of either the current resource state or the overlay network.
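A minimal sketch of such a migration facade, under the assumption (hypothetical names, not the actual OSMOSIS interface) that a migratable thread can checkpoint its state and resume elsewhere; per the slide, it only moves state, and some other component supplies the destination:

```java
import java.io.Serializable;
import java.net.InetSocketAddress;

// Hypothetical thread-migration facade. It only moves thread state; it
// knows nothing about resources or the overlay -- the caller (the
// Migration Policy) supplies the destination.
interface ThreadMigrator {
    /** Checkpoint the processor's state and resume it on `destination`. */
    void migrate(MigratableProcessor processor, InetSocketAddress destination);
}

interface MigratableProcessor extends Serializable {
    void pause();               // quiesce before checkpointing
    byte[] checkpoint();        // capture in-flight state for shipping
    void resume(byte[] state);  // continue on the destination host
}
```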

15 Resource Management API providing network and CPU utilization information. Used by Transport to create and maintain the logical overlay. Used by the thread Migration Policy to decide when and where to migrate.
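A hypothetical sketch of what such a query surface could look like (these method names are assumptions): one API consulted both by Transport, to build the overlay, and by the Migration Policy, to decide when and where to migrate:

```java
import java.net.InetAddress;

// Hypothetical resource-management query API; names are assumptions.
interface ResourceMonitor {
    double cpuUtilization();               // local CPU load, 0.0..1.0
    int runQueueLength();                  // local scheduler queue depth
    long bandwidthBpsTo(InetAddress peer); // measured passively or actively
    long latencyMsTo(InetAddress peer);
}
```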

16 Transport Creates the overlay network based on resource-management information. Provides the communications infrastructure. Provides an API to the Migration Policy allowing access to routing-table information.

17 Migration Policy Decides when and where to migrate threads based on a pluggable policy. Leverages resource metrics and the routing table of the logical overlay in decision making. Calls the thread migration API when it is time to migrate, passing the address of the destination node.
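One way such a pluggable policy could be expressed (a sketch under assumed names; the threshold rule below is an illustrative example, not the project's actual policy):

```java
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.Optional;

// A policy sees resource metrics and returns a destination when it
// decides a thread should move. Names are hypothetical.
interface MigrationPolicy {
    Optional<InetSocketAddress> shouldMigrate(
            double localCpu, Map<InetSocketAddress, Double> neighborCpu);
}

// Example plug-in: once local CPU utilization crosses a threshold,
// migrate to the least-loaded known neighbor.
class ThresholdPolicy implements MigrationPolicy {
    private final double threshold;
    ThresholdPolicy(double threshold) { this.threshold = threshold; }

    @Override
    public Optional<InetSocketAddress> shouldMigrate(
            double localCpu, Map<InetSocketAddress, Double> neighborCpu) {
        if (localCpu < threshold) return Optional.empty();
        return neighborCpu.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}
```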

18 Resource Monitoring In order to provide basic tools for scalability and QoS-constrained routing, it is necessary to monitor system resource availability. Measurements Network characteristics (bandwidth/latency) CPU characteristics (utilization/queue length)

19 Resource Monitoring Bandwidth Measurement When a stream exists between hosts, passive measurement is performed. Otherwise, active measurements are carried out using the packet-train technique. The averaging function can be defined by the user. Implementation Uses the pcap library on Linux and Windows.
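Since the slide says the averaging function over bandwidth samples is user-definable, here is one plausible plug-in as a sketch. The exponentially weighted moving average below is an assumption for illustration, not necessarily the function the project used:

```java
// User-definable averaging over bandwidth samples; EWMA shown as one
// plausible choice (an assumption, not the project's actual function).
interface Averager {
    double update(double sample); // feed one measurement, get the average
}

class Ewma implements Averager {
    private final double alpha;   // weight of the newest sample, 0..1
    private double value;
    private boolean seeded;

    Ewma(double alpha) { this.alpha = alpha; }

    @Override
    public double update(double sample) {
        value = seeded ? alpha * sample + (1 - alpha) * value : sample;
        seeded = true;
        return value;
    }
}
```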

20 Resource Monitoring CPU Measures Statistics collected at user-defined intervals. Implementation Linux Kernel level: a module collects data every jiffy. User level: reads the loadavg & uptime /proc files. Windows Built-in performance counters.
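The user-level Linux path described here amounts to parsing /proc/loadavg; a minimal sketch (class name is hypothetical, the /proc format is standard Linux):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal user-level CPU measure: parse the 1/5/15-minute load
// averages from /proc/loadavg. Linux-only; class name is hypothetical.
class LoadAvgReader {
    /** Returns {1min, 5min, 15min} load averages. */
    static double[] read() throws IOException {
        String[] fields = Files.readString(Path.of("/proc/loadavg")).split("\\s+");
        return new double[] {
            Double.parseDouble(fields[0]),
            Double.parseDouble(fields[1]),
            Double.parseDouble(fields[2])
        };
    }
}
```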

21 Resource Monitoring Evaluation of techniques System/network-wide overhead of running the measurement code. How different levels of system/network load affect the measurement techniques.

22 Resource Monitoring Evaluation of work Done: Linux functionality implemented; CPU measures evaluated. In progress: bandwidth measurement evaluation; Windows implementation.

23 Transport Overview Distributed, scalable, and widely deployable routing infrastructure. Create a logical space correlated with the physical space. Distributed routing table construction and maintenance. Multicast transmission of data with the ability to meet QoS.

24 Routing Infrastructure Logical Space Assume IP addresses provide an approximation of the physical topology. 1:1 mapping of logical to physical. Routing Tables Maximum size of 1 K entries. Obtained incrementally during joining. Progressively closer routing à la Pastry.
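A sketch of the core "progressively closer" step in Pastry-style routing, under the assumption (for illustration only) that node IDs are hex strings and each hop forwards to a known peer sharing a strictly longer prefix with the destination key:

```java
// Pastry-style prefix routing step; the hex-string ID representation
// is an assumption for illustration, not the OSMOSIS wire format.
class PrefixRouter {
    static int sharedPrefixLen(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    /** Pick a peer strictly closer (by shared prefix) to `key`, or null. */
    static String nextHop(String localId, String key, Iterable<String> peers) {
        int best = sharedPrefixLen(localId, key);
        String hop = null;
        for (String peer : peers) {
            int p = sharedPrefixLen(peer, key);
            if (p > best) { best = p; hop = peer; }
        }
        return hop; // null => this node is prefix-closest: deliver locally
    }
}
```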

25 Multicast Tree Growing QoS considered during the join/build phase. Localized, secondary rendezvous points. Next-hop session information maintained by all nodes in the multicast tree.
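One way to picture the per-node state this implies, as a sketch (the structure and the bandwidth-budget check are assumptions, not the project's actual join protocol): each node keeps next-hop sessions per group and admits a child only if its QoS demand still fits:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical per-node multicast state: next-hop children per group,
// with a QoS (bandwidth) check applied at join time.
class MulticastNode {
    private final Map<String, Set<InetSocketAddress>> children = new HashMap<>();
    private long spareBandwidthBps;

    MulticastNode(long spareBandwidthBps) { this.spareBandwidthBps = spareBandwidthBps; }

    /** Accept `child` into `group` only if its bandwidth demand fits. */
    boolean tryJoin(String group, InetSocketAddress child, long demandBps) {
        if (demandBps > spareBandwidthBps) return false; // redirect join elsewhere
        spareBandwidthBps -= demandBps;
        children.computeIfAbsent(group, g -> new HashSet<>()).add(child);
        return true;
    }
}
```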

26 Multicast Group Organizational Diagram

27 Transport Evaluation Planned Test the network stress and QoS of our system compared to IP Multicast, Pastry, and End-System Multicast.

28 Transport Future Work User and kernel space implementations. Integrate XTP to utilize the system.


30 Migration Overview Both user and kernel level implementations: Change node state via the associated API (pass-through, processing, corked and uncorked). Migrate nodes while maintaining stream integrity. Kernel/C: Fewer protection domain switches, fewer copies, kernel threading, and scalability. Faster. User/Java: Can run on any Java platform. Friendlier.
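A sketch of how those node states could fit together (the exact semantics below are an assumption): corking buffers incoming data so the stream stays intact while a thread migrates, and uncorking drains the backlog:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical state API for a stream node; the cork/uncork semantics
// shown here are an assumption about how stream integrity is kept
// across a migration, not the project's actual implementation.
class StreamNode {
    enum State { PASS_THROUGH, PROCESSING, CORKED }
    private State state = State.PASS_THROUGH;
    private final Deque<byte[]> backlog = new ArrayDeque<>();

    void cork() { state = State.CORKED; }       // buffer while migrating

    void uncork() {                             // drain backlog, resume work
        state = State.PROCESSING;
        while (!backlog.isEmpty()) handle(backlog.poll());
    }

    void onData(byte[] chunk) {
        if (state == State.CORKED) backlog.add(chunk);
        else handle(chunk);
    }

    private void handle(byte[] chunk) { /* pass through or process */ }
}
```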

31 Migration Accomplishments Kernel: IOCTL /dev interface. Design and code for the different node states. Streaming handled by kernel threads in the keventd process. Test and API interface.

32 Migration Accomplishments Java: Command-line or socket-based API. Dynamic binding of the processor object, which must be derived from a provided abstract class. Works with any socket producer/consumer pair.
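The abstract-class contract described here might look roughly like this sketch (the class name, signature, and the example subclass are all hypothetical):

```java
// Hypothetical shape of the provided abstract class: users derive a
// processor and the migration layer binds to the concrete class
// dynamically, plugging it between a socket producer/consumer pair.
abstract class StreamProcessor implements java.io.Serializable {
    /** Transform one chunk of stream data; identity = pass-through. */
    abstract byte[] process(byte[] chunk);
}

// Example user-supplied processor: crude downsampling by dropping
// every other byte (a stand-in for real per-stream processing).
class Downsampler extends StreamProcessor {
    @Override
    byte[] process(byte[] chunk) {
        byte[] out = new byte[(chunk.length + 1) / 2];
        for (int i = 0; i < out.length; i++) out[i] = chunk[2 * i];
        return out;
    }
}
```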

33 Migration Integration Kernel: Non-OSMOSIS-specific C/C++ API. Socket-based API. Java: Java command-line API. Provides abstract classes for processors. Socket-based API.

34 Migration Evaluation Comparison with: Standardized methods for data pass-through. Existing non-real-time streaming systems. Existing thread migration systems. Comparison and integration between the Java and Kernel Loadable Module implementations.

35 Migration Future Work Kernel: Implement zero-copy for the processing state. Heterogeneous thread migration. Java: Increased performance. Both: Support for alternate network protocols. Testing and evaluation.

36 Conclusions The systems and algorithms developed are significant initial steps toward a final OSMOSIS system. They have been designed to be modular and easily integrated together. The research and mechanisms developed during this project are not bound to the OSMOSIS system.

