

1 Performance and Availability in Wide-Area Service Composition
Bhaskaran Raman, ICEBERG, EECS, U.C. Berkeley
Presentation at Siemens, June 2001

2 The Case for Services
"Service and content providers play an increasing role in the value chain. The dominant part of the revenues moves from the network operator to the content provider. It is expected that value-added data services and content provisioning will create the main growth."
[Figure: the value chain, from subscriber/user through service broker, service management, and access and core network operators, to value-added service providers and content providers]
Access networks: cellular systems, cordless (DECT), Bluetooth, DECT data, wireless LAN, wireless local loop, satellite, cable, DSL

3 Service Composition
[Figure: two composed service-level paths. An email repository (Provider A) is fed through a text-to-speech service, with replicated instances at Providers Q and R, to a cellular phone. A video-on-demand server (Provider A) is fed through a transcoder (Provider B) to a thin client.]

4 Service Composition
Operational model:
– Service providers deploy different services at various network locations
– Next-generation portals compose services
– Quickly enable new functionality on new devices, possibly through SLAs
– Code is NOT mobile (mutually untrusting service providers)
Composition across:
– Service providers
– The wide area
Notion of a service-level path

5 Wide-Area Service Composition: Performance and Availability
Performance: choice of service instances
Availability: detecting and handling failures

6 Related Work
Service composition is complex: service discovery, interface definitions, semantics of composition.
Previous efforts have addressed:
– Semantics and interface definitions: COTS (Stanford), Future Computing Environments (Georgia Tech)
– Fault-tolerant composition within a single cluster: TACC (Berkeley)
– Performance-constrained choice of a service, but not for composed services: SPAND (Berkeley), Harvest (Colorado), Tapestry/CAN (Berkeley), RON (MIT)
None address wide-area network performance or failure issues for long-lived composed sessions.

7 Our Architecture
[Figure: service clusters, i.e. compute clusters capable of running services, deployed across the Internet and connected by peering relations (monitoring and cascading) into an overlay network; a composed service-level path runs from source to destination across clusters]
Layers: hardware platform (service clusters), logical platform (peering relations, overlay network), application plane (service-level path creation, handling failures).
Service-level path creation relies on service location and network performance information; handling failures involves detection and recovery.

8 Architecture: Advantages
Overlay nodes are clusters:
– A compute platform for services
– Hierarchical monitoring
– Within a cluster: for process/machine failures
– Across clusters: for network path failures
– Aggregated monitoring: amortized overhead

9 The Overlay Network
[Figure: peering relations and the overlay network underpin service-level path creation and failure handling]
The overlay network provides the context for service-level path creation and failure handling.

10 Service-Level Path Creation
A connection-oriented network:
– Explicit session-setup stage
– There is "switching" state at the intermediate nodes
Need a connection-less protocol for connection setup.
Need to keep track of three things:
– Network path liveness
– Metric information (latency/bandwidth) for optimality decisions
– Where services are located

11 Service-Level Path Creation
Three levels of information exchange:
– Network path liveness: low overhead, but very frequent
– Metric information (latency/bandwidth): higher overhead, not so frequent
– Bandwidth changes only once in several minutes [Balakrishnan'97]
– Latency changes appreciably only about once an hour [Acharya'96]
– Location of services in clusters: bulky, but changes rarely (once in a few weeks or months); could also use an independent service-location mechanism

12 Service-Level Path Creation
Link-state algorithm to exchange information:
– Lower overhead of individual measurements → finer time-scale of measurement
– The service-level path is created at the entry node
– Link-state is chosen because it allows all-pairs-shortest-path calculation on the graph
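The link-state exchange gives every entry node a full latency map of the overlay, so creating a service-level path reduces to a shortest-path computation constrained to pass through the required service. A minimal sketch of that selection step (the 4-cluster topology, latencies, and service placement below are hypothetical, not the paper's testbed):

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest-latency distances over the overlay graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_service_path(graph, src, dst, service_nodes):
    """Pick the service instance minimising total src -> service -> dst latency."""
    from_src = dijkstra(graph, src)
    to_dst = dijkstra(graph, dst)         # links are symmetric, so reuse Dijkstra
    best = min(service_nodes,
               key=lambda s: from_src.get(s, float("inf")) + to_dst.get(s, float("inf")))
    return best, from_src[best] + to_dst[best]

# Hypothetical overlay: latencies in ms, transcoder replicated at clusters B and C.
overlay = {
    "src": [("A", 10), ("B", 40)],
    "A":   [("src", 10), ("B", 15), ("C", 30)],
    "B":   [("src", 40), ("A", 15), ("dst", 20)],
    "C":   [("A", 30), ("dst", 10)],
    "dst": [("B", 20), ("C", 10)],
}
instance, latency = best_service_path(overlay, "src", "dst", ["B", "C"])
```

Here instance B wins (25 ms to reach it via A, plus 20 ms to the destination) over instance C (40 + 10 ms), even though C is closer to the destination.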

13 Service-Level Path Creation
Two ideas:
– Path caching: remember what previous clients used (another use of clusters)
– Dynamic path optimization: since session transfer is a first-order feature, the first path created need not be optimal
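Path caching can be as simple as a per-cluster table keyed by the composition request. A minimal sketch (the class name, key layout, and example paths are illustrative assumptions, not the paper's design):

```python
class PathCache:
    """Per-cluster cache of service-level paths that previous clients used."""

    def __init__(self):
        self._paths = {}

    def lookup(self, src, services, dst):
        """Return a previously used path for this composition, or None on a miss."""
        return self._paths.get((src, tuple(services), dst))

    def store(self, src, services, dst, path):
        """Remember the cluster sequence used for this composition."""
        self._paths[(src, tuple(services), dst)] = path

# Illustrative use: one client's path is reused by the next client with the
# same composition request from the same entry cluster.
cache = PathCache()
cache.store("entry-cluster", ["transcoder"], "vod-server",
            ["entry-cluster", "B", "vod-server"])
hit = cache.lookup("entry-cluster", ["transcoder"], "vod-server")
miss = cache.lookup("entry-cluster", ["text-to-speech"], "vod-server")
```

A cache hit skips the all-pairs-shortest-path computation entirely; dynamic path optimization then repairs any staleness after the session is up.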

14 Session Recovery: Design Trade-offs
End-to-end vs. local-link recovery.
End-to-end:
– Pre-establishment of the alternate path is possible
– But failure information has to propagate
– And the performance of the alternate path could have changed
Local-link:
– No need for failure information to propagate
– But additional overhead

15 The Overlay Topology: Design Factors
How many nodes?
– A larger number of nodes → lower latency overhead
– But scaling concerns
Where to place nodes?
– Close to the edges, so that hosts have points of entry and exit close to them
– Close to the backbone, to take advantage of good connectivity
Who to peer with?
– Nature of connectivity
– Least sharing of physical links among overlay links

16 Failure Detection in the Wide Area: Analysis
[Figure: the video-on-demand example over the overlay network — Provider A's video-on-demand server reaches a thin client through Provider B's transcoder]

17 Failure Detection in the Wide Area: Analysis
What are we doing?
– Keeping track of the liveness of the wide-area Internet path
Why is it important?
– 10% of Internet paths have only 95% availability [Labovitz'99]
– BGP can take several minutes to converge [Labovitz'00]
– Either could significantly affect real-time sessions based on service-level paths
Why is it challenging?
– Is there even a notion of "failure", given Internet cross-traffic and congestion?
– What if losses could last for any duration with equal probability?

18 Failure Detection: the Trade-off
Monitor path liveness with a keep-alive heartbeat; a failure is detected when no heartbeat arrives within a timeout period.
A false positive is a failure detected incorrectly, which causes unnecessary overhead.
There is a trade-off between time-to-detection and the rate of false positives: a shorter timeout detects failures faster but mistakes congestion-induced gaps for failures.
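The trade-off can be made concrete with a toy detector that flags a failure whenever the gap between successive heartbeats exceeds the timeout. The arrival trace below is made up for illustration: one 1.0 s gap caused by congestion and one real 2.4 s outage. An aggressive 0.9 s timeout raises both alarms (one false), while a patient 1.8 s timeout catches only the real outage:

```python
def detect_failures(arrival_times, timeout):
    """Raise an alarm for every inter-heartbeat gap exceeding `timeout`.

    Returns (time of last heartbeat before the gap, gap length in seconds).
    """
    return [(prev, round(cur - prev, 3))
            for prev, cur in zip(arrival_times, arrival_times[1:])
            if cur - prev > timeout]

# Hypothetical arrival times (seconds) of heartbeats sent every 0.3 s:
# a 1.0 s congestion-induced gap after t=0.6, a real 2.4 s outage after t=2.2.
arrivals = [0.0, 0.3, 0.6, 1.6, 1.9, 2.2, 4.6, 4.9]

aggressive = detect_failures(arrivals, timeout=0.9)  # fast, but one false positive
patient = detect_failures(arrivals, timeout=1.8)     # slower, only the real outage
```

The aggressive detector reports both gaps; the patient one reports only the 2.4 s outage, at the cost of 0.9 extra seconds of detection latency.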

19 UDP-Based Keep-Alive Stream
Geographically distributed hosts:
– Berkeley, Stanford, UIUC, TU-Berlin, UNSW
– Some trans-oceanic links, some within the US
A UDP heartbeat is sent every 300 ms between host pairs; we measure the gaps between receipt of successive heartbeats.
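A minimal version of this measurement can be sketched with a loopback UDP socket. This is a local demo of the mechanism, not the wide-area experiment: the 50 ms interval is shrunk from the experiment's 300 ms just to keep the run short, and the heartbeat count is arbitrary:

```python
import socket
import threading
import time

def send_heartbeats(addr, interval_s, count):
    """Send `count` small UDP heartbeats to `addr`, one every `interval_s` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(count):
        sock.sendto(b"hb", addr)
        time.sleep(interval_s)
    sock.close()

def receive_gaps(sock, count):
    """Receive `count` heartbeats; return the gaps between successive arrivals."""
    stamps = []
    for _ in range(count):
        sock.recvfrom(16)
        stamps.append(time.monotonic())
    return [b - a for a, b in zip(stamps, stamps[1:])]

# Bind the receiver first so no heartbeat is lost, then start the sender.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))      # ephemeral port on loopback
recv_sock.settimeout(5.0)             # a missing heartbeat should not hang us
addr = recv_sock.getsockname()

sender = threading.Thread(target=send_heartbeats,
                          args=(addr, 0.05, 10), daemon=True)
sender.start()
gaps = receive_gaps(recv_sock, 10)    # 9 gaps, each near 0.05 s on an idle host
recv_sock.close()
```

Over the wide area, the interesting data is exactly this gap list: its tail (gaps far above the sending interval) is what the next slide examines.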

20 UDP-Based Keep-Alive Stream
[Figure: distribution of inter-heartbeat gaps above 900 ms; 6 of 11 such gaps were false positives, i.e. a false-positive rate of 6/11]

21 UDP Experiments: What Do We Conclude?
A significant number of outages exceed 30 seconds (on the order of once a day).
But a 1.8-second outage turns into a 30-second outage with only 50% probability. If we react to 1.8-second outages by transferring the session, we can achieve much better availability than is possible today.
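That conditional statistic can be estimated directly from a trace of gap durations. A sketch of the computation (the gap sample below is invented purely to illustrate it, and is NOT the experiment's data):

```python
def prob_long_given_short(gaps_s, short_s, long_s):
    """Estimate P(outage > long_s | outage > short_s) from observed gap durations."""
    over_short = [g for g in gaps_s if g > short_s]
    if not over_short:
        return 0.0
    return sum(g > long_s for g in over_short) / len(over_short)

# Illustrative gap durations in seconds: six gaps pass 1.8 s,
# and three of those go on to exceed 30 s.
sample = [0.4, 2.0, 45.0, 3.5, 120.0, 0.9, 2.5, 60.0]
p = prob_long_given_short(sample, 1.8, 30.0)
```

With this sample, p comes out to 0.5, mirroring the slide's claim: half of the gaps that pass the 1.8-second trigger would have become long outages, so reacting early pays off.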

22 UDP Experiments: What Do We Conclude?
1.8 seconds is good enough for non-interactive applications:
– On-demand video/audio usually have 5-10 second buffers anyway
1.8 seconds is not good enough for interactive/live applications:
– But it is definitely better than having the entire session cut off

23 Overhead of the Overlay Network: Preliminary Evaluation
Overhead of routing over the overlay network, as opposed to using the underlying physical network:
– Estimate the routing overhead using simulation and a network model
– A placement strategy is needed; assume placement near the core
– Overhead is a function of the number of overlay nodes
Result: the overhead of the overlay network is negligible once it covers about 5% of nodes (200 of 4000 in the simulated topology).
The Internet has about 100,000 address prefixes; 5% of that is 5000 nodes.

24 Research Methodology
Design:
– Connection-oriented overlay network of clusters
– Session transfer on failure
– Aggregation: amortization of overhead
Analysis:
– Wide-area monitoring trade-offs: how quickly can failures be detected, and at what rate of false positives?
Evaluation:
– Simulation: routing overhead; effect of the size of the overlay
– Implementation: MP3 music for GSM cellular phones; codec service for IP telephony

25 Research Methodology: Metrics and Approach
Metrics: overhead, scalability, stability.
Approach for evaluation:
– Simulation
– Trace-based emulation
– Leverage the Millennium testbed: hundreds of fast, well-connected cluster machines
– Can emulate the wide-area network based on traces/models
– Real implementation testbed (possible collaboration?)

26 Summary
Logical overlay network of service clusters:
– Middleware platform for service deployment
– Optimal service-level path creation
– Failure detection and recovery
Failures can be detected in O(1 sec) over the wide area, which is useful for many applications.
The number of overlay nodes required seems reasonable: O(1000s) for minimal latency overhead.
Several interesting issues to look at: overhead, scalability, stability.

27 References
[Labovitz'99] C. Labovitz, A. Ahuja, and F. Jahanian, "Experimental Study of Internet Stability and Wide-Area Network Failures", Proc. FTCS'99
[Labovitz'00] C. Labovitz, A. Ahuja, A. Bose, and F. Jahanian, "Delayed Internet Routing Convergence", Proc. SIGCOMM'00
[Acharya'96] A. Acharya and J. Saltz, "A Study of Internet Round-Trip Delay", Technical Report CS-TR-3736, U. of Maryland
[Yajnik'99] M. Yajnik, S. Moon, J. Kurose, and D. Towsley, "Measurement and Modeling of the Temporal Dependence in Packet Loss", Proc. INFOCOM'99
[Balakrishnan'97] H. Balakrishnan, S. Seshan, M. Stemm, and R. H. Katz, "Analyzing Stability in Wide-Area Network Performance", Proc. SIGMETRICS'97

