
1 Future and Emerging Technologies (FET) - The roots of innovation
Proactive initiative on: Global Computing (GC)
DBGlobe IST-2001-32645, 1st Review, Paphos, January 31, 2003
Results on Data Delivery (WP3)

2 WP3 Outline: Data Delivery/Coordination
Task 3.1: Data delivery among the system components. Derive adaptive data delivery mechanisms considering various modes of delivery, such as:
- push (transmission of data without an explicit request) vs. pull
- periodic vs. aperiodic
- multicast vs. unicast delivery
Task 3.2: Model the coordination of the mobile entities using workflow management (and transactional workflows) and techniques from the multi-agent community.
DBGlobe, 1st Annual Review, Paphos, Jan 2003

3 Timeline
WP3 spans months 3-24 (Year 1 and Year 2):
- Task 3.1: Data Delivery
- Task 3.2: Coordination
- Task 3.3: Performance
Deliverables:
- D8: Data Delivery Mechanisms (Oct 2002)
- D9: Modeling Coordination Through Workflows (April 2003)
- D10: Data Delivery and Querying (August 2003)

4 Outcomes of WP3 so far:
D8: Data Delivery Mechanisms
- A taxonomy of mechanisms
- An outline of their potential use within the DBGlobe architecture
A number of specific results in data delivery in Global Computing:
- Coherent Push-based Data Delivery
- Adaptive Multi-version Broadcast Data Delivery
- Efficient Publish-Subscribe Data Delivery

5 In this presentation:
- Just a note on the different modes
Summary of technical results:
1. Coherent Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery

6 D8: Taxonomy of Different Modes of Data Delivery
Client Pull vs. Server Push
- pull-based: the transfer of information is initiated by the client
- push-based: server-initiated; servers send information to clients without any specific request
- push is scalable, but clients may receive irrelevant data
- hybrid schemes: hot data are pushed and cold data are pulled
Aperiodic vs. Periodic
- aperiodic delivery: usually event-driven; a data request (for pull) or transmission (for push) is triggered by an event (i.e., a user action for pull or a data update for push)
- periodic delivery: performed according to some pre-arranged schedule

7 D8: Taxonomy of Different Modes of Data Delivery
Unicast vs. 1-N
- unicast: from a data source (server) to a single client
- 1-to-N: data sent by the server are received by multiple clients (multicast and broadcast)
Data vs. Query Shipping
- based on the unit of interaction between clients and data sources
- depends on whether the data sources have query processing capabilities
- query shipping may reduce the communication load, since only relevant data sets are delivered to the client

8 D8: Taxonomy of Different Modes of Data Delivery
Examples along the three dimensions (pull/push, periodic/aperiodic, unicast/1-N):
- Email list: aperiodic, unicast, push
- Publish/subscribe: aperiodic, 1-N, push
- Polling: periodic, unicast, pull
- Request/response: aperiodic, unicast, pull
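The three orthogonal dimensions of the taxonomy can be captured as a small data structure; a minimal sketch (the type and field names are mine, not from the deliverable):

```python
from typing import NamedTuple

# One delivery mode = one choice along each of the three dimensions.
class Mode(NamedTuple):
    initiation: str  # "push" or "pull"
    timing: str      # "periodic" or "aperiodic"
    audience: str    # "unicast" or "1-N"

# The four examples from the diagram above.
EXAMPLES = {
    "email list":        Mode("push", "aperiodic", "unicast"),
    "publish/subscribe": Mode("push", "aperiodic", "1-N"),
    "polling":           Mode("pull", "periodic",  "unicast"),
    "request/response":  Mode("pull", "aperiodic", "unicast"),
}

# e.g. select all push-based modes:
push_modes = [name for name, m in EXAMPLES.items() if m.initiation == "push"]
```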

9 Outline:
- A note on the different modes
Summary of technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery

10 Coherent Data Delivery: The Data Broadcast Push Model
The server broadcasts data from a database over a broadcast channel to a large number of clients:
- push mode, with no direct communication with the server (stateless server, e.g., sensors)
- "client-side" protocols
- data updates at the server; periodic updates for the values on the channel
Why broadcast push?
- an efficient way to disseminate information to large client populations with similar interests
- physical support in wireless networks (satellite, cellular)
- various other applications: sensor networks, data streams

11 Coherent Data Delivery: Our Goal
Ensure that clients receive temporally coherent (e.g., current) and semantically coherent (transaction-wise) data:
1. Provide a model for temporal and semantic coherency
2. Show what type of coherency we get if there are no additional protocols
3. Show what type of coherency is achieved by a number of protocols proposed in the literature (and their extensions)

12 Coherent Data Delivery
Currency properties of the readset (the set of items read and their values) are based on the currency of the items in the readset.
(Currency Interval of an Item) CI(x, R): the currency interval of x in the readset of R = [c_b, c_e), where c_b is the time instance at which the value of x read by R was stored in the database, and c_e is the time instance of the next change of this value in the database. If the value read by R has not been changed subsequently, c_e is infinity.
Based on CI(x, R), two types of currency of the readset of a transaction R: overlapping and oldest-value.

13 Coherent Data Delivery
In general, the oldest-value currency of a transaction R, denoted OV(R), is c_e^-, where c_e is the smallest among the endpoints of CI(x, R) over all (x, u) ∈ RS(R).
If there is an interval of time, say [c_b, c_e), that is included in the currency interval of every item in R's readset, R is overlapping current, with overlapping currency Overlap(R) = c_e^- (or current_time, if c_e is infinity).
If R is overlapping current, Overlap(R) = OV(R).
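As a sketch, both notions can be computed directly from the currency intervals (the helper names are mine; each CI(x, R) is represented as a (c_b, c_e) pair, with c_e possibly infinite):

```python
from math import inf

def oldest_value(intervals):
    """OV(R): (just before) the smallest right endpoint among the CI(x, R)."""
    return min(ce for _, ce in intervals)

def is_overlapping_current(intervals):
    """True iff some time instance lies in every interval [c_b, c_e)."""
    return max(cb for cb, _ in intervals) < min(ce for _, ce in intervals)

# Intervals with a common sub-interval: overlapping current, OV(R) = 8.
cis = [(2, 8), (4, 10), (3, inf), (5, 9)]
print(is_overlapping_current(cis), oldest_value(cis))  # True 8
```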

14 Coherent Data Delivery
If the readset is not overlapping, we want to measure the discrepancy among the database states seen by a transaction: the temporal spread.
(Temporal Spread of a Readset) Let min_c_e be the smallest among the endpoints and max_c_b the largest among the begin-points of the CI(x, R) for x in the readset of a transaction R. Then
temporal_spread(R) = max_c_b - min_c_e, if max_c_b > min_c_e; 0 otherwise.
For an overlapping current transaction, the temporal spread is zero.
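The definition above translates directly into a sketch (same (c_b, c_e) interval representation as before; the function name is mine):

```python
def temporal_spread(intervals):
    """max_c_b - min_c_e when positive, 0 otherwise (intervals = CI pairs)."""
    min_ce = min(ce for _, ce in intervals)
    max_cb = max(cb for cb, _ in intervals)
    return max(max_cb - min_ce, 0)

# Overlapping readset: spread is 0.
print(temporal_spread([(2, 10), (4, 9), (5, 12)]))  # 0
# Oldest value ends at 8, most current begins at 9: spread is 1.
print(temporal_spread([(2, 8), (9, 15), (3, 12)]))  # 1
```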

15 Coherent Data Delivery: Example
R_1 reads x_1, x_2, x_3, x_4, with currency intervals CI(x_1, R_1), ..., CI(x_4, R_1) shown on a time axis from 2 to 20. All four intervals overlap: R_1 is overlapping current with Overlap(R) = 8 and temporal_spread(R) = 0.

16 Coherent Data Delivery: Example
Now the intervals do not all overlap: the oldest value read ends at min_c_e = 8 and the most current begins at max_c_b = 9. R_1 is not overlapping current, but OV(R) = 8 and temporal_spread(R) = 9 - 8 = 1.

17 Coherent Data Delivery: Example
With the most current value beginning at max_c_b = 15 and the oldest value read ending at min_c_e = 8, R_1 is again not overlapping current, with OV(R) = 8 and temporal_spread(R) = 15 - 8 = 9.

18 Coherent Data Delivery
Besides discrepancy, we measure currency (how old the values seen are).
(Transaction-Relative Currency) R is relative overlapping current with respect to time instance t if t ∈ CI(x, R) for every x read by R. R is relative oldest-value current with respect to time instance t if t ≤ OV(R).
(Temporal Lag) Let t_c be the largest t ≤ t_commit_R with respect to which R is relative (overlapping or oldest-value) current; then temporal_lag(R) = t_commit_R - t_c.
The smaller the temporal lag and the temporal spread, the higher the temporal coherency of a read transaction. The best temporal coherency is achieved when R is overlapping relative current with respect to t_commit_R (both the temporal lag and the temporal spread are zero).
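For the overlapping-current case, the lag can be sketched as follows (a simplification of the definition above, with endpoints treated as closed; the function name is mine):

```python
def temporal_lag(intervals, t_commit):
    """Lag for a readset that is overlapping current: t_c is the latest
    instance <= t_commit still inside every CI(x, R), i.e. t_commit capped
    at the right end of the common interval."""
    max_cb = max(cb for cb, _ in intervals)
    min_ce = min(ce for _, ce in intervals)
    assert max_cb < min_ce, "this sketch handles the overlapping case only"
    t_c = min(t_commit, min_ce)  # endpoints treated as closed for simplicity
    return t_commit - t_c

cis = [(2, 8), (4, 10), (3, 12), (5, 9)]  # common interval ends at 8
print(temporal_lag(cis, 8))   # 0  (commit inside the common interval)
print(temporal_lag(cis, 12))  # 4  (as in the example slides: 12 - 8)
```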

19 Coherent Data Delivery: Example
R_1 commits at time 8, at the end of the common interval: overlapping current with Overlap(R) = 8, temporal_spread(R) = 0, and temporal_lag(R) = 0.

20 Coherent Data Delivery: Example
R_1 commits at time 12: overlapping current with Overlap(R) = 8 and temporal_spread(R) = 0, but temporal_lag(R) = 12 - 8 = 4.

21 Coherent Data Delivery: Example
R_1 commits at time 19: overlapping current with Overlap(R) = 8 and temporal_spread(R) = 0, but temporal_lag(R) = 19 - 8 = 11.

22 Coherent Data Delivery
What is the coherency of R (temporal lag and spread) if R just reads items from the broadcast? Let t_lastread_R be the time instance at which R performs its last read. Then
temporal_lag(R) ≤ t_commit_R - begin_cycle(t_begin_R) and
temporal_spread(R) ≤ t_lastread_R - begin_cycle(t_begin_R).
These bounds are tight: there are cases where we get the worst lag and spread.
- If p_u = 0 (immediate updates), we get the best (worst) lag and spread.
- If all items are read from the same cycle, the spread is 0 and the lag is p_u.

23 Coherent Data Delivery: Basic Techniques
Protocols fall into two broad categories:
- Invalidation, which corresponds to broadcasting the endpoints (the c_e's) of the currency interval for each item: periodically broadcast an invalidation report (IR), a list of the items that have been updated since the broadcast of the previous IR.
- Versioning, which corresponds to broadcasting the begin points (the c_b's) of the currency interval for each item: with each item, broadcast a timestamp (version) indicating when it was created.
There is also a hybrid protocol that combines versioning and invalidation.

24 Coherent Data Delivery: Definitions of Semantic Coherency (Consistency)
- C0
- C1: RS(R) ⊆ DS (a subset of a consistent database state)
- C2: R is serializable with the set of server transactions whose values R read (directly or indirectly)
- C3: R is serializable with all server transactions
- C4: R is serializable with all server transactions, and the serializability order of the server transactions that R observes is consistent with the commit order of transactions at the server
Rigorous schedules: the commit order is compatible with the serialization order.

25 Coherent Data Delivery: Reading from a Single Cycle
- If transaction R reads all items from the same cycle, it is C1, but not necessarily C2.
- If the server schedule is rigorous and R reads all items from the same cycle, it is C4.

26 Coherent Data Delivery: Protocols
Read Test Theorem: it suffices to check for a violation of C2, C3, and C4 by a client transaction R when R reads a data item if and only if the server schedule is rigorous.

27 Future Work
- Coherency in broadcast-based dissemination with multiple servers: what semantic and temporal coherency does the client get?
- Performance evaluation of the various types of coherency.
Reference: E. Pitoura, P. K. Chrysanthis and K. Ramamritham. "Characterizing the Temporal and Semantic Coherency of Broadcast-based Data Dissemination". Proc. of the 9th International Conference on Database Theory (ICDT 2003), January 2003, Siena, Italy.

28 Outline:
- A note on the different modes
Summary of technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery

29 Multi-version Broadcast
A similar model, BUT the server (data source) at each cycle sends not just one value per item but multiple versions per item. This:
- enhances concurrency (similar to multi-version schemes in traditional client-server systems)
- gives access to multiple server states: access to data series when there is not enough memory
- lets multiple data servers share the channel (multi-sensor networks)

30 Multi-version Broadcast
We develop multiple-version models to enhance concurrency:
- Keep the overhead low while increasing the number of consistent client transactions: with 10% of items updated per cycle, broadcasting 2 versions adds a 20% size increase but reduces the abort rate from above 60% to below 20%.
- Increase tolerance to disconnections: from 25% to 90%, depending on the frequency and duration of the disconnections.

31 Multi-version Broadcast: Issues
- How should the broadcast be organized? Select the order in which items are broadcast.
- What are appropriate client-cache protocols?

32 Multi-version Broadcast: Basic Organization
- Vertical (per version): broadcast all items with a specific version number.
- Horizontal (per item): broadcast all versions of each item.

33 Multi-version Broadcast: Compression
Example version table (items Data 0..Data 3, versions V0..V3):
Data 0: 2 2 2 2
Data 1: 5 9 9 9
Data 2: 4 4 4 4
Data 3: 3 3 7 7
Encoded broadcast: V0^3#0=2#2=4^1#3=3^0#1=5V1^2#1=9V2^1#3=7
Longer sequences of repetitive data come first; the actual encoding requires auxiliary symbols. There are also extensions for broadcast disk organizations (broadcast disk: the frequency of broadcasting an item depends on its popularity).
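The encoding above can be reproduced with a short sketch. The format is my reading of the slide ("V&lt;j&gt;" starts the entries for version j, "^r" says the value persists for r further versions, "#i=v" assigns value v to item i), not the project's actual encoder, which also needs auxiliary symbols:

```python
def encode(table):
    """table[i][j] = value of item i at version j; emit each value once,
    at the version where it first appears, grouped by persistence run."""
    n_items, n_versions = len(table), len(table[0])
    out = []
    for v in range(n_versions):
        runs = {}  # run length -> list of (item, value) new at this version
        for i in range(n_items):
            if v > 0 and table[i][v] == table[i][v - 1]:
                continue  # value carried over, already encoded
            r = 0
            while v + r + 1 < n_versions and table[i][v + r + 1] == table[i][v]:
                r += 1
            runs.setdefault(r, []).append((i, table[i][v]))
        if runs:
            out.append(f"V{v}")
            for r in sorted(runs, reverse=True):  # longer runs first
                out.append(f"^{r}")
                for i, val in runs[r]:
                    out.append(f"#{i}={val}")
    return "".join(out)

table = [[2, 2, 2, 2], [5, 9, 9, 9], [4, 4, 4, 4], [3, 3, 7, 7]]
print(encode(table))  # V0^3#0=2#2=4^1#3=3^0#1=5V1^2#1=9V2^1#3=7
```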

34 Multi-version Broadcast: Client Access
- Vertical (or snapshot): a client accesses items of a specific version.
- Horizontal (or historical): a client accesses all versions of a specific item.
- Random.
Adaptability: performance depends on client access patterns.

35 Multi-version Broadcast: Client Cache
Extend LRU with three indicators:
- timestamp (classic LRU)
- version number (replace the oldest item)
- F, the compression rate (replace the least frequently updated items)
Replacement is based on a weighted sum of the above; autoprefetch is used to update F.
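An illustrative sketch of the extended-LRU replacement above. The slide does not give the exact scoring formula or weights, so these are assumptions; the sketch only shows the structure: combine the three indicators into one weighted score and evict the entry with the lowest score.

```python
def pick_victim(cache, w_t=1.0, w_v=1.0, w_f=1.0):
    """cache maps item -> (last_access_time, version, update_freq_F).
    Low score = least recently used, oldest version, least frequently
    updated -> evicted first. Weights are illustrative."""
    def score(entry):
        last_access, version, freq = entry
        return w_t * last_access + w_v * version + w_f * freq
    return min(cache, key=lambda item: score(cache[item]))

cache = {"x": (10, 3, 0.2), "y": (2, 1, 0.5), "z": (7, 2, 0.1)}
print(pick_victim(cache))  # y: oldest access and oldest version
```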

36 Multi-version Broadcast: Results
- Better performance when the broadcast organization follows the client access type.
- The client cache improves access for clients with different access types.

37 Multi-version Broadcast: References
E. Pitoura and P. K. Chrysanthis. "Multiversion Data Broadcast". IEEE Transactions on Computers, 51(10):1224-1230, October 2002.
O. Shigiltchoff, P. K. Chrysanthis and E. Pitoura. "Multi-version Data Broadcast Organizations". Proc. of the 6th East European Conference on Advances in Databases and Information Systems (ADBIS), September 2002, Bratislava, Slovakia.
O. Shigiltchoff, P. K. Chrysanthis and E. Pitoura. "Adaptive Multi-version Data Broadcast Organizations". In preparation for journal publication.

38 Outline:
- A note on the different modes
Summary of technical results:
1. Coherent Push-based Data Delivery
2. Adaptive Multi-version Broadcast Data Delivery
3. Efficient Publish-Subscribe Data Delivery

39 Publish/Subscribe: Framework
Personalized information delivery with pub/sub systems:
- Subscriptions: descriptions of interests/services
- Events: service invocations
- Match: events are matched to all subscriptions they impact

40 Publish/Subscribe: Basic Model
- One or more Event Sources (ES) / Producers: each produces events in response to changes in a real-world variable that it monitors.
- An Event Brokering System (EBS), consisting of one or more brokers: events are published to the EBS, which matches them against the set of subscriptions submitted by the users of the system.
- One or more Event Displayers (ED) / Consumers: if a user's subscription matches an event, the event is forwarded to the Event Displayer for that user, which alerts the user.

41 Publish/Subscribe: The Problem
Key problems when designing and implementing large-scale publish/subscribe systems:
- efficient propagation of subscriptions, and
- distributed event processing among the brokers of the system.

42 Publish/Subscribe: Motivation and Contributions
Key idea: summarize subscriptions, and process summaries as opposed to subscriptions.

43 Publish/Subscribe: Detailed Contributions
1. A mechanism for compacting subscription information (the per-broker subscription summary). It supports event/subscription schemata that are rich with respect to attribute types and powerful with respect to the operators on these attributes.
2. A protocol for the efficient distribution of subscription summaries to brokers and for merging them at each hop (multi-broker summaries).
3. A protocol for the efficient distributed processing of events: events traverse the network, 'content-routed' based on the multi-broker summaries, as they are matched against subscriptions from all brokers.
4. An event-matching algorithm for the subscription summaries.

44 Publish/Subscribe: Conclusions
Great savings for both subscription propagation and distributed event processing. Specifically:
- network bandwidth: several times better than Siena, and up to orders of magnitude better than the baseline approach
- hop counts for subscription propagation: several times better than Siena
- hop counts for event processing: better than Siena for up to ~90% probability of subscription subsumption
- storage requirements: several times better than Siena

45 Publish/Subscribe: References
P. Triantafillou and A. Economides. "Subscription Summaries for Scalability and Efficiency in Publish/Subscribe Systems". 1st Intl. IEEE Workshop on Distributed Event-based Systems (DEBS'02), July 2002.
P. Triantafillou and A. Economides. "Efficient Distributed Event Processing using Subscription Summaries in Large-Scale Publish/Subscribe Systems". Submitted for publication.

46 NEXT YEAR
COORDINATION TASK
INTEGRATION OF DATA DELIVERY & QUERYING

47 Future and Emerging Technologies (FET) - The roots of innovation
Proactive initiative on: Global Computing (GC)
DBGlobe IST-2001-32645, 1st Review, Paphos, January 31, 2003
Results on Data Delivery (WP3)

48 Extra Slides

49 Extra Slides (coherency)

50 Coherent Data Delivery: The Model
The server repetitively pushes data from a database over a broadcast channel to a large number of clients:
- sequential client access
- asymmetry: a large number of clients; asymmetric transmission capabilities
- client-site protocols; the server is stateless
- data updates at the server

51 Coherency in Broadcast-Based Dissemination: Updates
Data are updated at the server. What is the value broadcast at time instance t? We assume periodic updates with an update frequency, or period, of p_u: the value placed on the channel at time t is the value of the item at the beginning of the update period, denoted begin_cycle(t). For periodic broadcast, p_u is usually equal to the broadcast period.
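Under this periodic-update assumption, begin_cycle reduces to rounding down to a period boundary; a minimal sketch (assuming integer time instances and periods aligned at 0):

```python
def begin_cycle(t, p_u):
    """Start of the update period of length p_u that contains time t."""
    return (t // p_u) * p_u

# With p_u = 5, the value broadcast at t = 17 is the database value at 15.
print(begin_cycle(17, 5))  # 15
print(begin_cycle(20, 5))  # 20
```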

52 Coherent Data Delivery: Preliminary Definitions
- Database state: a set of (data item, value) pairs.
- Readset of a transaction R, RS(R): the set of (data item, value) pairs that R read.
- BS_c: the content of the broadcast in the cycle that starts at time instance c (again, a set of (data item, value) pairs).
R may read items from different broadcast cycles; thus the items in RS(R) may correspond to different database states.

53 Coherent Data Delivery
(Currency Interval of an Item) CI(x, R): the currency interval of x in the readset of R = [c_b, c_e), where c_b is the commit time of the transaction that wrote the value of x read by R, and c_e is the commit time of the transaction that updated x immediately after (or infinity).

54 Semantic Coherency: Model
Variations: instead of a single client transaction, consider a set S of client transactions. Example:
- C3-site: all transactions of a client are serializable with all server transactions
- C3-all

55 Relating Semantic and Temporal Coherency
If R is overlapping current, then it is C1 consistent.
Assumptions: server schedules are serializable; only committed values are broadcast.

56 Relating Semantic and Temporal Coherency
Recall (Currency Interval of an Item): CI(x, R) = [c_b, c_e), where c_b is the commit time of the transaction that wrote the value of x read by R, and c_e is the commit time of the transaction that updated x immediately after (or infinity).
- Overlapping currency is similar to vintage transactions; with serializable server schedules: c_e-vintage.
- Semantic currency is similar to t-bound: if OV(R) = t_o, then R is t_o-bound.

57 Coherency in Broadcast-Based Dissemination: Previous Work
- Datacycle [Bowen et al., CACM 1992]: hardware for detecting changes; extended for multiple servers [Banerjee & Li, JCI 1994]
- Certification reports [Barbara, ICDCS 1997]
- F-Matrix for update (C2) consistency [Shanmugasundaram et al., SIGMOD 1999]
- SGT graph (for serializability) [Pitoura, ER Workshop 1998], [Pitoura, DEXA Workshop 1998], [Pitoura & Chrysanthis, ICDCS 1999]
- Multiple versions [Pitoura & Chrysanthis, VLDB 1999], [Pitoura & Chrysanthis, IEEE TOC 2003]
- Cache consistency (e.g., [Barbara & Imielinski, SIGMOD 1995], [Acharya et al., VLDB 1996])

58 Multi-version Broadcast: Basic Organization - Vertical
(Figure: versions on one axis, data items on the other; each of Broadcast 1, 2, 3 carries the items grouped per version.)

59 Multi-version Broadcast: Basic Organization - Horizontal
(Figure: each of Broadcast 1, 2, 3 carries, per item, all of its versions.)

60 Multi-version Broadcast: Broadcast Format
Sequential: data element 1 1 1 3 5 7 7 4 4 4 is broadcast as 1 1 1 3 5 7 7 4 4 4.
Compressed: data element 1 1 1 3 5 7 7 4 4 4 is broadcast as 1x2 3 5 7x1 4x2.
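The compressed format above is simple run-length encoding: a value repeated r further times becomes "&lt;value&gt;x&lt;r&gt;", and singletons stay bare. A minimal sketch (function name is mine):

```python
def rle(values):
    """Run-length encode a value sequence in the slide's <value>x<repeats>
    notation, e.g. [1, 1, 1] -> '1x2' (the value plus 2 repeats)."""
    out, i = [], 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        run = j - i  # number of extra repetitions after the first copy
        out.append(f"{values[i]}x{run}" if run else str(values[i]))
        i = j + 1
    return " ".join(out)

print(rle([1, 1, 1, 3, 5, 7, 7, 4, 4, 4]))  # 1x2 3 5 7x1 4x2
```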

61 Multi-version Broadcast: Compression
Version table (V0...V3 are versions, Data 0...Data 3 are data items) and its run-length form:
Data 0: 2 2 2 2 -> 2x3
Data 1: 5 9 9 9 -> 5 9x2
Data 2: 4 4 4 4 -> 4x3
Data 3: 3 3 7 7 -> 3x1 7x1
Horizontal compression: 2x3 5 9x2 4x3 3x1 7x1
Vertical compression: 2x3 5 4x3 3x1 9x2 7x1

62 Multi-version Broadcast: Compression (steps)
Building the vertical encoding of the table (Data 0: 2 2 2 2; Data 1: 5 9 9 9; Data 2: 4 4 4 4; Data 3: 3 3 7 7), longer sequences of repetitive data first (the actual encoding requires auxiliary symbols):
V0^3#0=2#2=4
V0^3#0=2#2=4^1#3=3
V0^3#0=2#2=4^1#3=3^0#1=5
V0^3#0=2#2=4^1#3=3^0#1=5V1^2#1=9

74 Publish/Subscribe: Event & Subscription Types
Event schema: an untyped set of typed attributes.
- Each attribute consists of a type, a name, and a value.
- The type of an attribute belongs to a predefined set of primitive data types commonly found in most programming languages.
- The attribute's name is a simple string, while the value can be in any range defined by the corresponding type.
- The whole structure of type-name-value for all attributes constitutes the event itself.
Example event (the types of symbol and low are inferred from context):
string exchange = NYSE
string symbol = OTE
date when = Nov 30 12:05:25 EET 2001
float price = 8.40
int volume = 132700
float high = 8.80
float low = 8.22

75 Publish/Subscribe: Event & Subscription Types (cont'd)
Subscription schema:
- can contain all attribute data types
- supports all interesting operators (=, inequalities, ranges, prefix, suffix, containment, etc.)
- an attribute can have more than one constraint in the same subscription
Example subscription:
string exchange N*SE
string symbol = OTE
float price < 8.70
float price > 8.30
Event matching constraint: there is a match if and only if all of the subscription's attribute constraints are satisfied.
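The matching constraint above can be sketched in a few lines: a subscription is a conjunction of per-attribute constraints, and an event matches iff every constraint holds. The operator set and representation are illustrative (the "N*SE" pattern is handled here with glob matching, one possible reading of the slide):

```python
import operator
from fnmatch import fnmatch

# Illustrative operator table; the real system supports more (ranges,
# prefix, suffix, containment, ...).
OPS = {
    "=": operator.eq,
    "<": operator.lt,
    ">": operator.gt,
    "glob": lambda a, b: fnmatch(a, b),  # e.g. "NYSE" matches "N*SE"
}

def matches(event, subscription):
    """event: {attr: value}; subscription: list of (attr, op, value).
    Match iff ALL constraints are satisfied (conjunction)."""
    return all(
        attr in event and OPS[op](event[attr], val)
        for attr, op, val in subscription
    )

event = {"exchange": "NYSE", "symbol": "OTE", "price": 8.40}
sub = [("exchange", "glob", "N*SE"), ("symbol", "=", "OTE"),
       ("price", "<", 8.70), ("price", ">", 8.30)]
print(matches(event, sub))  # True
```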

76 Publish/Subscribe: Results
Versus Siena and baseline approaches:
- performance analysis of subscription propagation: net bandwidth savings and hop counts
- performance analysis of distributed event processing: hop counts
- storage requirements to maintain subscription summaries vs. subscriptions
- complexity analysis of the event-matching algorithm

77 Publish/Subscribe: Performance Analysis
Bandwidth requirements for subscription propagation:
1. from 3x to 6x better than Siena
2. orders of magnitude better than the baseline (subscription broadcast)

78 Publish/Subscribe: Performance Analysis (cont'd)
Mean number of hops needed for subscription propagation: better than Siena by 2x to 15x.

79 Publish/Subscribe: Performance Analysis (cont'd)
Mean number of hops needed for event propagation: better than Siena, except when the probability of subsumption is ~90%.

80 Publish/Subscribe: Performance Analysis (cont'd)
Storage requirements for subscriptions: ~5x better than Siena.


85 DBGlobe IST-2001-32645

