GD-Aggregate: A WAN Virtual Topology Building Tool for Hard Real-Time and Embedded Applications Qixin Wang*, Xue Liu**, Jennifer Hou*, and Lui Sha* *Dept. of Computer Science, UIUC **School of Computer Science, McGill Univ.

Demand
Big Trend: converge computers with the physical world
–Cyber-Physical Systems
–Real-Time and Embedded (RTE) GENI
–Virtual Organization
This calls for an RTE-WAN with the following features:
–Scalability: similar-traffic aggregation; global/local traffic segregation; network hierarchy and modularity (which also assists composability, dependability, debugging, etc.)
–Configurability: runtime behavior regulation
–Flexibility: ease of reconfiguration
–Hard real-time E2E delay guarantee

Solution? The Train/Railway Analogy
–Similar traffic aggregation: carriages → train
–Global/local traffic segregation: express vs. local train
–Hierarchical topology: express vs. local train
–Configuration: routing, capacity planning
–Flexibility: change the train planning, not the railway

The Equivalent of Train in Network?
An aggregate (of flows) is like a train:
–Sender End Node: merges member flows into the aggregate
–Receiver End Node: disintegrates the aggregate into the original flows
–Intermediate Nodes: only forward the aggregate's packets
(Figure: end nodes A, B, C; legend: aggregate, end node, intermediate node, member flow)

The Equivalent of Train in Network? (cont'd)
–Packets of member flows → carriages
–Sender End Node: assembles the carriages into a train
–Receiver End Node: disassembles the train into carriages
–Intermediate Nodes: only forward the train, but cannot add/remove carriages
–Forwarding (routing) is done per train, not per carriage
–Local train: few hops (physical links)
–Express train: many hops
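The sender/receiver end-node roles described above can be sketched in code. This is a minimal illustration only, not from the paper; the `Packet` type and the `aggregate`/`disaggregate` function names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: str   # which member flow this "carriage" belongs to
    payload: bytes

def aggregate(packets):
    """Sender end node: assemble member-flow packets into one aggregate (train).
    Intermediate nodes would forward this whole unit without touching carriages."""
    return list(packets)

def disaggregate(train):
    """Receiver end node: disassemble the aggregate back into per-flow packets."""
    flows = defaultdict(list)
    for pkt in train:
        flows[pkt.flow_id].append(pkt)
    return dict(flows)
```

The point of the sketch is that only the two end nodes look inside the aggregate; everything in between routes on the aggregate as a unit.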

Virtual Link/Topology
–Aggregates with the same sender and receiver end nodes collectively embody a virtual link.
–Many virtual links together build up a virtual topology.
(Figure: aggregates F1, F2, F3 between end nodes A, B, C; legend: virtual link, aggregate — thickness implies the aggregate's data throughput, end node, intermediate node)

State-of-the-Art: GR-Aggregate
How to build a virtual link with a hard real-time E2E delay guarantee?
[SunShin05]: Guaranteed Rate Aggregate (GR-Aggregate)

State-of-the-Art: GR-Aggregate
Guaranteed Rate Server (GR-Server) [Goyal97a]: a queueing server S is a GR-Server as long as there exists a constant r_f (called the guaranteed rate) for each of its flows f, such that

  L(p_f^j) ≤ GRSFunc(p_f^j; r_f) + constant

where:
–p_f^j: the jth packet of flow f
–L(p): the time when packet p leaves S
–GRSFunc: a specific function (a guaranteed-rate clock computed from packet arrival times, packet lengths, and r_f)
–r_f: the guaranteed rate
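The guaranteed-rate clock in the GR-server literature is typically the recursion GRC(p_f^j) = max(A(p_f^j), GRC(p_f^{j-1})) + l_f^j / r_f. The sketch below implements that recursion and the GR-Server check; the function names and the list-based interface are illustrative assumptions, not the paper's API.

```python
def grs_func(arrivals, lengths, r_f):
    """Guaranteed-rate clock for one flow:
    GRC(p^j) = max(A(p^j), GRC(p^{j-1})) + l^j / r_f.
    arrivals: arrival times A(p_f^j); lengths: packet lengths l_f^j;
    r_f: guaranteed rate. Returns the clock value for every packet."""
    grc = []
    prev = 0.0
    for a, l in zip(arrivals, lengths):
        prev = max(a, prev) + l / r_f
        grc.append(prev)
    return grc

def is_gr_server(departures, arrivals, lengths, r_f, beta):
    """Check L(p_f^j) <= GRSFunc(p_f^j) + beta for every packet of the flow."""
    return all(d <= g + beta
               for d, g in zip(departures, grs_func(arrivals, lengths, r_f)))
```

For three back-to-back unit-length packets arriving at time 0 with r_f = 0.5, the clock values are 2, 4, 6: each packet is "promised" service as if the flow had a dedicated link of rate r_f.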

State-of-the-Art: GR-Aggregate
[Goyal97a] proves that WFQ and WF²Q are GR-Servers with r_f = φ_f·C, where φ_f is the weight of flow f (note Σ_f φ_f ≤ 1) and C is the server output capacity.
[SunShin05]: GR-Aggregate based virtual link:
–Sender End: aggregates the virtual link's member flows into an aggregate F using a GR-Server;
–Intermediate Nodes: each forwards F with a GR-Server at a guaranteed rate R_F, where R_F ≥ ρ_F, and ρ_F is F's data throughput;
–Receiver End: disintegrates F into the original flows;
–E2E delay d ≤ α/R_F + β, where α and β are certain constants.

New Problem
GR-Aggregate fits Internet traffic well. When applied to Cyber-Physical Systems traffic:
–Real-time sensing/actuating aggregate: small data throughput, small E2E delay requirement
–Real-time video aggregate: large data throughput, small E2E delay requirement
–Non-real-time traffic

New Problem
For a real-time sensing/actuating aggregate:
–Small data throughput, small E2E delay requirement
–GR-Aggregate E2E delay d ≤ α/R_F + β
–Assigning a small guaranteed rate R_F (i.e., R_F = ρ_F) → large E2E delay;
–Assigning a large guaranteed rate R_F (i.e., R_F > ρ_F) → other aggregates' guaranteed rates fall below their data throughputs (when link capacity is precious). GR-Aggregate does not cover this situation. What will happen?

Solution Heuristic
Observation: the purpose of using a GR-Server to provide an E2E delay guarantee is to provide a constant per-hop transmission delay bound. As long as we can provide such a bound, we are fine.

Solution Heuristic
So far we know WFQ and WF²Q are GR-Servers, and if flow f is assigned weight φ_f, it is guaranteed a rate of r_f = φ_f·C. Conventionally, we assign the weight φ_f proportional to data throughput, i.e., φ_f·C ∝ ρ_f. What if we assign arbitrary weights?
Discovery: as long as every flow is token-bucket-constrained and Σ_f ρ_f ≤ C, every flow still has a bounded transmission delay, and there is an algorithm TDB({φ_i}, {l_i^max}, C) to calculate this transmission delay bound Δ_f(l). To the extreme, we can mimic prioritized preemption by assigning proper weights.
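The discovery above requires every flow to be token-bucket-constrained: over any interval [t1, t2], a (σ, ρ) flow may inject at most σ + ρ·(t2 − t1) bits. A minimal conformance check, assuming a hypothetical list-of-(time, length) trace format (not an interface from the paper):

```python
def token_bucket_conformant(arrivals, sigma, rho):
    """Check the (sigma, rho) token-bucket constraint: for every interval
    [t1, t2], cumulative arrivals A(t1, t2) <= sigma + rho * (t2 - t1).
    arrivals: list of (time, length) pairs, sorted by time."""
    for i, (t1, _) in enumerate(arrivals):
        total = 0.0
        for t2, length in arrivals[i:]:
            total += length
            if total > sigma + rho * (t2 - t1):
                return False
    return True
```

The quadratic scan over interval start points is fine for a sketch; a production checker would simulate the bucket level in one pass.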

Solution Heuristic: What does arbitrary weight imply?
Three flows: F1 with data rate 0.1, F2 with data rate 0.4, F3 with data rate 0.5. Server capacity C = 1; all packet lengths l = 1.
–Data-rate-proportional weights: φ1 = 0.1, φ2 = 0.4, φ3 = 0.5 → the transmission delay bound is inverse-proportionally coupled with data throughput.
–Prioritized weights: φ1 = 0.999, φ2 = …, φ3 = … → according to the TDB algorithm, the transmission delay bound is decoupled from data throughput and reflects priority: higher priority maps to a shorter bound.
(Figures: per-flow packet transmission timelines, t in seconds.)
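The coupling under rate-proportional weights can be made concrete: a GR-Server with r_f = φ_f·C bounds a packet's transmission delay roughly by l/r_f (ignoring the constant terms of the full TDB bound). The snippet below reproduces the slide's example numbers under that simplification; it is an illustration, not the TDB algorithm itself.

```python
C = 1.0           # server output capacity
l = 1.0           # all packet lengths

# Data rates of the three flows from the example
rates = {"F1": 0.1, "F2": 0.4, "F3": 0.5}

# Data-rate-proportional weights: phi_f = rho_f / C
phi = {f: rho / C for f, rho in rates.items()}

# Simplified per-packet transmission-delay bound ~ l / (phi_f * C):
# inversely proportional to the flow's throughput share.
bounds = {f: l / (phi[f] * C) for f in rates}
```

F1, the low-throughput sensing-style flow, gets the worst bound (10 s vs. 2 s for F3); that is exactly the coupling the prioritized-weight assignment is meant to break.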

Solution: GD-Aggregate
Proposal: Guaranteed Delay Server (GD-Server): a queueing server S is a GD-Server as long as there exists a non-negative, monotonically non-decreasing function β_f(l) (called the guaranteed delay function) for each of its flows f, such that

  L(p_f^j) ≤ GDSFunc(p_f^j; β_f) + constant

where:
–p_f^j: the jth packet of flow f
–L(p): the time when packet p leaves S
–GDSFunc: a specific function (the counterpart of GRSFunc, with the guaranteed delay function β_f(l) playing the role that the guaranteed rate r_f plays for a GR-Server)
–β_f(l): the guaranteed delay function

Solution: GD-Aggregate
Discovery: if each ingress flow/aggregate is token-bucket-constrained, WFQ and WF²Q servers are GD-Servers.
Design: we modified the design of Sun and Shin's GR-Aggregate into the GD-Aggregate, (mainly) by changing GR-Servers to GD-Servers.

Solution: GD-Aggregate
GD-Aggregate features:
–E2E delay d ≤ Σ_k Δ^k(l_max) + …, where Δ^k(l) is the guaranteed delay function at the kth hop and l_max is the maximal packet length.
–We found a way to assign weights to mimic priority, so that:
An aggregate with small data throughput can have a small Δ^k(l), and hence a small E2E delay guarantee.
No link capacity is wasted.
Δ^k(l) is a linear function of the packet length l.
Each priority's capacity and delay guarantee can be planned with simple optimization tools.
(8 Theorems, 4 Corollaries, 14 Lemmas)
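Because each Δ^k(l) is linear in l, the end-to-end bound is a simple sum that an admission controller can evaluate directly. A minimal sketch, with made-up per-hop coefficients a_k, b_k:

```python
# E2E delay bound sketch: with linear per-hop guaranteed delay
# functions Delta_k(l) = a_k*l + b_k, the bound over all hops is
# sum_k Delta_k(l_max).  Coefficients below are made-up examples.

def e2e_bound(hops, l_max):
    """hops: list of (a_k, b_k); returns sum of Delta_k(l_max) in seconds."""
    return sum(a * l_max + b for a, b in hops)

hops = [(1e-6, 0.002), (2e-6, 0.001), (1e-6, 0.003)]  # 3 hypothetical hops
l_max = 1500  # maximal packet length, bytes
d = e2e_bound(hops, l_max)
print(d <= 0.050)  # admission check against a 50 ms requirement
```

Linearity is what keeps admission control a closed-form check rather than a per-packet simulation.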

Evaluation: Application Background
Underground Mining: A Typical Cyber-Physical Systems Application
[Figure: schematic of a 6000 m × 3000 m underground coal mine, with three 300 m panels (Panel 1–3), compass directions (North/East/South/West), the coal seam, and the Active Mining Area (Face).]
Underground mines often cover huge areas, and are dangerous. We need to replace all human workers with remotely controlled robots/vehicles.
Vision: a human remotely controls robots/vehicles from an above-ground control room, via a wired WAN backbone and wireless LANs.
Figure legend: a wireless LAN base station (a.k.a. access point, in the case of IEEE 802.11); a wireline physical link, part of the underground-mining RTE-WAN; a virtual link A–B (may consist of several GR/GD-aggregates) with its two end nodes.

Evaluation: Traffic Feature
Remote underground mining creates all the typical CPS traffic (aggregates).
Virtual link A–B may consist of the following aggregates:
–F1: tele-robotic sensing/actuating aggregate. Small data throughput; short hard real-time E2E delay requirement (≤ 50 ms).
–F2: tele-robotic video aggregate. Large data throughput; short hard real-time E2E delay requirement (≤ 50 ms).
–F3: non-real-time traffic aggregate. Tolerates seconds of E2E delay.

Evaluation: Result
[Table: the aggregates' data throughputs (kbps).]
When link capacity C is precious, i.e., the total data throughput of F1, F2, and F3 adds up to C:
–GR-Aggregate has to allocate guaranteed rates proportional to data throughput.
–GD-Aggregate can still give F1 the highest priority.
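The contrast can be sketched numerically; the throughput figures below are made-up examples. Under GR-Aggregate, each aggregate's per-hop delay term scales roughly as l_max divided by its guaranteed rate, so a low-throughput aggregate like F1 gets the worst bound; under GD-Aggregate, F1's weights can mimic the highest priority instead:

```python
# Sketch of the GR vs GD contrast.  Throughput figures are
# hypothetical.  Under GR-Aggregate, guaranteed rate is proportional
# to data throughput, so a low-rate aggregate like F1 gets a large
# per-hop delay term of roughly l_max / r.  GD-Aggregate can assign
# F1 priority-mimicking weights instead, decoupling delay from rate.

l_max = 1500 * 8  # maximal packet length, bits
throughput = {'F1': 50e3, 'F2': 2e6, 'F3': 1e6}  # bps (made up)
gr_delay = {f: l_max / r for f, r in throughput.items()}

print(gr_delay['F1'] > gr_delay['F2'])  # True: F1's GR bound is the worst
```

This is exactly the decoupling the deck claims: F1's small throughput no longer dooms it to a large delay bound.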

Related Work
Overlay networks: soft real-time, statistical guarantees.
DiffServ: FIFO; poor performance for bursty traffic.
Real-time virtual machines: still an open problem, especially on mutual exclusion and closed-form schedulability formulae.
[Georgiadis96] also found the decoupling technique; fluid model, no aggregation.
[Goyal97b]: per-packet guaranteed rate, known a priori or referring to the minimum rate; does not discuss aggregation.

Conclusion
GD-Aggregate:
–Supports flow aggregation and E2E delay guarantees
–Is a tool to build hard real-time virtual links/topologies
–Decouples the E2E delay guarantee from data throughput
–Supports priority
–Offers simple linear closed-form formulae for analysis and admission control
–Can be planned with simple optimization tools

References
[Fisher04] B. Fisher et al., "Seeing, hearing, and touching: Putting it all together," SIGGRAPH'04 Course, 2004.
[Georgiadis96] L. Georgiadis et al., "Efficient network QoS provisioning based on per node traffic shaping," IEEE/ACM Trans. on Networking, vol. 4, no. 4, August 1996.
[Goyal97a] P. Goyal et al., "Determining end-to-end delay bounds in heterogeneous networks," Multimedia Systems, no. 5, 1997.
[Goyal97b] P. Goyal and H. M. Vin, "Generalized guaranteed rate scheduling algorithms: A framework," IEEE/ACM Trans. on Networking, vol. 5, no. 4, August 1997.
[Hartman02] H. L. Hartman and J. M. Mutmansky, Introductory Mining Engineering (2nd Ed.). Wiley, August 2002.
[SunShin05] W. Sun and K. G. Shin, "End-to-end delay bounds for traffic aggregates under guaranteed-rate scheduling algorithms," IEEE/ACM Trans. on Networking, vol. 13, no. 5, October 2005.

Thank You!