Priority Scheduling and Buffer Management for ATM Traffic Shaping
Authors: Todd Lizambri, Fernando Duran and Shukri Wakid
Presented by: Hongming Wu



Outline
1. Introduction
2. Buffer Partitioning, Discard Policies and Scheduling Algorithms
3. Network and Traffic Scenario
4. Dynamically Weighted Priority Scheduling Algorithm
5. Implementation of the Simulation
6. Results
7. Conclusions

Introduction
Congestion control: congestion occurs when too many cells try to access the same buffer pool in a switch, so a congestion control mechanism is needed.

Introduction
Logical approaches to congestion control:
- Open loop: prevents congestion from happening
  * Admission control
  * Policing
  * Traffic shaping (today's topic)
- Closed loop: relies on feedback to regulate the source rate

Introduction
Traffic shaper: the basic function of a traffic shaper is to regulate the traffic flow according to the Quality of Service (QoS) negotiated during session setup, achieving better network efficiency.

Introduction
Shaping methods:
- Buffering (+ leaky bucket)
- Spacing
- Peak cell rate reduction
- Scheduling
- Burst length limiting
- Source rate limitation
- Priority queuing
- Framing
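The slides do not detail these methods; as a minimal illustration of the first one, buffering combined with a leaky bucket can be sketched as follows (class and parameter names are our own assumptions, not from the paper):

```python
from collections import deque

class LeakyBucketShaper:
    """Illustrative buffering + leaky-bucket shaper: arriving cells are
    buffered and drained at a fixed rate, smoothing bursts into a
    steady, rate-conforming output stream."""

    def __init__(self, capacity, drain_rate):
        self.buffer = deque()
        self.capacity = capacity      # maximum number of buffered cells
        self.drain_rate = drain_rate  # cells released per output tick

    def arrive(self, cell):
        """Buffer a cell if space remains; otherwise drop it."""
        if len(self.buffer) < self.capacity:
            self.buffer.append(cell)
            return True
        return False

    def tick(self):
        """Release up to drain_rate cells, in arrival order."""
        n = min(self.drain_rate, len(self.buffer))
        return [self.buffer.popleft() for _ in range(n)]
```

A burst larger than `capacity` loses its excess cells, while admitted cells leave at the configured drain rate no matter how bursty the arrivals were.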

Introduction
QoS parameters:
- Negotiable parameters
  * Peak-to-peak Cell Delay Variation (CDV)
  * Maximum Cell Transfer Delay (Max CTD)
  * Mean Cell Transfer Delay (Mean CTD)
  * Cell Loss Ratio (CLR)
- Non-negotiable parameters
  * Cell Error Ratio (CER)
  * Severely Errored Cell Block Ratio (SECBR)
  * Cell Misinsertion Ratio (CMR)

Introduction
We concentrate on two of the most important internal design factors in traffic shapers: buffer management and scheduling algorithms. We examine their impact on the two most important QoS attributes, cell loss and delay, under stress conditions.

Buffer Partitioning, Discard Policies and Scheduling Algorithms
- Buffer partitioning delineates the amount of buffer space available to a given queue and defines how space is shared among the different queues.
- The discard policy determines whether an incoming cell is dropped or placed into the buffer space.
- The scheduling algorithm determines which queue is given the opportunity to transmit a cell stored in the buffer.

Buffer partitioning methods:
- Complete Partitioning: each queue gets a fixed amount of the buffer space.
- Complete Sharing: all the buffer space is fully shared among all the queues.
- Sharing with Minimum Allocation: a compromise that reserves a minimum buffer space for each queue while the rest of the buffer is completely shared among the queues.
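A sketch of the discard decision under Sharing with Minimum Allocation (function shape, names, and data layout are assumptions for illustration, not taken from the paper):

```python
def admit(queue_lengths, j, reserved, total):
    """Decide whether queue j may buffer one more cell under
    Sharing with Minimum Allocation: admit if the queue is below its
    reserved minimum, or if unreserved shared space is still free."""
    if queue_lengths[j] < reserved[j]:
        return True  # still within the guaranteed minimum
    shared_total = total - sum(reserved)
    # Shared space consumed by queues that exceed their reservation.
    shared_used = sum(max(0, q - r) for q, r in zip(queue_lengths, reserved))
    return shared_used < shared_total
```

Note how the two extreme schemes fall out as special cases: with all reservations zero this degenerates to Complete Sharing, and with reservations summing to the whole buffer it degenerates to Complete Partitioning.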

Scheduling Algorithms
- First-In-First-Out (FIFO)
- Round Robin (RR)
- Fixed Priorities: a queue with higher priority is always served before a queue with lower priority
- Dynamically Weighted Priority scheduling

Network and Traffic Scenario
We consider two classes of service:
- Constant Bit Rate (CBR)
- Variable Bit Rate (VBR)

Dynamically Weighted Priority Scheduling Algorithm
We consider a time-dependent instantaneous priority index P_j(t) for the j-th class of service at a given time t, where:
- u_j is the associated fixed priority number
- w_j(t) is the amount of time the oldest cell in the j-th class has waited in queue j
- β is a weighting factor
- the floor function is used to obtain integer units of time
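The formula for P_j(t) appears only as an image in the original slides. A reconstruction consistent with the definitions above, and with the β = 0 and large-β limiting behaviors the authors describe, would be:

```latex
P_j(t) = u_j - \left\lfloor \beta \, w_j(t) \right\rfloor
```

With this form, a small u_j (high fixed priority) or a long wait w_j(t) both lower the index, and the queue with the lowest index is served first.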

Dynamically Weighted Priority Scheduling Algorithm
The priority index of each queue is recalculated for every output time slot. The queue with the lowest priority index is awarded the time slot and is permitted to transmit a cell during that time. If β = 0, the scheme reduces to fixed-priority scheduling; for very large β, P_j(t) is dominated by the wait time of the oldest cell and the scheduler behaves like FIFO.
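As a sketch, assuming the index takes the form P_j = u_j - floor(beta * w_j) (reconstructed from the slides' description; the exact formula is an image in the original), per-slot queue selection could look like:

```python
import math

def select_queue(u, w, beta):
    """Pick the queue to serve in this output time slot.

    u[j]  : fixed priority number of queue j (lower = higher priority)
    w[j]  : wait time of the oldest cell in queue j
    beta  : weighting factor (assumed index: u[j] - floor(beta * w[j]))

    Returns the index of the queue with the lowest priority index.
    """
    index = [uj - math.floor(beta * wj) for uj, wj in zip(u, w)]
    return min(range(len(index)), key=index.__getitem__)
```

With beta = 0 the wait term vanishes and the fixed priorities alone decide; with a very large beta the floor term dominates, so the queue holding the oldest cell wins, approximating FIFO.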

Implementation of the Simulation
The traffic shaper configuration consisted of two queues:
- Queue 1: CBR traffic class
- Queue 2: VBR traffic class
Parameters:
* Total buffer space: 1024 entries
* The CBR sources transmitted data at a rate of 155 Mb/s
* The VBR sources transmitted bursty traffic at a rate of 155 Mb/s for 2 milliseconds, followed by an OFF period of 2 milliseconds during which no data was transmitted
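The VBR sources' 2 ms ON / 2 ms OFF pattern is a simple duty-cycle model; it can be expressed as follows (function name and signature are ours, the 2 ms parameters come from the slide):

```python
def vbr_transmitting(t_ms, on_ms=2.0, off_ms=2.0):
    """Return True while an ON/OFF VBR source is in its ON phase.
    The source repeats: transmit at line rate for on_ms, then stay
    silent for off_ms (2 ms each in the simulation scenario)."""
    return (t_ms % (on_ms + off_ms)) < on_ms
```

Averaged over a full period the source is active 50% of the time, which is the duty cycle the FIFO results below rely on.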

Results - FIFO Scheduling
Because the VBR source transmits only 50% of the time, the number of cells dropped from the VBR queue (Queue 2) is approximately half the number dropped from the CBR queue (Queue 1). Since the traffic for both queues is transmitted at the same rate (155 Mb/s), the average cell delay is the same for each queue.

Results - Round Robin Scheduling (1)
Cell loss of the CBR data is prevalent in this scenario due to the constant arrival of cells in Queue 1.

Results - Round Robin Scheduling (2)
The average cell delay of Queue 1 is greater than that of Queue 2 due to the inherent behavior of RR scheduling, which serves both queues equally regardless of load.

Results - Dynamically Weighted Priority Scheduling (1)
The effect of the β value

Results - Dynamically Weighted Priority Scheduling (2)

Conclusions
The dynamically weighted priority scheduling algorithm provides a mechanism for simultaneously improving the balance between cell loss and delay. The optimal value of β depends on the input traffic.