Buffer Management for Shared-Memory ATM Switches
Written By: Mutlu Arpaci, John A. Copeland, Georgia Institute of Technology
Presented By: Yan Huang

Outline
Describe several buffer management policies and their strengths and weaknesses
Evaluate the performance of the various policies using computer simulations
Compare the most important schemes

Some Basic Definitions
The prime purpose of an ATM switch is to route incoming cells arriving on a particular input link to the appropriate output link (switching)
Three basic switching techniques are used
–space-division: crossbar switch
–shared-medium: based on a common high-speed bus
–shared-memory
Switches must also provide queuing
–input queuing, output queuing, shared memory

Shared-Memory Switch
Consists of a single dual-ported memory shared by all input and output lines
Combines both switching and queuing
Does not suffer from the throughput degradation caused by head-of-line (HOL) blocking
The main focus is buffer allocation
–determines how the total buffer space (memory) will be used by the individual output ports of the switch (Cont’d)

Shared-Memory Switch (Cont’d)
The selection and implementation of the buffer allocation policy is referred to as buffer management
Model of the SM switch
–N output ports
–M buffer spaces
Performance
–cell loss: occurs when a cell arrives at a switch node and finds the buffer full

Buffer Allocation Policies
Stochastic assumptions
–Poisson arrivals
–Exponential service times
Static Thresholds
–Complete Partition (CP): the entire buffer space is permanently partitioned among the N servers. Does not provide any sharing
–Complete Sharing (CS): an arriving packet is accepted if any space is available in the switch memory, independent of the server to which the packet is directed
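The two static policies differ only in their admission test. A minimal sketch in Python (the function names and parameters are illustrative, not from the paper):

```python
def admit_cp(queue_len, per_port_limit):
    """Complete Partition: each of the N ports owns a fixed slice of the
    memory (e.g. M/N buffers); a cell is accepted only if its own port's
    slice still has room."""
    return queue_len < per_port_limit

def admit_cs(total_occupied, total_buffer):
    """Complete Sharing: a cell is accepted whenever any buffer in the
    common memory is free, regardless of its destination port."""
    return total_occupied < total_buffer

# Example with M = 300 buffers shared by N = 2 ports (CP gives 150 each):
# under CP a cell for a port whose queue already holds 150 cells is lost
# even if the other port's 150 buffers are completely empty.
```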

Comparison of CP and CS
CP policy
–the buffer allocated to a port is wasted if that port is inactive, since it cannot be used by other, possibly active, lines
CS policy
–one of the ports may monopolize most of the storage space if it is highly utilized
In the CS policy, a packet is lost when the common memory is full. In CP, a packet is lost when its corresponding queue has already reached its maximum allocation.
The assumptions on the traffic arrival process enable us to model the switch as a Markov process (Fig 3)

Simulation
The assumption of exponentially distributed interarrival and service times is not realistic for ATM systems
The traffic in ATM networks is bursty in nature
–to model it, use an ON/OFF source
Simulation parameters
–mean duration of ON state = 240
–mean duration of OFF state = 720
–cell interarrival time = 5
–switch model has two output ports (N = 2)
–the size of the shared memory is 300 cells (M = 300)
–performance metric is the cell loss ratio (CLR) at each port
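A bursty ON/OFF source with these parameters can be sketched as follows. The slide gives only the mean durations, so drawing the ON/OFF periods from exponential distributions is an assumption:

```python
import random

def on_off_arrivals(mean_on=240, mean_off=720, cell_gap=5,
                    horizon=100_000, seed=42):
    """Return cell arrival times for one ON/OFF source.

    During an ON period a cell is emitted every `cell_gap` time units;
    during OFF the source is silent. ON/OFF durations are drawn from
    exponential distributions with the given means (an assumption --
    the slide states only the means).
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < horizon:
        on_end = t + rng.expovariate(1.0 / mean_on)
        while t < on_end and t < horizon:
            arrivals.append(t)
            t += cell_gap
        t += rng.expovariate(1.0 / mean_off)
    return arrivals
```

With these parameters the source is ON about a quarter of the time (240 out of every 960 time units), so its long-run offered load is roughly 0.05 cells per time unit.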

Performance of CS and CP
Balanced traffic: the loads at the two ports are equal
–for medium traffic loads, CS achieves lower CLR

Performance of CS and CP (Cont’d)
Imbalanced traffic
–the load at one port is varied, while it remains constant at the other port
–CS: both ports have the same CLR
–CP: port buffers are isolated; the CLR at port 1 increases with its traffic load

Sharing with Maximum Queue Length
SMXQ: a limit is imposed on the number of buffers that can be allocated at any time to any server. There is one global threshold for all the queues.
The advantages of SMXQ:
–SMXQ achieves lower CLR than CP and manages to isolate the “good” port from the “bad” port
–the better CLR performance is obtained through buffer sharing; the isolation is obtained by restricting the queue length
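SMXQ's admission rule simply combines the CS memory check with the single global per-queue cap (a sketch; the names are illustrative):

```python
def admit_smxq(queue_len, total_occupied, max_queue, total_buffer):
    """Sharing with Maximum Queue length: accept a cell only if the shared
    memory has free space AND the destination port's queue is still below
    the one global threshold shared by all queues."""
    return total_occupied < total_buffer and queue_len < max_queue
```

The first condition is the sharing part; the second is what keeps one overloaded port from monopolizing the memory, as CS would allow.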

SMA and SMQMA
Two variations of SMXQ
–SMA (sharing with a minimum allocation): a minimum number of buffers is always reserved for each port
–SMQMA (sharing with a maximum queue and minimum allocation): each port always has access to a minimum allocated space, but no port can have an arbitrarily long queue
SMQMA has the following advantage over SMXQ
–a minimum space is allocated for each port, which simplifies the issue of serving high-priority traffic in a buffer-sharing environment
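One plausible reading of SMQMA as code, tracking all queues so that the reserved and shared portions of the memory can be separated. The accounting details (how the shared pool is sized and counted) are an assumption, not spelled out on the slide:

```python
def admit_smqma(queues, port, total_buffer, min_alloc, max_queue):
    """Sharing with a Maximum Queue and Minimum Allocation (sketch).

    Each port always has `min_alloc` buffers reserved; beyond that it
    competes for the shared remainder, but may never grow past
    `max_queue` cells (the SMXQ-style cap).
    """
    if queues[port] < min_alloc:
        return True   # the reserved minimum space is always available
    if queues[port] >= max_queue:
        return False  # per-queue cap, as in SMXQ
    # Shared pool = memory left after every port's reservation.
    shared_used = sum(max(0, q - min_alloc) for q in queues)
    shared_cap = total_buffer - min_alloc * len(queues)
    return shared_used < shared_cap
```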

Push-Out
Push-out (PO), also called drop-on-demand (DoD)
–a previously accepted packet can be dropped from the longest queue in the switch to make room for an arriving cell
Advantages
–fair
–efficient
–naturally adaptive
–achieves a lower CLR than the optimal SMXQ setting
Drawback
–difficult to implement
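Drop-on-demand can be sketched directly: when the memory is full, the arrival displaces a cell from the longest queue (names are illustrative):

```python
def po_admit(queues, port, total_buffer):
    """Push-Out: always accept the arrival; if memory is full, drop a cell
    from the longest queue to make room. Returns the index of the
    pushed-out queue, or None if no drop was needed."""
    if sum(queues) < total_buffer:
        queues[port] += 1
        return None
    victim = max(range(len(queues)), key=lambda i: queues[i])
    queues[victim] -= 1  # drop-on-demand
    queues[port] += 1
    return victim
```

The drawback the slide names is visible here: finding the longest of N queues on every overflow is what makes PO hard to implement at line rate.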

Push-Out with Threshold
In ATM networks, different ports carrying different traffic types might have different priorities.
CSVP, a modification of PO, achieves priorities among ports. A similar idea is called POT (push-out with threshold).
CSVP has the following attributes:
–N users share the total available buffer space M, which is virtually partitioned into N segments corresponding to the N ports
When the buffer is full, there are two possibilities:
–if the arriving cell’s type, i, occupies less space than its allocation Ki, then at least one other type, say j, must occupy more than its own allocation Kj. The admission policy admits the newly arriving type-i cell by pushing out a type-j cell.
–if the arriving cell’s queue exceeds its allocation at the time of arrival, the cell is rejected
When the buffer is not full
–CSVP operates as CS
–under heavy traffic loads, the system tends toward CP management
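The two full-buffer cases translate into a short admission routine. This is a sketch; the victim-selection rule (push out from the port furthest over its allocation) is an assumption, since the slide only requires that some over-allocated type j exists:

```python
def csvp_admit(queues, port, total_buffer, alloc):
    """CSVP: complete sharing with a virtual partition alloc[i] per port.
    Mutates `queues` and returns True if the arriving cell is admitted."""
    if sum(queues) < total_buffer:
        queues[port] += 1  # buffer not full: behave exactly like CS
        return True
    if queues[port] >= alloc[port]:
        return False  # arriving type already meets or exceeds its K_i
    # Buffer full but this type is under-allocated, so some other type j
    # must be over its allocation K_j: push out one of its cells.
    victim = max(range(len(queues)), key=lambda i: queues[i] - alloc[i])
    queues[victim] -= 1
    queues[port] += 1
    return True
```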

Dynamic Policies
The analyses of the buffer allocation problem above assume static environments
Dynamic Threshold (DT) can be used to adapt to changes in traffic conditions
–the queue length thresholds of the ports are proportional to the current amount of unused buffering in the switch: T(t) = α(M − Q(t))
–cell arrivals for an output port are blocked whenever the output port’s queue length equals or exceeds the current threshold value
–the major advantage of DT is its robustness to traffic load changes, a feature not present in the static threshold (ST) policies
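The DT rule can be expressed directly from the formula T(t) = α(M − Q(t)); α = 1 below is just an example value:

```python
def dt_admit(queue_len, total_occupied, total_buffer, alpha=1.0):
    """Dynamic Threshold: the per-queue threshold shrinks as the shared
    memory fills, so heavy load automatically tightens every queue's cap
    without any fixed tuning."""
    threshold = alpha * (total_buffer - total_occupied)  # T(t) = a(M - Q(t))
    # Block when the queue length equals or exceeds the current threshold.
    return total_occupied < total_buffer and queue_len < threshold
```

For example, with M = 300 and 200 cells already buffered, the threshold is 100: a port holding 99 cells may still accept, a port holding 100 may not.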

Comparison

The End