More on protocol implementation: version walks, timers and their problems


How to choose a search structure
Insert pattern
–Random enough, or pseudo-sequential?
Lifetime
–Ratio of lookups to inserts+deletes
Lookup pattern
Lookup type
–Exact?
–Prefix?

Example: RSVP
–The key for the session tree is (ingress_ip, egress_ip, lsp_id)
–After I set up the LSPs, they stay up for a long time
  Few deletes, mostly inserts
  All inserts at the beginning, few later
  Most probably the inserts are uniformly distributed over the key space
–During operation I will have to look up all LSPs
  The lookup pattern is uniformly distributed
  No need to think about optimizations like splay trees
–Which tree would perform better under these conditions?
  I could use an unbalanced tree, but it is risky
  Hashing could be used, but it will be tricky to get a good hash function
  –ingress_ip and egress_ip span only a few hundred addresses
  –lsp_id is correlated with ingress_ip
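As a toy illustration of the composite key (made-up addresses and IDs; a sorted list stands in for a balanced tree here), the key is just an ordered tuple, so both exact lookup and in-order traversal come for free:

```python
import bisect

# Hypothetical session table keyed by (ingress_ip, egress_ip, lsp_id).
# A sorted list of (key, session) pairs stands in for a balanced tree.
table = []

def insert(key, session):
    # O(log n) to find the slot, O(n) to shift; a real tree avoids the shift.
    bisect.insort(table, (key, session))

def lookup(key):
    # Exact-match lookup: find the first entry >= (key,) and compare.
    i = bisect.bisect_left(table, (key,))
    if i < len(table) and table[i][0] == key:
        return table[i][1]
    return None
```

With tuple keys, ordering is lexicographic (ingress first, then egress, then lsp_id), which is exactly why correlated fields make naive hashing tricky but cost a tree nothing.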

Version Walks
Walking the whole tree may be overkill if only a few elements changed
Assign a version to each node
–Each time a node changes, increase its version by one
–Copy this version to all the node's predecessors, all the way to the root of the tree
–Each node's version is then the max version among all its children
Now, when walking, specify a min version
–If I reach a node with a smaller version, there is no need to visit its children
Can save a lot of walk time
–Depending on the shape of the tree, the locality of changes, etc.
Versioning has a small cost
–For each version change I have to visit the whole path from the changed node up to the root of the tree
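A minimal sketch of the idea (illustrative names, not any particular router's code): propagate the version up on change, prune the walk on read:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.version = 0      # max version anywhere in this subtree
        self.children = []
        self.parent = None

def touch(node, new_version):
    """Mark a node as changed and copy the version up to the root."""
    while node is not None and node.version < new_version:
        node.version = new_version
        node = node.parent

def walk_changed(node, min_version, visit):
    """Visit only nodes whose subtree changed since min_version."""
    if node.version < min_version:
        return                # nothing newer below: prune the whole subtree
    visit(node)
    for child in node.children:
        walk_changed(child, min_version, visit)
```

Note that the walk still visits the ancestors of a changed node (they carry the max version), which is exactly the small per-change cost the slide mentions.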

Timers
Each timer has an associated firing time
I need to
–Create/delete/modify timers
–Find the next timer to fire
Scalability means all of these should be fast
–Quagga keeps a list of timers sorted by expiration time, so create/modify is very slow: O(n)
–Need a structure with fast insert and delete, and O(1) for finding the next timer to fire
  Plain binary trees: O(log n) for insert and O(log n) for find-next
  Priority heaps: O(log n) for insert and O(log n) for find-next
  Calendar queues: around O(1) for both insert and find-next
–Event-driven simulators have the same problem
  Lots of work has been done there
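A heap-based timer queue is a few lines (a sketch, not Quagga's actual code): insert is O(log n), peeking at the next firing time is O(1), and cancellation is handled lazily so it is cheap too:

```python
import heapq
import itertools

class TimerQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker for equal fire times
        self._cancelled = set()

    def schedule(self, fire_time, callback):
        tid = next(self._seq)
        heapq.heappush(self._heap, (fire_time, tid, callback))
        return tid

    def cancel(self, tid):
        self._cancelled.add(tid)        # lazy delete: skipped when popped

    def next_fire_time(self):
        while self._heap and self._heap[0][1] in self._cancelled:
            heapq.heappop(self._heap)
        return self._heap[0][0] if self._heap else None

    def expire(self, now):
        """Run every timer whose firing time is <= now."""
        while self._heap and self._heap[0][0] <= now:
            _fire_time, tid, callback = heapq.heappop(self._heap)
            if tid not in self._cancelled:
                callback()
            self._cancelled.discard(tid)
```

Lazy deletion trades a little heap garbage for O(1) cancel, a common pattern when modify/delete is frequent.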

Calendar Queue
A set of lists, or buckets
–Each bucket corresponds to some period of time and holds all the timers that will fire in that period
–Timers in the same bucket are kept in a sorted list
I walk the buckets in round-robin fashion
–O(1) to find the next timer to fire
Challenges
–Choosing the bucket size so that I keep only a few timers per bucket
  Must adjust the bucket size dynamically
  Moving timers around during a resize has a cost
–Skewed timer distributions cause a very small bucket duration and many empty buckets
–If the number of active timers varies a lot (from a few tens to tens of thousands and back), I will spend too much time readjusting the bucket sizes
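A toy calendar queue (fixed bucket count and width; a real implementation resizes dynamically, which is exactly the hard part the slide lists). Buckets are reused round-robin like the days of a desk calendar, so a timer more than one "year" away sits in its bucket until its year comes around:

```python
import bisect

class CalendarQueue:
    def __init__(self, nbuckets=8, width=1.0):
        self.nbuckets = nbuckets
        self.width = width                       # seconds per bucket
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, fire_time, event):
        b = int(fire_time / self.width) % self.nbuckets
        bisect.insort(self.buckets[b], (fire_time, event))

    def pop_next(self, now):
        """Scan buckets from `now` onward; O(1) amortized when timers
        are spread evenly across buckets."""
        b = int(now / self.width) % self.nbuckets
        bucket_end = (int(now / self.width) + 1) * self.width
        for _ in range(self.nbuckets):
            bucket = self.buckets[b]
            # Skip entries that belong to a later rotation of this bucket.
            if bucket and bucket[0][0] < bucket_end:
                return bucket.pop(0)
            b = (b + 1) % self.nbuckets
            bucket_end += self.width
        # Nothing due within one full rotation: fall back to a direct scan.
        pending = [bkt[0] for bkt in self.buckets if bkt]
        if not pending:
            return None
        t, _ev = min(pending)
        return self.buckets[int(t / self.width) % self.nbuckets].pop(0)
```

The fallback scan is where skewed distributions hurt: if everything is far in the future, the round-robin walk degenerates.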

Timers are Tricky
Timers can get synchronized
–Usually need jitter
–A good timer library should provide it
Timer slip
–A timer fires later than it should have
–Happens if I do not schedule things properly
Timer clustering
–Many timers fire at the same time
  Caused by too coarse a granularity or by bad scheduling
–Have to process them in a controlled way
  Keep a queue of expired timers and process a few at a time
Clock drift
–Hardware clocks usually drift
  Running slower or faster, by up to 2 minutes per year
–A good timer library or gettime function should be aware of that
  Or use NTP to sync with an accurate source

How do timers synchronize?
Triggered updates
–When something changes in the network, all routers reset their timers
  E.g. a link fails and LSP paths change: all the PATH and RESV refreshes can get synchronized
  They will remain in sync
Update collision
–Update processing time is relatively large
–Routers first finish processing all the updates they receive, and only then set their timers
–If two routers receive each other's updates, they get in sync and remain in sync
–Slowly more routers get in sync

Solutions
Do not reset timers when there is a triggered update
–But if routers start synchronized, they cannot get desynchronized
Add jitter to the timers
–Add a random component to the expiration time
–Analysis has shown that this jitter must be around half the period
–This is where the refresh formula for RSVP comes from
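In code, the "jitter of about half the period" amounts to drawing each refresh interval uniformly from [0.5R, 1.5R], which is the spirit of the RSVP refresh formula (a sketch; consult RFC 2205 for the exact rules):

```python
import random

def next_refresh(base_period):
    """Jittered refresh interval: uniform in [0.5R, 1.5R].
    The mean stays at R, but neighbors drift apart instead of
    locking step after a triggered update."""
    return random.uniform(0.5 * base_period, 1.5 * base_period)
```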

Timer design
Create/delete patterns are important again
–Consider RSVP. I need to:
  Refresh PATH
  Refresh RESV
  Check for timeouts in PATH
  Check for timeouts in RESV
–One timer is enough for all of these: each time it fires, compute what the next thing to do is, and rearm the timer
–With one timer per LSP, the number of active timers does not change too much
  This is good for calendar queues (and heaps)
–RSVP timers
  Typical values: refresh every 10 seconds, time out after losing 3 refreshes, i.e. 45 seconds
  So the max value for the timer is 45 seconds
  Can use this to optimize the calendar queue
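The "one timer per LSP" trick reduces to taking the minimum over the four pending deadlines each time the timer fires (a sketch; the field names are made up for illustration):

```python
def next_action(lsp):
    """Pick the earliest of the four per-LSP deadlines.
    Returns (what_to_do, when) so the single timer can be rearmed."""
    deadlines = {
        "refresh_path": lsp["last_path_sent"] + lsp["refresh_interval"],
        "refresh_resv": lsp["last_resv_sent"] + lsp["refresh_interval"],
        "timeout_path": lsp["last_path_rcvd"] + lsp["timeout"],
        "timeout_resv": lsp["last_resv_rcvd"] + lsp["timeout"],
    }
    action = min(deadlines, key=deadlines.get)
    return action, deadlines[action]
```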

Too many timers?
Even with a single timer per LSP I may have 100,000 timers
–Too much memory
–A very large tree, or too many buckets in a calendar queue
It may be possible to have fewer timers
–If I can live with a little bit of inaccuracy
–For N timers that are going to fire close together, use a single timer
–When this timer fires, process all N timers
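The aggregation step can be sketched as grouping deadlines that fall within some slack of each other; one real timer per group, fired at the group's earliest deadline so nothing fires late, only a little early:

```python
def aggregate(fire_times, slack):
    """Group fire times so each group spans at most `slack` seconds.
    Each group is served by a single real timer set to the group's
    earliest deadline."""
    groups = []
    for t in sorted(fire_times):
        if groups and t - groups[-1][0] <= slack:
            groups[-1].append(t)        # close enough: share the timer
        else:
            groups.append([t])          # too far: start a new real timer
    return groups
```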

Timers and state refresh
RSVP is a soft-state protocol; state must be refreshed
–More frequent refreshes: faster detection of changes
–More frequent refreshes: more bandwidth used for refreshes
100,000 LSPs, 120 bytes per update
–One update per LSP each second = 96 Mbit/s
–On a 1 Gbit/s link, to keep updates under 1% of link capacity, I can send at most one update every 10 seconds
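The arithmetic behind the slide's numbers:

```python
lsps = 100_000
msg_bits = 120 * 8                  # 120-byte refresh message
link_bps = 1e9                      # 1 Gbit/s link
budget = 0.01 * link_bps            # 1% budget = 10 Mbit/s for refreshes

rate_1s = lsps * msg_bits           # refresh every LSP once a second
min_interval = rate_1s / budget     # seconds between full refresh rounds
# rate_1s is 96 Mbit/s, so the budget forces roughly one refresh
# per LSP every 9.6 s, i.e. about every 10 seconds.
```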

Fixed and dynamic timers
Fixed timers
–More state means more bandwidth for refreshes
Scalable, or dynamic, timers
–Tie state refresh to link capacity
–Do not use more than x% of link capacity for refreshes
–More state means less frequent refreshes
–The receiver estimates the sender's refresh rate to determine when to time out state
–Send messages for new state first; send updates with lower priority