1 Dept. of Computer Science
Location-to-URL Mapping Protocol (LUMP)
draft-schulzrinne-ecrit-lump-00
Henning Schulzrinne
Dept. of Computer Science, Columbia University
August 2005, IETF 63 - ECRIT

2 Overview
Support global-scale resolution of service identifiers (e.g., service URNs) + locations to other URLs
Attempts to be reliable and scalable
  borrows concepts, but not protocol limitations, from DNS
Architecture: "forest of trees with a cloud above"
  avoids the root as the only deployment alternative
Uses standard web services building blocks

3 Overall architecture
[Diagram: carrier X and customers query resolvers (R); resolvers flood top-level data among peers; the peer cloud knows all trees and caches results; example trees: nj.us and ny.us at the top level, with bergen.nj.us and leonia.bergen.nj.us below nj.us]

4 Resolution
Client queries local resolver
  doesn't know which tree to query!
If not in cache, find tree root
Resolver queries tree root recursively or iteratively
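The resolution steps above can be sketched in code. This is a minimal illustrative model, not the draft's wire protocol: the class names, the suffix-matching of civic locations, and the way the resolver learns tree roots are all assumptions made for the example.

```python
# Hypothetical sketch of LUMP resolution over an in-memory mapping tree.
# All names and data structures are invented for illustration.

class AuthServer:
    """One node in a mapping tree: holds its own mappings and children."""
    def __init__(self, region, mappings=None, children=None):
        self.region = region              # e.g. "nj.us"
        self.mappings = mappings or {}    # (service, location) -> URL
        self.children = children or []

    def lookup(self, service, location):
        """Recursive resolution: answer locally, or descend into the
        child whose coverage region matches the queried location."""
        key = (service, location)
        if key in self.mappings:
            return self.mappings[key]
        for child in self.children:
            if location.endswith(child.region):
                return child.lookup(service, location)
        return None

class Resolver:
    """Local resolver with a cache, as on the slide: check the cache,
    otherwise find the right tree root and query it recursively."""
    def __init__(self, roots):
        self.roots = roots    # in the real design, learned via peer flooding
        self.cache = {}

    def resolve(self, service, location):
        key = (service, location)
        if key in self.cache:
            return self.cache[key]
        for root in self.roots:
            if location.endswith(root.region):
                answer = root.lookup(service, location)
                if answer:
                    self.cache[key] = answer
                    return answer
        return None

# Example tree: nj.us -> bergen.nj.us -> leonia.bergen.nj.us
leaf = AuthServer("leonia.bergen.nj.us",
                  {("urn:service:sos", "leonia.bergen.nj.us"):
                   "sip:psap@leonia.example.com"})
state = AuthServer("nj.us", children=[AuthServer("bergen.nj.us",
                                                 children=[leaf])])
```

A second query for the same location would then be answered from the resolver's cache without touching the tree.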

5 Cluster architecture
Each resolver or authoritative server is actually a cluster of one or more nodes
Automatically replicates changes (per record) to all other nodes
  tested with mSLP
[Diagram: the nj.us resolver R is a cluster of several nodes]
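The per-record replication inside a cluster can be modeled roughly as below. The draft reports testing with mSLP; this stand-in simply pushes each changed record to every other node in the cluster and makes no claim about the actual mSLP mechanism.

```python
# Toy model of per-record replication within a resolver cluster.
# The real design used mSLP; this sketch only shows the invariant:
# an update applied at any node reaches all nodes of the cluster.

class Node:
    def __init__(self, name):
        self.name = name
        self.records = {}
        self.cluster = None

    def update(self, key, value):
        """Apply a change locally, then replicate it record by record."""
        self.records[key] = value
        for peer in self.cluster.nodes:
            if peer is not self:
                peer.records[key] = value

class Cluster:
    """A resolver or authoritative server is really one or more nodes."""
    def __init__(self, nodes):
        self.nodes = nodes
        for n in nodes:
            n.cluster = self
```

Because replication is per record, nodes converge incrementally rather than by exchanging whole databases.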

6 Split jurisdictions
"I'm responsible for (parts of) Anytown"
A query for Anytown asks both child nodes
[Diagram: two servers each claim part of Anytown; one covers Elm Street, Oak Street, and Main Street, the other Broad Street and Main Square]
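The split-jurisdiction case can be sketched as follows. The street names come from the slide's diagram; the server data and URLs are invented, and "first answer wins" is an assumption made for the example, not a rule from the draft.

```python
# Sketch of a split jurisdiction: two child servers each cover part of
# Anytown, so a query must be sent to both and the answers combined.
# Coverage data and PSAP URLs below are illustrative only.

west = {"Elm Street": "sip:west-psap@example.com",
        "Oak Street": "sip:west-psap@example.com",
        "Main Street": "sip:west-psap@example.com"}
east = {"Broad Street": "sip:east-psap@example.com",
        "Main Square": "sip:east-psap@example.com"}

def query_anytown(street):
    """Ask every child node that claims (part of) Anytown."""
    answers = [srv[street] for srv in (west, east) if street in srv]
    return answers[0] if answers else None
```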

7 Building trees
Trees are built from the top down
Parents add children
  query for coverage region
Next level down may differ for each service type (sos.fire vs. sos.police vs. roadside-assistance)
  geo vs. civic
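Top-down tree building with a coverage-region check might look like the sketch below. Civic regions are reduced to domain-style suffixes for brevity; the real protocol also supports geo (polygon) coverage, which this example omits. The nesting check is an assumed sanity rule, not language from the draft.

```python
# Sketch of top-down tree building: a parent queries a prospective
# child's coverage region and accepts it only if that region nests
# inside the parent's own. Civic regions are suffix strings here;
# geo (polygon) coverage is left out of this toy model.

def covers(parent_region, child_region):
    """A child's civic region must fall within the parent's."""
    return child_region.endswith(parent_region)

class TreeNode:
    def __init__(self, region, service="sos"):
        self.region = region
        self.service = service   # trees may differ per service type
        self.children = []

    def add_child(self, child):
        # "query for coverage region" before accepting the child
        if not covers(self.region, child.region):
            raise ValueError("child coverage outside parent region")
        self.children.append(child)
```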

8 Top-level distribution
Top levels of trees distribute their coverage region to peers
  either a civic pattern (C=US, A1=NJ) or a polygon
Peers distribute data to other peers → top-level scope data gets flooded to all resolvers
New resolvers query a peer for the current data set
Periodic "do you have any new data?" queries avoid sync loss
Volume of data is modest since top-level coverage changes slowly
  estimate: one map/month (a few thousand bytes/month)
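The flooding of top-level scopes can be modeled roughly as below. The sequence-number duplicate suppression is an assumption added to make the toy flood terminate; the draft does not specify this mechanism here.

```python
# Rough model of top-level scope flooding: each peer forwards a
# coverage announcement to its neighbors, and a sequence number keeps
# the same announcement from circulating forever. The duplicate
# suppression scheme is an assumption for this sketch.

class Peer:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.scopes = {}          # tree root -> (seq, coverage)

    def announce(self, root, seq, coverage):
        """Accept and re-flood only announcements newer than ours."""
        held = self.scopes.get(root)
        if held and held[0] >= seq:
            return                # already have this (or newer): stop
        self.scopes[root] = (seq, coverage)
        for nb in self.neighbors:
            nb.announce(root, seq, coverage)
```

Since top-level coverage changes slowly, announcements like this would be rare, consistent with the slide's estimate of roughly one map per month.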

9 Caching
Caching at all levels
Best-effort update: if expired and the authoritative server can't be contacted within T, use the local value → no good alternative
Cache populated by verification queries
  MSAG validation and/or access testing
Query returns hints: e.g., ask for a specific geo location, get back the enclosing polygon with its service area
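The best-effort expiry rule can be sketched as a small cache wrapper. The class, its interface, and the use of `OSError` to stand for "authoritative server unreachable" are all illustrative assumptions.

```python
# Sketch of the best-effort caching rule: serve from cache while
# fresh; on expiry try the authoritative server, but if it cannot be
# reached, keep using the stale local value rather than fail.

import time

class BestEffortCache:
    def __init__(self, ttl):
        self.ttl = ttl            # stands in for the slide's timeout T
        self.store = {}           # key -> (value, stored_at)

    def get(self, key, fetch):
        """fetch() contacts the authoritative server; it may raise."""
        entry = self.store.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                 # still fresh
        try:
            value = fetch()
            self.store[key] = (value, time.time())
            return value
        except OSError:
            if entry:
                return entry[0]             # stale, but no good alternative
            raise
```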

10 Operations
Query record (recursive, redirection)
  either from LObj or LRef
Query range (polygon, civic)
  returns a map or a list of civic ranges
Provide update vector (from peer)
  identified by time & hash; also used to "seed" data
Obtain updates via vector
  returns a list of objects
Push new data object
  for cluster synchronization
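The update-vector exchange can be illustrated as follows. The wire format, the choice of SHA-1, and the `pull` helper are all assumptions; the slide only says that updates are identified by time and hash.

```python
# Illustrative sketch of the update-vector exchange: a peer advertises
# (time, hash) per object, and the other side requests only objects
# whose vector entry differs from its own. Formats are invented here.

import hashlib

def vector(store):
    """Update vector: object id -> (mtime, content hash)."""
    return {oid: (mtime, hashlib.sha1(data.encode()).hexdigest())
            for oid, (mtime, data) in store.items()}

def missing(local, remote_vector):
    """Object ids the remote holds that we lack or hold stale."""
    lv = vector(local)
    return [oid for oid, ent in remote_vector.items()
            if oid not in lv or lv[oid] != ent]

def pull(local, remote):
    """Obtain updates via vector, then copy only the listed objects."""
    for oid in missing(local, vector(remote)):
        local[oid] = remote[oid]
```

The same vector can "seed" a new resolver: against an empty local store, every remote object is listed as missing and gets pulled.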

11 Discovery
LUMP resolver discovery via DHCP (if offered by ISP)
  likely close by and with a full cache
SIP configuration (MSP)
  most likely to be operated by the MSP?
"Bonjour" on the subnet

12 Design assumptions
Can't predict deployment model
  may change over the lifetime of the protocol from "peer" to "root" model
Want diverse implementations in each cluster and tree → need standardized synchronization mechanisms

13 Open issues
Clean up architecture description and terminology
Need to define WSDL
Define coverage maps for geo and civic
Estimate bandwidth usage for flooding
Security: how to trust tree scope announcements?
  sign announcements (XMLDSIG)
  who can sign for what: some authority (e.g., a regulator; NENA in the US?)
  a mediator (trust aggregator) may do the job

