Multihoming in IPV6 Habib Naderi Department of Computer Science University of Auckland
Definition of Multihoming
A site is multihomed when it connects to the Internet through more than one provider.
(diagram: a site connected to the Internet via Provider1 and Provider2)
Multihoming Benefits
- Redundancy: backup links protect against failures.
- Traffic engineering: distributing traffic across different links to achieve better performance.
- Policy selection: assigning different traffic to different links based on the site's policies (cost, commercial reasons, ...).
Around 60% of stub networks are multihomed, and this number is growing.
How Multihoming Is Achieved in the Current Internet
The site acquires its own address block, which can be Provider Independent (PI) or Provider Aggregatable (PA), and announces it through all of its providers.
(diagram: the site's /24 prefix announced to the Internet through both Provider1 and Provider2; PI vs. PA addressing)
Is the Current Solution Deployable in IPv6?
Scalability concerns: each multihomed site inserts new entries into the global BGP routing tables. As the number of multihomed sites grows, the routing table size becomes a serious problem, demanding more memory and processing power in routers. Current studies show the routing table growing by about 25% per annum, and multihoming prefixes account for 20% of its entire size.
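The scaling concern above is compounding growth. As a back-of-the-envelope sketch (the base table size below is an assumed figure for illustration, not from the slides):

```python
# Project routing-table size under the ~25% per-annum growth cited above.
# base_entries is an illustrative assumption, not a measured value.
def table_size(base_entries, years, growth=0.25):
    """Compound annual growth: size after `years` years."""
    return base_entries * (1 + growth) ** years

# At 25% p.a., a table nearly doubles in three years:
print(table_size(300_000, 3))  # -> 585937.5
```

The point of the sketch: at 25% per annum the table roughly doubles every three years, which is why per-site prefixes in the default-free zone do not scale.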
Solutions for IPv6
Router based: the routing system provides the MH functionality
- MH with BGP: as in IPv4
- MH using cooperation between providers
- MH support at the site exit router
Middle box: a box between the multihomed hosts and the Internet provides the MH functionality
- NAT
- Multihoming Aliasing Protocol
- Multihoming Translation Protocol
Solutions for IPv6
Host centric: multihomed hosts with multiple prefixes provide the MH functionality
- HIP
- SHIM6
- SCTP
- multihomed TCP
Shim6
Shim6 is a host-centric solution, chosen by the IETF, for providing MH in the IPv6 Internet (RFC 5533). It inserts a shim sub-layer between the transport layer and the IP routing machinery, which switches between different IP addresses transparently.
(layer stack, top to bottom:)
- Transport (TCP, UDP, ...)
- IP endpoint sub-layer (Frag/Reass, Dst opts)
- Shim6
- IP routing sub-layer
Shim6
To be transparent to the transport layer, Shim6 uses the concept of ID/locator separation. The transport layer uses one of the host's IPv6 addresses, called the ULID, to establish the connection; this ID does not change during the connection's lifetime. When a failure happens, Shim6 picks one of the host's other addresses, from the so-called locator set, and switches to it. The switch is completely transparent to the transport layer.
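The ID/locator split can be sketched as a tiny address-rewriting context. This is an illustrative toy, not the RFC 5533 implementation; all class and method names are invented:

```python
# Toy sketch of Shim6's ID/locator separation: the transport layer only
# ever sees the stable ULID, while the shim rewrites it to the currently
# working locator on the wire. Names here are illustrative assumptions.
class Shim6Context:
    def __init__(self, ulid, locators):
        self.ulid = ulid                 # stable ID used by TCP/UDP
        self.locators = list(locators)   # the host's other addresses
        self.current = ulid              # initially the ULID is the locator

    def outbound(self, dst):
        # Transport hands us the ULID; substitute the active locator.
        return self.current if dst == self.ulid else dst

    def inbound(self, src):
        # Rewrite any known locator back to the ULID before delivery,
        # so the transport layer never notices the switch.
        return self.ulid if src == self.current or src in self.locators else src

    def switch(self, new_locator):
        # Invoked after failure detection; transport state is untouched.
        self.current = new_locator
```

For example, after `switch()` the transport layer still addresses the ULID, but packets leave with the new locator as source/destination.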
Shim6
An important part of Shim6 is its failure detection and recovery mechanism. A protocol called the REAchability Protocol (REAP) has been designed for this purpose (RFC 5534).
Failure Detection in REAP
REAP uses FBD (Forced Bidirectional Detection) to verify reachability: when there is traffic in one direction, there should also be traffic in the other direction. REAP sends Keepalive messages when there is no data to send, so outgoing traffic with no return traffic is a sign of failure. REAP employs two timers (a Send timer and a Keepalive timer) to manage this process.
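The Send-timer side of FBD can be sketched as a small state machine. The 10-second Send timer matches the value used in the experiments later in the deck; the event-loop shape and all names are invented for illustration:

```python
# Toy model of the sending side of REAP's Forced Bidirectional Detection:
# the Send timer is armed while we have outgoing traffic with no return
# traffic; any incoming packet (payload or Keepalive) disarms it.
# Expiry means the path is suspect and exploration should start.
SEND_TIMEOUT = 10.0   # seconds; value taken from the deck's experiments

class ReapEndpoint:
    def __init__(self):
        self.send_deadline = None    # armed only while awaiting return traffic

    def on_send_payload(self, now):
        # Arm the Send timer on outgoing data if it isn't running yet.
        if self.send_deadline is None:
            self.send_deadline = now + SEND_TIMEOUT

    def on_receive(self, now):
        # Any incoming packet proves bidirectional reachability.
        self.send_deadline = None

    def tick(self, now):
        # Returns True when the Send timer expires -> suspected failure.
        if self.send_deadline is not None and now >= self.send_deadline:
            self.send_deadline = None
            return True              # trigger locator-pair exploration
        return False
```

The peer's Keepalive timer (not modelled here) is what guarantees return traffic exists when the path is actually up: a silent receiver answers with Keepalives before the sender's timer can fire.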
Recovery Mechanism in REAP
When a failure is detected, REAP starts an exploration process to find a working pair of locators. A set of locator pairs is built from the host's and the peer's locator sets; the set is pruned and ordered using Default Address Selection (DAS) rules (RFC 3484). REAP then sends Probe messages for each member of the resulting set to test its reachability. The process finishes when an operational address pair is found.
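The exploration step above can be sketched as a cross product plus an ordered probe loop. Note the ordering here is a lexicographic stand-in, not the actual RFC 3484 rules, and `probe` is an assumed callback:

```python
# Sketch of REAP exploration: build all (local, peer) locator pairs,
# order them (a placeholder sort stands in for the RFC 3484 DAS rules),
# and probe each pair until one answers.
from itertools import product

def explore(local_locators, peer_locators, probe):
    """probe(src, dst) -> bool is assumed to test one address pair."""
    candidates = list(product(local_locators, peer_locators))
    candidates.sort()                # stand-in for DAS pruning/ordering
    for src, dst in candidates:
        if probe(src, dst):          # Probe message got a reply
            return (src, dst)        # operational pair found
    return None                      # exploration exhausted the set
```

With locators {A1, A2} and {B1, B2} and only (A2, B1) reachable, the loop walks the four candidates in order and returns ("A2", "B1").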
An Example
(diagram: host A with locators A1, A2 and host B with locators B1, B2; the candidate set is the cross product: (A1, B1), (A1, B2), (A2, B1), (A2, B2))
Our Current Research Topic
Analyzing the behaviour of REAP in large sites to answer the following questions:
- If REAP is deployed in the future Internet, what will happen when a failure occurs in a site with thousands of hosts?
- How much traffic will be generated?
- How long will the recovery process take?
Our Current Research Topic
To answer these questions, we have built and simulated a model of REAP using the Möbius modelling tool. We have performed experiments with 3000 instances of REAP to measure the generated traffic and the recovery time.
Some Preliminary Results
- Delay: normally distributed, no congestion
- Mean = 0.08, 0.09, 0.1, 0.11, 0.12 sec; variance = 0.04
- Number of Shim6 contexts: 3000
- Send timer = 10 sec
Some Preliminary Results
Some Preliminary Results (including congestion)
Some Preliminary Results (more congestion)
Research Areas in Networking Field in the Department of Computer Science
DNS History Database
- Tracks FQDNs and their IP addresses, keeping first and last appearance dates
- Started in 2006, now 10 collectors world-wide
- Around 750 million entries overall
- Current Auckland research on tracking Fast Flux Networks (Leo)
- Working on a web site for the project!
Traffic Flow measurements (DongJin, Jerry)
DNS RTT: long-term measurements (Nevil)
Performance Measurements of NZ ISPs (Nevil)
For more details contact Nevil Brownlee at
Research Areas in Networking Field in the Department of Computer Science
- Routing issues
- IPv6 deployment
For more details contact Brian Carpenter at