IP Multicasting: Explaining Multicast


1 IP Multicasting: Explaining Multicast
BSCI Module 7

2 Objectives
Describe the IP multicast group.
Compare and contrast unicast packets and multicast packets.
List the advantages and disadvantages of multicast traffic.
Discuss two types of multicast applications.
Describe the types of IP multicast addresses.
Describe how receivers can learn about a scheduled multicast session.

3 Multicast Overview

4 IP Multicast
Distribute information to large audiences over an IP network. Multicast deployments require three elements: the application, the network infrastructure, and client devices. IP multicast is a bandwidth-conserving technology that reduces traffic by delivering a single stream of information simultaneously to thousands of corporate recipients and homes. IP multicast uses a single data stream, which is replicated by routers at branch points throughout the network. This mechanism uses bandwidth much more efficiently and greatly decreases the load on content servers, reaching more users at a lower cost per user. In short, IP multicast is an efficient means of delivering media to many hosts over a single IP flow.
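The bandwidth saving described above is easy to quantify. The following sketch compares server-side load for unicast and multicast delivery; the stream rate and audience size are made-up example values, not figures from the course material:

```python
# Illustrative arithmetic only: compare server-side bandwidth for a video
# stream delivered by unicast versus multicast.

STREAM_RATE_MBPS = 1.5   # example stream bit rate (assumed value)
RECEIVERS = 1000         # example audience size (assumed value)

unicast_load = STREAM_RATE_MBPS * RECEIVERS  # one copy per receiver
multicast_load = STREAM_RATE_MBPS            # one copy; routers replicate it

print(f"Unicast server load:   {unicast_load:.1f} Mb/s")
print(f"Multicast server load: {multicast_load:.1f} Mb/s")
```

With these numbers the server sends 1500 Mb/s under unicast but only 1.5 Mb/s under multicast; the audience can grow without increasing the sender's load.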

5 Multicast Advantages
Enhanced efficiency: Controls network traffic and reduces server and CPU loads
Optimized performance: Eliminates traffic redundancy
Distributed applications: Makes multipoint applications possible
For the equivalent amount of traffic, a multicast sender needs much less processing power and bandwidth than a unicast sender. Because multicast packets do not impose as high a bandwidth load as unicast packets, there is a greater likelihood that they will arrive at the receivers almost simultaneously. Multicast transmission affords many advantages over unicast transmission in a one-to-many or many-to-many environment:
Enhanced efficiency: Available network bandwidth is used more efficiently because multiple streams of data are replaced with a single transmission.
Optimized performance: Fewer copies of data require forwarding and processing.
Distributed applications: Multipoint applications become possible. With unicast they would not be as demand and usage grow, because unicast transmission does not scale (traffic level and number of clients increase at a 1:1 rate).
Multicast also enables a whole range of new applications that were not possible with unicast, for example video on demand (VoD).

6 Multicast Disadvantages
Multicast is UDP-based:
Best-effort delivery
Heavy drops in voice traffic; moderate to heavy drops in video
No congestion avoidance
Duplicate packets may be generated
Out-of-sequence delivery may occur
Efficiency issues in filtering and in security
There are also some disadvantages of multicast that need to be considered. Most multicast applications are UDP-based, which has some undesirable side effects compared with similar unicast TCP applications:
Best-effort delivery results in occasional packet drops. Drops are to be expected, so multicast applications must not assume reliable delivery of data and should be designed accordingly. Many multicast applications that operate in real time (for example, video and audio) can be affected by these losses, and requesting retransmission of the lost data at the application layer is not feasible for such real-time applications. Reliable multicast applications address this issue.
Heavy drops in voice applications result in jerky, missed speech patterns that can make the content unintelligible when the drop rate gets high enough. Moderate to heavy drops in video are sometimes better tolerated by the human eye and appear as unusual "artifacts" in the picture. However, some compression algorithms can be severely affected by even low drop rates, causing the picture to become jerky or to freeze for several seconds while the decompression algorithm recovers.
No congestion control can result in overall network degradation as the popularity of UDP-based multicast applications grows. The lack of TCP windowing and slow-start mechanisms can result in network congestion. If possible, multicast applications should attempt to detect and avoid congestion conditions.
Duplicate packets can occasionally be generated as multicast network topologies change. Applications should expect occasional duplicate packets to arrive and should be designed accordingly.
Out-of-sequence delivery of packets to the application may also occur during network topology changes or other network events that affect the flow of multicast traffic.
Efficient filtering and security for multicast flows is a significant issue because senders randomly select UDP ports, and very often multicast addresses as well. This behavior prevents effective static filtering.

7 Types of Multicast Applications
One-to-many: A single host sending to two or more (n) receivers
Many-to-many: Any number of hosts sending to the same multicast group; hosts are also members of the group (sender = receiver)
Many-to-one: Any number of receivers sending data back to a source (via unicast or multicast)
There are different types of multicast applications. The two most common models are:
One-to-many, where one sender sends data to many receivers. Examples include audio or video distribution, push media, announcements, and monitoring. If a one-to-many application needs feedback from receivers, it becomes a many-to-many application.
Many-to-many, where a host can be a sender and a receiver simultaneously. Examples include collaboration, concurrent processing, and distributed interactive simulations.
Other models, such as many-to-one, where many receivers send data back to one sender, are used less frequently. Examples include resource discovery, data collection, auctions, and polling.

8 IP Multicast Applications
Corporate broadcasts
Live TV and radio broadcast to the desktop
Distance learning
Multicast file transfer
Data and file replication
Training
Videoconferencing
Video-on-demand
Whiteboard/collaboration
Real-time data delivery (financial)
Many new multipoint applications are emerging as demand for them grows. Real-time applications include live broadcasts, financial data delivery, whiteboard collaboration, and video conferencing. Non-real-time applications include file transfer, data and file replication, and video-on-demand. Note also that the latest version of Novell NetWare uses IP multicast for file and print service announcements.

9 Multicast Addressing

10 IP Multicast Address Structure
IP group addresses: Class D address (high-order bits are 1110)
Range from 224.0.0.0 through 239.255.255.255
This information is covered in CCNP 3 v4.0 Module 8: The Internet Assigned Numbers Authority (IANA) controls the assignment of IP multicast addresses. IANA has assigned the IPv4 Class D address space to be used for IP multicast. Class D addresses are denoted by the high-order 4 bits being set to 1110. Therefore, all IP multicast group addresses fall in the range from 224.0.0.0 through 239.255.255.255. The remaining 28 bits of the IP address identify the multicast group ID. A multicast group address is a single address, typically written in dotted decimal, in the range 224.0.0.0 through 239.255.255.255. The high-order bits in the first octet identify this 224-base address.
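The "high-order four bits are 1110" rule can be checked directly. This sketch (the helper name is mine, not from the course) tests whether an IPv4 address is a Class D multicast address:

```python
import ipaddress

def is_multicast(addr: str) -> bool:
    """True if addr is an IPv4 Class D (multicast) address, i.e. its
    high-order four bits are 1110 (224.0.0.0 through 239.255.255.255)."""
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    return first_octet >> 4 == 0b1110

print(is_multicast("224.0.0.1"))        # True
print(is_multicast("239.255.255.255"))  # True
print(is_multicast("192.168.1.1"))      # False
```

Python's `ipaddress.IPv4Address` also exposes an equivalent `.is_multicast` property; the bit test is spelled out here only to mirror the 1110-prefix definition on the slide.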

11 Multicast Addressing: IPv4 Header
[Figure: IPv4 header layout (Version, IHL, Type of Service, Total Length, Identification, Flags, Fragment Offset, Time to Live, Protocol, Header Checksum, Source Address, Destination Address, Options, Padding). The Source Address is always a Class A, B, or C unicast address and can never be a Class D multicast group address; the Destination Address carries the Class D multicast group address.]
The Class D address is used for the destination IP address of multicast traffic for a specific group. Multicast addresses may be dynamically or statically allocated. Dynamic multicast addressing provides applications with a group address on demand. Dynamic multicast addresses have a specific lifetime, and applications must request and use the address as needed. Static addresses are used at all times. As with IP addressing, there is the concept of private address space that may be used for local, organization-wide traffic, as well as public, Internet-wide multicast addresses. There are also addresses reserved for specific protocols that require well-known addresses. The Internet Assigned Numbers Authority (IANA) manages the assignment of multicast addresses, called permanent host groups, which are similar in concept to well-known TCP and UDP port numbers.

12 IP Multicast Address Groups
Local scope addresses: 224.0.0.0 to 224.0.0.255
Global scope addresses: 224.0.1.0 to 238.255.255.255
Administratively scoped addresses: 239.0.0.0 to 239.255.255.255
This information is covered in CCNP 3 v4.0 Module 8: The multicast IP address space is separated into the following address groups:
Local scope addresses (224.0.0.0 through 224.0.0.255) are reserved by IANA for network protocol use. Multicasts in this range are never forwarded off the local network regardless of TTL, and usually the TTL is set to 1.
Global scope addresses (224.0.1.0 through 238.255.255.255) are allocated dynamically throughout the Internet.
Administratively scoped addresses (239.0.0.0 through 239.255.255.255) are reserved for use inside private domains.
Multicast delivery is enabled by setting up a multicast address on the Content Engine in the form of a multicast cloud configuration, to which different devices configured to receive content from the same channel can subscribe. The delivering device sends content to the multicast address set up at the Content Engine, from which it becomes available to all subscribed receiving devices.
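The three scopes above partition the Class D space cleanly, so a classifier is a few range checks. This sketch (function name and return labels are mine) uses the IANA boundaries cited on the slide:

```python
import ipaddress

def multicast_scope(addr: str) -> str:
    """Classify an IPv4 multicast address into the three groups above.
    Range boundaries are the IANA assignments cited on the slide."""
    ip = ipaddress.IPv4Address(addr)
    if ip in ipaddress.ip_network("224.0.0.0/24"):
        return "local"                    # never forwarded off the local net
    if ip in ipaddress.ip_network("239.0.0.0/8"):
        return "administratively scoped"  # private domains only
    if ipaddress.IPv4Address("224.0.1.0") <= ip <= ipaddress.IPv4Address("238.255.255.255"):
        return "global"                   # dynamically allocated Internet-wide
    raise ValueError(f"{addr} is not an IPv4 multicast address")

print(multicast_scope("224.0.0.5"))   # local
print(multicast_scope("233.1.1.1"))   # global
print(multicast_scope("239.1.1.1"))   # administratively scoped
```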

13 Local Scope Addresses
Well-known addresses assigned by IANA
Reserved use: 224.0.0.0 through 224.0.0.255
224.0.0.1 (all multicast systems on subnet)
224.0.0.2 (all routers on subnet)
224.0.0.4 (all DVMRP routers)
224.0.0.13 (all PIMv2 routers)
224.0.0.5, 224.0.0.6, 224.0.0.9, and 224.0.0.10 used by unicast routing protocols
This information is covered in CCNP 3 v4.0 Module 8: Local scope addresses (224.0.0.0 through 224.0.0.255) are reserved by IANA for network protocol use. Multicasts in this range are never forwarded off the local network regardless of TTL, and usually the TTL is set to 1. Network protocols use these addresses for automatic router discovery and to communicate important routing information. Address 224.0.0.1 identifies the all-hosts group; every multicast-capable host must join this group. If a ping command is issued using this address, all multicast-capable hosts on the network must answer the ping request. Address 224.0.0.2 identifies the all-routers group; multicast routers must join that group on all multicast-capable interfaces. Other local multicast addresses are:
224.0.0.4: All Distance Vector Multicast Routing Protocol (DVMRP) routers on a subnet
224.0.0.5: All OSPF routers on a subnet
224.0.0.6: All OSPF DRs on a subnet
224.0.0.9: All RIPv2 routers on a subnet
224.0.0.10: All EIGRP routers on a subnet
224.0.0.13: All PIMv2 routers on a subnet

14 Global Scope Addresses
Transient addresses, assigned and reclaimed dynamically (within applications):
Global range: 224.0.1.0 through 238.255.255.255
224.2.x.x usually used in MBone applications
Part of the global scope recently used for new protocols and temporary usage
This information is covered in CCNP 3 v4.0 Module 8: For multicast applications, transient addresses are dynamically assigned and then returned for others to use when no longer needed. Two address types are (1) global scope addresses and (2) administratively scoped addresses. Global scope addresses (224.0.1.0 through 238.255.255.255) are allocated dynamically throughout the Internet. For example, the 224.2.x.x range is used in MBone applications. MBone stands for Multicast Backbone and is a collection of Internet routers that support IP multicasting. The MBone is used as a virtual network (multicast channel) on which various public and private audio and video programs are sent. The MBone was originated by the Internet Engineering Task Force (IETF) in an effort to multicast audio and video meetings. Some of these addresses have been registered with IANA; for example, IP address 224.0.1.1 has been reserved for Network Time Protocol (NTP). Applications using certain registered multicast address ranges have been demonstrated to be vulnerable to exploitation, which has led to serious security problems. Addresses in the 232.0.0.0 to 232.255.255.255 range are reserved for Source Specific Multicast (SSM). SSM is an extension of Protocol Independent Multicast (PIM), which allows for an efficient data delivery mechanism in one-to-many communications. Parts of the global scope are also used for new protocols and temporary usage:
ST multicast groups
Multimedia conference calls
SAPv1 announcements
SAPv0 announcements (deprecated)
SAP dynamic assignments
DIS transient groups
VMTP transient groups

15 Administratively Scoped Addresses
Transient addresses, assigned and reclaimed dynamically (within applications):
Limited (local) scope: 239.0.0.0/8 for private IP multicast addresses (RFC 2365)
Site-local scope: 239.255.0.0/16
Organization-local scope: 239.192.0.0/14
This information is covered in CCNP 3 v4.0 Module 8: For multicast applications, transient addresses are dynamically assigned and then returned for others to use when no longer needed. Two address types are (1) global scope addresses and (2) administratively scoped addresses. Like the private IP address space that is used within the boundaries of a single organization, "limited" or "administratively scoped" addresses in the range 239.0.0.0 to 239.255.255.255 are constrained to a local group or organization. Companies, universities, or other organizations can use limited scope addresses for local multicast applications that will not be forwarded over the Internet. Typically, routers are configured with filters to prevent multicast traffic in this address range from flowing outside of an AS. Within an autonomous system or domain, the limited scope address range can be further subdivided so that local multicast boundaries can be defined. This subdivision is called "address scoping" and allows for address reuse between smaller domains. These addresses are described in RFC 2365, Administratively Scoped IP Multicast. Administratively scoped multicast address space is divided into the following scopes:
Site-local scope (239.255.0.0/16, growing downward to 239.254.0.0/16, 239.253.0.0/16, and so on)
Organization-local scope (239.192.0.0/14, with possible expansion to the ranges 239.0.0.0/10, 239.64.0.0/10, and 239.128.0.0/10)

16 Layer 2 Multicast Addressing
IEEE MAC Address Format Historically, network interface cards (NICs) on a LAN segment could receive only packets destined for their burned-in MAC address or the broadcast MAC address. In IP multicast, several hosts need to be able to receive a single data stream with a common destination MAC address. Some means had to be devised so that multiple hosts could receive the same packet and still be able to differentiate between several multicast groups. One method to accomplish this is to map IP multicast Class D addresses directly to a MAC address. Today, using this method, NICs can receive packets destined to many different MAC addresses—their own unicast, broadcast, and a range of multicast addresses. The IEEE LAN specifications made provisions for the transmission of broadcast and multicast packets. In the standard, bit 0 of the first octet is used to indicate a broadcast or multicast frame. The graphic in this slide shows the location of the broadcast or multicast bit in an Ethernet frame. This bit indicates that the frame is destined for a group of hosts or all hosts on the network (in the case of the broadcast address 0xFFFF.FFFF.FFFF). IP multicast makes use of this capability by sending IP packets to a group of hosts on a LAN segment.

17 IANA Ethernet MAC Address Range
Available range of MAC addresses for IP multicast: 0100.5e00.0000 through 0100.5e7f.ffff
IANA owns a block of Ethernet MAC addresses that start with 01:00:5E in hexadecimal format. Half of this block is allocated for multicast addresses. The range from 0100.5e00.0000 through 0100.5e7f.ffff is the available range of Ethernet MAC addresses for IP multicast.

18 IANA Ethernet MAC Address Range
Available range of MAC addresses for IP multicast: 01:00:5e:00:00:00 through 01:00:5e:7f:ff:ff
Within this range, these MAC addresses have the first 25 bits in common. The remaining 23 bits are available for mapping to the lower 23 bits of the IP multicast group address: the mapping places the lower 23 bits of the IP multicast group address into these available 23 bits of the Ethernet address.

19 Ethernet MAC Address Mapping
Given that IPv4 Class D addresses are used for IP multicast, all IP multicast addresses begin with the same first four bits, 1110. Therefore, all IP multicast group addresses fall in the range from 224.0.0.0 through 239.255.255.255, and the remaining 28 bits of the IP address identify the multicast group ID. When mapping Layer 3 to Layer 2 addresses, the low-order 23 bits of the Layer 3 IP multicast address are mapped into the low-order 23 bits of the IEEE MAC address. Notice that this results in 5 bits of information being lost: there are 28 bits of unique address space in an IP multicast address (32 bits minus the 4-bit 1110 Class D prefix) but only 23 bits mapped into the IEEE MAC address, so 28 - 23 = 5 bits overlap, and 2^5 = 32. Therefore, there is a 32-to-1 overlap of Layer 3 addresses to Layer 2 addresses: be aware that 32 Layer 3 addresses map to the same Layer 2 multicast address. Continuing with this example, take the lower 23 bits of the Layer 3 address, convert the binary to hexadecimal, and add the prefix 01:00:5E. The resulting Layer 2 multicast address of 01:00:5E:01:01:01 corresponds, for example, to the IP multicast address 224.1.1.1.
A bit of history: it turns out that this loss of 5 bits' worth of information was not originally intended. When Dr. Steve Deering was doing his seminal research on IP multicast, he approached his advisor with the need for 16 OUIs to map all 28 bits' worth of Layer 3 IP multicast address into unique Layer 2 MAC addresses. Note: An OUI (Organizationally Unique Identifier) is the high 24 bits of a MAC address, assigned to an organization by the IEEE; a single OUI therefore provides 24 bits' worth of unique MAC addresses to the organization. Unfortunately, at that time the IEEE charged $1000 for each OUI assigned, which meant that Dr. Deering was asking his advisor to spend $16,000 so he could continue his research.
Due to budget constraints, the advisor agreed to purchase a single OUI for Dr. Deering. However, the advisor also chose to reserve half of the MAC addresses in this OUI for other graduate research projects and granted Dr. Deering the other half. This left Dr. Deering with only 23 bits' worth of MAC address space with which to map 28 bits of IP multicast addresses. (It's too bad it wasn't known back then how popular IP multicast would become; if it had been, Dr. Deering might have been able to pass the hat around to interested parties and collect enough money to purchase all 16 OUIs.)
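The mapping just described (01:00:5E prefix plus the low 23 bits of the group address) can be sketched in a few lines; the function name is mine, but the algorithm is the one the slide defines:

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address:
    fixed prefix 01:00:5e, bit 23 of the suffix zero, and the low 23 bits
    of the group address copied into the remaining bits."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF  # keep low 23 bits
    mac = 0x01005E000000 | low23                          # OR into the prefix
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))

print(multicast_mac("224.1.1.1"))  # 01:00:5e:01:01:01
```

Note that 239.129.1.1 yields the same MAC, which is exactly the 5-bit information loss the slide describes.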

20 Be Aware of the 32:1 Address Overlap
Multicast Addressing: IP multicast MAC address mapping (FDDI and Ethernet). 32 IP multicast addresses map to 1 multicast MAC address. Remember, there is a 32-to-1 overlap of Layer 3 addresses to Layer 2 addresses; be aware that 32 Layer 3 addresses map to the same Layer 2 multicast address. For example, all of the IP multicast addresses shown on the slide map to the same Layer 2 multicast MAC address, 0x0100.5E01.0101.
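The 32:1 overlap can be enumerated directly: the five ambiguous bits are the four bits after the 1110 prefix plus bit 23, which is never copied into the MAC. This sketch (helper name is mine) lists all 32 groups that collide on one MAC:

```python
import ipaddress

def groups_sharing_mac(group: str):
    """List all 32 IPv4 multicast groups whose MAC mapping collides with
    `group`. The 5 lost bits are bits 23..27 of the IP address, so we hold
    the low 23 bits fixed and sweep the 5 ambiguous bits."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return [str(ipaddress.IPv4Address(0xE0000000 | (hi << 23) | low23))
            for hi in range(32)]

aliases = groups_sharing_mac("224.1.1.1")
print(len(aliases))              # 32
print(aliases[0], aliases[-1])   # 224.1.1.1 239.129.1.1
```

Any multicast design should check that two groups in use do not land in the same 32-address equivalence class, or hosts will receive frames for both at Layer 2.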

21 IP Multicasting: IGMP and Layer 2 Issues
BSCI Module 7 Lesson 2

22 Internet Group Management Protocol (IGMP)
How hosts tell routers about group membership
Routers solicit group membership from directly connected hosts
RFC 1112 specifies IGMPv1: Supported on Windows 95
RFC 2236 specifies IGMPv2: Supported on the latest service packs for Windows and most UNIX systems
RFC 3376 specifies IGMPv3: Supported in Windows XP and various UNIX systems
Most of this information is covered in CCNP 3 v4.0 Module 8: Internet Group Management Protocol (IGMP) is a host-to-router protocol. The primary purpose of IGMP is to permit hosts to communicate their desire to receive multicast traffic to the IP multicast router(s) on the local network. This, in turn, permits the IP multicast router(s) to join the specified multicast group and begin forwarding the multicast traffic onto the network segment. The initial specification for IGMP (v1) was documented in RFC 1112, "Host Extensions for IP Multicasting." Since that time, many problems and limitations with IGMPv1 have been discovered, which led to the development of the IGMPv2 specification, ratified in November 1997 as RFC 2236. Even before IGMPv2 had been ratified, work on the next generation of the protocol, IGMPv3, had begun; at the time this material was written, the IGMPv3 specification (later ratified as RFC 3376) was still a work in progress and had not yet been widely implemented.

23 IGMPv2 (RFC 2236)
Group-specific query: Router sends a membership query to a single group rather than to all hosts (reduces traffic).
Leave group message: Host sends a leave message if it leaves the group and is the last member (reduces leave latency in comparison to v1).
Query-interval response time: The querier sets the maximum query-response time (controls burstiness and fine-tunes leave latencies).
Querier election process: IGMPv2 routers can elect the querier without relying on the multicast routing protocol.
With IGMPv1, routers send periodic membership queries to the all-hosts multicast address 224.0.0.1. Hosts send membership reports to the group multicast address they want to join, and hosts silently leave the multicast group. As a result of some of the limitations discovered in IGMPv1, work was begun on IGMPv2 in an attempt to remove them. Most of the changes between IGMPv1 and IGMPv2 address the issues of leave and join latencies as well as ambiguities in the original protocol specification. IGMPv2 is backward-compatible with IGMPv1. Some of the more significant changes:
A group-specific query was added in v2 to allow the router to query membership in a single group instead of all groups. This is an optimized way to quickly find out whether any members are left in a group without asking all groups for a report. The difference between the two is that a membership query is multicast to the all-hosts address (224.0.0.1), while a group-specific query for group G is multicast to group G's own multicast address.
A leave group message was also added in IGMPv2. This allows end systems to tell the router they are leaving the group, which reduces the leave latency for the group on the segment when the member leaving is the last member of the group. The standard is loosely written on when leave group messages should and must be sent; this is an important consideration when discussing CGMP.
The query-interval response time was added to control the burstiness of reports. This time is set in queries to convey to the members how much time they have to respond to a query with a report.
A querier election process was added to provide the capability for IGMPv2 routers to elect the querier without having to rely on the multicast routing protocol to perform this process.

24 IGMPv2—Joining a Group
Members joining a multicast group do not have to wait for a query to join. They send an unsolicited report indicating their interest. This procedure reduces join latency for the end system joining if no other members are present. In this example, after host H2 sends the join group message for group 224.1.1.1, group 224.1.1.1 is active on the router interface Ethernet0. Using the show ip igmp group command reveals the following: group 224.1.1.1 is active on Ethernet0 and has been active on this interface for 1 hour and 3 minutes. It expires (and will be deleted) in 2 minutes and 31 seconds if an IGMP Host Membership Report for this group is not heard in that time. The last host to report membership was H2. Note: When there are two IGMP routers on the same Ethernet segment (broadcast domain), the router with the highest IP address is the designated querier.

25 IGMPv2—Leaving a Group
In IGMPv1, hosts would leave passively; that is, they would not explicitly say they were leaving but would simply stop reporting. IGMPv2, however, has explicit Leave Group messages, which reduce overall leave latency. When the IGMPv2 querier receives a Leave message, it responds by sending a Group-Specific Query for the associated group to see whether other hosts still wish to receive traffic for the group. This process helps to reduce overall leave latency. When CGMP is in use, the IGMPv2 Leave message mechanism also helps the router better manage the CGMP state in the switch, which improves the leave latency for the specific host at Layer 2. (Note: Due to the wording of the current IGMPv2 draft specification, hosts may choose NOT to send Leave messages if they are not the last host to leave the group. This can adversely affect CGMP performance.)

26 IGMPv2—Leaving a Group (Cont.)
Example: H2 and H3 are members of group 224.1.1.1. Step 1: Host H2 sends an IGMPv2 Leave Group message to the all-routers multicast group (224.0.0.2), informing all routers on the subnet that H2 is leaving group 224.1.1.1. H2 sends a leave message.

27 IGMPv2—Leaving a Group (Cont.)
Example: Step 2: Router A hears the Leave Group message from host H2. However, because routers keep a list only of the group memberships that are active on a subnet, not of the individual member hosts, Router A sends a Group-Specific Query to determine whether any other group members are present. Note that the Group-Specific Query is multicast to the target group; therefore, only hosts that are members of this group will respond. Router sends group-specific query.

28 IGMPv2—Leaving a Group (Cont.)
Example: Step 3: Host H3 is still a member of group 224.1.1.1 and therefore hears the Group-Specific Query and responds with an IGMPv2 Membership Report to inform the routers on the subnet that a member of this group is still present. Router A keeps forwarding multicast for 224.1.1.1, since at least one member is present. A remaining member host sends a report, so the group remains active.

29 IGMPv2—Leaving a Group (Cont.)
At this point, the group is still active. However, the router shows that host H3 was the last host to send an IGMP Group Membership Report, and the group will time out in 1 minute and 47 seconds. After receiving a leave message from H3, the router sends a group-specific query to see whether any other group members are present. Because host H3 was the last remaining member of the multicast group 224.1.1.1, no IGMP membership report for the group is received, and the group times out. This activity typically takes 1 to 3 seconds from the time the leave message is sent until the group-specific query times out and multicast traffic stops flowing for that group.
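The leave sequence in slides 26 through 29 can be sketched as router-side logic. This toy model (not a real IGMP implementation; class and method names are mine) collapses the Group-Specific Query and its timeout into a single boolean argument:

```python
# Toy model of the IGMPv2 router-side leave logic described above: on a
# Leave message, send a Group-Specific Query and keep the group only if
# some member reports back before the query times out.

class IgmpV2Router:
    def __init__(self):
        self.active_groups = set()   # routers keep per-group state only

    def report(self, group):
        """An IGMPv2 Membership Report was heard for `group`."""
        self.active_groups.add(group)

    def leave(self, group, any_member_answers_query):
        """A Leave Group message was heard. The Group-Specific Query and
        its timeout are modeled by the boolean argument."""
        if group in self.active_groups and not any_member_answers_query:
            self.active_groups.discard(group)   # timed out: stop forwarding

r = IgmpV2Router()
r.report("224.1.1.1")                                   # H2 and H3 join
r.leave("224.1.1.1", any_member_answers_query=True)     # H2 leaves; H3 answers
print("224.1.1.1" in r.active_groups)                   # True: still forwarding
r.leave("224.1.1.1", any_member_answers_query=False)    # H3 leaves; no answer
print("224.1.1.1" in r.active_groups)                   # False: group timed out
```

The key point the model captures is that the router tracks groups, not hosts, which is why the Group-Specific Query round trip is needed at all.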

30 IGMPv2—Leaving a Group (Cont.)
At this point, all hosts have left the group on Ethernet0. This is indicated in the output of the show ip igmp group command.

31 IGMPv3—Joining a Group
IGMP version 3 (IGMPv3) adds the ability to filter multicasts based on the multicast source. The main intention of IGMPv3, which is a proposed standard, is to allow hosts to indicate that they want to receive traffic only from particular sources within a multicast group. This enhancement makes the utilization of routing resources more efficient. In IGMPv3, reports are sent to the "All IGMPv3 Routers" address (224.0.0.22). IGMPv3 hosts no longer send their reports to the target multicast group address; instead, they send their IGMPv3 Membership Reports to the "All IGMPv3 Routers" multicast address. Routers listen to 224.0.0.22 in order to receive and maintain IGMP membership state for every member on the subnet. This is a radical change from the behavior in IGMPv1/v2, where routers maintained group state only on a per-subnet basis. Hosts do not listen to 224.0.0.22 and therefore do not hear other hosts' IGMPv3 Membership Reports. IGMPv3 drops the report suppression mechanism used in IGMPv1/v2: all IGMPv3 hosts on the wire respond to queries by sending IGMPv3 Membership Reports containing their total IGMP state for all groups. To prevent huge bursts of IGMPv3 reports, the response interval may now be tuned over a much greater range than before, permitting the network engineer to adjust the burstiness of reports when there is a large number of hosts on the subnet. Members joining a group do not have to wait for a query to join; instead, they send an unsolicited IGMPv3 Membership Report indicating their interest, which reduces join latency for the end system joining if no other members are present. In the example above, host H2 joins the multicast group and is willing to receive any and all sources in this group. The joining member sends an IGMPv3 report to 224.0.0.22 immediately upon joining.

32 IGMPv3—Joining Specific Source(s)
Joining only a specific source or selected sources: Hosts may signal the router that they wish to receive only a specific set of sources sending to the group. This is done by using an "Include" list in the group record of the report. When an Include list is in use, only the specific sources listed in it are joined. IGMPv3 uses this list for "source filtering": a host has the ability to report interest in receiving packets only from specific source addresses, or from all but specific source addresses, sent to a particular multicast address. That information may be used by multicast routing protocols to avoid delivering multicast packets from specific sources to networks where there are no interested receivers. In the example above, host H2 joins the multicast group and only wants to receive traffic from a specific source sending to the group. The IGMPv3 report contains the desired sources in the Include list; only "Included" sources are joined.
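The Include/Exclude semantics above can be captured in a small data structure. This is a hypothetical sketch, not the IGMPv3 wire format; the field and method names, and the example group and source addresses, are illustrative only:

```python
# Hypothetical sketch of an IGMPv3 group record with source filtering.
# INCLUDE {S1, S2}: deliver only from the listed sources.
# EXCLUDE {S1, S2}: deliver from all sources except the listed ones
# (EXCLUDE with an empty list is the v2-style "any source" join).

from dataclasses import dataclass

@dataclass(frozen=True)
class GroupRecord:
    group: str                        # multicast group address being joined
    mode: str = "EXCLUDE"             # "INCLUDE" or "EXCLUDE"
    sources: frozenset = frozenset()  # the source list for that mode

    def wants(self, source: str) -> bool:
        """Should traffic from `source` to this group reach the host?"""
        if self.mode == "INCLUDE":
            return source in self.sources     # only the listed sources
        return source not in self.sources     # all but the listed sources

# A host joins 232.1.1.1 but asks for a single (hypothetical) source only:
rec = GroupRecord("232.1.1.1", mode="INCLUDE", sources=frozenset({"10.1.1.1"}))
print(rec.wants("10.1.1.1"))   # True
print(rec.wants("10.2.2.2"))   # False
```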

33 IGMPv3—Maintaining State
The router multicasts periodic Membership Queries to the "All-Hosts" group address (224.0.0.1). All hosts on the wire respond by sending back an IGMPv3 Membership Report that contains their complete IGMP group state for the interface. Router sends periodic queries; all IGMPv3 members respond, and reports contain multiple group state records.

34 IGMP Layer 2 Issues

35 Determining IGMP Version Running
Determining which IGMP version is running on an interface. rtr-a>show ip igmp interface e0 Ethernet0 is up, line protocol is up Internet address is , subnet mask is IGMP is enabled on interface Current IGMP version is 2 CGMP is disabled on interface IGMP query interval is 60 seconds IGMP querier timeout is 120 seconds IGMP max query response time is 10 seconds Inbound IGMP access group is not set Multicast routing is enabled on interface Multicast TTL threshold is 0 Multicast designated router (DR) is (this system) IGMP querying router is (this system) Multicast groups joined: Verifying the IGMP Version on an Interface Use the show ip igmp interface command to determine which version of IGMP is currently active on an interface. This is indicated by the line in the above example that says Current IGMP version is 2

36 Layer 2 Multicast Frame Switching
Problem: Layer 2 flooding of multicast frames. Typical Layer 2 switches treat multicast traffic as unknown or broadcast and must flood the frame to every port (in the VLAN). Static entries may sometimes be set to specify which ports receive which groups of multicast traffic; dynamic configuration of these entries reduces administration. For most Layer 2 switches, multicast traffic is normally treated like an unknown MAC address or broadcast frame, which causes the frame to be flooded out every port within a VLAN. This is fine for unknowns and broadcasts, but as we have seen earlier, IP multicast hosts may join and be interested in only specific multicast groups. On most Layer 2 switches, all this traffic is forwarded out all ports, resulting in wasted bandwidth on the segments and on the end stations. One way around this on Cisco Catalyst switches is to program the switch via the command-line interface to associate a multicast MAC address with specific ports; for example, the administrator may configure the switch so that only ports 5, 6, and 7 receive the traffic destined for a given multicast group. This works, but it is not scalable. IP multicast hosts dynamically join and leave groups, using IGMP to signal the multicast router, so dynamic configuration of the switches' forwarding tables would be more effective and would cut down on administration.

37 Layer 2 Multicast Switching Solutions
Cisco Group Management Protocol (CGMP): simple, Cisco-proprietary; runs on routers and switches. IGMP snooping: more complex, standardized behavior with proprietary implementations; switches only. To improve the behavior of switches when they receive multicast frames, several multicast switching solutions have been developed, including the following: Cisco Group Management Protocol (CGMP) – CGMP is a Cisco Systems proprietary protocol that runs between a multicast router and a switch. It enables the Cisco multicast router, after receiving IGMP messages sent by hosts, to inform the switch of the information contained in the IGMP packet. IGMP snooping – With IGMP snooping, the switch intercepts IGMP messages from the host and updates its MAC table accordingly. To implement IGMP snooping without a switch performance penalty, the switch must be made Layer 3-aware, typically by using Layer 3 ASICs.

38 Layer 2 Multicast Frame Switching CGMP
Solution 1: CGMP. Runs on switches and routers. CGMP packets are sent by routers to switches at the well-known CGMP multicast MAC address 0100.0cdd.dddd. A CGMP packet contains: a type field (join or leave), the MAC address of the IGMP client, and the multicast MAC address of the group. The switch uses the CGMP packet information to add or remove an entry for a particular multicast MAC address. CGMP is the most common multicast switching solution designed by Cisco. CGMP is based on a client/server model, where the router acts as the CGMP server and the switch as the client. Software components run on both devices: the router translates IGMP messages into CGMP commands, which the switches process and use to populate their Layer 2 forwarding tables with the correct multicast entries. The basis of CGMP is that the IP multicast router sees all IGMP packets and can therefore inform the switch when specific hosts join or leave multicast groups. Routers use the well-known CGMP multicast MAC address to send CGMP control packets to the switch, and the switch uses this information to program its forwarding table. When the router sees an IGMP control packet, it creates a CGMP packet that contains the request type (join or leave), the Layer 2 multicast MAC address of the group, and the actual MAC address of the client. This packet is sent to the well-known CGMP multicast MAC address (0x0100.0cdd.dddd), on which all CGMP switches listen. The switch interprets the packet and creates the proper entries in its content-addressable memory (CAM) table to constrain the forwarding of multicast traffic for this group.
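The CGMP join/leave semantics described above can be modeled with a toy sketch. This illustrates only the logic (a join adds the client's port to the group's CAM entry; a leave removes it), not the actual CGMP wire format; the class and function names are made up:

```python
from dataclasses import dataclass

# Well-known CGMP destination MAC on which all CGMP switches listen.
CGMP_DEST_MAC = "01:00:0c:dd:dd:dd"

@dataclass
class CgmpPacket:
    """Illustrative model of the fields the slide lists."""
    msg_type: str    # "join" or "leave"
    client_mac: str  # MAC address of the IGMP client
    group_mac: str   # Layer 2 multicast MAC of the group

def apply_cgmp(cam: dict, pkt: CgmpPacket, client_port: int) -> None:
    """Update a switch CAM table (group MAC -> set of ports) from CGMP."""
    ports = cam.setdefault(pkt.group_mac, set())
    if pkt.msg_type == "join":
        ports.add(client_port)
    elif pkt.msg_type == "leave":
        ports.discard(client_port)
```

In the real protocol the router derives the client's port from the source MAC already learned by the switch; here the port is passed in directly to keep the sketch short.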

39 IGMP Snooping Solution 2: IGMP snooping Switches become IGMP-aware.
IGMP packets are intercepted by the CPU or by special hardware ASICs, and the switch examines the contents of IGMP messages to learn which ports want which traffic. Effect on a switch without Layer 3-aware hardware/ASICs: it must process all Layer 2 multicast packets, and this processing load grows with the multicast traffic load. Effect on a switch with Layer 3-aware hardware/ASICs: it maintains high-throughput performance, but the cost of the switch increases. The second multicast switching solution is IGMP snooping. As its name implies, switches become IGMP "aware" and listen in on the IGMP conversations between hosts and routers. This requires the processor in the switch to identify and intercept a copy of all IGMP packets flowing between router and hosts in both directions: IGMP membership reports and IGMP leaves. If care is not taken in how IGMP snooping is implemented, a switch may have to intercept all Layer 2 multicast packets in order to identify the IGMP packets among them, which can have a significant impact on switch performance. Proper designs require special hardware (Layer 3 ASICs) to avoid this problem, which directly affects the overall cost of the switch. Switches must therefore effectively become Layer 3 "aware" to avoid serious performance problems caused by IGMP snooping.
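The snooping behavior described above, consuming IGMP control packets, learning per-port membership, and forwarding data only to member ports, can be sketched as a toy switch model. The names are hypothetical and real switches do this in CAM/ASIC hardware rather than Python, but the logic is the same:

```python
IGMP_PROTO = 2  # IP protocol number for IGMP

class SnoopingSwitch:
    """Toy IGMP-snooping model: group IP -> set of member ports."""

    def __init__(self):
        self.members = {}

    def receive(self, port, ip_proto, group, igmp_type=None):
        """Return the set of ports a received packet is forwarded to."""
        if ip_proto == IGMP_PROTO:
            # Intercept IGMP control traffic and update membership.
            if igmp_type == "report":
                self.members.setdefault(group, set()).add(port)
            elif igmp_type == "leave":
                self.members.get(group, set()).discard(port)
            return set()  # control traffic is consumed, not flooded
        # Multicast data: forward only to ports that joined the group.
        return self.members.get(group, set()) - {port}
```

Contrast this with the flooding switch from the earlier slide, which would return every port in the VLAN for any multicast frame.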

40 IGMPv3 and IGMP Snooping
Impact of IGMPv3 on IGMP snooping: IGMPv3 reports are sent to a separate group (224.0.0.22), which reduces the load on the switch CPU, and there is no report suppression in IGMPv3. IGMP snooping should not cause a serious performance problem once IGMPv3 is implemented. In considering the impact of IGMPv3 on IGMP snooping, keep in mind that IGMPv3 reports are sent to a dedicated group (224.0.0.22), and switches listen to just this group. The switches therefore snoop only on IGMPv3 control traffic and not on other data traffic, which substantially reduces the load on the switch CPU and permits low-end switches to implement IGMPv3 snooping. The absence of report suppression in IGMPv3 enables individual member tracking. IGMPv3 also supports source-specific includes/excludes, which permits (source, group) state to be maintained by the switch; this is currently not implemented by any switches but may be necessary for full IGMPv3 functionality.

41 Multicast Distribution Trees

42 Multicast Protocol Basics
Types of multicast distribution trees: Source distribution trees; also called shortest path trees (SPTs) Shared distribution trees; rooted at a meeting point in the network A core router serves as a rendezvous point (RP) To understand the IP multicast model, it is important to have a good working knowledge of multicast distribution trees. Multicast distribution trees define the path from the source to the receivers, over which the multicast traffic flows. Unlike unicast forwarding, which uses the destination address to make its forwarding decision, multicast forwarding uses the source address to make its forwarding decision. There are two types of multicast distribution trees – source trees and shared trees. With a source tree, a separate tree is built for each source to all members of its group. Because the source tree takes a direct, or the shortest, path from source to its receivers, it is also called a Shortest Path Tree (SPT). Shared tree protocols create multicast forwarding paths that rely on a central core router that serves as a rendezvous point (RP) between multicast sources and destinations. Sources initially send their multicast packets to the RP, which in turn forwards data through a shared tree to the members of the group. A shared tree is less efficient than an SPT (paths between the source and receivers are not necessarily the shortest), but it is less demanding on routers (memory, CPU).

43 Multicast Distribution Trees
Shortest Path or Source Distribution Tree Source 1 Notation: (S, G) S = Source G = Group Source 2 A B D F A Shortest path or source distribution tree is a minimized tree with the lowest cost from the source to all leaves of the tree. We forward packets on the Shortest Path Tree according to both the Source Address that the packets originated from and the Group address “G” that the packets are addressed to. For this reason, we refer to the forwarding state on the SPT by the notation (S,G) (pronounced “S comma G”), where: “S” is the IP address of the source. “G” is the multicast group address Example 1: This graphic illustrates a Shortest Path Tree (SPT), also known as a Source-Rooted Tree, from Source 1 to Receiver 1 and Receiver 2. The shortest path between Source 1 and Receiver 1 is through Routers A and C, and the shortest path to Receiver 2 is one additional hop through Router E. C E Receiver 1 Receiver 2

44 Multicast Distribution Trees
Shortest Path or Source Distribution Tree Source 1 Notation: (S, G) S = Source G = Group Source 2 A B D F Example 2: This graphic illustrates a Shortest Path Tree (SPT), also known as a Source-Rooted Tree, from Source 2 to Receiver 1 and Receiver 2. The shortest path between Source 2 and Receiver 1 is through Routers F, D and C, and the shortest path to Receiver 2 is one additional hop through Router E. The main point is that a separate SPT is built for every source S to group G. C E Receiver 1 Receiver 2

45 Multicast Distribution Trees
Shared Distribution Tree Notation: (*, G) * = All Sources G = Group A B D (RP) F A Shared distribution tree is the path derived from a shared point through which sources and receivers must send and receive multicast data. This graphic illustrates a shared distribution tree. Router D is the root of this shared tree, which is built from router D to router C toward receiver 1 and from router D to routers C and E to receiver 2. In Protocol-Independent Multicast (PIM), the root of the shared tree is called a Rendezvous Point (RP). Packets are forwarded down the shared tree to the receivers. The default forwarding state for the shared tree is identified by the notation “(*,G)” (pronounced “star comma G”), where the asterisk (*) is a wildcard entry, meaning any source, and G is the multicast group address. Regardless of location and number of receivers, senders register with the Shared Root (Router D) and always send a single copy of multicast data through the Shared Root to the registered receivers. Regardless of location and number of sources, group members always receive forwarded multicast data from the Shared Root (Router D). (RP) PIM Rendezvous Point C E Shared Tree Receiver 1 Receiver 2

46 Multicast Distribution Trees
Shared Distribution Tree Source 1 Notation: (*, G) * = All Sources G = Group Source 2 A B D (RP) F In this example, source 1 and source 2 are sending multicast packets toward a rendezvous point through the source trees, and from the RP through the shared tree toward receiver 1 and receiver 2. (RP) PIM Rendezvous Point C E Shared Tree Source Tree Receiver 1 Receiver 2

47 Multicast Distribution Tree Identification
(S,G) entries For this particular source sending to this particular group Traffic is forwarded through the shortest path from the source (*,G) entries For any (*) source sending to this group Traffic is forwarded through a meeting point for this group The multicast forwarding entries that appear in multicast forwarding tables may be read in the following way: (S, G): (pronounced “S comma G”), For the source S sending to the group G; those entries typically reflect the SPT but may also appear on a shared tree. (*, G): (pronounced “star comma G”), For any source (*) sending to the group G; those entries reflect the shared tree but are also created (in Cisco routers) for any existing (S, G) entry.
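The lookup preference described above, a specific (S, G) entry first, then the (*, G) shared-tree entry, can be sketched as a toy Python function. This is only an illustration of the precedence rule, not how IOS implements its mroute table:

```python
def lookup(mroute_table, source, group):
    """Multicast forwarding lookup: prefer the (S, G) entry for this
    exact source; fall back to the shared-tree (*, G) entry."""
    if (source, group) in mroute_table:
        return mroute_table[(source, group)]
    return mroute_table.get(("*", group))

# Example table: outgoing interface lists keyed by (S, G) or (*, G).
# Addresses and interface names are made up for illustration.
mroute = {
    ("*", "224.1.1.1"): ["E1"],         # shared tree, toward the RP side
    ("10.1.1.1", "224.1.1.1"): ["E0"],  # SPT state for this source
}
```

A known source follows its SPT state, while traffic from any other source for the same group falls back to the shared tree; a group with no state at all yields no forwarding entry.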

48 Multicast Distribution Trees
Characteristics of Distribution Trees Source or Shortest Path trees Uses more memory but optimal paths from source to all receivers; minimizes delay Shared trees Uses less memory but sub-optimal paths from source to all receivers; may introduce extra delay This information is covered in CCNP 3 v4.0 Module 8: Source or Shortest Path Tree Characteristics Provides optimal path (shortest distance and minimized delay) from source to all receivers, but requires more memory to maintain. SPT state entries use more router memory because there is an entry for each sender and group pair, but the traffic is sent over the optimal path to each receiver, thus minimizing the delay in packet delivery. Shared Tree Characteristics Provides suboptimal path (may not be shortest distance and may introduce extra delay) from source to all receivers, but requires less memory to maintain. Shared distribution tree state entries consume less router memory, but you may get suboptimal paths from a source to receivers, thus introducing extra delay in packet delivery.

49 Multicast Routing

50 Protocols for IP Multicast Routing
The Cisco IOS software supports the following protocols to implement IP multicast routing: • Internet Group Management Protocol (IGMP) is used between hosts on a LAN and the router(s) on that LAN to track the multicast groups of which hosts are members. • Protocol-Independent Multicast (PIM) is used between routers so that they can track which multicast packets to forward to each other and to their directly connected LANs. • Distance Vector Multicast Routing Protocol (DVMRP) is used on the MBONE (the multicast backbone of the Internet). The Cisco IOS software supports PIM-to-DVMRP interaction. • Cisco Group Management Protocol (CGMP) is used on routers connected to Cisco Catalyst switches to perform tasks similar to those performed by IGMP. This slide shows where these protocols operate within the IP multicast environment. PIM is used between routers so that they can track which multicast packets to forward to each other and to their directly connected LANs.

51 Protocol-Independent Multicast (PIM)
PIM maintains the current IP multicast service mode of receiver-initiated membership. PIM is not dependent on a specific unicast routing protocol. With PIM, routers maintain forwarding tables to forward multicast datagrams. PIM can operate in dense mode or sparse mode. Dense mode protocols flood multicast traffic to all parts of the network and prune the flows where there are no receivers using a periodic flood-and-prune mechanism. Sparse mode protocols use an explicit join mechanism where distribution trees are built on demand by explicit tree join messages sent by routers that have directly connected receivers. At layer 3, the router may also multicast over the Internet using Protocol-Independent Multicast (PIM). This is a family of multicast routing protocols that can provide one-to-many and many-to-many distribution of data over the Internet. The "protocol-independent" part refers to the fact that PIM does not include its own topology discovery mechanism, but instead uses routing information supplied by other traditional routing protocols such as BGP. Routers executing a multicast routing protocol, such as Protocol-Independent Multicast (PIM), maintain forwarding tables to forward multicast datagrams. Protocol-Independent Multicast (PIM) is used between routers so that they can track which multicast packets to forward to each other and to their directly connected LANs. PIM can operate in dense mode or sparse mode. It is possible for the router to handle both sparse groups and dense groups at the same time. In dense mode, a router assumes that all other routers want to forward multicast packets for a group. If a router receives a multicast packet and has no directly connected members or PIM neighbors present, a Prune message is sent back to the source. Subsequent multicast packets are not flooded to this router on this pruned branch. PIM builds source-based multicast distribution trees. 
In sparse mode, a router assumes that other routers do not want to forward multicast packets for a group, unless there is an explicit request for the traffic. When hosts join a multicast group, the directly connected routers send PIM Join messages toward the rendezvous point (RP). The RP keeps track of multicast groups. Hosts that send multicast packets are registered with the RP by that host’s first-hop router. The RP then sends Join messages toward the source. At this point, packets are forwarded on a shared distribution tree. If the multicast traffic from a specific source is sufficient, the receiver’s first-hop router may send Join messages toward the source to build a source-based distribution tree.

52 Multicast Tree Creation
PIM Join/Prune Control Messages Used to create/remove Distribution Trees Shortest Path trees PIM control messages are sent toward the Source Shared trees PIM control messages are sent toward RP Source or Shortest Path Tree Characteristics Provides optimal path (shortest distance and minimized delay) from source to all receivers, but requires more memory to maintain Shared Tree Characteristics Provides suboptimal path (may not be shortest distance and may introduce extra delay) from source to all receivers, but requires less memory to maintain

53 Multicast Forwarding Multicast routing operation is the opposite of unicast routing. Unicast routing is concerned with where the packet is going. Multicast routing is concerned with where the packet comes from. Multicast routing uses Reverse Path Forwarding (RPF) to prevent forwarding loops. In unicast routing, when the router receives the packet, the decision about where to forward the packet depends on the destination address of the packet. The destination IP address directly indicates where to forward a packet. Unicast forwarding is hop-by-hop. The unicast routing table determines the interface and the next-hop router to use when forwarding the packet. In multicast routing, the decision about where to forward the multicast packet depends on where the packet came from. Because the destination IP address of a multicast packet is a group address, it cannot be used to directly determine where to forward the packet. Multicast forwarding is connection-oriented in that multicast traffic does not flow until "connection" messages are sent toward the source to set up the flow paths for the traffic. The collection of these paths forms multicast distribution trees that define the path and replication points in the network for multicast forwarding. The building of the multicast distribution trees through "connection" messages is a dynamic process. When network topology changes occur, the distribution trees are rebuilt around failed links. Multicast routing uses a mechanism called Reverse Path Forwarding (RPF) to prevent forwarding loops and to ensure the shortest path from the source to the receivers. Related forwarding concepts: Broadcast floods packets out all interfaces except the incoming interface from the source, initially assuming every host on the network is part of the multicast group. Prune eliminates tree branches without multicast group members, cutting off transmission to LANs without interested receivers. Selective forwarding requires its own integrated unicast routing protocol.

54 Reverse Path Forwarding (RPF)
The RPF Calculation The multicast source address is checked against the unicast routing table. This determines the interface and upstream router in the direction of the source to which PIM Joins are sent. This interface becomes the "incoming" or RPF interface. A router forwards a multicast datagram only if it is received on the RPF interface. Reverse Path Forwarding: routers forward multicast datagrams received from the incoming interface on the distribution tree leading to the source. Routers check the source IP address against the unicast routing table (the RPF check) to ensure that the multicast datagram was received on the expected incoming interface. Note that changes in the unicast topology trigger an immediate RPF recalculation. However, in old versions of IOS, the RPF check is performed every 5 seconds.
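The RPF calculation described above can be sketched as a toy longest-prefix-match lookup in Python. The prefixes and interface names are made up for illustration; a real router uses its unicast RIB or a dedicated RPF table:

```python
import ipaddress

def rpf_interface(unicast_table, source_ip):
    """Longest-prefix match on the SOURCE address (not the destination)
    to find the RPF interface toward the source."""
    best = None
    for prefix, iface in unicast_table:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(source_ip) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, iface)
    return best[1] if best else None

def rpf_check(unicast_table, source_ip, in_iface):
    """Forward a multicast packet only if it arrived on the RPF interface."""
    return rpf_interface(unicast_table, source_ip) == in_iface

routes = [("10.1.0.0/16", "E0"), ("10.1.2.0/24", "E1"), ("0.0.0.0/0", "E2")]
```

A packet from 10.1.2.3 passes the check only if it arrives on E1 (the longest match); the same packet arriving on E2 would be silently discarded, which is what breaks forwarding loops.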

55 Reverse Path Forwarding (RPF)
RPF Calculation: based on the source address. The best path to the source is found in the unicast route table, which determines where to send Joins. Joins continue toward the source to build the multicast tree, and multicast data flows down the tree. Reverse-Path Forwarding (RPF) is an algorithm used for forwarding multicast datagrams. It functions as follows: • If a router receives a datagram on the interface it uses to send unicast packets to the source, the packet has arrived on the RPF interface. • If the packet arrives on the RPF interface, the router forwards the packet out the interfaces present in the outgoing interface list of the multicast routing table entry. • If the packet does not arrive on the RPF interface, the packet is silently discarded to prevent loops. (Diagram: Joins are sent through interfaces E0, E1, and E2 toward the source; the unicast route table lists the source network with outgoing interface E0.)

56 Reverse Path Forwarding (RPF)
RPF Calculation (cont.): the process repeats for other receivers, each of which sends its Joins toward the source through its own RPF interface. (Diagram: additional Joins through interfaces E0, E1, and E2.)

57 Reverse Path Forwarding (RPF)
RPF Calculation: what if there are equal-cost paths? Both cannot be used, so a tie-breaker is applied: use the route with the highest next-hop IP address. (Diagram: the unicast route table holds two equal-cost routes to the source network, each with its own interface and next hop; the Join is sent through the route with the higher next-hop address.)
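The tie-breaker above can be sketched in a few lines of Python. The route entries below are invented for illustration; the point is simply that among equal-cost candidates the highest next-hop IP wins, which keeps the choice deterministic across routers:

```python
import ipaddress

def rpf_tiebreak(equal_cost_routes):
    """Among equal-cost RPF candidates, pick the highest next-hop IP.

    ipaddress.IPv4Address objects compare numerically, so max() yields
    a deterministic winner even when string comparison would not.
    """
    return max(
        equal_cost_routes,
        key=lambda r: ipaddress.ip_address(r["next_hop"]),
    )

candidates = [
    {"next_hop": "1.1.1.1", "iface": "E1"},
    {"next_hop": "1.1.2.1", "iface": "E2"},
]
```

With these two candidates the route via E2 is chosen, because 1.1.2.1 is numerically higher than 1.1.1.1.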

58 Multicast Distribution Tree Creation
Shared Tree Example PIM uses both source trees and RP-rooted shared trees to forward datagrams; the RPF check is performed differently for each, as follows: • If a PIM router has source-tree state (that is, an (S,G) entry is present in the multicast routing table), the router performs the RPF check against the IP address of the source of the multicast packet. • If a PIM router has shared-tree state (and no explicit source-tree state), it performs the RPF check on the RP's address (which is known when members join the group). This is illustrated in this slide.

59 PIM Dense Mode Operation

60 PIM-DM Flood and Prune Initial Flooding PIM-DM Initial Flooding
PIM dense mode (PIM-DM) initially floods multicast traffic to all parts of the network. PIM-DM initially floods traffic out of all non-RPF interfaces where there is another PIM-DM neighbor or a directly connected member of the group. In the example in the figure, multicast traffic being sent by the source is flooded throughout the entire network. As each router receives the multicast traffic through its RPF interface (the interface in the direction of the source), it forwards the multicast traffic to all of its PIM-DM neighbors. Note that this results in some traffic arriving through a non-RPF interface, as with the two routers in the center and far right of the figure. Packets arriving through the non-RPF interface are then discarded. These non-RPF flows are normal for the initial flooding of data and are corrected by the normal PIM-DM pruning mechanism.

61 PIM-DM Flood and Prune (Cont.)
In the example shown, PIM-DM prune messages are sent (denoted by dashed arrows) to stop the unwanted traffic. Prune messages are also sent on non-RPF interfaces to shut off the flow of multicast traffic, because it is arriving through an interface that is not on the shortest path to the source. The example of prune messages sent on a non-RPF interface may be seen on the routers in the middle and far right of the figure. Prune messages are sent on an RPF interface only when the router has no downstream receivers for multicast traffic from the specific source.

62 PIM-DM Flood and Prune (Cont.)
Results After Pruning The slide shows the shortest path tree (SPT) resulting from pruning the unwanted multicast traffic in the network. Although the flow of multicast traffic is no longer reaching most of the routers in the network, the (S, G) state remains for all of them and will remain until the source stops sending. In PIM-DM, all prune messages expire in 3 minutes. After that, the multicast traffic is flooded again to all of the routers. This periodic flood-and-prune behavior is normal and must be taken into account when the network is designed to use PIM-DM.

63 PIM Sparse Mode Operation

64 PIM Sparse Mode PIM-SM works with any of the underlying unicast routing protocols. PIM-SM supports both source and shared trees. PIM-SM is based on an explicit pull model. PIM-SM uses an RP. Senders and receivers “meet each other.” Senders are registered with RP by their first-hop router. Receivers are joined to the shared tree (rooted at the RP) by their local DR. PIM Sparse Mode (PIM-SM) is described in RFC 2362. As with PIM-DM, PIM-SM is also independent of underlying unicast protocols. PIM-SM uses shared distribution trees, but it may also switch to the SPT. PIM-SM is based on an explicit pull model. Therefore, traffic is forwarded only to the parts of the network that need it. PIM-SM uses an RP to coordinate forwarding of multicast traffic from a source to receivers. Regardless of location/number of receivers, senders register with the RP and send a single copy of multicast data through it to registered receivers. Regardless of location/number of sources, group members register to receive data and always receive it through the RP. Group members are joined to the shared tree by their local designated router (DR). A shared tree that is built this way is always rooted at the RP. PIM-SM is appropriate for: Wide scale deployment for both densely and sparsely populated groups in the Enterprise network. Optimal choice for all production networks regardless of size and membership density. There are many optimizations and enhancements to PIM, including the following: Bidirectional PIM mode, which is designed for many-to-many applications (that is, many hosts multicasting to each other). Source Specific Multicast (SSM) is a variant of PIM-SM that builds only source-specific SPTs and does not need an active RP for source-specific groups (address range 232/8).

65 PIM-SM Shared Tree Join
RP (*, G) State created only along the Shared Tree. PIM-SM Shared Tree Joins In this example, there is an active receiver (attached to leaf router at the bottom of the drawing) that has joined multicast group “G”. The leaf router knows the IP address of the Rendezvous Point (RP) for group G and it sends a (*,G) Join for this group towards the RP. This (*, G) Join travels hop-by-hop to the RP building a branch of the Shared Tree that extends from the RP to the last-hop router directly connected to the receiver. At this point, group “G” traffic can flow down the Shared Tree to the receiver. (*, G) Join Shared Tree Receiver

66 PIM-SM Sender Registration
RP Source (S, G) State created only along the Source Tree. PIM-SM Sender Registration As soon as an active source for group G sends a packet, the leaf router that is attached to this source is responsible for “Registering” this source with the RP and requesting the RP to build a tree back to that router. The source router encapsulates the multicast data from the source in a special PIM-SM message called the Register message and unicasts that data to the RP. When the RP receives the Register message it does two things It de-encapsulates the multicast data packet inside of the Register message and forwards it down the Shared Tree. The RP also sends an (S,G) Join back towards the source network S to create a branch of an (S, G) Shortest-Path Tree. This results in (S, G) state being created in all the routers along the SPT, including the RP. Traffic Flow Shared Tree Source Tree (S, G) Register (unicast) Receiver (S, G) Join

67 PIM-SM Sender Registration
RP Source (S, G) traffic begins arriving at the RP through the Source tree. PIM-SM Sender Registration (cont.) As soon as the SPT is built from the Source router to the RP, multicast traffic begins to flow natively from source S to the RP. Once the RP begins receiving data natively (i.e. down the SPT) from source S it sends a ‘Register Stop’ to the source’s first hop router to inform it that it can stop sending the unicast Register messages. Traffic Flow Shared Tree Source Tree RP sends a Register-Stop back to the first-hop router to stop the Register process. (S, G) Register (unicast) Receiver (S, G) Register-Stop (unicast)

68 PIM-SM Sender Registration
RP Source Source traffic flows natively along SPT to RP. PIM-SM Sender Registration (cont.) At this point, multicast traffic from the source is flowing down the SPT to the RP and from there, down the Shared Tree to the receiver. Traffic Flow Shared Tree From RP, traffic flows down the Shared Tree to Receivers. Source Tree Receiver

69 PIM-SM SPT Switchover RP Source Last-hop router joins the Source Tree.
PIM-SM Shortest-Path Tree Switchover PIM-SM has the capability for last-hop routers (i.e. routers with directly connected members) to switch to the Shortest-Path Tree and bypass the RP if the traffic rate is above a set threshold called the “SPT-Threshold”. The default value of the SPT-Threshold” in Cisco routers is zero. This means that the default behavior for PIM-SM leaf routers attached to active receivers is to immediately join the SPT to the source as soon as the first packet arrives through the (*,G) shared tree. In this slide, the last-hop router (at the bottom of the drawing) sends an (S, G) Join message toward the source to join the SPT and bypass the RP. This (S, G) Join message travels hop-by-hop to the first-hop router (i.e. the router connected directly to the source) thereby creating another branch of the SPT. This also creates (S, G) state in all the routers along this branch of the SPT. Last-hop router joins the Source Tree. Traffic Flow Shared Tree Additional (S, G) State is created along new part of the Source Tree. Source Tree (S, G) Join Receiver
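The SPT-Threshold decision described above is a simple rate comparison, sketched here in Python (function and parameter names are made up; units are kbps as in the IOS configuration):

```python
DEFAULT_SPT_THRESHOLD_KBPS = 0  # Cisco default: switch on the first packet

def should_join_spt(rate_kbps, threshold_kbps=DEFAULT_SPT_THRESHOLD_KBPS):
    """Last-hop router joins the source tree (sends an (S, G) Join
    toward the source) once the group's traffic rate exceeds the
    configured SPT threshold."""
    return rate_kbps > threshold_kbps
```

With the default threshold of zero, any traffic at all triggers the switchover, which matches the behavior described above; IOS also allows the threshold to be set to infinity to keep traffic on the shared tree permanently.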

70 PIM-SM SPT Switchover RP Source Traffic Flow
PIM-SM Shortest-Path Tree Switchover Once the branch of the Shortest-Path Tree has been built, (S, G) traffic begins flowing to the receiver through this new branch. Next, special (S, G) RP-bit Prune messages are sent up the Shared Tree to prune off the redundant (S, G) traffic that is still flowing down the Shared Tree. If this were not done, (S, G) traffic would continue flowing down the Shared Tree, resulting in duplicate (S, G) packets arriving at the receiver. Traffic Flow Traffic begins flowing down the new branch of the Source Tree. Shared Tree Source Tree Additional (S, G) State is created along the Shared Tree to prune off (S, G) traffic. (S, G)RP-bit Prune Receiver

71 PIM-SM SPT Switchover RP Source
PIM-SM Shortest-Path Tree Switchover As the (S, G)RP-bit Prune message travels up the Shared Tree, special (S, G)RP-bit Prune state is created along the Shared Tree that selectively prevents this traffic from flowing down the Shared Tree. At this point, (S, G) traffic is now flowing directly from the first-hop router to the last-hop router and from there to the receiver. (S, G) Traffic flow is now pruned off of the Shared Tree and is flowing to the Receiver through the Source Tree. Traffic Flow Shared Tree Source Tree Receiver

72 PIM-SM SPT Switchover RP Source
PIM-SM Shortest-Path Tree Switchover At this point, the RP no longer needs the flow of (S, G) traffic, since all branches of the Shared Tree (in this case there is only one) have pruned off the flow of (S, G) traffic. As a result, the RP sends (S, G) Prunes back toward the source to shut off the flow of the now-unnecessary (S, G) traffic to the RP. Note: this occurs only if the RP has received an (S, G) RP-bit Prune on all interfaces on the Shared Tree. (S, G) traffic flow is no longer needed by the RP so it Prunes the flow of (S, G) traffic. Traffic Flow Shared Tree Source Tree (S, G) Prune Receiver

73 PIM-SM SPT Switchover RP Source
PIM-SM Shortest-Path Tree Switchover As a result of the SPT-Switchover, (S, G) traffic is now only flowing from the first-hop router to the last-hop router and from there to the receiver. Notice that traffic is no longer flowing to the RP. As a result of this SPT-Switchover mechanism, it is clear that PIM-SM also supports the construction and use of SPT (S,G) trees but in a much more economical fashion than PIM-DM in terms of forwarding state. (S, G) Traffic flow is now only flowing to the Receiver through a single branch of the Source Tree. Traffic Flow Shared Tree Source Tree Receiver

74 PIM-SM Frequently Forgotten Fact
“The default behavior of PIM-SM is that routers with directly connected members will join the Shortest Path Tree as soon as they detect a new multicast source.” PIM-SM Frequently Forgotten Fact

75 PIM-SM Evaluation Effective for Sparse or Dense distribution of multicast receivers Advantages: Traffic only sent down “joined” branches Can switch to optimal source-trees for high traffic sources dynamically Unicast routing protocol-independent Basis for inter-domain multicast routing Evaluation: PIM Sparse-mode Can be used for sparse or dense distribution of multicast receivers (no necessity to flood) Advantages Traffic sent only to registered receivers that have explicitly joined the multicast group RP can be switched to optimal shortest-path-tree when high-traffic sources are forwarding to a sparsely distributed receiver group Interoperates with DVMRP Potential issues Requires RP during initial setup of distribution tree (can switch to shortest-path-tree once RP is established and determined suboptimal) RPs can become bottlenecks if not selected with great care Complex behavior is difficult to understand and therefore difficult to debug

76 Multiple RPs with Auto RP
PIM Sparse-Dense-Mode The figure depicts two multicast sources. For maximum efficiency, multiple RPs can be implemented, with each RP in an optimum location. This design is difficult to configure, manage, and troubleshoot with manual configurations of RPs. PIM sparse-dense mode supports automatic selection of RPs for each multicast group. Router A in the figure could be the RP for source 1, and router D could be the RP for source 2. PIM sparse-dense mode is the recommended solution from Cisco for IP multicast, because PIM-DM does not scale well and requires heavy router resources, and PIM-SM offers limited RP configuration options. If no RP is discovered for the multicast group and none is manually configured, PIM sparse-dense mode operates in dense mode. Therefore, automatic RP discovery should be implemented with PIM sparse-dense mode.


