Optimizing Converged Cisco Networks (ONT)


1 Optimizing Converged Cisco Networks (ONT)
Module 3: Introduction to IP QoS

2 Lesson 3.1: Introducing QoS

3 Objectives Explain why converged networks require QoS.
Identify the major quality issues with converged networks. Calculate available bandwidth given multiple flows. Describe mechanisms designed to use bandwidth more efficiently. Describe types of delay. Identify ways to reduce the impact of delay on quality. Describe packet loss and ways to prevent or reduce packet loss in the network.

4 Traditional Nonconverged Network
Before converged networks were common, network engineering focused on connectivity. The rates at which data came onto the network resulted in bursty data flows. In a traditional network, data, arriving in packets, tries to acquire and use as much bandwidth as possible at any given time. Access to bandwidth is on a first-come, first-served (FIFO) basis. The data rate available to any one user varies depending on the number of users accessing the network at that time. Protocols in nonconverged traditional networks handle the bursty nature of data networks. Data networks can survive brief outages. For example, when you retrieve e-mail, a delay of a few seconds is generally not noticeable. A delay of minutes is annoying, but not serious. Traditional networks also had requirements for applications such as data, video, and Systems Network Architecture (SNA). Since each application has different traffic characteristics and requirements, network designers deployed nonintegrated networks. These nonintegrated networks carried specific types of traffic: data network, SNA network, voice network, and video network. Traditional data traffic characteristics: Bursty data flow FIFO access Not overly time-sensitive; delays OK Brief outages are survivable

5 Converged Network Realities
A converged network carries voice, video, and data traffic. These flows use the same network facilities. Merging these different traffic streams with dramatically differing requirements can lead to a number of problems. Key among these problems is that voice and video traffic is very time-sensitive and must get priority. In a converged network, constant, small-packet voice flows compete with bursty data flows. Although the packets carrying voice traffic on a converged network are typically very small, the packets cannot tolerate delay and delay variation as they traverse the network. When delay and delay variations occur, voices break up and words become incomprehensible. Conversely, packets carrying file transfer data are typically large and the nature of IP lets the packets survive delays and drops. It is possible to retransmit part of a dropped data file, but it is not feasible to retransmit part of a voice conversation. Critical voice and video traffic must have priority over data traffic. Mechanisms must be in place to provide this priority. The key reality in converged networks is that service providers cannot accept failure. While a file transfer or an e-mail packet can wait until a down network recovers and delays are almost transparent, voice and video packets cannot wait. Converged networks must provide secure, predictable, measurable, and, sometimes, guaranteed services. Even a brief network outage on a converged network seriously disrupts business operations. Network administrators and architects achieve required performance from the network by managing delay, delay variation (jitter), bandwidth provisioning, and packet loss parameters with quality of service (QoS) techniques. Multimedia streams, such as those used in IP telephony or videoconferencing, are very sensitive to delivery delays and create unique QoS demands. If service providers rely on a best-effort network model, packets may not arrive in order, in a timely manner, or maybe not at all.
The result is unclear pictures, jerky and slow movement, and sound that is not synchronized with images. Converged network realities: Constant small-packet voice flow competes with bursty data flow. Critical traffic must have priority. Voice and video are time-sensitive. Brief outages are not acceptable.

6 Converged Network Quality Issues
Lack of bandwidth: Multiple flows compete for a limited amount of bandwidth. End-to-end delay (fixed and variable): Packets have to traverse many network devices and links; this travel adds up to the overall delay. Variation of delay (jitter): Sometimes there is a lot of other traffic, which results in varied and increased delay. Packet loss: Packets may have to be dropped when a link is congested. With inadequate network configuration, voice transmission is irregular or unintelligible. Gaps in speech where pieces of speech are interspersed with silence are particularly troublesome. Delay causes poor caller interactivity. Poor caller interactivity can cause echo and talker overlap. Echo is the effect of the signal reflecting the speaker's voice from the far-end telephone equipment back into the speaker's ear. Talker overlap is caused when one-way delay becomes greater than 250 ms. When this long delay occurs, one talker steps in on the speech of the other talker. The worst-case result of delay is a disconnected call. If there are long gaps in speech, the parties will hang up. If there are signaling problems, calls are disconnected. Such events are unacceptable in voice communications, yet are quite common for an inadequately prepared data network that is attempting to carry voice. The four major issues that face converged enterprise networks: Lack of bandwidth capacity: Large graphics files, multimedia uses, and increasing use of voice and video cause bandwidth capacity problems over data networks. End-to-end delay (both fixed and variable): Delay is the time it takes for a packet to reach the receiving endpoint after being transmitted from the sending endpoint. This period of time is called the “end-to-end delay” and consists of two components: Fixed network delay: Two types of fixed network delay are serialization and propagation delays. Serialization is the process of placing bits on the circuit.
The higher the circuit speed, the less time it takes to place the bits on the circuit. Therefore, the higher the speed of the link, the less serialization delay is incurred. Propagation delay is the time it takes frames to transit the physical media. Variable network delay: Processing delay is a type of variable delay and is the time required by a networking device to look up the route, change the header, and complete other switching tasks. In some cases, the packet must also be manipulated, as, for example, when the encapsulation type or the hop count must be changed. Each of these steps can contribute to processing delay. Variation of delay (also called jitter): Jitter is the delta, or difference, in the total end-to-end delay values of two voice packets in the voice flow. Packet loss: WAN congestion is the usual cause for packet loss and results in speech dropouts or a stutter effect if the playout side tries to compensate for the loss by replaying previously received packets.

7 Measuring Available Bandwidth
This example shows a network with four hops between a server and a client. Each hop uses different media with different bandwidths. The maximum available bandwidth is equal to the bandwidth of the slowest link. The calculation of the available bandwidth, however, is much more complex in cases where multiple flows are traversing the network. In such cases, you must calculate average bandwidth available per flow. Inadequate bandwidth can have performance impacts on network applications, especially those that are time-sensitive (such as voice) or consume a lot of bandwidth (such as videoconferencing). These performance impacts result in poor voice and video quality. In addition, interactive network services, such as terminal services and remote desktops, may also suffer from lower bandwidth, which results in slow application responses. The maximum available bandwidth is the bandwidth of the slowest link. Multiple flows are competing for the same bandwidth, resulting in much less bandwidth being available to one single application. A lack in bandwidth can have performance impacts on network applications.
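The two calculations above can be sketched in a few lines. This is a minimal illustration, not the figure from the slide; the four link speeds and the flow count are assumed values.

```python
# Sketch: the end-to-end available bandwidth equals the bandwidth of the
# slowest link; with multiple flows, each flow gets an average share.
# Link speeds (kbps) are illustrative, not taken from the slide's figure.
link_bandwidths_kbps = [10_000, 1_544, 100_000, 10_000]  # four hops

max_available = min(link_bandwidths_kbps)   # the slowest link wins
flows = 4                                   # assumed number of competing flows
per_flow_average = max_available / flows    # average bandwidth per flow

print(max_available)     # 1544
print(per_flow_average)  # 386.0
```

With four flows sharing a 1544-kbps bottleneck, each flow averages only 386 kbps, which is why multiple flows leave much less bandwidth for any single application.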

8 Increasing Available Bandwidth
Bandwidth is one of the key factors that affects QoS in a network; the more bandwidth there is, the better the QoS will be. The best way to increase bandwidth is to increase the link capacity of the network to accommodate all applications and users, allowing extra, spare bandwidth. Although this solution sounds simple, increasing bandwidth is expensive and takes time to implement. There are often technological limitations in upgrading to a higher bandwidth. The better option is to classify traffic into QoS classes and prioritize each class according to its relative importance. The basic queuing mechanism is first-in, first-out (FIFO). Other queuing mechanisms provide additional granularity to serve voice and business-critical traffic. Such traffic types should receive sufficient bandwidth to support their application requirements. Voice traffic should receive prioritized forwarding, and the least important traffic should receive the unallocated bandwidth that remains after prioritized traffic is accommodated. Cisco IOS QoS software provides a variety of mechanisms to assign bandwidth priority to specific classes of traffic: Priority queuing (PQ) or custom queuing (CQ) Modified deficit round robin (MDRR) (on Cisco Series Routers) Distributed type of service (ToS)-based and QoS group-based weighted fair queuing (WFQ) (on Cisco 7x00 Series Routers) Class-based weighted fair queuing (CBWFQ) Low-latency queuing (LLQ) A way to increase the available link bandwidth is to optimize link usage by compressing the payload of frames, which virtually increases the link bandwidth. Compression, however, also increases delay because of the complexity of compression algorithms. Using hardware compression can accelerate packet payload compression. Stacker and Predictor are two compression algorithms that are available in Cisco IOS software. Another mechanism that is used for link bandwidth efficiency is header compression.
Header compression is especially effective in networks where most packets carry small amounts of data (that is, where the payload-to-header ratio is small). Typical examples of header compression are TCP header compression and Real-Time Transport Protocol (RTP) header compression. Upgrade the link (the best but also the most expensive solution). Improve QoS with advanced queuing mechanisms to forward the important packets first. Compress the payload of Layer 2 frames (takes time). Compress IP packet headers.
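The arithmetic behind header compression's payoff can be sketched as follows. The 40-byte IP/UDP/RTP header is standard; the 20-byte G.729 payload and the 2-byte compressed header are typical values used here as assumptions, not figures from the slide.

```python
# Sketch: why header compression pays off when the payload-to-header
# ratio is small. A G.729 voice packet carries a 20-byte payload behind
# a 40-byte IP/UDP/RTP header; cRTP can shrink that header to about 2 bytes.
ip_udp_rtp_header = 20 + 8 + 12   # bytes: IP + UDP + RTP, uncompressed
payload = 20                      # bytes: typical G.729 voice sample
crtp_header = 2                   # bytes: typical compressed header size

before = ip_udp_rtp_header + payload   # 60 bytes on the wire
after = crtp_header + payload          # 22 bytes on the wire
savings = 1 - after / before
print(f"{savings:.0%}")  # 63% fewer bytes per voice packet
```

For bulk data packets with kilobyte payloads the same 38-byte saving is negligible, which is why header compression targets small-packet flows such as voice.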

9 Using Available Bandwidth Efficiently
(The slide diagram shows voice traffic, at the highest priority, handled with LLQ and RTP header compression, and data traffic handled with CBWFQ and TCP header compression.) Example: Using Available Bandwidth More Efficiently In a network with remote sites that use interactive traffic and voice for daily business, bandwidth availability is an issue. In some regions, broadband bandwidth services are difficult to obtain or, in the worst case, are not available. This situation means that available bandwidth resources must be used efficiently. Advanced queuing techniques, such as CBWFQ or LLQ, and header compression mechanisms, such as TCP and RTP header compression, are needed to use the bandwidth much more efficiently. In this example, a low-speed WAN link connects two office sites. Both sites are equipped with IP phones, PCs, and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, an appropriate strategy for efficient bandwidth use must be determined and implemented. Administrators must choose suitable queuing and compression mechanisms for the network based on the kind of traffic that is traversing the network. The example uses LLQ and RTP header compression to provide the optimal quality for voice traffic. CBWFQ and TCP header compression are effective for managing interactive data traffic. Using advanced queuing and header compression mechanisms, the available bandwidth can be used more efficiently: Voice: LLQ and RTP header compression Interactive traffic: CBWFQ and TCP header compression

10 Types of Delay Four types of delay: Processing delay: Processing delay is the time that it takes for a router (or Layer 3 switch) to take the packet from an input interface and put the packet into the output queue of the output interface. The processing delay depends on various factors: CPU speed CPU use IP switching mode Router architecture Configured features on both the input and output interfaces Queuing delay: Queuing delay is the time that a packet resides in the output queue of a router. Queuing delay depends on the number of packets that are already in the queue and packet sizes. Queuing delay also depends on the bandwidth of the interface and the queuing mechanism. Serialization delay: Serialization delay is the time that it takes to place a frame on the physical medium for transport. This delay is typically inversely proportional to the link bandwidth. Propagation delay: Propagation delay is the time that it takes for the packet to cross the link from one end to the other. This time depends on the characteristics of the physical media and the distance the signal must travel, not on the type of traffic being carried. For example, satellite links produce the longest propagation delay because of the high altitudes of communications satellites. Processing delay: The time it takes for a router to take the packet from an input interface, examine the packet, and put the packet into the output queue of the output interface. Queuing delay: The time a packet resides in the output queue of a router. Serialization delay: The time it takes to place the “bits on the wire.” Propagation delay: The time it takes for the packet to cross the link from one end to the other.
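The two fixed delay components can be computed directly. This is a sketch with assumed numbers (a 1500-byte frame, a 512-kbps link, 2000 km of fiber), not values from the course.

```python
# Sketch: the two fixed delay components described above.
frame_bits = 1500 * 8             # a 1500-byte frame
link_bps = 512_000                # a 512-kbps link (illustrative)
serialization_ms = frame_bits / link_bps * 1000   # time to clock the bits out

distance_m = 2_000_000            # 2000 km of fiber (illustrative)
propagation_speed = 2e8           # roughly 2/3 the speed of light in fiber, m/s
propagation_ms = distance_m / propagation_speed * 1000

print(round(serialization_ms, 2))  # 23.44 ms
print(round(propagation_ms, 2))    # 10.0 ms
```

Doubling the link speed halves the serialization delay, but propagation delay stays fixed: only a shorter path (or a different medium) changes it.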

11 The Impact of Delay and Jitter on Quality
End-to-end delay and jitter have a severe quality impact on the network: End-to-end delay is the sum of all types of delays. Each hop in the network has its own set of variable processing and queuing delays, which can result in jitter. Internet Control Message Protocol (ICMP) echo (ping) is one way to measure the round-trip time of IP packets in a network. End-to-end delay: The sum of all propagation, processing, serialization, and queuing delays in the path Jitter: The variation in the delay. In best-effort networks, propagation and serialization delays are fixed, while processing and queuing delays are unpredictable.
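Jitter, as the variation in delay between packets of one flow, can be sketched numerically. The per-packet delays below are made-up values for illustration.

```python
# Sketch: jitter measured as the difference in end-to-end delay between
# consecutive packets of a single flow (delays in ms are illustrative).
packet_delays_ms = [100, 112, 95, 130, 101]

jitter_ms = [abs(b - a) for a, b in zip(packet_delays_ms, packet_delays_ms[1:])]
print(jitter_ms)       # [12, 17, 35, 29]
print(max(jitter_ms))  # 35 -- worst-case jitter observed on this flow
```

A voice playout buffer must be sized to absorb this worst-case variation, which is why reducing jitter matters as much as reducing average delay.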

12 Ways to Reduce Delay When considering solutions to the delay problem, there are two things to note: Processing and queuing delays are related to devices and are bound to the behavior of the operating system. Propagation and serialization delays are related to the media. There are many ways to reduce the delay at a router. Assuming that the router has enough power to make forwarding decisions rapidly, these factors influence most queuing and serialization delays: Average length of the queue Average length of packets in the queue Link bandwidth Network administrators can accelerate the packet dispatching for delay-sensitive flows: Increase link capacity: Sufficient bandwidth causes queues to shrink so that packets do not wait long before transmittal. Increasing bandwidth reduces serialization time. This approach can be unrealistic because of the costs that are associated with the upgrade. Prioritize delay-sensitive packets: This approach can be more cost-effective than increasing link capacity. WFQ, CBWFQ, and LLQ can each serve certain queues first (this is a pre-emptive way of servicing queues). Reprioritize packets: In some cases, important packets need to be reprioritized when they are entering or exiting a device. For example, when packets leave a private network to transit an Internet service provider (ISP) network, the ISP may require that the packets be reprioritized. Compress payload: Payload compression reduces the size of packets, which virtually increases link bandwidth. Compressed packets are smaller and take less time to transmit. Compression uses complex algorithms that add delay. If you are using payload compression to reduce delay, make sure that the time that is needed to compress the payload does not negate the benefits of having less data to transfer over the link. Use header compression: Header compression is not as CPU-intensive as payload compression. Header compression reduces delay when used with other mechanisms. 
Header compression is especially useful for voice packets that have a poor payload-to-header ratio (a relatively large header in comparison to the payload), which is improved by reducing the header of the packet (RTP header compression). By minimizing delay, network administrators can also reduce jitter (delay is more predictable than jitter and easier to reduce). Upgrade the link (the best solution but also the most expensive). Forward the important packets first. Enable reprioritization of important packets. Compress the payload of Layer 2 frames (takes time). Compress IP packet headers.

13 Reducing Delay in a Network
In this example, an ISP providing QoS connects the offices of the customer to each other. A low-speed link (512 kbps) connects the branch office while a higher-speed link (1024 kbps) connects the main office. The customer uses both IP phones and TCP/IP-based applications to conduct daily business. Because the branch office only has a bandwidth of 512 kbps, the customer needs an appropriate QoS strategy to provide the highest possible quality for voice and data traffic. In this example, the customer needs to communicate with HTTP, FTP, e-mail, and voice services in the main office. Because the available bandwidth at the customer site is only 512 kbps, most traffic, but especially voice traffic, would suffer from end-to-end delays. In this example, the customer performs TCP and RTP header compression, LLQ, and prioritization of the various types of traffic. These mechanisms give voice traffic a higher priority than HTTP or e-mail traffic. In addition to these measures, the customer has chosen an ISP that supports QoS in the backbone. The ISP performs reprioritization for customer traffic according to the QoS policy for the customer so that the traffic streams arrive on time at the main office of the customer. This design guarantees that voice traffic has high priority and a guaranteed bandwidth of 128 kbps, FTP and e-mail traffic receive medium priority and a bandwidth of 256 kbps, and HTTP traffic receives low priority and a bandwidth of 64 kbps. Signaling and other management traffic uses the remaining 64 kbps. Customer routers perform: TCP/RTP header compression LLQ Prioritization ISP routers perform: Reprioritization according to the QoS policy
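The bandwidth carve-up in this design can be checked with simple arithmetic; the class names below are just labels for the allocations stated in the example.

```python
# Sketch: the 512-kbps branch link divided per the QoS policy above.
allocation_kbps = {
    "voice (high priority)": 128,
    "FTP/e-mail (medium priority)": 256,
    "HTTP (low priority)": 64,
    "signaling/management": 64,
}

total = sum(allocation_kbps.values())
print(total)  # 512 -- the whole branch link is accounted for
```

Making the classes sum exactly to the link rate is deliberate: any traffic admitted beyond a class's guarantee must be queued, compressed, or dropped rather than silently overrunning another class's share.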

14 The Impacts of Packet Loss
After delay, the next most serious concern for networks is packet loss. Usually, packet loss occurs when routers run out of buffer space for a particular interface (output queue). This graphic shows examples of the results of packet loss in a converged network. Telephone call: “I cannot understand you. Your voice is breaking up.” Teleconferencing: “The picture is very jerky. Voice is not synchronized.” Publishing company: “This file is corrupted.” Call center: “Please hold while my screen refreshes.”

15 Types of Packet Drops This graphic illustrates a full interface output queue, which causes newly arriving packets to be dropped. The term that is used for such drops is “output drop” or “tail drop” (packets are dropped at the tail of the queue). Routers might also drop packets for other less common reasons: Input queue drop: The main CPU is busy and cannot process packets (the input queue is full). Ignore: The router runs out of buffer space. Overrun: The CPU is busy and cannot assign a free buffer to the new packet. Frame errors: The hardware detects an error in a frame; for example, cyclic redundancy check (CRC) errors, runts, and giants. Tail drops occur when the output queue is full. Tail drops are common and happen when a link is congested. Other types of drops, usually resulting from router congestion, include input drop, ignore, overrun, and frame errors. These errors can often be solved with hardware upgrades.
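Tail drop is simple enough to sketch directly: once the output queue hits its depth limit, newly arriving packets are discarded at the tail. The queue depth and packet names are illustrative.

```python
# Sketch: tail drop on a full FIFO output queue.
from collections import deque

QUEUE_DEPTH = 4          # illustrative queue limit, in packets
queue = deque()
dropped = []

for packet in ["p1", "p2", "p3", "p4", "p5", "p6"]:
    if len(queue) < QUEUE_DEPTH:
        queue.append(packet)    # room left: enqueue at the tail
    else:
        dropped.append(packet)  # queue full: tail drop

print(list(queue))  # ['p1', 'p2', 'p3', 'p4']
print(dropped)      # ['p5', 'p6']
```

Note that the drop decision ignores packet importance entirely: a voice packet arriving as "p5" is dropped just as readily as a bulk-data packet, which motivates the smarter drop policies discussed later.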

16 Ways to Prevent Packet Loss
Packet loss is usually the result of congestion on an interface. Most applications that use TCP experience slowdown because TCP automatically adjusts to network congestion. Dropped TCP segments cause TCP sessions to reduce their window sizes. Some applications do not use TCP and cannot handle drops (fragile flows). These approaches prevent drops in sensitive applications: Increase link capacity to ease or prevent congestion. Guarantee enough bandwidth and increase buffer space to accommodate bursts of traffic from fragile flows. There are several mechanisms available in Cisco IOS QoS software that can guarantee bandwidth and provide prioritized forwarding to drop-sensitive applications. Prevent congestion by dropping lower-priority packets before congestion occurs. Cisco IOS QoS provides queuing mechanisms that start dropping lower-priority packets before congestion occurs. Upgrade the link (the best solution but also the most expensive). Guarantee enough bandwidth for sensitive packets. Prevent congestion by randomly dropping less important packets before congestion occurs.

17 Traffic Policing and Traffic Shaping
(The slide graphic plots traffic rate against time: policing clips bursts at the configured rate, while shaping buffers them into a smoothed output rate.) Cisco IOS QoS software provides the following mechanisms to prevent congestion: Traffic policing: Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. Traffic shaping: In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate. Shaping implies the existence of a queue and of sufficient memory to buffer delayed packets, while policing does not. Queuing is an outbound concept; packets going out an interface get queued and can be shaped. Only policing can be applied to inbound traffic on an interface. Ensure that you have sufficient memory when enabling shaping. In addition, shaping requires a scheduling function for later transmission of any delayed packets. This scheduling function allows you to organize the shaping queue into different queues. Examples of scheduling functions are CBWFQ and LLQ.
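The contrast between the two mechanisms can be sketched as a per-time-slot simulation. The arrival pattern and contracted rate are assumed values; this is a simplified model, not the token-bucket algorithm Cisco IOS actually implements.

```python
# Sketch: why policing yields a saw-tooth output and shaping a smoothed one.
# Each slot permits `rate` units of traffic; the policer drops the excess,
# while the shaper buffers it and sends it in later slots.
arrivals = [8, 0, 8, 0, 8, 0]   # bursty offered load per time slot
rate = 4                        # contracted rate per slot

policed = [min(a, rate) for a in arrivals]   # excess is simply dropped

shaped, backlog = [], 0
for a in arrivals:
    backlog += a                # excess is buffered, not dropped
    sent = min(backlog, rate)
    shaped.append(sent)
    backlog -= sent

print(policed)  # [4, 0, 4, 0, 4, 0] -- bursts propagate as a saw-tooth
print(shaped)   # [4, 4, 4, 4, 4, 4] -- output rate is smoothed
```

The shaper's `backlog` variable is exactly the queue the text describes: it is why shaping needs buffer memory and a scheduling function, and why it adds delay that policing does not.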

18 Reducing Packet Loss in a Network
Example: Packet Loss Solution This graphic shows a customer connected to the network via the WAN who is suffering from packet loss that is caused by interface congestion. The packet loss results in poor voice quality and slow data traffic. Upgrading the WAN link is not an option to increase quality and speed. Other options must be considered to solve the problem and restore network quality. Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network and internetwork bottlenecks before congestion becomes a problem. These techniques provide preferential treatment for premium (priority) traffic when there is congestion while concurrently maximizing network throughput and capacity use and minimizing packet loss and delay. For example, Cisco IOS QoS congestion-avoidance features include Weighted Random Early Detection (WRED) and low latency queuing (LLQ) as possible solutions. The WRED algorithm allows for congestion avoidance on network interfaces by providing buffer management and allowing TCP traffic to decrease, or throttle back, before buffers are exhausted. The use of WRED helps avoid tail drops and global synchronization issues, maximizing network use and TCP-based application performance. There is no such congestion avoidance for User Datagram Protocol (UDP)-based traffic, such as voice traffic. In case of UDP-based traffic, methods such as queuing and compression techniques help to reduce and even prevent UDP packet loss. As this example indicates, congestion avoidance combined with queuing can be a very powerful tool for avoiding packet drops. Problem: Interface congestion causes TCP and voice packet drops, resulting in slowing FTP traffic and jerky speech quality. Conclusion: Congestion avoidance and queuing can help. Solution: Use WRED and LLQ.
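The WRED idea of throttling TCP before buffers are exhausted can be sketched as a drop-probability curve. The thresholds and maximum drop probability below are illustrative configuration values, not defaults from any particular IOS release.

```python
# Sketch of the WRED drop decision: below the minimum threshold nothing
# is dropped; between the thresholds the drop probability ramps up
# linearly; at or above the maximum threshold WRED behaves like tail drop.
MIN_TH, MAX_TH, MAX_P = 20, 40, 0.1   # packets, packets, max drop probability

def wred_drop_probability(avg_queue_depth: float) -> float:
    if avg_queue_depth < MIN_TH:
        return 0.0          # no congestion pressure yet
    if avg_queue_depth >= MAX_TH:
        return 1.0          # queue effectively full: tail-drop behavior
    # linear ramp between the two thresholds
    return MAX_P * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)

print(wred_drop_probability(10))  # 0.0
print(wred_drop_probability(30))  # 0.05
print(wred_drop_probability(45))  # 1.0
```

Because drops begin early and hit random flows at different times, TCP senders back off gradually instead of all at once, which is how WRED avoids the global synchronization that tail drop causes.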

19 Summary Converged networks carry different types of traffic over a shared infrastructure. This creates the need to differentiate traffic and give priority to time-sensitive traffic. Various mechanisms exist that help to maximize the use of the available bandwidth, including queuing techniques and compression mechanisms. All networks experience delay. Delay can affect time-sensitive traffic such as voice and video. Without proper provisioning and management, networks can experience packet loss. Packet loss is especially important with voice and video, as lost packets cannot be re-sent.

20 Resources Quality of Service Networking QoS Congestion Avoidance QoS Congestion Management (queuing)

21 Optimizing Converged Cisco Networks (ONT)
Module 3: Introduction to IP QoS

22 Lesson 3.2: Implementing Cisco IOS QoS

23 Objectives Describe the need for QoS as it relates to various types of network traffic. Identify QoS mechanisms. Describe the steps used to implement QoS.

24 What Is Quality of Service? Two Perspectives
The user perspective Users perceive that their applications are performing properly Voice, video, and data The network manager perspective Need to manage bandwidth allocations to deliver the desired application performance Control delay, jitter, and packet loss QoS is a generic term that refers to algorithms that provide different levels of quality to different types of network traffic. The most common implementation uses some sort of advanced queuing algorithm. Quality of Service (QoS) has become mission-critical as organizations move to reduce cost of operation, manage expensive WAN bandwidth, or deploy applications such as Voice over IP (VoIP) to the desktop. The goal of QoS is to provide better and more predictable network service by providing dedicated bandwidth, controlled jitter and latency, and improved loss characteristics. QoS achieves these goals by providing tools for managing network congestion, shaping network traffic, using expensive wide-area links more efficiently, and setting traffic policies across the network. QoS offers intelligent network services that, when correctly applied, help to provide consistent and predictable performance. Simple networks process traffic with a FIFO queue. Network administrators need QoS when some packets need different treatments than others. For example, e-mail packets can be delayed for several minutes with no one noticing, while VoIP packets cannot be delayed for more than a tenth of a second before users notice the delay. QoS is the ability of the network to provide better or “special” services to selected users and applications to the detriment of other users and applications. In any bandwidth-limited network, QoS reduces jitter, delay, and packet loss for time-sensitive and mission-critical applications.

25 Different Types of Traffic Have Different Needs
(The slide table lists application examples, such as interactive voice and video, streaming video, transactional/interactive, and bulk data file transfer, and marks how sensitive each is to the QoS metrics of delay, jitter, and packet loss; interactive voice and video are sensitive to all three.) Real-time applications, such as interactive voice and videoconferencing, are especially sensitive to QoS. Causes of degraded performance: congestion losses and variable queuing delays. The QoS challenge: manage bandwidth allocations to deliver the desired application performance, and control delay, jitter, and packet loss. Delay: the time it takes a packet to reach the receiving endpoint. Jitter: variability of delay. Packet loss: packets not forwarded (dropped).

26 Cisco IOS QoS Tools
Congestion management (queuing): PQ, CQ, WFQ, and CBWFQ. Queue management: WRED. Link efficiency: link fragmentation and interleaving, and RTP header compression (cRTP). Traffic shaping and traffic policing. One way network elements handle an overflow of arriving traffic is to use a queuing algorithm to sort the traffic and then determine some method of prioritizing it onto an output link. Cisco IOS software includes the following queuing tools: first-in, first-out (FIFO) queuing; priority queuing (PQ); custom queuing (CQ); flow-based weighted fair queuing (WFQ); and class-based weighted fair queuing (CBWFQ). Each queuing algorithm was designed to solve a specific network traffic problem and has a particular effect on network performance, as described in the following sections. Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs. In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate. Traffic shaping: Traffic shaping monitors traffic from each source for bandwidth use. When traffic from a specific source is too high, packets from that source are then queued (delayed). Traffic policing: Traffic policing is also called “rate limiting.” In contrast to traffic shaping, the packets are not simply queued; they have their IP priority levels altered or they are dropped.

27 Priority Queuing FIFO: Basic Store-and-Forward Capability In its simplest form, FIFO queuing involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO is the default queuing algorithm in some instances, thus requiring no configuration, but it has several shortcomings. Most important, FIFO queuing makes no decision about packet priority; the order of arrival determines bandwidth, promptness, and buffer allocation. Nor does it provide protection against ill-behaved applications (sources). Bursty sources can cause long delays in delivering time-sensitive application traffic, and can potentially delay network control and signaling messages. FIFO queuing was a necessary first step in controlling network traffic, but today's intelligent networks need more sophisticated algorithms. In addition, a full queue causes tail drops. This is undesirable because the dropped packet could be a high-priority packet. The router cannot prevent this packet from being dropped because there is no room in the queue for it (in addition to the fact that FIFO cannot tell a high-priority packet from a low-priority packet). Cisco IOS software implements queuing algorithms that avoid the shortcomings of FIFO queuing. PQ: Prioritizing Traffic PQ ensures that important traffic gets the fastest handling at each point where it is used. It was designed to give strict priority to important traffic. Priority queuing can flexibly prioritize according to network protocol (for example, IP, IPX, or AppleTalk), incoming interface, packet size, source/destination address, and so on. In PQ, each packet is placed in one of four queues (high, medium, normal, or low) based on an assigned priority. Packets that are not classified by this priority list mechanism fall into the normal queue. During transmission, the algorithm gives higher-priority queues absolute preferential treatment over low-priority queues.
PQ puts data into four levels of queues: high, medium, normal, and low.
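The strict-priority service order can be sketched on a static snapshot of the four queues. Real PQ re-checks the higher queues after every packet sent; with no new arrivals, that collapses to draining the queues in priority order, as below. Queue contents are illustrative.

```python
# Sketch: strict priority queuing over a static snapshot -- the high
# queue is always drained before medium, medium before normal, and so on.
queues = {
    "high":   ["voice1", "voice2"],
    "medium": ["video1"],
    "normal": ["data1", "data2"],
    "low":    ["bulk1"],
}

transmit_order = []
for level in ["high", "medium", "normal", "low"]:
    while queues[level]:
        transmit_order.append(queues[level].pop(0))

print(transmit_order)
# ['voice1', 'voice2', 'video1', 'data1', 'data2', 'bulk1']
```

The absolute preference is also PQ's known weakness: if the high queue never empties, lower queues can starve, which motivates the bandwidth-sharing schemes that follow.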

28 Custom Queuing CQ: Guaranteeing Bandwidth CQ allows various applications or organizations to share the network among applications with specific minimum bandwidth or latency requirements. In these environments, bandwidth must be shared proportionally between applications and users. You can use the Cisco CQ feature to provide guaranteed bandwidth at a potential congestion point, ensuring the specified traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic. As shown here, CQ handles traffic by assigning a specified amount of queue space to each class of packet and then servicing up to 17 queues in a round-robin fashion. The queuing algorithm places the messages in one of 17 queues; queue 0 holds system messages such as keepalives and signaling and is emptied first, before the user-configurable queues. The router services queues 1 through 16 in round-robin order, dequeuing a configured byte count from each queue in each cycle. This feature ensures that no application (or specified group of applications) achieves more than a predetermined proportion of overall capacity when the line is under stress. Like PQ, CQ is statically configured and does not automatically adapt to changing network conditions. CQ handles traffic by assigning a specified amount of queue space to each class of packet and then servicing up to 17 queues in a round-robin fashion.
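In Cisco IOS, legacy CQ is configured with a queue list; the per-queue byte counts set each class's proportion of the bandwidth during congestion. A hedged sketch (queue numbers, byte counts, and interface are illustrative):

```
! Web traffic (TCP port 80) to queue 1; everything else to queue 2.
queue-list 1 protocol ip 1 tcp 80
queue-list 1 default 2
!
! Dequeue up to 1500 bytes from queue 1 and 3000 bytes from queue 2
! per round-robin cycle, giving queue 2 roughly a 2:1 share under load.
queue-list 1 queue 1 byte-count 1500
queue-list 1 queue 2 byte-count 3000
!
interface Serial0/0
 custom-queue-list 1
```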

29 Weighted Fair Queuing Flow-Based WFQ: Creating Fairness Among Flows For situations in which it is desirable to provide consistent response time to heavy and light network users alike without adding excessive bandwidth, the solution is flow-based WFQ (commonly referred to as just WFQ). It is a flow-based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. For example, if queue 1 has 100-byte packets and queue 2 has 50-byte packets, the WFQ algorithm will take two packets from queue 2 for every one packet from queue 1. This makes service fair for each queue: 100 bytes each time the queue is serviced. WFQ ensures that queues do not starve for bandwidth and that traffic gets predictable service. Low-volume traffic streams, which comprise the majority of traffic, receive increased service, transmitting the same number of bytes as high-volume streams. This behavior results in what appears to be preferential treatment for low-volume traffic, when in actuality it is creating fairness. Class-Based WFQ: Ensuring Network Bandwidth CBWFQ is one of Cisco's newest congestion-management tools, providing greater flexibility. It provides a minimum amount of bandwidth to a class, as opposed to providing a maximum amount of bandwidth as traffic shaping does. CBWFQ allows a network administrator to create minimum guaranteed bandwidth classes. Instead of providing a queue for each individual flow, the administrator defines a class that consists of one or more flows, each class with a guaranteed minimum amount of bandwidth. CBWFQ prevents multiple low-priority flows from swamping out a single high-priority flow. To contrast the behavior of CBWFQ with WFQ: WFQ will give a video stream that needs half the bandwidth of a T1 adequate service if there are only two flows. But as more flows are added, the video stream gets less of the bandwidth because WFQ's mechanism creates fairness.
If there are 10 flows, the video stream will get only 1/10th of the bandwidth, which is not enough. CBWFQ provides the mechanism needed to guarantee the half of the bandwidth that video needs. The network administrator defines a class, places the video stream in the class, and tells the router to provide 768 kbps (half of a T1) of service for the class. Video therefore gets the bandwidth that it needs. The remaining flows are placed in a default class. The default class uses flow-based WFQ to fairly allocate the remainder of the bandwidth (half of the T1, in this example). WFQ makes the transfer rates and interarrival periods of active high-volume conversations much more predictable.
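The video example above might be expressed with CBWFQ using Cisco's Modular QoS CLI; the access list, names, and interface here are assumptions, and `bandwidth 768` guarantees the class 768 kbps (half of a T1) during congestion:

```
! Hypothetical ACL matching the video stream's UDP ports.
access-list 101 permit udp any any range 5000 5003
!
class-map match-all VIDEO
 match access-group 101
!
policy-map GUARANTEE-VIDEO
 ! Guarantee the video class a minimum of 768 kbps.
 class VIDEO
  bandwidth 768
 ! All other flows share the remaining bandwidth via flow-based WFQ.
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output GUARANTEE-VIDEO
```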

30 Weighted Random Early Detection
Congestion avoidance is a form of queue management. Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network bottlenecks, as opposed to congestion-management techniques that operate to control congestion after it occurs. The primary Cisco IOS congestion avoidance tool is WRED. The random early detection (RED) algorithms avoid congestion in internetworks before it becomes a problem. RED works by monitoring traffic load at points in the network and stochastically discarding packets if the congestion begins to increase. The result of the drop is that the source detects the dropped traffic and slows its transmission. RED is primarily designed to work with TCP in IP internetwork environments. WRED combines the capabilities of the RED algorithm with IP precedence. This combination provides for preferential traffic handling for higher-priority packets. It can selectively discard lower-priority traffic when the interface starts to get congested and can provide differentiated performance characteristics for different classes of service. A full queue causes tail drops. Tail drops are dropped packets that could not fit into the queue because the queue was full. This is undesirable because the dropped packet may have been a high-priority packet and the router did not have a chance to queue it. If the queue is not full, the router can look at the priority of all arriving packets and drop the lower-priority packets, allowing high-priority packets into the queue. By managing the depth of the queue (the number of packets in the queue) by dropping specified packets, the router does its best to make sure that the queue does not fill and that tail drops do not happen. WRED provides a method that stochastically discards packets if congestion begins to increase.
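On Cisco routers, WRED is enabled with the `random-detect` command, either directly on an interface or within a CBWFQ class. A sketch with illustrative threshold values (the tuning line is optional; platform defaults vary):

```
interface Serial0/0
 ! Enable precedence-based WRED: as the mean queue depth grows,
 ! low-precedence packets are randomly dropped before high-precedence ones.
 random-detect
 ! Optionally tune precedence 0: begin random drops at a mean queue depth
 ! of 20 packets, drop all arrivals at 40, maximum drop probability 1/10.
 random-detect precedence 0 20 40 10
```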

31 A communications network forms the backbone of any successful organization. These networks transport a multitude of applications and data, including high-quality video and delay-sensitive data such as real-time voice. The bandwidth-intensive applications stretch network capabilities and resources, but also complement, add value, and enhance every business process. Networks must provide secure, predictable, measurable, and sometimes guaranteed services. Achieving the required Quality of Service (QoS) by managing the delay, delay variation (jitter), bandwidth, and packet loss parameters on a network becomes the secret to a successful end-to-end business solution. Thus, QoS is the set of techniques to manage network resources.

32 Implementing QoS Step 1: Identify types of traffic and their requirements. Step 2: Divide traffic into classes. Step 3: Define QoS policies for each class. Cisco IOS QoS features enable network administrators to control and predictably service a variety of networked applications and traffic types, allowing network managers to take advantage of a new generation of media-rich and mission-critical applications. There are three basic steps involved in implementing QoS on a network: Identify types of traffic and their requirements: Study the network to determine the type of traffic that is running on the network and then determine the QoS requirements needed for the different types of traffic. Define traffic classes: This activity groups the traffic with similar QoS requirements into classes. For example, three classes of traffic might be defined as voice, mission-critical, and best effort. Define QoS policies: QoS policies meet QoS requirements for each traffic class.

33 Step 1: Identify Types of Traffic and Their Requirements
Network audit: Identify traffic on the network. Business audit: Determine how important each type of traffic is for business. Service levels required: Determine required response time. The first step in implementing QoS is to identify the traffic on the network and then determine the QoS requirements and the importance of the various traffic types. This step provides some high-level guidelines for implementing QoS in networks that support multiple applications, including delay-sensitive and bandwidth-intensive applications. These applications may enhance business processes, but stretch network resources. QoS can provide secure, predictable, measurable, and guaranteed services to these applications by managing delay, delay variation (jitter), bandwidth, and packet loss in a network. Determine the QoS problems of users. Measure the traffic on the network during congested periods. Conduct a CPU utilization assessment on each of the network devices during busy periods to determine where problems might be occurring. Determine the business model and goals and obtain a list of business requirements. This activity helps define the number of classes that are needed and allows you to determine the business requirements for each traffic class. Define the service levels required by different traffic classes in terms of response time and availability. A question to consider when defining service levels: what is the impact on the business if the network delays a transaction by two or three seconds? A service level assignment includes the priority and the treatment a packet will receive. For example, you would assign voice applications a high service level (high priority, LLQ, and RTP header compression). You would assign low-priority data a lower service level (lower priority, WFQ, TCP header compression).

34 Step 2: Define Traffic Classes
Scavenger Class Less than Best Effort After identifying and measuring network traffic, use business requirements to perform the second step: define the traffic classes. Because of its stringent QoS requirements, voice traffic is usually in a class by itself. Cisco has developed specific QoS mechanisms, such as LLQ, to ensure that voice always receives priority treatment over all other traffic. After the applications with the most critical requirements have been defined, the remaining traffic classes are defined using business requirements. A typical enterprise might define five traffic classes: Voice: Absolute priority for VoIP traffic. Mission-critical: Small set of locally defined critical business applications. For example, a mission-critical application might be an order-entry database that needs to run 24 hours a day. Transactional: Database access, transaction services, interactive traffic, and preferred data services. Depending on the importance of the database application to the enterprise, you might give the database a large amount of bandwidth and a high priority. For example, your payroll department performs critical or sensitive work. Their importance to the organization determines the priority and amount of bandwidth you would give their network traffic. Best effort: Popular applications such as e-mail and FTP could each constitute a class. Your QoS policy might guarantee employees using these applications a smaller amount of bandwidth and a lower priority than other applications. Incoming HTTP queries to your company's external website might be a class that gets a moderate amount of bandwidth and runs at low priority. Scavenger: The unspecified traffic is treated as less than best effort. Scavenger applications, such as BitTorrent and other peer-to-peer applications, are served by this class.

35 Step 3: Define QoS Policy
A QoS policy is a network-wide definition of the specific levels of QoS that are assigned to different classes of network traffic. In the third step, define a QoS policy for each traffic class. Defining a QoS policy involves one or more of these activities: Setting a minimum bandwidth guarantee Setting a maximum bandwidth limit Assigning priorities to each class Using QoS technologies, such as advanced queuing, to manage congestion Using the previously defined traffic classes, QoS policies can be mandated based on the following priorities (with Priority 5 being the highest and Priority 1 being the lowest): Priority 5—Voice: Minimum bandwidth of 1 Mbps. Use LLQ to give voice priority always. Priority 4—Mission-critical: Minimum bandwidth of 1 Mbps. Use CBWFQ to prioritize critical-class traffic flows. Priority 3—Transactional: Minimum bandwidth of 1 Mbps. Use CBWFQ to prioritize transactional traffic flows. Priority 2—Best-effort: Maximum bandwidth of 500 kbps. Use CBWFQ to prioritize best-effort traffic flows that are below mission-critical and voice. Priority 1—Scavenger (less-than-best-effort): Maximum bandwidth of 100 kbps. Use WRED to drop these packets whenever the network has a tendency toward congestion.
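The five-class policy above might be sketched with the Modular QoS CLI as follows; the DSCP values, class names, interface, and the policer used to cap best-effort traffic are assumptions made for illustration, not mandated by the lesson:

```
class-map match-all VOICE
 match ip dscp ef
class-map match-all MISSION-CRITICAL
 match ip dscp af31
class-map match-all TRANSACTIONAL
 match ip dscp af21
class-map match-all SCAVENGER
 match ip dscp cs1
!
policy-map WAN-EDGE
 ! Priority 5: LLQ gives voice strict priority, up to 1 Mbps.
 class VOICE
  priority 1000
 ! Priorities 4 and 3: CBWFQ minimum-bandwidth guarantees of 1 Mbps.
 class MISSION-CRITICAL
  bandwidth 1000
 class TRANSACTIONAL
  bandwidth 1000
 ! Priority 1: scavenger gets a small share and WRED drops it early
 ! as congestion builds (the slide's 100-kbps cap is approximated here).
 class SCAVENGER
  bandwidth 100
  random-detect
 ! Priority 2: cap best-effort traffic at 500 kbps.
 class class-default
  police 500000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy output WAN-EDGE
```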

36 Quality of Service Operations How Do QoS Tools Work?
Post-Queuing Operations Classification and Marking Queuing and (Selective) Dropping In this illustration, the packages on the conveyer belt represent data packets moving through the network. As packets move through each phase, they are identified and prioritized, then managed and sorted, and finally processed and sent. As the packets are identified, they are sorted into separate queues. Notice how some packets receive priority (more of these packets are processed over time), and some packets are selectively dropped. In the next lessons, we will examine in detail the mechanisms used to support these processes.

37 Self Check What types of applications are particularly sensitive to QoS issues? What is WFQ? How is it different from FIFO? What are the 3 basic steps involved in implementing QoS? What is Scavenger Class? Real-time applications are especially sensitive to QoS: interactive voice and videoconferencing. Weighted Fair Queuing (WFQ): FIFO systems store all packets in one queue. WFQ stores each type of packet in a separate queue and assigns each queue a different priority level. There are three basic steps involved in implementing QoS on a network: Identify types of traffic and their requirements: Study the network to determine the type of traffic that is running on the network and then determine the QoS requirements needed for the different types of traffic. Define traffic classes: This activity groups the traffic with similar QoS requirements into classes. For example, three classes of traffic might be defined as voice, mission-critical, and best effort. Define QoS policies: QoS policies meet QoS requirements for each traffic class. Scavenger: The unspecified traffic is treated as less than best effort.

38 Summary QoS is important to both the end user and the network administrator. End users experience lack of QoS as poor voice quality, dropped calls or outages. Network traffic differs in its ability to handle delay, jitter and packet loss. Traffic sensitive to these issues requires priority treatment. QoS measures can provide priority to sensitive traffic, while still providing services to more resilient traffic. Implementing QoS involves 3 basic steps: identify the types of traffic on your network, divide the traffic into classes, and define a QoS policy for each traffic class.

39 Resources QoS Best Practices At-A-Glance QoS Tools At-A-Glance

40 Optimizing Converged Cisco Networks (ONT)
Module 3: Introduction to IP QoS

41 Lesson 3.3: Selecting an Appropriate QoS Policy Model

42 Objectives Describe 3 QoS models: best effort, IntServ and DiffServ.
Identify the strengths and weaknesses of each of the 3 QoS models. Describe the purpose and functionality of RSVP.

43 Three QoS Models
Best effort: No QoS is applied to packets. If it is not important when or how packets arrive, the best-effort model is appropriate.
Integrated Services (IntServ): Applications signal to the network that they require certain QoS parameters.
Differentiated Services (DiffServ): The network recognizes classes that require QoS.
IP without QoS provides best-effort service: All packets are treated equally. Bandwidth is unpredictable. Delay and jitter are unpredictable. Cisco IOS Software supports two fundamental Quality of Service architectures: Differentiated Services (DiffServ) and Integrated Services (IntServ). In the DiffServ model, a packet's "class" can be marked directly in the packet, which contrasts with the IntServ model, where a signaling protocol is required to tell the routers which flows of packets require special QoS treatment. DiffServ achieves better QoS scalability, while IntServ provides a tighter QoS mechanism for real-time traffic. These approaches can be complementary and are not mutually exclusive.

44 Best-Effort Model Internet was initially based on a best-effort packet delivery service. Best-effort is the default mode for all traffic. There is no differentiation among types of traffic. Best-effort model is similar to using standard mail—“The mail will arrive when the mail arrives.” Benefits: Highly scalable No special mechanisms required Drawbacks: No service guarantees No service differentiation The basic design of the Internet provides for best-effort packet delivery and provides no guarantees. This approach is still predominant on the Internet today and remains appropriate for most purposes. The best-effort model treats all network packets in the same way, so an emergency voice message is treated the same way a digital photograph attached to an e-mail is treated. Without QoS, the network cannot tell the difference between packets and, as a result, cannot treat packets preferentially. When you mail a letter using standard postal mail, you are using a best-effort model. Your letter is treated exactly the same as every other letter. With the best-effort model, the letter may never arrive, and, unless you have a separate notification arrangement with the letter recipient, you may never know that the letter did not arrive. Benefits: The model has nearly unlimited scalability. The only way to reach scalability limits is to reach bandwidth limits, in which case all traffic is equally affected. You do not need to employ special QoS mechanisms to use the best-effort model. Best-effort is the easiest and quickest model to deploy. Drawbacks: There are no guarantees of delivery. Packets will arrive whenever they can and in any order possible, if they arrive at all. No packets have preferential treatment. Critical data is treated the same as casual e‑mail is treated.

45 Integrated Services (IntServ) Model Operation
Ensures guaranteed delivery and predictable behavior of the network for applications. Provides multiple service levels. RSVP is a signaling protocol to reserve resources for specified QoS parameters. The requested QoS parameters are then linked to a packet stream. Streams are not established if the required QoS parameters cannot be met. Intelligent queuing mechanisms needed to provide resource reservation in terms of: Guaranteed rate Controlled load (low delay, high throughput) Integrated Services (IntServ) provides a way to deliver the end-to-end QoS that real-time applications require by explicitly managing network resources to provide QoS to specific user packet streams, sometimes called microflows. IntServ uses resource reservation and admission-control mechanisms as key building blocks to establish and maintain QoS. This practice is similar to a concept known as “hard QoS.” Hard QoS guarantees traffic characteristics, such as bandwidth, delay, and packet-loss rates, from end to end. Hard QoS ensures both predictable and guaranteed service levels for mission-critical applications. IntServ uses Resource Reservation Protocol (RSVP) explicitly to signal the QoS needs of an application’s traffic to devices along the end-to-end path through the network. If network devices along the path can reserve the necessary bandwidth, the originating application can begin transmitting. If the requested reservation fails along the path, the originating application does not send any data. In the IntServ model, the application requests a specific kind of service from the network before sending data. The application informs the network of its traffic profile and requests a particular kind of service that can encompass its bandwidth and delay requirements. The application sends data only after it receives confirmation for bandwidth and delay requirements from the network. 
The network performs admission control based on information from the application and available network resources. The network commits to meeting the QoS requirements of the application as long as the traffic remains within the profile specifications. The network fulfills its commitment by maintaining the per-flow state and then performing packet classification, policing, and intelligent queuing based on that state.

46 IntServ Functions Control Plane Data Plane Flow Identification
Packet Scheduler Data Plane Routing Selection Admission Control Reservation Setup Control Plane Reservation Table As a means of illustrating the function of the IntServ model, this graphic shows the control and data planes. In addition to end-to-end signaling, IntServ requires several functions in order to be available on routers and switches along the network path. These functions include the following: Admission control: Admission control determines whether a new flow requested by users or systems can be granted the requested QoS without affecting existing reservations in order to guarantee end-to-end QoS. Admission control ensures that resources are available before allowing a reservation. Classification: Entails using a traffic descriptor to categorize a packet within a specific group to define that packet and make it accessible for QoS handling on the network. Classification is pivotal for policy techniques that select packets for different types of QoS service. Policing: Takes action, including possibly dropping packets, when traffic does not conform to its specified characteristics. Policing is defined by rate and burst parameters, as well as by actions for in-profile and out-of-profile traffic. Queuing: Queuing accommodates temporary congestion on an interface of a network device by storing excess packets in buffers until access to the bandwidth becomes available. Scheduling: A QoS component, the QoS scheduler, negotiates simultaneous requests for network access and determines which queue receives priority. IntServ uses round robin scheduling. Round robin scheduling is a time-sharing approach in which the scheduler gives a short time slice to each job before moving on to the next job, polling each task round and round. This way, all the tasks advance, little by little, on a controlled basis. Packet scheduling enforces the reservations by queuing and scheduling packets for transmission.

47 Benefits and Drawbacks of the IntServ Model
Benefits: Explicit resource admission control (end to end) Per-request policy admission control (authorization object, policy object) Signaling of dynamic port numbers (for example, H.323) Drawbacks: Continuous signaling because of stateful architecture Flow-based approach not scalable to large implementations, such as the public Internet Benefits: IntServ supports admission control that allows a network to reject or downgrade new RSVP sessions if one of the interfaces in the path has reached the limit (that is, if all bandwidth that can be reserved is booked). RSVP signals QoS requests for each individual flow. In the request, the authorized user (authorization object) and needed traffic policy (policy object) are sent. The network can then provide guarantees to these individual flows. RSVP informs network devices of flow parameters (IP addresses and port numbers). Some applications use dynamic port numbers, such as H.323-based applications, which can be difficult for network devices to recognize. Network-Based Application Recognition (NBAR) is a mechanism that complements RSVP for applications that use dynamic port numbers but do not use RSVP. Drawbacks: There is continuous signaling because of the stateful RSVP architecture that adds to the bandwidth overhead. RSVP continues signaling for the entire duration of the flow. If the network changes, or links fail and routing convergence occurs, the network may no longer be able to support the reservation. The flow-based approach is not scalable to large implementations, such as the public Internet, because RSVP has to track each individual flow. This circumstance makes end-to-end signaling difficult. A possible solution is to combine IntServ with elements from the DiffServ model to provide the needed scalability.

48 Resource Reservation Protocol (RSVP)
Is carried in IP—protocol ID 46 Can use both TCP and UDP port 3455 Is a signaling protocol and works with existing routing protocols Requests QoS parameters from all devices between the source and destination Sending Host RSVP Receivers RSVP Tunnel Resource Reservation Protocol (RSVP) is used to implement IntServ models. The Resource Reservation Protocol (RSVP) is a network-control protocol that enables Internet applications to obtain differing qualities of service (QoS) for their data flows. Such a capability recognizes that different applications have different network performance requirements. Some applications, including the more traditional interactive and batch applications, require reliable delivery of data but do not impose any stringent requirements for the timeliness of delivery. Newer application types, including videoconferencing, IP telephony, and other forms of multimedia communications require almost the exact opposite: Data delivery must be timely but not necessarily reliable. Thus, RSVP was intended to provide IP networks with the capability to support the divergent performance requirements of differing application types. It is important to note that RSVP is not a routing protocol. RSVP works in conjunction with routing protocols and installs the equivalent of dynamic access lists along the routes that routing protocols calculate. Thus, implementing RSVP in an existing network does not require migration to a new routing protocol. If resources are available, RSVP accepts a reservation and installs a traffic classifier to assign a temporary QoS class for that traffic flow in the QoS forwarding path. The traffic classifier tells the QoS forwarding path how to classify packets from a particular flow and what forwarding treatment to provide. RSVP is an IP protocol that uses IP protocol ID 46 and TCP and UDP port 3455. 
In RSVP, a data flow is a sequence of datagrams that have the same source, destination (regardless of whether that destination is one or more physical machines), and QoS requirements. QoS requirements are communicated through a network via a flow specification. A flow specification describes the level of service that is required for that data flow. RSVP focuses on the following two main traffic types: Rate-sensitive traffic: Traffic that requires a guaranteed and constant (or nearly constant) transmission rate from its source to its destination. An example of such an application is H.323 videoconferencing. RSVP enables constant-bit-rate service in packet-switched networks via its rate-sensitive level of service. This service is sometimes referred to as guaranteed-bit-rate service. Delay-sensitive traffic: Traffic that requires timeliness of delivery and that varies its rate accordingly. MPEG-II video, for example, averages about 3 to 7 Mbps, depending on the rate at which the picture is changing. RSVP services supporting delay-sensitive traffic are referred to as controlled-delay service (non-real-time service) and predictive service (real-time service). Provides divergent performance requirements for multimedia applications: Rate-sensitive traffic Delay-sensitive traffic
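On a Cisco router, RSVP is enabled per interface by specifying how much of the link RSVP may reserve. A minimal sketch (interface and values are illustrative):

```
interface Serial0/0
 ! Let RSVP reserve up to 128 kbps in total on this interface,
 ! with no single flow reserving more than 64 kbps.
 ip rsvp bandwidth 128 64
```

Active reservations can then be inspected with `show ip rsvp interface` and `show ip rsvp reservation`.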

49 RSVP Daemon Policy Control Admission Packet Classifier Scheduler
Routing RSVP Daemon Reservation Data Each node that uses RSVP has two local decision modules: Admission control: Admission control keeps track of the system resources and determines whether the node has sufficient resources to supply the requested QoS. The RSVP daemon monitors both of these checking actions. If either check fails, the RSVP program returns an error message to the application that originated the request. If both checks succeed, the RSVP daemon sets parameters in the packet classifier and packet scheduler to obtain the requested QoS. Policy control: Policy control determines whether the user has administrative permission to make the reservation. If both Admission control and Policy control succeed, the daemon then sets parameters in two entities, packet classifier and packet scheduler. Packet classifier: The RSVP packet classifier determines the route and QoS class for each packet. Packet scheduler: The RSVP packet scheduler orders packet transmission to achieve the promised QoS for each stream. The scheduler allocates resources for transmission on the particular data link layer medium used by each interface. Routing Process—The RSVP daemon also communicates with the routing process to determine the path to send its reservation requests and to handle changing memberships and routes. Each router participating in resource reservation passes incoming data packets to a packet classifier and then queues the packets as necessary in a packet scheduler. Once the packet classifier determines the route and QoS class for each packet, and the scheduler allocates resources for transmission, RSVP passes the request to all the nodes (routers and hosts) along the reverse data paths to the data sources. At each node, the RSVP program applies a local decision procedure called admission control to determine whether that node can supply the requested QoS. 
If admission control succeeds in providing the required QoS, the RSVP program sets the parameters of the packet classifier and scheduler to obtain the desired QoS. If admission control fails at any node, the RSVP program returns an error indication to the application that originated the request. Routers along the reverse data stream path repeat this reservation until the reservation merges with another reservation for the same source stream.

50 Reservation Merging R1, R2 and R3 all request the same reservation.
Sender When a potential receiver initiates a reservation request, the request does not need to travel all the way to the source of the sender. Instead, it travels upstream until it meets another reservation request for the same source stream. The request then merges with that reservation. This example shows how the reservation requests merge as they progress up the multicast tree. Reservation merging leads to the primary advantage of RSVP, scalability. This allows a large number of users to join a multicast group without significantly increasing the data traffic. RSVP scales to large multicast groups. The average protocol overhead decreases as the number of participants increases. R1, R2 and R3 all request the same reservation. The R2 and R3 request merges at R4. The R1 request merges with the combined R2 and R3 request at R5. RSVP reservation merging provides scalability.

51 RSVP in Action In this example, RSVP is enabled on each router interface in the network. An IntServ-enabled WAN connects three Cisco IP phones to each other and to the Cisco Unified CallManager 5.0. Because bandwidth is limited on the WAN links, RSVP determines whether the requested bandwidth for a successful call is available. For performing CAC, Cisco Unified CallManager 5.0 uses RSVP. An RSVP-enabled voice application wants to reserve 20 kbps of bandwidth for a data stream from IP-Phone 1 to IP-Phone 2. Recall that RSVP does not perform its own routing; instead, RSVP uses underlying routing protocols to determine whether to carry reservation requests. As routing changes paths to adapt to changes in topology, RSVP adapts reservations to the new paths wherever reservations are in place. The RSVP protocol attempts to establish an end-to-end reservation by checking for available bandwidth resources on all RSVP-enabled routers along the path from IP-Phone 1 to IP-Phone 2. As the RSVP messages progress through the network from Router R1 via R2 to R3, the available RSVP bandwidth is decremented by 20 kbps on the router interfaces. For voice calls, a reservation must be made in both directions. The available bandwidth on all interfaces is sufficient to accept the new data stream, so the reservation succeeds and the application is notified. RSVP sets up a path through the network with the requested QoS. RSVP is used for CAC in Cisco Unified CallManager 5.0.

52 The Differentiated Services Model
Overcomes many of the limitations of the best-effort and IntServ models Uses the soft QoS provisioned-QoS model rather than the hard QoS signaled-QoS model Classifies flows into aggregates (classes) and provides appropriate QoS for the classes Minimizes signaling and state maintenance requirements on each network node Manages QoS characteristics on the basis of per-hop behavior (PHB) You choose the level of service for each traffic class Edge Interior DiffServ Domain End Station The differentiated services (DiffServ) architecture specifies a simple, scalable, and coarse-grained mechanism for classifying and managing network traffic and providing QoS guarantees. For example, DiffServ can provide low-latency guaranteed service (GS) to critical network traffic such as voice or video while providing simple best-effort traffic guarantees to non-critical services such as web traffic or file transfers. The DiffServ design overcomes the limitations of both the best-effort and IntServ models. DiffServ can provide an “almost guaranteed” QoS while still being cost-effective and scalable. The concept of soft QoS is the basis of the DiffServ model. You will recall that IntServ (hard QoS) uses signaling in which the end-hosts signal their QoS needs to the network. DiffServ does not use signaling but works on the provisioned-QoS model, where network elements are set up to service multiple classes of traffic, each with varying QoS requirements. By classifying flows into aggregates (classes), and providing appropriate QoS for the aggregates, DiffServ can avoid significant complexity, cost, and scalability issues. For example, DiffServ groups all TCP flows as a single class and allocates bandwidth for that class, rather than for the individual flows as hard QoS (IntServ) would do. In addition to classifying traffic, DiffServ minimizes signaling and state maintenance requirements on each network node. DiffServ divides network traffic into classes based on business requirements. 
Each of the classes can then be assigned a different level of service. As the packets traverse a network, each of the network devices identifies the packet class and services the packet according to that class. The hard QoS model (IntServ) provides for a rich end-to-end QoS solution, using end-to-end signaling, state-maintenance and admission control. This approach consumes significant overhead, thus restricting its scalability. On the other hand, DiffServ cannot enforce end-to-end guarantees, but is a more scalable approach to implementing QoS. DiffServ maps many applications into small sets of classes. DiffServ assigns each class with similar sets of QoS behaviors and enforces and applies QoS mechanisms on a hop-by-hop basis, uniformly applying global meaning to each traffic class to provide both flexibility and scalability. DiffServ works like a packet delivery service. You request (and pay for) a level of service when you send your package. Throughout the package network, the level of service is recognized and your package is given either preferential or normal service, depending on what you requested. Benefits: Highly scalable Many levels of quality possible Drawbacks: No absolute service guarantee Requires a set of complex mechanisms to work in concert throughout the network

53 Self Check Which of the QoS models is more scalable, yet still provides QoS for sensitive traffic? Which QoS model relies on RSVP? What are some drawbacks of using IntServ for QoS? What is admission control? What are the drawbacks of using Diffserv? Diffserv IntServ Requires continuous signaling and is not scalable to large installations. Admission control keeps track of the system resources and determines whether the node has sufficient resources to supply the requested QoS. Provides no absolute service guarantee and requires a set of complex mechanisms to work in concert throughout the network

54 Summary Best effort QoS is appropriate where sensitive traffic does not have to be serviced. When sensitive traffic must be serviced, IntServ or DiffServ should be used to provide QoS. IntServ uses RSVP to guarantee end-to-end services for a traffic flow. RSVP has significant signaling overhead and is not highly scalable. DiffServ uses classes to identify traffic and then provides QoS to those classes. DiffServ is highly scalable, but does not provide a service guarantee.

55 Resources Resource Reservation Protocol (RSVP) – from the Cisco Internetworking Technology Handbook Quality of Service – from the Cisco Internetworking Technology Handbook

56 Optimizing Converged Cisco Networks (ONT)
Module 3: Introduction to IP QoS

57 Lesson 3.4: Using MQC for Implementing QoS

58 Objectives Identify the features of each method for QoS policy implementation. Describe the guidelines for using CLI to implement QoS policy. Describe the Modular QoS Command-Line Interface (MQC).

59 Methods for Implementing QoS Policy
Method — Description
Legacy CLI — Coded at the CLI; requires each interface to be individually configured; time-consuming.
MQC — Uses configuration modules; best method for QoS fine-tuning.
Cisco AutoQoS — Applies a possible QoS configuration to the interfaces; fastest way to implement QoS.
Cisco SDM QoS wizard — Application for simple QoS configurations.
In the past, the only way to implement QoS in a network was by using the command-line interface (CLI) to configure individual QoS policies at each interface. This is a time-consuming and error-prone task involving cutting and pasting configurations from one interface to another. Cisco introduced the Modular QoS CLI (MQC) to simplify QoS configuration by making configurations modular. MQC provides a building-block approach that uses a single module repeatedly to apply a policy to multiple interfaces. Cisco AutoQoS represents innovative technology that simplifies the challenges of network administration by reducing QoS complexity, deployment time, and cost to enterprise networks. Cisco AutoQoS incorporates value-added intelligence in Cisco IOS software and Cisco Catalyst software to provision and assist in the management of large-scale QoS deployments. Customers can easily configure, manage, and successfully troubleshoot QoS deployments by using the Cisco Router and Security Device Manager (SDM) QoS wizard. The Cisco SDM QoS wizard provides centralized QoS design, administration, and traffic monitoring that scales to large QoS deployments.

60 Configuring QoS at the CLI
Uses the CLI via console and Telnet Traditional method Nonmodular Cannot separate traffic classification from policy definitions Time-consuming and potentially error-prone task Used to augment and fine-tune the newer Cisco AutoQoS method Cisco does not recommend the legacy CLI method for initially implementing QoS policies. The CLI method is time-consuming and prone to errors. Nonetheless, QoS implementation at the CLI remains the choice for some administrators, especially for fine-tuning and adjusting QoS properties. The legacy CLI method of QoS implementation has the following limitations: It is the hardest and most time-consuming way to configure QoS. It offers little opportunity for fine-tuning and provides less granularity for supported QoS features than other QoS configuration techniques. QoS functionalities have limited options; for example, you cannot fully separate the traffic classification from the QoS mechanisms. To implement QoS this way, use the console or Telnet to access the CLI. Using the CLI approach is simple but only allows basic features to be configured. To implement QoS this way, you must first build a QoS policy (traffic policy) and then apply the policy to the interface.

61 Guidelines for Using the CLI Configuration Method
Build a traffic policy: Identify the traffic pattern. Classify the traffic. Prioritize the traffic. Select a proper QoS mechanism: Queuing Compression Apply the traffic policy to the interface. Identify the traffic patterns in your network by using a packet analyzer. This activity gives you the ability to identify the traffic types, for example, IP, TCP, User Datagram Protocol (UDP), DECnet, AppleTalk, and Internetwork Packet Exchange (IPX). After you have performed the traffic identification, start classifying the traffic. For example, separate the voice traffic class from the business-critical traffic class. For each traffic class, specify the priority for the class. For example, voice is assigned a higher priority than business-critical traffic. After applying the priorities to the traffic classes, select a proper QoS mechanism, such as queuing, compression, or a combination of both. This choice determines which traffic leaves the device first and how traffic leaves the device.
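As a sketch of the queuing step above, a legacy priority queuing (PQ) configuration along these lines gives interactive Telnet traffic strict priority; the list number and interface are illustrative, not from the original example:

```
! Classify Telnet (TCP port 23) into the high-priority queue
priority-list 1 protocol ip high tcp 23
!
! Bind the priority list to the WAN interface
interface Serial0/0
 priority-group 1
```

All traffic not matched by the priority list falls into the normal queue by default.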

62 Legacy CLI QoS Example
interface multilink
 ip address
 load-interval 30
 custom-queue-list 1
 ppp multilink
 ppp multilink fragment-delay 10
 ppp multilink interleave
 multilink-group 1
 ip tcp header-compression iphc-format
!
queue-list 1 protocol ip 2 tcp 23
This graphic shows a possible implementation scenario for legacy CLI followed by a sample configuration. In this scenario, a low-speed WAN link connects the office site to the central site. Both sites are equipped with PCs and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, you must devise an appropriate strategy for efficient bandwidth use. In a network with remote sites that use interactive traffic for daily business, bandwidth availability is an issue. The available bandwidth resources need to be used efficiently. Because only simple services are run, basic queuing techniques, such as PQ or CQ, and header compression mechanisms, such as TCP header compression, are needed to use the bandwidth much more efficiently. Depending on the kind of traffic in the network, you must choose suitable queuing and compression mechanisms. In the example, CQ and TCP header compression are a strategy for interactive traffic quality assurance. The output illustrates complex configuration tasks that can be involved with using the CLI as follows: Each QoS feature needs a separate line. CQ needs two lines: one line that sets up the queue list, in this example for Telnet traffic, and a second line that binds the queue list to an interface and activates the list. PPP multilink configuration needs four lines and another line for TCP header compression. For interactive traffic, you can use CQ and TCP header compression.

63 Modular QoS CLI A command syntax for configuring QoS policy
Reduces configuration steps and time Configures policy, not “raw” per-interface commands Uniform CLI across major Cisco IOS platforms Uniform CLI structure for all QoS features Separates classification engine from the policy The Cisco MQC allows users to create traffic policies and then attach these policies to interfaces. A QoS policy contains one or more traffic classes and one or more QoS features. A traffic class classifies traffic, and the QoS features in the QoS policy determine how to treat the classified traffic. The Cisco MQC offers significant advantages over the legacy CLI method for implementing QoS. By using MQC, a network administrator can significantly reduce the time and effort it takes to configure QoS in a complex network. Rather than configuring “raw” CLI commands interface by interface, the administrator develops a uniform set of traffic classes and QoS policies that are applied on interfaces. The use of the Cisco MQC allows the separation of traffic classification from the definition of QoS policy. This capability enables easier initial QoS implementation and maintenance as new traffic classes emerge and QoS policies for the network evolve.

64 Modular QoS CLI Components
This graphic summarizes the three steps to follow when configuring QoS using Cisco MQC configuration. Each step answers a question concerning the classes assigned to different traffic flows: Build a class map: What traffic do we care about? The first step in QoS deployment is to identify the interesting traffic, that is, classify the packets. This step defines a grouping of network traffic—a class-map in MQC terminology—with various classification tools: Access Control Lists (ACLs), IP addresses, IP precedence, IP Differentiated Services Code Point (DSCP), IEEE 802.1p, MPLS EXP, and Cisco Network Based Application Recognition (NBAR). In this step, you configure traffic classification by using the class-map command. Policy map: What will happen to the classified traffic? Decide what to do with a group once you identify its traffic. This step is the actual construction of a QoS policy—a policy-map in MQC terminology—by choosing the group of traffic (class-map) on which to perform QoS functions. Examples of QoS functions are queuing, dropping, policing, shaping, and marking. In this step, you configure each traffic policy by associating the traffic class with one or more QoS features using the policy-map command. Service policy: Where will the policy apply? Apply the appropriate policy map to the desired interfaces, sub-interfaces, or Asynchronous Transfer Mode (ATM) or Frame Relay Permanent Virtual Circuits (PVCs). In this step, you attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits by using the service-policy command.
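The three steps above can be sketched end to end as follows. This is a minimal illustration; the class, policy, and interface names are hypothetical:

```
! Step 1: class map — what traffic do we care about?
class-map match-all VOICE
 match ip dscp ef
!
! Step 2: policy map — what happens to the classified traffic?
policy-map WAN-POLICY
 class VOICE
  priority 128
 class class-default
  fair-queue
!
! Step 3: service policy — where does the policy apply?
interface Serial0/0
 service-policy output WAN-POLICY
```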

65 Step 1: Creating Class Maps: “What Traffic Do We Care About?”
Each class is identified using a class map. A traffic class contains three major elements: A case-sensitive name A series of match commands An instruction on how to evaluate the match commands if more than one match command exists in the traffic class Class maps can operate in two modes: Match all: All conditions have to succeed. Match any: At least one condition must succeed. The default mode is match all. Step 1 requires you to tell the router what traffic gets QoS and to what degree. An ACL is the traditional way to define any traffic for a router. A class-map defines the traffic into groups with classification templates that are used in policy maps where QoS mechanisms are bound to classes. You can configure up to 256 class maps on a router. For example, you might assign video applications to a class map called Video, and e-mail traffic to a class map called Mail. You could also create a class map called VoIP and put all VoIP protocols under it. There are two ways of processing conditions when there is more than one condition in a class map: Match all: Must meet all conditions to bind a packet to the class. Match any: Meet at least one condition to bind the packet to the class. The default match strategy of class maps is match all.

66 Configuring Class Maps
Enter class-map configuration mode and specify the matching strategy:
router(config)# class-map [match-all | match-any] class-map-name
Use at least one condition to match packets:
router(config-cmap)# match any
router(config-cmap)# match not match-criteria
Use the class-map global configuration command to create a class map. Identify class maps with case-sensitive names. All subsequent references to the class map must use the same name. Each class map contains one or more conditions that define which packets belong to the class. The match commands specify various criteria for classifying packets. Packets are checked to determine whether they match the criteria that are specified in the match commands. If a packet matches the specified criteria, that packet is considered a member of the class and is forwarded according to the QoS specifications set in the traffic policy. Packets that fail to meet any of the matching criteria are classified as members of the default traffic class. The Cisco MQC does not necessarily require that users associate a single traffic class to one traffic policy. Multiple types of traffic can be associated with a single traffic class using the match any command. The match not command inverts the specified condition. This command specifies a match criterion value that prevents packets from being classified as members of a specified traffic class. All other values of that particular match criterion belong to the class. At least one match command should be used within the class-map configuration mode. The description command is used for documenting a comment about the class map. Use descriptions in large and complex configurations. The description has no operational meaning.
router(config-cmap)# description description
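For instance, a hedged sketch with hypothetical class names — one match-all class with a single condition and one match-any class where either condition binds the packet:

```
! VOICE: packet must carry DSCP EF (single condition, match-all)
class-map match-all VOICE
 match ip dscp ef
!
! INTERACTIVE: either NBAR-recognized protocol qualifies (match-any)
class-map match-any INTERACTIVE
 match protocol telnet
 match protocol ssh
```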

67 Classifying Traffic with ACLs
Standard ACL:
router(config)# access-list access-list-number {permit | deny | remark} source [mask]
Extended ACL:
router(config)# access-list access-list-number {permit | deny} protocol source source-wildcard [operator port] destination destination-wildcard [operator port] [established] [log]
There are many ways to classify traffic when configuring class maps. One possible way to classify traffic is by using access control lists (ACLs) to specify the traffic that needs to match for the QoS policy. Class maps support standard ACLs and extended ACLs. The match access-group command allows an ACL to be used as a match criterion for traffic classification.
Use an ACL as a match criterion:
router(config-cmap)# match access-group access-list-number
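As an illustration of ACL-based classification (the ACL number and UDP port range here are assumptions, chosen because Cisco RTP voice streams commonly use these ports):

```
! Match the UDP port range typically used by Cisco RTP voice streams
access-list 102 permit udp any any range 16384 32767
!
! Use the ACL as the match criterion for the class
class-map match-all VOICE-RTP
 match access-group 102
```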

68 Step 2: Policy Maps: “What Will Be Done to This Traffic?”
A policy map defines a traffic policy, which configures the QoS features associated with a traffic class that was previously identified using a class map. A traffic policy contains three major elements: A case-sensitive name A traffic class The QoS policy that is associated with that traffic class Up to 256 traffic classes can be associated with a single traffic policy. Multiple policy maps can be nested to influence the sequence of QoS actions. The policy-map command creates a traffic policy. The purpose of a traffic policy is to configure the QoS features that should be associated with the traffic that is classified into a traffic class or classes. You can then assign as much bandwidth or set whatever priority you need to that class. A traffic policy contains three elements: a case-sensitive name, a traffic class (specified with the class command), and the QoS policies. The policy-map command specifies the name of a traffic policy (for example, issuing the policy-map class1 command would create a traffic policy named class1). After you issue the policy-map command, you enter policy-map configuration mode. You can then enter the name of a traffic class. You must be in the policy-map configuration mode to enter QoS features that apply to the traffic matching the named class. A packet can match only one traffic class within a traffic policy. If a packet matches more than one traffic class in the traffic policy, the first traffic class defined in the policy is used. On the other hand, the Cisco MQC does not necessarily require that you associate only one traffic class to a single traffic policy. When packets match to more than one match criterion, multiple traffic classes can be associated with a single traffic policy. The next topic will explain the concept of nested class maps.

69 Configuring Policy Maps
Enter policy-map configuration mode. Policy maps are identified by a case-sensitive name:
router(config)# policy-map policy-map-name
Enter the per-class policy configuration mode by using the name of a previously configured class map. Use the class-default name to configure the policy for the default class:
router(config-pmap)# class {class-name | class-default}
You configure service policies with the policy-map command. One policy map can have up to 256 classes using the class command with the name of a preconfigured class map. This graphic shows the policy-map and class command syntax. A nonexistent class can also be used within the policy-map configuration mode if the match condition is specified after the name of the class. The running configuration will reflect such a configuration by using the match-any strategy and inserting a full class map configuration. All traffic that is not classified by any of the class maps that are used within the policy map is part of the default class “class-default.” This class has no QoS guarantees by default. The default class, when used on output, can use one FIFO queue or flow-based WFQ. The default class is part of every policy map, even if a default class is not included in the configuration. Optionally, you can define a new class map by entering the condition after the name of the new class map. Uses the match-any strategy:
router(config-pmap)# class class-name condition
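Putting the two commands together, a minimal sketch of the configuration dialog; the policy name is hypothetical, and VOICE is assumed to be a previously configured class map:

```
router(config)# policy-map QOS-DEMO
router(config-pmap)# class VOICE
router(config-pmap-c)# priority 128
router(config-pmap-c)# exit
router(config-pmap)# class class-default
router(config-pmap-c)# fair-queue
```

Traffic matching VOICE gets a 128-kbps priority queue; everything else falls into class-default and is handled by flow-based WFQ.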

70 Step 3: Attaching Service Policies: “Where Will This Policy Be Implemented?”
Attach the specified service policy map to the input or output interface:
router(config-if)# service-policy {input | output} policy-map-name
class-map HTTP
 match protocol http
!
policy-map PM
 class HTTP
  bandwidth 2000
 class class-default
  bandwidth 6000
!
interface Serial0/0
 service-policy output PM
Service policies can be applied to an interface for inbound or outbound packets. Like an ACL, you must apply the policy map to the specific interface you want it to affect. You can apply the policy map in either output or input mode. The last configuration step when configuring QoS mechanisms using the Cisco MQC is to attach a policy map to the inbound or outbound packets using the service-policy command. Use the service-policy command to assign a single policy map to multiple interfaces or assign multiple policy maps to a single interface (a maximum of one in each direction, inbound and outbound). A service policy can be applied for inbound or outbound packets. Use the service-policy interface configuration command to attach a traffic policy to an interface and to specify the direction in which the policy should be applied (either on packets coming into the interface or on packets leaving the interface). The router immediately verifies the parameters that are used in the policy map. If there is a mistake in the policy map configuration, the router displays a message explaining what is wrong with the policy map. The sample configuration shows how a policy map is used to separate HTTP from other traffic. HTTP is guaranteed 2 Mbps of bandwidth. All other traffic belongs to the default class and is guaranteed to get 6 Mbps of bandwidth.

71 Modular QoS CLI Configuration Example
! Step 1: classification
router(config)# class-map match-any business-critical-traffic
router(config-cmap)# match protocol http url "*customer*"
router(config-cmap)# match protocol http url citrix
! Step 2: policy
router(config)# policy-map myqospolicy
router(config-pmap)# class business-critical-traffic
router(config-pmap-c)# bandwidth 1000
! Step 3: attach to interface
router(config)# interface serial 0/0
router(config-if)# service-policy output myqospolicy
This graphic is a simple example of using the three-step process. The classification step is modular and independent of what happens to the packet after it is classified. For example, a defined policy map contains various class maps and the configuration within a policy map can be changed independently from the configuration of a defined class map (and vice versa). Further, use of the no policy-map command can disable an entire QoS policy.

72 Boolean Nesting Goal There are two reasons to use the match class-map command. One reason is maintenance; if a long traffic class currently exists, using the Traffic Class match criterion is simply easier than retyping the same traffic class configuration. The more prominent reason for the match class-map command is to allow users to use match-any and match-all statements in the same traffic class. If you want to combine match-all and match-any characteristics in a traffic policy, create a traffic class using one match criteria evaluation instruction (either match any or match all) and then use this traffic class as a match criterion in a traffic class that uses a different match criteria type. A simple library analogy illustrated in Figure [1] will serve to clarify the concept. Let us assume you are looking in a library database for a book that covers the salaries of either football players or hockey players. In Boolean terms, your search is represented by the phrase (salaries AND [football players OR hockey players]). This is a ‘nested search’, one search within another. The part of the search enclosed in brackets, football players OR hockey players, will be performed first, followed by the AND operation. This search will retrieve items on salaries and football players as well as items on salaries and hockey players. The Venn diagram shows six different sectors, three of which overlap to a degree. The overlapping area in the center includes salaries, hockey players, and football players. The area to the right of center contains items on salaries and hockey players. The overlapping area to the left of center contains items on salaries and football players. Only the left of center and the right of center match our criteria. Goal: Find books that cover the salaries of either football players or hockey players. Solution: Boolean (salaries AND [football players OR hockey players]).
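Translated into MQC, the library search maps onto a match-any class nested inside a match-all class via the match class-map command. All class names and ACL numbers in this sketch are illustrative:

```
! "football players OR hockey players" — either ACL binds the packet
class-map match-any PLAYERS
 match access-group 110
 match access-group 111
!
! "salaries AND (football players OR hockey players)" — both must match
class-map match-all SALARIES-AND-PLAYERS
 match class-map PLAYERS
 match access-group 112
```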

73 MQC Example The example shows a network using interactive traffic and VoIP with an applied Cisco MQC configuration. In this scenario, the office site connects over a low-speed WAN link to the central site. Both sites are equipped with IP phones, PCs, and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, an administrator must implement an appropriate strategy for efficient bandwidth use. The strategy must meet the requirements of voice traffic’s need for high priority, low delay, and constant bandwidth along the communication path and interactive traffic’s need for bandwidth and low delay. Classification requirements also affect the strategy. Classification of the important traffic streams and policing by applying traffic parameters to the classified traffic, such as priority, queuing, and bandwidth, are the major elements of the traffic policy that improve the overall quality. Finally, the traffic policy is applied to the WAN interface of the routers. Voice traffic needs priority, low delay, and constant bandwidth. Interactive traffic needs bandwidth and low delay.

74 MQC Configuration
hostname Office
!
! Classification
class-map VoIP
 match access-group 100
class-map Application
 match access-group 101
!
! QoS Policy
policy-map QoS-Policy
 class VoIP
  priority 100
 class Application
  bandwidth 25
 class class-default
  fair-queue
!
! QoS Policy on Interface
interface Serial0/0
 service-policy output QoS-Policy
!
! Classification (ACLs)
access-list 100 permit ip any any precedence 5
access-list 100 permit ip any any dscp ef
access-list 101 permit tcp any host
access-list 101 permit tcp any host
An example of the complex configuration tasks involved in using Cisco MQC on the router Office.

75 Basic Verification Commands
Display the class maps:
router# show class-map
Display the policy maps:
router# show policy-map
Display the applied policy map on the interface:
router# show policy-map interface type number
To display and verify basic QoS classes and policies configured by using the Cisco MQC, use the commands listed here.

76 Summary There are 4 basic ways to implement QoS policy on Cisco devices: CLI, MQC, AutoQoS and SDM. Choosing a method will depend on the complexity of the network and on the expertise of the administrator. The Cisco MQC offers significant advantages over the legacy CLI method for implementing QoS. By using MQC, a network administrator can significantly reduce the time and effort it takes to configure QoS in a complex network. There are three steps to follow when configuring QoS using the Cisco MQC. Each step answers a question concerning the classes assigned to different traffic flows: What traffic do we care about? What will happen to the classified traffic? Where will the policy apply?

77 Self Check What is a class map?
How many class maps can be configured on a Cisco router? What is a traffic policy? What are the 3 basic elements of a traffic policy? What command is used to assign a policy map to an interface? A class-map defines the traffic into groups with classification templates that are used in policy maps where QoS mechanisms are bound to classes. You can configure up to 256 class maps on a router. The purpose of a traffic policy is to configure the QoS features that should be associated with the traffic that is classified into a traffic class or classes. You can then assign as much bandwidth or set whatever priority you need to that class. A traffic policy contains three elements: a case-sensitive name, a traffic class (specified with the class command), and the QoS policies. A traffic policy contains three major elements: A case-sensitive name A traffic class The QoS policy that is associated with that traffic class Use the service-policy command to assign a single policy map to multiple interfaces or assign multiple policy maps to a single interface (a maximum of one in each direction, inbound and outbound). A service policy can be applied for inbound or outbound packets.

78 Q and A

79 Resources Modular Quality of Service Command-Line Interface
QoS Policing: Cisco Modular Quality of Service Command Line Interface

80 Optimizing Converged Cisco Networks (ONT)
Module 3: Introduction to IP QoS

81 Lesson 3.5: Implementing QoS with Cisco AutoQoS

82 Objectives Describe LAN and WAN features of Cisco AutoQoS.
Identify the guidelines when using Cisco AutoQoS to implement QoS policies. Describe the features of the Cisco Router and Security Device Manager (SDM). Explain how SDM can be used to implement QoS on Cisco devices. Compare and contrast four methods for configuring QoS on a network.

83 Cisco AutoQoS Cisco AutoQoS simplifies and shortens the QoS deployment cycle. Cisco AutoQoS helps in all of the following five major aspects of successful QoS deployments: Application classification: Cisco AutoQoS leverages intelligent classification on routers using Cisco NBAR to provide deep and stateful packet inspection. AutoQoS uses Cisco Discovery Protocol (CDP) for voice packets to ensure that the device attached to the LAN is really an IP phone. Policy generation: Cisco AutoQoS evaluates the network environment and generates an initial policy. AutoQoS automatically determines WAN settings for fragmentation, compression, encapsulation, and Frame Relay-to-ATM Service Interworking (FRF.8), eliminating the need to understand QoS theory and design practices in various scenarios. Customers can then meet any additional or special requirements by modifying the initial policy. The first release of Cisco AutoQoS provides the AutoQoS VoIP feature to automate QoS settings for VoIP deployments. This feature automatically generates interface configurations, policy maps, class maps, and ACLs. Cisco AutoQoS VoIP automatically employs Cisco NBAR to classify voice traffic and mark the traffic with the appropriate DSCP value. AutoQoS VoIP can be instructed to use the DSCP markings that were previously applied to the packets. Introduced in Cisco IOS Software Release 12.3(7)T, AutoQoS for Enterprise extends the capabilities of AutoQoS on a Cisco router platform. Specifically, AutoQoS for Enterprise allows a router to recognize multiple protocols traversing an interface and recommends a customized policy, based on learned traffic patterns. Configuration: With one command, Cisco AutoQoS configures the port to prioritize voice traffic without affecting other network traffic while still offering the flexibility to adjust QoS settings for unique network requirements. 
Not only does Cisco AutoQoS automatically detect Cisco IP phones and enable QoS settings for them, Cisco AutoQoS disables the QoS settings when you relocate or move a Cisco IP phone to prevent malicious activity. You can customize Cisco AutoQoS-generated router and switch configurations using the Cisco MQC. Monitoring and reporting: Cisco AutoQoS provides visibility into the classes of service that are deployed via system logging and Simple Network Management Protocol (SNMP) traps with notification of abnormal events (such as VoIP packet drops). Consistency: When you deploy QoS configurations using Cisco AutoQoS, the configurations that are generated are consistent among router and switch platforms. This level of consistency ensures seamless QoS operation and interoperability within the network. The Cisco SDM QoS wizard can be used in conjunction with the Cisco AutoQoS VoIP feature to provide a centralized, web-based tool to cost-effectively manage and monitor network-wide QoS policies. The Cisco AutoQoS VoIP feature, together with the Cisco SDM QoS wizard, eases QoS implementation, provisioning, and management.

84 Cisco AutoQoS Features in a WAN
Feature: Benefit
Autodetermination of WAN Settings: Eliminates the need to know QoS theory and design in common deployment scenarios
Autoclassification of VoIP Settings: Automatically classifies RTP payload and VoIP control packets (H.323, H.225 unicast, Skinny, SIP, and MGCP)
Initial Policy Generation: Reduces the time needed to establish an initial, feasible QoS policy solution
VoIP LLQ Provisioning: Provisions LLQ for the VoIP bearer and bandwidth guarantees for control traffic
WAN Traffic Shaping: Enables WAN traffic shaping (FRTS, CIR, and burst)
Link Efficiency: Enables link efficiency mechanisms (LFI and cRTP) as appropriate
Management: Provides SNMP and syslog alerts for VoIP packet drops
Cisco AutoQoS simplifies deployment and speeds provisioning of QoS technology over a Cisco network infrastructure. It reduces human error and lowers training costs. With AutoQoS VoIP, you use just one command to enable QoS for VoIP across every Cisco router and switch. You can also modify an AutoQoS-generated policy to meet your specific requirements. Cisco AutoQoS performs these functions in a WAN:
Autodetermination of WAN settings: Automatic determination of WAN settings for fragmentation and interleaving, compression, encapsulation, and Frame Relay-to-ATM Interworking. This eliminates the need to understand QoS theory and design practices in common deployment scenarios.
Autoclassification: Automatically classifies Real-Time Transport Protocol (RTP) payload and VoIP control packets (H.323, H.225 unicast, Skinny, Session Initiation Protocol [SIP], and Media Gateway Control Protocol [MGCP]).
Initial policy generation: Provides users an advanced starting point for VoIP deployments. This reduces the time needed to establish an initial, feasible QoS policy solution that provides QoS to VoIP bearer traffic, signaling traffic, and best-effort data. Initial service policies can be modified by using the Cisco MQC.
VoIP guarantees: Provisions LLQ for the VoIP bearer and bandwidth guarantees for control traffic.
WAN traffic shaping: Enables WAN traffic shaping (for example, Frame Relay traffic shaping [FRTS]) that adheres to Cisco best practices. Parameters such as committed information rate (CIR) and burst size can be used for traffic shaping.
Link efficiency: Enables link efficiency mechanisms, such as LFI and compressed RTP (cRTP), where appropriate.
Management: Provides SNMP and syslog alerts for VoIP packet drops.

85 Cisco AutoQoS Features in a LAN
Feature: Benefit
Simplified Configuration: One-command voice configuration that does not affect other network traffic and can be fine-tuned
Queue Configuration: Configures queue admission criteria, enables Cisco Catalyst strict-priority queuing with WRR scheduling, and modifies queue sizes and weights
Automated and Secure: Detects Cisco IP phones and enables AutoQoS settings; protects against malicious activity during Cisco IP phone relocations and moves
Optimal VoIP Performance: Leverages decades of networking experience and uses all advanced QoS capabilities of the Cisco Catalyst switches
End-to-End Interoperability: Works with AutoQoS settings on all other Cisco switches and routers
Trust Boundary Enforcement: Enforces the trust boundary on Cisco Catalyst switch access ports, uplinks, and downlinks
NBAR Support: Enables NBAR for different traffic types
Cisco AutoQoS performs these functions in a LAN:
Simplified configuration: With one command, AutoQoS configures the port to prioritize voice traffic without affecting other network traffic, and includes the flexibility to tune AutoQoS settings for unique network requirements.
Queue configuration: Configures queue admission criteria (maps CoS values in incoming packets to the appropriate queues), enables Cisco Catalyst strict-priority queuing (also known as expedited queuing) with WRR scheduling for voice and data traffic where appropriate, and modifies queue sizes and weights.
Automated and secure: Automatically detects Cisco IP phones and enables AutoQoS settings (Catalyst 2950 and 3550). Prevents malicious activity by disabling QoS settings when a Cisco IP phone is relocated or moved.
Optimal VoIP performance: Leverages decades of networking experience, extensive lab performance testing, and input from a broad base of customer AVVID installations to determine the optimal QoS configuration for typical VoIP deployments. Uses all advanced QoS capabilities of the Cisco Catalyst switches.
End-to-end interoperability: Works in harmony with the AutoQoS settings on all other Cisco switches and routers, ensuring consistent end-to-end QoS.
Trust boundary enforcement: Enforces the trust boundary on Cisco Catalyst switch access ports, uplinks, and downlinks.
NBAR: Enables NBAR for different traffic types.
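As a minimal sketch of the automated phone detection described above (the port name is illustrative, and syntax varies by Catalyst platform and software release):

```
! Catalyst access-port sketch: the cisco-phone keyword uses CDP to detect
! an attached Cisco IP Phone and applies the AutoQoS settings only while
! the phone is present. (Port name is illustrative.)
interface FastEthernet0/5
 auto qos voip cisco-phone
```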

86 Cisco AutoQoS Use Guidelines
Make sure that:
Any QoS configurations on the WAN interface are removed.
CEF is enabled.
NBAR is enabled.
The correct bandwidth statement is configured on the interface.
Cisco AutoQoS is enabled on the interface.
As a general guideline for Cisco AutoQoS implementation, you must ensure that the following requirements are met: Remove any pre-existing QoS configuration from the (WAN) interface. Ensure that Cisco Express Forwarding (CEF) is active; CEF is a prerequisite for NBAR. Enable NBAR, because Cisco AutoQoS uses it for traffic classification. Make sure that the correct bandwidth is configured on the interface; AutoQoS takes the interface type and bandwidth into consideration when implementing these QoS features:
LLQ: The LLQ (specifically, PQ) is applied to voice packets to meet their latency requirements.
cRTP: With cRTP, the 40-byte IP/UDP/RTP header of the voice packet is reduced to 2 or 4 bytes (without or with a cyclic redundancy check [CRC], respectively), reducing voice bandwidth requirements. cRTP must be applied at both ends of a network link.
LFI: LFI reduces the jitter of voice packets by preventing them from being delayed behind large data packets in a queue. LFI must be applied at both ends of a network link.
Finally, you enable Cisco AutoQoS on the interface.
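Putting these guidelines together, a minimal pre-flight sequence might look like the following sketch; the interface name, bandwidth value, and old policy name are illustrative:

```
! 1. Verify/enable CEF globally (prerequisite for NBAR).
ip cef
!
interface Serial0/0
 ! 2. Remove any pre-existing QoS policy (policy name illustrative).
 no service-policy output OLD-POLICY
 ! 3. Configure the correct link bandwidth, in kbps.
 bandwidth 768
 ! 4. Finally, enable AutoQoS for VoIP on the interface.
 auto qos voip
```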

87 Cisco AutoQoS Example This graphic shows an implementation of Cisco AutoQoS in a campus network environment followed by the configuration output for the implementation. In this scenario, two campus sites connect over a WAN. For ease of deployment, Cisco AutoQoS VoIP is chosen for the campus-wide QoS setup. Only the relevant devices, such as the LAN switches on which servers, IP phones, PCs, and videoconferencing systems are connected, as well as the WAN routers that carry the traffic into the Internet, are enabled with the Cisco AutoQoS VoIP feature. Cisco AutoQoS VoIP installs the appropriate QoS policy for these devices. Enable Cisco AutoQoS on relevant devices (such as LAN switches and WAN routers) that need to perform QoS.

88 Cisco AutoQoS Example (Cont.)
ip cef
!
interface Serial1/3
 bandwidth 1540
 ip address
 auto qos voip
The slide callouts highlight IP CEF and bandwidth, and AutoQoS for VoIP traffic recognized by NBAR. An example of the configuration tasks using Cisco AutoQoS is as follows: the ip cef command, the bandwidth statement, and the auto qos command. Apply the bandwidth and auto qos commands to the interface (ip cef is configured globally), and Cisco AutoQoS generates the correct QoS policy.
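After AutoQoS has been enabled, the generated policy can be inspected from the CLI; a sketch, using the interface from the example above:

```
! Displays the configuration that AutoQoS generated.
show auto qos
! Shows the resulting service policy and per-class statistics on the interface.
show policy-map interface Serial1/3
```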

89 Cisco Security Device Manager (SDM)
Cisco Router and Security Device Manager (SDM) is an intuitive, web-based device management tool created for easy and reliable deployment and management of Cisco IOS routers. SDM allows you to easily configure routing, security, and QoS services on Cisco routers while helping to enable proactive management through performance monitoring. Whether you are deploying a new router or installing Cisco SDM on an existing router, you can now remotely configure and monitor these routers without using the Cisco IOS software CLI. The Cisco SDM GUI helps people who are not expert users of Cisco IOS software with day-to-day operations, provides easy-to-use smart wizards, automates router security management, and assists them through comprehensive online help and tutorials. Cisco SDM smart wizards guide you step by step through router and security configuration workflow by systematically configuring the LAN and WAN interfaces, firewall, Network Address Translation (NAT), intrusion prevention system (IPS), IPsec virtual private networks (VPNs), routing, and QoS. Cisco SDM smart wizards can intelligently detect incorrect configurations and propose fixes. Online help embedded within Cisco SDM contains appropriate background information in addition to step-by-step procedures to help you enter correct data in the Cisco SDM. In addition, the Cisco SDM QoS wizard supports NBAR, which provides real-time validation of application use of WAN bandwidth against predefined service policies, as well as QoS policing and traffic monitoring. Cisco SDM is factory installed on all Cisco 1800, 2800, and 3800 Series routers, both non-bundle and bundle SKUs. On the Cisco 1700 Series, Cisco 2600XM, Cisco 2691, Cisco 3700 Series, Cisco 7204VXR, Cisco 7206VXR, and Cisco 7301, Cisco SDM is factory installed on the security bundles (K9) and optionally orderable on all other SKUs. 
On the Cisco 831-SDM, Cisco 836-SDM, Cisco 837-SDM, Cisco Small Business 100 Series, Cisco 850 Series, and Cisco 870 Series, Cisco SDM Express is factory installed on the router flash, and a Cisco SDM CD is bundled with the router. For routers that did not ship with Cisco SDM preinstalled, Cisco SDM can be downloaded free of charge from the Software Center on Cisco.com. This graphic shows the main page of Cisco SDM, which consists of the following two sections:
About Your Router: This section displays the hardware and software configuration of the router.
Configuration Overview: This section displays basic traffic statistics.
There are two important icons in the top horizontal navigation bar:
Configure icon: Opens the configuration page.
Monitor icon: Opens the page where the status of tunnels, interfaces, and the device can be monitored.

90 Steps 1 to 4: Creating a QoS Policy
To begin creating the QoS policy using SDM:
1. Enter configuration mode by clicking Configure in the top toolbar of the Cisco SDM window.
2. Select Quality of Service in the Tasks toolbar at the left side of the Cisco SDM window.
3. Select the Create QoS Policy tab.
4. Click the Launch QoS Wizard button to launch the wizard.

91 Step 5: Launching the QoS Wizard
The Cisco SDM QoS wizard offers easy and effective optimization of LAN, WAN, and VPN bandwidth and application performance for different business needs (for example, voice and video, enterprise applications, and web). There are two predefined categories of business needs:
Real-time: Voice over IP (VoIP) traffic and voice-signaling traffic.
Business-critical: Business traffic important to a typical corporate environment. Some of the protocols included in this traffic category are Citrix, SQLNet, Notes, LDAP, and Secure LDAP. Routing protocols included in this category are EGP, BGP, EIGRP, and RIP.
The remaining traffic is categorized as best-effort. SDM dynamically adjusts the value for best-effort traffic when you enter values for real-time or business-critical traffic, so that the total bandwidth is always 100%.
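As an illustration of how the business-critical category might be expressed with NBAR-based MQC classification (the class name is hypothetical, and SDM's actual generated class names and protocol list may differ):

```
! Hypothetical class map matching some of the business-critical
! protocols listed above via NBAR.
class-map match-any Business-Critical
 match protocol citrix
 match protocol sqlnet
 match protocol notes
 match protocol ldap
 match protocol bgp
 match protocol eigrp
```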

92 Step 6: Selecting the Interface
You are asked to select the interface to which the outgoing QoS policy will be applied. Click Next to proceed.

93 Step 7: Generating a QoS Policy
The QoS Policy Generation screen prompts you to enter the bandwidth percentages for each class. After you enter the numbers, Cisco SDM automatically calculates the best-effort percentage and the bandwidth requirements for each class. The Cisco IOS software does not allow you to allocate more than 75% of the total interface bandwidth to one or more QoS classes. The kbps value fields display the bandwidth allocated to each type of traffic in kilobits per second (kbps). These fields are read-only and are automatically updated based on the percentage entered in the Bandwidth in % field. Click the View Details button if you want to check the QoS classes created for the selected traffic type. SDM generates a default QoS policy consisting of predefined QoS classes for each traffic type. Click Next to proceed to the Summary screen and review the QoS configuration.
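The percentage-based allocation described above corresponds, roughly, to an MQC policy of the following shape; the class names and percentages are illustrative, not SDM's actual generated names:

```
! Illustrative policy: strict priority for voice, a bandwidth guarantee
! for business-critical traffic, and the remainder as best effort.
! 33% + 22% = 55%, within the 75% allocation limit mentioned above.
policy-map SDM-Style-Policy
 class Voice
  priority percent 33
 class Business-Critical
  bandwidth percent 22
 class class-default
  fair-queue
```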

94 Reviewing the QoS Configuration
Before delivering the configuration to the router, review the QoS configuration summary. In the Summary window, Cisco SDM shows the QoS configuration that the wizard has built. The wizard applies this configuration to your router after you click the Finish button, so review the QoS configuration carefully before clicking Finish to ensure that there are no errors. This graphic shows a sample configuration for the FastEthernet 0/1 interface. Information is provided about the classes that will be defined and the queuing and bandwidth settings for those classes. Use the scroll bars to see the entire configuration.

95 Completing the Configuration: Command Delivery Status
The last step is to click Finish to deliver the configuration to the router. The Commands Delivery Status window shows the progress of the delivery of the configuration to the router. When the window indicates that the commands have been delivered to the router, click OK to complete the configuration. You can review and edit the configuration after this step is complete; however, Cisco recommends that you be very knowledgeable about your network before editing any of the automatically generated QoS configurations.

96 Monitoring QoS Status
After QoS is configured, you can use the following steps to monitor its status:
1. To enter monitor mode, click the Monitor icon in the toolbar at the top of the Cisco SDM window.
2. Click QoS Status in the Tasks toolbar at the left side of the Cisco SDM window.
The traffic statistics appear in bar charts based on the combination of the selected interval and the QoS parameters chosen for monitoring:
The interval can be changed using the View Interval drop-down menu. The options available are Now, Every 1 Minute, Every 5 Minutes, and Every 1 Hour.
QoS parameters for monitoring include Direction (input and output) and Statistics (bandwidth, bytes, and packets dropped).
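The same statistics can be cross-checked from the CLI; a sketch, with the interface name illustrative:

```
! Displays per-class packet, byte, and drop counters for the applied policy.
show policy-map interface FastEthernet0/1
```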

97 Comparing QoS Implementation Methods
Ease of use: Poor (legacy CLI); Moderately easy (MQC); Simple (Cisco AutoQoS and Cisco SDM QoS wizard)
Ability to fine-tune: Acceptable (legacy CLI); Very good (MQC); Limited (Cisco AutoQoS and Cisco SDM QoS wizard)
Time to implement: Longest (legacy CLI); Average (MQC); Shortest (Cisco AutoQoS); Short (Cisco SDM QoS wizard)
Modularity: Excellent (MQC)
The four methods for configuring QoS on a network are the legacy CLI method, the MQC method, Cisco AutoQoS, and the Cisco SDM QoS wizard. Cisco Systems recommends that you use MQC or Cisco AutoQoS to implement QoS. The table summarizes a comparison of these methods. MQC offers excellent modularity and the ability to fine-tune complex networks. Cisco AutoQoS offers the fastest way to implement QoS but has limited fine-tuning capabilities; when an AutoQoS configuration has been generated, use CLI commands to fine-tune the configuration if necessary. Although MQC is much easier to use than the legacy CLI method, Cisco AutoQoS further simplifies the configuration of QoS. As a result, the fastest possible implementation is usually accomplished using Cisco AutoQoS. Note: On most networks, fine-tuning is not necessary for Cisco AutoQoS. The Cisco SDM QoS wizard lets you easily configure QoS services and other features, such as IPsec VPNs, on Cisco routers while enabling proactive management through performance monitoring. When you use the Cisco SDM QoS wizard, you can remotely configure and monitor your Cisco routers without using the Cisco IOS software CLI. The GUI helps people who are not expert users of Cisco IOS software with day-to-day operations and provides easy-to-use smart wizards for configuring QoS policies.

98 Summary Cisco AutoQoS simplifies and shortens the QoS deployment cycle. Cisco AutoQoS helps in all of the five major aspects of successful QoS deployments. Cisco AutoQoS simplifies deployment and speeds provisioning of Quality of Service technology over a Cisco network infrastructure. It reduces human error and lowers training costs. Cisco Security Device Manager (SDM) is an intuitive, web-based device management tool that was created for easy and reliable deployment and management of Cisco IOS routers.

99 Self Check
1. What are the requirements that must be met in order to run AutoQoS?
2. What command is used to enable AutoQoS on an interface?
3. What traffic classes are supported by SDM?
4. Which method of configuring QoS is the hardest to implement, requires the most time, and offers the least modularity?
Answers:
1. You must remove any pre-existing QoS configuration from the (WAN) interface, ensure that Cisco Express Forwarding (CEF) is active (CEF is a prerequisite for NBAR), enable NBAR because Cisco AutoQoS uses it for traffic classification, make sure that the correct bandwidth is configured on the interface, and finally enable AutoQoS on the interface.
2. In interface configuration mode, use the auto qos command.
3. Real-time, business-critical, and best-effort.
4. The legacy command-line interface (CLI) method.

100 Q and A

101 Resources
Cisco AutoQoS Q&A
SDM Demo Lab (Live Demo)
Cisco SDM Multimedia Demo
SDM Presentations (VoDs)
SDM Homepage

102

