Presentation on theme: "Video over IP – Get the Picture!"— Presentation transcript:
1. Video over IP – Get the Picture!
IneoQuest Technologies, Inc.
IP Video Basics session
Presenter: Rico E. Vitale (603)
2. Video over IP Training Agenda – Video over IP Basics
IneoQuest Overview
Principles of Video over IP
Compression Overview
MPEG Data Streams
Networking Fundamentals
Video over IP: Unicasting / Multicasting
Video over IP – Monitoring & Measurements
IneoQuest Solutions
References & Contact Information
3. Company Overview
Founded in 2001, based in Mansfield, MA
Fast and steady growth: greater than 670% three-year growth rate
Recognized as one of the top ten fastest-growing companies (Boston Business Journal)
IP Video Measurement and Quality/Service Assurance Solutions
More than 300 unique customers worldwide
Markets: Telecom Tier 1/2/3, MSO Cable, Broadcast/Satellite, Equipment Manufacturers
Direct sales and support in North America, Europe, and Asia
Committed to helping service providers improve video quality and control OPEX
Pioneering open streaming IP Video standards; co-author, with Cisco, of the Media Delivery Index (RFC 4445)
4. Does this annoy you?
Dropping 1 IP packet in every 400 packets (1 per second)
5. Why monitor video at all?
"So quiet you can hear a pin drop!" – US Sprint, 1986
Voice customers are LESS demanding; consumers are far less forgiving of poor video quality than of poor voice calls or data connections
Viewers have become even more demanding since the arrival of HD
Very little loss can have a detrimental effect on video and on the viewer's Quality of Experience (QoE)
6. Principles of Video over IP
Given good-quality source video, packet loss is the only thing an IP transport network can do to affect video quality.
The first principle of Video over IP is that the quality of the video content must be good going into the IP network. The only thing an IP network can do to affect the quality of IPTV to the home is cause loss: the perceived quality (MOS score) of the video is the same at the headend as it is at the STB if there is no loss within the network. The measurement we are discussing is part of the standard for measuring video delivery quality, the Media Delivery Index (MDI). The parameter of MDI that measures loss is the Media Loss Rate (MLR).
MDI = DF : MLR
Make sure to check the quality BEFORE making millions of copies.
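The MLR half of the MDI can be sketched as a simple rate calculation. This is an illustrative sketch, not code from any IneoQuest product; the function names are hypothetical.

```python
def media_loss_rate(expected_pkts, received_pkts, interval_s):
    """MLR: lost media packets per second over a measurement interval."""
    return (expected_pkts - received_pkts) / interval_s

def mdi(df_ms, mlr):
    """Format the Media Delivery Index the way the slides write it: DF : MLR."""
    return f"{df_ms:.2f}:{mlr:.2f}"

# The earlier slide's example: dropping 1 packet in 400 at roughly 400
# packets/s works out to a sustained MLR of 1 loss per second.
print(mdi(df_ms=9.5, mlr=media_loss_rate(4000, 3990, 10.0)))
```

A nonzero MLR at any measurement node immediately flags the first principle being violated, regardless of what the Delay Factor shows.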
7. Principles of Video over IP
Jitter on a single flow can and will lead to changes in behavior on other flows. Cumulative jitter does not directly affect video quality, but it is an indicator of impending loss.
The second principle of Video over IP is that jitter on a single flow can and will change the behavior of other flows on the network. Cumulative IP jitter does not itself impact video quality, but it can be used as an indicator of impending loss. We use the analogy of traffic patterns on a highway: if traffic on a highway is all moving at the same speed, then as entering traffic merges it will have to slow down, and/or the traffic already on the highway will have to alter speed and direction to allow the merge. This cumulative IP jitter is captured by the Delay Factor (DF).
MDI = DF : MLR
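The Delay Factor's virtual-buffer idea can be sketched as follows. This is a simplified illustration assuming fixed-size packets and a single measurement window; RFC 4445 defines DF as the spread of a virtual buffer that fills on each packet arrival and drains at the nominal media rate, reported per interval.

```python
def delay_factor_ms(arrival_times_s, pkt_bytes, media_rate_bps):
    """Sketch of the RFC 4445 Delay Factor: track a virtual buffer that
    fills by pkt_bytes on each arrival and drains at the nominal media
    rate; DF is the buffer spread (max - min) expressed as milliseconds
    of media at that rate."""
    level = 0.0                      # virtual buffer level, in bytes
    lo = hi = 0.0
    prev_t = arrival_times_s[0]
    for t in arrival_times_s:
        level -= (t - prev_t) * media_rate_bps / 8   # drain since last arrival
        lo = min(lo, level)
        level += pkt_bytes                           # packet arrives
        hi = max(hi, level)
        prev_t = t
    return (hi - lo) * 8 / media_rate_bps * 1000.0
```

With perfectly paced arrivals the DF collapses to a single packet's worth of media time; bursty or delayed arrivals widen the spread, signaling that downstream buffers are being stressed before any packet is actually lost.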
8. Principles of Video over IP
[Diagram: monitoring channels 130–134]
All programs should be inspected continuously to effectively monitor IPTV throughout a network.
The third principle of Video over IP is that, to monitor Video over IP quality correctly at any measurement node, ALL live flows must be monitored continuously for errors in quality and/or delivery. Inspecting live IPTV traffic by sampling has obvious holes: to guarantee delivery of video quality to the subscriber, each program needs to be monitored all the time. The second principle (a flow that has jitter will affect other flows in the network) also implies that all flows need to be monitored in order to watch for Delay Factor changes across the network. Live video flow inspection is critical: e.g., if you want to know whether ESPN is good or bad, you need to look at ESPN.
MDI = DF : MLR
Monitor ALL live IPTV flows. What you don't watch, your customer does!
10. Video and Audio Compression – Compression Overview
Video Compression
Key to Compression: Remove Redundancy
Video Compression Formats
MPEG Compression Technologies
MPEG Video Compression
MPEG Audio Compression
11. The Need for Compression
Storage Requirements: digital storage costs are decreasing significantly, but it is still very expensive to store uncompressed TV data. A two-hour SD television program ≈ 200 GB.
Bandwidth Requirements: transmitting uncompressed data over any significant distance is extremely difficult. Uncompressed Standard Definition (SD) digital video requires > 200 Mb/s; uncompressed High Definition (HD) digital video requires > 1 Gb/s.
Processing Power / Hardware Requirements: processing large amounts of video data (storage) in real time (bandwidth).
Digitizing an analog standard-definition NTSC signal via the 8-bit CCIR 601 standard produces a continuous stream of digital bits on the order of 216 Mb/s. An hour of digitized standard-definition television would therefore produce 97.2 GB. HDTV has even higher bit rates and therefore higher storage requirements. These kinds of data rates are prohibitive in terms of storage requirements, transmission bandwidth, and processor demands.
Storage: while the price of digital storage has decreased significantly, it would still be very expensive to store a reasonable amount of uncompressed TV data. A two-hour standard-definition TV program would require almost 200 GB. A DVD disc holds 4.7 GB, or just a few minutes of uncompressed video; the fact that a DVD can store 3 hours of compressed video attests to the power of compression.
Bandwidth: transmitting 216 Mb/s of uncompressed data over any significant distance is extremely difficult with today's technology. Even some of the highest-bandwidth digital channels, such as the connection between a hard drive and the CPU in a computer, would be maxed out at these data rates. Transmitting over longer distances is impossible with commercial networking technologies: cable and DSL broadband data services are on the order of only a few Mb/s, while conventional Ethernet within a local area network (LAN) is capable of only 10 or in some cases 100 Mb/s.
A Video over IP service that attempts to distribute digital TV signals over broadband and Ethernet is therefore going to require a drastically reduced bit rate to work.
Processing Power: watching digital TV requires reading the digital pixels and recreating the images. Sometimes additional processing is also required, such as changing the size of the image to fit a particular monitor or display, or playing the video faster to fast-forward or rewind. As the number of operations that must be performed on each pixel grows, the processor may be asked for many billions of operations per second.
So you can see why compression is important to the storage and delivery of Video over IP services. Without proper use of data compression techniques, either the picture would look much worse or the movie would require much more disk space.
Video technology users are free to choose whether or not to use compression for their video signals. It is important to understand that the choice of a compression method can sometimes mean the difference between success and failure of a video networking project.
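The CCIR 601 figures quoted in the notes can be checked with a few lines of arithmetic: 8-bit 4:2:2 sampling means one luma channel at 13.5 MHz plus two chroma channels at 6.75 MHz each.

```python
# Work out the uncompressed SD numbers quoted in the notes.
LUMA_HZ, CHROMA_HZ, BITS_PER_SAMPLE = 13.5e6, 6.75e6, 8

rate_bps = (LUMA_HZ + 2 * CHROMA_HZ) * BITS_PER_SAMPLE  # 216 Mb/s
hour_bytes = rate_bps * 3600 / 8                        # 97.2 GB per hour
two_hour_gb = rate_bps * 7200 / 8 / 1e9                 # ~194 GB ("almost 200 GB")
dvd_minutes = 4.7e9 * 8 / rate_bps / 60                 # a 4.7 GB DVD: ~2.9 minutes
```

All three of the slide's claims (216 Mb/s, 97.2 GB/hour, "a few minutes" of uncompressed video on a DVD) fall straight out of the sampling rates.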
12. Video Compression
The goal of video compression is to reduce the quantity of data used to represent video content without substantially reducing the quality of the picture.
[Diagram: film or video camera → digitization (uncompressed analog video sequence) → encode/compression → transport (compressed digital bitstream) → decode → analog TV or digital display]
Digital video requires high data rates: the better the picture, the more data is needed. This means powerful hardware, and lots of bandwidth when video is transmitted. Video compression refers to reducing the quantity of data used to represent video content without excessively reducing the quality of the picture. It also reduces the number of bits required to store and/or transmit digital media. Compressed video can be transmitted more economically over a smaller bandwidth.
Video compression is based on two principles. The first is the spatial redundancy that exists in each frame. The second is the fact that, most of the time, a video frame is very similar to its immediate neighbors; this is called temporal redundancy. A typical video compression technique therefore starts by encoding the first frame using a still-image compression method. It then encodes each successive frame by identifying the differences between the frame and its predecessor, and encoding those differences. If the frame is very different from its predecessor (as happens with the first frame of a shot), it is coded independently of any other frame. In video compression, a frame that is coded using its predecessor is called an inter frame, while a frame that is coded independently is called an intra frame.
This is the generalized process of compressing digital video for delivery over transport networks, where it is decoded back into digital or analog video.
13. Key to Compression: Remove Redundancy
Video compression algorithms take advantage of several types of redundancy to reduce the size of the video stream.
Spatial Redundancy: pixels can be encoded in groups (macroblocks); the color and brightness of neighboring pixels often have similar values.
Temporal Redundancy: changes in an object's location and motion are normally very small from video frame to frame.
Coding Redundancy: patterns and common motions often recur in video.
Perceptual Coding Redundancy: the human eye cannot perceive minute differences in color and brightness.
Compression algorithms are able to reduce the size of a video bitstream significantly because video typically contains duplicate or redundant information both within and between frames. Image and video compression algorithms can take advantage of several types of redundancy to reduce the size of the resulting bitstream.
Spatial Redundancy: the spatial method takes advantage of the fact that the human eye is unable to distinguish small differences in color. Neighboring pixels in an image often have similar values; the color or brightness of an object typically does not vary significantly over small areas. Instead of encoding each pixel individually, a compression algorithm can save bits by encoding only the difference between neighboring pixels. That difference is typically a smaller value than the full range of possible pixel values and can therefore be encoded with fewer bits.
Temporal Redundancy: under the NTSC standard, 30 television frames per second is the norm. During the 33 ms between frames, most of the image changes little or moves slightly from one place to another. Compression algorithms often encode only the small changes, or the direction that a part of the image moved between frames.
These changes typically require fewer bits than representing the whole image again.
Coding Redundancy: some patterns and motions are more common than others in natural images. The most frequent patterns can be encoded more efficiently than less frequent ones. This form of variable-length coding assigns fewer bits to more common codes. Encoding the small motions with few bits and the large ones with more bits results in a more efficient system than encoding all of them with the same number of bits.
Perceptual Coding Redundancy: the most efficient compression algorithms take advantage of human biology as well. Not all visual patterns are equally visible to the human eye; for example, fine details under low light or low contrast may not be visible. Efficient algorithms remove perceptually redundant patterns that are not visible to the human eye.
Fewer bits (storage) and fewer bits/second (bandwidth).
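The spatial-redundancy idea of "encoding only the difference between neighboring pixels" can be shown in a few lines. This is a toy sketch of differential coding on one scanline, not any particular codec's algorithm.

```python
def delta_encode(row):
    """Spatial-redundancy sketch: keep the first pixel value, then store
    only the difference from the previous pixel. Neighboring pixels are
    usually similar, so the deltas cluster near zero and can be coded
    with fewer bits (e.g. by a variable-length code)."""
    return [row[0]] + [cur - prev for prev, cur in zip(row, row[1:])]

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    vals = [deltas[0]]
    for d in deltas[1:]:
        vals.append(vals[-1] + d)
    return vals
```

Running `delta_encode` on a smooth scanline such as `[100, 101, 101, 103]` yields `[100, 1, 0, 2]`: small values that a variable-length code can represent far more cheaply than four full 8-bit samples.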
14. MPEG Compression
MPEG generally takes analog or digital video signals and converts them to packets of digital data that are more efficiently transported over a network.
The MPEG system consists of two layers:
System Layer (timing information to synchronize video and audio)
Compression Layer (includes audio and video streams)
The job of MPEG compression is to take analog or digital video signals and convert them to packets of digital data that are more efficiently transported over a network. Being digital, the stream has the following advantages: the signal does not degrade, the picture does not get fuzzy, and the signal-to-noise ratio goes down slowly.
The system decoder extracts the timing information from the MPEG system stream and sends it to the other system components. The decoder also demultiplexes the video and audio streams from the system stream and passes each on to the appropriate decoder. The video and audio decoders decompress the information as specified in parts 2 and 3 of the MPEG standard, respectively.
[Diagram: general MPEG decoding system]
15. MPEG Compression: I, P, B Frames & Group of Pictures
A frame is a single image from a video sequence.
An I frame (initial, intra) is a frame that is compressed based solely on the information contained in the frame.
A P frame (predicted) is a frame that has been compressed using the data contained in the frame itself and data from the closest preceding I or P frame.
A B frame (bi-directionally predicted) is a frame that has been compressed using data from the closest preceding I or P frame and the closest following I or P frame.
A Group of Pictures, or GOP, is a series of frames consisting of a single I frame and zero or more P and B frames.
[Chart: relative frame sizes; an I frame is approximately 64,000 bytes, with P and B frames progressively smaller. A GOP typically contains 12 frames and starts with an I frame.]
The MPEG standard is primarily a bitstream specification, although it also specifies a typical decoding process to assist in interpreting the bitstream specification. This approach supports data interchange but does not restrict innovation in the means for creating or decoding that bitstream. The bitstream specification is based on a data hierarchy. The data hierarchy is fairly self-explanatory and is useful for the following reasons: groups of pictures allow random access into a sequence, and slices aid error recovery, in that if one slice contains an error it can be skipped.
Video Sequence = sequence header + n GOPs + end of sequence
GOP = Group of Pictures (frames) = header + series of pictures (frames)
Picture (Frame) = the primary coding unit of a video sequence. It consists of 3 matrices: Y, Cb, and Cr. An RGB system can easily be converted to the Y, Cb, Cr system by a linear transformation.
Slice = group of macroblocks
Macroblock = 2 x 2 matrix of blocks
Block = 8 x 8 pixel set
In simple terms, the ascending order of the hierarchy is [ Block --> Macroblock --> Slice --> Frame ].
The video bitstream architecture is based on a sequence of pictures, each of which contains the data needed to create a single displayable image.
I frames, P frames, B frames, and Group of Pictures are all terms that describe the way picture data is structured in an MPEG video stream or file.
MPEG-2 (and MPEG-1) video compression makes use of the Discrete Cosine Transform (DCT) algorithm to transform 8x8 blocks of pixels into variable-length codes (VLCs). These VLCs are the representation of the quantized coefficients from the DCT. MPEG-2 encoders produce three types of frames: Intra (I) frames, Predictive (P) frames, and Bidirectional (B) frames.
To understand why MPEG uses these different frames, it is illuminating to look at the amount of data required to represent each frame type. With a video image of normal complexity, a P frame takes 2-4 times less data than an I frame, and a B frame takes 2-5 times less data than a P frame.
In MPEG-2, three 'picture types' are defined. The picture type defines which prediction modes may be used to code each block. 'Intra' pictures (I-pictures) are coded without reference to other pictures. Moderate compression is achieved by reducing spatial redundancy, but not temporal redundancy. They can be used periodically to provide access points in the bitstream where decoding can begin.
'Predictive' pictures (P-pictures) can use the previous I- or P-picture for motion compensation and may be used as a reference for further prediction. Each block in a P-picture can either be predicted or intra-coded. By reducing both spatial and temporal redundancy, P-pictures offer increased compression compared to I-pictures. 'Bidirectionally-predictive' pictures (B-pictures) can use the previous and next I- or P-pictures for motion compensation, and offer the highest degree of compression. Each block in a B-picture can be forward, backward, or bidirectionally predicted, or intra-coded.
To enable backward prediction from a future frame, the coder reorders the pictures from natural 'display' order to 'bitstream' order, so that a B-picture is transmitted after the previous and next pictures it references. This introduces a reordering delay that depends on the number of consecutive B-pictures.
[Chart: relative amounts of data for each frame type in a typical MPEG GOP]
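The frame-size relationships above make a back-of-the-envelope GOP size estimate easy. This sketch uses the slide's ~64 KB I frame and illustrative ratios (P ≈ I/3, B ≈ P/2, both inside the quoted "2-4x" and "2-5x" ranges); the function and its defaults are hypothetical, not from any standard.

```python
def gop_bytes(i_bytes=64_000, n=12, m=3, i_to_p=3.0, p_to_b=2.0):
    """Rough size of one GOP: one I frame plus its P and B frames.
    n = frames per GOP, m = anchor-frame (I/P) spacing, and the ratios
    are illustrative midpoints of the ranges quoted in the notes."""
    n_p = n // m - 1            # P frames (3 for the classic N=12, M=3 GOP)
    n_b = n - 1 - n_p           # the remaining non-I frames are B frames
    p_bytes = i_bytes / i_to_p
    b_bytes = p_bytes / p_to_b
    return i_bytes + n_p * p_bytes + n_b * b_bytes
```

With the defaults, the 12-frame GOP comes to roughly 213 KB, which is why losing the single I frame is so much more damaging than losing one of the eight small B frames.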
18. MPEG Compression: I, P, B Frames & Group of Pictures
The order in which video frames are transmitted can be different from the order in which they are displayed.
A typical GOP in display order is: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12
The corresponding bitstream order is: I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11
MPEG can also use a variable GOP to better deal with complex video (not shown); this concentrates I frames together during complex scenes.
The video bitstream architecture is based on a sequence of pictures, each of which contains the data needed to create a single displayable image. I frames, P frames, B frames, and Group of Pictures are all terms that describe the way picture data is structured in an MPEG video stream or file.
In MPEG video encoding, a group of pictures, or GOP, specifies the order in which intra-frames and inter-frames are arranged. The GOP is a group of successive pictures within an MPEG-coded film or video stream, and each such stream consists of successive GOPs. The visible frames are generated from the MPEG pictures the GOP contains.
A GOP can contain the following picture types:
I-picture or I-frame (intra-coded picture): a reference picture that corresponds to a fixed image and is independent of other picture types. Each GOP begins with this type of picture.
P-picture or P-frame (predictive-coded picture): contains difference information from the preceding I- or P-frame.
B-picture or B-frame (bidirectionally predictive-coded picture): contains difference information from the preceding and/or following I- or P-frame.
A GOP always begins with an I-frame. Several P-frames follow, each separated by a few frames; the remaining gaps are filled with B-frames. With the next I-frame, a new GOP begins.
The different picture types typically occur in a repeating sequence, termed a 'Group of Pictures' or GOP.
A typical GOP in display order is: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12
The corresponding bitstream order is: I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11
A regular GOP structure can be described with two parameters: N, the number of pictures in the GOP, and M, the spacing of P-pictures. The GOP given here is described as N=12 and M=3. MPEG-2 does not insist on a regular GOP structure. For example, a P-picture following a shot change may be badly predicted, since the reference picture for prediction is completely different from the picture being predicted; it may therefore be beneficial to code it as an I-picture instead.
For a given decoded picture quality, coding with each picture type produces a different number of bits. In a typical example sequence, a coded I-picture was three times larger than a coded P-picture, which was itself 50% larger than a coded B-picture.
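The display-to-bitstream reordering described above can be expressed as a short function: hold each run of B frames until the anchor (I or P) frame they reference forward has been sent. This is an illustrative sketch of the reordering rule, not an encoder implementation.

```python
def bitstream_order(display):
    """Reorder display-order frame labels into transmission (bitstream)
    order: each I/P anchor is sent before the B frames that reference it."""
    out, pending_b = [], []
    for f in display:
        if f.startswith("B"):
            pending_b.append(f)     # hold B frames until their next anchor
        else:
            out.append(f)           # send the anchor first...
            out.extend(pending_b)   # ...then the Bs that depended on it
            pending_b = []
    return out + pending_b
```

Applied to the slide's N=12, M=3 GOP, this reproduces exactly the bitstream order shown above, and makes the reordering delay visible: the decoder cannot emit B1 until I3 has arrived.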
19. Every Packet Counts
Video and audio CODECs remove large amounts of redundancy, creating highly compressed data streams. Very small interruptions in the data stream can significantly reduce video quality.
1st Principle: Given good-quality source video, packet loss is the only thing an IP transport network can do to affect video quality.
20. Building an MPEG Bitstream – Formatting MPEG Video for Transmission
21. Building an MPEG Bitstream – System Layer Overview
Elementary Streams (ES)
Packetized Elementary Stream (PES)
Program Stream (PS)
Transport Stream (TS)
Program Clock Reference (PCR)
22. System Layer: MPEG Stream Types
MPEG stream types: Elementary Streams, Packetized Elementary Streams, Program Streams, Transport Streams.
[Diagram: video and audio encoders produce elementary streams (ES); packetizers produce video and audio PES; a transport stream multiplexer combines the PES with PSIP data into a multiple-program transport stream]
The MPEG system layer describes how video and audio bitstreams are packetized and interleaved into programs, and how they are mixed with additional information that is used to separate out the desired audio and video components.
The compressed video and audio streams are called elementary streams. These are broken into packets, called packetized elementary streams (PES). The various streams from a single program are interleaved into program streams. Finally, all of the programs are interleaved, along with descriptive data, into a transport stream. The descriptive data includes information about the digital channels contained within the transport stream, such as which audio and video streams are grouped into the various programs.
Reference page(s): 187
23. System Layer: Program Stream
A Program Stream (PS) carries a single program. In MPEG, a program is a combination of video, audio, and related data. All information in the program stream must share a common time base. Typically one video stream is combined with one or more audio streams.
[Diagram: Video PES + Audio PES 1 + Audio PES 2 = Program Stream, divided into packets, each with a packet header]
Is this also done for closed captioning? Answer: yes.
Program stream (PS or MPEG-PS) is a name for the formats specified in MPEG-1 Systems and MPEG-2 Part 1, Systems (ISO/IEC standard). It is a container format designed for reasonably reliable media such as disks (DVD) and other error-free environments, in contrast to the transport stream, which is for data transmission in which loss of data is likely.
Typically one video stream is combined with one or more audio streams and possibly some data streams. Generally, the first audio stream carries the stereo soundtrack for the video. Other audio streams can include surround sound, alternate languages, and commentaries. Data streams include captioning information, program stream info (title length, owner, content descriptions, etc.), and anything else the content owner wants to provide. Control information is another type of data, and encryption systems can also require data to be included.
A program stream can include multiple streams (up to 16 video, 32 audio, and 16 data streams). The interleaving of the various PES packets (video, audio, and data) results in a program stream that is further packetized: it is broken up into chunks called 'packs', usually 2048 bytes long. As with PES packets, a program stream pack also has a header that provides information about the pack.
Reference page(s): 191
24. System Layer: Transport Stream
A Transport Stream (TS) contains one or more program streams along with additional information. The transport stream breaks the elementary streams into fixed-length packets. A transport stream containing a single program is called a Single Program Transport Stream (SPTS); a transport stream with more than one program is called a Multi-Program Transport Stream (MPTS).
[Diagram: program streams and a data stream multiplexed into a transport stream; 1 packet = 188 bytes, with a 4-byte TS packet header]
Transport streams, the final layer in the MPEG-2 system layer hierarchy, contain one or more program streams multiplexed together, along with additional information describing the various program streams they contain. MPEG-2 transport streams are used in digital cable, digital satellite, and HDTV broadcast systems. Typically, an MPTS contains several different programs plus descriptive information, and the whole MPTS is delivered as a single bitstream over a cable or satellite broadcast system.
Like program streams, transport streams are formed by breaking up elementary streams and/or program streams into TS packets. The packets are a fixed 188 bytes, including a 4-byte transport stream header. Each packet contains data from only a single elementary stream, so each packet is either video, audio, or control information, but never a mixture. Forward Error Correction (FEC) codes can also be added to transport stream packets through standardized codes such as Reed-Solomon, adding 16 or 20 bytes to the 188-byte packet and bringing the total packet length up to 204 or 208 bytes.
Packets of 204 bytes are commonly used in applications relating to the Digital Video Broadcasting (DVB) consortium series of specifications, which have widespread use in Europe and in a number of DTH satellite systems. Packets of 208 bytes are used in Advanced Television Systems Committee (ATSC) applications, which are used in terrestrial digital television broadcasts in the USA (sometimes called DTV or HDTV). The primary benefit of adding RS coding to the transport stream packets is to give the receiver the ability to recover from transmission errors.
Reference page(s): 192
25. System Layer: Transport Stream MPEG Packet & Header
One TS MPEG packet (188 bytes) = header + payload, with a minimum 4-byte header:
Sync Byte (8 bits), Transport Error Indicator (1), Payload Unit Start Indicator (1), Transport Priority (1), PID (13), Scrambling Control (2), Adaptation Field Control (2), Continuity Counter (4)
Optional adaptation field: Adaptation Field Length (8 bits), Discontinuity Indicator (1), Random Access Indicator (1), ES Priority Indicator (1), 5 Flags (5), Optional Fields, Stuffing Bytes
In this minimum 4-byte header, the most important information is:
The sync byte. This byte is recognized by the decoder so that the header and the payload can be deserialized.
The transport error indicator. This indicator is set if the error-correction layer above the transport layer is experiencing a raw bit error rate (BER) too high to be correctable. It indicates that the packet may contain errors.
The packet identifier (PID). This thirteen-bit code is used to distinguish between different types of packets.
The continuity counter. This four-bit value is incremented by the multiplexer as each new packet with the same PID is sent. It is used to determine if any packets are lost, repeated, or out of sequence.
The 13-bit PID field in the transport packet header is used by the demultiplexer to distinguish between packets containing different types of information. The transport-stream bit rate must be constant, even though the sum of the rates of all the streams it contains can vary. This requirement is handled by the use of null packets: if the real payload rate falls, more null packets are inserted. Null packets always have the same PID, 8191 (thirteen ones in binary).
In a given transport stream, all packets belonging to a given elementary stream have the same PID, so the demultiplexer can select all data for a given elementary stream simply by accepting only packets with the right PID.
Data for an entire program can be selected using the PIDs for its video, audio, and data streams such as subtitles or teletext. The demultiplexer can correctly select packets only if it can correctly associate them with the elementary stream to which they belong, and it can do this only if it knows what the right PIDs are. This is the function of the Program Specific Information (PSI).
Optional adaptation-field contents include: PCR (48 bits), OPCR (48 bits), Splice Countdown (8 bits), Transport Private Data, and Adaptation Field Extension.
Reference page(s): 193
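The fixed 4-byte header layout above maps directly onto a few bit operations. As a minimal sketch (field names follow the slide; this is not a full demultiplexer):

```python
def parse_ts_header(pkt):
    """Parse the fixed 4-byte MPEG-2 TS packet header, following the bit
    layout on the slide: sync (8), transport error indicator (1), payload
    unit start indicator (1), priority (1), PID (13), scrambling control
    (2), adaptation field control (2), continuity counter (4)."""
    if len(pkt) < 188 or pkt[0] != 0x47:
        raise ValueError("not a TS packet (sync byte must be 0x47)")
    return {
        "transport_error": bool(pkt[1] & 0x80),
        "payload_start":   bool(pkt[1] & 0x40),
        "priority":        bool(pkt[1] & 0x20),
        "pid":             ((pkt[1] & 0x1F) << 8) | pkt[2],
        "scrambling":      (pkt[3] >> 6) & 0x3,
        "adaptation_ctrl": (pkt[3] >> 4) & 0x3,
        "continuity":      pkt[3] & 0x0F,
    }
```

Feeding it a null packet (PID 8191, thirteen ones) shows the decoder's first two checks in practice: lock onto the 0x47 sync byte, then route the packet by PID.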
26. Transport Stream: Constant & Variable Bit Rates
CBR:
The rate at which the CODEC's data stream is consumed by the decoder is constant.
Useful in streaming media when the transport medium is a fixed resource.
Usually created by stuffing null packets into the transport stream.
VBR:
The CODEC can vary the amount of output data per time segment.
More bits are allocated to more complex content.
Uses less overall bandwidth; no stuffing.
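The null-stuffing mechanism behind CBR is simple arithmetic: whatever gap remains between the payload rate and the constant mux rate is filled with 188-byte null packets (PID 8191). A small sketch, with an illustrative function name:

```python
def null_packets_per_second(mux_rate_bps, payload_bps, pkt_bytes=188):
    """CBR stuffing sketch: how many null packets (PID 8191) the
    multiplexer inserts each second to pad a variable payload up to the
    constant transport-stream rate."""
    spare_bits = mux_rate_bps - payload_bps
    return int(spare_bits // (pkt_bytes * 8))
```

When the payload rate rises during a complex scene, fewer null packets are inserted; the transport-stream rate itself never moves.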
27. Program Specific Information & Packet Identifiers (PIDs)
Each program stream within an MPEG TS is identified by unique 13-bit Packet Identifiers (PIDs).
Standardized PIDs: Program Association Table (PAT), Program Map Table (PMT), stuffing.
Configurable PIDs: video, audio, data.
Program Specific Information (PSI) is carried in packets having unique PIDs, some of which are standardized and some of which are specified in the Program Association Table (PAT), the Conditional Access Table (CAT), and the Transport Stream Description Table (TSDT). These packets must be included periodically in every transport stream.
The PAT always has a PID of 0, the CAT a PID of 1, and the TSDT a PID of 2. These values, and the null-packet PID of 8191, are the only PIDs fixed by the MPEG standard; the decoder must determine all remaining PIDs by accessing the appropriate tables.
The programs that exist in the transport stream are listed in the PAT packets (PID = 0), which carry the PID of each PMT packet. The first entry in the PAT, program 0, is reserved for network data and contains the PID of the Network Information Table (NIT) packets. Usage of the NIT is optional in MPEG-2 but mandatory in DVB.
The PIDs of the video, audio, and data elementary streams that belong to the same program are listed in the Program Map Table (PMT) packets. Each PMT packet normally has its own PID, though MPEG-2 does not mandate this; the program number within each PMT uniquely defines it.
A given NIT contains details of more than just the transport stream carrying it. Also included are details of other transport streams that may be available to the same decoder, for example by tuning to a different RF channel or steering a dish to a different satellite. The NIT may list a number of other transport streams, and each one must have a descriptor that specifies its radio frequency, orbital position, and so on.
In DVB, additional metadata, known as DVB-SI, is included, and the NIT is considered part of DVB-SI. When discussing the subject in general, the term PSI/SI is used.
Upon first receiving a transport stream, the decoder must look for PIDs 0 and 1 in the packet headers. All PID 0 packets contain the PAT; all PID 1 packets contain CAT data. By reading the PAT, the decoder can find the PIDs of the NIT and of each Program Map Table (PMT). By finding the PMTs, the decoder can find the PIDs of each elementary stream.
Consequently, if the decoding of a particular program is required, reference to the PAT and then the PMT is all that is needed to find the PIDs of all the elementary streams in the program. If the program is encrypted, access to the CAT will also be necessary. Since demultiplexing is impossible without a PAT, channel lock-up speed is a function of how often the PAT packets are sent. MPEG specifies a maximum interval of 0.5 seconds for the PAT packets and for the PMT packets referred to in those PAT packets.
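The PAT-then-PMT lookup chain can be sketched with plain dictionaries standing in for parsed table sections (the table contents and PID values below are hypothetical examples, not from any real stream):

```python
def find_program_pids(pat, pmts, program_number):
    """PSI lookup sketch: the PAT (always PID 0) maps program number ->
    PMT PID, and that PMT then lists the PIDs of the program's
    elementary streams. Real decoders parse these from TS packets;
    here plain dicts stand in for the parsed tables."""
    pmt_pid = pat[program_number]
    return pmts[pmt_pid]

# Hypothetical parsed tables for a two-program MPTS:
pat = {1: 0x0020, 2: 0x0030}
pmts = {
    0x0020: {"video": 0x0100, "audio": 0x0101},
    0x0030: {"video": 0x0200, "audio": 0x0201},
}
```

Two lookups are all it takes to go from "tune program 2" to the exact video and audio PIDs to filter, which is why PAT repetition interval directly bounds channel-change time.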
28Video over IP Training Program Clock Reference (PCR) To assist the decoder in presenting programs on time, at the right speed, and with audio synchronization, programs usually periodically provide a Program Clock Reference, or PCR, on one of the PIDs in the program. Simplified: the TS generator takes a "snapshot" of its reference clock and sends it in the video packets; the decoder reads the snapshots and corrects its internal clock. Encoder clock (reference, transmitted) → decoder clock (recovered, corrected).
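Each PCR snapshot is a 42-bit value: a 33-bit base counted in 90 kHz units plus a 9-bit extension counted in 27 MHz units. A small sketch of how a decoder converts snapshots to stream time:

```python
PCR_CLOCK_HZ = 27_000_000  # MPEG-2 system clock runs at 27 MHz

def pcr_to_seconds(pcr_base, pcr_ext):
    """PCR = base (90 kHz units) * 300 + extension (27 MHz units),
    giving a timestamp in 27 MHz ticks; convert to seconds."""
    return (pcr_base * 300 + pcr_ext) / PCR_CLOCK_HZ

# Two snapshots taken one second apart in stream time:
t0 = pcr_to_seconds(90_000 * 10, 0)  # 10.0 s
t1 = pcr_to_seconds(90_000 * 11, 0)  # 11.0 s
print(t1 - t0)  # 1.0
```

Comparing the interval between received snapshots with its own elapsed time is how the decoder detects and corrects clock drift.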
29Video over IP Training MPEG Encoding / Transmission / Decoding The video bitstream architecture is based on a sequence of pictures, each of which contains the data needed to create a single displayable image. I-frames, P-frames, B-frames and the Group of Pictures are all terms that describe the way picture data is structured in an MPEG video stream or file. In MPEG video encoding, a Group of Pictures, or GOP, specifies the order in which intra-frames and inter-frames are arranged. The GOP is a group of successive pictures within an MPEG-coded video stream; each MPEG-coded stream consists of successive GOPs, and the visible frames are generated from the pictures they contain. A GOP can contain the following picture types: an I-picture or I-frame (intra-coded picture) is a reference picture that corresponds to a complete image and is independent of other picture types; each GOP begins with this type of picture. A P-picture or P-frame (predictive-coded picture) contains difference information from the preceding I- or P-frame. A B-picture or B-frame (bidirectionally predictive-coded picture) contains difference information from the preceding and/or following I- or P-frame. A GOP always begins with an I-frame. Several P-frames follow, each a few frames apart, and B-frames fill the remaining gaps; a new GOP begins with the next I-frame. The different picture types typically occur in a repeating sequence. A typical GOP in display order is: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12. The corresponding bitstream order is: I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11. A regular GOP structure can be described with two parameters: N, the number of pictures in the GOP, and M, the spacing of P-pictures. The GOP given here is described as N=12 and M=3. MPEG-2 does not insist on a regular GOP structure.
For example, a P-picture following a shot-change may be badly predicted since the reference picture for prediction is completely different from the picture being predicted. Thus, it may be beneficial to code it as an I-picture instead.For a given decoded picture quality, coding using each picture type produces a different number of bits. In a typical example sequence, a coded I-picture was three times larger than a coded P-picture, which was itself 50% larger than a coded B-picture.
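The display-to-bitstream reordering above can be sketched in a few lines. This is a simplified model: each I/P anchor frame is transmitted ahead of the B-frames that precede it in display order, because those B-frames are predicted from it:

```python
def display_to_bitstream(gop):
    """Reorder a GOP from display order to transmission (bitstream)
    order: B-frames are held back until the anchor (I or P) they
    depend on has been emitted."""
    out, pending_b = [], []
    for frame in gop:
        if frame[0] == "B":
            pending_b.append(frame)  # buffer Bs until their anchor arrives
        else:                        # I- or P-frame anchor
            out.append(frame)        # send the anchor first...
            out.extend(pending_b)    # ...then the Bs predicted from it
            pending_b = []
    return out + pending_b

display = ["B1", "B2", "I3", "B4", "B5", "P6",
           "B7", "B8", "P9", "B10", "B11", "P12"]
print(" ".join(display_to_bitstream(display)))
# I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11
```

The output matches the bitstream order given in the text, which is why decoders need a small reordering buffer before display.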
30NETWORKING FUNDAMENTALS for VIDEO PROFESSIONALS Data Transmission over IP Networks
31Video over IP Training IP Networking Fundamentals for Video Professionals Network OverviewIP ProtocolLayer Model & OSI ModelIP Addresses, Datagrams and PacketsEthernet and IPEthernet AddressingEthernet InterfacesVideo over IPMPEG Streams into IP PacketsPacket TransportMulticastingBasic Concepts & ApplicationsMulticasting System ArchitectureSystem Impact
32Video over IP Training Network Overview A network is comprised of two fundamental parts: the nodes and the links. A node is some type of network device, such as a computer; nodes communicate with other nodes through links, such as cables. IP-based networks form the backbone and make convergence possible. There are basically two different network techniques for establishing communication between nodes on a network: circuit-switched networks (connection-oriented) and packet-switched networks. Networks support a large number of users for communication, partition users for bandwidth efficiency, and permit backup and shared connections. Modern digital technology allows different sectors, e.g. telecom, data, radio and television, to be merged together. This convergence is happening on a global scale and is drastically changing the way in which both people and devices communicate. At the center of this process, forming the backbone and making convergence possible, are IP-based networks. Services and integrated consumer devices for purposes such as telephony, entertainment, security or personal computing are constantly being developed, designed and converged towards a communication standard that is independent from the underlying physical connection. The cable network, for instance, first designed for transmitting television to the consumer, can now also be utilized for sending voice, video, and data. These features are also available over other physical networks, e.g. telephone, mobile phone, satellite and computer networks. The Internet has become the most powerful factor guiding the ongoing convergence process. This is mainly due to the fact that the Internet protocol suite has become a shared standard used with almost any service.
The Internet protocol suite consists primarily of the Internet Protocol (IP) and the Transmission Control Protocol (TCP); consequently, the term TCP/IP commonly refers to the protocol family. Of the two network techniques, circuit switching is used in the traditional telephone system, while packet switching is used in IP-based networks. As network size grows, data partitioning becomes more important: worldwide broadcasts would waste network bandwidth, raise data privacy concerns, and delay data delivery.
33Video over IP Training Network Techniques Connection-Oriented (Circuit-Switched)Dedicated, private connection (circuit) between two points - e.g. telephone callDedicated bandwidth for duration of callBandwidth is wasted if not constantly utilizedPacket-SwitchedData is transferred in small messages (packets) allowing many devices to share the bandwidth of a network connectionNetwork devices divide data stream into packets for transmission and reassemble packets to reconstruct the data streamToo many devices active at once can oversubscribe a network forcing a device to wait to sendMost data networks today are built with packet switched technology due to efficiency (cost) advantagesRequires protocols to permit the efficient use of shared paths and to allow packet data to be directed through interconnected shared paths
34Video over IP Training Network Types & Characteristics Wide Area Network (WAN)Between campus, Metropolitan area, or cross country distancesUsually operate at lower speeds: 56Kbps –155MbpsDelays range from milliseconds to 100s of millisecondsLocal Area Network (LAN)Short haul, within a building or campusUsually higher speed than WAN: 10 Mbps – 1GbpsDelays range from tenths of a millisecond to 10s of millisecondsPersonal Area Network (PAN)Very short range network, usually wireless, that allows devices to work together such as a PDA to Desktop. See IEEE : (ex: Bluetooth, Zigbee)Typical speeds: 20Kbps – 50MbpsVery short end-to-end delays
35Network Terms: Video over IP Training Data Networks Overview A Protocol is a set of rules governing the interactions of two entities. An Application is a user service, such as e-mail or file transfer, that utilizes a network in providing its service. A Packet is a small message with self-contained addressing information; large blocks of data (e.g. files) or streaming data such as video may be forwarded as many small messages or "packets". Bandwidth indicates the information-carrying capacity of a device (switch, router) or link (fiber, copper, RF). A Switch or Hub is a packet-forwarding device that uses only layer 2 information in its forwarding algorithms. A Router or Gateway is a packet-forwarding device that uses only layer 3 information in its forwarding algorithms.
36Video over IP Training Open System Interconnection (OSI) Reference Model Application (7): Provides services directly to user applications. Because of the potentially wide variety of applications, this layer must provide a wealth of services, among them establishing privacy mechanisms, authenticating the intended communication partners, and determining if adequate resources are present. Presentation (6): Performs data transformations to provide a common interface for user applications, including services such as reformatting, data compression, and encryption. Session (5): Establishes, maintains, and ends user connections and manages the interaction between end systems. Services include such things as establishing communications as full or half duplex and grouping data. Transport (4): Insulates the three upper layers (5-7) from having to deal with the complexities of layers 1-3 by providing the functions necessary to guarantee a reliable network link. Among other functions, this layer provides error recovery and flow control between the two end points of the network connection. Network (3): Establishes, maintains, and terminates network connections. Among other functions, standards define how data routing and relaying are handled. Data-Link (2): Ensures the reliability of the physical link established at layer 1. Standards define how data frames are recognized and provide necessary flow control and error handling at the frame level. Physical (1): Controls transmission of the raw bitstream over the transmission medium. Standards for this layer define such parameters as the amount of signal voltage swing, the duration of voltages, etc. Ethernet and IP are not the same: they are layers 2 and 3 of the OSI model.
37Video over IP Training Physical Layer 1 Copper: twisted pair, coax. Pros: lowest cost, simple installation, trained support plentiful. Cons: shorter distances, ingress/egress interference, bandwidth limits. Variants: 10Base2 (coax), 2-pair 10BaseT, 2-pair 100BaseT, 4-pair 1000BaseT, with 4-pair 10GBASE-T coming soon. Higher-speed signalling was made possible over the years by increasing signal processing through more transistors per acre of silicon (pulse shaping, echo cancellation) combined with improved cable construction (improved common-mode rejection, lower loss). Connector types: BNC for coax, RJ telephone-style for twisted pair. Fiber. Pros: long haul, low loss, high bandwidth, interference resistant. Cons: higher cost, more complex installation, optical component ageing. Variants: 10BASE-FL (FOIRL), 100BASE-FX, 1000BASE-SX & -LX, 10GBASE-S, -L, -E (850, 1310, 1550 nm). As for copper, higher-speed signalling was made possible through improved electronic interfaces, improved optics (lasers and detectors), and fiber-optic cabling bandwidth. Connector types: ST, SC, LC (a smaller, RJ-like connector), MT-RJ. RF. Pros: wireless, mobility. Cons: limited shared spectrum, interference, higher cost, lowest bandwidth, distance limits. WiFi: 802.11a (5 GHz, to 54 Mb/s, 60'), 802.11b (2.4 GHz, to 11 Mb/s, 300'), 802.11g (2.4 GHz, to 54 Mb/s, 300'). A two-chip solution (digital, RF) was made possible by recent gains in chip speed due to transistor shrinkage. Connector types: usually integrated antennas, or miniature coaxial connectors for remote antenna connection.
38Ethernet Frame Format: Video over IP Training Data-Link Layer 2 Data-Link (2): Ensures the reliability of the physical link established at layer 1. Standards define how data frames are recognized and provide necessary flow control and error handling at the frame level. Ethernet Frame Format: Destination MAC (6 bytes) | Source MAC (6 bytes) | Type (2 bytes) | Data | CRC (4 bytes). Link Layer (Layer 2) - Ethernet example: Ethernet IEEE 802.3 is the most common data link layer; in 1998, 86% of all network connections were Ethernet (>118 million devices) [IDC]. Specs for 10, 100, 1000, and 10,000 Mb/s operation; specs for coax, twisted pair, and short- and long-haul fiber-optic PHY layers; FDX and HDX operation support; variable-length frames; Type/VLAN ID support; autonegotiation support. The main point is that this type of network is based on MAC addresses.
39Video over IP Training Data-Link Layer 2 Data-Link Layer 2 examples: Ethernet – most prevalent data network type. DOCSIS – data network for cable systems. Resilient Packet Ring (RPR) – emerging data network with features for carrying multimedia traffic, fast fault recovery, easy interfacing to Ethernet networks, and OAM carrier requirements. DVB-SSI, DVB-ASI – common point-to-point protocols in use for linking video distribution devices.
40Video over IP Training Ethernet Addressing (Data-Link Layer 2) Ethernet equipment uses Media Access Control (MAC) addresses for each piece of equipment. A MAC address is represented as six fields of two hex characters each. Sample: 00:08:D4:01:03:03. MAC addresses are uniquely assigned to each piece of hardware by the manufacturer and do not change; the first six hex digits can be used to identify the manufacturer. Frames are the format of data packets on the wire. Note that a frame viewed on the actual physical hardware would show start bits, sometimes called the preamble, and the trailing Frame Check Sequence. These are required by all physical hardware and are seen in all frame types. They are not displayed by packet-sniffing software because these bits are removed by the Ethernet adapter before being passed on to the network protocol stack software.
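The manufacturer lookup described above uses only the first three bytes of the address (the IEEE-assigned OUI), which this small sketch extracts:

```python
def oui(mac: str) -> str:
    """Return the first three bytes (six hex digits) of a MAC address,
    the Organizationally Unique Identifier that names the manufacturer."""
    return ":".join(mac.upper().split(":")[:3])

# Using the sample address from the slide:
print(oui("00:08:d4:01:03:03"))  # 00:08:D4
```

A monitoring tool can match this prefix against the public IEEE OUI registry to label equipment by vendor.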
41Video over IP Training Network Layer 3 – IP Network (3): Establishes, maintains, and terminates network connections. Among other functions, standards define how data routing and relaying are handled. IP Packet Format: Ethernet Header (16 bytes) | IP (20 bytes) | Data | CRC (4 bytes). The Internet Protocol (IP), IETF RFC 791, is the most common network layer, with support for addressing, fragmentation, routing, and priority. The Ethernet link layer provides these services to the IP network layer: hardware address (MAC address), protocol type identifier (data link SAP field), Maximum Transfer Unit (MTU), broadcast capability, multicast capability.
42Video over IP Training IP Datagrams & Packets IP works as a delivery service for "datagrams", also called "IP packets". A datagram is a single, logical block of data that is being sent over IP, augmented by an IP header. The Maximum Transmission Unit (MTU) of a standard Ethernet link is 1,500 bytes (RFC 894). The Address Resolution Protocol (ARP) is used to correlate a MAC address to an IP address. Procedure for forwarding an IP packet: the IP header's checksum is verified; the header is examined to determine where the packet is going; the Time-to-Live counter is decremented, and a new checksum is calculated and inserted back into the IP header.
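The per-hop procedure above can be sketched as a simplified software model (real routers do this in hardware; the 20-byte header built below, with TTL 64 and illustrative addresses, is just for demonstration):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of the 16-bit words,
    with the checksum field itself treated as zero when computing."""
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forward(header: bytearray) -> bytearray:
    """The per-hop steps from the text: verify the checksum,
    decrement TTL, recompute and reinsert the checksum."""
    assert ipv4_checksum(bytes(header)) == 0, "corrupt header"
    header[8] -= 1                            # TTL lives in byte 8
    header[10:12] = b"\x00\x00"               # zero the old checksum
    struct.pack_into("!H", header, 10, ipv4_checksum(bytes(header)))
    return header

# Build a header (version/IHL 0x45, length 1362, TTL 64, UDP) with a
# valid checksum, then apply one hop.
hdr = bytearray(struct.pack("!BBHHHBBH4s4s", 0x45, 0, 1362, 0, 0, 64, 17, 0,
                            bytes([10, 0, 0, 1]), bytes([239, 1, 1, 1])))
struct.pack_into("!H", hdr, 10, ipv4_checksum(bytes(hdr)))
forward(hdr)
print(hdr[8], ipv4_checksum(bytes(hdr)))      # 63 0
```

A checksum of a valid header, including its own checksum field, sums to all ones and complements to zero, which is the verification test used at each hop.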
43Video over IP Training IP Protocol Header The IP header is 20 bytes in length (five 32-bit words) and contains the Source and Destination IP Addresses. The DSCP / Type of Service (TOS) field can be set for network devices to use in configuring QoS. IP HEADER fields: VERS – version number of the IP protocol. HLEN – datagram header length, given in 32-bit words. DSCP / Type of Service – priority, delay, throughput, reliability. Total Length – datagram length. Identification – datagram ID for reassembly. Flags/Fragment Offset – fragmentation control and fragment ID for reassembly. Time to Live – limits the time a datagram is in the network. Protocol – identifies the next-layer protocol. Header Checksum – verifies integrity of the IP header. Source/Destination Address – identifies sender and destination devices. Options – used for network testing and debugging. Padding – zero bits used to ensure the header is a multiple of 32 bits.
44Video over IP Training TCP Protocol Header The TCP header contains Source and Destination Ports. TCP is "connection based": receipt of each TCP packet is acknowledged, and dropped or errored packets are retransmitted. TCP HEADER fields: Source/Destination Port – identify the sending and receiving applications. Sequence Number – position of this segment's data in the byte stream. Acknowledgment Number – the next byte expected from the other side. Data Offset – header length, given in 32-bit words. Flags – control bits (URG, ACK, PSH, RST, SYN, FIN). Window – flow-control receive window size. Checksum – verifies integrity of the header and data. Urgent Pointer – marks urgent data when URG is set. Options/Padding – optional fields (e.g. maximum segment size), padded to a multiple of 32 bits.
45UDP is most common protocol for Video over IP (simpler, less overhead) Video over IP Training UDP Protocol Header The UDP header contains Source and Destination Ports. UDP is "connectionless". The UDP header is 8 bytes in length (two 32-bit words). Packets are not acknowledged, and there is no retransmission of dropped or errored packets. UDP HEADER fields: Source/Destination Port – identify the sending and receiving applications. Length – length of the UDP header and data. Checksum – verifies integrity of the header and data. UDP is the most common protocol for Video over IP (simpler, less overhead).
46Video over IP Training IP Addressing (Network Layer 3) IP addresses use a special "dotted decimal" format that is easy to recognize, e.g. 192.168.0.1. A dotted-decimal number represents a 32-bit number broken up into four 8-bit numbers: 192.168.0.1 = 11000000 10101000 00000000 00000001. An IP address encodes the ID of a device's network (Net ID) as well as a unique device (Host ID) on that network. All devices on a network share a common network prefix. Each address is a pair of numbers: (netid, hostid). The netid identifies the network; the hostid identifies the device on the network. Each number can have different binary lengths depending on the class of network.
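The dotted-decimal-to-32-bit relationship can be shown in a couple of lines (the address is an illustrative example, not one from the slides):

```python
def to_int(addr: str) -> int:
    """A dotted-decimal address is four 8-bit fields of one 32-bit
    number: each octet shifts into its byte position."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(hex(to_int("192.168.0.1")))  # 0xc0a80001
```

Seeing the address as a single integer makes the netid/hostid split obvious: the netid is simply the high-order bits.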
47Video over IP Training IP Addressing Classes IP address ranges are broken down into classes: A, B, C, D, and E — the "classful" addressing scheme. Different size networks and device types were assigned to a specific class of addresses, which helped in routing. "Classless" addressing is used today: any size network or type of device can be assigned a Class A, B, or C address. It is the value of the first number of the IP address that determines the class. When the first precursor of the Internet was developed, some of the requirements of the internetwork, both present and future, were quickly realized. The Internet would start small but eventually grow, and would be shared by many organizations. Since all IP addresses on the internetwork must be unique, a system had to be created for dividing up the available addresses and sharing them amongst those organizations. A central authority had to be established for this purpose, and a scheme developed for it to effectively allocate addresses. The developers of IP recognized that organizations come in different sizes and would therefore need varying numbers of IP addresses on the Internet. They devised a system whereby the IP address space would be divided into classes, each of which contained a portion of the total addresses and was dedicated to specific uses. Some would be devoted to large networks on the Internet, others to smaller organizations, and still others reserved for special purposes. Since this was the original system, it had no name; it was just "the" IP addressing system. Today, in reference to its use of classes, it is called the "classful" addressing scheme, to differentiate it from the newer classless scheme. "Classful" isn't really a word, but it's what everyone uses. IP Address Classes — there are five classes in the "classful" system, given letters A through E. Class A: 0.0.0.0 – 127.255.255.255. Class B: 128.0.0.0 – 191.255.255.255. Class C: 192.0.0.0 – 223.255.255.255. Class D: 224.0.0.0 – 239.255.255.255 (multicast address range). Class E: 240.0.0.0 – 255.255.255.255 (reserved).
48Video over IP Training IP Addressing Classes A Subnet (short for Subnetwork) is a range of IP addresses within the overall address space assigned to an organizationCreated to allow Classless Inter-Domain Routing (CIDR)Subnet masks are also represented by the dotted decimal format and follow the IP address range classesSubnets are used to divide or combine groups of IP addresses
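The subnet idea above reduces to one bitwise operation: ANDing an address with its mask yields the network prefix shared by every host on the subnet. A sketch with illustrative addresses:

```python
def network_of(addr: str, mask: str) -> str:
    """Bitwise-AND of an IP address and its subnet mask gives the
    network address (the common prefix of the subnet)."""
    def to_int(s):
        return int.from_bytes(bytes(int(x) for x in s.split(".")), "big")
    net = to_int(addr) & to_int(mask)
    return ".".join(str(b) for b in net.to_bytes(4, "big"))

print(network_of("192.168.37.14", "255.255.255.0"))  # 192.168.37.0
```

Two hosts are on the same subnet exactly when this computation gives the same result for both, which is how a device decides whether to deliver directly or send to its router.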
49Video over IP Training Ethernet Hubs Ethernet hubs provide three main functions: acting as a repeater, taking an incoming signal, amplifying it, and sending it out onto all other ports; isolating the ports electrically (electrical termination), so that connections can be added or removed without having to disable the rest of the network; and acting as a splitter/combiner, allowing devices to share a connection to a third device. Hubs do nothing to prevent collisions from occurring: they simply receive and retransmit signals. They are not used much today. Reference Page #(s): 175
50Video over IP Training Ethernet Switches – Layer 2 A switch provides a separate logical and physical network on every one of its connections, or ports. Common practice is to put a single device on each connection of the switch. This practice provides three benefits: each device can transmit and receive data without worrying about collisions with other devices, so data transmission speeds can increase; a switch will send packets out a port only when they are destined for the device connected to that port, which improves network security and reliability; and certain devices can operate in full-duplex mode, meaning they can transmit and receive data at the same time. Switches tend to have most of their ports configured as a single type of interface. Switches work on MAC addressing.
51Video over IP Training Ethernet Routers – Layer 3 Routers provide a crucial function for IP networks by examining the headers and addresses of IP packets and then transmitting them onward to their destinations. Basic functions of a router: accept IP packets from devices and forward them to recipient devices; forward packets to another router when the recipient device is not local to the router; maintain "routing tables" for different destination networks; bridge between different network technologies; monitor the status of connected networks to determine if they are vacant, busy, or disabled; queue packets of different priority levels so that high-priority packets have precedence; inform other routers of network configurations; and perform many other administrative and security-related functions. Routers process IP addresses, whereas switches process MAC addresses.
52Encapsulation of MPEG Transport Streams Video over IPEncapsulation of MPEG Transport Streams
53Video over IP Training Video over IP or Networks Video into PacketsEncapsulating Media DataTransport ProtocolsPorts & SocketsUDP / TCP / RTPPacket TransportTransport MethodsConsiderations
54Video over IP Training IP Encapsulation IP Encapsulation is the process of taking a data stream, formatting it into packets, and adding the headers and other data required. MPEG-over-IP transport streams consist of multiple MPEG TS packets packed inside UDP datagrams. A typical IP video packet will contain 7 TS packets (188 x 7 = 1,316 bytes). Adding the Ethernet, IP and UDP headers (46 bytes) gives 1,316 bytes + 46 bytes = 1,362 bytes, within the Ethernet Maximum Transmission Unit (MTU) of 1,500 bytes. Frame layout: Ethernet | IP/UDP | MPEG-2 TS video payload (7 x 188 bytes) | CRC. Performance of IP video signals is affected by the video packet size. Long-packet benefits: less overhead, reduced packet-processing load, greater network payload. Short-packet benefits: lost packets are less harmful, reduced latency, less need for fragmentation. Typically, video signals tend to use the longest possible packet sizes. MPEG transport streams consist of a series of 188-byte TS packets; an IP packet payload will contain 7 TS packets, since with an MTU of 1,500 bytes, 7 TS packets fit and 8 do not. This is an IP packet with an MPEG-2 TS video payload carried over Ethernet.
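The sizing arithmetic above can be checked in a few lines (header sizes are the common values for untagged Ethernet, IPv4, and UDP):

```python
TS = 188                          # bytes per MPEG transport stream packet
ETH, IP, UDP, CRC = 14, 20, 8, 4  # header/trailer sizes in bytes
MTU = 1500                        # standard Ethernet IP datagram limit

fit = (MTU - IP - UDP) // TS      # whole TS packets per IP/UDP datagram
payload = fit * TS                # UDP payload carrying the video
frame = payload + ETH + IP + UDP + CRC  # total bytes on the wire
print(fit, payload, frame)        # 7 1316 1362
```

An eighth TS packet (1,504 bytes of payload) would exceed the 1,472 bytes of UDP payload the MTU allows, which is why exactly seven are packed.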
55Standard vs. Jumbo Frame Compared with a standard IP packet, a jumbo IP packet is approximately 20% more efficient.
56Video over IP Training Transport Protocols Transport stream protocols are used to control transmission of data packets. Three major protocols are used to transport real-time video: UDP, the User Datagram Protocol, one of the simplest and earliest of the IP protocols, often used for video and other data that is very time-sensitive; TCP, the Transmission Control Protocol, a well-established Internet protocol widely used for data transport; and RTP, the Real-Time Transport Protocol, developed specifically to support real-time data transport such as video streaming. UDP is a connectionless transport protocol used for high-speed information flows (widely used for video). TCP is a connection-oriented protocol that provides high reliability (retransmission). RTP is intended for real-time multimedia applications; it is not strictly a protocol like UDP or TCP, and was designed to use UDP as its packet transport mechanism.
57Video over IP Training Transport Methods Technologies used for packet transport: Packet over SONET/SDH (Synchronous Optical Network / Synchronous Digital Hierarchy); cable and DSL (Digital Subscriber Line); optical networks; IP over ATM (Asynchronous Transfer Mode); MPLS/GMPLS (Multi-Protocol Label Switching and Generalized MPLS); RPR (Resilient Packet Ring); wireless. Ethernet is the most popular technology for IP packet transport in local areas. It is not intended for long distances, so the above technologies were developed to transport IP packets over long distances.
58Video over IP Training Transport Considerations When video is being transported over an IP network, users need to consider a number of other factors that can significantly affect the viewing experience. Multiplexing is the process of combining video streams from different sources into one IP flow; two forms are commonly used today, time-division and statistical. Traffic shaping consists of various techniques used to make video traffic easier to handle on a network; the overall goal is to make an IP flow less prone to sudden peaks in bit rate. Buffering is basically a collection of memory used to temporarily store information prior to taking some action; buffers can have a major impact on video network performance. Firewalls are used to control the flow of information between two networks; be aware of the constraints that firewalls impose on video services.
59IGMP – Internet Group Management Protocol MulticastingIGMP – Internet Group Management Protocol
60Video over IP Training Multicasting Basic ConceptsUnicastingMulticastingJoining and Leaving Multicast
61Video over IP Training Unicast vs. Multicast Unicast: high bandwidth is required between the video source and a number of end users; the video source makes separate video streams for each recipient. Multicast: reduced bandwidth requirements between the video source and multiple end users; network devices (routers) make copies of the video stream for every recipient. Unicast = one to one. Multicasting is the process of simultaneously sending a single video signal to multiple users. Through the use of special protocols, the network is directed to make copies of the video stream for every recipient; this copying occurs in the network rather than at the video source. Multicast = one to many.
62Video over IP Training Unicasting Unicasting is the traditional way that packets are sent from a source to a single destination. Each user who wants to view the video must make a request to the video source. The source needs to know the destination IP address of each user and must create IP packets addressed to each user; as the number of viewers increases, the load on the network increases. Each viewer gets a custom-tailored video stream, which allows the video source to offer specialized features such as pause, rewind and fast-forward of video.
63Video over IP Training Multicasting Multicasting, unlike unicasting, puts the burden of creating streams for each user on the network rather than on the video source. IP packets are given special IP addresses to be recognized by the network as multicast; the IP address range is Class D, 224.0.0.0 through 239.255.255.255. IP multicast uses UDP packets. IGMP (the Internet Group Management Protocol) controls access to multicast streams: a user must request to join and leave a multicast program, informing the network that they wish to join the multicast.
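A quick way to test whether an address falls in the Class D multicast range (the top four address bits are 1110, i.e. 224.0.0.0/4), using Python's standard ipaddress module:

```python
import ipaddress

def is_multicast(addr: str) -> bool:
    """True when addr lies in 224.0.0.0 - 239.255.255.255, the Class D
    range that networks recognize as multicast."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/4")

print(is_multicast("239.1.1.1"), is_multicast("192.168.1.1"))  # True False
```

Routers and switches make exactly this test on the destination address to decide whether a packet should be replicated to group members rather than forwarded to a single host.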
64Video over IP Training Joining and Leaving a Multicast IGMP messages: Membership Query, used by multicast-enabled routers running IGMP to discover which hosts on attached networks are members of which multicast groups; Membership Report, sent by a host when it wants to join a multicast group or when responding to a Membership Query; Leave Group, sent by a host when it wants to leave a multicast group. One advantage of multicasting is that it gives users the ability to control when they join and leave a multicast; no special actions are needed. When users want to watch a multicast program, they must join in at whatever point the program happens to be, so a join request is sent to the router. If the router is already receiving that video, it simply makes a copy; if not, the router must make a request to a device closer to the multicast source. When a user wants to leave a multicast, they should inform their local router, which in turn will stop sending the video. When a router no longer has any users, it must inform the network to stop sending it the video stream.
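From an application's point of view, the join and leave are single socket options: the operating system sends the IGMP Membership Report when membership is added and the Leave Group message when it is dropped. A sketch using Python's standard socket module (the group address 239.1.1.1 is an example, not from the slides; the join is wrapped in a try block because a host with no multicast route will refuse it):

```python
import socket
import struct

def membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack the ip_mreq structure: group address + local interface."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))  # any free port; a real receiver binds the stream's UDP port
mreq = membership_request("239.1.1.1")
try:
    # Join: the kernel emits an IGMP Membership Report for us.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # ... datagrams for the group would now arrive via sock.recvfrom() ...
    # Leave: the kernel emits an IGMP Leave Group message.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
except OSError:
    pass  # no multicast-capable route on this host
sock.close()
```

The router-side pruning described above follows automatically: when the last host on a segment leaves, the router stops requesting the stream from upstream.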
66Video over IP Training Video over IP Monitoring & Measurements Network ImpairmentsFlow BehaviorVideo over IP MeasurementsMDI – Media Delivery IndexDistributed Continuous Program (DCP) MonitoringDetermining Packet Loss (MLR) on a UDP flowDelay Factor (DF) & the effects of a high Delay Factor
67Video over IP Training Video Network Impairments Packet Loss is when an IP packet does not arrive at its intended destination. It can be caused by any number of circumstances: network saturation, network hardware failure, queuing misconfiguration, etc. Packet Reordering occurs when packets arrive in a different order than they were sent; since MPEG has a very precisely defined structure and sequence, out-of-order packets can cause problems. Delay happens in any network and comes in two types: Propagation delay, the amount of time a signal takes to travel from one location to another, and Switching delay, which occurs at any point in the network where a signal needs to be switched or routed. Jitter is a measurement of variation in the arrival time of the data packets; receivers must be built to tolerate jitter, and networks should be designed not to create a lot of it.
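Jitter as defined above, variation in packet arrival times, can be quantified in several ways. One simple sketch (the function name and the peak-to-peak measure are illustrative choices, not a standards-defined formula) looks at the spread of inter-arrival gaps:

```python
def interarrival_jitter_ms(arrival_times_s):
    """Peak-to-peak variation in packet inter-arrival time, in milliseconds.

    arrival_times_s: packet arrival timestamps in seconds, in arrival order.
    A perfectly periodic flow yields 0; any variation widens the spread.
    """
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    return (max(gaps) - min(gaps)) * 1000.0
```

A receiver buffer must be at least deep enough to absorb this spread, which is why the MDI Delay Factor discussed later is expressed in the same units (milliseconds of stream time).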
68MDI = DF : MLR Media Delivery Index Video over IP Training The Media Delivery Index (MDI) is a metric that captures the amount of cumulative packet jitter and the amount of packet loss of an IP stream; these are the only types of impairments that can be caused by an IP transport network. MDI consists of two components: Delay Factor : Media Loss Rate. Delay Factor (DF) is the size of buffer required to transport jittered packets in the network without loss, divided by the rate of the media stream; it is proportional to the delay introduced in the system due to network buffering. The buffer value is expressed as the time (in milliseconds) it takes to transmit (drain) the maximum buffer size at the outflow rate. Media Loss Rate (MLR) is the total media packets lost per second. See RFC 4445 for complete details on how to calculate MDI. See Application Notes at:
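A simplified sketch of the two MDI components, loosely following the virtual-buffer idea in RFC 4445 (fixed-size packets assumed; see the RFC for the normative definition): bits received are tracked against bits drained at the nominal media rate, and DF is the peak-to-peak excursion of that difference expressed as milliseconds of stream time.

```python
def delay_factor_ms(arrivals_s, packet_bits, media_rate_bps):
    """Delay Factor per the RFC 4445 virtual-buffer idea (simplified).

    arrivals_s: packet arrival timestamps (seconds); packet_bits: media
    payload bits per IP packet; media_rate_bps: nominal MPEG bit rate.
    """
    t0 = arrivals_s[0]
    received = 0
    deltas = []
    for t in arrivals_s:
        drained = media_rate_bps * (t - t0)
        deltas.append(received - drained)    # buffer just before this packet
        received += packet_bits
        deltas.append(received - drained)    # buffer just after this packet
    return (max(deltas) - min(deltas)) / media_rate_bps * 1000.0

def media_loss_rate(lost_packets, interval_s):
    """MLR: media packets lost per second over the measurement interval."""
    return lost_packets / interval_s
```

For a perfectly periodic flow the excursion is exactly one packet, so DF bottoms out at one packet time; any burstiness or stall widens it.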
69Video over IP Training Flow Behavior To transmit video over an IP network, what would be a constant video flow is packetized into IP packets. To do this successfully there must be a balance between the rate at which the payload is consumed by the decoder and the rate at which the encoder places the video into IP packets on the network. To provide video at the appropriate rate, the encoder controls the inter-packet gap between packets: the rate is determined by the MPEG, so to deliver, for example, 4.5 Mb/s of video content to a decoder, the encoder must send the right number of packets, in a periodic manner, to equal the payload consumed by the decoder. Each Ethernet packet contains up to seven 188-byte MPEG-2 TS packets and is delivered to a buffer, which removes the Ethernet framing and buffers the MPEG; the payload is then clocked out, consumed by the decoder, and played out to the TV or monitor. The rate of IP delivery is the same as the drain rate of the video (MPEG-2 TS): the arrival rate of each IP packet exactly matches the rate used to clock the contents of one IP packet from the receiver buffer.
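The balance described above is simple arithmetic; a small sketch (illustrative only) computes how many IP packets per second, and what inter-packet gap, are needed to carry a given MPEG rate when each IP packet holds 7 × 188-byte TS packets:

```python
TS_PACKET_BYTES = 188
TS_PER_IP = 7

def ip_packet_timing(video_bps):
    """Packets/sec and inter-packet gap (ms) needed to carry an MPEG rate."""
    payload_bits = TS_PER_IP * TS_PACKET_BYTES * 8   # 10,528 bits per IP packet
    pps = video_bps / payload_bits
    gap_ms = 1000.0 / pps
    return pps, gap_ms
```

For the slide's 4.5 Mb/s example this works out to roughly 427 packets per second, one packet about every 2.34 ms; the encoder must hold that gap steady for the decoder buffer to stay balanced.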
70Video over IP Training Simple IP Switch (example) Basic MDI Theory The next topic is the Delay Factor. As mentioned earlier, the goal is to provide video into the network in IP packets at the appropriate rate to be consumed by the decoder. For video it is very important to maintain a periodic flow over the network for a particular stream; the animation shows this effect. Effectively this is a load balance: the decoder is expected to consume data at a steady rate, and it is the responsibility of the encoder to provide the information into the network periodically.
71Video over IP Training Flow Behavior: IP Flow with Jitter & Under Run Rate Under Run: the average delivery rate, set by the Ethernet inter-packet gap, is less than the MPEG video rate, so the buffer runs empty. Step 1: Ethernet packets arrive with their inter-packet gap; the buffer removes the Ethernet frame and buffers the MPEG packets for the decoder and monitor/TV. Step 2: the buffer starts to drain at the MPEG rate (for example 3.75 Mbps) while delivery runs slower (for example 3.50 Mbps). Step 3: the buffer is empty, waiting for more IP packets. Nothing to decode; poor video.
72Video over IP Training Flow Behavior: IP Flow with Jitter & Over Run Rate Over Run: the average delivery rate, set by the Ethernet inter-packet gap, is more than the buffer can handle, so the buffer drops packets. Step 1: Ethernet packets arrive with their inter-packet gap; the buffer removes the Ethernet frame and buffers the MPEG packets for the decoder and monitor/TV. Step 2: a shorter Ethernet inter-packet gap delivers, for example, 4.90 Mbps while the buffer drains at 3.75 Mbps, so the buffer starts to fill up. Step 3: the buffer overflows and Ethernet packets are dropped at the network device. Impaired video.
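The under-run and over-run scenarios above can be reproduced with a toy buffer simulation (the rates and the 1 Mb buffer size are example values; the model ignores packet granularity and simply integrates fill and drain rates):

```python
def simulate_buffer(delivery_bps, drain_bps, buffer_bits, seconds, step_s=0.001):
    """Step a decoder buffer filled at delivery_bps and drained at drain_bps.

    Starts half full; reports whether it ever under-runs (runs empty) or
    overflows (drops arriving data) during the simulated interval.
    """
    level = buffer_bits / 2.0
    underrun = overflow = False
    for _ in range(int(seconds / step_s)):
        level += (delivery_bps - drain_bps) * step_s
        if level <= 0.0:
            underrun, level = True, 0.0           # nothing left to decode
        elif level >= buffer_bits:
            overflow, level = True, buffer_bits   # arriving packets dropped
    return underrun, overflow
```

With the slide's numbers, 3.50 Mbps delivery against a 3.75 Mbps drain empties the buffer, while 4.90 Mbps delivery overflows it; matched rates do neither.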
73Video over IP Training Simple IP Switch with High MDI Use animated diagram.
74Video over IP Training Flow Behavior: IP Flow with IP Packet Loss IP Packet Loss: the Ethernet inter-packet gap is enlarged due to IP packet loss, causing bursty IP video delivery (jitter). Step 1: Ethernet packets arrive with their inter-packet gap; the buffer removes the Ethernet frame and buffers the MPEG packets for the decoder and monitor/TV. Step 2: delivery and drain both run at, for example, 3.75 Mbps, and the buffer starts to fill up. Step 3: Ethernet packets are dropped in the network and the buffer could under-run. Impaired video. Loss adds jitter.
75Video over IP Training Program Clock Reference (PCR) PCR Jitter vs. IP Jitter. PCR Jitter (recovered clock inaccuracy): serial transport media use a common clock between transmitter and receiver and can guarantee high accuracy of packet arrival times. Jitter is classified into two categories, PCR accuracy errors (PCR_AC) and network jitter; these are combined into PCR overall jitter (PCR_OJ). Ethernet / IP Jitter (variation in expected packet arrival times): there is no clock reference for the transmission of packets, and because transport can include multiple devices (all with different buffer queues), there is no guarantee that packets transmitted with a given inter-packet spacing will arrive with the same spacing. IP jitter is categorized and measured by the Media Delivery Index (MDI) Delay Factor (DF). In short: PCR Jitter = clock inaccuracy; IP Jitter = variation in expected packet arrival time.
76Video over IP Training Constant Bit Rate (CBR) Constant Bit Rate example: an encoder ideally transmits IP packets at a rate matching the MPEG encoded bit rate, as shown here. PCR time-stamp updates occur every 40 ms in the stream, continuously informing the decoder of the MPEG encoded bit rate. With Constant Bit Rate (CBR) encoding, "stuffing" bits maintain a constant bit rate even though picture complexity is dynamic.
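The stuffing mechanism above is easy to quantify. A small sketch (the example rates below are our own, not from the slide) computes the null-packet bandwidth a CBR mux must insert when the content momentarily needs less than the constant stream rate:

```python
TS_PACKET_BITS = 188 * 8  # 1,504 bits per transport-stream packet

def stuffing_rate(ts_rate_bps, content_rate_bps):
    """Stuffing (null-packet) bandwidth and packets/sec needed so a CBR
    transport stream stays constant while content complexity varies."""
    stuffing_bps = ts_rate_bps - content_rate_bps
    return stuffing_bps, stuffing_bps / TS_PACKET_BITS
```

For instance, a 3.75 Mb/s CBR stream carrying 3.2 Mb/s of actual content at some moment must insert 0.55 Mb/s of stuffing, roughly 366 null TS packets per second; as picture complexity rises, the stuffing share shrinks instead of the stream rate changing.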
77Variable Bit Rate (VBR) example Video over IP Training Variable Bit Rate (VBR) Variable Bit Rate (VBR) example: this example has a high DF. The instantaneous, per-packet IP bit rate is bursty and does not track the dynamic encoded PCR bit rate. With VBR the PCR bit rate varies dynamically with picture complexity, since there is no stuffing PID; the instantaneous peak PCR rate may be peak-limited ("capped") by configuration.
78Video over IP Training Delay Factor (DF) DF continuously tracks the cumulative difference between MPEG bit rate and IP bit rate capturing the stream’s burstinessIf an IP stream is bursty, its instantaneous bit rate may significantly stress network transport device queues.
79Video over IP Training IneoQuest Software Application: IQ MediaAnalyzer Pro
80Video over IP Training MLR: Determining Loss on UDP Flows (animation: PID 481 CC and PID 482 CC with running MLT counters) How do we determine loss on UDP flows? To determine loss on a UDP flow we have to do a little more work, taking advantage of the payload itself. As stated earlier, the payload of an IP packet carries 7 MPEG packets containing a combination of video, audio, stuffing, and control information. In the case of a constant-bit-rate IP flow, the difference between the IP bit rate and the video bit rate is made up of the stuffing component. In effect, if for example the audio runs at 400 kb/s and the video at 4 Mb/s, the number of video packets seen will be roughly ten times the number of audio packets: the number of each packet type placed inside the IP flow is directly correlated to the bit rate of the information to be consumed by the decoder. To monitor for loss, the PIDs (video and audio elements) are used, taking advantage of the fact that each element carries a continuity counter: each sequential packet of a PID carries a sequence number ranging from 0 to 15, and monitoring these continuity counters determines loss. The animation shows that it is not IP loss being monitored but the media loss itself. NOTE: the TSX creates a visual correlation between the IP and MPEG transport layers.
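The continuity-counter technique described above can be sketched as follows (simplified: real TS rules about duplicate packets and adaptation-field-only packets, which do not increment the counter, are ignored here):

```python
def count_cc_losses(packets):
    """Count lost MPEG TS packets from 4-bit continuity counters.

    packets: iterable of (pid, cc) tuples in arrival order; cc wraps 0..15.
    Returns total inferred media packets lost (the MLR numerator).
    """
    last_cc = {}
    lost = 0
    for pid, cc in packets:
        if pid in last_cc:
            expected = (last_cc[pid] + 1) % 16
            lost += (cc - expected) % 16   # 0 when the counter is in sequence
        last_cc[pid] = cc
    return lost
```

Each PID's counter is tracked independently, which is why loss on a video PID can be distinguished from loss on an audio PID even though they travel in the same UDP flow.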
81Video over IP Training Video over IP Measurements Properties that must be Measured and Monitored simultaneously to ensure Quality of Video over IP.IP packet arrival times where jitter causes delay (Under Runs)IP packet arrival times where jitter causes bursts (Over Runs)IP packet bit rate average drift/deviation from the Video bit rateIP packet lossVideo packet loss / CC errors
82Video over IP Training IneoQuest Software Application: IQ MediaAnalyzer Pro (screens: MDI, PAYLOAD; diagram: Encoder, Edge, HeadEnd, with dropped IP packets) If we are monitoring two points in the network and visually monitoring with a TV, the loss would result in pixelization or some other effect on the video quality delivered at the endpoint of the network. As the third principle states, to monitor Video over IP quality correctly at any measurement node, all live flows must be monitored continuously for errors in quality and/or delivery. To look at them simultaneously, each IP packet must be characterized into a particular flow, and the payload information must be further monitored for loss and jitter.
83Video over IP Training Monitored Flow Types Supported Technologies: MPEG2 Transport Stream HD/SD; H.264 (MPEG-4 part 10); MPEG-4 part 2; ISMA; VC1.01 / VC1.1. Encapsulation Supported: Eth2/IP/UDP; Eth2/IP/UDP/RTP; Eth2/VLAN/IP/UDP; Eth2/VLAN/IP/UDP/RTP; Eth2/PPPoE/IP/UDP; Eth2/PPPoE/IP/UDP/RTP; Eth2/PPPoE/VLAN/IP/UDP/RTP. Flow Types: VOD; Broadcast; FEC Flow Detection; SPTS – Single Program Transport Stream; MPTS – Multi-Program Transport Stream; Standard / High Definition; Bitrate.
84Video over IP Training Alarms & Warnings (alarm groups with possible causes) WARNING: MDI-DF : Delay Factor (max value exceeded); NTWK-UTL : Network Utilization (max value exceeded); IP Flow Media Bit Rate Deviation (%). Possible causes: Over Subscription, Encoder Behavior, Bursty Traffic, VOD Server Configuration. LOSS: MDI-MLR : Media Loss Rate (max value exceeded); RTP-LDE : Loss Distance Error (min value exceeded); RTP-LPE : Loss Period Error (max value exceeded); MLT-15 : 15 min. Media Loss Total (max value exceeded); MLT-24 : 24 hr. Media Loss Total (max value exceeded); MLS-15 : 15 min. Media Loss Seconds Total (max value exceeded); MLS-24 : 24 hr. Media Loss Seconds Total (max value exceeded); RTP-SE : RTP Total Sequence Errors (max value exceeded). Possible causes: Noise, Bad Connectors, Pinched Cables, QoS Configuration, Equipment Configuration, Transient Power. Outage: VIDO-LOS : Video Flow Outage. Possible causes: Faulty Equipment, Loss of Power, Nature. IGMP: IGMPv2 / IGMPv3 support; Join & Leave (min/max/average); IGMP Zap time; AutoScan / Manual. Possible causes: Faulty Equipment, Configuration, Over Subscription. PAYLOAD: TS-PID : Transport Stream PID Bit Rate (lower limit exceeded); TS-SYNC : Transport Stream Sync Byte Error; V-TSB : VIDEO-TS PCR Bit Rate (lower limit exceeded); IP-SBRMX : IP Stream Bit Rate (upper limit exceeded); IP-SBRMN : IP Stream Bit Rate (lower limit exceeded). Possible causes: Encoder Issues (configuration, faulty equipment), Loss of Video/Voice feeds.
85IneoQuest Monitoring and Troubleshooting Solutions
86Video over IP Training How IP Video is Challenging Service Providers The biggest problem facing IP Video service providers is unbounded operational expenses (OPEX): the inability to sustain quality across a distributed service area, no matter how much is spent in OPEX, is a losing business model. OPEX Drivers: Increased call volume – $5.00-$15.00 per call. Increased truck rolls – $ plus per roll. Chronic problems – problems "come and go". Lingering problems – no definitive problem resolution; "voodoo" troubleshooting. No visibility – the customer becomes the monitoring and analysis system. Lack of education – new technology presents new problems. Summary: IP Video distribution presents a new set of problems, with unique issues that traditional monitoring systems are ill-equipped to handle or detect. IP Video is very different from voice and data and requires an evolved multi-dimensional approach to quality and service assurance.
87Video over IP Training Video Across Multiple Systems (end-to-end program flow) (diagram: 1000s of video flows from Encoder and Video Servers at the Video Headend, through the IP Transport Core Network, Hub/VHO Edge, and Last Mile Network, to the Decoder at the Subscriber Premise / End User)
88Video over IP Training Complexities of IP Video (diagram: 1000s of video flows across Video Headend, IP Transport core and edge networks, Last Mile Technology, and Subscriber premise) No matter where the issue is across any subsystem, the effect is seen at the end of the system, at the subscriber. This results in increased call volume ($) and truck rolls ($). Operational dollars get spent and the problem is often not found or fixed; the system never improves.
89Video over IP Training Coverage Areas (diagram: coverage areas across Encoder, Headend Network, Video Servers, Core, Hub/VHO, Edge, Last Mile, Premise, End User, Decoder) Traditional MPEG Monitoring System Coverage; Traditional Core Network Monitoring System Coverage; Traditional DSL/RF Component Monitoring System Coverage.
90Video over IP Training Traditional Monitoring – Blind to Video Issues (diagram: a single video program's problem origination among 1000s of video flows, spanning Video Headend, IP Transport, Last Mile Technology, and Subscriber; the MPEG, Network, and DSL/RF monitoring subsystems each report good) Systems like MPEG analyzers are blind to problems that originate downstream, and other systems designed for data-transport monitoring do not have the visibility to understand that an IP Video flow is bad entering or within the subsystem. The first time anyone realizes there is an issue is at the customer's TV, so the customer calls and trucks roll.
91Video over IP Training Multi-Dimensional: All Flows, All Locations, All the Time (diagram: IneoQuest IQPinPoint coverage area spanning 1000s of video flows from Encoder to Decoder across Video Headend, IP Transport, and Last Mile Network) Multi-Dimensional Video Quality Management System coverage, with Analysis, Monitoring, and Remote Troubleshooting all in one.
92Video over IP Training Multi-Dimensional Management: Detect, Isolate, Resolve (diagram: one subsystem reports good video and the next reports bad video, localizing the problem origination for a single program among 1000s of flows across Video Headend, IP Transport, Last Mile Network, and Subscriber) Using Multi-Dimensional Video Quality Management, Operations can now detect a video issue, open a trouble ticket against the specific subsystem, and use remote troubleshooting to solve the issue. If the customer calls, there is no need to roll a truck, since the issue is not at the premise.
93Video over IP Solutions IneoQuest Hardware Platform: Singulus G1-T Generate network traffic up to 2 GbEMonitor & Analyze IP Video up to 1 GbE80 MB Capture & RecordPacket Morph (add Impairments)1 GbE Copper & Fiber Connections10/100 Management portASI Output port256 IP Flows
94Video over IP Solutions IneoQuest Hardware Platform: Singulus Lite “Cricket” Interactive Subscriber “Visual Impairment” FeedbackIn-band IP Video/IPTV control and statsSubscriber Behavior TrackingEmulates an end pointMonitor & Analyze IP Video up to 10 IP Flows80 MB Capture & Record10 / 100 MbE Copper ConnectionsUSB Management portAvailable Versions:EthernetQAMASI
95Traffic Generation Software Application Video over IP Solutions IneoQuest Software Application: IQMediaStimulusTraffic Generation Software ApplicationUsed with Geminus, Singulus G10, Singulus G1-TGenerate Video, Voice, or Data flowsTS files, LIBpcap files (TS with encapsulation), Data files, voice files (.au, .wav, etc)Live Stream ReplicationCan cause ImpairmentsDrop IP Packets, add Jitter, change IP Bitrate, change PCR rate, drop PIDsSupports Multiple STIM targetsTest Set-upsAbility to Auto Run Tests
96Video over IP Solutions IneoQuest Software Application: IQMediaAnalyzer Pro Monitoring & Analysis Software ApplicationNew DashboardImpairments windowEnhanced Trigger & Capture CapabilitiesCommercial Insertion SupportMicrosoft IPTV supportSoftware Included with Hardware
97Video over IP Solutions IneoQuest Software Application: IQTsX Pro Post Analysis Software ApplicationSearch and Explore the captureDisplay the packet dataDecode media packet headersIP & Media Packet ExplorerPacket arrival time reportsPCR comparison reports & chartsPID list reportsGOP Structure reportsIndividual Channel analysis on MPTSCC error detectionPacket Modification3rd party tool supportPlay the capture with VLC Media PlayerView Packets with EtherealMicrosoft IPTV supportLicensed SoftwareMPEG Deep Packet Analysis
98iVMS Video over IP Solutions IneoQuest End-to-End Solution Overview Beginning of Last MileEnd of Last Mile(Subscriber)Video HeadendIP TransportEnd-to-EndDeep MPEG Analysis,IP Video Monitoring,& Remote TroubleshootingSimultaneous IP VideoMonitoring & RemoteTroubleshootingLast Mile TechnologiesIP, QAM, HPNA, ADSL2+,VDSL, ASI, WirelessLast Mile TechnologiesIP, QAM, HPNA, ADSL2+,VDSL, ASI, Wireless
99Video over IP Solutions IneoQuest iVMS IP Video Management System
100Video over IP Solutions iVMS – IQ Map View Google Maps integration.Visually see where the probes are in your network and what the status is.
101Video over IP Solutions iVMS – IQ Topology View
102Video over IP Solutions iVMS – Real-Time Monitoring Real-time views where one click shows what the problems are and which flows they are affecting. IQTV enables real-time content confirmation.
103Select the metric to monitor Program Monitoring Across Multiple Points Video over IP Solutions iVMS – Real-Time MonitoringSelect the metric to monitorReal-time views show status of programs across multiple points in the network.Compare quality.Program status shows alarms regardless of probe.Program Monitoring Across Multiple Points
104Video over IP Solutions iVMS – Reporting & Trending Show loss distribution across all probes.Show loss over time.
105Video over IP Solutions iVMS – Reporting & Trending Look at historical trending in 15 minute increments: number of flows vs. time, how many good and how many bad. Thumbnails for each 15 minute interval.
106Video over IP Solutions iVMS – Reporting & Trending (Drill Down to PID level) Daily View shows all alarms in 15 minute intervals.Interactivity enables zooming to flows for alarm details all the way down to the PID level.
107Video over IP Solutions iVMS – Reporting & Trending (PID Details) Reports currently show data for 24 hour periods.Most reports go all the way down to the PID details, including bitrate and loss.
108Video over IP Solutions iVMS – Daily Reports (IQ Watch Services) Daily Reports generates a PDF report for multiple probes across multiple days. Reports can contain Flow Details and VOD information.
109Video over IP Solutions iVMS – Configuration & Security Display Active Users. User configuration allows options and menus to be enabled or disabled per user. Groups control which probes each user has access to. Define Menus Per-User.
110Video over IP Solutions iVMS – Configuration Use a reference Probe to copy configurations to a group of Probes simultaneously.
111Video over IP Solutions iVMS – Configuration (Firmware Downloads) Multiple Probes upgraded simultaneously.
112Video over IP Solutions iVMS – Email Notifications Configure the system to notify via email, either per-alarm or with summary information. Emails can be throttled on a 15 minute and/or a 24 hour basis.
113IQFastLink Embedded URL in Message Video over IP Solutions iVMS – Northbound to NMS/OSS Chameleon is our open-source reference configuration platform. Integrators can use Chameleon to understand how to integrate iVMS with an NMS/OSS. Each alarm message contains a URL link for IQFastLink, enabling direct linking to IQ visualizations wrapped in custom skins.
114Video over IP Solutions iVMS – Customized Skins to NMS/OSS IQFastLink allows NMS/OSS systems to connect directly to iVMS visualizations wrapped in seamless-looking skins. Skins can be easily created for different systems to maintain a consistent look and feel from the operator's point of view.
115Resources for Video over IP References
116Video over IP Training References & Resources Video over IP: A Practical Guide to Technology and Applications, Wes Simpson, Focal Press. IPTV Crash Course, Joseph Weber and Tom Newberry, McGraw-Hill. TCP/IP Illustrated, Volume 1: The Protocols, W. Richard Stevens, Addison-Wesley. Internetworking with TCP/IP, Volume 1: Principles, Protocols, and Architecture, Douglas E. Comer, Prentice-Hall, Inc. A Guide to MPEG Fundamentals and Protocol Analysis, Tektronix. RTP: A Transport Protocol for Real-Time Applications, RFC 3550. Requirements for Internet Hosts – Communication Layers, RFC 1122. Internet Protocol, RFC 791. Internet Control Message Protocol (ICMP), RFC 792. Internet Group Management Protocol, Version 2 (IGMP), RFC 2236. Host Extensions for IP Multicasting, RFC 1112. Media Delivery Index (MDI), RFC 4445.