Video over IP – Get the Picture!


1 Video over IP – Get the Picture!
IneoQuest Technologies, Inc. IP Video Basics session. Presenter: Rico E. Vitale (603)

2 Video over IP Training Agenda – Video over IP Basics
IneoQuest Overview
Principles of Video over IP
Compression Overview
MPEG Data Streams
Networking Fundamentals
Video over IP Unicasting / Multicasting
Video over IP – Monitoring & Measurements
IneoQuest Solutions
References & Contact Information

3 Video over IP Training Company Overview
Founded in 2001, based in Mansfield, MA
Fast and steady growth: greater than 670% three-year growth rate; recognized as one of the top ten fastest-growing companies (Boston Business Journal)
IP Video Measurement and Quality/Service Assurance Solutions
More than 300 unique customers worldwide
Telecom Tier 1/2/3, MSO Cable, Broadcast/Satellite, and Equipment Manufacturer markets
Direct sales and support in North America, Europe, and Asia
Committed to helping service providers improve video quality and control OPEX
Pioneering open streaming IP Video standards; co-author with Cisco of the Media Delivery Index (RFC 4445)

4 Video over IP Training Does this annoy you?
Dropping 1 IP packet in every 400 packets (1 per second)

5 Video over IP Training Why monitor video at all?
"So quiet you can hear a pin drop!" – US Sprint, 1986
Voice customers are LESS demanding. Consumers are less forgiving of poor video quality than of poor voice calls or data connections, and have become even more demanding since the arrival of HD. Very little loss can have a detrimental effect on video and on the viewer's Quality of Experience (QoE).

6 Video over IP Training Principles of Video over IP
The first principle of Video over IP: given good-quality source video, packet loss is the only thing an IP transport network can do to affect video quality.
The quality of the video content must be good going into the IP network; the only thing the IP network can do to affect the quality of IPTV to the home is cause loss. If there is no loss within the network, the perceived quality (MOS score) of the video is the same at the headend as at the STB.
The measurement discussed here is part of the standard for measuring video delivery quality, the Media Delivery Index (MDI). The MDI parameter that measures loss is the Media Loss Rate (MLR). MDI = DF : MLR
Make sure to check the quality BEFORE making millions of copies.
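As a minimal sketch of the MLR idea from RFC 4445: MLR counts media packets lost (or out of order) per second. One common way to detect loss is the 4-bit MPEG transport stream continuity counter. The function name and sample data below are illustrative, not from the deck.

```python
# Sketch: Media Loss Rate (MLR) from continuity counters seen in one second.
def media_loss_rate(counters_in_one_second):
    lost = 0
    for prev, cur in zip(counters_in_one_second, counters_in_one_second[1:]):
        expected = (prev + 1) % 16            # the counter wraps at 16
        lost += (cur - expected) % 16         # each skipped value = one lost packet
    return lost

# one packet missing between counters 5 and 7:
print(media_loss_rate([3, 4, 5, 7, 8]))  # 1
```

A real MDI monitor would track counters per PID and per flow; this shows only the core arithmetic.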

7 Video over IP Training Principles of Video over IP
The second principle of Video over IP: jitter on a single flow can and will lead to changes in behavior on other flows on the network. Cumulative IP jitter does not directly affect video quality, but it is an indicator of impending loss.
Consider the analogy of traffic patterns on a highway: if all traffic on a highway is moving at the same speed, then as entering traffic merges it must slow down, and/or the traffic already on the highway must alter its speed and direction to allow the merge.
The cumulative IP jitter on an IP network is captured by the Delay Factor (DF). MDI = DF : MLR
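A simplified sketch of the Delay Factor idea from RFC 4445: model a virtual buffer that fills as packets arrive and drains at the nominal media rate, and report the buffer's swing over the measurement interval in milliseconds. The function name and toy numbers are illustrative assumptions, not from the deck.

```python
# Sketch: MDI Delay Factor (DF) from a virtual-buffer model.
def delay_factor(arrivals, packet_bits, media_rate_bps):
    """arrivals: packet arrival times (seconds) within the interval."""
    vb, lo, hi = 0.0, 0.0, 0.0
    last_t = arrivals[0]
    for t in arrivals:
        vb -= (t - last_t) * media_rate_bps   # drain at the media rate
        lo = min(lo, vb)
        vb += packet_bits                     # this packet arrives
        hi = max(hi, vb)
        last_t = t
    return (hi - lo) / media_rate_bps * 1000.0   # milliseconds

paced = [0.0, 0.001, 0.002, 0.003]       # perfectly paced: DF = one packet time
jittery = [0.0, 0.0005, 0.002, 0.003]    # early/late arrivals: DF grows
print(round(delay_factor(paced, 1000, 1_000_000), 3))
print(round(delay_factor(jittery, 1000, 1_000_000), 3))
```

A rising DF on one flow is exactly the early-warning signal described above: the buffering a network element needs is growing, even before any packet is lost.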

8 Video over IP Training Principles of Video over IP
[Figure: channel lineup, channels 130–134]
The third principle of Video over IP: to monitor Video over IP quality correctly at any measurement node, ALL live flows must be monitored continuously for errors in quality and/or delivery.
Inspecting live IPTV traffic by sampling has obvious holes: to guarantee delivery of video quality to the subscriber, each program needs to be monitored all the time. The second principle (a flow that has jitter will affect other flows in the network) also implies that all flows need to be monitored, in order to watch for Delay Factor changes in the network.
Live video flow inspection is critical. For example, if you want to know whether ESPN is good or bad, you need to look at ESPN. MDI = DF : MLR
Monitor ALL live IPTV flows. What you don't watch, your customer does!

9 VIDEO & AUDIO COMPRESSION

10 Video over IP Training Video and Audio Compression
Compression Overview
Video Compression
Key to Compression: Remove Redundancy
Video Compression Formats
MPEG Compression Technologies
MPEG Video Compression
MPEG Audio Compression

11 Video over IP Training The Need for Compression
Storage requirements: digital storage costs are decreasing significantly, but it is still very expensive to store uncompressed TV data. A two-hour SD television program ≈ 200 GB.
Bandwidth requirements: transmitting uncompressed data over any significant distance is extremely difficult. Uncompressed Standard Definition (SD) digital video requires > 200 Mb/s; uncompressed High Definition (HD) digital video requires > 1 Gb/s.
Processing power / hardware requirements: processing large amounts of video data (storage) in real time (bandwidth).
Digitizing an analog standard-definition NTSC signal via the 8-bit CCIR 601 standard produces a continuous stream of digital bits on the order of 216 Mb/s. An hour of digitized standard-definition television would therefore produce 97.2 GB. HDTV has even higher bit rates and therefore higher storage requirements. These data rates are prohibitive in terms of storage, transmission bandwidth, and processor demands.
Storage requirements: while the price of digital storage has decreased significantly, it would still be very expensive to store a reasonable amount of uncompressed TV data. A two-hour standard-definition TV program would require almost 200 GB. A DVD disc holds 4.7 GB, or just a few minutes of uncompressed video; the fact that a DVD can store 3 hours of compressed video attests to the power of compression.
Bandwidth requirements: transmitting 216 Mb/s of uncompressed data over any significant distance is extremely difficult with today's technology. Even some of the highest-bandwidth digital channels, such as the connection between a hard drive and the CPU in a computer, would be maxed out at these data rates. Transmitting over longer distances is impossible with commercial networking technologies: cable and DSL broadband data services are on the order of only a few Mb/s, while conventional Ethernet within a local area network (LAN) is capable of only 10, or in some cases 100, Mb/s.
A Video over IP service that attempts to distribute digital TV signals over broadband and Ethernet is going to require a drastically reduced bit rate to work.
Processing power: watching digital TV requires reading the digital pixels and recreating the images. Sometimes additional processing is required as well, such as resizing the image to fit a particular monitor or display, or playing the video faster to fast-forward or rewind. If the number of operations required per pixel grows, the processor may need to perform many billions of operations per second.
So you can see why compression is important to the storage and delivery of Video over IP services. Without proper use of data compression techniques, either the picture would look much worse or the movie would require much more disk space. Video technology users are free to choose whether or not to compress their video signals, but the choice of compression method can mean the difference between success and failure of a video networking project.
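The 216 Mb/s and 97.2 GB/hour figures above follow directly from the CCIR 601 (ITU-R BT.601) sampling structure: 13.5 MHz luma sampling plus two chroma channels at 6.75 MHz each, all at 8 bits per sample. A quick worked check:

```python
# Worked example: uncompressed SD data rates under 8-bit CCIR 601 sampling.
LUMA_RATE = 13_500_000        # luma samples per second (13.5 MHz)
CHROMA_RATE = 6_750_000       # samples per second for EACH of two chroma channels
BITS_PER_SAMPLE = 8

bits_per_second = (LUMA_RATE + 2 * CHROMA_RATE) * BITS_PER_SAMPLE
print(bits_per_second / 1e6)          # 216.0 Mb/s

bytes_per_hour = bits_per_second * 3600 / 8
print(bytes_per_hour / 1e9)           # 97.2 GB per hour

print(2 * bytes_per_hour / 1e9)       # 194.4 GB, "almost 200 GB" for 2 hours
```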

12 Video over IP Training Video Compression
The goal of video compression is to reduce the quantity of data used to represent video content without substantially reducing the quality of the picture.
[Figure: the generalized process of compressing digital video for delivery over transport networks, where it is decoded back into digital or analog video: film or video camera → uncompressed analog video sequence → digitization / encode → compressed digital bitstream → transport → decode → analog TV / digital display]
Digital video requires high data rates: the better the picture, the more data is needed. This means powerful hardware, and lots of bandwidth when the video is transmitted. Video compression reduces the quantity of data used to represent video content without excessively reducing picture quality, and thereby reduces the number of bits required to store and/or transmit digital media. Compressed video can be transmitted more economically over a smaller bandwidth.
Video compression is based on two principles. The first is the spatial redundancy that exists in each frame. The second is the fact that, most of the time, a video frame is very similar to its immediate neighbors; this is called temporal redundancy. A typical video compression technique therefore starts by encoding the first frame with a still-image compression method, then encodes each successive frame by identifying the differences between that frame and its predecessor and encoding those differences. If a frame is very different from its predecessor (as happens with the first frame of a shot), it is coded independently of any other frame. In video compression, a frame coded using its predecessor is called an inter frame, while a frame coded independently is called an intra frame.

13 Video over IP Training Key to Compression: Remove Redundancy
Video compression algorithms take advantage of several types of redundancy to reduce the size of the video stream:
Spatial redundancy: pixels can be encoded in groups (macroblocks); the color and brightness of neighboring pixels often have similar values.
Temporal redundancy: changes in an object's location and motion are normally very small from video frame to frame.
Coding redundancy: patterns and common motions often recur in video.
Perceptual coding redundancy: the human eye cannot perceive minute differences in color and brightness.
Compression algorithms can reduce the size of a video bitstream significantly because video typically contains duplicate or redundant information both within and between frames.
Spatial redundancy: neighboring pixels in an image often have similar values, because the color or brightness of an object typically does not vary significantly over small areas. Instead of encoding each pixel individually, a compression algorithm can save bits by encoding only the difference between neighboring pixels. That difference is typically a smaller value than the full range of possible pixel values and can therefore be encoded with fewer bits.
Temporal redundancy: under the NTSC standard, 30 television frames per second is the norm. During the 33 ms between frames, most of the image changes little or moves slightly from one place to another. Compression algorithms often encode only the small changes, or the direction a part of the image moved between frames; these changes typically require fewer bits than representing the whole image again.
Coding redundancy: some patterns and motions are more common than others in natural images. The most frequent patterns can be encoded more efficiently than less frequent ones. This form of variable-length coding assigns fewer bits to more common codes; encoding the small motions with few bits and the large ones with more bits yields a more efficient system than encoding all of them with the same number of bits.
Perceptual coding redundancy: the most efficient compression algorithms take advantage of human biology as well. Not all visual patterns are equally visible to the human eye; for example, fine details under low light or low contrast may not be visible. Efficient algorithms remove perceptually redundant patterns that the eye cannot see.
The result: fewer bits (storage) and fewer bits/second (bandwidth).
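The spatial/temporal difference-coding idea above can be sketched in a few lines. The pixel rows and the tiny run-length encoder are illustrative toys, not any real codec:

```python
# Sketch: temporal redundancy. Store the first frame whole, then store only
# per-pixel differences for the next frame. Values are made-up 8-bit samples.
frame1 = [100, 101, 102, 103, 104]
frame2 = [100, 101, 103, 103, 104]   # only one pixel changed

diff = [b - a for a, b in zip(frame1, frame2)]
print(diff)  # [0, 0, 1, 0, 0] -> mostly zeros, which compress very well

# A toy run-length encoder exploits those runs of zeros:
def rle(values):
    out, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        out.append((values[i], j - i))   # (value, run length)
        i = j
    return out

print(rle(diff))  # [(0, 2), (1, 1), (0, 2)]
```

Real codecs use motion compensation, DCT, and variable-length codes instead of raw differencing, but the principle of sending only what changed is the same.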

14 Video over IP Training MPEG Compression
MPEG takes analog or digital video signals and converts them to packets of digital data that are more efficiently transported over a network. Being digital, the signal has the following advantages: it does not degrade, the picture does not get fuzzy, and the signal-to-noise ratio declines only slowly.
The MPEG system consists of two layers:
System Layer (timing information to synchronize video and audio)
Compression Layer (the audio and video streams)
The system decoder extracts the timing information from the MPEG system stream and sends it to the other system components. The decoder also demultiplexes the video and audio streams from the system stream and passes each to the appropriate decoder. The video and audio decoders decompress the information as specified in parts 2 and 3 of the MPEG standard, respectively.
[Figure: general MPEG decoding system]

15 Video over IP Training MPEG Compression: I, P, B Frames & Group of Pictures
A frame is a single image from a video sequence.
An I frame (intra) is compressed based solely on the information contained in that frame.
A P frame (predicted) is compressed using the data contained in the frame itself plus data from the closest preceding I or P frame.
A B frame (bi-directionally predicted) is compressed using data from the closest preceding I or P frame and the closest following I or P frame.
A Group of Pictures (GOP) is a series of frames consisting of a single I frame and zero or more P and B frames. A GOP typically contains 12 frames and starts with an I frame; an I frame is approximately 64,000 bytes.
The MPEG standard is primarily a bitstream specification, although it also specifies a typical decoding process to assist in interpreting that specification. This approach supports data interchange without restricting innovation in how the bitstream is created or decoded. The bitstream specification is based on a data hierarchy, which is useful for the following reasons: groups of pictures allow random access into a sequence, and slices aid error recovery (if one slice contains an error, it can be skipped).
Video Sequence = sequence header + n GOPs + end of sequence
GOP = group of pictures (frames) = header + series of pictures (frames)
Picture (frame) = the primary coding unit of a video sequence. It consists of three matrices: Y, Cb, and Cr. An RGB signal can be converted to the Y, Cb, Cr system by a linear transformation.
Slice = group of macroblocks
Macroblock = 2 x 2 matrix of blocks
Block = 8 x 8 pixel set
In simple terms, the ascending order of the hierarchy is: Block → Macroblock → Slice → Frame.
The video bitstream architecture is based on a sequence of pictures, each of which contains the data needed to create a single displayable image. I frames, P frames, B frames, and Groups of Pictures are all terms that describe the way picture data is structured in an MPEG video stream or file. MPEG-2 (and MPEG-1) video compression uses the Discrete Cosine Transform (DCT) to transform 8x8 blocks of pixels into variable-length codes (VLCs), which represent the quantized DCT coefficients.
MPEG-2 encoders produce three types of frames: Intra (I), Predictive (P), and Bidirectional (B). To understand why MPEG uses these different frames, it is illuminating to look at the amount of data required to represent each frame type: with a video image of normal complexity, a P frame takes 2-4 times less data than an I frame, and a B frame takes 2-5 times less data than a P frame.
The picture type defines which prediction modes may be used to code each block. Intra pictures (I-pictures) are coded without reference to other pictures; moderate compression is achieved by reducing spatial redundancy, but not temporal redundancy. They can be used periodically to provide access points in the bitstream where decoding can begin. Predictive pictures (P-pictures) can use the previous I- or P-picture for motion compensation and may themselves serve as a reference for further prediction; each block in a P-picture can be either predicted or intra-coded. By reducing both spatial and temporal redundancy, P-pictures offer increased compression compared to I-pictures. Bidirectionally predictive pictures (B-pictures) can use both the previous and the next I- or P-picture for motion compensation, and offer the highest degree of compression; each block in a B-picture can be forward, backward, or bidirectionally predicted, or intra-coded. To enable backward prediction from a future frame, the coder reorders the pictures from natural 'display' order to 'bitstream' order, so that a B-picture is transmitted after both of the reference pictures it uses. This introduces a reordering delay that depends on the number of consecutive B-pictures.
[Figure: relative amounts of data for each frame type in a typical MPEG GOP]
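The frame-size ratios above make it easy to estimate the bit rate of a whole GOP. The numbers below are assumptions taken from the slide (I ≈ 64,000 bytes, P ≈ 1/3 of an I, B ≈ 1/2 of a P) applied to the N=12, M=3 structure described in this deck:

```python
# Illustrative estimate of bytes per GOP and the resulting bit rate.
I_SIZE = 64_000            # bytes, "approx 64k" from the slide
P_SIZE = I_SIZE // 3       # P frame ~1/3 the size of an I frame (assumed ratio)
B_SIZE = P_SIZE // 2       # B frame ~1/2 the size of a P frame (assumed ratio)

gop = "IBBPBBPBBPBB"       # 12-frame GOP: 1 I, 3 P, 8 B
sizes = {"I": I_SIZE, "P": P_SIZE, "B": B_SIZE}

total_bytes = sum(sizes[f] for f in gop)
print(total_bytes)                              # bytes per GOP

# At 30 frames/s, a 12-frame GOP lasts 0.4 s:
print(round(total_bytes * 8 * 30 / 12 / 1e6, 2))  # approx Mb/s
```

The result lands in the single-digit Mb/s range, which is why a 216 Mb/s uncompressed SD signal fits comfortably in an IPTV channel after MPEG-2 compression.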

16 Predictive – Minimal

17 Predictive – Maximum

18 Video over IP Training MPEG Compression: I, P, B Frames & Group of Pictures
The order in which video frames are transmitted can differ from the order in which they are displayed.
A typical GOP in display order is: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12
The corresponding bitstream order is: I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11
MPEG can also use a variable GOP to better deal with complex video (not shown); this concentrates I frames during complex scenes.
In MPEG video encoding, a Group of Pictures (GOP) specifies the order in which intra-frames and inter-frames are arranged. Each MPEG-coded film or video stream consists of successive GOPs, and the visible frames are generated from the MPEG pictures they contain. A GOP can contain the following picture types:
I-picture or I-frame (intra-coded picture): a reference picture corresponding to a complete image, independent of the other picture types. Each GOP begins with this type of picture.
P-picture or P-frame (predictive-coded picture): contains difference information relative to the preceding I- or P-frame.
B-picture or B-frame (bidirectionally predictive-coded picture): contains difference information relative to the preceding and/or following I- or P-frame.
A GOP always begins with an I-frame. Several P-frames follow, each some frames apart, with B-frames filling the remaining gaps; the next I-frame starts a new GOP. The different picture types thus occur in a repeating sequence, termed a Group of Pictures.
A regular GOP structure can be described with two parameters: N, the number of pictures in the GOP, and M, the spacing of P-pictures. The GOP given here is described as N=12 and M=3. MPEG-2 does not insist on a regular GOP structure: for example, a P-picture following a shot change may be badly predicted, since the reference picture is completely different from the picture being predicted, so it may be beneficial to code it as an I-picture instead. For a given decoded picture quality, each picture type produces a different number of bits: in a typical example sequence, a coded I-picture was three times larger than a coded P-picture, which was itself 50% larger than a coded B-picture.
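The display-order to bitstream-order reordering described above can be sketched directly: every B frame depends on the next I or P reference, so the encoder emits that reference first and the pending B frames after it. The function name is illustrative.

```python
# Sketch: reorder a GOP from display order to bitstream (transmission) order.
def display_to_bitstream(display):
    out, pending_b = [], []
    for frame in display:
        if frame[0] in "IP":          # reference frame: emit it, then queued Bs
            out.append(frame)
            out.extend(pending_b)
            pending_b = []
        else:                         # B frame: hold until the next reference
            pending_b.append(frame)
    out.extend(pending_b)             # flush any trailing B frames
    return out

display = ["B1", "B2", "I3", "B4", "B5", "P6",
           "B7", "B8", "P9", "B10", "B11", "P12"]
print(display_to_bitstream(display))
# ['I3', 'B1', 'B2', 'P6', 'B4', 'B5', 'P9', 'B7', 'B8', 'P12', 'B10', 'B11']
```

This reproduces exactly the bitstream order shown on the slide, and makes the reordering delay visible: each B frame waits in `pending_b` until its future reference arrives.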

19 Video over IP Training Every Packet Counts
Video and audio CODECs remove large amounts of redundancy, creating highly compressed data streams. Very small interruptions in the data stream can significantly reduce video quality.
First principle: given good-quality source video, packet loss is the only thing an IP transport network can do to affect video quality.

20 Building an MPEG Bitstream
Formatting MPEG Video for Transmission

21 Video over IP Training Building an MPEG Bitstream
System Layer Overview
Elementary Streams (ES)
Packetized Elementary Stream (PES)
Program Stream (PS)
Transport Stream (TS)
Program Clock Reference (PCR)

22 Video over IP Training System Layer: MPEG Stream Types
MPEG stream types: Elementary Streams, Packetized Elementary Streams, Program Streams, Transport Streams.
[Figure: video and audio encoders and packetizers produce Video/Audio ES and PES, which the Transport Stream MUX combines with PSIP data into a multiple-program transport stream]
The MPEG system layer describes how video and audio bitstreams are packetized, how they are interleaved into programs, and how they are mixed with additional information used to separate out the desired audio and video components. The compressed video and audio streams are called elementary streams. These are broken into packets, called packetized elementary streams (PES). The various streams belonging to a single program are interleaved into program streams. Finally, all of the programs are interleaved, along with descriptive data, into a transport stream. The descriptive data includes information about the digital channels contained within the transport stream, such as which audio and video streams are grouped into the various programs.
Reference page(s): 187

23 Video over IP Training System Layer: Program Stream
A Program Stream (PS) carries a single program. In MPEG, a program is a combination of video, audio, and related data, and all information in the program stream must share a common time base. Typically one video stream is combined with one or more audio streams:
Video PES + Audio PES 1 + Audio PES 2 = Program Stream
Q: Is this also done for closed captioning? A: Yes.
Program stream (PS or MPEG-PS) is the name of the format specified in MPEG-1 Systems and MPEG-2 Part 1, Systems (ISO/IEC standard ). It is a container format designed for reasonably reliable media such as disks (DVD) and other error-free environments, in contrast to the transport stream, which is designed for transmission in which loss of data is likely.
Generally, the first audio stream carries the stereo soundtrack for the video. Other audio streams can include surround sound, alternate languages, and commentaries. Data streams include captioning information, program information (title, length, owner, content descriptions, etc.), and anything else the content owner wants to provide. Control information is another type of data, and encryption systems can also require data to be included. A program stream can include multiple video streams (up to 16 video, 32 audio, and 16 data streams).
The interleaving of the various PES packets (video, audio, and data) results in a program stream that is further packetized into chunks called 'packs', usually 2048 bytes long. As with PES packets, a program stream pack has a header that provides information about the pack.
Reference page(s): 191

24 Video over IP Training System Layer: Transport Stream
A Transport Stream (TS) contains one or more program streams along with additional information. The transport stream breaks the elementary streams into fixed-length packets. A transport stream containing a single program is called a Single Program Transport Stream (SPTS); a transport stream with more than one program is called a Multi-Program Transport Stream (MPTS).
[Figure: program streams and data streams multiplexed into a transport stream; 1 TS packet = 188 bytes, including a 4-byte TS packet header]
Transport streams, the final layer in the MPEG-2 system layer hierarchy, contain one or more program streams multiplexed together, along with additional information describing the program streams they contain. MPEG-2 transport streams are used in digital cable, digital satellite, and HDTV broadcast systems. Typically, an MPTS contains several different programs plus descriptive information, and the whole MPTS is delivered as a single bitstream over cable or satellite broadcast systems.
Like program streams, transport streams are formed by breaking up elementary streams, program streams, or both into TS packets. The packets are a fixed 188 bytes, including a 4-byte transport stream header. Each packet contains data from only a single elementary stream, so each packet is video, audio, or control information, never a mixture.
Forward Error Correction (FEC) codes can also be added to transport stream packets through standardized codes such as Reed-Solomon, adding 16 or 20 bytes to the 188-byte packet and bringing the total packet length to 204 or 208 bytes. Packets of 204 bytes are commonly used in applications related to the Digital Video Broadcasting (DVB) consortium's series of specifications, which are in widespread use in Europe and in a number of DTH satellite systems. Packets of 208 bytes are used in Advanced Television Systems Committee (ATSC) applications for terrestrial digital television broadcast in the USA (sometimes called DTV or HDTV). The primary benefit of adding RS coding to transport stream packets is to give the receiver the ability to recover from transmission errors.
Reference page(s): 192
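A minimal sketch of the fixed 188-byte packetization described above: split a byte stream into TS packets and check the 0x47 sync byte at the start of each one. The function name and the fake packets are illustrative.

```python
# Sketch: split a byte stream into 188-byte TS packets, verifying sync.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def split_ts(data):
    packets = []
    for off in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = data[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            raise ValueError(f"lost sync at offset {off}")
        packets.append(pkt)
    return packets

# Two fake packets: sync byte followed by zero padding.
stream = bytes([SYNC_BYTE] + [0] * 187) * 2
print(len(split_ts(stream)))  # 2
```

For a DVB or ATSC feed with FEC, the same loop would use a stride of 204 or 208 bytes, with the extra Reed-Solomon bytes trailing each 188-byte packet.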

25 Video over IP Training System Layer: Transport Stream MPEG Packet & Header
One TS MPEG packet (188 bytes) = header + payload. Minimum 4-byte header fields (bits):
Sync Byte (8), Transport Error Indicator (1), Payload Unit Start Indicator (1), Transport Priority (1), PID (13), Scrambling Control (2), Adaptation Field Control (2), Continuity Counter (4)
Optional adaptation field, preceding the payload: Adaptation Field Length (8), Discontinuity Indicator (1), Random Access Indicator (1), ES Priority Indicator (1), 5 Flags (5), optional fields (PCR (48), OPCR (48), Splice Countdown (8), Transport Private Data, Adaptation Field Extension), stuffing bytes
In the minimum 4-byte header, the most important information is:
The sync byte. This byte is recognized by the decoder so that the header and the payload can be deserialized.
The transport error indicator. This bit is set if the error-correction layer above the transport layer is experiencing a raw bit error rate (BER) too high to be correctable; it indicates that the packet may contain errors.
The packet identifier (PID). This thirteen-bit code is used to distinguish between different types of packets.
The continuity counter. This four-bit value is incremented by the multiplexer as each new packet with the same PID is sent. It is used to determine whether any packets are lost, repeated, or out of sequence.
The PID is used by the demultiplexer to distinguish between packets containing different types of information. The transport-stream bit rate must be constant, even though the sum of the rates of all of the streams it contains can vary. This requirement is handled with null packets: if the real payload rate falls, more null packets are inserted. Null packets always have the same PID, 8191 (thirteen ones in binary).
In a given transport stream, all packets belonging to a given elementary stream have the same PID, so the demultiplexer can select all data for that elementary stream simply by accepting only packets with the right PID. Data for an entire program can be selected using the PIDs for its video, audio, and data streams (such as subtitles or teletext). The demultiplexer can correctly select packets only if it can associate them with the elementary stream to which they belong, which requires knowing the right PIDs; that is the function of the Program Specific Information (PSI).
Reference page(s): 193
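The 4-byte header layout above maps directly onto a few bit masks. A minimal sketch of the field extraction (the function name is illustrative; the bit positions follow the MPEG-2 systems header as listed on the slide, where the payload unit start indicator appears as "Start Indicator"):

```python
# Sketch: decode the 4-byte MPEG-2 TS packet header fields.
def parse_ts_header(pkt):
    b0, b1, b2, b3 = pkt[0], pkt[1], pkt[2], pkt[3]
    return {
        "sync_byte": b0,                          # should be 0x47
        "transport_error": bool(b1 & 0x80),
        "payload_unit_start": bool(b1 & 0x40),    # "Start Indicator"
        "transport_priority": bool(b1 & 0x20),
        "pid": ((b1 & 0x1F) << 8) | b2,           # 13-bit packet identifier
        "scrambling_control": (b3 >> 6) & 0x03,
        "adaptation_field_control": (b3 >> 4) & 0x03,
        "continuity_counter": b3 & 0x0F,          # 4-bit wrapping counter
    }

# A null-packet header: PID 8191 (0x1FFF), payload only, counter 0.
h = parse_ts_header(bytes([0x47, 0x1F, 0xFF, 0x10]))
print(hex(h["pid"]))   # 0x1fff -> 8191, the null-packet PID
```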

26 Video over IP Training Transport Stream: Constant & Variable Bit Rates
CBR (Constant Bit Rate)
The rate at which the decoder consumes the CODEC's data stream is constant
Useful in streaming media when the transport medium is a fixed resource
Usually created by stuffing null packets into the transport stream
VBR (Variable Bit Rate)
The CODEC can vary the amount of output data per time segment
More bits are allocated to more complex content
Uses less overall bandwidth; no stuffing
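The null-packet stuffing that turns a VBR source into a CBR output can be sketched in a few lines. The packet representation (a `(name, pid)` tuple) and the function name are illustrative assumptions:

```python
# Sketch: CBR via null-packet stuffing. A multiplexer that must emit a fixed
# number of packets per interval pads a VBR source with null packets
# (PID 8191) so the transport-stream rate stays constant.
NULL_PID = 8191

def stuff_to_cbr(payload_packets, packets_per_interval):
    pad = packets_per_interval - len(payload_packets)
    assert pad >= 0, "payload exceeds the channel rate"
    return payload_packets + [("null", NULL_PID)] * pad

out = stuff_to_cbr([("video", 33), ("audio", 34)], 5)
print([pid for _, pid in out])  # [33, 34, 8191, 8191, 8191]
```

The receiver simply discards packets with PID 8191, which is why stuffing costs bandwidth but never confuses the decoder.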

27 Video over IP Training Program Specific Information & Packet Identifiers (PIDs)
Each Program Stream (in MPEG TS) has unique 13-bit Packet Identifiers (PIDs) Standardized PIDs: Program Association Table (PAT) Program Map Table (PMT) Stuffing Configurable PID’s Video Audio Data Program Specific Information (PSI) is carried in packets having unique PIDs, some of which are standardized and some of which are specified by the program association table (PAT), conditional access table (CAT) and the transport stream description table (TSDT). These packets must be included periodically in every transport stream. The PAT always has a PID of 0, the CAT always has a PID of 1, and the TSDT always has a PID of 2. These values and the null-packet PID of 8191 are the only PIDs fixed by the MPEG standard. The decoder must determine all of the remaining PIDs by accessing the appropriate tables. The programs that exist in the transport stream are listed in the program association table (PAT) packets (PID = 0) that carries the PID of each PMT packet. The first entry in the PAT, program 0, is reserved for network data and contains the PID of network information table (NIT) packets. Usage of the NIT is optional in MPEG-2, but is mandatory in DVB. The PIDs of the video, audio, and data elementary streams that belong in the same program are listed in the Program Map Table (PMT) packets. Each PMT packet normally has its own PID, but MPEG-2 does not mandate this. The program number within each PMT will uniquely define each PMT. A given network information table (NIT) contains details of more than just the transport stream carrying it. Also included are details of other transport streams that may be available to the same decoder, for example, by tuning to a different RF channel or steering a dish to a different satellite. The NIT may list a number of other transport streams and each one must have a descriptor that specifies the radio frequency, orbital position, and so on. 
In DVB, additional metadata, known as DVB-SI, is included, and the NIT is considered to be part of DVB-SI. When discussing the subject in general, the term PSI/SI is used. Upon first receiving a transport stream, the decoder must look for PIDs 0 and 1 in the packet headers. All PID 0 packets contain the PAT; all PID 1 packets contain CAT data. By reading the PAT, the decoder can find the PIDs of the NIT and of each PMT. By finding the PMTs, the decoder can find the PIDs of each elementary stream. Consequently, if the decoding of a particular program is required, reference to the PAT and then the PMT is all that is needed to find the PIDs of all of the elementary streams in the program. If the program is encrypted, access to the CAT will also be necessary. As demultiplexing is impossible without a PAT, the lock-up speed is a function of how often the PAT packets are sent. MPEG specifies a maximum interval of 0.5 seconds for the PAT packets and for the PMT packets they refer to.
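Since the PID is just a fixed 13-bit field in the 4-byte TS packet header, extracting it takes only a couple of shifts. A minimal Python sketch (the `ts_pid` helper name is mine, not from any standard library):

```python
def ts_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from a 188-byte MPEG transport stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:  # 0x47 is the TS sync byte
        raise ValueError("not a valid TS packet")
    # Low 5 bits of byte 1 are the PID's high bits; byte 2 is the low byte.
    return ((packet[1] & 0x1F) << 8) | packet[2]

# A stuffing/null packet carries the reserved PID 8191 (0x1FFF).
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(ts_pid(null_packet))  # 8191

# A PAT packet always carries PID 0.
pat_packet = bytes([0x47, 0x40, 0x00, 0x10]) + bytes(184)
print(ts_pid(pat_packet))   # 0
```

This is exactly the lookup a demultiplexer performs on every packet before consulting the PAT/PMT tables described above.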

28 Video over IP Training Program Clock Reference (PCR)
Assisting the decoder in presenting programs: on time, at the right speed, and with audio synchronization
Programs periodically provide a Program Clock Reference (PCR) on one of the PIDs in the program
To assist the decoder in presenting programs on time, at the right speed, and with synchronization, programs usually provide a periodic Program Clock Reference, or PCR, on one of the PIDs in the program. Simplified: the TS generator takes a “snapshot” of its reference clock and sends it in the video packets. The decoder reads the snapshots and corrects its internal clock. Encoder clock (reference, transmitted); decoder clock (recovered, corrected).
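The PCR "snapshot" is a 6-byte field in the TS adaptation field: a 33-bit base counting 90 kHz ticks plus a 9-bit extension counting 27 MHz ticks. A minimal decoding sketch (the `parse_pcr` helper is my own name for illustration):

```python
def parse_pcr(field: bytes) -> int:
    """Decode the 6-byte PCR field into 27 MHz system-clock ticks.

    Layout: 33-bit base, 6 reserved bits, 9-bit extension.
    """
    base = (field[0] << 25) | (field[1] << 17) | (field[2] << 9) \
         | (field[3] << 1) | (field[4] >> 7)
    ext = ((field[4] & 0x01) << 8) | field[5]
    return base * 300 + ext   # one base tick = 300 ticks of the 27 MHz clock

# base = 1, extension = 0  ->  300 ticks of the 27 MHz clock
print(parse_pcr(bytes([0x00, 0x00, 0x00, 0x00, 0x80, 0x00])))  # 300
```

The decoder compares successive decoded PCR values against its own free-running 27 MHz counter and nudges its clock to match, which is what keeps playback at the right speed and in sync.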

29 Video over IP Training MPEG Encoding / Transmission / Decoding
The video bitstream architecture is based on a sequence of pictures, each of which contains the data needed to create a single displayable image. I-frames, P-frames, B-frames and Group of Pictures are all terms that describe the way picture data is structured in an MPEG video stream or file.
In MPEG video encoding, a Group of Pictures, or GOP, specifies the order in which intra-frames and inter-frames are arranged. The GOP is a group of successive pictures within an MPEG-coded film or video stream; each stream consists of successive GOPs, and the visible frames are generated from the MPEG pictures they contain. A GOP can contain the following picture types:
I-picture or I-frame (intra coded picture): a reference picture that corresponds to a fixed image and is independent of other picture types. Each GOP begins with this type of picture.
P-picture or P-frame (predictive coded picture): contains difference information from the preceding I- or P-frame.
B-picture or B-frame (bidirectionally predictive coded picture): contains difference information from the preceding and/or following I- or P-frame.
A GOP always begins with an I-frame. Several P-frames follow, each some frames apart, with B-frames filling the remaining gaps; the next I-frame begins a new GOP. The different picture types typically occur in a repeating sequence, termed a Group of Pictures. A typical GOP in display order is: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12. The corresponding bitstream order is: I3 B1 B2 P6 B4 B5 P9 B7 B8 P12 B10 B11. A regular GOP structure can be described with two parameters: N, the number of pictures in the GOP, and M, the spacing of P-pictures. The GOP given here is described as N=12 and M=3. MPEG-2 does not insist on a regular GOP structure.
For example, a P-picture following a shot change may be badly predicted, since the reference picture for prediction is completely different from the picture being predicted. Thus, it may be beneficial to code it as an I-picture instead. For a given decoded picture quality, each picture type produces a different number of bits. In a typical example sequence, a coded I-picture was three times larger than a coded P-picture, which was itself 50% larger than a coded B-picture.
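The display-order to bitstream-order reshuffle above follows one rule: each I- or P-frame (anchor) must be transmitted before the B-frames that precede it in display order, because those B-frames reference it. A small sketch of that reorder (function name is my own):

```python
def display_to_bitstream(display_order):
    """Reorder a GOP from display order to bitstream (transmission) order.

    Each I- or P-frame is emitted before the B-frames that precede it
    in display order, since those B-frames need it as a reference.
    """
    coded, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)      # hold B-frames until their anchor
        else:
            coded.append(frame)          # I- or P-frame goes first...
            coded.extend(pending_b)      # ...then the B-frames waiting on it
            pending_b = []
    return coded + pending_b

gop = ["B1", "B2", "I3", "B4", "B5", "P6", "B7", "B8",
       "P9", "B10", "B11", "P12"]
print(display_to_bitstream(gop))
# ['I3', 'B1', 'B2', 'P6', 'B4', 'B5', 'P9', 'B7', 'B8', 'P12', 'B10', 'B11']
```

Running it on the N=12, M=3 example from the text reproduces the bitstream order given above.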

30 NETWORKING FUNDAMENTALS for VIDEO PROFESSIONALS
Data Transmission over IP Networks

31 Video over IP Training IP Networking Fundamentals for Video Professionals
Network Overview IP Protocol Layer Model & OSI Model IP Addresses, Datagrams and Packets Ethernet and IP Ethernet Addressing Ethernet Interfaces Video over IP MPEG Streams into IP Packets Packet Transport Multicasting Basic Concepts & Applications Multicasting System Architecture System Impact

32 Video over IP Training Network Overview
A network is composed of two fundamental parts, nodes and links
A node is some type of network device, such as a computer. Nodes communicate with other nodes through links, such as cables.
IP-based networks form the backbone and make convergence possible
There are basically two different network techniques for establishing communication between nodes on a network: circuit-switched (connection-oriented) networks and packet-switched networks
Networks support large numbers of users for communication, partition users for bandwidth efficiency, and permit backup and shared connections
Network Switch User A User B
Modern digital technology allows different sectors, e.g. telecom, data, radio and television, to be merged together. This convergence is happening on a global scale and is drastically changing the way in which both people and devices communicate. At the center of this process, forming the backbone and making convergence possible, are IP-based networks. Services and integrated consumer devices for purposes such as telephony, entertainment, security or personal computing are constantly being developed, designed and converged towards a communication standard that is independent of the underlying physical connection. The cable network, for instance, first designed for transmitting television to the consumer, can now also be utilized for sending voice, video, and data. These features are also available over other physical networks, e.g. telephone, mobile phone, satellite and computer networks. The Internet has become the most powerful factor guiding the ongoing convergence process, mainly because the Internet protocol suite has become a shared standard used with almost any service. The Internet protocol suite consists primarily of the Internet Protocol (IP) and the Transmission Control Protocol (TCP); consequently, the term TCP/IP commonly refers to the protocol family.
As network size grows, data partitioning becomes more important. For example, worldwide broadcasts would waste network bandwidth, raise data privacy concerns, and delay data delivery.

33 Video over IP Training Network Techniques
Connection-Oriented (Circuit-Switched)
Dedicated, private connection (circuit) between two points, e.g. a telephone call
Dedicated bandwidth for the duration of the call; bandwidth is wasted if not constantly utilized
Packet-Switched
Data is transferred in small messages (packets), allowing many devices to share the bandwidth of a network connection
Network devices divide a data stream into packets for transmission and reassemble packets to reconstruct the data stream
Too many devices active at once can oversubscribe a network, forcing a device to wait to send
Most data networks today are built with packet-switched technology due to its efficiency (cost) advantages
Requires protocols to permit the efficient use of shared paths and to allow packet data to be directed through interconnected shared paths

34 Video over IP Training Network Types & Characteristics
Wide Area Network (WAN)
Between campuses, metropolitan areas, or cross-country distances
Usually operates at lower speeds: 56 Kbps – 155 Mbps
Delays range from milliseconds to 100s of milliseconds
Local Area Network (LAN)
Short haul, within a building or campus
Usually higher speed than a WAN: 10 Mbps – 1 Gbps
Delays range from tenths of a millisecond to 10s of milliseconds
Personal Area Network (PAN)
Very short range network, usually wireless, that allows devices to work together, such as a PDA with a desktop. See IEEE (ex: Bluetooth, Zigbee)
Typical speeds: 20 Kbps – 50 Mbps
Very short end-to-end delays

35 Video over IP Training Data Networks Overview – Network Terms
A Protocol is a set of rules governing the interactions of two entities
An Application is a user service, such as file transfer, that utilizes a network in providing its service
A Packet is a small message with self-contained addressing information. Large blocks of data (e.g. files) or streaming data such as video may be forwarded as many small messages or “packets”
Bandwidth indicates the information-carrying capacity of a device (switch, router) or link (fiber, copper, RF)
A Switch or Hub is a packet-forwarding device that uses only Layer 2 information in its forwarding algorithms
A Router or Gateway is a packet-forwarding device that uses only Layer 3 information in its forwarding algorithms

36 Video over IP Training Open System Interconnection (OSI) Reference Model
Application (7) Provides services directly to user applications. Because of the potentially wide variety of applications, this layer must provide a wealth of services, among them establishing privacy mechanisms, authenticating the intended communication partners, and determining whether adequate resources are present.
Presentation (6) Performs data transformations to provide a common interface for user applications, including services such as reformatting, data compression, and encryption.
Session (5) Establishes, maintains, and ends user connections and manages the interaction between end systems. Services include such things as establishing communications as full or half duplex and grouping data.
Transport (4) Insulates the upper Layers 5–7 from having to deal with the complexities of Layers 1–3 by providing the functions necessary to guarantee a reliable network link. Among other functions, this layer provides error recovery and flow control between the two end points of the network connection.
Network (3) Establishes, maintains, and terminates network connections. Among other functions, standards define how data routing and relaying are handled. Note that Ethernet and IP are not the same; they sit at Layers 2 and 3 of the OSI model.
Data-Link (2) Ensures the reliability of the physical link established at Layer 1. Standards define how data frames are recognized and provide the necessary flow control and error handling at the frame level.
Physical (1) Controls transmission of the raw bitstream over the transmission medium. Standards for this layer define such parameters as the amount of signal voltage swing, the duration of voltages, etc.

37 Video over IP Training Physical Layer 1
Copper: Twisted Pair, Coax
Pros: Lowest cost, simple installation, trained support plentiful
Cons: Shorter distances, ingress/egress interference, bandwidth limits
10Base2 (coax); 10BaseT (2-pair); 100BaseT (2-pair); 1000BaseT (4-pair); 10GBaseT soon
Higher-speed signalling made possible over the years by increased signal processing through more transistors per acre of silicon (pulse shaping, echo cancellation), combined with improved cable construction (better common-mode rejection, lower loss)
Connector types: BNC for coax, RJ telephone style for twisted pair
Fiber
Pros: Long haul, low loss, high bandwidth, interference resistant
Cons: Higher cost, more complex installation, optical component ageing
10BASE-FL (FOIRL), 100BASE-FX, 1000BASE-SX & -LX, 10GBASE-S, -L, -E (850, 1310, 1550 nm)
As for copper, higher-speed signalling made possible through improved electronic interfaces, improved optics (lasers & detectors), and fiber-optic cabling bandwidth
Connector types: MT-RJ, SC, ST, LC (smaller, RJ-like connector)
RF
Pros: Wireless, mobility
Cons: Limited shared spectrum, interference, higher cost, lowest bandwidth, distance limits
WiFi a (5 GHz, to 54 Mb/s, 60'), b (2.4 GHz, to 11 Mb/s, 300'), g (2.4 GHz, to 54 Mb/s, 300')
2-chip solution (digital, RF) made possible by recent gains in chip speed due to transistor shrinkage
Connector types: usually integrated antennas or miniature coaxial connectors for remote antenna connection

38 Ethernet Frame Format:
Video over IP Training Data-Link Layer 2
Data-Link (2) Ensures the reliability of the physical link established at Layer 1. Standards define how data frames are recognized and provide the necessary flow control and error handling at the frame level.
Ethernet Frame Format: Destination MAC (6 bytes) | Source MAC (6 bytes) | Type (2 bytes) | Data | CRC (4 bytes)
Link Layer (Layer 2) – Ethernet Example: Ethernet IEEE 802.3
Most common data link layer: in 1998, 86% of all network connections were Ethernet (>118 million devices) [IDC]
Specs for 10, 100, 1000, 10,000 Mb/s operation
Specs for coax, twisted pair, and short- and long-haul fiber-optic PHY layers
FDX, HDX operation support
Variable-length frames
Type/VLAN ID support
Autonegotiation support
For the complete specification, see:
The main point is that this type of network is based on MAC addresses.

39 Video over IP Training Data-Link Layer 2
Data-Link Layer 2 Examples:
Ethernet – most prevalent data network type
DOCSIS – data network for cable systems
Resilient Packet Ring (RPR) – emerging data network with features for carrying multimedia traffic, fast fault recovery, easy interfacing to Ethernet networks, and OAM carrier requirements
DVB-SSI, ASI – common point-to-point protocols in use for linking video distribution devices

40 Video over IP Training Ethernet Addressing (Data-Link Layer 2)
Ethernet equipment uses Media Access Control (MAC) addresses for each piece of equipment
A MAC address is represented as six fields of two hex characters each. Sample: 00:08:D4:01:03:03
MAC addresses are uniquely assigned to each piece of hardware by the manufacturer and do not change
The first six hex digits can be used to identify the manufacturer
Frames are the format of data packets on the wire. Note that a frame viewed on the actual physical hardware would show start bits, sometimes called the preamble, and the trailing Frame Check Sequence. These are required by all physical hardware and are seen in all frame types. They are not displayed by packet-sniffing software because these bits are removed by the Ethernet adapter before being passed on to the network protocol stack software.
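Since the manufacturer prefix (the Organizationally Unique Identifier, or OUI) is simply the first three bytes of the address, pulling it out is a one-line split. A small sketch (the `oui` helper name is mine):

```python
def oui(mac: str) -> str:
    """Return the manufacturer prefix (OUI): the first three bytes of a MAC."""
    parts = mac.upper().split(":")
    if len(parts) != 6 or any(len(p) != 2 for p in parts):
        raise ValueError("expected six colon-separated two-hex-digit fields")
    return ":".join(parts[:3])

print(oui("00:08:d4:01:03:03"))  # 00:08:D4 -- the vendor-assigned prefix
```

Looking that prefix up in the IEEE's OUI registry identifies which vendor made the interface.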

41 Video over IP Training Network Layer 3 – IP
(3) Establishes, maintains, and terminates network connections. Among other functions, standards define how data routing and relaying are handled.
IP Packet Format: Ethernet Header (14 bytes) | IP Header (20 bytes) | Data | CRC (4 bytes)
Internet Protocol (IP) IETF RFC 791
Most common network layer
Features support for addressing, fragmentation, routing, priority
The Ethernet link layer provides these services to the IP network layer:
Hardware address (MAC address)
Protocol type identifier (data link SAP field)
Maximum Transfer Unit (MTU)
Broadcast capability
Multicast capability

42 Video over IP Training IP Datagrams & Packets
IP works as a delivery service for “datagrams”, also called “IP packets”
A datagram is a single, logical block of data that is being sent over IP, augmented by an IP header
The Maximum Transmission Unit (MTU) of a standard Ethernet link is 1,500 bytes (RFC 894)
Address Resolution Protocol (ARP) is used to correlate a MAC address to an IP address
Procedure for forwarding an IP packet:
The IP packet's header checksum is verified
The header is examined to determine where the packet is going
The Time-to-Live counter is decremented, and a new checksum is calculated and inserted back into the IP header
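The checksum verified and recomputed at each hop is the standard RFC 791 ones'-complement sum over the header's 16-bit words. A minimal sketch, tested against a commonly cited worked example (header with source 192.168.0.1 and destination 192.168.0.199):

```python
def ip_checksum(header: bytes) -> int:
    """RFC 791 header checksum: ones'-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Checksum field (bytes 10-11) zeroed before summing:
hdr = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
print(hex(ip_checksum(hdr)))  # 0xb861

# A valid header, summed *including* its checksum, yields 0:
valid = bytes.fromhex("45000073000040004011" "b861" "c0a80001c0a800c7")
print(ip_checksum(valid))     # 0
```

The second print shows why verification is cheap: a router just sums the whole header and checks for zero before decrementing TTL and writing a fresh checksum.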

43 Video over IP Training IP Protocol Header
The IP header is 20 bytes in length (five 32-bit words)
Contains Source and Destination IP Addresses
The DSCP / Type of Service (TOS) field can be set for network devices to use in configuring QoS
IP HEADER FIELDS:
VERS – version number of the IP protocol
HLEN – datagram header length, given in 32-bit words
DSCP / Type of Service – priority, delay, throughput, reliability
Total Length – datagram length
Identification – datagram ID for reassembly
Flags / Fragment Offset – fragmentation control & fragment ID for reassembly
Time to Live – limits the time a datagram is in the network
Protocol – identifies the next-layer protocol
Header Checksum – verifies integrity of the IP header
Source/Destination Address – identifies the sender and destination device
Options – used for network testing & debugging
Padding – zero bits used to ensure the header is a multiple of 32 bits
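The fixed 20-byte layout above maps directly onto a `struct` format string. A sketch that unpacks a few of the fields (helper name and the returned dictionary are my own choices, not a standard API):

```python
import struct

def parse_ip_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options not handled)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "hlen_words": ver_ihl & 0x0F,      # HLEN, in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                 # 17 = UDP, 6 = TCP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

hdr = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
fields = parse_ip_header(hdr)
print(fields["version"], fields["protocol"], fields["dst"])  # 4 17 192.168.0.199
```

Protocol 17 marks the payload as UDP, which, as the following slides show, is the usual case for video.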

44 Video over IP Training TCP Protocol Header
The TCP header contains Source and Destination Ports
TCP is “connection based”
Receipt of each TCP packet is acknowledged, and dropped or errored packets are retransmitted
TCP HEADER FIELDS:
Source/Destination Port – identify the sending and receiving applications
Sequence Number – position of this segment's data in the byte stream
Acknowledgment Number – next byte expected from the other side
Data Offset – TCP header length in 32-bit words
Flags – SYN, ACK, FIN, RST, PSH, URG control bits
Window – flow-control advertisement of available receive buffer
Checksum – verifies integrity of the header and data
Urgent Pointer / Options / Padding

45 UDP is most common protocol for Video over IP (simpler, less overhead)
Video over IP Training UDP Protocol Header
The UDP header contains Source and Destination Ports
UDP is “connectionless”
The UDP header is 8 bytes in length (two 32-bit words)
Packets are not acknowledged, and there is no retransmission of dropped or errored packets
UDP HEADER FIELDS:
Source Port – identifies the sending application
Destination Port – identifies the receiving application
Length – length of the UDP header plus data
Checksum – verifies integrity of the header and data (optional over IPv4)
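With only four 16-bit fields, the whole UDP header fits one `struct.pack` call. A sketch (the helper name and the example ports are illustrative, not from any spec):

```python
import struct

def build_udp_header(src_port: int, dst_port: int,
                     payload_len: int, checksum: int = 0) -> bytes:
    """Pack the 8-byte UDP header; the Length field counts header + payload."""
    return struct.pack("!HHHH", src_port, dst_port, payload_len + 8, checksum)

# e.g. a 1,316-byte payload of 7 TS packets (checksum left at 0 = unused)
hdr = build_udp_header(1024, 5500, 1316)
print(len(hdr), struct.unpack("!HHHH", hdr))  # 8 (1024, 5500, 1324, 0)
```

The absence of sequence and acknowledgment fields is visible here: everything TCP uses for retransmission simply does not exist in UDP, which is why it is lighter and why lost video packets stay lost.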

46 Video over IP Training IP Addressing (Network Layer 3)
IP addresses use a special “dotted decimal” format that is easy to recognize
A dotted decimal address represents a 32-bit number broken up into four 8-bit numbers
An IP address encodes the ID of a device's network (Net ID) as well as a unique device (Host ID) on that network
All devices on a network share a common network prefix
Each address is a pair of numbers: (netid, hostid). The netid identifies the network; the hostid identifies the device on the network. Each number can have a different binary length depending on the class of network
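The dotted decimal form is pure presentation: each of the four numbers is one byte of the underlying 32-bit value. A small conversion sketch (helper names are mine):

```python
def to_u32(dotted: str) -> int:
    """Convert a dotted decimal address to its underlying 32-bit number."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value: int) -> str:
    """Convert a 32-bit number back to dotted decimal."""
    return ".".join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

print(hex(to_u32("10.0.0.1")))   # 0xa000001
print(to_dotted(0xC0A80001))     # 192.168.0.1
```

Seeing the address as one 32-bit number makes the netid/hostid split natural: the netid is simply the high-order bits, the hostid the remainder.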

47 Video over IP Training IP Addressing Classes
IP address ranges are broken down into classes: A, B, C, D, and E
“Classful” addressing scheme: different size networks and device types were assigned to a specific class of addresses, which helped in routing
“Classless” addressing is used today: any size network or type of device can be assigned a Class A, B or C address
The value of the first number of the IP address determines the class
When the first precursor of the Internet was developed, some of the requirements of the internetwork, both present and future, were quickly realized. The Internet would start small but eventually grow. It would be shared by many organizations. Since it is necessary for all IP addresses on the internetwork to be unique, a system had to be created for dividing up the available addresses and sharing them amongst those organizations. A central authority had to be established for this purpose, and a scheme developed for it to effectively allocate addresses. The developers of IP recognized that organizations come in different sizes and would therefore need varying numbers of IP addresses on the Internet. They devised a system whereby the IP address space would be divided into classes, each of which contained a portion of the total addresses and was dedicated to specific uses. Some would be devoted to large networks on the Internet, while others would be for smaller organizations, and still others reserved for special purposes. Since this was the original system, it had no name; it was just “the” IP addressing system. Today, in reference to its use of classes, it is called the “classful” addressing scheme, to differentiate it from the newer classless scheme. (“Classful” isn't really a word, but it's what everyone uses.)
IP Address Classes (Class – Lowest Address – Highest Address):
A – 0.0.0.0 – 127.255.255.255
B – 128.0.0.0 – 191.255.255.255
C – 192.0.0.0 – 223.255.255.255
D – 224.0.0.0 – 239.255.255.255 (Multicast Address Range)
E – 240.0.0.0 – 255.255.255.255

48 Video over IP Training IP Addressing Classes
A Subnet (short for subnetwork) is a range of IP addresses within the overall address space assigned to an organization
Created to allow Classless Inter-Domain Routing (CIDR)
Subnet masks are also represented in the dotted decimal format and follow the IP address range classes
Subnets are used to divide or combine groups of IP addresses
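Python's standard `ipaddress` module handles subnet arithmetic directly, which makes the mask/prefix relationship concrete. A sketch using an arbitrary example block (the 192.168.10.16/28 network is my own illustration):

```python
import ipaddress

# A /28 carves a 16-address block out of an organization's space.
subnet = ipaddress.ip_network("192.168.10.16/28")

print(subnet.netmask)                                   # 255.255.255.240
print(subnet.num_addresses)                             # 16
print(ipaddress.ip_address("192.168.10.20") in subnet)  # True
print(ipaddress.ip_address("192.168.10.40") in subnet)  # False
```

The prefix length (/28) and the dotted decimal mask (255.255.255.240) are two spellings of the same thing: 28 one-bits followed by 4 zero-bits.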

49 Video over IP Training Ethernet Hubs
Ethernet hubs provide 3 main functions:
Acts as a repeater, taking an incoming signal, amplifying it, and sending it out on all other ports
Isolates the ports electrically (electrical termination), so that connections can be added or removed without having to disable the rest of the network
Acts as a splitter/combiner, allowing devices to share a connection to a third device
No collision prevention: hubs simply receive and retransmit signals
Not used much today

50 Video over IP Training Ethernet Switches – Layer 2
A switch provides a separate logical and physical network on every one of its connections, or ports
Common practice is to put a single device on each connection of the switch. This practice provides 3 benefits:
Each device can transmit and receive data without worrying about collisions with other devices, so data transmission speeds can increase
A switch will send packets out only on the port connected to the destination device, which improves network security and reliability
Certain devices can operate in full-duplex mode, meaning they can transmit and receive data at the same time
Switches tend to have most of their ports configured as a single type of interface
Works on MAC addressing

51 Video over IP Training Ethernet Router – Layer 3
Routers provide a crucial function for IP networks by examining the headers and addresses of IP packets and then transmitting them onward to their destinations.
Basic Functions of a Router:
Accept IP packets from devices and forward them to recipient devices
Forward packets to another router when the recipient device is not local to the router
Maintain “routing tables” for different destination networks
Bridge between different network technologies
Monitor the status of connected networks to determine if they are vacant, busy, or disabled
Queue packets of different priority levels so that high-priority packets have precedence
Inform other routers of network configurations
Perform many other administrative and security-related functions
Routers process IP addresses, whereas switches process MAC addresses

52 Encapsulation of MPEG Transport Streams
Video over IP Encapsulation of MPEG Transport Streams

53 Video over IP Training Video over IP or Networks
Video into Packets Encapsulating Media Data Transport Protocols Ports & Sockets UDP / TCP / RTP Packet Transport Transport Methods Considerations

54 Video over IP Training IP Encapsulation
IP Encapsulation is the process of taking a data stream, formatting it into packets, and adding the headers and other data required
MPEG-over-IP transport streams consist of multiple MPEG TS packets packed inside UDP datagrams
A typical IP video packet will contain 7 TS packets (188 x 7 = 1,316 bytes)
Add Ethernet, IP and UDP headers (46 bytes); the Ethernet Maximum Transmission Unit (MTU) is 1,500 bytes
1,316 bytes + 46 bytes = 1,362 bytes
Ethernet | IP/UDP | MPEG2 TS Video Packets (188 bytes each) | CRC
Performance of IP video signals is impacted by the video packet size.
Long-packet benefits: less overhead, reduced packet-processing load, ability to carry a greater network load
Short-packet benefits: lost packets are less harmful, reduced latency, less need for fragmentation
Typically, video signals tend to use the longest possible packet sizes
MPEG transport streams consist of a series of 188-byte TS packets. An IP packet payload will contain 7 TS packets; the MTU size is 1,500 bytes, so 7 TS packets will fit and 8 will not.
IP Packet with MPEG2 TS Video Payload carried over Ethernet
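The arithmetic above can be checked in a few lines. This sketch assumes the slide's 46-byte overhead breaks down as Ethernet header (14) + IP header (20) + UDP header (8) + Ethernet CRC (4), which is the conventional split for this framing:

```python
TS_PACKET = 188           # one MPEG transport stream packet
TS_PER_IP = 7             # most 188-byte packets that fit under a 1,500-byte MTU
ETH_HDR, IP_HDR, UDP_HDR, ETH_CRC = 14, 20, 8, 4

payload = TS_PER_IP * TS_PACKET
frame = ETH_HDR + IP_HDR + UDP_HDR + payload + ETH_CRC

print(payload)                               # 1316
print(frame)                                 # 1362 bytes on the wire
print(IP_HDR + UDP_HDR + payload <= 1500)    # True: fits the Ethernet MTU
print(IP_HDR + UDP_HDR + 8 * TS_PACKET <= 1500)  # False: 8 TS packets do not
```

The last two lines show exactly why 7 is the magic number: an eighth TS packet would push the IP datagram past the 1,500-byte MTU.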

55 Standard vs. Jumbo Frame
Standard IP Packet vs. Jumbo IP Packet
Jumbo frames are approximately 20% more efficient.

56 Video over IP Training Transport Protocols
Transport protocols are used to control the transmission of data packets
3 major protocols are used to transport real-time video:
UDP (User Datagram Protocol): one of the simplest and earliest of the IP protocols; a connectionless transport protocol used for high-speed information flows. UDP is often used for video and other data that is very time-sensitive, and is widely used for video
TCP (Transmission Control Protocol): a well-established, connection-oriented Internet protocol widely used for data transport; provides high reliability through retransmission
RTP (Real-Time Transport Protocol): specifically developed to support real-time data transport, such as video streaming, and intended for real-time multimedia applications. RTP is not strictly a protocol like UDP or TCP; it was designed to use UDP as a packet transport mechanism
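UDP's fire-and-forget behavior is easy to see with two sockets on the loopback interface. A minimal sketch (ports, payload and addresses are illustrative; a real streamer would pace packets against the TS bit rate):

```python
import socket

# Receiver: bind to an ephemeral UDP port on loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

# Sender: one datagram carrying a dummy 1,316-byte (7 x 188) TS payload.
# No connection setup, no handshake -- sendto() just launches the packet.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(bytes(1316), addr)

data, _ = rx.recvfrom(2048)
print(len(data))   # 1316 -- delivered as one datagram, never acknowledged
tx.close(); rx.close()
```

Contrast this with TCP, where the same transfer would involve a three-way handshake and per-segment acknowledgments before and after the data moved.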

57 Video over IP Training Transport Methods
Technologies used for packet transport:
Packet over SONET/SDH (Synchronous Optical Network / Synchronous Digital Hierarchy)
Cable and DSL (Digital Subscriber Line)
Optical networks
IP over ATM (Asynchronous Transfer Mode)
MPLS/GMPLS (Multi-Protocol Label Switching and Generalized MPLS)
RPR (Resilient Packet Ring)
Wireless
Ethernet is the most popular technology for IP packet transport in local areas. It is not intended for long distances, so the above technologies were developed to transport IP packets over long distances.

58 Video over IP Training Transport Considerations
When video is being transported over an IP network, users need to consider a number of other factors that can significantly affect the viewing experience:
Multiplexing is the process of combining video streams from different sources into one IP flow. Two forms of multiplexing are commonly used today: time-division and statistical
Traffic Shaping consists of various techniques used to make video traffic easier to handle on a network. The overall goal is to make an IP flow less prone to sudden peaks in bit rate
Buffering is basically a collection of memory used to temporarily store information prior to taking some action. Buffers can have a major impact on video network performance
Firewalls are used to control the flow of information between two networks. Be aware of the constraints that firewalls impose on video services

59 IGMP – Internet Group Management Protocol
Multicasting IGMP – Internet Group Management Protocol

60 Video over IP Training Multicasting
Basic Concepts Unicasting Multicasting Joining and Leaving Multicast

61 Video over IP Training Unicast vs. Multicast
Unicast: high bandwidth required between the video source and a number of end-users; the video source makes a separate video stream for each recipient
Multicast: reduced bandwidth requirements between the video source and multiple end-users; network devices (routers) make copies of the video stream for every recipient
Unicast = one to one; Multicast = one to many
Multicasting is the process of simultaneously sending a single video signal to multiple users. Through the use of special protocols, the network is directed to make copies of the video stream for every recipient. This copying occurs in the network rather than at the video source.

62 Video over IP Training Unicasting
Unicasting is the traditional way that packets are sent from a source to a single destination
Each user who wants to view the video must make a request to the video source. The source needs to know the destination IP address of each user and must create IP packets addressed to each user.
As the number of viewers increases, the load on the network increases
Each viewer gets a custom-tailored video stream, which allows the video source to offer specialized features such as pause, rewind and fast-forward
VOD Server Unicasting

63 Video over IP Training Multicasting
Multicasting, unlike unicasting, puts the burden of creating streams for each user on the network rather than on the video source
IP packets are given special IP addresses to be recognized by the network as multicast. The IP address range is Class D: 224.0.0.0 through 239.255.255.255
IP multicast uses UDP packets
The IGMP (Internet Group Management Protocol) controls access to multicast streams: a user must request to join and leave a multicast program
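Two pieces of that machinery can be sketched in a few lines: recognizing a Class D address by its first octet, and building the `ip_mreq` structure a host passes to the `IP_ADD_MEMBERSHIP` socket option to trigger an IGMP join. The helper names are mine; the join call is shown but not executed, since it requires a live multicast-enabled network:

```python
import socket
import struct

def is_multicast(ip: str) -> bool:
    """Class D (multicast) addresses have a first octet of 224-239."""
    first = int(ip.split(".")[0])
    return 224 <= first <= 239

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Build the 8-byte ip_mreq structure: group address + local interface."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

print(is_multicast("239.1.1.1"), is_multicast("192.168.0.1"))  # True False

# Joining a group would look like (not executed here):
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
#                 make_membership_request("239.1.1.1"))
```

Setting that option is what prompts the host's IP stack to emit the IGMP Membership Report described on the next slide; dropping membership sends the Leave Group message.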

64 Video over IP Training Joining and Leaving a Multicast
IGMP Messages. Membership Query: used by multicast-enabled routers running IGMP to discover which hosts on attached networks are members of which multicast groups. Membership Report: sent by a host when it wants to join a multicast group or when responding to a Membership Query. Leave Group: sent by a host when it wants to leave a multicast group. One advantage of multicasting is that it gives users the ability to control when they join and leave a multicast; no special actions are needed. When users want to watch a multicast program, they join at whatever point the program happens to be in: a Join request is sent to the router. If the router is already receiving that video, it simply makes a copy; if it is not, the router must make a request to a device closer to the multicast source. When a user wants to leave a multicast, they inform their local router, which in turn stops sending the video. When a router no longer has any users, it must inform the network to stop sending it the video stream.
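At the host API level, the Join and Leave are triggered through socket options rather than by crafting IGMP messages directly. A minimal sketch (the group address and port below are hypothetical examples in the Class D range): setting IP_ADD_MEMBERSHIP causes the kernel to emit an IGMP Membership Report, and IP_DROP_MEMBERSHIP a Leave Group.

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"   # hypothetical program address (Class D range)
MCAST_PORT = 5500          # hypothetical port

def open_multicast_receiver(group=MCAST_GRP, port=MCAST_PORT):
    """Create a UDP socket and join the multicast group; the kernel
    sends the IGMP Membership Report on our behalf."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # ip_mreq: 4-byte group address + 4-byte local interface (INADDR_ANY)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def leave_multicast(sock, group=MCAST_GRP):
    """Drop membership; the kernel emits an IGMP Leave Group message."""
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()
```

Once joined, the receiver simply reads UDP datagrams from the socket; the router-side copying described above is invisible to the host.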

65 Video over IP Monitoring & Measurements

66 Video over IP Training Video over IP Monitoring & Measurements
Network Impairments Flow Behavior Video over IP Measurements MDI – Media Delivery Index Distributed Continuous Program (DCP) Monitoring Determining Packet Loss (MLR) on a UDP flow Delay Factor (DF) & the effects of a high Delay Factor

67 Video over IP Training Video Network Impairments
Packet Loss is when an IP packet does not arrive at its intended destination. This can be caused by any number of circumstances: network saturation, network hardware failure, queuing misconfiguration, etc. Packet Reordering occurs in a network when packets arrive in a different order than they were sent. Since MPEG has a very precisely defined structure and sequence, out-of-order packets can cause problems. Delay is going to happen in any network; there are two types: propagation delay and switching delay. Propagation delay is the amount of time a signal takes to travel from one location to another. Switching delay occurs at any point in the network where a signal needs to be switched or routed. Jitter is a measurement of variation in the arrival time of the data packets. Receivers must be built to tolerate jitter, and networks should be designed not to create a lot of jitter. The first principle of Video over IP is that the quality of the video content must be good going into the IP network: the only thing an IP network can do to the quality of the video is degrade it.
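Of these impairments, jitter is the one a receiver buffer must be sized for. There are several ways to quantify it; one simple sketch (an illustration, not the MDI definition used later in these slides) is the peak deviation of inter-arrival times from the nominal packet gap:

```python
def interarrival_jitter(arrivals, nominal_gap):
    """Peak deviation of packet inter-arrival times from the nominal
    gap, in the same units as 'arrivals' (e.g. seconds).  This is the
    worst-case timing error a receiver buffer must absorb."""
    worst = 0.0
    for a, b in zip(arrivals, arrivals[1:]):
        worst = max(worst, abs((b - a) - nominal_gap))
    return worst
```

A perfectly periodic stream yields zero; any early or late packet raises the figure, and the receiver buffer must hold at least that much play-out time in reserve.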

68 MDI = DF : MLR Media Delivery Index Video over IP Training
The Media Delivery Index (MDI) is a metric that captures the amount of Cumulative Packet Jitter and the amount of Packet Loss of an IP stream. These are the only types of impairments that can be caused by an IP transport network. MDI consists of two components: Delay Factor : Media Loss Rate Delay Factor (DF) is the size of buffer required to transport jittered packets in the network without loss divided by the rate of the media stream – it is proportional to the delay introduced in the system due to the network buffering. The buffer value is expressed in the time (milliseconds) it takes to transmit (drain) the maximum buffer size at outflow rate. Media Loss Rate (MLR) is the total Media Packets Lost (per second) See RFC 4445 for complete details on how to calculate MDI See Application Notes at: MDI = DF : MLR
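The DF component can be sketched with the virtual-buffer method described in RFC 4445: at each packet arrival the virtual buffer gains one packet's worth of media payload and drains continuously at the nominal media rate; DF is the buffer excursion over the interval divided by the media rate. This is a simplified illustration of the idea, not the complete RFC 4445 procedure.

```python
def delay_factor(arrivals, bits_per_packet, media_rate_bps):
    """Delay Factor (DF, in milliseconds) over one measurement
    interval.  'arrivals' are packet arrival times in seconds; the
    virtual buffer gains bits_per_packet at each arrival and drains
    continuously at media_rate_bps between arrivals."""
    vb = 0.0
    vb_min = vb_max = 0.0
    prev = arrivals[0]
    for t in arrivals:
        vb -= (t - prev) * media_rate_bps   # drain since last arrival
        vb_min = min(vb_min, vb)
        vb += bits_per_packet               # packet payload arrives
        vb_max = max(vb_max, vb)
        prev = t
    return (vb_max - vb_min) / media_rate_bps * 1000.0
```

For a perfectly periodic stream the excursion is exactly one packet's payload, so the minimum DF is one packet time (about 2.8 ms for 7 x 188-byte TS packets at 3.75 Mb/s); any jitter pushes DF above that floor.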

69 Video over IP Training Flow Behavior
Ethernet Inter-Packet Gap → Buffer (removes Ethernet frame and buffers MPEG) → Decoder → Monitor, TV, etc. The MPEG rate is determined by the stream itself, for example 4.5 Mb/s. Each Ethernet packet contains up to 7 MPEG packets. The payload is extracted, buffered, and clocked out. To transmit video over an IP network, we packetize what would be a constant video flow into IP packets. To do this successfully, the rate at which the payload is consumed by the decoder must balance the rate at which the encoder places the video into IP packets and puts it on the network. For the encoder to provide video at the appropriate rate, it must control the inter-packet gap between packets. That is, if we are trying to deliver 4.5 Mb/s of video content to a decoder, we must ensure we send the number of packets that equals that amount of payload, and that we deliver them periodically. Each IP packet carries 7 MPEG packets and is delivered to a buffer, which in turn is consumed by the decoder and played out to the TV or monitor. Each 188-byte MPEG-2 TS packet is encapsulated within an IP/Ethernet frame. The rate of IP delivery is the same as the drain rate of the video (MPEG-2 TS): the packet arrival rate of each IP packet is exactly matched to the rate used to clock the contents of one IP packet out of the receiver buffer.
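The balance described above is simple arithmetic. A sketch that derives the IP packet rate and inter-packet gap an encoder must hold for a given MPEG rate, assuming the full 7 x 188-byte payload in every packet:

```python
TS_PACKET = 188   # MPEG-2 TS packet size, bytes
TS_PER_IP = 7     # TS packets carried per Ethernet/IP frame

def ip_packet_rate(mpeg_bps):
    """Packets per second and inter-packet gap (ms) the encoder must
    sustain so IP delivery rate equals the MPEG drain rate."""
    payload_bits = TS_PER_IP * TS_PACKET * 8   # 10,528 bits per packet
    pps = mpeg_bps / payload_bits
    gap_ms = 1000.0 / pps
    return pps, gap_ms
```

For the 4.5 Mb/s example this works out to roughly 427 packets per second, one packet every ~2.34 ms; stretching or compressing that gap is exactly the jitter the downstream buffer has to absorb.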

70 Video over IP Training Simple IP Switch (example)
Basic MDI Theory. The next topic is the Delay Factor. As mentioned earlier, the goal is to provide video into the network in IP packets at the appropriate rate to be consumed by the decoder. For video it is very important to maintain a periodic flow over the network for a particular stream; the animation shows this effect. Effectively this is a balancing act: the decoder is expected to consume data at a fixed rate, and it is the responsibility of the encoder to feed the information into the network periodically.

71 Video over IP Training Flow Behavior: IP Flow with Jitter & Under Run Rate
Under Run: the average Ethernet inter-packet gap puts the delivery rate below the MPEG video rate, so the buffer runs empty. 1) Ethernet Inter-Packet Gap → Buffer (removes Ethernet frame and buffers MPEG) → Decoder → Monitor, TV, etc. 2) The decoder buffer starts to drain at the MPEG rate (for example, delivery at 3.50 Mbps against an MPEG rate of 3.75 Mbps). 3) The buffer is empty, waiting for more IP packets: nothing to decode; poor video.
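The under-run mechanism can be illustrated with a toy tick-based buffer model. The rates are the ones from the slide; the 1 ms tick and zero prefill are assumptions of this sketch, not properties of any real decoder.

```python
def simulate_buffer(delivery_bps, drain_bps, seconds, prefill_bits=0):
    """Step a decoder buffer in 1 ms ticks: bits arrive at the network
    delivery rate and are consumed at the MPEG rate.  Returns the
    number of ticks the buffer sat empty (nothing to decode)."""
    buf = float(prefill_bits)
    empty_ticks = 0
    for _ in range(int(seconds * 1000)):
        buf += delivery_bps / 1000.0   # arrivals this tick
        buf -= drain_bps / 1000.0      # decoder consumption this tick
        if buf < 0:
            buf = 0.0                  # nothing left to decode
            empty_ticks += 1
    return empty_ticks
```

Delivering 3.50 Mbps against a 3.75 Mbps drain starves the buffer almost immediately, while matched rates never under-run; the same model run with delivery above the drain rate (the next slide's over-run case) would instead grow the buffer without bound until a real device dropped packets.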

72 Video over IP Training Flow Behavior: IP Flow with Jitter & Over Run Rate
Over Run: the average Ethernet inter-packet gap puts the delivery rate above what the buffer can handle, so the buffer drops packets. 1) Ethernet Inter-Packet Gap → Buffer (removes Ethernet frame and buffers MPEG) → Decoder → Monitor, TV, etc. 2) With a shorter Ethernet inter-packet gap (for example, delivery at 4.90 Mbps against an MPEG rate of 3.75 Mbps), the buffer starts to fill up. 3) The buffer overflows and Ethernet packets are dropped at the network device: impaired video.

73 Video over IP Training Simple IP Switch with High MDI

74 Video over IP Training Flow Behavior: IP Flow with IP Packet Loss
IP Packet Loss: the Ethernet inter-packet gap is enlarged by lost IP packets, causing bursty IP video delivery (jitter). 1) Ethernet Inter-Packet Gap → Buffer (removes Ethernet frame and buffers MPEG) → Decoder → Monitor, TV, etc. 2) With delivery and MPEG rates both at 3.75 Mbps, the buffer starts to fill. 3) When Ethernet packets are dropped in the network, the buffer can under-run: loss adds jitter, and the video is impaired.

75 Video over IP Training Program Clock Reference (PCR)
PCR Jitter vs. IP Jitter PCR Jitter (recovered clock inaccuracy) Serial transport media use a common clock between transmitter and receiver and can guarantee high accuracy of packet arrival times Jitter is classified into two categories: PCR accuracy errors (PCR_AC) and network jitter. These two are then combined into PCR overall jitter (PCR_OJ) Ethernet / IP Jitter (variation in expected packet arrival times) No clock reference for transmission of packets Because transport can include multiple devices (all with different buffer cues), there is no guarantee that packets transmitted with a given inter-packet spacing will arrive with the same spacing IP jitter is categorized and measured by the Media Delivery Index (MDI) Delay Factor (DF) PCR Jitter = clock inaccuracy IP Jitter = variation in expected packet arrival time
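PCR-based jitter measurement starts with pulling the PCR out of the transport packet. A sketch of the field layout under the MPEG-2 TS adaptation-field format: the PCR is a 33-bit base counting at 90 kHz plus a 9-bit extension, combined into a 27 MHz count (PCR = base x 300 + extension).

```python
def extract_pcr(ts_packet):
    """Return the 27 MHz PCR from a 188-byte TS packet, or None if the
    packet carries no PCR."""
    if len(ts_packet) != 188 or ts_packet[0] != 0x47:
        return None                               # bad sync byte
    afc = (ts_packet[3] >> 4) & 0x3               # adaptation field control
    if afc not in (2, 3) or ts_packet[4] == 0:
        return None                               # no adaptation field
    if not ts_packet[5] & 0x10:
        return None                               # PCR_flag clear
    b = ts_packet[6:12]
    base = (b[0] << 25) | (b[1] << 17) | (b[2] << 9) | (b[3] << 1) | (b[4] >> 7)
    ext = ((b[4] & 0x01) << 8) | b[5]
    return base * 300 + ext                       # 27 MHz ticks
```

Comparing the PCR values in a stream against a local clock gives the PCR accuracy and overall jitter figures (PCR_AC, PCR_OJ) named above, entirely independently of IP arrival-time jitter.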

76 Video over IP Training Constant Bit Rate (CBR)
Constant Bit Rate example An encoder ideally transmits IP packets at the rate matching the MPEG encoded bit rate as shown here. PCR time stamp updates occur every 40 ms in a stream continuously informing a decoder of the MPEG encoded bit rate. Constant Bit Rate (CBR) encoding shown here. “Stuffing” bits maintain a constant bit rate even though picture complexity is dynamic.
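The stuffing mentioned here travels as null packets on the reserved PID 0x1FFF. A small sketch that measures how much of a CBR mux is padding, by reading the 13-bit PID from each TS header:

```python
NULL_PID = 0x1FFF   # MPEG-2 null (stuffing) packets carry this PID

def pid_of(ts_packet):
    """13-bit PID from bytes 1-2 of a 188-byte TS packet."""
    return ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]

def stuffing_ratio(packets):
    """Fraction of TS packets that are null stuffing; in a CBR stream
    this padding is what holds the mux at its constant rate while
    picture complexity varies."""
    nulls = sum(1 for p in packets if pid_of(p) == NULL_PID)
    return nulls / len(packets)
```

In a CBR stream this ratio rises on simple scenes and falls on complex ones; in a VBR stream (next slide) it is essentially zero, because there is no stuffing PID.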

77 Variable Bit Rate (VBR) example
Video over IP Training Variable Bit Rate (VBR). Variable Bit Rate (VBR) example: this example has a high DF. The instantaneous, per-packet IP bit rate is bursty and does not track the dynamic encoded PCR bit rate. The PCR bit rate varies dynamically with picture complexity under VBR, since there is no stuffing PID. The instantaneous peak PCR rate may be peak-limited ("capped") by configuration.

78 Video over IP Training Delay Factor (DF)
DF continuously tracks the cumulative difference between MPEG bit rate and IP bit rate capturing the stream’s burstiness If an IP stream is bursty, its instantaneous bit rate may significantly stress network transport device queues.

79 Video over IP Training IneoQuest Software Application: IQ MediaAnalyzer Pro

80 Video over IP Training MLR: Determining Loss on UDP Flows
[Diagram: continuity counters on PID 481 and PID 482; MLT = 0 → MLT = 6.] How do we determine loss on UDP flows? We have to do a little more work, so we take advantage of the payload itself. As stated earlier, there are 7 MPEG packets within the payload of an IP packet: a combination of video, audio, stuffing, and control information. In a constant-bit-rate IP flow, the difference between the IP bit rate and the video bit rate is made up by the stuffing component. For example, if the audio is running at 400 kb/s and the video at 4 Mb/s, we will see ten times as many video packets as audio packets: the number of packets of each type placed inside the IP flow is directly correlated to the bit rate of the information consumed by the decoder. To monitor for loss, we use the PIDs (video and audio elements) and take advantage of the fact that each element carries a continuity counter: each sequential packet of a PID carries a sequence number ranging from 0 to 15, and by monitoring these continuity counters we can determine loss. In this animation we are monitoring not IP loss but the media loss itself. NOTE: the TSX creates a visual correlation between the IP and MPEG transport layers.
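Counting loss from the 4-bit continuity counter can be sketched as follows. This is a simplification: it ignores the duplicate-packet and adaptation-field-only exceptions that the full MPEG-2 continuity rules allow.

```python
def count_cc_errors(cc_values):
    """Count missing TS packets on one PID by watching the 4-bit
    continuity counter, which increments modulo 16 on each packet
    of that PID."""
    lost = 0
    prev = None
    for cc in cc_values:
        if prev is not None:
            expected = (prev + 1) % 16
            if cc != expected:
                lost += (cc - expected) % 16   # how far the counter jumped
        prev = cc
    return lost
```

Summing these gaps across all media PIDs each second gives the MLR; note a burst longer than 16 packets on one PID wraps the counter and is under-counted, one reason RTP sequence numbers are preferred when available.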

81 Video over IP Training Video over IP Measurements
Properties that must be Measured and Monitored simultaneously to ensure Quality of Video over IP. IP packet arrival times where jitter causes delay (Under Runs) IP packet arrival times where jitter causes bursts (Over Runs) IP packet bit rate average drift/deviation from the Video bit rate IP packet loss Video packet loss / CC errors

82 Video over IP Training IneoQuest Software Application: IQ MediaAnalyzer Pro
MDI PAYLOAD MDI. If we are monitoring two points in the network and visually monitoring with a TV, the loss would appear as pixelization or some other degradation of the video quality delivered at the endpoint of the network. As the third principle states, to monitor Video over IP quality correctly at any measurement node, all live flows must be monitored continuously for errors in quality and/or delivery. To look at them simultaneously, each IP packet must be characterized into a particular flow, and the payload information further monitored for loss and jitter. Encoder, HeadEnd, Edge; Dropped IP Packets.

83 Supported Technologies Encapsulation Supported
Video over IP Training Monitored Flow Types
Supported Technologies: MPEG-2 Transport Stream HD/SD; H.264 (MPEG-4 Part 10); MPEG-4 Part 2; ISMA; VC1.01 / VC1.1
Encapsulation Supported: Eth2/IP/UDP; Eth2/IP/UDP/RTP; Eth2/VLAN/IP/UDP; Eth2/VLAN/IP/UDP/RTP; Eth2/PPPoE/IP/UDP; Eth2/PPPoE/IP/UDP/RTP; Eth2/PPPoE/VLAN/IP/UDP/RTP
Flow Detection: VOD; Broadcast; FEC
Flow Type: Standard / High Definition; SPTS (Single Program Transport Stream); MPTS (Multi-Program Transport Stream); Bitrate

84 Video over IP Training Alarms & Warnings
WARNING — alarms: MDI-DF: Delay Factor (max value exceeded); NTWK-UTL: Network Utilization (max value exceeded); IP Flow Media Bit Rate Deviation (%). Possible causes: over-subscription, encoder behavior, bursty traffic, VOD server configuration.
LOSS — alarms: MDI-MLR: Media Loss Rate (max value exceeded); RTP-LDE: Loss Distance Error (min value exceeded); RTP-LPE: Loss Period Error (max value exceeded); MLT-15: 15 min. Media Loss Total (max value exceeded); MLT-24: 24 hr. Media Loss Total (max value exceeded); MLS-15: 15 min. Media Loss Seconds Total (max value exceeded); MLS-24: 24 hr. Media Loss Seconds Total (max value exceeded); RTP-SE: RTP Total Sequence Errors (max value exceeded). Possible causes: noise, bad connectors, pinched cables, QoS configuration, equipment configuration, transient power.
VIDO-LOS: Video Flow Outage. Possible causes: faulty equipment, loss of power, natural outage.
IGMP — IGMPv2 / IGMPv3 support; Join & Leave (min/max/average); IGMP zap time; AutoScan / Manual. Possible causes: faulty equipment, configuration, over-subscription.
PAYLOAD — alarms: TS-PID: Transport Stream PID Bit Rate (lower limit exceeded); TS-SYNC: Transport Stream Sync Byte Error; V-TSB: Video TS PCR Bit Rate (lower limit exceeded); IP-SBRMX: IP Stream Bit Rate (upper limit exceeded); IP-SBRMN: IP Stream Bit Rate (lower limit exceeded). Possible causes: encoder issues (configuration, faulty equipment), loss of video/voice feeds.

85 IneoQuest Monitoring and Troubleshooting Solutions

86 Video over IP Training How IP Video is Challenging Service Providers
The biggest problem facing IP Video service providers is unbounded operational expense (OPEX): the inability to sustain quality across a distributed service area, no matter how much is spent, makes for a losing business model. OPEX Drivers: Increased call volume ($5.00-$15.00 per call). Increased truck rolls ($ plus per roll). Chronic problems: problems "come and go". Lingering problems: no definitive problem resolution; "voodoo" troubleshooting. No visibility: the customer becomes the monitoring and analysis system. Lack of education: new technology presents new problems. Summary: IP Video distribution presents a new set of problems, unique issues that traditional monitoring systems are ill-equipped to handle or detect. IP Video is very different from voice and data and requires an evolved, multi-dimensional approach to quality and service assurance.

87 Video over IP Training Video Across Multiple Systems (end-to-end program flow)
1000s of Video Flows Encoder Headend Network Video Servers Video Headend IP Transport Core Network Hub / VHO Edge End User Subscriber Last Mile Network Premise Last Mile Networks Decoder

88 Video over IP Training Complexities of IP Video
1000s of Video Flows Headend Video Core Edge Last Mile Premise End User Encoder Hub / VHO Decoder Network Servers Network Network Network Network Headend IP Transport Last Mile Technology Subscriber Results in increased call volume ($) and truck rolls ($). No matter where the issue arises in any subsystem, the effect is seen at the end of the system, at the subscriber. Operational dollars get spent and the problem is often not found or fixed; the system never improves.

89 Video over IP Training Coverage Areas
MPEG Monitoring Subsystem DSL/RF Monitoring Subsystem Network Monitoring Subsystem Encoder Headend Network Video Servers Core Hub / VHO Edge Last Mile Premise End User Decoder Coverage Area Traditional MPEG Monitoring System Coverage Traditional Core Network Monitoring System Coverage Traditional DSL/RF Component Monitoring System Coverage

90 Video over IP Training Traditional Monitoring – Blind to Video Issues
Single Video Program Problem Origination 1000s of Video Flows Headend Video Core Edge Last Mile Premise End User Encoder Hub / VHO Decoder Network Servers Network Network Network Network Video Headend IP Transport Last Mile Technology Subscriber MPEG Monitoring Subsystem Network Monitoring Subsystem DSL/RF Monitoring Subsystem Systems like MPEG analyzers are blind to problems that originate downstream, and other systems designed for data-transport monitoring do not have the visibility to recognize that an IP Video flow is bad entering, or within, their subsystem. Each system reports good ("System Reports Good"), so the first time anyone realizes there is an issue is at the customer's TV: the customer calls and trucks roll.

91 Video over IP Training Multi-Dimensional: All Flows, All Locations, All the Time
1000s of Video Flows Encoder Headend Network Video Servers Core Hub / VHO Edge Last Mile Premise End User Decoder Video Headend IP Transport Last Mile Network Coverage Area IneoQuest IQPinPoint Multi-Dimensional Video Quality Management System Coverage With Analysis, Monitoring, and Remote Troubleshooting all in one

92 Video over IP Training Multi-Dimensional Management: Detect, Isolate, Resolve
Reports Good Video Reports Bad Video 1000s of Video Flows Headend Video Core Edge Last Mile Premise End User Encoder Hub / VHO Decoder Network Servers Network Network Network Network Video Headend IP Transport Last Mile Network Subscriber Single Video Program Problem Origination Using Multi-Dimensional Video Quality Management, Operations can now detect a video issue, open a trouble ticket against the specific subsystem, and use remote troubleshooting to resolve it. If the customer calls, there is no need to roll a truck, since the issue is not at the premise.

93 Video over IP Solutions IneoQuest Hardware Platform: Singulus G1-T
Generate network traffic up to 2 GbE Monitor & Analyze IP Video up to 1 GbE 80 MB Capture & Record Packet Morph (add Impairments) 1 GbE Copper & Fiber Connections 10/100 Management port ASI Output port 256 IP Flows

94 Video over IP Solutions IneoQuest Hardware Platform: Singulus Lite “Cricket”
Interactive Subscriber “Visual Impairment” Feedback In-band IP Video/IPTV control and stats Subscriber Behavior Tracking Emulates an end point Monitor & Analyze IP Video up to 10 IP Flows 80 MB Capture & Record 10 / 100 MbE Copper Connections USB Management port Available Versions: Ethernet QAM ASI

95 Traffic Generation Software Application
Video over IP Solutions IneoQuest Software Application: IQMediaStimulus Traffic Generation Software Application Used with Geminus, Singulus G10, Singulus G1-T Generate Video, Voice, or Data flows TS files, LIBpcap files (TS with encapsulation), Data files, voice files (.au, .wav, etc) Live Stream Replication Can cause Impairments Drop IP Packets, add Jitter, change IP Bitrate, change PCR rate, drop PIDs Supports Multiple STIM targets Test Set-ups Ability to Auto Run Tests

96 Video over IP Solutions IneoQuest Software Application: IQMediaAnalyzer Pro
Monitoring & Analysis Software Application New Dashboard Impairments window Enhanced Trigger & Capture Capabilities Commercial Insertion Support Microsoft IPTV support Software Included with Hardware

97 Video over IP Solutions IneoQuest Software Application: IQTsX Pro
Post Analysis Software Application Search and Explore the capture Display the packet data Decode media packet headers IP & Media Packet Explorer Packet arrival time reports PCR comparison reports & charts PID list reports GOP Structure reports Individual Channel analysis on MPTS CC error detection Packet Modification 3rd party tool support Play the capture with VLC Media Player View Packets with Ethereal Microsoft IPTV support Licensed Software MPEG Deep Packet Analysis

98 iVMS Video over IP Solutions IneoQuest End-to-End Solution Overview
Beginning of Last Mile End of Last Mile (Subscriber) Video Headend IP Transport End-to-End Deep MPEG Analysis, IP Video Monitoring, & Remote Troubleshooting Simultaneous IP Video Monitoring & Remote Troubleshooting Last Mile Technologies IP, QAM, HPNA, ADSL2+, VDSL, ASI, Wireless Last Mile Technologies IP, QAM, HPNA, ADSL2+, VDSL, ASI, Wireless

99 Video over IP Solutions IneoQuest iVMS IP Video Management System

100 Video over IP Solutions iVMS – IQ Map View
Google Maps integration. Visually see where the probes are in your network and what the status is.

101 Video over IP Solutions iVMS – IQ Topology View

102 Video over IP Solutions iVMS – Real-Time Monitoring
Real-time views where one click shows what the problems are and which flows they are affecting. IQTV enables real-time content confirmation.

103 Select the metric to monitor Program Monitoring Across Multiple Points
Video over IP Solutions iVMS – Real-Time Monitoring Select the metric to monitor Real-time views show status of programs across multiple points in the network. Compare quality. Program status shows alarms regardless of probe. Program Monitoring Across Multiple Points

104 Video over IP Solutions iVMS – Reporting & Trending
Show loss distribution across all probes. Show loss over time.

105 Video over IP Solutions iVMS – Reporting & Trending
Look at historical trending in 15-minute increments. Number of flows vs. time: how many good, how many bad. Thumbnails for each 15-minute interval.

106 Video over IP Solutions iVMS – Reporting & Trending (Drill Down to PID level)
Daily View shows all alarms in 15 minute intervals. Interactivity enables zooming to flows for alarm details all the way down to the PID level.

107 Video over IP Solutions iVMS – Reporting & Trending (PID Details)
Reports currently show data for 24 hour periods. Most reports go all the way down to the PID details, including bitrate and loss.

108 Video over IP Solutions iVMS – Daily Reports (IQ Watch Services)
Daily Reports generate a PDF report for multiple probes across multiple days. Reports contain flow details and VOD information.

109 Video over IP Solutions iVMS – Configuration & Security
Display Active Users. User configuration allows options and menus to be enabled or disabled per user. Groups control which probes each user has access to. Define Menus Per-User.

110 Video over IP Solutions iVMS – Configuration
Use a reference Probe to copy configurations to a group of Probes simultaneously.

111 Video over IP Solutions iVMS – Configuration (Firmware Downloads)
Multiple Probes upgraded simultaneously.

112 Video over IP Solutions iVMS – Email Notifications
Configure the system to notify via e-mail, with either per-alarm or summary information. E-mails can be throttled on a 15-minute and/or a 24-hour basis.

113 IQFastLink Embedded URL in Message
Video over IP Solutions iVMS – Northbound to NMS/OSS IQFastLink Embedded URL in Message Chameleon is our open-source reference configuration platform; integrators can use Chameleon to understand how to integrate iVMS with an NMS/OSS. Each alarm message contains a URL link for IQFastLink, enabling direct linking to IQ visualizations wrapped in custom skins.

114 Video over IP Solutions iVMS – Customized Skins to NMS/OSS
IQFastLink allows NMS/OSS systems to connect directly to iVMS visualizations wrapped in seamless-looking skins. Skins can easily be created for different systems to maintain a consistent look and feel from the operator's point of view.

115 Resources for Video over IP
References

116 Video over IP Training References & Resources
Video over IP: A Practical Guide to Technology and Applications, Wes Simpson, Focal Press
IPTV Crash Course, Joseph Weber and Tom Newberry, McGraw-Hill
TCP/IP Illustrated, Volume 1: The Protocols, W. Richard Stevens, Addison-Wesley
Internetworking with TCP/IP, Volume 1: Principles, Protocols, and Architecture, Douglas E. Comer, Prentice-Hall
A Guide to MPEG Fundamentals and Protocol Analysis, Tektronix
RTP: A Transport Protocol for Real-Time Applications, RFC 3550
Requirements for Internet Hosts: Communication Layers, RFC 1122
Internet Protocol, RFC 791
Internet Control Message Protocol (ICMP), RFC 792
Internet Group Management Protocol, Version 2 (IGMP), RFC 2236
Host Extensions for IP Multicasting, RFC 1112
Media Delivery Index (MDI), RFC 4445

117 Video over IP Training Contact Information CORPORATE HEADQUARTERS IneoQuest Technologies, Inc. 170 Forbes Boulevard Mansfield, MA 02048 USA TEL: (508) FAX: (508) IQ PROFESSIONAL SERVICES IneoQuest Technologies, Inc. 170 Forbes Boulevard Mansfield, MA 02048 USA TEL: (508) IQ TECHNICAL SUPPORT IneoQuest Technologies, Inc. 170 Forbes Boulevard Mansfield, MA 02048 USA TEL: (866) Copyright © 2006 IneoQuest Technologies, Inc. All rights reserved. Printed in the USA. IneoQuest, IQClearView, IQWatch, Singulus G1-T, IQMediaMonitor, and the IneoQuest logo are trademarks of IneoQuest Technologies, Inc. in the U.S. and certain other countries. All other trademarks mentioned in this document are the property of their respective owners. The use of the word partner does not imply a partnership relationship between IneoQuest and any of its resellers.

