A Graduate Course on Multimedia Technology
3. Multimedia Communication
© Wolfgang Effelsberg

Media Scaling and Media Filtering

Definition of Media Scaling
Adaptation of the data rate of a media stream to the capacity of the network.

Requirements for scaling algorithms
- Quick and precise adaptation to the capacity of the network
- Minimum possible visual/audible effect
- Robustness in case of packet loss
Resource Reservation vs. Stream Adaptation

Reservation of resources
The application requests the necessary QoS (and thus network resources) in advance. Those resources will be guaranteed for the duration of the stream.

Adaptation
The application adapts dynamically to the resources/capacity currently available from the network. This works for existing "best effort" networks, in particular for the Internet with IP version 4. Adaptive applications must implement audio and/or video codecs that allow scaling at run-time.

Reservation and adaptation can be combined, in particular for probabilistic QoS guarantees: a reasonable amount of resources is reserved in advance, and in short phases of exceedingly high bit rates the application scales down the stream.
Messages Exchanged for Scaling

The source starts to send at some initial data rate. The network or the destination sends feedback messages indicating their current load. The source adapts the data stream by dynamically changing the parameters of the codec.

[Figure: the source sends the data stream to the sink; "more/less data" feedback flows back from the sink to the source.]
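The feedback loop above can be sketched in a few lines. This is a minimal illustration, not part of any standard: the function name, the multiplicative back-off/probe factors, and the rate bounds are all assumptions chosen for the example.

```python
# Sketch of feedback-driven media scaling: the source adjusts its target
# bit rate in response to "more"/"less" feedback messages from the network
# or the receiver. All constants here are illustrative assumptions.

def adapt_rate(current_kbps, feedback, min_kbps=64, max_kbps=4000):
    """Return the new target bit rate after one feedback message."""
    if feedback == "less":        # congestion reported: back off quickly
        new_rate = current_kbps * 0.5
    elif feedback == "more":      # spare capacity reported: probe upward slowly
        new_rate = current_kbps * 1.1
    else:                         # no news: keep the current rate
        new_rate = current_kbps
    return max(min_kbps, min(max_kbps, new_rate))
```

Backing off faster than probing upward mirrors the conservative reaction to congestion that TCP-style control also uses; the codec would then be reconfigured (e.g., a larger quantization step) to meet the new target rate.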
Scaling Example

While an MPEG video is being played from a video server to a user, the network signals congestion. As a consequence, the video server changes the quantization table of the MPEG codec (it increases the quantization step size) and thus generates VBR video at a lower bit rate (and a lower image quality).

Note the difference from the reaction you would observe on a network without adaptive (scaling) applications: the congestion control algorithm of the network (e.g., the one contained in TCP) would reduce the packet rate, leading to a lower video frame rate at the receiver. The network has no alternative; only the source (the application) can handle the overload situation in a more intelligent way!
Media Layering and Filtering

Layering and filtering are especially relevant in multicast trees: different receivers require different bit rates and stream qualities. Only those parts of the stream that a receiver can really use are transported to it; irrelevant parts of the stream are removed by the inner nodes of the network.

[Figure: a multicast tree with receivers A, B, C, D, E; layer 0 is the base layer, layer 1 an enhancement layer for medium quality, layer 2 an enhancement layer for high quality.]
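A layer filter in an inner node can be sketched as follows. This is a hypothetical illustration assuming that each packet carries a layer identifier and that each outgoing link of the node is configured with the highest layer its subtree can use; the names `filter_stream` and `forward` are inventions for this example.

```python
# Sketch of layer filtering in an inner node of a multicast tree.
# Packets above a subtree's maximum usable layer are dropped, so that
# each receiver is sent only the layers it can actually use.

def filter_stream(packets, max_layer):
    """Drop all packets above max_layer; forward the rest unchanged."""
    return [p for p in packets if p["layer"] <= max_layer]

def forward(packets, children):
    """children maps a child link to the highest layer its subtree needs."""
    return {child: filter_stream(packets, ml) for child, ml in children.items()}
```

A child subtree configured for layer 0 receives only the base layer, while a subtree configured for layer 2 receives the full stream.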
Layered Video Encoding

The two most important layering techniques for video are temporal scaling and spatial scaling.

Temporal scaling
Assign frames to layers in a round-robin fashion. The more layers the receiver gets, the higher the frame rate.

Spatial scaling
Assign data values to layers such that for each frame, a low-quality version can be reproduced from layer 0; adding higher layers adds "visible quality" to the frame.

There are other scaling/layering schemes, e.g., spatio-temporal techniques, which we will not discuss here.
Temporal Scaling for Video

The base layer (i.e., layer 0) transmits every third frame, as do the first and the second enhancement layers. Together, the full frame rate is transmitted.

[Figure: the frame sequence is disassembled round-robin into layers 0, 1, and 2, transmitted, and reassembled into the full sequence at the receiver.]
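The round-robin disassembly and reassembly above can be sketched directly. This is a toy model under the assumption that frames are independently decodable; the function names are hypothetical.

```python
# Sketch of temporal scaling: frame i is assigned to layer i % num_layers.
# A receiver that gets only the base layer plays at 1/num_layers of the
# full frame rate; adding layers restores the missing frames.

def split_temporal_layers(frames, num_layers=3):
    """Disassemble the frame sequence round-robin into num_layers layers."""
    return [frames[k::num_layers] for k in range(num_layers)]

def merge_temporal_layers(layers):
    """Reassemble the received layers into display order by interleaving."""
    merged = []
    for i in range(max(len(layer) for layer in layers)):
        for layer in layers:
            if i < len(layer):
                merged.append(layer[i])
    return merged
```

With all three layers, merging reproduces the original frame order; with the base layer alone, one frame in three survives.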
Spatial Scaling for Video (1)

Pyramid Encoding
The encoder first down-samples the image, compresses it according to the chosen encoding technique, and then transmits the result in the base layer. At the same time, it decompresses the image, up-samples it, and gets a coarse version of the original. To compensate for the difference, the encoder subtracts the resulting copy from the original and sends the encoded difference in the enhancement layer. The MPEG-2 standard supports pyramid encoding.
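The encode/decode steps of pyramid coding can be shown on a one-dimensional signal. This sketch makes simplifying assumptions: the "image" is a 1-D list, down-sampling is pairwise averaging, up-sampling is sample repetition, and the base layer is left uncompressed (a real encoder would compress it, and the residual would absorb the compression error as well).

```python
# Sketch of pyramid encoding on a 1-D signal (assumed even length).

def downsample(x):
    """Halve the resolution by averaging sample pairs."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):
    """Double the resolution by repeating each sample."""
    return [v for v in x for _ in range(2)]

def pyramid_encode(signal):
    base = downsample(signal)          # base layer
    coarse = upsample(base)            # the decoder's base-layer reconstruction
    residual = [s - c for s, c in zip(signal, coarse)]  # enhancement layer
    return base, residual

def pyramid_decode(base, residual=None):
    coarse = upsample(base)
    if residual is None:
        return coarse                  # base-layer-only quality
    return [c + r for c, r in zip(coarse, residual)]
```

Decoding the base layer alone yields the coarse version; adding the residual reproduces the original exactly.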
Spatial Scaling for Video (2)

Layered Frequencies
Each 8x8 block is DCT-transformed and quantized. After quantization, the coefficients are stored in different layers, e.g., the first three coefficients in the base layer, the next three in the first enhancement layer, the next four in the second enhancement layer, etc.
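The coefficient-splitting step can be sketched as a simple partition of the zig-zag-ordered coefficient list. The layer sizes (3, 3, 4) follow the example in the text; the catch-all final layer and the function name are assumptions of this sketch.

```python
# Sketch of layered DCT frequencies: quantized coefficients of an 8x8
# block, already in zig-zag order, are split into consecutive layers.
# Low-frequency coefficients land in the base layer.

def layer_coefficients(zigzag_coeffs, layer_sizes=(3, 3, 4)):
    """Split coefficients into layers; the remainder forms a final layer."""
    layers, pos = [], 0
    for size in layer_sizes:
        layers.append(zigzag_coeffs[pos:pos + size])
        pos += size
    layers.append(zigzag_coeffs[pos:])  # remaining high-frequency coefficients
    return layers
```

A receiver that drops the higher layers simply treats the missing coefficients as zero, which removes high-frequency detail from the block.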
Spatial Scaling for Video (3)

Layered Quantization
Each 8x8 block is DCT-transformed. The bits of the DCT coefficients are distributed over several layers. For example, the base layer may contain the first two bits of all coefficients (corresponding to a very coarse quantization), the first enhancement layer the next two bits of all coefficients, etc. Receiving more layers is then equivalent to choosing a finer quantization.
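The bit-plane distribution can be sketched with bit masks. This toy version assumes non-negative 8-bit coefficient magnitudes and two bits per layer (four layers in total); sign handling and entropy coding are omitted.

```python
# Sketch of layered quantization: the bits of each coefficient are spread
# over layers, most significant bits first. Receiving more layers is
# equivalent to a finer quantization of every coefficient.

BITS_PER_LAYER = 2
TOTAL_BITS = 8  # assumes coefficients fit in 8 bits, non-negative

def bitplane_layers(coeffs):
    """Split each coefficient into TOTAL_BITS // BITS_PER_LAYER bit groups."""
    layers = []
    for top in range(TOTAL_BITS, 0, -BITS_PER_LAYER):
        shift = top - BITS_PER_LAYER
        mask = (1 << top) - (1 << shift)        # selects this layer's two bits
        layers.append([(c & mask) >> shift for c in coeffs])
    return layers

def reconstruct(layers):
    """Rebuild coefficients from however many layers were received."""
    coeffs = [0] * len(layers[0])
    for k, layer in enumerate(layers):
        shift = TOTAL_BITS - (k + 1) * BITS_PER_LAYER
        for i, bits in enumerate(layer):
            coeffs[i] |= bits << shift
    return coeffs
```

Reconstructing from the base layer alone keeps only the top two bits of every coefficient (a very coarse quantization); all four layers restore the exact values.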
Spatial Scaling for Video (4)

Layered Encoding of Wavelet-Transformed Video
[Figure: the original frame, the frame after one wavelet iteration, and the frame after five iterations.]
Spatial Scaling for Video (5)

Layered Wavelet Encoding
The approximation (i.e., the low-pass filtered part) of a wavelet-transformed frame is stored in the base layer. The coefficients of the details (i.e., the high-pass filtered part) are stored in enhancement layers in decreasing order of their values.

[Figure: base layer l0 and enhancement layers l1, l2, l3.]
Examples of Spatial Layering

Sample frames of a video sequence (a cartoon). Note that the images are shown at different quality levels in order to illustrate the characteristic artifacts.

[Figure: four sample frames — pyramid encoding, layered DCT frequencies, layered DCT quantization, and layered wavelet encoding.]
Conclusions on Media Scaling and Layering

Scaling techniques for digital video were developed in recent years and work well. Layering (i.e., the decomposition of the video into separate layers) is not yet available in most commercial video codecs. The integration of layer filters into internal network nodes of multicast trees is still a research issue.