Introduction to Broadband Multimedia Network


1 Introduction to Broadband Multimedia Network
Introduction to Multimedia

2 Introduction to Multimedia
Scope of Broadband Multimedia Description Why multimedia systems? Classification of Media Multimedia Systems Data Stream Characteristics Introduction to Multimedia

3 BROADBAND Broadband Signifies : High Bandwidth
High Access speeds, 256 Kbps to 100 Mbps Huge Core bandwidth pipes, STM-16 (SDH), GigE (MEN) and 2.5 GigE (DWDM / CWDM) Multiple Converged Services High Speed Data Voice Video

4 Multiple Definitions of Broadband: the capability of supporting, in both the provider-to-consumer (downstream) and the consumer-to-provider (upstream) directions, a speed in excess of 200 kilobits per second (kbps) in the last mile (FCC, 1999 Telecommunications Act Deployment Report). "High-speed": services with over 200 kbps capability in at least one direction; the term high-speed services includes advanced telecommunications capability. The International Telecommunication Union (ITU) defines broadband service as 1.5 Mbps or faster.

5 Speed Equals Time: how long it takes to download the DVD movie "The Matrix" (7.8 GB) at different access speeds.
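As a rough worked example of speed versus time (a sketch: the access-speed tiers below are illustrative, not values from the slide, and protocol overhead is ignored):

```python
# Rough download-time estimate for a 7.8 GB file at several access speeds.
# The speed tiers are illustrative examples, not values from the slide.

FILE_BYTES = 7.8 * 10**9          # 7.8 GB (decimal gigabytes)

def download_time_seconds(size_bytes: float, rate_bps: float) -> float:
    """Ideal transfer time: payload bits divided by line rate (no protocol overhead)."""
    return size_bytes * 8 / rate_bps

for label, rate_bps in [("56 kbps dial-up", 56e3),
                        ("256 kbps DSL",    256e3),
                        ("10 Mbps cable",   10e6),
                        ("100 Mbps FTTH",   100e6)]:
    hours = download_time_seconds(FILE_BYTES, rate_bps) / 3600
    print(f"{label:>16}: {hours:8.2f} hours")
```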

6 Where are We Connecting?
New World (Wide Packets) Order. Long Haul: intercontinental and coast-to-coast, over fiber at 10 Gbps and up (long-haul DWDM, SONET, ATM and Ethernet). Metro Network: intra-city or metro network, all over fiber at 1 Gbps to 10 Gbps (short-haul DWDM, SONET, Gigabit Ethernet). Access Network: network connections to the customer, the last mile (Fast and Gigabit Ethernet, T-1, DSL and cable modems). LAN: desktop to desktop and floor to floor, 10 Mbps to 1 Gbps (Ethernet and ATM).

7 Technology Futures, Inc. (2001)
The typical household of 2015 subscribes to broadband service at 24 Mb/s to 100 Mb/s. Small businesses will access the network at data rates up to 622 Mb/s. Medium and large businesses will access the network directly with fiber at data rates from 2.4 Gb/s to 40 Gb/s. By 2015, most customers obtain voice and narrowband data service via wireless or VoIP on broadband channels. In 2015, fiber dominates the outside plant, comprising 100% of the interoffice network, 97% of the feeder network, and 95% of the distribution network. Prepared on behalf of the Telecommunications Technology Forecasting Group (Verizon, SBC, Bell Canada, BellSouth, Sprint, Qwest).

8 BROADBAND SERVICES Services Offered On Broadband : Data Services
High Speed Internet Services Point to Point and Point to Multipoint VPN Services Web Hosting Applications Walled garden (Internet on TV) Voice Services Audio Conference Voice over IP (VoIP) Video Services Video Broadcast Video on Demand Video telephony Online Gaming

9 BROADBAND SERVICES
Services offered on broadband: Web Hosting, Multimedia Conference, Video/Audio-on-Demand, Online Gaming, Video on PC. Chart: revenue versus time, with value-added applications boosting revenue on top of the basic telecom services (voice, Internet).

10 BROADBAND COMPONENTS
User access, core and backend. Diagram: residential and business users (voice, gaming, video) reach the access and aggregation network, which feeds the core network; the backend hosts the soft switch, video server, RADIUS server, NMS server and client, Voice over IP, IP VPN and video communication services.

11 BROADBAND NETWORK ELEMENTS
Network Elements and Technology Options. The Access: corporate / SME / SOHO and residential users over wired access (DSL, cable, FTTH) or wireless (BWA, WiFi). The Media: optical fiber. The Technology (core): SDH, CWDM / DWDM, ATM, Ethernet. The Backend: authentication server, content servers, web hosting servers, video servers.

12 BROADBAND NETWORK COMPONENTS
Diagram: access (FTTH 100M over 2-fiber SM G.652, set-top box and media converter, GigE DSLAM), a dual-homed GigE MPLS core ring with an Internet gateway router and BBRAS, and a backend with DNS, DHCP and AAA servers plus voice, Internet data and TV/video service servers.

13 CORE NETWORK TECHNOLOGIES
Broadband core technologies. Core network architecture: ring (single homing or dual homing) or mesh. Technology: ATM / IP / Ethernet, SDH, or SDH over CWDM / DWDM.

14 RING ARCHITECTURE
Aggregation ring over 2-fiber SM G.652 with a core switch / collector. Single homed: advantages are low cost and simple architecture; disadvantage is that the hub is a potential single point of failure; target application is the access and a low-capacity core. Dual homed: a highly reliable but high-cost solution; target application is a high-capacity core.

15 MESH ARCHITECTURE
Advantages: extremely reliable, with protection against equipment and fiber failure. Disadvantages: very complex and costly to implement. Target application: the highest tier in the core (dual-homed mesh core, ring-mesh network).

16 Technology Options on Core
CORE TECHNOLOGIES. Technology options on the core: ATM backbone; IP / Metro Ethernet; SDH / CWDM / DWDM.

17 Traditional ATM Backbone Core
Benefits: widely deployed across the world; a stabilized technology; aggregation through multiple E1s or up to STM-16 rings; good for aggregating higher bandwidths. Drawbacks: ATM PVCs add about 10% overhead; longer provisioning time.

18 ATM BACKBONE CORE
DSLAM for residential Internet access, the traditional way. Diagram: xDSL subscribers and FTTH 100M / Metro Ethernet LAN switches use RFC 1483 bridged access into DSLAMs; the DSLAMs connect over STM-4 / STM-16 links into an ATM mesh, which hands traffic to a broadband RAS for Internet access.

19 METRO ETHERNET CORE
Benefits: deployment has started in a huge way across the world; aggregation through Fast Ethernet or Gigabit Ethernet; highly recommended for high-bandwidth requirements. Drawbacks: the technology is still being standardized; limited support for VLANs; a single broadcast domain.

20 BROADBAND TECHNOLOGIES & SERVICES
DSLAM for residential Internet access, the next-generation way. Diagram: xDSL subscribers terminate on IP DSLAMs, which aggregate over Fast / Gigabit Ethernet to a broadband RAS and an Internet gateway router toward the service and connectivity provider and the Internet.

21 SDH / CWDM / DWDM CORE
Inter-city backbone network and metro transport and aggregation network. Diagram: xDSL traffic from DSLAMs reaches the core through Metro Ethernet (GigE, MPLS) and an aggregation ring (CWDM / DWDM over 2-fiber SM G.652); the SDH network carries STM-1/4/16 toward the ATM backbone (STM-1 / STM-4), the BBRAS, the Internet gateway and access routers, and the service servers (voice, Internet data, TV/video, DNS, DHCP, AAA), with Ethernet UNIs at the edges.

22 Network Topologies A topology refers to the manner in which the cable is run to individual workstations on the network, that is, the configurations formed by the connections between devices on a local area network (LAN) or between two or more LANs. There are three basic network topologies (not counting variations thereon): the bus, the star, and the ring. It is important to make a distinction between a topology and an architecture. A topology is concerned with the physical arrangement of the network components. In contrast, an architecture addresses the components themselves and how a system is structured (cable access methods, lower-level protocols, topology, etc.). An example of an architecture is 10BaseT Ethernet, which typically uses the star topology.

23 Bus Topology A bus topology connects each computer (node) to a single segment trunk. A ‘trunk’ is a communication line, typically coax cable, that is referred to as the ‘bus.’  The signal travels from one end of the bus to the other. A terminator is required at each end to absorb the signal so it does not reflect back across the bus. In a bus topology, signals are broadcast to all stations. Each computer checks the address on the signal (data frame) as it passes along the bus. If the signal’s address matches that of the computer, the computer processes the signal. If the address doesn’t match, the computer takes no action and the signal travels on down the bus. Only one computer can ‘talk’ on a network at a time. A media access method (protocol) called CSMA/CD is used to handle the collisions that occur when two signals are placed on the wire at the same time. The bus topology is passive. In other words, the computers on the bus simply ‘listen’ for a signal; they are not responsible for moving the signal along. A bus topology is normally implemented with coaxial cable.
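As a minimal sketch of the CSMA/CD collision handling mentioned above, here is the truncated binary exponential backoff a station applies after a collision; the 51.2 microsecond slot time is the classic 10 Mbps Ethernet value, used purely for illustration:

```python
import random

SLOT_TIME_US = 51.2   # slot time of classic 10 Mbps Ethernet, for illustration

def backoff_delay_us(collision_count: int) -> float:
    """Truncated binary exponential backoff used by CSMA/CD.

    After the n-th collision a station waits a random number of slot times
    chosen uniformly from 0 .. 2**min(n, 10) - 1; it gives up after 16 attempts.
    """
    if collision_count > 16:
        raise RuntimeError("too many collisions, frame dropped")
    k = min(collision_count, 10)
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME_US

# Example: delays chosen after the first three collisions of one frame.
for n in (1, 2, 3):
    print(f"collision {n}: wait {backoff_delay_us(n):.1f} microseconds")
```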

24 Bus Topology Advantages of bus topology: Easy to implement and extend
Well suited for temporary networks that must be set up in a hurry Typically the cheapest topology to implement Failure of one station does not affect others Disadvantages of bus topology: Difficult to administer/troubleshoot Limited cable length and number of stations A cable break can disable the entire network; no redundancy Maintenance costs may be higher in the long run Performance degrades as additional computers are added

25 Star Topology All of the stations in a star topology are connected to a central unit called a hub. The hub offers a common connection for all stations on the network. Each station has its own direct cable connection to the hub. In most cases, this means more cable is required than for a bus topology. However, this makes adding or moving computers a relatively easy task; simply plug them into a cable outlet on the wall. If a cable is cut, it only affects the computer that was attached to it. This eliminates the single point of failure problem associated with the bus topology. (Unless, of course, the hub itself goes down.) Star topologies are normally implemented using twisted pair cable, specifically unshielded twisted pair (UTP). The star topology is probably the most common form of network topology currently in use.

26 Star Topology Advantages of star topology:
Easy to add new stations Easy to monitor and troubleshoot Can accommodate different wiring Disadvantages of star topology: Failure of hub cripples attached stations More cable required (more expensive to wire a building for networking)

27 Ring Topology A ring topology consists of a set of stations connected serially by cable. In other words, it’s a circle or ring of computers. There are no terminated ends to the cable; the signal travels around the circle in a clockwise (or anticlockwise) direction. Note that while this topology functions logically as a ring, it is physically wired as a star. The central connector is not called a hub but a Multistation Access Unit or MAU. (Don’t confuse a Token Ring MAU with a ‘Medium Attachment Unit’, which is actually a transceiver.) Under the ring concept, a signal is transferred sequentially via a "token" from one station to the next. When a station wants to transmit, it "grabs" the token, attaches data and an address to it, and then sends it around the ring. The token travels along the ring until it reaches the destination address. The receiving computer acknowledges receipt with a return message to the sender. The sender then releases the token for use by another computer. Each station on the ring has equal access but only one station can talk at a time.

28 Ring Topology In contrast to the ‘passive’ topology of the bus, the ring employs an ‘active’ topology. Each station repeats or ‘boosts’ the signal before passing it on to the next station. Rings are normally implemented using twisted pair or fiber-optic cable. Advantages of ring topology: Growth of system has minimal impact on performance All stations have equal access Disadvantages of ring topology: Most expensive topology Failure of one computer may impact others Complex

29 What’s “New Generation Network” or NWGN?
Examples: cell phones have gone 2G > 3G > 4G; the Internet has gone IPv4 > IPv6 > IPv?. Timeline (2005 - 2010 - 2015): the past and present networks evolve into the Next Generation Network (NXGN), a revised / modified network, while the New Generation Network (NWGN) is a clean-slate design.

30 Introduction to Multimedia
Broadband in Indonesia Introduction to Multimedia

31 From Agricultural to Conceptual

32 The Information Revolution, Driver of the Knowledge Economy in a Global World

33 ROLE OF BROADBAND "For every one percentage point increase in broadband penetration in a state, employment is projected to increase by 0.2 to 0.3 percent per year" (Brookings Institution).

34 ROLE OF BROADBAND Broadband needs to be considered as basic national infrastructure, as it will fundamentally reshape the world in the 21st century and change the way services are delivered – from e-health to e-education to e-commerce to e-government. Broadband is the most powerful tool ever devised to drive social and economic development, and accelerate progress towards the Millennium Development Goals. Broadband is becoming a prerequisite to economic opportunity for individuals, small businesses and communities. Those without broadband and the skills to use broadband-enabled technologies are becoming more isolated from the modern American economy. Broadband can provide significant benefits to the next generation of entrepreneurs and small businesses—the engines of job creation and economic growth for the country.

35 BROADBAND & SMEs It allows small businesses to achieve operational scale more quickly. Broadband and associated ICTs can help lower company start-up costs through faster business registration and improved access to customers and suppliers. It gives SMEs access to new markets and opportunities by lowering the barriers of physical scale and allowing them to compete for customers who previously turned exclusively to larger suppliers. It allows small businesses to increase efficiency, improve market access, reduce costs and increase the speed of both transactions and interactions. E-commerce solutions eliminate geographic barriers to getting a business's message and product out to a broad audience. 60 million Americans go online every day to find a product or service, but only 24% of small businesses use e-commerce applications to sell online.

36 BROADBAND & ECONOMIC SECTORS
An OECD report urges governments to invest in open-access, high-speed national fiber networks that can serve as the future delivery mechanism for a huge range of new and innovative public sector services. Despite the large initial capital investment needed (typically US$ 1,500-2,500 per household connected), the report shows that national broadband networks can pay for themselves within ten years through dramatic savings in just four key economic sectors: electricity, healthcare, road transport and education. Cost savings across the four sectors of just 0.5%-1.5% would be sufficient to justify the cost of laying high-speed fiber-to-the-home via a national point-to-point network.

37 The Positive Side of Indonesian ICT Development
Mobile and Internet tariffs are among the cheapest in Southeast Asia. Large growth in mobile subscribers for several years. Growing applications and content in Internet and mobile services, such as IP-TV, streaming video, games, entertainment, BlackBerry, etc. Indonesia is among the world's largest users of Web 2.0 social networking, such as blogs, Facebook, Multiply, YouTube, YM, chatting, etc.

38 The Negative Side of the Indonesian ICT Development
The declining profit margins of operators due to very intense tariff competition. The lowering of quality of service, especially for 3G and mobile Internet services. Low or little profit from Web, Internet and social network services, due to the low average income of Indonesians. ICT growth has not been accompanied by economic growth; little value added results.

39 ICT Indicators 2004-2008 Population in 2008 = 228,523,300
Households in 2008 = 57,716,100 Income per capita = Rp 7.5 million GDP (PDB) per capita = Rp 8.7 million per year % of households with fixed phones = 12.69% (24.51% in cities, 3.72% in villages)

40 INDONESIA DATA INFRASTRUCTURE 2008 (UN E-Gov Survey 2008)
Internet users per 100 inhabitants: 7.18; PCs per 100: 1.47; cellular subscriptions per 100: 28.30; main telephone lines per 100: 6.57; broadband per 100: 0.05.

41 E-READINESS
Technology readiness rankings 2008-2009 (source: Global Competitiveness Report, World Economic Forum). Columns: country, competitiveness, technological competitiveness, advanced technology, technology absorption, ICT regulation, FDI and technology transfer, cellular services, Internet users, number of computers, broadband. Rows: Thailand 34 66 50 61 48 72 78 94; Indonesia 55 88 65 71 24 100 107 105; Vietnam 70 79 54 57 114 63; Philippines 52 49 60 84 101 96; Sri Lanka 77 82 45 59 47 102 117 98; Cambodia 109 123 106 122 120 130 128 108.
e-Readiness 2008 (source: The Economist Intelligence Unit, 2007). Columns: country, rank, total score, connectivity, business environment, social and cultural environment, legal environment, policy, business adoption. Rows: Thailand 47 5.22 3.80 6.99 5.07 5.90 5.25 5.10; Philippines 55 4.90 3.20 6.56 4.53 4.50 5.20 5.45; Sri Lanka 60 4.35 2.95 5.80 4.80 6.30 4.10 3.70; Vietnam 65 4.03 2.25 6.31 4.40 4.60 3.75; Indonesia 68 3.59 2.30 6.49 3.53 3.40. Source: RPJMN (Indonesia's National Medium-Term Development Plan).

42 WHY FIXED BROADBAND? Mostly dedicated all the way to the last mile
Wireless is generally for low-traffic use Basic infrastructure Long-term investment Public-private partnership Optimal utilization of the Palapa Ring Badly needed by the creative industry

43 Multimedia Description
Multimedia is an integration of continuous media (e.g. audio, video) and discrete media (e.g. text, graphics, images) through which digital information can be conveyed to the user in an appropriate way. Multi: many, much, multiple. Medium: an intervening substance through which something is transmitted or carried on. Introduction to Multimedia

44 Why Multimedia Computing?
Application driven e.g. medicine, sports, entertainment, education Information can often be better represented using audio/video/animation rather than using text, images and graphics alone. Information is distributed using computer and telecommunication networks. Integration of multiple media places demands on computation power storage requirements networking requirements Introduction to Multimedia

45 Multimedia Information Systems
Technical challenges Sheer volume of data Need to manage huge volumes of data Timing requirements among components of data computation and communication. Must work internally with given timing constraints - real-time performance is required. Integration requirements need to process traditional media (text, images) as well as continuous media (audio/video). Media are not always independent of each other - synchronization among the media may be required. Introduction to Multimedia

46 High Data Volume of Multimedia Information
Introduction to Multimedia

47 Introduction to Multimedia
Technology Incentive Growth in computational capacity MM workstations with audio/video processing capability Dramatic increase in CPU processing power Dedicated compression engines for audio, video etc. Rise in storage capacity Large capacity disks (several gigabytes) Increase in storage bandwidth, e.g. disk array technology Surge in available network bandwidth high speed fiber optic networks - gigabit networks fast packet switching technology Introduction to Multimedia

48 Introduction to Multimedia
Application Areas Residential Services video-on-demand video phone/conferencing systems multimedia home shopping (MM catalogs, product demos and presentation) self-paced education Business Services Corporate training Desktop MM conferencing, MM email Introduction to Multimedia

49 Introduction to Multimedia
Application Areas Education Distance education - MM repository of class videos Access to digital MM libraries over high speed networks Science and Technology computational visualization and prototyping astronomy, environmental science Medicine Diagnosis and treatment - e.g. MM databases that provide support for queries on scanned images, X-rays, assessments, response etc. Introduction to Multimedia

50 Classification of Media
Perception Medium How do humans perceive information in a computer? Through seeing - text, images, video Through hearing - music, noise, speech Representation Medium How is the computer information encoded? Using formats for representing information: ASCII (text), JPEG (image), MPEG (video) Presentation Medium Through which medium is information delivered by the computer or introduced into the computer? Via I/O tools and devices paper, screen, speakers (output media) keyboard, mouse, camera, microphone (input media) Introduction to Multimedia

51 Classification of Media (cont.)
Storage Medium Where will the information be stored? Storage media - floppy disk, hard disk, tape, CD-ROM etc. Transmission Medium Over what medium will the information be transmitted? Using information carriers that enable continuous data transmission - networks wire, coaxial cable, fiber optics Information Exchange Medium Which information carrier will be used for information exchange between different places? Direct transmission using computer networks Combined use of storage and transmission media (e.g. electronic mail). Introduction to Multimedia

52 Introduction to Multimedia
Media Concepts Each medium defines Representation values - determine the information representation of different media Continuous representation values (e.g. electro-magnetic waves) Discrete representation values(e.g. text characters in digital form) Representation space determines the surrounding where the media are presented. Visual representation space (e.g. paper, screen) Acoustic representation space (e.g. stereo) Introduction to Multimedia

53 Introduction to Multimedia
Media Concepts (cont.) Representation dimensions of a representation space are: Spatial dimensions: two dimensional (2D graphics) three dimensional (holography) Temporal dimensions: Time independent (document) - Discrete media Information consists of a sequence of individual elements without a time component. Time dependent (movie) - Continuous media Information is expressed not only by its individual value but also by its time of occurrence. Introduction to Multimedia

54 Introduction to Multimedia
Multimedia Systems Qualitative and quantitative evaluation of multimedia systems Combination of media continuous and discrete. Levels of media-independence some media types (audio/video) may be tightly coupled, others may not. Computer supported integration timing, spatial and semantic synchronization Communication capability Introduction to Multimedia

55 Introduction to Multimedia
Data Streams Distributed multimedia communication systems data of discrete and continuous media are broken into individual units (packets) and transmitted. Data Stream sequence of individual packets that are transmitted in a time-dependent fashion. Transmission of information carrying different media leads to data streams with varying features Asynchronous Synchronous Isochronous Introduction to Multimedia

56 Data Stream Characteristics
Asynchronous transmission mode provides for communication with no time restriction Packets reach receiver as quickly as possible, e.g. protocols for transmission Synchronous transmission mode defines a maximum end-to-end delay for each packet of a data stream. May require intermediate storage E.g. audio connection established over a network. Isochronous transmission mode defines a maximum and a minimum end-to-end delay for each packet of a data stream. Delay jitter of individual packets is bounded. E.g. transmission of video over a network. Intermediate storage requirements reduced. Introduction to Multimedia

57 Data Stream Characteristics
Data stream characteristics for continuous media can be based on: Time intervals between complete transmission of consecutive packets Strongly periodic data streams - constant time interval Weakly periodic data streams - intervals follow a periodic function with finite period Aperiodic data streams Data size - the amount of data per consecutive packet Strongly regular data streams - constant amount of data Weakly regular data streams - varies periodically with time Irregular data streams Continuity Continuous data streams Discrete data streams Introduction to Multimedia
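A small sketch of how the timing-based classification could be applied to a trace of packet timestamps; the tolerance parameter, the brute-force period search and the function name are my own simplifications, not part of the lecture material:

```python
def classify_intervals(timestamps, tol=1e-6):
    """Classify a packet trace by the time intervals between consecutive packets.

    Returns 'strongly periodic' if all intervals are (near) constant,
    'weakly periodic' if the interval sequence repeats with a finite period,
    and 'aperiodic' otherwise.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if all(abs(g - gaps[0]) <= tol for g in gaps):
        return "strongly periodic"
    for period in range(2, len(gaps) // 2 + 1):
        if all(abs(g - gaps[i % period]) <= tol for i, g in enumerate(gaps)):
            return "weakly periodic"
    return "aperiodic"

print(classify_intervals([0.0, 0.04, 0.08, 0.12, 0.16]))        # strongly periodic
print(classify_intervals([0.0, 0.01, 0.04, 0.05, 0.08, 0.09]))  # weakly periodic (gaps alternate 0.01, 0.03)
print(classify_intervals([0.0, 0.013, 0.05, 0.051, 0.2]))       # aperiodic
```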

58 Classification based on time intervals
Figure: a strongly periodic data stream (constant interval T between packets), a weakly periodic data stream (intervals T1, T2, T3 repeating with period T), and an aperiodic data stream (arbitrary intervals). Introduction to Multimedia

59 Classification based on packet size
Figure: a strongly regular data stream (constant packet size D1), a weakly regular data stream (sizes D1, D2, D3 repeating periodically over time), and an irregular data stream (arbitrary sizes D1 ... Dn). Introduction to Multimedia

60 Classification based on continuity
Figure: a continuous data stream (packets D1 ... D4 transmitted back to back without gaps) and a discrete data stream (gaps between packets). Introduction to Multimedia

61 Broadband Multimedia Communications
Audio/Image/Video Representation Introduction to Multimedia

62 Introduction to Multimedia
Basic Sound Concepts Computer Representation of Sound Basic Image Concepts Image Representation and Formats Video Signal Representation Color Encoding Computer Video Format Introduction to Multimedia

63 Introduction to Multimedia
Basic Sound Concepts Acoustics study of sound - generation, transmission and reception of sound waves. Sound is produced by vibration of matter. During vibration, pressure variations are created in the surrounding air molecules. Pattern of oscillation creates a waveform the wave is made up of pressure differences. Waveform repeats the same shape at intervals called a period. Periodic sound sources - exhibit more periodicity, more musical - musical instruments, wind etc. Aperiodic sound sources - less periodic - unpitched percussion, sneeze, cough. Introduction to Multimedia

64 Introduction to Multimedia
Basic Sound Concepts Sound Transmission Sound is transmitted by molecules bumping into each other. Sound is a continuous wave that travels through air. Sound is detected by measuring the pressure level at a point. Receiving Microphone in sound field moves according to the varying pressure exerted on it. Transducer converts energy into a voltage level (i.e. energy of another form - electrical energy) Sending Speaker transforms electrical energy into sound waves. Introduction to Multimedia

65 Frequency of a sound wave
Frequency is the reciprocal value of the period. Figure: air pressure amplitude plotted against time, with one period of the waveform marked. Introduction to Multimedia

66 Introduction to Multimedia
Basic Sound Concepts Wavelength is the distance travelled in one cycle: at 20 Hz it is about 56 feet, at 20 kHz about 0.7 in. Frequency represents the number of periods in a second (measured in hertz, cycles/second). Frequency is the reciprocal value of the period. Human hearing frequency range: 20 Hz - 20 kHz; voice is about 500 Hz to 2 kHz. Infrasound: 0 Hz - 20 Hz Human range: 20 Hz - 20 kHz Ultrasound: 20 kHz - 1 GHz Hypersound: 1 GHz - 10 THz Introduction to Multimedia

67 Introduction to Multimedia
Basic Sound Concepts Amplitude of a sound is the measure of the displacement of the air pressure wave from its mean or quiescent state. Subjectively heard as loudness. Measured in decibels. 0 dB: essentially no sound heard; 35 dB: quiet home; 70 dB: noisy street; 120 dB: discomfort. Introduction to Multimedia

68 Computer Representation of Audio
A transducer converts pressure to voltage levels. Convert analog signal into a digital stream by discrete sampling. Discretization both in time and amplitude (quantization). In a computer, we sample these values at intervals to get a vector of values. A computer measures the amplitude of the waveform at regular time intervals to produce a series of numbers (samples). Introduction to Multimedia

69 Computer Representation of Audio
Sampling Rate: the rate at which a continuous wave is sampled (measured in Hertz). CD standard: 44,100 Hz; telephone quality: 8,000 Hz. There is a direct relationship between sampling rate, sound quality (fidelity) and storage space. Question: How often do you need to sample a signal to avoid losing information? Answer: To decide on a sampling rate you must be aware of the difference between the playback rate and the capturing (sampling) rate. It depends on how fast the signal is changing. In reality - twice per cycle (follows from the Nyquist sampling theorem). Introduction to Multimedia

70 Introduction to Multimedia
Sampling. Figure: the waveform's sample height measured at regular intervals (sample number on the horizontal axis). Introduction to Multimedia

71 Nyquist Sampling Theorem
If a signal f(t) is sampled at regular intervals of time and at a rate higher than twice the highest significant signal frequency, then the samples contain all the information of the original signal. Example: the highest frequency reproduced for CD-quality audio is about 22,050 Hz; because of the Nyquist theorem we need to sample at twice that frequency, therefore the sampling frequency is 44,100 Hz. Introduction to Multimedia
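A small numeric illustration of the theorem in Python; the 44.1 kHz / 1 kHz figures are standard CD-audio numbers, and the aliasing demonstration is my own construction:

```python
import math

def sample_tone(freq_hz, rate_hz, n):
    """Return n samples of a unit sine wave of frequency freq_hz taken at rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# A 1 kHz tone sampled at 44.1 kHz (well above 2 * 1 kHz) is fully recoverable.
ok = sample_tone(1000, 44100, 5)
# The same tone sampled at only 1.5 kHz (below 2 * 1 kHz) is aliased:
bad = sample_tone(1000, 1500, 5)

# The undersampled 1 kHz tone yields exactly the samples of a phase-inverted
# 500 Hz tone, so the two are indistinguishable after sampling.
alias = sample_tone(500, 1500, 5)
print(all(abs(a + b) < 1e-9 for a, b in zip(bad, alias)))   # True
```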

72 Introduction to Multimedia
Data Rate of a Channel. Noiseless channel: Nyquist proved that if an arbitrary signal has been run through a low-pass filter of bandwidth H, the filtered signal can be completely reconstructed by making only 2H (exact) samples per second. If the signal consists of V discrete levels, Nyquist's theorem states: max data rate = 2 H log2(V) bits/sec. A noiseless 3 kHz channel with a 1-bit quantization level cannot transmit a binary signal at a rate exceeding 6000 bits per second. Noisy channel: the thermal noise present is measured by the ratio of the signal power S to the noise power N (signal-to-noise ratio S/N). Max data rate = H log2(1 + S/N) bits/sec. Introduction to Multimedia
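Both bounds translate directly into one-line functions; a sketch that reproduces the 3 kHz noiseless example from the slide (the 30 dB signal-to-noise figure in the noisy case is my own illustrative choice):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Noiseless channel: max data rate = 2 * H * log2(V) bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Noisy channel: max data rate = H * log2(1 + S/N) bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

print(nyquist_capacity(3000, 2))             # 6000.0 bits/s, matching the slide
# Illustrative only: a 3 kHz channel at 30 dB signal-to-noise ratio (S/N = 1000).
print(round(shannon_capacity(3000, 1000)))   # about 29902 bits/s
```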

73 Introduction to Multimedia
Quantization Sample precision - the resolution of a sample value. Quantization depends on the number of bits used to measure the height of the waveform. 16-bit CD-quality quantization results in 64K values. Audio formats are described by sample rate and quantization. Voice quality - 8-bit quantization, 8000 Hz mono (8 KBytes/sec) 22 kHz 8-bit mono (22 KBytes/s) and stereo (44 KBytes/sec) CD quality - 16-bit quantization, 44,100 Hz linear stereo (about 176 KBytes/s) Introduction to Multimedia

74 Quantization and Sampling
Figure: quantization and sampling - sample heights are rounded to a small set of discrete levels (e.g. 0.25, 0.5, 0.75) at each sample instant. Introduction to Multimedia

75 Introduction to Multimedia
Audio Formats Audio formats are characterized by four parameters Sample rate: sampling frequency Encoding: audio data representation µ-law encoding corresponds to the CCITT G.711 standard for voice data in telephone companies in the USA, Canada and Japan A-law encoding - used for telephony elsewhere. A-law and µ-law are sampled at 8000 samples/second with a precision of 12 bits, compressed to 8-bit samples. Linear Pulse Code Modulation (PCM) - uncompressed audio where samples are proportional to audio signal voltage. Precision: number of bits used to store an audio sample µ-law and A-law - 8-bit precision, PCM can be stored at various precisions, 16-bit PCM is common. Channel: multiple channels of audio may be interleaved at sample boundaries. Introduction to Multimedia

76 Introduction to Multimedia
Audio Formats Available on UNIX au (SUN file format), wav (Microsoft RIFF/waveform format), al (raw A-law), u (raw µ-law)… Available on Windows-based systems (RIFF formats) wav, midi (file format for standard MIDI files), avi RIFF (Resource Interchange File Format) tagged file format (similar to TIFF). Allows multiple applications to read files in RIFF format RealAudio, MP3 (MPEG Audio Layer 3) Introduction to Multimedia

77 Computer Representation of Voice
The best-known technique for voice digitization is pulse code modulation (PCM). It consists of the two-step process of sampling and quantization, based on the sampling theorem. If voice data are limited to 4000 Hz, then PCM takes 8000 samples per second, which is sufficient for the input voice signal. PCM provides analog samples which must be converted to digital representation: each analog sample is assigned a binary code, and each sample is approximated by being quantized. Introduction to Multimedia
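A compact sketch of the sampling-and-quantization idea for telephone voice, using the standard µ-law companding curve (µ = 255); the helper names and example samples are mine:

```python
import math

MU = 255  # companding parameter used by North American / Japanese telephony

def mu_law_compress(x: float) -> float:
    """Map a linear sample in [-1, 1] to a companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def quantize_8bit(x: float) -> int:
    """Quantize a companded value to one of 256 levels (8 bits per sample)."""
    return int(round((x + 1) / 2 * 255))

# 8000 samples/s * 8 bits = 64 kbit/s telephone voice, as described above.
for sample in (-1.0, -0.01, 0.0, 0.01, 1.0):
    code = quantize_8bit(mu_law_compress(sample))
    print(f"linear {sample:+.2f} -> code {code:3d}")
```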

78 Computer Representation of Music
MIDI (Musical Instrument Digital Interface) standard that manufacturers of musical instruments use so that instruments can communicate musical information via computers. The MIDI interface consists of: Hardware - the physical connection between instruments; specifies a MIDI port (plugs into the computer's serial port) and a MIDI cable. Data format - has instrument specification, the notion of beginning and end of a note, frequency and sound volume. Data is grouped into MIDI messages that specify a musical event. An instrument that satisfies both is a MIDI device (e.g. a synthesizer). MIDI software applications include music recording and performance applications, musical notation and printing applications, music education etc. Introduction to Multimedia
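As a concrete illustration of the MIDI data format, a note-on event is just three bytes: a status byte carrying the message type and channel, followed by the note number and the velocity. A minimal sketch (the helper name is an assumption of the example):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI note-on message.

    Status byte 0x90 | channel selects "note on" for channels 0-15;
    note 60 is middle C; velocity 0-127 encodes how hard the key was struck.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

print(note_on(0, 60, 100).hex())   # '903c64': middle C on the first channel at velocity 100
```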

79 Computer Representation of Speech
Human ear is most sensitive in the range 600Hz to 6000 Hz. Speech Generation real-time signal generation allows transformation of text into speech without lengthy processing Limited vs. large vocabulary (depends on application) Must be understandable, must sound natural Speech Analysis Identification and Verification - recognize speakers using acoustic fingerprint Recognition and Understanding - analyze what has been said How something was said - used in lie detectors. Speech transmission - coding, recognition and synthesis methods - achieve minimal data rate for a given quality. Introduction to Multimedia

80 Basic Concepts (Digital Image Representation)
An image is a spatial representation of an object, a 2D or 3D scene etc. Abstractly, an image is a continuous function defining a rectangular region of a plane intensity image - proportional to radiant energy received by a sensor/detector range image - line of sight distance from sensor position. An image can be thought of as a function with resulting values of the light intensity at each point over a planar region. Introduction to Multimedia

81 Digital Image Representation
For computer representation, the function (e.g. intensity) must be sampled at discrete intervals. Sampling quantizes the intensity values into discrete intervals. Points at which an image is sampled are called picture elements or pixels. Resolution specifies the distance between points - accuracy. A digital image is represented by a matrix of numeric values each representing a quantized intensity value. I(r,c) - intensity value at the position corresponding to row r and column c of the matrix. The intensity value can be represented by 1 bit for black-and-white images (binary-valued images), 8 bits for monochrome imagery to encode color or grayscale levels, or 24 bits for color (RGB). Introduction to Multimedia

82 Introduction to Multimedia
Image Formats Captured Image Format format obtained from an image frame grabber Important parameters Spatial resolution (pixels X pixels) Color encoding (quantization level of a pixel - 8-bit, 24-bit) e.g. “SunVideo” Video digitizer board allows pictures of 320 by 240 pixels with 8-bit grayscale or color resolution. Parallax-X video includes resolution of 640X480 pixels and 24-bit frame buffer. Introduction to Multimedia

83 Introduction to Multimedia
Image Formats Stored Image Format - format when images are stored Images are stored as 2D array of values where each value represents the data associated with a pixel in the image. Bitmap - this value is a binary digit For a color image - this value may be a collection of 3 values that represent intensities of RGB component at that pixel, 3 numbers that are indices to table of RGB intensities, index to some color data structure etc. Image file formats include - GIF (Graphical Interchange Format) , X11 bitmap, Postscript, JPEG, TIFF Introduction to Multimedia

84 Basic Concepts (Video Representation)
The human eye views video; inherent properties of the eye determine essential conditions related to video systems. Video signal representation consists of 3 aspects: Visual Representation - the objective is to offer the viewer a sense of presence in the scene and of participation in the events portrayed. Transmission - video signals are transmitted to the receiver through a single television channel. Digitization - analog-to-digital conversion, sampling of gray (color) levels, quantization. Introduction to Multimedia

85 Visual Representation
The televised image should convey the spatial and temporal content of the scene. Vertical detail and viewing distance Aspect ratio: ratio of picture width to height (4/3 = 1.33 is the conventional aspect ratio). Relative viewing distance = viewing distance / picture height. Horizontal detail and picture width Picture width (conventional TV service) = 4/3 * picture height. Total detail content of the image Number of pixels presented separately in the picture height = vertical resolution Number of pixels in the picture width (horizontal resolution) = vertical resolution * aspect ratio The product of the two equals the total number of picture elements in the image. Introduction to Multimedia

86 Visual Representation
Perception of Depth In natural vision, this is determined by the angular separation of the images received by the two eyes of the viewer. In the flat image of TV, the focal length of lenses and changes in depth of focus in a camera influence depth perception. Luminance and Chrominance Color vision is achieved through 3 signals, proportional to the relative intensities of RED, GREEN and BLUE. Color encoding during transmission uses one LUMINANCE and two CHROMINANCE signals. Temporal Aspect of Resolution Motion resolution is a rapid succession of slightly different frames. For visual reality, the repetition rate must be high enough (a) to guarantee smooth motion and (b) so that persistence of vision extends over the interval between flashes (light cutoff between frames). Introduction to Multimedia

87 Visual Representation
Continuity of motion Motion continuity is achieved at a minimum of 15 frames per second; it is good at 30 frames/sec; some technologies allow 60 frames/sec. The NTSC standard provides 30 frames/sec (a 30 Hz frame repetition rate, 60 Hz field rate due to interlacing). The PAL standard provides 25 frames/sec (25 Hz frame rate, 50 Hz field rate). Flicker effect Flicker is a periodic fluctuation of brightness perception. To avoid this effect, we need 50 refresh cycles/sec. Display devices have a display refresh buffer for this. Temporal aspects of video bandwidth depend on the rate at which the visual system can scan pixels and on human eye scanning capabilities. Introduction to Multimedia

88 Introduction to Multimedia
Transmission (NTSC) Video bandwidth is computed as follows: (700/2) pixels per line x 525 lines per picture x 30 pictures per second. The visible number of lines is 480. The intermediate delay between frames is 1000 ms / 30 fps = 33.3 ms. The display time per line is 33.3 ms / 525 lines = about 63.5 microseconds. The transmitted signal is a composite signal consisting of 4.2 MHz for the basic signal and 5 MHz for the color, intensity and synchronization information. Introduction to Multimedia
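The arithmetic above can be checked directly; a short sketch (the variable names and rounding are mine):

```python
pixels_per_line = 700 / 2      # resolvable luminance samples per line
lines_per_frame = 525
frames_per_sec = 30

pixel_rate = pixels_per_line * lines_per_frame * frames_per_sec
print(f"pixel rate: {pixel_rate / 1e6:.2f} million samples/s")   # 5.51, setting the MHz scale

frame_time_ms = 1000 / frames_per_sec                  # 33.3 ms between frames
line_time_us = frame_time_ms * 1000 / lines_per_frame  # about 63.5 microseconds per line
print(f"frame time: {frame_time_ms:.1f} ms, line time: {line_time_us:.1f} us")
```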

89 Introduction to Multimedia
Color Encoding A camera creates three signals RGB (red, green and blue) For transmission of the visual signal, we use three signals 1 luminance (brightness-basic signal) and 2 chrominance (color signals). In NTSC, luminance and chrominance are interleaved Goal at receiver separate luminance from chrominance components avoid interference between them prior to recovery of primary color signals for display. Introduction to Multimedia

90 Introduction to Multimedia
Color Encoding RGB signal - for separate signal coding, consists of 3 separate signals for the red, green and blue colors. Other colors are coded as a combination of the primary colors (R + G + B = 1 gives neutral white). YUV signal - separate brightness (luminance) component Y and color information (2 chrominance signals U and V): Y = 0.30R + 0.59G + 0.11B U = (B - Y) * 0.493 V = (R - Y) * 0.877 The resolution of the luminance component is more important than that of U and V. The coding ratio of Y, U, V is 4:2:2. Introduction to Multimedia
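A direct transcription of those YUV equations, assuming RGB components normalized to the range 0..1 (the function name and test pixels are mine):

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (components in 0..1) to YUV with the coefficients above."""
    y = 0.30 * r + 0.59 * g + 0.11 * b   # luminance
    u = (b - y) * 0.493                  # chrominance: scaled blue-minus-luma
    v = (r - y) * 0.877                  # chrominance: scaled red-minus-luma
    return y, u, v

print(tuple(round(c, 3) for c in rgb_to_yuv(1.0, 1.0, 1.0)))   # white: (1.0, 0.0, 0.0), no chroma
print(tuple(round(c, 3) for c in rgb_to_yuv(1.0, 0.0, 0.0)))   # saturated red: (0.3, -0.148, 0.614)
```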

91 Introduction to Multimedia
Color Encoding (cont.) YIQ signal - similar to YUV, used by the NTSC format: Y = 0.30R + 0.59G + 0.11B I = 0.60R - 0.28G - 0.32B Q = 0.21R - 0.52G + 0.31B Composite signal - all information is composed into one signal. To decode, modulation methods are needed to eliminate interference between the luminance and chrominance components. Introduction to Multimedia

92 Introduction to Multimedia
Digitization Refers to sampling the gray/color level in the picture at an M x N array of points. Once the points are sampled, they are quantized into pixels: the sampled value is mapped into an integer, and the quantization level depends on the number of bits used to represent the resulting integer, e.g. 8 bits per pixel or 24 bits per pixel. To create motion when digitizing video, digitize pictures in time and obtain a sequence of digital images per second to approximate analog motion video. Introduction to Multimedia

93 Introduction to Multimedia
Computer Video Format Video Digitizer A/D converter Important parameters resulting from a digitizer digital image resolution quantization frame rate E.g. Parallax X Video - camera takes the NTSC signal and the video board digitizes it. Resulting video has 640X480 pixels spatial resolution 24 bits per pixel resolution 20fps (lower image resolution - more fps) Output of digital video goes to raster displays with large video RAM memories. Color lookup table used for presentation of color Introduction to Multimedia

94 Digital Transmission Bandwidth
Bandwidth requirement for images: raw image transmission bandwidth = size of image = spatial resolution x pixel resolution; compressed image - depends on the compression scheme; symbolic image transmission bandwidth = size of the instructions and primitives carrying the graphics variables. Bandwidth requirement for video: uncompressed video = image size x frame rate; compressed video - depends on the compression scheme, e.g. uncompressed HDTV-quality video needs a data rate in the gigabit-per-second range, while compressed using MPEG it comes down to about 34 Mbps with some loss of quality. Introduction to Multimedia

95 Broadband Multimedia Communications
Multimedia Compression Techniques Introduction to Multimedia

96 Introduction to Multimedia
Coding Requirements Entropy Encoding Content Dependent Coding Run-length Coding Diatomic Coding Statistical Encoding Huffman Coding Arithmetic Coding Source Encoding Predictive Coding Differential Pulse Code Modulation Delta Modulation Adaptive Encoding Introduction to Multimedia

97 Introduction to Multimedia
Coding Requirements Storage Requirements Uncompressed audio: 8 kHz, 8-bit quantization implies 64 Kbits to store per second CD-quality audio: 44.1 kHz, 16-bit quantization implies storing 705.6 Kbits/sec PAL video format: 640x480 pixels, 24-bit quantization, 25 fps, implies storing 184,320,000 bits/sec = 23,040,000 bytes/sec Bandwidth Requirements uncompressed audio: 64 Kbps CD-quality audio: 705.6 Kbps PAL video format: 184,320,000 bits/sec COMPRESSION IS REQUIRED! Introduction to Multimedia
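A short sketch that reproduces the storage and bandwidth figures quoted above (the helper names are mine):

```python
def audio_rate_bps(sample_rate_hz, bits_per_sample, channels=1):
    """Uncompressed audio data rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

def video_rate_bps(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in bits per second."""
    return width * height * bits_per_pixel * fps

print(audio_rate_bps(8_000, 8))           # 64,000 bit/s  - telephone-quality audio
print(audio_rate_bps(44_100, 16))         # 705,600 bit/s - CD quality, one channel
print(video_rate_bps(640, 480, 24, 25))   # 184,320,000 bit/s = 23,040,000 bytes/s
```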

98 Coding Format Examples
JPEG for still images H.261/H.263 for video conferencing, music and speech (dialog mode applications) MPEG-1, MPEG-2, MPEG-4 for audio/video playback, VOD (retrieval mode applications) DVI for still and continuous video applications (two modes of compression) Presentation Level Video (PLV) - high quality compression, but very slow. Suitable for applications distributed on CD-ROMs Real-time Video (RTV) - lower quality compression, but fast. Used in video conferencing applications. Introduction to Multimedia

99 Introduction to Multimedia
Coding Requirements Dialog mode applications End-to-end delay (EED) should not exceed about 150 ms A face-to-face application needs an EED of 50 ms (including compression and decompression). Retrieval mode applications Fast-forward and rewind data retrieval with simultaneous display (e.g. fast search for information in a multimedia database). Random access to single images and audio frames; access time should be less than 0.5 sec Decompression of images, video and audio should not be linked to other data units - this allows random access and editing Introduction to Multimedia

100 Introduction to Multimedia
Coding Requirements Requirements for both dialog and retrieval mode applications Support for scalable video in different systems. Support for various audio and video rates. Synchronization of audio and video streams (lip synchronization) Economy of solutions Compression in software implies a cheaper but slower, lower-quality solution. Compression in hardware implies a more expensive but faster, higher-quality solution. Compatibility e.g. tutoring systems available on CD should run on different platforms. Introduction to Multimedia

101 Classification of Compression Techniques
Entropy Coding lossless encoding used regardless of media’s specific characteristics data taken as a simple digital sequence decompression process regenerates data completely e.g. run-length coding, Huffman coding, Arithmetic coding Source Coding lossy encoding takes into account the semantics of the data degree of compression depends on data content. E.g. content prediction technique - DPCM, delta modulation Hybrid Coding (used by most multimedia systems) combine entropy with source encoding E.g. JPEG, H.263, DVI (RTV & PLV), MPEG-1, MPEG-2, MPEG-4 Introduction to Multimedia
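As a concrete instance of entropy (lossless) coding, here is a minimal run-length coder: runs of identical symbols become (symbol, count) pairs and decompression regenerates the data exactly. The list-of-pairs representation is a simplification of my own; real run-length schemes pack runs more compactly:

```python
def rle_encode(data: str):
    """Replace each run of identical characters by a (character, run length) pair."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return [(ch, n) for ch, n in runs]

def rle_decode(runs) -> str:
    """Lossless inverse: expand every (character, run length) pair."""
    return "".join(ch * n for ch, n in runs)

text = "AAAABBBCCDAA"
encoded = rle_encode(text)
print(encoded)                         # [('A', 4), ('B', 3), ('C', 2), ('D', 1), ('A', 2)]
assert rle_decode(encoded) == text     # decompression regenerates the data completely
```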

102 Introduction to Multimedia
Steps in Compression Picture preparation analog-to-digital conversion generation of an appropriate digital representation image division into 8x8 blocks fix the number of bits per pixel Picture processing (compression algorithm) transformation from the time to the frequency domain, e.g. DCT motion vector computation for digital video Quantization mapping real numbers to integers (reduction in precision), e.g. µ-law encoding - 12 bits for real values, 8 bits for integer values Entropy coding compress a sequential digital stream without loss. Introduction to Multimedia
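For the picture-processing step, a self-contained sketch of the 8x8 two-dimensional DCT used by JPEG-style coders, implemented straight from the DCT-II definition with NumPy (production encoders use fast transforms; the flat-block example is mine):

```python
import numpy as np

N = 8  # JPEG-style block size

def dct_2d(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of an 8x8 block, computed from the definition."""
    assert block.shape == (N, N)
    n = np.arange(N)
    # Basis matrix C[k, x] = alpha(k) * cos((2x + 1) * k * pi / (2N))
    c = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    alpha = np.full(N, np.sqrt(2.0 / N))
    alpha[0] = np.sqrt(1.0 / N)
    basis = alpha[:, None] * c
    return basis @ block @ basis.T

# A flat block concentrates all its energy in the DC coefficient (top-left),
# which is what makes coarse quantization of the remaining coefficients cheap.
flat = np.full((N, N), 100.0)
coeffs = dct_2d(flat)
print(round(coeffs[0, 0], 1))      # 800.0 (DC term = 8 * mean value)
ac = coeffs.copy()
ac[0, 0] = 0.0
print(np.allclose(ac, 0.0))        # True: a flat block has no AC energy
```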

103 Introduction to Multimedia
Compression Steps. Diagram: uncompressed picture -> picture preparation -> picture processing -> quantization -> entropy coding -> compressed picture, with an adaptive feedback loop between picture processing and quantization. Introduction to Multimedia

104 Introduction to Multimedia
Types of compression Symmetric Compression Same time needed for decoding and encoding phases Used for dialog mode applications Asymmetric Compression Compression process is performed once and enough time is available, hence compression can take longer. Decompression is performed frequently and must be done fast. Used for retrieval mode applications Introduction to Multimedia

105 Broadband Multimedia Communications
JPEG Compression

106 Introduction to Multimedia
Requirements on JPEG implementations JPEG Image Preparation Blocks, Minimum Coded Units (MCU) JPEG Image Processing Discrete Cosine Transformation (DCT) JPEG Quantization Quantization Tables JPEG Entropy Encoding Run-length Coding/Huffman Encoding Introduction to Multimedia

107 Additional Requirements -JPEG
A JPEG implementation is independent of image size and applicable to any image and pixel aspect ratio. Image content may be of any complexity (with any statistical characteristics). JPEG should achieve a very good compression ratio and good image quality. Regarding the processing complexity of a software solution, JPEG should run on as many available platforms as possible. Sequential decoding (line by line) and progressive decoding (refinement of the whole image) should both be possible. Introduction to Multimedia

108 Variants of Image Compression
Four different modes Lossy Sequential DCT based mode Baseline process that must be supported by every JPEG implementation. Expanded Lossy DCT based mode enhancements to baseline process Lossless mode low compression ratio allows perfect reconstruction of original image Hierarchical mode accommodates images of different resolutions Introduction to Multimedia

109 Introduction to Multimedia
JPEG Processing Steps. Diagram of the four modes applied to an uncompressed image: Baseline sequential mode - image preparation (blocks, MCUs, 8 bits/pixel), FDCT transformation, quantization, entropy encoding (run-length, Huffman). Expanded lossy mode - image preparation (blocks, MCUs, 12 bits/pixel), FDCT, quantization, entropy encoding (run-length, Huffman, arithmetic). Lossless mode - image preparation (pixel based, 2-16 bits/pixel), prediction as the source coding step, entropy encoding. Hierarchical mode - layered coding that switches between the lossy DCT and the lossless technique. Each mode produces the compressed image. Introduction to Multimedia

110 Broadband Multimedia Communications
MPEG Compression Introduction to Multimedia

111 Introduction to Multimedia
General Information about MPEG MPEG/ Video Standard MPEG/ Audio Standard MPEG Systems Multiplexing of Video/Audio Data Streams Introduction to Multimedia

112 Introduction to Multimedia
General Information MPEG-1 achieves a compressed data rate of about 1.5 Mbps. This is the data rate of audio CDs and DATs (Digital Audio Tapes). MPEG explicitly considers the functionality of other standards, e.g. it uses JPEG. MPEG defines standard video coding, audio coding and system data streams with synchronization. The MPEG core technology includes many different patents. The MPEG committee sets the technical standards. Introduction to Multimedia

113 General Information (cont.)
An MPEG stream provides more information than a data stream compressed according to the JPEG standard. Aspect Ratio - 14 aspect ratios can be encoded: 1:1 corresponds to computer graphics, 4:3 corresponds to 702x575 pixels (TV format), 16:9 corresponds to 625/525 (HDTV format). Refresh Frequency - 8 frequencies can be encoded: 23.976, 24, 25, 29.97, 30, 50, 59.94 and 60 Hz. Other issues with frame rate: each frame must be built within a maximum of 41.7 ms (33.3 ms) to keep a display rate of 24 fps (30 fps). There is no need for, or possibility of, defining MCUs in MPEG, which implies a sequential non-interleaved order. For MPEG, there is no advantage to progressive display over sequential display.

114 Introduction to Multimedia
MPEG Overview MPEG exploits the temporal (i.e. frame-to-frame) redundancy present in all video sequences. Two categories: intra-frame and inter-frame encoding. DCT-based compression for the reduction of spatial redundancy (similar to JPEG). Block-based motion compensation for exploiting temporal redundancy: causal (predictive coding) - the current picture is modeled as a transformation of the picture at some previous time; non-causal (interpolative coding) - uses past and future references. Introduction to Multimedia

115 MPEG Image Preparation - Motion Representation
Predictive and interpolative coding give good compression but require storage and additional motion information; they often make sense for parts of an image rather than the whole image. Each image is divided into areas called macro-blocks (motion compensation units). Each macro-block covers 16x16 pixels of luminance and 8x8 pixels for each of the chrominance components. The choice of macro-block size is a tradeoff between the gain from motion compensation and the cost of motion estimation. Macro-blocks are useful for compression based on motion estimation. Introduction to Multimedia
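A minimal sketch of block-based motion estimation as described above: exhaustive-search matching of one 16x16 luminance macro-block against a +/- 8 pixel window in the previous frame, using a sum-of-absolute-differences cost. The window size, frame size and synthetic test data are assumptions of the example:

```python
import numpy as np

def motion_vector(prev: np.ndarray, cur: np.ndarray, top: int, left: int,
                  block: int = 16, search: int = 8):
    """Exhaustive block matching: find the (dy, dx) that minimises the sum of
    absolute differences between the current macro-block and a shifted block
    in the previous frame, within a +/- search window."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best = (0, 0)
    best_cost = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            cost = np.abs(target - cand).sum()
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, int(best_cost)

# Synthetic example: the previous frame shifted right by 3 pixels becomes the current frame.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(prev, shift=3, axis=1)
print(motion_vector(prev, cur, top=16, left=16))   # ((0, -3), 0): content moved 3 pixels right
```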

116 Introduction to Multimedia
MPEG Video Processing MPEG stream includes 4 types of image coding for video processing I-frames - Intra-coded frames - access points for random access, yields moderate compression P-frames - Predictive-coded frames - encoded with reference to a previous I or P frame. B-frames - Bi-directionally predictive coded frames - encoded using previous/next I and P frame, maximum compression D-frames - DC coded frames Motivation for types of frames Demand for efficient coding scheme and fast random access Goal to achieve high compression rate - temporal redundancies of subsequent pictures (i.e. interframes) must be exploited Introduction to Multimedia

117 MPEG Audio Encoding Steps
Diagram of the MPEG audio encoding steps: a filter bank transforms the signal from the time to the frequency domain (32 subbands); a psychoacoustic model drives the bit/noise allocation for quantization (if the tolerable noise level is high, rough quantization is applied; if it is low, finer quantization is applied); an entropy coder (Huffman coding) and a multiplexer then produce the compressed data. Introduction to Multimedia

118 MPEG/System Data Stream
The video stream is interleaved with audio. The video stream consists of 6 layers: Sequence layer - video parameters (width, height, aspect ratio, picture rate), bitstream parameters (bit rate, buffer size), quantization tables for intra-coded and inter-coded blocks. Group of pictures layer - time code (hours, minutes, seconds). Picture layer - type (I, P, B), buffer parameter (decoder's buffer size), encode parameter (information about motion vectors). Slice layer - vertical position (what line does this slice start on?), Qscale (how is the quantization table scaled in this slice?). Macro-block layer. Block layer. Introduction to Multimedia

