CSI-447: Multimedia Systems Chapter 4: Reading Assignment.

Reading Assignment
– Media Coding and Content Processing: Chapter 3 (Sections 3.1 … 3.4)
– Multimedia Signals and Systems: Chapter 2
– Fundamentals of Multimedia: Chapter 6, Section 6.1
Self-reading material: Section 2.2 of Multimedia Signals and Systems (Human Auditory System) will be on Quiz 2.

Audio
Audio is a wave resulting from an air-pressure disturbance that reaches our eardrum, generating the sound we hear.
– Humans can hear frequencies in the range 20–20,000 Hz.
– Acoustics is the branch of physics that studies sound.

Characteristics of Audio
Audio has normal wave properties:
– Reflection
– Refraction
– Diffraction
A sound wave has several different properties:
– Amplitude (loudness/intensity)
– Frequency (pitch)
– Envelope (waveform)

Audio Amplitude
Audio amplitude is often expressed in decibels (dB). Sound pressure levels (loudness or volume) are measured on a logarithmic scale (the decibel, dB), which describes a ratio.
– Suppose we have two loudspeakers, the first playing a sound with power P1 and the second playing a louder version of the same sound with power P2, with everything else (distance, frequency) kept the same.
– The difference in level between the two is defined to be 10 log10(P2/P1) dB.

Audio Amplitude
In microphones, audio is captured as an analog signal (continuous in amplitude and time) that responds proportionally to the sound pressure p.
– Sound pressure is expressed in dynes/cm².
The power in a sound wave, all else being equal, goes as the square of the pressure, so the difference in sound pressure level between two sounds with pressures p1 and p2 is 20 log10(p2/p1) dB.
The "acoustic amplitude" of a sound is measured in reference to a standard pressure p1 = pref (in dynes/cm²).
– The human ear is insensitive to sound pressure levels below pref.
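As a numeric check of the two decibel formulas above, here is a minimal Python sketch (the function names are mine, not from the slides):

```python
import math

def power_db(p2, p1):
    """Level difference in dB between two sounds of power p2 and p1."""
    return 10 * math.log10(p2 / p1)

def pressure_db(p2, p1):
    """Level difference in dB between two sounds of pressure p2 and p1.
    Power goes as pressure squared, hence the factor 20 instead of 10."""
    return 20 * math.log10(p2 / p1)

print(power_db(2, 1))     # doubling the power adds ~3 dB
print(pressure_db(2, 1))  # doubling the pressure adds ~6 dB
```

Note the classic rules of thumb that fall out: doubling power is +3 dB, while doubling pressure is +6 dB, because power scales with pressure squared.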

Audio Amplitude
Intensity: typical examples
– 0 dB: Threshold of hearing
– 20 dB: Rustling of paper
– 25 dB: Recording studio (ambient level)
– 40 dB: Residence (ambient level)
– 50 dB: Office (ambient level)
– … dB: Typical conversation
– 80 dB: Heavy road traffic
– 90 dB: Home audio listening level
– … dB: Threshold of pain
– 140 dB: Rock singer screaming into microphone

Audio Frequency
Audio frequency is the number of high-to-low pressure cycles that occur per second.
– In music, frequency is referred to as pitch.
Different living organisms can hear sounds up to different maximum frequencies:
– Dogs: up to 50 kHz
– Cats: up to 60 kHz
– Bats: up to 120 kHz
– Dolphins: up to 160 kHz
– Humans: up to 20 kHz, called the audible band. The exact audible band differs from person to person and deteriorates with age.

Audio Frequency
The frequency range of sound can be divided into:
– Infrasound: 0 Hz – 20 Hz
– Audible sound: 20 Hz – 20 kHz
– Ultrasound: 20 kHz – 1 GHz
– Hypersound: 1 GHz – 10 GHz
Sound waves propagate at a speed of around 344 m/s in humid air at room temperature (20 °C).
– Hence, audible wavelengths typically vary from 17 m (corresponding to 20 Hz) down to 1.7 cm (corresponding to 20 kHz).
Sound can be divided into periodic (e.g. whistling wind, bird songs, musical tones) and nonperiodic (e.g. speech, sneezes, rushing water).
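The wavelength figures follow directly from λ = v/f; a small sketch using the slide's 344 m/s speed of sound:

```python
SPEED_OF_SOUND = 344.0  # m/s in humid air at 20 °C (value from the slide)

def wavelength(freq_hz):
    """Wavelength in metres: lambda = v / f."""
    return SPEED_OF_SOUND / freq_hz

print(wavelength(20))      # ~17.2 m at the low end of the audible band
print(wavelength(20_000))  # ~0.0172 m = 1.72 cm at the high end
```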

Audio Frequency
Most sounds are combinations of different frequencies and wave shapes. Hence, the spectrum of a typical audio signal contains:
– one or more fundamental frequencies,
– their harmonics,
– and possibly a few cross-modulation products.
The harmonics and their amplitudes determine the tone quality, or timbre.

Audio Envelope
When a sound is generated, it does not last forever. The rise and fall of the intensity of a sound is known as its envelope. A typical envelope consists of four sections: attack, decay, sustain, and release.

Audio Envelope
– Attack: the intensity of a note increases from silence to a high level.
– Decay: the intensity decreases to a middle level.
– Sustain: the middle level is sustained for a short period of time.
– Release: the intensity drops from the sustain level to zero.

Audio Envelope
Different instruments have different envelope shapes:
– Violin notes have slower attacks and a longer sustain period.
– Guitar notes have quick attacks and a slower release.
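The four-stage envelope can be sketched as a piecewise-linear function. This is an illustrative toy only; the segment lengths and the 0.6 sustain level are arbitrary choices, not values from the slides:

```python
def adsr_envelope(n_attack, n_decay, n_sustain, n_release, sustain_level=0.6):
    """Piecewise-linear ADSR envelope as a list of amplitudes in [0, 1].
    Attack: 0 -> 1; decay: 1 -> sustain_level; sustain: flat; release: -> 0."""
    env = []
    env += [i / n_attack for i in range(n_attack)]                           # attack
    env += [1 - (1 - sustain_level) * i / n_decay for i in range(n_decay)]   # decay
    env += [sustain_level] * n_sustain                                       # sustain
    env += [sustain_level * (1 - (i + 1) / n_release)                        # release
            for i in range(n_release)]
    return env

env = adsr_envelope(10, 10, 20, 10)
print(max(env), env[-1])  # peaks at 1.0, ends at 0.0
```

Multiplying such an envelope sample-by-sample with a raw oscillator waveform is how a synthesizer shapes a note; a guitar-like sound would use a short attack and long release, a violin-like sound the opposite.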

Audio Signal Representation
Waveform representation:
– Focuses on the exact representation of the produced audio signal.
Parametric form representation:
– Focuses on modeling the signal-generation process.
– Two major forms: music synthesis (the MIDI standard) and speech synthesis.

Waveform Representation
Audio generation and playback chain:
Audio Source → Audio Capture → Sampling & Digitization → Storage or Transmission → Digital-to-Analog → Playback (speaker) → Human Ear (receiver)

Digitization
To get audio (or video, for that matter) into a computer, we must digitize it, i.e. convert it into a stream of numbers. This is achieved through sampling, quantization, and coding.

Example Signal
[Figure: an example analog signal, amplitude vs. time]

Sampling
Sampling: the process of converting continuous time into discrete values by reading the signal only at discrete instants.

Sampling Process
1. The time axis is divided into fixed intervals.
2. A reading of the instantaneous value of the analog signal is taken at the beginning of each time interval (the interval is determined by a clock pulse).
3. The frequency of the clock is called the sampling rate (or sampling frequency).
The sampled value is held constant for the next time interval (sample-and-hold circuit).

Sampling Example
[Figure: the example signal after sampling, amplitude vs. time]

Quantization
The process of converting continuous sample values into discrete values.
– The size of a quantization interval is called the quantization step.
– How many values can 4-bit quantization represent? 8-bit? 16-bit? (2^4 = 16, 2^8 = 256, 2^16 = 65,536.)
– The higher the quantization resolution, the better the resulting sound quality.
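The level counts, and a minimal uniform quantizer, can be sketched as follows. The helper names and the [-1, 1] input range are my assumptions, not part of the slides:

```python
def quantization_levels(bits):
    """Number of distinct values a b-bit quantizer can represent."""
    return 2 ** bits

def quantize(x, bits):
    """Uniformly quantize x in [-1.0, 1.0] to a b-bit integer level index."""
    levels = quantization_levels(bits)
    step = 2.0 / levels                # quantization step over the [-1, 1] range
    index = int((x + 1.0) / step)
    return min(index, levels - 1)      # clamp the top edge (x = 1.0)

print([quantization_levels(b) for b in (4, 8, 16)])  # [16, 256, 65536]
print(quantize(0.0, 8))  # mid-range input maps to the middle level, 128
```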

Quantization Example
[Figure: the sampled signal after quantization, amplitude vs. time]

Coding
The process of representing quantized values digitally.

Analog to Digital Conversion
[Figure: the complete analog-to-digital conversion of the example signal, amplitude vs. time]

MIDI Interface
Unlike other types of sound, musical sound can be generated (synthesized). For this purpose the Musical Instrument Digital Interface (MIDI) standard was developed.
– The standard emerged in its final form in August 1982.
– It is a music description language in binary form: a given piece of music is represented by a sequence of numbers that specify how the musical instruments are to be played at different time instants.

MIDI Components
A MIDI studio consists of:
– Controller: a musical performance device that generates a MIDI signal when played. (MIDI signal: a sequence of numbers representing notes.)
– Synthesizer: a piano-style keyboard musical instrument that simulates the sound of real musical instruments.
– Sequencer: a device or a computer program that records a MIDI signal.
– Sound module: a device that produces pre-recorded samples when triggered by a MIDI controller or sequencer.

MIDI Components

MIDI Data
MIDI file organization:
– A MIDI file starts with a header chunk, followed by track chunks (Track 1, Track 2, …).
– Each track consists of a track header and a track chunk.
– A track chunk holds the actual music data: status bytes followed by data bytes.
The music data describes:
– start/end of a score
– intensity
– instrument
– basis frequency
– …

MIDI Data
The MIDI standard specifies 16 channels.
– A MIDI device is mapped onto one channel, e.g. a MIDI guitar controller, a MIDI wind machine, a drum machine.
– 128 instruments are identified by the MIDI standard, for example:
  Electric grand piano (2)
  Telephone ring (124)
  Helicopter (125)
  Applause (126)
  Gunshot (127)
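The status-byte/data-byte layout can be illustrated with a Note-On channel message. This follows the standard MIDI 1.0 wire format (status nibble 0x9 plus a 4-bit channel number, then two 7-bit data bytes), though the helper name is mine:

```python
NOTE_ON = 0x90  # status nibble for "note on" in the MIDI wire protocol

def note_on(channel, note, velocity):
    """Build the 3-byte MIDI Note-On message for one of the 16 channels:
    one status byte (0x90 | channel) followed by two data bytes."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([NOTE_ON | channel, note, velocity])

msg = note_on(0, 60, 100)  # middle C on channel 0, velocity 100
print(msg.hex())           # 903c64
```

The low nibble of the status byte encodes the channel, which is why exactly 16 channels exist; the two data bytes stay below 0x80 so a receiver can always tell status bytes from data bytes.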

MIDI Instruments
An instrument can play a single score (e.g. flute) or multiple scores concurrently (e.g. organ).
The maximum number of scores that can be played concurrently is an important property of a synthesizer (… scores per channel).

3D Sound Projection
Experimentation has shown that two-channel (stereo) sound produces the best hearing effect for most people.

Spatial Sound
Direct sound path: the shortest path between the sound source and the auditor.
– It carries the first sound waves to the auditor's head.
All other sound paths are reflected.
All sound paths leading to the human ear are influenced by the auditor's individual head-related transfer function (HRTF).
– The HRTF is a function of the path's direction (horizontal and vertical angles) to the auditor.
[Figure: pulse response in a closed room]

Question What determines the quality of the digitization process?

Basic Types of a Digital Signal
– Unit impulse function δ[n]
– Unit step function u[n]
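Both signals have simple closed forms: δ[n] is 1 at n = 0 and 0 elsewhere, while u[n] is 1 for n ≥ 0 and 0 for n < 0. A minimal sketch:

```python
def unit_impulse(n):
    """delta[n]: 1 at n == 0, 0 everywhere else."""
    return 1 if n == 0 else 0

def unit_step(n):
    """u[n]: 1 for n >= 0, 0 for n < 0."""
    return 1 if n >= 0 else 0

print([unit_impulse(n) for n in range(-2, 3)])  # [0, 0, 1, 0, 0]
print([unit_step(n) for n in range(-2, 3)])     # [0, 0, 1, 1, 1]
```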

Sinc Function
sinc(x) = sin(πx)/(πx) (the normalized form, equal to 1 at x = 0 and 0 at every other integer).

To plot the sinc function in Matlab:
x = linspace(-5,5);
y = sinc(x);
plot(x,y);
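For readers without Matlab, a NumPy equivalent of the snippet above (note that `numpy.sinc`, like Matlab's `sinc`, computes the normalized form sin(πx)/(πx); pass `x` and `y` to `matplotlib.pyplot.plot` to reproduce the figure):

```python
import numpy as np

x = np.linspace(-5, 5, 101)  # 101 evenly spaced points on [-5, 5]
y = np.sinc(x)               # normalized sinc: sin(pi*x) / (pi*x)

print(y[50])                 # x[50] = 0, and sinc(0) = 1.0
print(abs(y[60]) < 1e-12)    # x[60] = 1: sinc vanishes at nonzero integers
```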

Determining the Sampling Rate
Suppose we are sampling a sine wave. How often do we need to sample it to figure out its frequency?

Sampling Theorem
If the highest frequency contained in an analog signal is B, and the signal is sampled at a rate F > 2B, then the signal can be exactly recovered from its sample values. F = 2B is called the Nyquist rate.

Quantization Levels
The number of quantization levels determines the amplitude fidelity of the digital signal relative to the original analog signal.
– Quantization error (noise) is the maximum difference between the quantized sample values and the analog signal values.
– The quality of the digital signal relative to the original is measured by the signal-to-noise ratio (SNR): SNR = 20 log10(S/N), where S is the maximum signal amplitude and N is the quantization noise (= the quantization step).
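Plugging numbers into the SNR formula shows the well-known "about 6 dB per bit" rule; the 16-bit example below is my own illustration:

```python
import math

def snr_db(s, n):
    """SNR = 20 log10(S/N): S = max signal amplitude, N = quantization step."""
    return 20 * math.log10(s / n)

# 16-bit audio: signed amplitudes up to 2^15, quantization step 1
print(round(snr_db(2 ** 15, 1), 1))  # 90.3 dB, i.e. ~6.02 dB per bit
```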