Presentation transcript: "What is Sound?"


2 What is Sound?
Sound is a wave phenomenon like light, but it is macroscopic and involves molecules of air being compressed and expanded under the action of some physical device.
a) For example, a speaker in an audio system vibrates back and forth and produces a longitudinal pressure wave that we perceive as sound.
b) Since sound is a pressure wave, it takes on continuous values, as opposed to digitized ones.

3 c) Even though such pressure waves are longitudinal, they still have ordinary wave properties and behaviors, such as reflection (bouncing), refraction (change of angle when entering a medium with a different density) and diffraction (bending around an obstacle).
d) If we wish to use a digital version of sound waves, we must form digitized representations of audio information.

4 The Nature of Sound
Sounds are produced by the conversion of energy into vibrations in the air or some other elastic medium, which are detected by the ear and converted into nerve impulses that we experience as sound.
A sound's frequency spectrum is a description of the relative amplitudes of its frequency components.
The human ear can detect sound frequencies roughly in the range 20 Hz to 20 kHz, though the ability to hear the higher frequencies is lost as people age.
A sound's waveform shows how its amplitude varies over time.

5 Sound Waves


7 The Nature of Sound
Perception of sound has a psychological dimension.
CD audio is sampled at 44.1 kHz. Sub-multiples of this value may be used for lower-quality digital audio:
22.05 kHz is commonly used for audio destined for delivery over the Internet.
11.025 kHz is sometimes used for speech.
Some professional and semi-professional recording devices use sample rates that are multiples of 48 kHz.

8 Characteristics of Audio
Audio has normal wave properties: reflection, refraction and diffraction.
A sound wave has several different properties:
Amplitude (loudness/intensity)
Frequency (pitch)
Envelope (waveform)

9 Audio Amplitude
Audio amplitude is often expressed in decibels (dB).
Sound pressure levels (loudness or volume) are measured on a logarithmic scale (the decibel, dB), which describes a ratio.
Suppose we have two loudspeakers, the first playing a sound with power P1 and the second playing a louder version of the same sound with power P2, with everything else (distance, frequency) kept the same. The difference in level between the two is defined to be 10 log10(P2/P1) dB.

10 Audio Amplitude
In microphones, audio is captured as an analog signal (continuous in amplitude and time) that responds proportionally to the sound pressure p.
The power in a sound wave, all else being equal, goes as the square of the pressure. Pressure is expressed in dynes/cm².
The difference in sound pressure level between two sounds with pressures p1 and p2 is therefore 20 log10(p2/p1) dB.
The acoustic amplitude of a sound is measured in reference to p1 = pref = 0.0002 dynes/cm². The human ear is insensitive to sound pressure levels below pref.
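As a quick sanity check of the two formulas above, here is a minimal Python sketch (the function names are ours, not from the slides) that computes level differences from power, from pressure, and sound pressure level relative to pref:

```python
import math

P_REF = 0.0002  # reference sound pressure in dynes/cm^2 (threshold of hearing)

def power_ratio_db(power1, power2):
    """Level difference in dB between two sounds of power power1 and power2."""
    return 10 * math.log10(power2 / power1)

def pressure_ratio_db(p1, p2):
    """Level difference in dB between pressures p1 and p2
    (power goes as the square of pressure, hence the factor 20)."""
    return 20 * math.log10(p2 / p1)

def spl_db(p):
    """Sound pressure level of pressure p relative to the hearing threshold P_REF."""
    return 20 * math.log10(p / P_REF)

# Doubling the power adds about 3 dB; doubling the pressure adds about 6 dB.
print(power_ratio_db(1.0, 2.0))     # ~3.01 dB
print(pressure_ratio_db(1.0, 2.0))  # ~6.02 dB
print(spl_db(P_REF))                # 0 dB: the threshold of hearing
```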

11 Audio Amplitude
Intensity     Typical example
0 dB          Threshold of hearing
20 dB         Rustling of paper
25 dB         Recording studio (ambient level)
40 dB         Residence (ambient level)
50 dB         Office (ambient level)
60-70 dB      Typical conversation
80 dB         Heavy road traffic
90 dB         Home audio listening level
120-130 dB    Threshold of pain
140 dB        Rock singer screaming into microphone

12 Audio Frequency
Audio frequency is the number of high-to-low pressure cycles that occur per second. In music, frequency is referred to as pitch.
Different living organisms have different abilities to hear high-frequency sounds:
Dogs: up to 50 kHz
Cats: up to 60 kHz
Bats: up to 120 kHz
Dolphins: up to 160 kHz
Humans: roughly 20 Hz to 20 kHz, called the audible band. The exact audible band differs from one person to another and deteriorates with age.

13 Audio Frequency
The frequency range of sounds can be divided into:
Infrasound      0 Hz - 20 Hz
Audible sound   20 Hz - 20 kHz
Ultrasound      20 kHz - 1 GHz
Hypersound      1 GHz - 10 GHz
Sound waves propagate at a speed of around 344 m/s in humid air at room temperature (20 °C). Hence, audio wavelengths typically vary from 17 m (corresponding to 20 Hz) to 1.7 cm (corresponding to 20 kHz).
Sound can be divided into periodic sounds (e.g. whistling wind, bird songs, music) and nonperiodic sounds (e.g. speech, sneezes and rushing water).
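The wavelengths quoted above follow directly from wavelength = speed / frequency. A small Python sketch (constant and function names are ours):

```python
SPEED_OF_SOUND = 344.0  # m/s in humid air at about 20 °C, as assumed above

def wavelength(frequency_hz):
    """Wavelength in metres of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

print(wavelength(20))      # 17.2 m   -> low end of the audible band
print(wavelength(20_000))  # 0.0172 m (about 1.7 cm) -> high end of the audible band
```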

14 Audio Frequency
Most sounds are combinations of different frequencies and wave shapes. Hence, the spectrum of a typical audio signal contains one or more fundamental frequencies, their harmonics, and possibly a few cross-modulation products.
The harmonics and their amplitudes determine the tone quality, or timbre.
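To make this concrete, the sketch below (our own illustration, not from the slides) builds a tone from a 440 Hz fundamental plus two weaker harmonics and inspects its spectrum with NumPy's FFT; the peaks appear at the fundamental and its harmonics:

```python
import numpy as np

sample_rate = 44100                        # CD-quality sampling rate, as mentioned earlier
t = np.arange(sample_rate) / sample_rate   # one second of time points

# Fundamental at 440 Hz plus two harmonics with smaller amplitudes;
# the relative harmonic amplitudes shape the timbre of the tone.
signal = (1.00 * np.sin(2 * np.pi * 440 * t)
          + 0.50 * np.sin(2 * np.pi * 880 * t)
          + 0.25 * np.sin(2 * np.pi * 1320 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# Frequencies of the three largest spectral peaks: 440, 880 and 1320 Hz.
print(np.sort(freqs[np.argsort(spectrum)[-3:]]))
```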

15 Audio Envelope
When sound is generated, it does not last forever. The rise and fall of the intensity of the sound is known as the envelope.
A typical envelope consists of four sections: attack, decay, sustain and release.
Attack: the intensity of a note increases from silence to a high level.
Decay: the intensity decreases to a middle level.
Sustain: the middle level is sustained for a short period of time.
Release: the intensity drops from the sustain level to zero.
Different instruments have different envelope shapes:
Violin notes have slower attacks but a longer sustain period.
Guitar notes have quick attacks and a slower release.
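As an illustration (not part of the original slides), here is a minimal piecewise-linear ADSR envelope generator in Python; the segment durations and levels are arbitrary example values:

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain, release, sample_rate=44100):
    """Piecewise-linear ADSR envelope; durations in seconds, levels in 0..1."""
    a = np.linspace(0.0, 1.0, int(attack * sample_rate), endpoint=False)           # attack: silence -> peak
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate), endpoint=False)  # decay: peak -> sustain level
    s = np.full(int(sustain * sample_rate), sustain_level)                         # sustain: hold the middle level
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))                # release: sustain level -> silence
    return np.concatenate([a, d, s, r])

# A guitar-like shape: very quick attack, short decay, modest sustain, slow release.
env = adsr_envelope(attack=0.01, decay=0.05, sustain_level=0.6, sustain=0.2, release=0.5)
print(len(env) / 44100)  # total duration in seconds (about 0.76 s)
```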

16 Audio Signal Representation
Waveform representation: focuses on the exact representation of the produced audio signal.
Parametric representation: focuses on modeling the signal generation process. Two major forms:
Music synthesis (the MIDI standard)
Speech synthesis

17 Waveform Representation
Audio generation and playback pipeline: Audio Source → Audio Capture → Sampling & Digitization → Storage or Transmission → Receiver → Digital-to-Analog Conversion → Playback (speaker) → Human Ear

18 Digitization
To get audio (or video, for that matter) into a computer, we must digitize it, that is, convert it into a stream of numbers. This is achieved through sampling, quantization, and coding.

19 Sampling
Sampling: the process of converting continuous time into discrete values.
Sampling process:
1. The time axis is divided into fixed intervals.
2. A reading of the instantaneous value of the analog signal is taken at the beginning of each time interval (the interval is determined by a clock pulse).
3. The frequency of the clock is called the sampling rate or sampling frequency.
The sampled value is held constant for the next time interval (sample-and-hold circuit).
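A minimal sketch of the idea (our own illustration, with our own names): read the value of a continuous signal, here a 440 Hz sine, at the start of each clock interval of a chosen sampling rate:

```python
import math

def sample_signal(signal, duration, sampling_rate):
    """Read the instantaneous value of `signal` (a function of time in seconds)
    once per clock interval of length 1/sampling_rate."""
    n_samples = int(duration * sampling_rate)
    return [signal(n / sampling_rate) for n in range(n_samples)]

def sine(t):
    """A 440 Hz sine treated as the 'continuous' analog signal."""
    return math.sin(2 * math.pi * 440 * t)

samples = sample_signal(sine, duration=0.01, sampling_rate=8000)  # 10 ms at 8 kHz
print(len(samples))  # 80 samples
```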

20 Quantization
Quantization: the process of converting continuous sample values into discrete values.
The size of a quantization interval is called the quantization step.
How many values can a 4-bit quantization represent? 8-bit? 16-bit? (See the sketch below.)
The higher the quantization resolution, the better the resulting sound quality.
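A minimal sketch (our own illustration) of uniform quantization of samples in the range -1..1, which also answers the question above: a B-bit quantizer represents 2^B discrete values.

```python
def quantize(samples, bits):
    """Uniformly quantize samples in [-1.0, 1.0] to 2**bits discrete levels,
    returning integer level indices in [0, 2**bits - 1]."""
    levels = 2 ** bits
    step = 2.0 / levels  # quantization step for the [-1, 1] range
    return [min(int((s + 1.0) / step), levels - 1) for s in samples]

for bits in (4, 8, 16):
    print(bits, "bits ->", 2 ** bits, "levels")  # 16, 256, 65536

print(quantize([-1.0, -0.5, 0.0, 0.5, 0.999], bits=4))  # [0, 4, 8, 12, 15]
```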

21 Coding
Coding: the process of representing quantized values digitally. Together, sampling, quantization and coding make up analog-to-digital conversion.

22 MIDI: Musical Instrument Digital Interface
MIDI provides a way of representing music as instructions describing how to produce notes, instead of as a record of the actual sounds.
MIDI provides a standard protocol and hardware interface for communicating between electronic instruments, such as synthesizers, samplers and drum machines, allowing instruments to be controlled by hardware or software sequencers.

23 MIDI Components
A MIDI studio consists of:
Controller: a musical performance device that generates a MIDI signal when played.
MIDI signal: a sequence of numbers representing certain notes.
Synthesizer: a piano-style keyboard musical instrument that simulates the sound of real musical instruments.
Sequencer: a device or computer program that records a MIDI signal.
Sound module: a device that produces pre-recorded samples when triggered by a MIDI controller or sequencer.

24 MIDI Components

25 MIDI Data
MIDI data describes:
Start/end of a note
Intensity
Instrument
Base frequency…
MIDI file organization: a MIDI file consists of a header chunk followed by track chunks (Track 1, Track 2, …); each track chunk has a track header followed by the actual music data, a stream of status bytes, each followed by its data bytes.

26 MIDI Data
The MIDI standard specifies 16 channels.
A MIDI device is mapped onto one channel, e.g. a MIDI guitar controller, a MIDI wind machine, or a drum machine.
128 instruments are identified by the MIDI standard, for example:
Electric grand piano (2)
Telephone ring (124)
Helicopter (125)
Applause (126)
Gunshot (127)
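For a concrete view of what such data looks like on the wire, here is a small sketch (our own illustration of the standard channel-message layout: a status byte whose low nibble is the channel number, followed by data bytes):

```python
def note_on(channel, note, velocity):
    """Note On channel message: status byte 0x90 | channel, then note number and velocity (0-127)."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def program_change(channel, program):
    """Program Change message: status byte 0xC0 | channel, then the instrument/program number (0-127)."""
    return bytes([0xC0 | channel, program & 0x7F])

# Select instrument 2 (electric grand piano, as listed above) on channel 0,
# then play middle C (note 60) at velocity 100.
print(program_change(0, 2).hex())  # c002
print(note_on(0, 60, 100).hex())   # 903c64
```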

27 MIDI Instruments
A MIDI instrument can play a single note at a time (e.g. a flute) or multiple notes at once (e.g. an organ).
The maximum number of notes that can be played concurrently is an important property of a synthesizer: 3 to 16 notes per channel.

