Sound Synthesis Part I: Introduction & Fundamentals Nicolas Pugeault


2 Sound Synthesis Part I: Introduction & Fundamentals Nicolas Pugeault

3 Introduction Instruments can be built in a variety of ways: think of guitars, pianos, organs, etc. Synthesisers instead use electronic devices to create sounds. They can either –Recreate an existing timbre –… or produce something different.

4 Introduction Producing a sound by sending an electrical signal to a speaker is trivial. The question is which properties of the signal are relevant and desirable, so that the resulting sound is as intended (e.g., similar to a real instrument).

5 Applications –Musical instruments –Computer games –Sound effects for films –Multimedia and computer system sounds –Mobile phones –Speech synthesis –Toys –Effects

6 Sound Synthesis Plan –Synthesis I: Fundamentals –Synthesis II: Additive –Synthesis III: Filtering and distortion –Synthesis IV: Other approaches –Post-processing, pitch correction (Auto-Tune) –Sound perception

7 Sound Synthesis Part I: Fundamentals Nicolas Pugeault

8 Lecture Plan Introduction to sound synthesis Perception of sound –Loudness –Pitch –Timbre Sound synthesis – Fundamentals Summary

9 Sound (cont’d) Sound is a pattern of compression and rarefaction of the air –We record it using microphones –We perceive it with our ears –We generate it by speaking or using speakers Energy per m² decreases with the square of the distance from the source...

10 Sound is a waveform Sound is a waveform that can be reflected when it hits a non-transmissive surface –If the surface is flat, the reflection is coherent –Otherwise, it depends on frequency and surface texture Example: a soundproof studio wall, designed to absorb high frequencies

11 Attributes of sound (diagram) Sound has three perceptual attributes –Loudness –Pitch –Timbre together with temporal and spatial characteristics

12 The simplest sound: the pure tone Sinusoidal wave (440 Hz) Period p = 1/f0
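As an illustration, a minimal NumPy sketch of generating this pure tone; the sample rate and one-second duration are arbitrary choices, not values from the slide:

```python
import numpy as np

# Generate one second of a 440 Hz pure tone (sinusoidal wave).
# Sample rate and duration are illustrative choices.
sample_rate = 44100              # samples per second
f0 = 440.0                       # frequency in Hz
period = 1.0 / f0                # period p = 1/f0, as noted on the slide

t = np.arange(0, 1.0, 1.0 / sample_rate)   # time axis, 1 s long
tone = np.sin(2 * np.pi * f0 * t)          # the pure tone itself
```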

13 Reminder: Fourier Transform Idea: “All functions can be decomposed into a (possibly infinite) sum of sinusoidal functions of varying frequencies.” Transforms a function from the time domain to the frequency domain. E.g., shown right for a square wave: First component First two components First three components First four components
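To make the partial-sum idea concrete, a small NumPy sketch (my own illustration; function name and frequency are arbitrary) that builds square-wave approximations by summing the first few sinusoidal components:

```python
import numpy as np

def square_wave_partial_sum(t, f0, n_components):
    """Approximate a square wave by summing its first n sinusoidal components.

    The Fourier series of a square wave contains only odd harmonics:
    x(t) = (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k
    """
    x = np.zeros_like(t)
    for i in range(n_components):
        k = 2 * i + 1                          # odd harmonic numbers 1, 3, 5, ...
        x += np.sin(2 * np.pi * k * f0 * t) / k
    return (4.0 / np.pi) * x

t = np.arange(0, 0.01, 1.0 / 44100)            # 10 ms of signal
first_component = square_wave_partial_sum(t, 440.0, 1)
first_four      = square_wave_partial_sum(t, 440.0, 4)
```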

14 Loudness Often measured in decibels (dB): R = 20·log10(A/A0), where A0 is a reference amplitude, often taken as the threshold of audibility. Perception of loudness is logarithmic. → A change of 6 dB means a doubling of amplitude! Range of audibility: ~120 dB (an amplitude ratio of 1 to 1,000,000)
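The decibel formula is easy to check numerically; a small sketch with the reference amplitude A0 set to 1 (an illustrative choice):

```python
import numpy as np

def amplitude_to_db(a, a0=1.0):
    """R = 20 * log10(A / A0): an amplitude ratio expressed in decibels."""
    return 20.0 * np.log10(a / a0)

print(amplitude_to_db(2.0))    # ~6.02 dB: doubling the amplitude adds about 6 dB
print(amplitude_to_db(1e6))    # 120.0 dB: the ~1-to-1,000,000 range of audibility
```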

15 Perception of Loudness Correlated with amplitude. Here: –constant frequency (f0=440Hz) –Varying amplitude (A = 0.2, 0.5 or 1.0)

16 Loudness (cont’d) However –Perception of loudness is frequency-dependent. –Sounds X and Y have the same amplitude; which is louder, X or Y? X (100 Hz, A=1) Y (3,500 Hz, A=1) Considering only amplitudes, sound Y should be as loud as sound X. However, Y is louder than X. Why?

17 Loudness (cont’d) Fletcher & Munson (1933) –Subjects listened to pure tones at various frequencies, with amplitude increased in 10 dB steps Robinson & Dadson (1956) –More accurate –Basis for the standard ISO 226 Perceived loudness (phons) –1 phon = 1 dB at 1 kHz British Standard BS ISO 226 (2003) (source: Wikipedia)

18 Loudness (cont’d) There is a difference between sensory loudness and perceptual loudness! (Emmet, 1992) When designing a synthesiser with a large dynamic range, changing only the amplitude is a poor choice since the signal may clip. Solution: use spectral variation: –A broad spectrum will likely result in a loud sound. –A narrow spectrum will be perceived as quiet.

19 Perception of Pitch Frequency correlated with pitch Here: 3 examples of pure tones. What if sounds are more complex? Range: 20 Hz – 20 kHz Best acuity: 200 Hz – 2 kHz

20 Pitch: Fundamental & Harmonics Real sounds are not pure – they are more complex! The ear assumes that multiple frequency components form one sound. If they are harmonically related, they fuse into a single pitch at the fundamental frequency (f0, the largest common divisor). Each sinusoid is called a harmonic partial of the sound (fk = N*f0).

21 Fundamental & Harmonics (2) Fundamental f0 First harmonic f1 = 2*f0 Second harmonic f2 = 3*f0 Third harmonic f3 = 4*f0... Seventh harmonic f7 = 8*f0...
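A small additive sketch (my own illustration; the function name and amplitude values are arbitrary) of a tone built from a fundamental and its harmonic partials, including the missing-fundamental case discussed on the next slide:

```python
import numpy as np

def harmonic_tone(f0, amplitudes, duration=1.0, sample_rate=44100):
    """Sum sinusoids at integer multiples of f0 (the harmonic partials).

    amplitudes[n] weights the partial at (n + 1) * f0, so amplitudes[0]
    is the fundamental itself.
    """
    t = np.arange(0, duration, 1.0 / sample_rate)
    tone = np.zeros_like(t)
    for n, a in enumerate(amplitudes):
        tone += a * np.sin(2 * np.pi * (n + 1) * f0 * t)
    return tone

# Fundamental at 220 Hz plus progressively weaker upper partials.
full_tone = harmonic_tone(220.0, [1.0, 0.5, 0.33, 0.25])

# Same pitch, different timbre: drop the fundamental, keep the upper partials.
missing_fundamental = harmonic_tone(220.0, [0.0, 0.5, 0.33, 0.25])
```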

22 Fundamental & Harmonics (3) The pitch is correlated with the Fundamental frequency. Although in this example the fundamental is missing, the pitch is the same. The timbre is different.

23 Timbre All those sounds have the same pitch (A4, 440Hz) –Flute A4 –Tuning fork A4 –Violin A4 –Singer A4 They differ in timbre.

24 Defining Timbre Definition (American Standard Association): “That attribute of sensation in terms of which a listener can judge that two sounds having the same loudness and pitch are dissimilar.” (ASA, 1960 ; Wikipedia, 2011) Has a “wastebasket” quality (Dixon Ward, 1965) : –What is neither loudness nor pitch... Synonyms: Tone quality or colour, texture... Affected by a sound’s envelope.

25 Timbre (cont’d) What physical parameters relate to timbre? –Static spectrum (transient) –Envelope of spectrum (transient) –Dynamic spectrum (time-evolving) –Phase This list is not exhaustive. – cf “wastebasket” quality!

26 Timbre: Static Spectrum (220Hz)

27 Timbre: Envelope of Spectrum

28 Timbre: Envelope (cont’d) Difference in envelope (same note, 440 Hz fundamental) –Top: Flute –Bottom: Violin → The envelopes differ! Conclusion: the envelope is instrument-specific.

29 Timbre: Envelope (cont’d) Arrows indicate formants. This slide shows two speech vowels (i and u). Formants not only shape timbre but also help distinguish vowels (used in speech recognition).

30 Timbre: Dynamic Spectrum Will these two sounds have the same timbre? No: same average spectrum, but different timbre! Difference: –Top (A): original sound –Bottom (B): time-reversed. Conclusion: temporal variation of the spectrum affects timbre!

31 Timbre: Dynamic Spectrum (cont’d)

32 Timbre: spectrogram (plot of frequency in Hz against time in s)
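A spectrogram like the one on this slide can be computed with standard tools; a sketch using scipy.signal.spectrogram on an illustrative test signal (the signal and parameters are my own choices):

```python
import numpy as np
from scipy.signal import spectrogram

sample_rate = 44100
t = np.arange(0, 1.0, 1.0 / sample_rate)
# Illustrative test signal: a tone whose frequency glides upward over time.
signal = np.sin(2 * np.pi * (440.0 + 200.0 * t) * t)

# Short-time spectra: power[i, j] is the energy at frequencies[i] (Hz)
# around times[j] (s), i.e. how the spectrum evolves in time.
frequencies, times, power = spectrogram(signal, fs=sample_rate, nperseg=1024)
```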

33 Timbre: Dynamic Spectrum (cont’d) This slide shows the long-term (average) spectrum for two sounds (top: original, bottom: time-reversed). The spectra are identical, yet the timbres are totally different → the average spectrum can be very misleading! Conclusion: it is important to know how the spectrum evolves in time. Timbre depends not only on the harmonic structure but also on the way the spectrum varies in time. A) Normal B) Time reversed
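The claim that the long-term magnitude spectrum cannot distinguish a sound from its time-reversed copy is easy to verify; a small sketch (the test signal and its decay rate are my own choices):

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 1.0, 1.0 / sample_rate)
# Illustrative signal: a decaying 440 Hz tone (sharp attack, long decay).
signal = np.exp(-3.0 * t) * np.sin(2 * np.pi * 440.0 * t)
reversed_signal = signal[::-1]        # slow build-up, abrupt ending

# The long-term magnitude spectra are (numerically) identical...
print(np.allclose(np.abs(np.fft.rfft(signal)),
                  np.abs(np.fft.rfft(reversed_signal))))   # True
# ...even though the two sounds have clearly different timbres.
```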

34 Time Envelope (ADSR) –Attack is the time from zero to peak. –Decay is the time from peak to the sustain level. –Sustain is the level held during the main part of the sound’s duration, until the key is released. –Release is the time to decay from the sustain level to zero.
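A minimal sketch of a linear-segment ADSR envelope applied to a tone (my own illustration; real synthesisers often use exponential segments, and all timing values below are arbitrary):

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain_time, release,
                  sample_rate=44100):
    """Piecewise-linear ADSR envelope.

    attack, decay, sustain_time and release are durations in seconds;
    sustain_level is a fraction of the peak amplitude (0..1).
    """
    a = np.linspace(0.0, 1.0, int(attack * sample_rate), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate), endpoint=False)
    s = np.full(int(sustain_time * sample_rate), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))
    return np.concatenate([a, d, s, r])

# Shape a 440 Hz tone with the envelope by point-wise multiplication.
env = adsr_envelope(attack=0.05, decay=0.1, sustain_level=0.6,
                    sustain_time=0.5, release=0.3)
t = np.arange(len(env)) / 44100.0
note = env * np.sin(2 * np.pi * 440.0 * t)
```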

35 Time Envelope (example)

36 Example of the same sound with and without attack –Attack cut at 0.7s. –With (blue+green): –Without (green):

37 Timbre: Phase? Sound A Sound B Do A and B have different timbres?

38 Timbre: Phase? Timbre depends (weakly) on the phase relationship between harmonics. BUT: the waveforms are totally different, the magnitude spectra are identical, and the timbres are (almost) identical! Conclusion: human hearing is largely insensitive to phase differences. Sound A: square wave, fundamental 500 Hz, 9 harmonics. Sound B: square wave, fundamental 500 Hz, 9 harmonics, with every second harmonic phase-shifted by 90 degrees.
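A sketch of how sounds A and B could be constructed (my own reading of the slide: the square wave's nine odd partials, with every second partial shifted by 90 degrees for sound B):

```python
import numpy as np

def odd_harmonic_sum(f0, n_partials, shift_every_other=False,
                     duration=0.02, sample_rate=44100):
    """Sum the first n odd harmonics of a square wave; optionally shift the
    phase of every second partial by 90 degrees (pi/2)."""
    t = np.arange(0, duration, 1.0 / sample_rate)
    x = np.zeros_like(t)
    for i in range(n_partials):
        k = 2 * i + 1                                    # odd harmonics 1, 3, 5, ...
        phase = np.pi / 2 if (shift_every_other and i % 2 == 1) else 0.0
        x += np.sin(2 * np.pi * k * f0 * t + phase) / k
    return x

sound_a = odd_harmonic_sum(500.0, 9)                          # Sound A
sound_b = odd_harmonic_sum(500.0, 9, shift_every_other=True)  # Sound B
# The waveforms look quite different, but their magnitude spectra are the
# same and the two sounds are (almost) indistinguishable by ear.
```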

39 Summary 1: Loudness Control In order to control loudness in synthetic sounds: –Modify the spectral content –More energy at high frequency → louder (see right). –Modify the amplitude –Higher amplitude → louder

40 Summary 2: Pitch Control In order to control pitch in synthetic sounds: –Modify the fundamental frequency. –Higher fundamental frequency → higher pitch.
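For instance, assuming 12-tone equal temperament around A4 = 440 Hz (an assumption, not stated on the slide), the fundamental of any note follows directly from its distance in semitones:

```python
def note_frequency(semitones_from_a4):
    """Fundamental frequency of the note n semitones away from A4 (440 Hz),
    assuming 12-tone equal temperament."""
    return 440.0 * 2.0 ** (semitones_from_a4 / 12.0)

print(note_frequency(0))     # 440.0 Hz   (A4)
print(note_frequency(12))    # 880.0 Hz   (A5: doubling f0 raises the pitch one octave)
print(note_frequency(3))     # ~523.25 Hz (C5)
```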

41 Summary 3: Timbre Control In order to control timbre in synthetic sounds, modify –Spectral content –Spectral envelope –Spectrum in time –Spectrum evolution during transient states

42 Plan Introduction to sound synthesis Perception of sound –Loudness –Pitch –Timbre Sound synthesis – Fundamentals Summary

43 Fundamental Definitions Computer Instrument: An algorithm that realizes (performs) a musical event. Unit Generator: A high-level “building block” in an instrument.

44 Oscillator Basic waveform generator

45 Abbreviations EG – Envelope Generator LFO – Low-Frequency Oscillator VCA – Voltage-Controlled Amplifier VCF – Voltage-Controlled Filter
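In software, these unit generators reduce to simple signal operations; a sketch (names and rates are illustrative) of an audio-rate oscillator whose amplitude is modulated by an LFO, the digital analogue of routing an oscillator through a VCA:

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 2.0, 1.0 / sample_rate)

carrier = np.sin(2 * np.pi * 440.0 * t)          # audio-rate oscillator (440 Hz)
lfo = 0.5 * (1.0 + np.sin(2 * np.pi * 5.0 * t))  # 5 Hz low-frequency oscillator, 0..1
tremolo = lfo * carrier                          # amplitude modulation (tremolo)
```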

46 Important Terms Two types of synthesisers: –Monophonic: you can only play one note at a time; if you press several keys together, only one note is generated → no chords! –Polyphonic: you can play several notes at the same time → you can play chords!
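In signal terms, polyphony just means mixing (summing) several simultaneous voices; a small sketch (the chord and normalisation are my own example):

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 1.0, 1.0 / sample_rate)

def voice(f0):
    """One monophonic voice: a pure tone at f0."""
    return np.sin(2 * np.pi * f0 * t)

# A polyphonic chord is the sum of several voices (A major triad: A4, C#5, E5).
chord = voice(440.0) + voice(554.37) + voice(659.25)
chord /= np.max(np.abs(chord))     # normalise so the mix does not clip
```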

47 Types of synthesis Sound synthesis techniques include –Additive synthesis –Distortion techniques –Subtractive synthesis –Granular synthesis –Analysis-based synthesis –Physical modelling

48 Plan Introduction to sound synthesis Perception of sound –Loudness –Pitch –Timbre Sound synthesis – Fundamentals Summary

49 Additional Reading Dodge, C., & Jerse, T. A. (1997). Computer Music: Synthesis, Composition, and Performance. Schirmer. (see chapters 2 and 4)

