Modeling the effects of air currents on acoustic measurements in large spaces
David Griesinger, David Griesinger Acoustics, www.davidgriesinger.com

Sabine
Sabine discovered that in most rooms sound decays logarithmically at a constant rate. He measured the decay by filling the space with the tone from an organ pipe and timing, with a stopwatch, how long it took for the sound to become inaudible. He could do this with considerable precision. He also knew that sound energy builds up in a room while a sound continues: the sound pressure rises until the available absorption dissipates all the power from the source. Sabine understood that speech becomes incomprehensible when reverberation from a previous syllable masks the succeeding syllable. Such masking depends on the reverberation time, the reverberant level, and the length of a sound. Short, detached sounds can overcome long RTs, but small rooms with little absorption can be more garbled than large spaces with long reverberation times. The bottom line: we should not design spaces where reverberation masks the onsets of sounds! But Sabine's method of measuring RT works quite well.
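For reference, the standard textbook forms of the relations Sabine established (these formulas are not on the slide; V is the room volume in m³, S_i and α_i the surface areas and absorption coefficients, W the source power, and c the speed of sound):

```latex
% Sabine's reverberation time (SI units)
T_{60} \;\approx\; \frac{0.161\,V}{A}, \qquad A = \sum_i S_i\,\alpha_i
% Steady state: the reverberant energy density rises until the absorbed
% power equals the source power, settling at roughly
w_{\mathrm{rev}} \;\approx\; \frac{4W}{cA}
```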

Noise decays
In the 1960s the standard way of measuring RT was with a logarithmic chart recorder. The room was filled with continuous noise, which was switched off just as the recorder was started. A jagged slope recorded the decay. The paper was rewound and the measurement repeated; after five or more trials there was an easily measured line on the paper. The procedure was tedious, but the result was accurate. The result is not an impulse response: it is an accurate picture of what happens in a room with a sound of finite length.

Long notes can be masked by reverb, while short notes are clear
[Figures: a chart-recorder graph of a 1.5 s RT with a long note, and the same with a short (0.1 s) note. The reverberant level builds up as the note is held at the same sound power.]
These graphs are not impulse responses. They show what you actually hear: the decay of notes of different lengths that suddenly stop. They graph the effect of the room on sound, not the room itself. Jordan's definition of EDT gives insight into the contribution of the direct sound to the total sound field. Schroeder did not understand this meaning, and redefined EDT as the slope of the first 10 dB of decay. This is now the ISO 3382 standard; its meaning is unclear.
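For concreteness, here is a minimal sketch of the ISO 3382-style EDT the slide refers to: the slope of the first 10 dB of the Schroeder-integrated decay, extrapolated to a 60 dB decay time. The impulse response h and sample rate fs are assumed inputs, and the fitting details are simplified.

```python
import numpy as np

def edt_from_ir(h, fs):
    """Early decay time (ISO 3382 style): fit the 0 to -10 dB portion of the
    Schroeder-integrated decay and extrapolate it to a 60 dB decay time."""
    # Schroeder backward integration of the squared impulse response
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])

    # First sample where the integrated decay has dropped by 10 dB
    t = np.arange(len(h)) / fs
    i10 = np.argmax(edc_db <= -10.0)

    # Least-squares line through the 0 .. -10 dB region (slope in dB/s)
    slope, _ = np.polyfit(t[:i10 + 1], edc_db[:i10 + 1], 1)
    return -60.0 / slope  # time to decay 60 dB at this slope
```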

Impulse responses
Mathematicians are fond of impulse responses. They carry much more information about how sound reflects around a room than noise decays. But they require an impulse to excite the space. If you want to record a decay with an S/N of 40 dB, the sound pressure at the beginning of the decay must be one hundred times larger than the noise level in the room. Achieving a high level at a considerable distance from the source requires a strong impulse, typically from a pistol or a small cannon. Firearms are loud, dangerous, often illegal, and not omnidirectional, particularly at low frequencies. But the reverberation times and reflection amplitudes they yield are accurate. In theory any repeatable signal of finite length can be used to excite a room. An impulse response can be obtained by dividing the room signal by the stimulus in the frequency domain. In 1975 Schroeder proposed using maximum length sequences of ones and zeroes (MLS) to excite rooms. They have a white spectrum, are repeatable, and can be deconvolved into impulses without a hardware multiplier. The method was rapidly adopted, and is still commonly used.
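A minimal sketch of the frequency-domain division just described, assuming the recorded room signal y and the stimulus x are already at the same sample rate; the small regularization term is an added assumption to keep the division stable where the stimulus has little energy.

```python
import numpy as np

def deconvolve(y, x, eps=1e-6):
    """Estimate an impulse response by dividing the recorded signal by the
    stimulus in the frequency domain (zero-padded to avoid wrap-around)."""
    n = len(y) + len(x) - 1
    Y = np.fft.rfft(y, n)
    X = np.fft.rfft(x, n)
    # Regularized spectral division: H = Y X* / (|X|^2 + eps)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)
```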

MLS sequences are inherently noisy
[Figure, 250 Hz octave band. Green: a decaying-noise impulse response with a 1.5 s RT measured with a 2-second MLS sequence (ones and zeros). Blue: the same measured with a 2-second log sine sweep.]
The MLS measurement has an S/N of about 34 dB; the sine sweep 60 dB. The noise in MLS is due to the use of ones and zeroes as the stimulus. If we use multivalued random noise as the stimulus the noise floor is identical to that of the sine sweep, but the crest factor is much larger than that of either MLS or a sine sweep.

MLS and distortion
[Upper figure: a two-second white noise sequence deconvolved with its time inverse. Green: 5% third-harmonic distortion; Blue: 1% harmonic distortion.]
Harmonic distortion in the measurement system adds pseudo-reflections. Note in the picture below that the bumps in what looks like the noise floor are identical: they are artifacts of the particular sequence.
[Lower figure: a one-second RT decaying noise deconvolved from a two-second noise sequence with no distortion. The S/N is ~44 dB at 2000 Hz.]
Logarithmic sine sweeps are superior to noise or MLS sequences with respect to distortion: distortion produces artifacts that deconvolve to negative time, where they can be observed and measured.
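As an illustration of why log sweeps behave this way, here is a sketch of an exponential (log) sine sweep and its amplitude-compensated inverse filter along the lines of the familiar Farina technique; the frequency range, length, and sample rate are illustrative assumptions. Convolving a recording of the sweep with the inverse filter places harmonic-distortion products ahead of the main peak, where they can be inspected and excluded.

```python
import numpy as np

def log_sweep_and_inverse(f1=20.0, f2=20000.0, T=2.0, fs=44100):
    """Exponential sine sweep from f1 to f2 over T seconds, plus the
    amplitude-compensated, time-reversed inverse filter."""
    t = np.arange(int(T * fs)) / fs
    R = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1.0))
    # Inverse filter: time-reversed sweep with an exponential amplitude
    # envelope (about -6 dB/octave) so that sweep * inverse deconvolves
    # to a flat spectrum (up to an overall gain).
    inverse = sweep[::-1] * np.exp(-t * R / T)
    return sweep, inverse

# Usage sketch: ir = np.convolve(recorded_sweep, inverse)
# Distortion products then appear ahead of the main peak of ir.
```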

Averaging to reduce noise
All convolution measurement methods rely on the fact that when we sum two independent noise signals the power rises by 3 dB, but if we sum two identical signals the power rises by 6 dB. Thus we can extract a stationary signal from random noise by averaging many measurements. The signal-to-noise ratio increases by 3 dB for every doubling of the number of measurements. But the signal we are measuring must be completely stable, both in amplitude and phase. If at some point in time the signal starts to behave randomly, there is no advantage to averaging: both the signal and the noise increase by 3 dB with every doubling.
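A small numerical check of the 3 dB-per-doubling behaviour (illustrative only, not from the slides): averaging N recordings of the same fixed signal buried in independent noise improves the S/N by 10·log10(N) dB.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed, repeatable "signal": one second of a 440 Hz tone at 44.1 kHz
signal = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)

for n_avg in (1, 2, 4, 8, 16):
    # Each "measurement" is the same signal plus independent noise
    trials = [signal + rng.normal(scale=3.0, size=signal.size) for _ in range(n_avg)]
    avg = np.mean(trials, axis=0)
    noise = avg - signal  # residual noise after averaging
    snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
    print(f"{n_avg:2d} averages: S/N = {snr_db:5.1f} dB")  # rises ~3 dB per doubling
```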

How stable are room impulse responses? And what measures does instability affect?
Many years ago I was called by a scientist at a major consulting firm that had just finished a new concert hall. They had measured the RT of the hall with a pistol and with the newly introduced MLS measurement software. The RT at 2000 Hz was shorter in the MLS result by 30%. I told them air currents were the problem, and that the pistol was correct. How common is this problem? And what are the effects? I decided to model it.

Modeling air currents with random walks
Convolution methods are commonly used with coherent averaging of several measurements. I decided to model air currents by averaging several identical IRs whose time base had been slightly and progressively modified over time. The time-base modification was done with a "random walk", similar to the motion of a small particle jostled by Brownian motion.
[Figure: eight typical random walks, each moving by a random fraction of a sample per sample.]
The walks are used to modify a 1.5-second-long impulse response with a one-second RT. With a sample rate of 44.1 kHz these random walks produce a maximum pitch shift of 0.04%. The mean absolute value of all 32 walks is 8 samples, equivalent to 2.4 inches per second, or 0.13 miles per hour. They have a LARGE effect on measurements!
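The following is a minimal sketch of this kind of random-walk time-base modulation, with step sizes and decay chosen only for illustration (they are not the values used in the slides): several copies of the same synthetic decaying response are resampled along slightly drifting time axes and then coherently averaged, as an averaging measurement would implicitly do if the air were moving.

```python
import numpy as np

fs = 44100
t = np.arange(int(1.5 * fs)) / fs
rng = np.random.default_rng(1)

# Synthetic decaying-noise "impulse response": a 1 s RT means 60 dB of decay per second
rt = 1.0
ir = rng.normal(size=t.size) * 10.0 ** (-3.0 * t / rt)

def warped_copy(ir, max_step=0.02):
    """Resample ir along a time axis perturbed by a random walk (in samples)."""
    steps = rng.uniform(-max_step, max_step, size=ir.size)
    walk = np.cumsum(steps)  # slowly drifting time offset, in samples
    return np.interp(np.arange(ir.size) + walk, np.arange(ir.size), ir)

# Coherent average of several time-warped copies of the *same* response
n_avg = 32
avg = np.mean([warped_copy(ir) for _ in range(n_avg)], axis=0)

# Compare the broadband decay of the average with the unwarped response (~1 ms RMS frames)
frame = fs // 1000
for label, x in (("original", ir), ("32 warped averages", avg)):
    n = len(x) // frame * frame
    env = 10 * np.log10(np.mean(x[:n].reshape(-1, frame) ** 2, axis=1))
    print(f"{label}: level at 0.5 s re start = {env[500] - env[0]:.1f} dB")
```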

1 s RTs measured in one-octave bands
[Figure panels. Green: 32 averages; Blue: 8 averages.
8 kHz: Green RT = 0.92 s; Blue 1.0 s.
4 kHz: Green early RT 0.2 s, late 0.82 s. Green is 6 dB higher than Blue, and initially 12 dB higher.
2 kHz: Green early 0.28 s; Blue early 0.52 s.
1 kHz: Green 0.77 s; Blue 0.81 s. Green is finally 12 dB higher.]

Boston Symphony Hall measurements
In 2008 I made a set of soundfield and binaural measurements in BSH using a Genelec 1029 loudspeaker just to the audience right of the conductor. I used a 2-second sine sweep as the stimulus. I have been using the data to understand why the hall sounds as it does. There are a great many outstanding seats in the hall, and many less than outstanding seats. I have binaural recordings of live music in many of them, which I can listen to while pondering the data. On April 18th, 2014 I was able to duplicate the soundfield data using tiny balloons as stimuli. The balloons are amazing: they burst in less than a millisecond and yield a peak sound pressure of 94 dB at 10 meters.

Two impulse responses from Boston Symphony Hall with a 2-second sweep (2008)
[Figures: binaural impulse responses from BSH.
Row R, seat 11: C80 = 0.85 dB, IACC80 = 0.68, LOC = 9.1 dB.
Row DD, seat 11: C80 not shown, IACC80 = 0.2, LOC = -1.2 dB.]
ISO 3382 fails to quantify the sound in these seats! Both C80 and IACC80 predict the opposite of what we hear!
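For reference, a minimal sketch of the standard clarity index C80 criticized here: the ratio of the energy arriving in the first 80 ms of the impulse response to the energy arriving afterwards, in dB. The response h is assumed to start at the direct sound; IACC80 and LOC are not shown.

```python
import numpy as np

def c80(h, fs):
    """Clarity index C80: early (0-80 ms) to late energy ratio in dB."""
    split = int(0.080 * fs)
    early = np.sum(h[:split] ** 2)
    late = np.sum(h[split:] ** 2)
    return 10 * np.log10(early / late)
```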

Compare the sine sweep in 2008 to a 5" balloon pop in 2014
[Figures, BSH row R, seat 11. Binaural data from 2008: LOC = +9 dB. Soundfield data from 2014: LOC = ~+2 dB.]
I had always been suspicious of the very high LOC value in the 2008 measurement. The value from 2014 is lower than I would expect from the very good sound I hear there; the stimulus is omnidirectional, and the orchestra is not.

Ning's 2-second sweep data
Ning Xiang and his students were also measuring in BSH. We measured one seat in common, row U seat 14, one with very good sound. We attempted to measure the effect of air currents by comparing the data from a single sweep with data from averaging two and ten measurements of the same sweep, taken 5 seconds apart. We expected to see differences in the RT data at 2 kHz and above. But there were no significant differences!

Compare sound decays at U14 with no averages and with 10 averages of a two-second sweep
[Figures: 4 kHz octave-band data from 10 averages, and 4 kHz octave-band data with no averages.]
The RT data are clearly the same! Does this mean there are no significant air currents? Look more closely: if there were no air currents the two decays should be identical. But they are not! Past the first 100 ms they are different.

If no averaging and two averages give the same result, we can subtract them and get zero
The direct sound is the same, but by the first reflection at 27 ms the waveforms are quite different. And the random stuff between reflections is different too.

[Figures: the spectrum from a single measurement is in blue, and the spectrum of the difference between that measurement and a two-measurement average is in red. The top graph is for the direct sound; the next is for the first reflection at 27 ms; the last is for the second reflection at 83 ms.]
For the first reflection the coherence at 3 kHz is only ~3 dB, and the second reflection at 83 ms shows even less coherence. These graphs show a method of testing for the presence of air currents and the validity of your data.
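A sketch of the kind of consistency test these graphs suggest, with assumed window lengths and hypothetical input names: window out one event (the direct sound or a single reflection) from two nominally identical measurements, then compare the spectrum of one measurement with the spectrum of their difference. Frequencies where the difference is not well below the measurement are not stable enough for coherent averaging.

```python
import numpy as np

def stability_spectra(h1, h2, fs, start_ms, length_ms=5.0):
    """Compare two measurements of the same IR over one windowed event.

    Returns (frequencies, level of h1, level of h1 - h2), all in dB, for the
    window starting at start_ms. If the room were perfectly stable, the
    difference would sit far below the measurement at all frequencies."""
    i0 = int(start_ms * 1e-3 * fs)
    n = int(length_ms * 1e-3 * fs)
    win = np.hanning(n)
    seg1 = h1[i0:i0 + n] * win
    seg2 = h2[i0:i0 + n] * win
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    level = 20 * np.log10(np.abs(np.fft.rfft(seg1)) + 1e-12)
    diff = 20 * np.log10(np.abs(np.fft.rfft(seg1 - seg2)) + 1e-12)
    return freqs, level, diff

# Example (hypothetical inputs): inspect the first reflection at 27 ms
# f, lvl, dif = stability_spectra(h_single, h_averaged, 44100, start_ms=27.0)
```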

Compare sweep data to balloon data at U14 at 4 kHz
All the reflections are stronger in the balloon data. The first, at 23 ms, is stronger by at least 3 dB; the next, at 83 ms, is stronger by more than 4 dB; and the one at 143 ms is completely missing in the sweep data.

Why does it matter?
RT values are OK! But any measure that depends on early vs. late energy will be wrong!
[Figures: Ning's data from a single 2-second log sweep, and DG's data from a 5" balloon, both from BSH seat U14.]

Conclusions
In both speech and music most information is carried at frequencies above 1000 Hz. We need measures that are accurate at these frequencies.
Only the first 20 dB of sound decay is audible in the presence of music. We do not need enormous values of S/N to find what we need to know.
The two most critical acoustic factors for human communication are:
1. Are the onsets of sounds masked by the build-up of reverberation from previous sounds?
2. Is the direct sound component of a sound field, which carries the source azimuth and supplies the attention-grabbing property of presence, separately audible from the hall sound?
Both of these factors are falsely measured by convolution methods in large halls, even with sweep lengths as short as two seconds. The degree to which these methods fail in small halls remains to be investigated.
Small balloons are more accurate. Their variable spectrum and usually omnidirectional radiation can perhaps be compensated if soundfield data is available.
Noise-burst measurements give less information than impulse response measurements, but they are easy to make and inherently accurate. Analyzing several with different burst lengths can probably reveal a lot about how a hall sounds with music and speech. Alternate sweep signals might be very helpful.