LEARNING OBJECTIVES In this chapter you will learn about:
– The fundamentals of sound
– DirectX Audio
– DirectSound
– XACT
– XAudio2
INTRODUCTION TO SOUND Sound, on a physical level, is the propagation of energy waves through a medium: potential (stored) energy is released in the form of kinetic energy, which in turn affects the state of other objects.
INTRODUCTION TO SOUND A sound wave (a vibration that travels through some medium) is described by means of its frequency, wavelength, period, direction, speed, and amplitude. Frequency is the number of wave cycles occurring per unit of time. Wavelength is the distance between two consecutive repeating points of a sound wave (for example, successive pressure peaks).
The amplitude of a sound wave is its magnitude of oscillation (the wave’s variation in time) – i.e. the maximum variation in the wave during one wave cycle. The speed of sound expresses the distance travelled by a sound wave per unit of time. This velocity depends on the medium of propagation (roughly 343 m/s in dry air at 20 °C). Sound is thus mechanical energy in the form of a wave emitted from a vibrating source.
INTRODUCTION TO SOUND Sound and music in games form a crucial part of the overall gaming experience. There are generally two kinds of sounds in the computer world – digital and synthesized.
– Digital sound consists of recordings stored in file formats such as WAV, MP3 or Ogg, to name but a few.
– Synthesized sound is, in turn, based on hardware-generated tones arranged in such a way as to produce the desired output.
DIRECTX AUDIO DirectX Audio is a collective term used to describe a number of DirectX audio libraries, specifically:
– X3DAudio
– Microsoft Cross-Platform Audio Creation Tool (XACT)
– XAudio2
– DirectSound
DirectSound The DirectSound library consists of three modules:
– the run-time dynamic link libraries (dsound.dll and dsound3d.dll),
– a compile-time library (dsound.lib), and
– the DirectSound header (dsound.h).
DirectSound allows for the simultaneous playback and manipulation of multiple wave sound files (.wav), the recording of audio, and dynamic control over the addition of audio effects such as distortion and echo.
DirectSound DirectSound, similar to DirectInput, requires the creation of a main COM (Component Object Model) object from which all the other interfaces are requested. DirectSound consequently consists of a number of COM interfaces.
– The main interface, IDirectSound8, is responsible for initializing the DirectSound environment and for creating sound buffer objects. An IDirectSoundBuffer8 object is, in turn, used to manage these created sound buffers.
DirectSound The IDirectSoundCapture8 interface is used to create sound capture buffers that facilitate the recording of audio from an external device such as a keyboard or microphone. Sometimes it is necessary to send control messages to the sound card via notification events; IDirectSoundNotify8 is used for this purpose. In addition to these interfaces, there are several others which can be used depending on the complexity of the implemented sound system.
– Examples include IDirectSoundFXDistortion8, IDirectSoundFXWavesReverb8, IDirectSoundCaptureFXNoiseSuppress8 and IDirectSoundFullDuplex8.
DirectSound The general steps for setting up and playing audio via the DirectSound API are as follows:
– Create a DirectSound IDirectSound8 interface object via the DirectSoundCreate8 function – this object represents the default playback device.
– Call the CreateSoundBuffer IDirectSound8 interface function to create a sound buffer which will be used for the storage and management of sound data.
– Access the wave data (stored in an audio file) and write it to the private sound buffer.
– Call the Lock IDirectSoundBuffer8 interface function to prepare a secondary buffer for the write operation.
– Call the Unlock IDirectSoundBuffer8 interface function to transfer the audio data from the private buffer to the initialized secondary buffer.
– Play the audio clip via the Play IDirectSoundBuffer8 interface function and, if playback is set to loop, stop it using the Stop IDirectSoundBuffer8 interface function.
[see the textbook and online source code for an example and detailed discussion]
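The steps above can be sketched in C++ roughly as follows. This is a minimal illustration, not the textbook's implementation: it assumes a Win32 application window handle and that the .wav file has already been parsed into a WAVEFORMATEX plus a block of sample data (DirectSound itself does not parse .wav files), and error handling is abbreviated.

```cpp
// Minimal DirectSound playback sketch (Windows only). Assumes the caller has
// already parsed a .wav file into wfx (format) and waveData/waveSize (samples).
#include <windows.h>
#include <dsound.h>
#pragma comment(lib, "dsound.lib")

bool PlayWave(HWND hwnd, const BYTE* waveData, DWORD waveSize, WAVEFORMATEX* wfx)
{
    // 1. Create the IDirectSound8 object for the default playback device.
    IDirectSound8* ds = nullptr;
    if (FAILED(DirectSoundCreate8(nullptr, &ds, nullptr)))
        return false;
    ds->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);

    // 2. Describe and create the secondary sound buffer.
    DSBUFFERDESC desc = {};
    desc.dwSize        = sizeof(desc);
    desc.dwFlags       = DSBCAPS_CTRLVOLUME;
    desc.dwBufferBytes = waveSize;
    desc.lpwfxFormat   = wfx;

    IDirectSoundBuffer*  buf  = nullptr;
    IDirectSoundBuffer8* buf8 = nullptr;
    if (FAILED(ds->CreateSoundBuffer(&desc, &buf, nullptr)))
        return false;
    buf->QueryInterface(IID_IDirectSoundBuffer8, (void**)&buf8);
    buf->Release();

    // 3-5. Lock the buffer, copy the wave data in, then unlock.
    void* p1; DWORD n1; void* p2; DWORD n2;
    buf8->Lock(0, waveSize, &p1, &n1, &p2, &n2, 0);
    memcpy(p1, waveData, n1);
    if (p2) memcpy(p2, waveData + n1, n2);   // wrap-around portion, if any
    buf8->Unlock(p1, n1, p2, n2);

    // 6. Play (pass DSBPLAY_LOOPING as the last argument to loop; call
    //    buf8->Stop() to halt looped playback).
    return SUCCEEDED(buf8->Play(0, 0, 0));
}
```

The Lock call may return two pointer/size pairs because the buffer is circular; copying into both regions, as above, handles the wrap-around case.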
XACT XACT is more than just an audio API; it is a complete high-level engine and library for the authoring and playback of audio – via XAudio (exclusive to the Xbox), DirectSound on older versions of Windows, and the session-based audio stack on Windows Vista. XACT also features a helper library for 3D audio spatialization, namely X3DAudio. XACT supports the following file formats: .wav, .aiff and .xma. It also supports 5.1 surround sound as well as the grouping and queuing of sounds in so-called wave banks and sound banks, respectively.
XACT The XACT Audio Creation Tool, shipped with the DirectX SDK, arranges audio files into so-called wave banks. It is typically used to construct and organize the data files needed for the integration of the XACT audio resources.
XACT The general steps for setting up and playing audio via the XACT API are:
– Initialize the XACT sound engine.
– Prepare the wave data by first reading the .wav file into memory and then initializing the WAVEBANKENTRY XACT wave bank structure.
– Create an IXACTWave interface with the loaded wave data. This is done via the PrepareInMemoryWave IXACTEngine interface function.
– Play the wave file using the Play IXACTWave interface function.
– When finished, destroy the IXACTEngine interface.
[see the textbook and online source code for an example and detailed discussion]
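In outline, the steps above look roughly like this in C++. This is a hedged sketch following the interface names used in the text (IXACTEngine/IXACTWave); the exact function signatures, flag values and header name vary between DirectX SDK releases (later SDKs rename these to IXACT3Engine/IXACT3Wave), so check the headers of your SDK version. The WAVEBANKENTRY is assumed to have been filled in from the loaded .wav file's format data.

```cpp
// XACT playback sketch (Windows only; SDK-version dependent -- verify the
// PrepareInMemoryWave signature against your DirectX SDK's xact.h).
#include <xact.h>

HRESULT PlayXactWave(BYTE* waveData, WAVEBANKENTRY entry)
{
    // 1. Create and initialize the XACT engine.
    IXACTEngine* engine = nullptr;
    HRESULT hr = XACTCreateEngine(0, &engine);
    if (FAILED(hr)) return hr;

    XACT_RUNTIME_PARAMETERS params = {};
    params.lookAheadTime = 250;            // SDK default look-ahead (ms)
    engine->Initialize(&params);

    // 2-3. Wrap the in-memory wave data in an IXACTWave interface.
    IXACTWave* wave = nullptr;
    hr = engine->PrepareInMemoryWave(0, entry, nullptr, waveData, 0, 0, &wave);

    // 4. Play it.
    if (SUCCEEDED(hr))
        hr = wave->Play();

    // 5. When finished with all audio:
    // engine->ShutDown();
    // engine->Release();
    return hr;
}
```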
XAudio2 XAudio2 is a new cross-platform, low-level audio API – still under development at the time of writing – for unified code development under Windows XP, Windows Vista and the Xbox 360 gaming console. XAudio2 serves as a replacement for DirectSound and as an extended version of the XAudio API used for Xbox 360 development. Code developed using the XAudio2 API is identical for both the Xbox and Windows platforms – thus resulting in a ‘write once, compile twice’ implementation.
XAudio2 The general steps for setting up and playing audio via the XAudio2 API are as follows:
– Initialize the XAudio2 sound engine.
– Create a mastering voice to control the mixing format of all audio played by the application.
– Load the wave file to play.
– Create a source voice and load the audio data into it.
– Play the source voice.
– Clean up the source voice and all associated data.
– When finished, release the XAudio2 engine.
[see the textbook and online source code for an example and detailed discussion]
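The steps above can be sketched as follows. As before this is an illustrative outline, not the textbook's code: the .wav parsing is assumed to have happened already (XAudio2 does not read files itself), COM initialization and error handling are abbreviated.

```cpp
// Minimal XAudio2 playback sketch (Windows only). Assumes the caller has
// parsed a .wav file into wfx (format) plus audioData/audioBytes (samples),
// and that COM has been initialized (CoInitializeEx) on pre-2.8 SDK versions.
#include <xaudio2.h>

HRESULT PlayXAudio2(const WAVEFORMATEX* wfx,
                    const BYTE* audioData, UINT32 audioBytes)
{
    // 1. Initialize the XAudio2 engine.
    IXAudio2* xaudio2 = nullptr;
    HRESULT hr = XAudio2Create(&xaudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);
    if (FAILED(hr)) return hr;

    // 2. Create the mastering voice (final mix sent to the audio device).
    IXAudio2MasteringVoice* master = nullptr;
    hr = xaudio2->CreateMasteringVoice(&master);
    if (FAILED(hr)) return hr;

    // 3-4. Create a source voice in the wave's format and queue the data.
    IXAudio2SourceVoice* source = nullptr;
    hr = xaudio2->CreateSourceVoice(&source, wfx);
    if (FAILED(hr)) return hr;

    XAUDIO2_BUFFER buffer = {};
    buffer.AudioBytes = audioBytes;
    buffer.pAudioData = audioData;
    buffer.Flags      = XAUDIO2_END_OF_STREAM;  // no more data after this
    source->SubmitSourceBuffer(&buffer);

    // 5. Play.
    hr = source->Start(0);

    // 6-7. Once playback has finished:
    // source->DestroyVoice();
    // master->DestroyVoice();
    // xaudio2->Release();
    return hr;
}
```

Note the layered design: source voices feed the mastering voice, which owns the final mix. This voice graph is what makes the same code portable between Windows and the Xbox 360.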