1 Core Audio
Aurimas Jankauskas IT3gr.

2 Why do we have Core Audio today?

3 1991 – Windows Multimedia Extensions (aka MME, aka WinMM)
Microsoft decided to add an audio API to its operating system, Windows 3.0, which had already been out for a year and was in widespread use. MME has both a high-level and a low-level API. The low-level API supports waveform audio and MIDI input/output; its function names start with waveIn, waveOut, midiIn, midiStream, and so on. The high-level API, the Media Control Interface (MCI), is REALLY high level: it is akin to a scripting language for devices.
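To make the two levels concrete, here is a minimal C++ sketch (not from the original slides) of what MME code looked like: the low-level waveOut calls queue a raw PCM buffer, while a few MCI command strings drive playback at the scripting level. The file name sound.wav is a placeholder and error handling is omitted.

```cpp
// MME sketch: low-level waveOut playback plus high-level MCI command strings.
// Build with MSVC: cl mme_demo.cpp (winmm.lib is pulled in by the pragma).
#include <windows.h>
#include <mmsystem.h>
#include <vector>
#pragma comment(lib, "winmm.lib")

int main() {
    // Describe a 16-bit, 44.1 kHz, stereo PCM stream.
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    // Low-level API: open the default wave output device and queue one buffer.
    HWAVEOUT hwo = nullptr;
    waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

    std::vector<char> samples(fmt.nAvgBytesPerSec, 0);    // one second of silence
    WAVEHDR hdr = {};
    hdr.lpData         = samples.data();
    hdr.dwBufferLength = (DWORD)samples.size();
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));
    Sleep(1100);                                           // let the buffer drain
    waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutClose(hwo);

    // High-level API (MCI): script-like command strings drive the device.
    mciSendStringA("open sound.wav type waveaudio alias snd", nullptr, 0, nullptr); // placeholder file
    mciSendStringA("play snd wait", nullptr, 0, nullptr);
    mciSendStringA("close snd", nullptr, 0, nullptr);
    return 0;
}
```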

4 1995 – DirectSound (aka DirectX Audio)
Microsoft needed to improve its audio API to attract the games industry, which at the time was still largely writing for MS-DOS; Windows itself consumed memory and other resources that games desperately needed. DirectX was the umbrella name given to a collection of COM-based multimedia APIs, one of which was DirectSound.
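For comparison with the MME sketch above, here is a minimal sketch (again not from the slides) of the DirectSound setup dance: create the COM device object, set a cooperative level, and create a secondary buffer to fill with samples. Error handling is omitted.

```cpp
// DirectSound sketch: device creation and a secondary sound buffer.
#include <windows.h>
#include <dsound.h>
#pragma comment(lib, "dsound.lib")
#pragma comment(lib, "user32.lib")

int main() {
    IDirectSound8* ds = nullptr;
    DirectSoundCreate8(nullptr, &ds, nullptr);             // default playback device
    ds->SetCooperativeLevel(GetDesktopWindow(), DSSCL_PRIORITY);

    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    DSBUFFERDESC desc = {};
    desc.dwSize        = sizeof(desc);
    desc.dwFlags       = DSBCAPS_GLOBALFOCUS;
    desc.dwBufferBytes = fmt.nAvgBytesPerSec;               // one second of audio
    desc.lpwfxFormat   = &fmt;

    IDirectSoundBuffer* buf = nullptr;
    ds->CreateSoundBuffer(&desc, &buf, nullptr);
    // A real program would Lock(), copy samples in, Unlock(), then buf->Play(0, 0, 0).

    if (buf) buf->Release();
    if (ds)  ds->Release();
    return 0;
}
```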

5 1998 – Windows Driver Model / Kernel Streaming (aka WDM/KS)
MME developers were used to dealing with latency issues, but DirectSound developers were used to working a bit closer to the metal. With WDM, both MME and DirectSound audio now passed through something called the Kernel Audio Mixer (usually referred to as KMixer), a kernel-mode component responsible for mixing all of the system audio together. KMixer introduced latency. A lot of it: around 30 milliseconds.

6 2007 – Windows Core Audio
Windows Core Audio, not to be confused with OS X's similarly named Core Audio, was a complete redesign of the way audio is handled on Windows. KMixer was killed and buried. Most of the audio components were moved from kernel land to user land, which improved overall system stability. All of the legacy audio APIs we knew and loved were shuffled around and suddenly found themselves built on top of this new user-mode API. Core Audio is actually 4 APIs in one – MMDevice, WASAPI, DeviceTopology, and EndpointVolume.

7 Core Audio: 4 APIs
Multimedia Device (MMDevice) API. Clients use this API to enumerate the audio endpoint devices in the system.
Windows Audio Session API (WASAPI). Clients use this API to create and manage audio streams to and from audio endpoint devices.
DeviceTopology API. Clients use this API to directly access the topological features (for example, volume controls and multiplexers) that lie along the data paths inside hardware devices in audio adapters.
EndpointVolume API. Clients use this API to directly access the volume controls on audio endpoint devices. This API is primarily used by applications that manage exclusive-mode audio streams.
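As a rough illustration of how the first and last of these fit together (my own sketch, not code from the presentation), the following uses MMDevice to find the default render endpoint and EndpointVolume to read its master volume. Error handling is omitted.

```cpp
// Core Audio sketch: MMDevice enumeration + EndpointVolume master volume.
#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <stdio.h>
#pragma comment(lib, "ole32.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // MMDevice API: locate audio endpoint devices.
    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    // EndpointVolume API: talk to the volume control on the endpoint itself.
    IAudioEndpointVolume* volume = nullptr;
    device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                     nullptr, (void**)&volume);

    float level = 0.0f;
    volume->GetMasterVolumeLevelScalar(&level);
    printf("Default render endpoint volume: %.0f%%\n", level * 100.0f);

    volume->Release();
    device->Release();
    enumerator->Release();
    CoUninitialize();
    return 0;
}
```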

8 Core Audio today
The API for interacting with all of the software components that exist in the audio path is the DeviceTopology API. For interacting with the volume control on the device itself, there is the EndpointVolume API. And then there is the audio session API, WASAPI. WASAPI is the workhorse API: it is where all of the action happens and where sound gets made. Along with the new APIs came a number of new concepts, such as audio sessions and device roles.
Core Audio is much better suited to the modern era of computing. Today we live in an ecosystem of devices. Users no longer have a single audio adapter and a set of speakers; we have headphones, speakers, Bluetooth headsets, USB audio adapters, webcams, HDMI-connected devices, Wi-Fi-connected devices, and so on. Core Audio makes it easy for applications to work with all of these based on use case.
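As an illustration of WASAPI and device roles (a sketch under my own assumptions, not code from the presentation), the following asks MMDevice for the default eMultimedia render endpoint, activates an IAudioClient on it, and joins the shared mix. The render loop is only indicated by a comment, and error handling is omitted.

```cpp
// WASAPI sketch: shared-mode render stream on the default "multimedia" endpoint.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#pragma comment(lib, "ole32.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    // Device roles let the system pick the right endpoint for the use case:
    // eConsole (games/system sounds), eMultimedia (music/movies), eCommunications (VoIP).
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eMultimedia, &device);

    // WASAPI: activate an audio client on the endpoint and join the shared mix.
    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);

    WAVEFORMATEX* mixFormat = nullptr;
    client->GetMixFormat(&mixFormat);                      // format of the shared engine
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1-second buffer, in 100-ns units */, 0,
                       mixFormat, nullptr);

    IAudioRenderClient* render = nullptr;
    client->GetService(__uuidof(IAudioRenderClient), (void**)&render);
    client->Start();
    // ... a GetBuffer()/ReleaseBuffer() loop would write audio frames here ...
    client->Stop();

    render->Release();
    CoTaskMemFree(mixFormat);
    client->Release();
    device->Release();
    enumerator->Release();
    CoUninitialize();
    return 0;
}
```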

9 Source:

10 Questions?

