Gorilla Audio  0.3.1
Cross-platform C audio mixer library for games
Gorilla Audio Documentation
Table of Contents

1  Introduction
1.1  About

Gorilla Audio is an attempt to make a free, straightforward, cross-platform, high-level software audio mixer that supports playback of both static and streaming sounds. It is intended for video game development, but should easily support many other real-time audio software applications.

The library is written in ANSI C, and is licensed under the MIT license. It was written by Tim Ambrogi, engine programmer at Final Form Games (makers of Jamestown).

This library is currently under active development, and has been used in two successful commercial game projects: Spirits and Jamestown. It will be used in the upcoming PC/Mac/Linux release of The Splatters.

1.2  Motivation

In the world of independent game development, there exist many excellent tools that eliminate the need for writing low-level systems. Engines like Unreal and Unity provide everything out of the box, and middleware like FMod and BASS gives you tremendous power to implement great game audio.

Q: So, why bother with another audio library?

A: The short answer is because every other library comes with strings attached. Either you're married to a heavyweight framework, or it's a black box, or you need to pay for a license, or the license doesn't allow for commercial use, or you need to write a huge amount of code to perform common/straightforward tasks; generally it's a combination thereof.

Additionally, I have spoken to many indie developers who created their own game on top of a homebrew engine, and then decided to port the game to other platforms. When it comes to writing cross-platform audio, the common solutions are to a) write up a thin layer on top of OpenAL that implements sounds and streaming music, or b) buy a middleware license.

Gorilla Audio is an attempt to provide a third option: a completely free library that is quicker-to-write and more powerful than a thin OpenAL layer, and (infinitely!) cheaper than commercial middleware.

1.3  Principles

Gorilla Audio was designed with the following guiding principles in mind:

  • Completely free - You shouldn't need to pay money to play audio in your game
  • Beginner-friendly - You shouldn't need to be an expert audio programmer
  • Cross-platform - You shouldn't need to learn a new audio library to develop for a new platform
  • Hardware-independent - Your choice of hardware should not arbitrarily limit your audio
  • Flexible - You shouldn't need to switch libraries because your game has special needs
  • Powerful - You should be free to explore creative new audio ideas
  • Efficient - You shouldn't have to worry about performance on a regular basis

While the current version of the library satisfies each of these goals to a certain extent, there's still plenty of room to improve. As the library develops, it will endeavor to uphold these guiding principles.

1.4  Features

Gorilla Audio offers the following features:

  • Truly free, open-source library
  • Cross-platform on Windows, Mac, and Linux
  • Choice of powerful low-level interface or convenient high-level interface
  • Cached static sound effects and low-memory streaming music
  • Volume, pitch, pan, and looping control (with configurable loop points)
  • Panning for both mono and stereo audio data
  • WAV and Ogg Vorbis audio format support
  • Out-of-the-box file, archive, or memory-based data sources
  • Configurable threading (single- or multi-threaded)
  • Straightforward portability to new platforms
  • Extensible component-based streaming data pipeline
1.5  Future Development

The current feature set represents a minimal but powerful subset of what's available in many commercial audio packages. This feature set is just a starting point, and over the next year several other features will be added to the library. Such features include:

  • iOS and Android ports
  • Simple push-to-buffer data + sample sources (for dynamic audio playback)
  • Atomic multi-stream synchronization
  • Customizable DSP filters
  • Network streaming
  • Microphone recording
  • (Optional) MP3 support
  • Improved error reporting

You can browse the full roadmap in section '5.2 - Development Roadmap'.

1.6  Comparison Against Alternative Libraries

Gorilla is still a young library, but here are some reasons why it may already be a better choice than some popular alternatives.

vs. OpenAL:

  • Ease of use - With OpenAL, you have to manage low-level buffers, write your own thread code, and load your own file formats, whereas Gorilla takes care of all of that for you
  • Permissive license - OpenAL uses the LGPL license, which is annoying when deploying commercial software, especially on Windows (see: oalinst.exe)
  • Higher-level interface - Gorilla offers out-of-the-box support for multi-threaded background streaming, OGG/WAV-loading files, and other common features that OpenAL expects you to write yourself
  • Unlimited mixer channels - Regardless of hardware/drivers, Gorilla supports unlimited channels on any platform. OpenAL's channel count is implementation-dependent

vs. SDL_mixer:

  • Multiple background streams - SDL_mixer only allows for a single background stream
  • No SDL dependency - lighter-weight and no LGPL license

vs. DirectSound / XAudio2:

  • Cross-platform - These APIs are not intended to run on non-Microsoft platforms
  • Ready out-of-the-box - Gorilla supports background audio streams and OGG/WAV-loading without any extra code or plugins
  • Streamlined interface - Because Gorilla isn't everything to everyone, it eliminates a huge amount of boilerplate code that would be needed to use these more general systems

vs. irrKlang:

  • Free to everyone - irrKlang is not free software
  • Open-source - Closed-source libraries such as irrKlang are often more difficult to debug when something goes wrong, and you cannot implement your own features except through plugins
  • Better performance - Under many circumstances, Gorilla has a smaller performance cost

vs. BASS:

  • Free to everyone - BASS is not free software
  • Open-source - Closed-source libraries such as BASS are often more difficult to debug when something goes wrong, and you cannot implement your own features except through plugins

vs. FMod:

  • Free to everyone - FMod is not free software, though it is fabulous! :)

All that said, these are all excellent software packages. If Gorilla is not the tool for you, then I highly recommend you choose one of the above based on your needs!
1.7  License

Gorilla Audio is licensed under the MIT license:

Copyright (C) 2012 Tim Ambrogi

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


The project contains some other libraries that are licensed under terms other than MIT. This includes code under the Xiph and LGPL licenses. Non-MIT code in the repository is denoted by COPYING files.

1.8  Credits

Gorilla Audio is written and maintained by: Tim Ambrogi. He is the engine programmer at Final Form Games (Jamestown), and was responsible for the PC and Linux ports of Spirits.

You can contact him at: tim@finalformgames.com, or follow him on twitter (@westquote)

Logo courtesy of: Mike Ambrogi

Demo music by: Francisco Cerda, and licensed under Creative Commons

Grateful thanks to: Jason Earp, Niv Fisher, Justin Mullens, Jordan Fehr, Jim Crawford, Ichiro Lambe, Ryan Gordon, Nicholas 'Indy' Ray, and Chris Cornell

2  Getting Started
2.1  Prerequisites

While Gorilla Audio strives to be a beginner-friendly library, it does assume that you have some experience programming in either C or C++, and that you know how to create projects and link libraries on your platform.

If you are building the library from source, you will need to install CMake 2.8 or later, as well as a C or C++ compiler.

Please see section '2.3 - External Dependencies' for platform-specific details on setting up external libraries.

2.2  Setup
2.2.1  Downloading

You can download the latest stable binaries + source code from the downloads page.

For the latest development version, you must clone the project using Mercurial from this location:

hg clone https://code.google.com/p/gorilla-audio/

If you need a free Mercurial GUI client, we recommend either TortoiseHg or SourceTree.

2.2.2  Building Sources

Before building sources on any platform, make sure to download and expand the latest source code archive, and to install CMake 2.8 or higher. For the purposes of this section [install_root] refers to the directory where you expanded the source code archive.

Windows:

  1. Install Visual Studio 2008 or Visual C++ 2008 Express Edition (will likely work with later versions)
  2. In a command prompt, navigate to [install_root]/build/cmake
  3. Run 'cmake .'
  4. In Visual Studio, open [install_root]/build/cmake/Gorilla.sln
  5. Select your build configuration type, and Run 'Build->Build Solution' from the menu
Mac:

  1. Install XCode 4.1 or higher (will likely work with earlier versions)
  2. In a command prompt, navigate to [install_root]/build/cmake
  3. Run 'cmake .' (for Debug builds, run 'cmake . -DCMAKE_BUILD_TYPE=Debug')
  4. Run 'make'
Linux:

  1. Install GCC 4.6 or higher (will likely work with earlier versions)
  2. Install the OpenAL development libraries (libopenal-dev)
  3. In a command prompt, navigate to [install_root]/build/cmake
  4. Run 'cmake .' (for Debug builds, run 'cmake . -DCMAKE_BUILD_TYPE=Debug')
  5. Run 'make'
2.2.3  Linking

To use Gorilla Audio in your own project, you must include the library headers, as well as link against the library binaries. Please add [install_root]/include to your list of additional include directories. If you built sources, you will find the compiled library binaries under [install_root]/bin/[platform]. If you downloaded a binary package, you will find the library binaries within said package.

You are expected to know how to configure your project based on your choice of platform or development tool. If you do not, please search the internet for 'linking an external library'.

2.3  External Dependencies

Gorilla Audio can be configured to use external libraries, which may require additional configuration as described below:

2.3.1  OpenAL (Windows, Mac, Linux)

If the ENABLE_OPENAL flag is set in the CMake configuration, the library will compile and dynamically link against the OpenAL library on your platform. You may need to do a full rebuild after setting the ENABLE_OPENAL flag in CMake. If you enable this flag, you will also need to link your final project against the OpenAL library.

The Windows headers and .lib files for OpenAL are included as part of the Gorilla Audio project. On Windows, you will need to distribute the OpenAL runtime (oalinst.exe) along with your final product.

On Linux, you will need to install the libopenal-dev package in order to link against OpenAL.

OpenAL is licensed under the LGPL license.

2.3.2  DirectX (Windows)

If the ENABLE_XAUDIO2 flag is set in the CMake configuration, the library will compile and link against the XAudio2 library (part of DirectX). You may need to do a full rebuild after setting the ENABLE_XAUDIO2 flag in CMake. If you enable this flag, you will also need to link your final project against the XAudio2 DirectX library (xapobase.lib).

Once you set the ENABLE_XAUDIO2 flag, you will need to provide a path to the DirectX SDK directory (the directory containing the DirectX 'Include', 'Lib', and 'Redist' directories). This is done by setting the DIRECTXSDK_PATH variable in your CMake configuration. The DirectX SDK is available from Microsoft.

2.3.3  Ogg/Vorbis (Windows, Mac, Linux)

The Ogg/Vorbis project sources are included as part of the Gorilla Audio project. They are used by the GAU library.

Ogg/Vorbis is licensed under the Xiph license.

3  API Overview
3.1  Modules

The Gorilla Audio library consists of 3 modules:

  • Gorilla Common (GC) - Non-audio-specific classes that are common to most libraries.
  • Gorilla Audio (GA) - The core low-level interface for audio streaming and playback.
  • Gorilla Audio Utility (GAU) - The high-level interface for initializing, loading and managing audio streaming and playback. GAU is built entirely on top of GA.

To use these modules, you can just include "gorilla/ga.h" and "gorilla/gau.h". The GC library is implicitly included by these other libraries.

3.2  Memory Management

Memory management in Gorilla is handled in two different ways, depending on the type of object. The API reference specifies which objects use which model.

  • POD Objects: For simple plain-ol'-data (POD) objects, you can allocate/free the object any way you please, though you should be sure to call their *_init() function if available.
  • Single-Client Objects: For objects that have a single client, a managed create/destroy model is used. NOTE: This model works identically to the reference-counting model used by Multiple-Client Objects, except that instead of calling *_release(), you must instead call *_destroy().
  • Multiple-Client Objects: For objects that are shared between multiple clients, a standard acquire/release reference-counting model is used. Multiple-Client objects always come with a *_create() function that creates an instance of the object. To free the object, DO NOT attempt to deallocate it yourself. Instead, you should call *_release(). When passing the object reference to a new client, you MUST make sure to call *_acquire() to add an additional reference. Otherwise, you may find the object unexpectedly deallocated when all other clients have released their reference.
3.3  Concepts
3.3.1  Audio Data (PCM)

Audio data on computers is usually represented as Pulse Code Modulation (PCM) data. PCM data stores the position of a speaker diaphragm over time, which lets us approximate the analog waveforms of sounds. PCM data is stored as a series of samples over time. A 'sample' of audio represents a speaker position at a single point in time.

3.3.2  PCM Sample Rate

The rate of PCM samples per second is called the sample rate, and the higher the sample rate, the more the sound can resemble the smooth waveforms we find in nature. Common PCM sample rates include: 44100 Hz (CD quality), 22050 Hz, and 11025 Hz.

Resampling is the process of converting PCM data from one sample rate to another sample rate.

3.3.3  Streams

Streams are sequences of data that can be read from over time. This can be conceived of as a queue, where the first piece of data written to the stream will be the first piece of data read out of the stream. When you read data from a stream, it is called "streaming out". When you write data into a stream, it is called "streaming in".

This abstraction is particularly useful when you think about data as moving through a pipeline of transformations. In the case of audio, this pipeline is often: load file data from disk -> decompress file data into PCM data -> modify pitch/pan/gain of PCM data. For this reason, Gorilla uses Streams to organize its data-processing pipeline.

3.3.4  Data Sources

A data source is a stream of bytes of data. The format of this data generally corresponds to a file format, such as the .wav or .ogg formats. A data source's data can only be useful if the data format is known. In Gorilla, data sources are implemented as ga_DataSource*.

Out of the box, Gorilla allows for file data sources, archive data sources, and in-place memory data sources. These data sources can be used to stream in data of any format, such as the WAV and OGG file formats. In the future, Gorilla may add a network data source for streaming data from an internet URL.

3.3.5  Sample Sources

A sample source is a stream of samples of PCM data. The format of this data is specified by a ga_Format object, which defines the sample rate, bits-per-sample, and number of channels (stereo or mono) for a given sample source.

In Gorilla, sample sources are the components responsible for decoding the data from a data source (such as WAV or OGG data) into raw PCM data. Gorilla comes with both WAV- and OGG-decoding sample sources, as well as several others that perform useful data transformations.

3.3.6  Buffered Streaming Audio

The term 'stream' is unfortunately an ambiguous one in the world of audio programming. When people refer to 'streaming audio', what they usually are referring to is 'buffered streaming audio'.

Buffered streams are streams of data that are generated in advance, and then used later. This can be useful when, for instance, streaming audio over a network connection. If the network is congested, you may not receive your data in time to mix it in real-time. So, to prevent running out of samples (known as 'underrun'), the audio is streamed into an intermediate buffer, and then streamed back out of that buffer when it is needed by the mixer.

In Gorilla, these buffered streams are referred to simply as ga_Stream*, in keeping with popular usage. They are managed by ga_StreamManager* objects, which work in the background to fill the buffers whenever they are not full. ga_Stream* objects are wrapped into ga_SampleSource* objects, allowing them to be a component within the audio data pipeline.

Buffered streaming audio is a very popular technique for playing back large streams of music data from disk without suffering from buffer underrun, slow load times, and other inherent performance issues with large audio streams.

3.3.7  Cached Audio Data

Because disk I/O is often very slow, and because sounds in games are often fairly short, it is a popular technique to load and decompress sounds into cached buffers of PCM data that can be reused and shared by many streams. In Gorilla, this is done via the ga_Memory* and ga_Sound* data structures.

3.3.8  Audio Handles

An audio handle is a data structure that represents a stream of audio data, as well as controls through which you can transform that data during mixing. Common controls include volume (gain), pitch, stereo pan, playing, pausing, stopping, and looping. In Gorilla, audio handles are implemented as ga_Handle*.

3.3.9  Mixing

Mixing is the process of combining ('mixing') multiple streams of audio data into a single stream of audio data. In its simplest form, mixing is accomplished by summing together the simultaneous PCM samples from each audio handle. Historically, this data-intensive task of mixing multiple streams together was the job of dedicated sound cards, but it can now be done in real-time on modern CPUs.

A mixer is a data structure that tracks and manages multiple audio handles, and mixes their data into a buffer of audio data that can be presented to the audio device. In Gorilla, the mixer is implemented as ga_Mixer*.

3.3.10 Audio Devices

While Gorilla does all of its mixing on the CPU, it must nonetheless present the mixed data to the sound card for playback on speaker hardware. There are many different libraries available to handle this, which vary based on which operating system you are using. Gorilla abstracts this presentation device into ga_Device*, which can be implemented through different libraries depending on how the library is configured.

3.4  The Audio Pipeline

Gorilla Audio has a highly modular 'stream-based' pipeline for processing audio data. Here's how it works:

  • The Stream Chain - Loads + transforms audio data
    • There are two main classes of streams: Data Sources and Sample Sources.
    • These streams can have any number of inputs, but must have exactly one output.
    • Some load data, some decode it, some buffer it, some apply DSP filters, etc...
    • By connecting stream outputs and inputs, each stream forms a link in a 'stream chain'.
    • The data is processed as it 'flows' through the chain.
  • The Handle - Controls audio playback
    • Each stream chain terminates with a single handle object.
    • This handle allows you to control volume, pitch, pan, and playback for a stream of audio data.
  • The Mixer - Mixes many streams into a single buffer
    • The mixer keeps track of all active handles.
    • When the mixer needs to mix a new buffer, it streams in samples from each handle.
    • Once the mixer has retrieved enough samples from each handle, it mixes those samples together into a single buffer.
  • The Device - Plays the audio through your speakers
    • The buffer is then presented to the audio device for playback.

The diagram below demonstrates the audio pipeline for a simple WAV-loading stream chain:

[Diagram: file data source -> WAV-decoding sample source -> handle -> mixer -> device]

The above diagram shows only a simple example of a stream chain; they are often more complex.

The Gorilla Utility API provides several helper functions for constructing common stream chains.

4  Quick Start Tutorial
Welcome to Gorilla Audio! This tutorial will walk you through the basics of writing a project that plays back audio from files on the disk. For more advanced usage, please consult the full API documentation.
4.1  Including the library headers

You will need to include two headers in any file that uses the library directly:

#include "gorilla/ga.h"
#include "gorilla/gau.h"

The first header (ga.h) is the low-level Gorilla Audio interface, which we barely cover in this tutorial. The second header (gau.h) is the higher-level Gorilla Utility interface, which simplifies most tasks required for game audio.

4.2  Setup/Cleanup

The first step when using Gorilla is always to call gc_initialize(). This must be done before calling any other functions in the library. If you have custom allocators, you can configure them using this function. (For now we'll just pass in 0, which tells the library to use the default allocators.)

gc_initialize(0);


The next step is to create a gau_Manager, an all-in-one audio manager object. (NOTE: While it is possible for advanced users to use Gorilla without a gau_Manager, this is not recommended for beginners - nor is it usually necessary!).

gau_Manager* mgr;
mgr = gau_manager_create();

When you are finished using the library (usually when the program terminates) you must destroy the manager and then shutdown the library:

gau_manager_destroy(mgr);
gc_shutdown();


4.3  Updating the manager

The manager takes care of everything for you, from managing background threads, to mixing the audio, to pushing that mixed audio to the default audio device.

In order to keep things running smoothly, you need to make sure the update function gets called periodically. In games, this is usually done by calling this function once per frame:

gau_manager_update(mgr);


With that, Gorilla Audio is ready to play your sounds.

4.4  Playing sounds

In the hands of an experienced user, Gorilla can be configured to load sound data in many different ways. For the purposes of this tutorial, we'll focus on the most common case: files on disk.

When playing a sound, you need to provide a mixer. For buffered streams, you also need to provide a stream manager. Use gau_Manager to get both of these:

ga_Mixer* mixer = gau_manager_mixer(mgr);
ga_StreamManager* streamMgr = gau_manager_streamManager(mgr);

4.4.1  Loading/playing static sounds

For short sound effects, it is a common practice to load the sound data into memory, and then play it back many times.

The first step is to load the sound into memory:

ga_Sound* sound;
sound = gau_helper_sound_file("test.wav", "wav");

The next step is to create handles that can play back the sound's data:

ga_Handle* handle;
handle = gau_create_handle_sound(mixer, sound, &gau_on_finish_destroy, 0, 0);

The last step is to play the handle:

ga_handle_play(handle);


NOTE: In this example, we pass &gau_on_finish_destroy as a parameter to gau_create_handle_sound(). This tells the handle to destroy itself when the sound finishes playing. You can pass in 0 for this parameter to control destruction manually.

4.4.2  Playing buffered streams

Creating a buffered stream handle requires just one function call:

ga_Handle* handle;
handle = gau_create_handle_buffered_file(mixer, streamMgr, "test.ogg", "ogg", &gau_on_finish_destroy, 0, 0);

Then, as with any handle, we tell it to play:

ga_handle_play(handle);


That's it! The stream will read and decode the file on a background thread continuously throughout playback.

4.5  Controlling handles

Once a handle has been created, you have access to the following playback controls:

  • Play

    ga_handle_play(handle);

  • Stop

    ga_handle_stop(handle);

  • Gain (Volume) [0.0 -> 1.0]

    ga_handle_setParamf(handle, GA_HANDLE_PARAM_GAIN, gain);

  • Pitch (0.0 -> 16.0]

    ga_handle_setParamf(handle, GA_HANDLE_PARAM_PITCH, pitch);

  • Pan [-1.0 -> 0.0 -> 1.0]

    ga_handle_setParamf(handle, GA_HANDLE_PARAM_PAN, pan);

4.6  Looping

Looping in Gorilla Audio is implemented by way of a special sample source. As such, the interface for looping can seem counterintuitive.

To loop a playing handle, pass a gau_SampleSourceLoop** as the last parameter of your handle-creation function:

ga_Handle* handle;
gau_SampleSourceLoop* loopSrc;
handle = gau_create_handle_sound(mixer, sound, 0, 0, &loopSrc);

By passing in a non-zero value, you are requesting the handle to be loopable.

To set loop points, call:

gau_sample_source_loop_set(loopSrc, trigger, target);

The 'trigger' is the sample number that triggers a loop. The 'target' is the sample that should be looped back to. To loop the whole stream, set trigger to -1 and target to 0.

To stop looping, call:

gau_sample_source_loop_clear(loopSrc);

4.7  Callbacks

When a handle finishes playing, you may need to perform some program-specific operation. If so, you can optionally provide a callback and context pointer:

void* context = &someData;
handle = gau_create_handle_sound(mixer, sound, &callback, context, 0);

This callback will be called after the handle finishes playing, when you next call gau_manager_update(). The context will be passed along to the callback.

4.8  Further Examples

Looking for more example code? Check out the /examples directory in the source code archive, which contains several full-program examples.

5  History
5.1  Versions
- Fixed various significant memory leaks (with help from hjj).
- All documentation written, at least in provisional form (GAU still needs more detail).
- Windows builds now only enable the OpenAL device by default. You can set ENABLE_XAUDIO2 in cmake-gui to enable XAudio2 devices.
- gau_manager_create() now takes 0 parameters. To customize the gau_Manager settings, please use gau_manager_create_custom().
- Various other small changes to the interface.
- Added minimal Doxygen comments to the Gorilla Common API (GAU is still forthcoming).
- Updated Linux and OSX ports to work correctly with latest changes.
- Created logo.
- Added copious Doxygen comments to the Gorilla Audio API (GAU/GC are still forthcoming).
- Moved device headers into gorilla/devices.
- Moved various internal functions, definitions, and data structures into ga_internal.h header.
- Implemented XAudio2 device support for Windows.
- Implemented ga_Memory data structure for refcounting shared memory.
- Stubbed in XAudio2 support.
- Added gau_sample_source_loop_count() function to count how many times something has looped. This is a stop-gap feature, to be deprecated after planned looping improvements.
- You can now query the gau_Manager for its device.
- Moved device initialization into gau_Manager.
- Removed loopStart and loopEnd parameters from gau_helper_*.
- Stubbed in DirectSound support in CMake project files.
- Removed vestigial streamLink handle property.
- Cleaned up the examples.
- WAV loader rewrite: WAV loader is cleaner, clearer, and far more robust at handling WAV files with extension chunks.
- First properly-versioned release.
- Fixed deadlock in archive data source.
- Replaced an accidental naked malloc with internal allocFunc.
- Fixed a bug in looping logic.
- Wrote a new helper function, gau_helper_stream_data().
< 0.2.2
- Prior versioning information is 'pre-historic'.
- Please browse the source code repository for a full changelog.
5.2  Development Roadmap

The following features are tentatively planned for development over the next year. Relative priority of these features will be based on which are in highest demand by the library's users.

  • Implement more platforms/devices:
    • Linux (using OpenAL)
    • DirectSound device (Windows)
    • WinMM device (Windows)
    • ALSA (Linux)
    • CoreAudio (OSX/iOS)
    • OpenSL (Android 2.3+)
    • OpenMAX AL device (Multi)
  • Improve looping system
    • Make it queryable and/or specify a number of future loops to perform
    • Expose loop interface to generic SampleSource data structure
  • Specifiable/queryable device format (partly done)
  • Handle-group support (synchronized start/seek/pause/stop)
  • Mono mixer
  • 8-bit mixer
  • Arbitrary mixer rate
  • Opaque handles (ints)
  • Push handles writeable from main thread into an internal buffer
  • Support for enumerating devices
  • Doxygen comments for the API
  • Glossary of terms
  • Tutorial on how to get started (akin to the example code)
  • Documentation on how to extend the systems
  • Better error codes and reporting
  • Create framework for applying DSP filters
  • Implement support for common filters:
    • Reverb, compressor, distortion, echo, equalizer, flanger, gargle, chorus, high-pass, low-pass, wah-wah
  • Input audio recording (recording devices + wrapping samplesource)
  • Optimize mixer (less branches, SIMD)
  • Tracker support (MOD/S3M/XM/IT)
  • Surround sound (multi-channel) input and output formats
  • Support for multiple ga_StreamManager threads
  • Floating-point format support
  • MP3 support (optional)
  • AIFF support
  • OGG Opus support
  • Network-streaming audio (OGG, MP3, ShoutCast, IceCast)
  • Handle-locking for atomic groups of control commands
  • Fix: seeking can currently cause a handle to have fewer samples than expected mid-mix (rare race condition, can cause stutter/desync)