Gorilla Audio 0.3.1
Cross-platform C audio mixer library for games
Gorilla Audio is an attempt to make a free, straightforward, cross-platform, high-level software audio mixer that supports playback of both static and streaming sounds. It is intended for video game development, but should easily support many other real-time audio software applications.
The library is written in ANSI C, and is licensed under the MIT license. It was written by Tim Ambrogi, engine programmer at Final Form Games (makers of Jamestown).
This library is currently under active development, and has been used in two successful commercial game projects: Spirits and Jamestown. It will be used in the upcoming PC/Mac/Linux release of The Splatters.
In the world of independent game development, there exist many excellent tools to eliminate the need for writing low-level systems. Engines like Unreal and Unity provide everything out of the box, and middleware like FMOD and BASS gives you tremendous power to implement great game audio.
Q: So, why bother with another audio library?
A: The short answer is because every other library comes with strings attached. Either you're married to a heavyweight framework, or it's a black box, or you need to pay for a license, or the license doesn't allow for commercial use, or you need to write a huge amount of code to perform common/straightforward tasks; generally it's a combination thereof.
Additionally, I have spoken to many indie developers who created their own game on top of a homebrew engine, and then decided to port the game to other platforms. When it comes to writing cross-platform audio, the common solutions are to a) write up a thin layer on top of OpenAL that implements sounds and streaming music, or b) buy a middleware license.
Gorilla Audio is an attempt to provide a third option: a completely free library that is quicker-to-write and more powerful than a thin OpenAL layer, and (infinitely!) cheaper than commercial middleware.
Gorilla Audio was designed with the following guiding principles in mind:
While the current version of the library satisfies each of these goals to a certain extent, there's still plenty of room to improve. As the library develops, it will endeavor to uphold these guiding principles.
Gorilla Audio offers the following features:
The current feature set represents a minimal but powerful subset of what's available in many commercial audio packages. This feature set is just a starting point, and over the next year several other features will be added to the library. Such features include:
You can browse the full roadmap here.
Gorilla is still a young library, but here are some reasons why it may already be a better choice than some popular alternatives...
Gorilla Audio is licensed under the MIT license:
Copyright (C) 2012 Tim Ambrogi
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The project contains some other libraries that are licensed under terms other than MIT. This includes code under the Xiph and LGPL licenses. Non-MIT code in the repository is denoted by COPYING files.
Gorilla Audio is written and maintained by: Tim Ambrogi. He is the engine programmer at Final Form Games (Jamestown), and was responsible for the PC and Linux ports of Spirits.
You can contact him at: email@example.com, or follow him on twitter (@westquote)
Logo courtesy of: Mike Ambrogi
Demo music by: Francisco Cerda, and licensed under creative commons
Grateful thanks to: Jason Earp, Niv Fisher, Justin Mullens, Jordan Fehr, Jim Crawford, Ichiro Lambe, Ryan Gordon, Nicholas 'Indy' Ray, and Chris Cornell
While Gorilla Audio strives to be a beginner-friendly library, it does assume that you have some experience programming in either C or C++, and that you know how to create projects and link libraries on your platform.
If you are building the library from source, you will need to install CMake 2.8 or later, as well as a C or C++ compiler.
Please see section '2.3 - External Dependencies' for platform-specific details on setting up external libraries.
You can download the latest stable binaries + source code from the downloads page.
For the latest development version, you must clone the project using Mercurial from this location:
hg clone https://code.google.com/p/gorilla-audio/
Before building sources on any platform, make sure to download and expand the latest source code archive, and to install CMake 2.8 or higher. For the purposes of this section [install_root] refers to the directory where you expanded the source code archive.
To use Gorilla Audio in your own project, you must include the library headers, as well as link against the library binaries. Please add [install_root]/include to your list of additional include directories. If you built sources, you will find the compiled library binaries under [install_root]/bin/[platform]. If you downloaded a binary package, you will find the library binaries within said package.
You are expected to know how to configure your project based on your choice of platform or development tool. If you do not, please search the internet for 'linking external library' along with the name of your platform.
Gorilla Audio can be configured to use external libraries, which may require additional configuration as described below:
If the ENABLE_OPENAL flag is set in the CMake configuration, the library will compile and dynamically link against the OpenAL library on your platform. You may need to do a full rebuild after setting the ENABLE_OPENAL flag in CMake. If you enable this flag, you will also need to link your final project against the OpenAL library.
The Windows headers and .lib files for OpenAL are included as part of the Gorilla Audio project. On Windows, you will need to distribute the OpenAL runtime (oalinst.exe) along with your final product. The Windows OpenAL runtime is available here.
On Linux, you will need to install the libopenal-dev package in order to link against OpenAL.
OpenAL is licensed under the LGPL license.
If the ENABLE_XAUDIO2 flag is set in the CMake configuration, the library will compile and link against the XAudio2 library (part of DirectX). You may need to do a full rebuild after setting the ENABLE_XAUDIO2 flag in CMake. If you enable this flag, you will also need to link your final project against the XAudio2 DirectX library (xapobase.lib).
Once you set the ENABLE_XAUDIO2 flag, you will need to provide a path to the DirectX SDK directory (the directory containing the DirectX 'Include', 'Lib', and 'Redist' directories). This is done by setting DIRECTXSDK_PATH in your CMake configuration. You can download the DirectX SDK here.
The Ogg/Vorbis project sources are included as part of the Gorilla Audio project. They are used by the GAU library.
Ogg/Vorbis is licensed under the Xiph license.
The Gorilla Audio library consists of 3 modules:
To use these modules, you can just include "gorilla/ga.h" and "gorilla/gau.h". The GC library is implicitly included by these other libraries.
Memory management in Gorilla is handled in two different ways, depending on the type of object. The API reference specifies which objects use which model.
Audio data on computers is usually represented as Pulse Code Modulation (PCM) data. PCM data stores the position of a speaker's diaphragm over time, which lets us approximate the analog waveforms of sounds. PCM data is stored as a series of samples over time. A 'sample' of audio represents a speaker position at a single point in time.
The rate of PCM samples per second is called the sample rate, and the higher the sample rate, the more the sound can resemble the smooth waveforms we find in nature. Common PCM sample rates include: 44100 Hz (CD quality), 22050 Hz, and 11025 Hz.
Resampling is the process of converting PCM data from one sample rate to another sample rate.
Streams are sequences of data that can be read from over time. This can be conceived of as a queue, where the first piece of data written to the stream will be the first piece of data read out of the stream. When you read data from a stream, it is called "streaming out". When you write data into a stream, it is called "streaming in".
This abstraction is particularly useful when you think about data as moving through a pipeline of transformations. In the case of audio, this pipeline is often: load file data from disk -> decompress file data into PCM data -> modify pitch/pan/gain of PCM data. For this reason, Gorilla uses Streams to organize its data-processing pipeline.
A data source is a stream of bytes of data. The format of this data generally corresponds to a file format, such as the .wav or .ogg formats. A data source's data can only be useful if the data format is known. In Gorilla, data sources are implemented as ga_DataSource*.
Out of the box, Gorilla allows for file data sources, archive data sources, and in-place memory data sources. These data sources can be used to stream in data of any format, such as the WAV and OGG file formats. In the future, Gorilla may add a network data source for streaming data from an internet URL.
A sample source is a stream of samples of PCM data. The format of this data is specified by a ga_Format object, which defines the sample rate, bits-per-sample, and number of channels (stereo or mono) for a given sample source.
In Gorilla, sample sources are the components responsible for decoding the data from a data source (such as WAV or OGG data) into raw PCM data. Gorilla comes with both WAV- and OGG-decoding sample sources, as well as several others that perform useful data transformations.
The term 'stream' is unfortunately an ambiguous one in the world of audio programming. When people refer to 'streaming audio', what they usually are referring to is 'buffered streaming audio'.
Buffered streams are streams of data that are generated in advance, and then used later. This can be useful when, for instance, streaming audio over a network connection. If the network is congested, you may not receive your data in time to mix it in real-time. So, to prevent running out of samples (known as 'underrun'), the audio is streamed into an intermediate buffer, and then streamed back out of that buffer when it is needed by the mixer.
In Gorilla, these buffered streams are referred to simply as ga_Stream*, in keeping with popular usage. They are managed by ga_StreamManager* objects, which work in the background to fill the buffers whenever they are not full. ga_Stream* objects are wrapped into ga_SampleSource* objects, allowing them to serve as components within the audio data pipeline.
Buffered streaming audio is a very popular technique for playing back large streams of music data from disk without suffering from buffer underrun, slow load times, and other inherent performance issues with large audio streams.
Because disk I/O is often very slow, and because sounds in games are often fairly short, it is a popular technique to load and decompress sounds into cached buffers of PCM data that can be reused and shared by many streams. In Gorilla, this is done via the ga_Memory* and ga_Sound* data structures.
An audio handle is a data structure that represents a stream of audio data, as well as controls through which you can transform that data during mixing. Common controls include volume (gain), pitch, stereo pan, playing, pausing, stopping, and looping. In Gorilla, audio handles are implemented as ga_Handle*.
Mixing is the process of combining ('mixing') multiple streams of audio data into a single stream of audio data. In its simplest form, mixing is accomplished by summing together the simultaneous PCM samples from each audio handle. Historically, this data-intensive task of mixing multiple streams together was the job of dedicated sound cards, but it can now be done in real-time on modern CPUs.
A mixer is a data structure that tracks and manages multiple audio handles, and mixes their data into a buffer of audio data that can be presented to the audio device. In Gorilla, the mixer is implemented as ga_Mixer*.
While Gorilla does all of its mixing on the CPU, it must nonetheless present the mixed data to the sound card for playback on speaker hardware. There are many different libraries available to handle this, which vary based on which operating system you are using. Gorilla abstracts this presentation device into ga_Device*, which can be implemented through different libraries depending on how the library is configured.
Gorilla Audio has a highly modular 'stream-based' pipeline for processing audio data. Here's how it works:
The diagram below demonstrates the audio pipeline for a simple WAV-loading stream chain:
The above diagram shows only a simple example of a stream chain; real chains are often more complex.
The Gorilla Utility API provides several helper functions for common stream chains here.
You will need to include two headers in any file that uses the library directly:
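(These are the headers named in the modules section.)

```c
#include "gorilla/ga.h"
#include "gorilla/gau.h"
```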
The first header (ga.h) is the low-level Gorilla Audio interface, which we barely cover in this tutorial. The second header (gau.h) is the higher-level Gorilla Utility interface, which simplifies most tasks required for game audio.
The first step when using Gorilla is always to call gc_initialize(). This must be done before calling any other functions in the library. If you have custom allocators, you can configure them using this function. (For now we'll just pass in 0, which tells the library to use the default allocators.)
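A minimal call, passing 0 to request the default allocators as described above:

```c
gc_initialize(0);
```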
The next step is to create a gau_Manager, an all-in-one audio manager object. (NOTE: While it is possible for advanced users to use Gorilla without a gau_Manager, this is not recommended for beginners - nor is it usually necessary!).
mgr = gau_manager_create();
When you are finished using the library (usually when the program terminates) you must destroy the manager and then shutdown the library:
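Assuming the teardown functions mirror the naming used elsewhere in this document (gau_manager_destroy() as the counterpart of gau_manager_create(), and gc_shutdown() as the counterpart of gc_initialize()), this looks like:

```c
gau_manager_destroy(mgr);
gc_shutdown();
```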
The manager takes care of everything for you, from managing background threads, to mixing the audio, to pushing that mixed audio to the default audio device.
In order to keep things running smoothly, you need to make sure the update function gets called periodically. In games, this is usually done by calling this function once per frame:
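The update function is gau_manager_update(), as referenced in the callbacks section below:

```c
gau_manager_update(mgr);
```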
With that, Gorilla Audio is ready to play your sounds.
In the hands of an experienced user, Gorilla can be configured to load sound data in many different ways. For the purposes of this tutorial, we'll focus on the most common case: files on disk.
When playing a sound, you need to provide a mixer. For buffered streams, you also need to provide a stream manager. Use gau_Manager to get both of these:
ga_Mixer* mixer = gau_manager_mixer(mgr);
ga_StreamManager* streamMgr = gau_manager_streamManager(mgr);
For short sound effects, it is a common practice to load the sound data into memory, and then play it back many times.
The first step is to load the sound into memory:
sound = gau_helper_sound_file("test.wav", "wav");
The next step is to create handles that can play back the sound's data:
handle = gau_create_handle_sound(mixer, sound, &gau_on_finish_destroy, 0, 0);
The last step is to play the handle:
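Assuming the play call on a handle is ga_handle_play() (this document does not spell out the function name):

```c
ga_handle_play(handle);
```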
NOTE: In this example, we pass &gau_on_finish_destroy as a parameter to gau_create_handle_sound(). This tells the handle to destroy itself when the sound finishes playing. You can pass in 0 for this parameter to control destruction manually.
Creating a buffered stream handle requires just one function call:
handle = gau_create_handle_buffered_file(mixer, streamMgr, "test.ogg", "ogg", &gau_on_finish_destroy, 0, 0);
Then, as with any handle, we tell it to play:
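A single call on the handle, again assuming the play function is ga_handle_play():

```c
ga_handle_play(handle);
```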
That's it! The stream will read and decode the file on a background thread continuously throughout playback.
Once a handle has been created, you have access to the following playback controls:
ga_handle_setParamf(handle, GA_HANDLE_PARAM_GAIN, gain);
ga_handle_setParamf(handle, GA_HANDLE_PARAM_PITCH, pitch);
ga_handle_setParamf(handle, GA_HANDLE_PARAM_PAN, pan);
Looping in Gorilla Audio is implemented by way of a special sample source. As such, the interface for looping can seem counterintuitive.
To loop a playing handle, pass a gau_SampleSourceLoop** as the last parameter to your handle-creation function:
gau_SampleSourceLoop* loopSrc;
handle = gau_create_handle_sound(mixer, sound, 0, 0, &loopSrc);
By passing in a non-zero value, you are requesting the handle to be loopable.
To set loop points, call:
gau_sample_source_loop_set(loopSrc, trigger, target);
The 'trigger' is the sample number that triggers a loop. The 'target' is the sample that should be looped back to. To loop the whole stream, set trigger to -1 and target to 0.
To stop looping, call:
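The original text does not name the call here. Assuming the loop source's API mirrors gau_sample_source_loop_set(), a clearing function along the lines of gau_sample_source_loop_clear() would be used (treat the exact name as an assumption and check gau.h in your version):

```c
gau_sample_source_loop_clear(loopSrc);
```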
When a handle finishes playing, you may need to perform some program-specific operation. If so, you can optionally provide a callback and context pointer:
void* context = &someData;
handle = gau_create_handle_sound(mixer, sound, &callback, context, 0);
This callback will be called after the handle finishes playing, when you next call gau_manager_update(). The context will be passed along to the callback.
Looking for more example code? Check out the /examples directory in the source code archive, which contains several full-program examples.
The following features are tentatively planned for development over the next year. Relative priority of these features will be based on which are in highest demand by the library's users.