By Stephan Schütz
There is never enough memory.
One of the greatest misconceptions in the game industry today is that smaller platforms with limited resources allow for nothing more than a handful of sounds and short, repetitive music loops. Not only is this untrue now; it has not been true for quite some time.
This article will focus on a specific title developed for both iOS and Android platforms to illustrate the possibilities available for the creation of audio environments on all platforms using real-time generated sound effects and music.
Battlepath Monsters is a newly released MMO (massively multiplayer online) game for iOS and Android. The scope of the game design is very ambitious, and when I was asked to produce the audio for the project I wanted to create an audio environment that would complement the grand scale of the design and its world setting. Most importantly, I wanted to create an audio environment that people would want to listen to. All too often, this is not the case with mobile games.
Time for a new approach
Games can differ greatly from their linear cousins, film and television, in that you seldom have the same experience twice. The very nature of an interactive medium is that it attempts to create a more realistic experience by altering in response to the user's input. Film and television, being linear, have no requirement to fulfil this function. In real life, however, repeating the same action over and over will rarely have exactly the same result. Even an action as simple as bouncing a ball has a vast number of influences that will modify the outcome. Speed, angle, power and floor surface will all affect the sound a ball makes when it bounces, and even the slightest variations result in changes to the sound produced.
Audio in games usually provides feedback for actions or events, often in an attempt to simulate a real-life experience, yet many games utilise a single sound in a repetitive manner as audio feedback. As the industry strives for greater realism in creating environments and representing actions, the audio should support the sense of realism rather than damage the illusion with repetitive sounds that, in the worst case, can be simply annoying to the player.
The concept here is to build an audio environment that consists of sounds and music that are not static pre-made sound files. Instead, every sound heard during game-play is unique, generated in real time when triggered. Unique does not mean a particular sound needs to vary greatly each time it is played; in fact, it is often important to match sounds to events or animations, so they need to be definable and controllable in their output. Even the subtlest of variations, however, can prevent a sound from being perceived as repetitive.
The core of this method is to take the source sounds that are normally used to create sound effects in a linear editing program and instead add these components directly into an appropriate sound engine design tool. For Battlepath Monsters I worked with FMOD Designer 2010 and I will use its functionality to describe the process of this method.
How does it work?
FMOD Designer allows the user to create components and combine them into sound events. It accepts a range of different sound file types for building these events, but it is how it uses them that is most relevant.
FMOD does not alter the base sound file to create sounds; in this way it is a non-destructive method of sound production. Traditional sound editing software uses a destructive method, where altering a sound via the various processes results in the initial sound being overwritten. (You can obviously retain the original file as well, but then you have two files, taking up twice the space.) Once a sound file is added to FMOD it is referenced to create any number of sound effects; the original file is never altered. This is both an efficient and powerful method of creating sound effects.
The real power behind this process for game development is that once a sound file is placed into memory it can be used literally limitless times as the foundation for creating sounds. When combined with other sound files and a range of processes and effects, complex audio environments can be realised.
It does what with the who now?
For example, using the sound of a shotgun discharging as the core sound file, a variety of possibilities can be produced. The basic sound of the shotgun can be passed directly through FMOD and placed in an event to represent a shotgun being fired. This is the most obvious application of the original sound file. A second sound can be generated with the simple application of a pitch alteration down by approximately 1.5 octaves. This will produce a lower, longer blast that can effectively represent the sound of an explosive device such as a grenade. By combining the raw sound and the lower-pitched sound with a selection of other elements, the shotgun can become the core of a range of explosive events, such as buildings or vehicles being destroyed, by adding sounds to represent debris, smashing glass, metal and so on. The smashing metal elements could then be pitched down dramatically to create the sound of a ship's hull being torn open by an iceberg. Each sound "building block" can be utilised in multiple sound events to create a rich audio environment.
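The arithmetic behind a pitch shift like this is simple to work out. Here is a rough Python sketch of the relationship between semitones, playback rate and duration; the sample length is a hypothetical number, not taken from the actual project assets:

```python
def semitone_factor(semitones):
    """Playback-rate multiplier for a pitch shift measured in semitones."""
    return 2 ** (semitones / 12)

# Pitching the shotgun sample down 1.5 octaves (-18 semitones)
rate = semitone_factor(-18)        # ~0.354x normal playback speed

# A simple resampling pitch shift also stretches the sound,
# which is why the grenade blast sounds lower AND longer:
sample_len = 0.8                   # seconds (hypothetical sample length)
shifted_len = sample_len / rate    # ~2.26 seconds
```

This also shows why extreme downward shifts (the iceberg example) turn a short metallic crash into a long, groaning tear.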
I don’t want to repeat myself
The use of sound files in multiple sound events is an efficient use of memory, but this alone will not create a repetition-free audio environment. The trick is to provide a series of building blocks that combine in real time when a sound is triggered, and to randomise certain values at trigger time.
Battlepath Monsters involves a great deal of combat against a large variety of monsters and adversaries using magical, ranged and melee attacks. But even within a single specific attack there is variation in the audio, and here is how it works.
Taking the example of a fireball spell, the sound is designed to represent a magical spell that summons a large ball of flame and projects it towards a target with an explosive result. The sound of this event can be broken down into several stages.
- The initial surge or build up of flame
- The release of the energy as it is fired towards the target
- The movement of the flame energy as it travels to the target
- The detonation of the fireball upon impact with the target
Even if each of these stages uses only a single sound type, there can still be considerable variation included in the process. The sound of a giant two-metre gas flame igniting is the core whoosh sound used for the release of energy as the fireball is thrown. The original source for this stage includes four different recordings of a giant gas flame. FMOD allows all four recordings to be placed into a Sound Def. (Sound Defs are objects within FMOD with definable properties used to create sound events.) The Sound Def can be set to randomly select one of the four raw sound files each time it is triggered. The Sound Def also has pitch randomisation, volume randomisation and trigger delay properties that can all be defined to achieve the following result. Each time the Sound Def is triggered it will:
- Randomly select from the sound files provided
- Randomly pitch shift within the defined values
- Randomly alter volume within the defined values
- Randomly delay the start of the sound within the defined window
This will not produce truly infinite variability, but when it is combined with all four stages of the fireball sound, each with the above randomised values, the overall effect is that each fireball sound is subtly different every time it is triggered. There are other values that can be used to achieve different results, and the overall range of definable properties makes this method very powerful.
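To make the mechanics concrete, here is a minimal sketch of how such a randomised trigger could behave. This is illustrative Python, not FMOD's actual API; the class, its properties and the file names are all hypothetical:

```python
import random

class SoundDef:
    """Sketch of an FMOD-style Sound Def with randomised playback values."""
    def __init__(self, files, pitch_range=0.0, volume_range=0.0, max_delay=0.0):
        self.files = files                # candidate source recordings
        self.pitch_range = pitch_range    # +/- semitones of random pitch shift
        self.volume_range = volume_range  # random attenuation, in dB
        self.max_delay = max_delay        # random trigger delay window, seconds

    def trigger(self, rng=random):
        """Return playback parameters for one unique instance of the sound."""
        return {
            "file": rng.choice(self.files),
            "pitch": rng.uniform(-self.pitch_range, self.pitch_range),
            "volume_db": rng.uniform(-self.volume_range, 0.0),
            "delay": rng.uniform(0.0, self.max_delay),
        }

# The fireball "release" stage: four gas-flame recordings, mild randomisation
whoosh = SoundDef(
    ["flame_01.wav", "flame_02.wav", "flame_03.wav", "flame_04.wav"],
    pitch_range=2.0, volume_range=3.0, max_delay=0.05,
)
instance = whoosh.trigger()  # different parameters on every call
```

Layer four such Sound Defs (one per stage of the fireball) and no two casts of the spell come out identical.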
So I am avoiding repetition by repeatedly using sound files?
When creating the large range of spell sounds for Battlepath Monsters I wanted to create a level of consistency between the different magic class types. There are two main magical realms, Wizard Magic and Cleric Magic. The two realms have different spells but often they share similar components. Both realms of magic have fire based spells, both have ice based spells, so I needed something to reinforce what realm of magic was being used from an audio point of view.
I used a selection of previously created sound effects that functioned as drones, sweeps or pulses. These emotive ambient sounds had a nice magical or ethereal feel to them, so I chose a few that I thought would work nicely as the underlying audio identifiers for the two magical realms. I then added these to all the spells of the appropriate realm. This in itself is not an original approach, but because of the methodology I was using I was free to mix and alter the extent to which I used these extra sound elements. Right up to delivery, I could tweak each component layer of a sound effect without having to re-render the sound in a traditional editing program and re-import it into the project. As work progressed, some of my choices of sound elements proved too busy for the overall mix, or needed slight alterations. I was able to tweak the values of just the Sound Def that contained one of my magical realm sounds, and every event that contained a layer with that Sound Def was instantly updated. I even used these sounds as the basis for some of the generative music, sometimes pitch-shifted down dramatically to provide low drones, and at other times randomly pitched and blended to create great sweeping tones to represent the tides of magic in battle.
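The reason one tweak propagates everywhere is that events reference a shared Sound Def rather than holding their own copies of it. A trivial sketch of the idea, with hypothetical names and plain dictionaries standing in for FMOD objects:

```python
# Hypothetical: spell events layer a REFERENCE to a shared Sound Def,
# so changing the Def changes every event that uses it.
realm_drone = {"name": "wizard_realm_drone", "volume_db": 0.0}

fireball = {"layers": [realm_drone]}   # both spells point at the
ice_lance = {"layers": [realm_drone]}  # same underlying object

# One tweak to the shared Sound Def...
realm_drone["volume_db"] = -6.0

# ...is reflected in both spells, with no re-render or re-import.
assert fireball["layers"][0]["volume_db"] == -6.0
assert ice_lance["layers"][0]["volume_db"] == -6.0
```

This is exactly the non-destructive, reference-based workflow described earlier, applied at the mixing stage.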
Music? Let’s talk about the music
All of the methodology mentioned so far can be used to create music of various styles. The current functionality of FMOD Designer 2010 does make it tricky to create melodic themes that develop and change like more traditional music, but for the creation of ambient background themes or dynamic percussive rhythms FMOD can be very effective.
The original brief for the audio for Battlepath Monsters included only a single short piece of music for the opening screen. As with many other mobile game producers, the development team saw music as either impossible or likely to be terribly repetitive, and the extended periods of time users would likely spend playing Battlepath Monsters made repetition very undesirable. During production of the sound effects, however, I realised that I had at my disposal all the tools I needed to create an effective musical backdrop for the game, and best of all, in this case it would require no additional memory, as I would use the sound files already in memory for the sound effects. It seemed like a win-win situation.
And now a word from our sponsor
This is a bit of an aside, but since this article is discussing getting the best out of audio for mobile platforms I think it is worth considering. I believe that one of the most common errors in judgement when developing audio for a small device is to replicate the same methods used on larger platforms, but simply scaled down. This has proven time and again not to be the best solution. Too often mobile games are accompanied by triumphant bold melodic phrases that due to the nature of the device are set to repeat all too frequently. Even more subdued melodic themes can be very noticeable when they repeat. I have for some time advocated the use of sparse, ambient atmospheric music that simply works to set a general background mood. If you need dangerous or energetic music I think it should still be done in a limited way. A long low frequency drone can be an ominous accompaniment to combat. Even the best thumping battle track will wear thin when it repeats every twenty seconds. If your platform does not have the resources for a full emotive score, then approach the challenge in a very different way to best support the game narrative with your audio.
Making music for zero extra memory
This process will not always be as resource-light as it was for Battlepath Monsters. I was fortunate that the sound material I was using to create spell, monster and attack sounds was all very suitable for creating generative music. A drone sound used in a spell that is only a couple of seconds long can produce a great underlying layer for a music track when it is pitched down a couple of octaves; the sound is also lengthened greatly by this process, which for me worked very well for creating music. Another tool that FMOD Designer 2010 provides is a method very similar to generative synthesis. Within a Sound Def it is possible to define a period of time within which FMOD will randomly trigger the sound. FMOD will then count down this value and spawn the sound a second time, then a third, and so on. You can either define a set number of re-spawns or allow FMOD to continue indefinitely. When each newly triggered sound also benefits from the randomisation discussed earlier, it is possible with a single layer to create a sustained harmonic effect that will slowly shift and change through time. Combining other similar layers can create broad tonal sounds. Experimenting with the process can produce an incredible range of unexpected and exciting results.
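The re-spawning behaviour can be sketched as a simple scheduling loop. This is Python standing in for FMOD's internal behaviour; the function and its parameters are hypothetical:

```python
import random

def schedule_spawns(duration_s, min_gap_s, max_gap_s, rng=random):
    """Spawn times for a layer that re-triggers itself after a random
    interval drawn from a defined window, as a generative Sound Def would."""
    times, t = [], 0.0
    while t < duration_s:
        times.append(t)
        t += rng.uniform(min_gap_s, max_gap_s)
    return times

# A drone layer re-spawning every 1.5 to 4 seconds over one minute.
# Each spawn would also receive its own randomised pitch and volume,
# so the overlapping tails slowly shift and change through time.
spawns = schedule_spawns(60.0, 1.5, 4.0)
```

Stacking a few of these layers, each with its own window and randomisation ranges, gives the broad, evolving tonal beds described above.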
An alternative to randomly flowing, generative long tones is to define exact time intervals and create layers that produce specific rhythmic effects. (Set the Sound Def to spawn a percussive sound every 500 milliseconds and you get a drum beat at 120 bpm.) Using this method I was able to create short drum patterns appropriate for accompanying combat situations. Gongs and bells pitch-shifted down worked as rhythmic and tonal highlights. There are of course limitations to what can be produced, as complex multilayered generative events do consume CPU power. There can be timing issues if you do not plan out your layers correctly, but being able to create a range of musical tracks that generate in real time and can be set to play perpetually without ever looping, and in this case without requiring any more memory than had already been allocated for sound effects, was a result beyond initial expectations.
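The tempo arithmetic in the parenthetical example generalises easily. A short sketch:

```python
def bpm_from_interval(interval_ms):
    """Tempo implied by spawning one percussive hit per fixed interval."""
    return 60000.0 / interval_ms

def interval_for_bpm(bpm, subdivision=1):
    """Spawn interval (ms) for a given tempo and note subdivision."""
    return 60000.0 / (bpm * subdivision)

assert bpm_from_interval(500) == 120.0                 # the 500 ms example
assert interval_for_bpm(120, subdivision=2) == 250.0   # eighth notes at 120 bpm
```

Layering several Sound Defs with intervals that divide evenly into one another is what keeps such patterns locked together; intervals that do not share a common grid are one source of the timing issues mentioned above.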
How do I do this?
This article has been less a tutorial on how to create generative audio and more an overview illustrating its possibilities. For Battlepath Monsters I used FMOD Designer 2010, but there are other game audio tools capable of creating similar results. The real challenge here for many experienced sound designers and composers is to unlearn many of the things you have learnt in the past. FMOD and its cousins are not limited to linear editing, and the game industry is not film or television. There are so many ways in which games differ from linear media that we need to look beyond the traditional methods of producing our projects, as I think there are some great results just waiting for us to discover.
Stephan Schütze has been a composer and sound designer in the games industry for over ten years. In the last few years he has created the first Australian-produced sound library in over 50 years, which is being distributed world-wide. Stephan recently launched an iOS app of the largest sound effect ringtone library in the world. He loves chasing after things with microphones, creating audio environments and playing some of the outstanding examples of games produced in the world today.