
Audio SDK Version 1.25


Copyrights and Trademarks

© 2017 Oculus VR, LLC. All Rights Reserved.

OCULUS VR, OCULUS, and RIFT are trademarks of Oculus VR, LLC. (C) Oculus VR, LLC. All rights reserved. BLUETOOTH is a registered trademark of Bluetooth SIG, Inc. All other trademarks are the property of their respective owners. Certain materials included in this publication are reprinted with the permission of the copyright holder.



Contents

Introduction to Virtual Reality Audio
  Overview
  Localization and the Human Auditory System
    Directional Localization
    Distance Localization
  3D Audio Spatialization
    Directional Spatialization with Head-Related Transfer Functions (HRTFs)
    Distance Modeling
  Listening Devices
  Environmental Modeling
  Sound Design for Spatialization
  Mixing Scenes for Virtual Reality
  VR Audio Glossary

Oculus Audio SDK Guide
  SDK Contents and Features
  Requirements
  Features
    Supported Features
    Unsupported Features
  Sound Transport Time
  Attenuation and Reflections
  Pitfalls and Workarounds
  Platform Notes
  Middleware Support
  Oculus Hardware Capabilities

Oculus Native Spatializer for Unity
  Overview
  Requirements and Setup
  Exploring Oculus Native Spatializer with the Sample Scene
  Applying Spatialization
  Dynamic Room Modeling
  First-Party Audio Spatialization (Beta)
  Playing Ambisonic Audio in Unity 2017.1 (Beta)
  Migrating from TBE 3DCeption to Oculus Spatialization
  Managing Sound FX with Oculus Audio Manager

Oculus Spatializer for Wwise Integration Guide
  Overview
  Installing to the Wwise Authoring Tool
  Adding Target Platform Plugins to Wwise Unity Projects
  How to Use the Oculus Spatializer in Wwise
    Global Properties
    Sound Properties
  Integrating the Oculus Spatializer
  OSP Version Migration in Wwise

Oculus Spatializer for FMOD Integration Guide
  Overview
  How to Use in FMOD Studio
  Notes and Best Practices
  Oculus Spatial Reverb
  Ambisonics in FMOD
  Installing with the FMOD Studio Unity Integration
  OSP Version Migration in FMOD

Oculus VST Spatializer for DAWs Integration Guide
  Overview
  Using the Plugin
  VST Options and Parameters
  3D Visualizer
  DAW-Specific Notes
  Legal Notifications

Oculus AAX Spatializer for DAWs Integration Guide
  Overview
  Using the Plugin
  Track Parameters
  3D Visualizer

Oculus Lip Sync Unity Integration Guide
  Overview
  Requirements
  Download and Setup
  Using Lip Sync Integration
  Precomputing Visemes to Save CPU
  Exploring Oculus Lip Sync with the Sample Scene

Oculus Audio SDK Profiler
  Overview
  Setup
  Profiling Spatialized Audio

Oculus Audio Loudness Meter
  Overview
  Setup
  Measuring Loudness

Release Notes
  Audio SDK 1.25 Release Notes
  Audio SDK 1.24 Release Notes
  Audio SDK 1.22 Release Notes
  Audio SDK 1.20 Release Notes
  Audio SDK 1.19 Release Notes
  Audio SDK 1.18 Release Notes
  Audio SDK 1.17 Release Notes
  Audio SDK 1.16 Release Notes
  Audio SDK 1.1 Release Notes
  Audio SDK 1.0 Release Notes
  Audio SDK 0.11 Release Notes
  Audio SDK 0.10 Release Notes


Audio SDK Developer Reference
Audio Documentation Archive


Introduction to Virtual Reality Audio

Welcome to audio development for virtual reality! This document introduces fundamental concepts in audio development for virtual reality (VR) with an emphasis on key factors that deserve development attention.

Audio is crucial for creating a persuasive VR experience. Because of the key role that audio cues play in our sense of being present in an actual, physical space, any effort that development teams devote to getting it right will pay off in spades, as it will contribute powerfully to the user's sense of immersion. This is as true for small- or mid-sized teams as it is for design houses — perhaps even more so.

Oculus is committed to providing audio tools and technology to developers who want to create the most compelling experiences possible. Learn about, build, and share your VR audio experiences today!

Note: If you are a video producer looking to create spatialized audio for a 360 video, this is the wrong guide. This guide is to help software developers add spatialized audio to VR apps. Instead see Creating Spatial Audio for 360 Video using FB360 Spatial Workstation.

Supported Platforms

Oculus provides audio spatialization tools for the following game engines and middleware:

• Unity
• Audiokinetic Wwise
• FMOD Studio
• Avid Pro Tools (AAX)
• Various Windows DAWs (VST)

Learn

Sound design and mixing is an art form, and VR is a new medium in which it is expressed. Whether you're an aspiring sound designer or a veteran, VR provides many new challenges and inverts some of the common beliefs we've come to rely upon when creating music and sound cues for games and traditional media.

Watch Brian Hook's Introduction to VR Audio from Oculus Connect 2014: https://www.youtube.com/watch?v=kBBuuvEP5Z4

Watch Tom Smurdon and Brian Hook's talk about VR audio from GDC 2015: https://www.youtube.com/watch?v=2RDV6D7jDVs

Our Introduction to VR Audio white paper looks at key ideas and how to address them in VR.

Build

Once you've learned about the underpinnings of VR audio, the next step is to download the Oculus Audio SDK from our Downloads Page and start creating. How you proceed depends on your engine and audio middleware of choice. If you're currently developing with FMOD or Audiokinetic Wwise, you've already finished a good chunk of the work! Our Audio SDK has plugins for FMOD and Wwise so you can get up and running right away.


If you're developing for Unity, then things are just as easy. Audio integrations for Unity 4 and Unity 5 are part of the Oculus Audio SDK. Add our spatialization component to Unity's native audio objects and hear them in space all around you!


Audio designers can preview their sounds in most popular digital audio workstations (DAWs) using either our VST plugin or our Avid Pro Tools AAX plugin (available for Mac OS X and Windows).

Share

If you're interested in learning more about Oculus VR audio or just want to chat with other audio-minded developers, drop by our Audio Developer Forums.


Overview

This document introduces fundamental concepts in audio development for virtual reality (VR) with an emphasis on key factors that deserve development attention. We hope to establish that audio is crucial for creating a persuasive VR experience. Because of the key role that audio cues play in our cognitive perception of existing in space, any effort that development teams devote to getting it right will pay off in spades, as it will contribute powerfully to the user's sense of immersion. This is as true for small- or mid-sized teams as it is for design houses — perhaps even more so.

Audio has been a crucial part of the computer and video gaming experience since the advent of the first coin-op games, which filled arcades with bleeps, bloops, and digital explosions. Over time, the state of computer audio has steadily improved, from simple wave generators (SID, 1983) to FM synthesis (AdLib, 1987), evolving on to 8-bit mono samples (Amiga OCS, 1985; SoundBlaster, 1989) and 16-bit stereo samples (SoundBlaster Pro), culminating in today's 5.1 surround sound systems on modern gaming consoles (Xbox, 2001).

Since the development of 5.1 surround, little has changed. The fundamental technology of playing waveforms over speakers is the same, and the game playing environment is still primarily the living room or den with a large television and speakers. Virtual reality, however, is changing all this. Instead of a large environment with speakers, virtual reality brings the experience in close to the player via a head-mounted display (HMD) and headphones. The ability to track the user's head orientation and position significantly empowers audio technology. Until now, the emphasis has typically been placed on the visual aspects of virtual reality (resolution, latency, tracking), but audio must now catch up in order to provide the greatest sense of presence possible.

This document discusses the challenges, opportunities, and solutions related to audio in VR, and how some of the techniques learned in traditional game development must be revisited and modified for VR. It is not intended to be a rigorous scientific study of the nature of acoustics, hearing, and human auditory perception. Its intended audience includes anyone with an interest in audio and VR, including sound designers, artists, and programmers.

If you are interested in learning about these details in greater depth, we recommend searching the Web for the following terms:

• Head-Related Impulse Response
• Head-Related Transfer Function
• Sound Localization

Localization and the Human Auditory System

Consider the following: human beings have only two ears, but are able to locate sound sources within three dimensions. That shouldn't be possible — if you were given a stereo recording and were asked to determine if the sound came from above or below the microphones, you would have no way to tell. If you can't do it from a recording, how can you do it in reality?

Humans rely on psychoacoustics and inference to localize sounds in three dimensions, attending to factors such as timing, phase, level, and spectral modifications. This section summarizes how humans localize sound. Later, we will apply that knowledge to solving the spatialization problem, and learn how developers can take a monophonic sound and transform its signal so that it sounds like it comes from a specific point in space.


Directional Localization

In this section, we will look at the cues humans use to determine the direction to a sound source. The two key components of localization are direction and distance.

Lateral

Laterally localizing a sound is the simplest type of localization, as one would expect. When a sound is closer to the left, the left ear hears it before the right ear hears it, and it sounds louder. The closer to parity, the more centered the sound, generally speaking. There are, however, some interesting details.

First, we may primarily localize a sound based on the delay between the sound's arrival in both ears, or interaural time difference (ITD); or, we may primarily localize a sound based on the difference in the sound's volume level in both ears, or the interaural level difference (ILD). The localization technique we rely upon depends heavily on the frequency content of the signal.

Sounds below a certain frequency (anywhere from 500 to 800 Hz, depending on the source) are difficult to distinguish based on level differences. However, sounds in this frequency range have half wavelengths greater than the dimensions of a typical human head, allowing us to rely on timing information (or phase) between the ears without confusion. At the other extreme, sounds with frequencies above approximately 1500 Hz have half wavelengths smaller than the typical head. Phase information is therefore no longer reliable for localizing the sound. At these frequencies, we rely on level differences caused by head shadowing, or the sound attenuation that results from our heads obstructing the far ear (see figure below).
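As a rough illustration of the timing cue just described, the sketch below estimates the interaural time difference for a source at a given azimuth using Woodworth's spherical-head approximation. This is not part of the Oculus Audio SDK; the head radius and speed of sound are assumed typical values, and the formula is only intended for azimuths up to 90 degrees.

    // Minimal ITD sketch using Woodworth's spherical-head approximation.
    #include <cmath>
    #include <cstdio>

    // Azimuth in radians: 0 = straight ahead, pi/2 = directly to one side.
    double estimateItdSeconds(double azimuthRadians,
                              double headRadiusMeters = 0.0875,
                              double speedOfSound = 343.0) {
        // Woodworth's formula: ITD ~= (r / c) * (sin(theta) + theta)
        double theta = std::fabs(azimuthRadians);
        return (headRadiusMeters / speedOfSound) * (std::sin(theta) + theta);
    }

    int main() {
        const double kPi = 3.14159265358979;
        // A source 90 degrees to the side arrives roughly 0.66 ms earlier at
        // the near ear -- a small delay, but one we resolve easily.
        std::printf("ITD at 90 degrees: %.2f ms\n",
                    estimateItdSeconds(kPi / 2.0) * 1000.0);
        return 0;
    }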

We also key on the difference in time of the signal's onset. When a sound is played, which ear hears it first is a big part of determining its location. However, this only helps us localize short sounds with transients as opposed to continuous sounds. There is a transitional zone between roughly 800 Hz and 1500 Hz in which both level differences and time differences are used for localization.

Front/Back/Elevation

Front versus back localization is significantly more difficult than lateral localization. We cannot rely on time differences, since interaural time and/or level differences may be zero for a sound in front of or behind the listener. In the following figure we can see how sounds at locations A and B would be indistinguishable from each other, since they are the same distance from both ears, giving identical level and time differences.


Humans rely on spectral modifications of sounds caused by the head and body to resolve this ambiguity. These spectral modifications are filters and reflections of sound caused by the shape and size of the head, neck, shoulders, torso, and especially, by the outer ears (or pinnae). Because sounds originating from different directions interact with the geometry of our bodies differently, our brains use spectral modification to infer the direction of origin.

For example, sounds approaching from the front produce resonances created by the interior of our pinnae, while sounds from the back are shadowed by our pinnae. Similarly, sounds from above may reflect off our shoulders, while sounds from below are shadowed by our torso and shoulders. All of these reflections and shadowing effects combine to create a direction selective filter.

Head-Related Transfer Functions (HRTFs)

A direction selective filter can be encoded as a head-related transfer function (HRTF). The HRTF is the cornerstone of most modern 3D sound spatialization techniques. How we measure and create an HRTF is described in more detail elsewhere in this document.

Head Motion

HRTFs by themselves may not be enough to localize a sound precisely, so we often rely on head motion to assist with localization. Simply turning our heads changes difficult front/back ambiguity problems into lateral localization problems that we are better equipped to solve. In the following figure, sounds at A and B are indistinguishable from each other based on level or time differences, since they are identical. By turning her head slightly, the listener alters the time and level differences between ears, helping to disambiguate the location of the sound. D1 is closer than D2, which is a cue that the sound is to the left of (and thus behind) the listener.


Likewise, cocking our heads can help disambiguate objects vertically. In the following figure, the listener cocks her head, which results in D1 shortening and D2 lengthening. This provides a cue that the object is above her head instead of below it.

Distance Localization

ILD, ITD, and HRTFs help us determine the direction to a sound source, but they give relatively sparse cues for determining the distance to a sound. To determine distance we use a combination of factors, including initial time delay, ratio of direct sound to reverberant sound, and motion parallax.

Loudness

Loudness is the most obvious distance cue, but it can be misleading. If we lack a frame of reference, we can't judge how much the sound has diminished in volume from its source, and thus estimate a distance. Fortunately, we are familiar with many of the sound sources that we encounter daily, such as musical instruments, human voice, animals, vehicles, and so on, so we can predict these distances reasonably well. For synthetic or unfamiliar sound sources, we have no such frame of reference, and we must rely on other cues or relative volume changes to predict if a sound is approaching or receding.


Initial Time Delay

Initial time delay describes the interval between the direct sound and its first reflection. The longer this gap, the closer we assume that we are to the sound source.

Anechoic (echoless) or open environments such as deserts may not generate appreciable reflections, which makes estimating distances more difficult.

Ratio of Direct Sound to Reverberation

In a reverberant environment there is a long, diffuse sound tail consisting of all the late echoes interacting with each other, bouncing off surfaces, and slowly fading away. The more we hear of a direct sound in comparison to the late reverberations, the closer we assume it is. This property has been used by audio engineers for decades to move a musical instrument or vocalist “to the front” or “to the back” of a song by adjusting the “wet/dry mix” of an artificial reverb.

Motion Parallax

Motion parallax (the apparent movement of a sound source through space) indicates distance, since nearby sounds typically exhibit a greater degree of parallax than far-away sounds. For example, a nearby insect can traverse from the left to the right side of your head very quickly, but a distant airplane may take many seconds to do the same. As a consequence, if a sound source travels quickly relative to a stationary perspective, we tend to perceive that sound as coming from nearby.

High Frequency Attenuation

High frequencies attenuate faster than low frequencies, so over long distances we can infer a bit about distance based on how attenuated those high frequencies are. This effect is often a little overstated in the literature, because sounds must travel hundreds or thousands of feet before high frequencies (i.e., those well above 10 kHz) are noticeably attenuated. This is also affected by atmospheric conditions, such as temperature and humidity.

3D Audio Spatialization

The previous section discussed how humans localize the sources of sounds in three dimensions. We now invert that and ask, “Can we apply that information to fool people into thinking that a sound is coming from a specific point in space?”


The answer, thankfully, is “yes”; otherwise, this would be a pretty short document. A big part of VR audio is spatialization: the ability to play a sound as if it is positioned at a specific point in three-dimensional space. Spatialization is a key aspect of presence because it provides powerful cues suggesting the user is in an actual 3D environment, which contributes strongly to a sense of immersion. As with localization, there are two key components to spatialization: direction and distance.

Directional Spatialization with Head-Related Transfer Functions (HRTFs)

We know that sounds are transformed by our body and ear geometry differently depending on the incoming direction. These different effects form the basis of HRTFs, which we use to localize a sound.

Capturing HRTFs

The most accurate method of HRTF capture is to take an individual, put a couple of microphones in their ears (right outside the ear canal), place them in an anechoic chamber (i.e., an echo-free environment), play sounds in the chamber from every direction we care about, and record those sounds from the mics. We can then compare the original sound with the captured sound and compute the HRTF that takes you from one to the other. We have to do this for both ears, and we have to capture sounds from a sufficient number of discrete directions to build a usable sample set.

But wait — we have only captured HRTFs for a specific person. If our brains are conditioned to interpret the HRTFs of our own bodies, why would that work? Don't we have to go to a lab and capture a personalized HRTF set? In a perfect world, yes, we'd all have custom HRTFs measured that match our own body and ear geometry precisely, but in reality this isn't practical. While our HRTFs are personal, they are similar enough to each other that a generic reference set is adequate for most situations, especially when combined with head tracking.

Most HRTF-based spatialization implementations use one of a few publicly available data sets, captured either from a range of human test subjects or from a synthetic head model such as the KEMAR:

• IRCAM Listen Database
• MIT KEMAR
• CIPIC HRTF Database
• ARI (Acoustics Research Institute) HRTF Database

Most HRTF databases do not have HRTFs in all directions. For example, there is often a large gap representing the area beneath the subject's head, as it is difficult, if not impossible, to place a speaker one meter directly below an individual's head. Some HRTF databases are sparsely sampled, including HRTFs only every 5 or 15 degrees. Most implementations either snap to the nearest acquired HRTF (which exhibits audible discontinuities) or use some method of HRTF interpolation. This is an ongoing area of research, but for VR applications on desktops, it is often adequate to find and use a sufficiently dense data set.

Applying HRTFs

Given an HRTF set, if we know the direction we want a sound to appear to come from, we can select an appropriate HRTF and apply it to the sound. This is usually done either in the form of a time-domain convolution or an FFT/IFFT pair. If you don't know what these are, don't worry - those details are only relevant if you are implementing the HRTF system yourself.


Our discussion glosses over a lot of the implementation details (e.g., how we store an HRTF, how we use it when processing a sound). For our purposes, what matters is the high-level concept: we are simply filtering an audio signal to make it sound like it's coming from a specific direction.

Since HRTFs take the listener's head geometry into account, it is important to use headphones when performing spatialization. Without headphones, you are effectively applying two HRTFs: the simulated one, and the actual HRTF caused by the geometry of your body.

Head Tracking

Listeners instinctively use head motion to disambiguate and fix sound in space. If we take this ability away, our capacity to locate sounds in space is diminished, particularly with respect to elevation and front/back. Even ignoring localization, if we are unable to compensate for head motion, then sound reproduction is tenuous at best. When a listener turns their head 45 degrees to the side, we must be able to reflect that in their auditory environment, or the soundscape will ring false.

VR headsets such as the Rift provide the ability to track a listener's head orientation (and, sometimes, position). By providing this information to a sound package, we can project a sound in the listener's space, regardless of their head position. This assumes that the listener is wearing headphones. It is possible to mimic this with a speaker array, but it is significantly less reliable, more cumbersome, and more difficult to implement, and thus impractical for most VR applications.
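As a concrete illustration of the filtering step described under Applying HRTFs above, the sketch below performs a direct time-domain convolution of a mono source block with a left and right head-related impulse response (HRIR). Production spatializers typically use partitioned FFT convolution and interpolate between measured HRIRs; this only shows the concept and is not the Oculus SDK's implementation.

    // Minimal binaural filtering sketch: convolve a mono block with per-ear HRIRs.
    #include <cstddef>
    #include <utility>
    #include <vector>

    using Samples = std::vector<float>;

    // Convolve a mono input with one ear's HRIR. Output length = in + hrir - 1.
    Samples convolve(const Samples& in, const Samples& hrir) {
        Samples out(in.size() + hrir.size() - 1, 0.0f);
        for (std::size_t n = 0; n < in.size(); ++n)
            for (std::size_t k = 0; k < hrir.size(); ++k)
                out[n + k] += in[n] * hrir[k];
        return out;
    }

    // Produce a binaural (left, right) pair from a mono source using the HRIR
    // pair measured for the desired direction.
    std::pair<Samples, Samples> spatialize(const Samples& mono,
                                           const Samples& hrirLeft,
                                           const Samples& hrirRight) {
        return { convolve(mono, hrirLeft), convolve(mono, hrirRight) };
    }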

Distance Modeling

HRTFs help us identify a sound's direction, but they do not model our localization of distance. Humans use several factors to infer the distance to a sound source. These can be simulated with varying degrees of accuracy and cost in software:

• Loudness, our most reliable cue, is trivial to model with simple attenuation based on distance between the source and the listener.
• Initial Time Delay is significantly harder to model, as it requires computing the early reflections for a given set of geometry, along with that geometry's characteristics. This is both computationally expensive and awkward to implement architecturally (specifically, sending world geometry to a lower-level API is often complex). Even so, several packages have made attempts at this, ranging from simple “shoebox models” to elaborate full-scene geometric modeling.
• Direct vs. Reverberant Sound (or, in audio production, the “wet/dry mix”) is a natural byproduct of any system that attempts to accurately model reflections and late reverberations. Unfortunately, such systems tend to be very expensive computationally. With ad hoc models based on artificial reverberators, the mix setting can be adjusted in software, but these are strictly empirical models.
• Motion Parallax we get “for free,” because it is a byproduct of the velocity of a sound source.
• High Frequency Attenuation due to air absorption is a minor effect, but it is also reasonably easy to model by applying a simple low-pass filter and adjusting its cutoff frequency and slope. In practice, HF attenuation is not very significant in comparison to the other distance cues.
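Below is a minimal sketch combining the two cheapest cues from the list above: distance-based attenuation and a distance-driven low-pass cutoff for air absorption. The curve shape, range values, and cutoff mapping are illustrative assumptions, not the Oculus SDK's actual model; real engines expose these as tunable parameters.

    // Minimal distance-cue sketch: inverse-distance gain plus an air-absorption cutoff.
    #include <algorithm>
    #include <cmath>

    struct DistanceCues {
        double gain;          // linear gain applied to the source
        double lowpassCutoff; // low-pass cutoff in Hz hinting at air absorption
    };

    DistanceCues computeDistanceCues(double distanceMeters,
                                     double minRange = 1.0,    // full volume inside this range
                                     double maxRange = 100.0)  // clamped beyond this range
    {
        double d = std::clamp(distanceMeters, minRange, maxRange);

        // Inverse-distance ("1/r") attenuation referenced to the minimum range.
        double gain = minRange / d;

        // Gently close a low-pass filter with distance: fully open (20 kHz) at
        // minRange, down to roughly 4 kHz at maxRange.
        double t = (d - minRange) / (maxRange - minRange);
        double cutoff = 20000.0 * std::pow(4000.0 / 20000.0, t);

        return {gain, cutoff};
    }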

Listening Devices

Traditionally, high quality audio reproduction has been the domain of multi-speaker systems, often accompanied by one or more subwoofers. However, with the rise of online gaming and voice chat, many players have transitioned to headsets (headphones with integrated microphones).


For modern VR, especially with head tracking and user movement, speaker arrays are an evolutionary dead end. Headphone audio will be the standard for VR into the future, as it provides better isolation, privacy, portability, and spatialization.

Headphones

Headphones offer several significant advantages over free-field speaker systems for virtual reality audio:

• Acoustic isolation from the listener's environment enhances realism and immersion.
• Head tracking is greatly simplified.
• HRTFs are more accurate, since they don't suffer from the “doubling up” of HRTF effects (sounds modified by the simulated HRTF, and then again by the listener's actual body geometry).
• Access to controls while wearing an HMD is far simpler when those controls are physically attached to the listener.
• Microphones are ideally placed and subject to much less echo/feedback.

Headphones are available in a variety of types, with various trade-offs:

Closed Back Headphones

As a general rule of thumb, closed back headphones offer the most isolation and bass response. However, the closed construction may lead to discomfort (due to heat and weight), and they tend to offer less accurate reproduction due to internal resonance. Also, if placed on or over the ear, they cause the pinnae to impact sound reproduction slightly.


While acoustic isolation can help with immersion, it cuts listeners off from their environment, so they may be unable to hear others entering the room, a ringing cell phone, the doorbell, et cetera. Whether that is a good thing or not is up to the individual.

Open Back Headphones

Open back headphones are generally more accurate and comfortable than closed back headphones, but they do not isolate listeners from the exterior environment, and they broadcast to the surrounding environment as well. These are suitable for quiet areas devoted to a VR experience, possibly in conjunction with a subwoofer.

As with closed back headphones, when placed on or over the ear, open back headphones allow the pinnae to impact sound reproduction slightly.

Earbuds

Earbuds (such as those that ship with cell phones or portable music players) are cheap, lightweight, and very portable, though they typically lack bass. Some models, such as Apple EarPods, have surprisingly good frequency response, albeit with a steady roll-off of bass frequencies; since low frequencies contribute little to spatialization, this matters less than it might seem. Most earbuds are poor at isolation.


In-Ear Monitors

In-ear monitors offer superior isolation from your environment, are very lightweight, and have excellent frequency response over the entire range. They remove the effects of the listener's pinnae from sound (unlike on-ear headphones). They have the downside of requiring insertion into the ear canal, which eliminates the effects of the ear canal from sound reproduction entirely (since most HRTFs are captured with microphones right outside the ear canal).

Impulse Responses

Headphones, like all transducers, impart their own characteristics on signals, and since HRTFs are frequency sensitive, removing the headphone character from the signal will usually be beneficial. This can be accomplished by deconvolving the output signal with the headphone's impulse response.

External Speaker Systems

Until recently, the most common way to provide sound immersion was to surround the listener with speakers, such as a Dolby 5.1 or 7.1 speaker configuration. While partially effective for a fixed and narrow sitting position, speaker array systems suffer from key drawbacks:

• Imprecise imaging due to panning over large portions of the listening area.
• No elevation cues; sounds only appear in a 360-degree circle around the listener.
• Assumption of an immobile listener; in particular, no head tracking.
• Room effects such as reverberation and reflections impact the reproduced sound.
• Poor isolation means that outside sounds can intrude on the VR experience.

It is doubtful that multi-speaker configurations will be common or effective for home VR applications, though they may be viable for dedicated commercial installations.

Bluetooth

Bluetooth has become a popular method of wireless audio transmission. Unfortunately, modern Bluetooth implementations often incur significant latency, sometimes as high as 500 milliseconds. As a result, Bluetooth technology is not recommended for audio output.


Environmental Modeling

HRTFs in conjunction with attenuation provide an anechoic model of three-dimensional sound, which exhibits strong directional cues but tends to sound dry and artificial because it lacks room ambiance. To compensate for this, we can add environmental modeling to mimic the acoustic effects of nearby geometry.

Reverberation and Reflections

As sounds travel through space, they reflect off of surfaces, creating a series of echoes. The initial distinct echoes (early reflections) help us determine the direction and distance to a sound. As these echoes propagate, diminish, and interact, they create a late reverberation tail, which contributes to our sense of space.

We can model reverberation and reflection using several different methods.

Shoebox Model

Some 3D positional implementations layer simple “shoebox room” modeling on top of their HRTF implementation. These consist of specifying the distance and reflectivity of six parallel walls (i.e., the “shoebox”), and sometimes the listener's position and orientation within that room as well. With that basic model, you can simulate early reflections from walls and late reverberation characteristics. While far from perfect, it's much better than artificial or no reverberation.
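The sketch below illustrates the first-order version of this idea: mirror the source across each of the six walls to obtain one image source per wall, then derive a delay and gain for each early reflection. The room representation, reflectivity value, and gain model are illustrative assumptions rather than the behavior of any particular spatializer.

    // Minimal first-order "shoebox" early-reflection sketch (image-source method).
    #include <algorithm>
    #include <array>
    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Reflection { double delaySeconds; double gain; };

    std::array<Reflection, 6> earlyReflections(Vec3 source, Vec3 listener,
                                               Vec3 roomMin, Vec3 roomMax,
                                               double reflectivity = 0.7,
                                               double speedOfSound = 343.0) {
        // Mirror the source across one wall plane: p' = 2 * wall - p.
        auto mirrored = [&](int wall) {
            Vec3 p = source;
            switch (wall) {
                case 0: p.x = 2.0 * roomMin.x - p.x; break; // left wall
                case 1: p.x = 2.0 * roomMax.x - p.x; break; // right wall
                case 2: p.y = 2.0 * roomMin.y - p.y; break; // floor
                case 3: p.y = 2.0 * roomMax.y - p.y; break; // ceiling
                case 4: p.z = 2.0 * roomMin.z - p.z; break; // front wall
                case 5: p.z = 2.0 * roomMax.z - p.z; break; // back wall
            }
            return p;
        };

        std::array<Reflection, 6> out{};
        for (int wall = 0; wall < 6; ++wall) {
            Vec3 img = mirrored(wall);
            double dx = img.x - listener.x;
            double dy = img.y - listener.y;
            double dz = img.z - listener.z;
            double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            // Each reflection arrives later and quieter the farther its image source is.
            out[wall] = { dist / speedOfSound, reflectivity / std::max(dist, 1.0) };
        }
        return out;
    }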

Artificial Reverberations


Since modeling physical walls and late reverberations can quickly become computationally expensive, reverberation is often introduced via artificial, ad hoc methods such as those used in digital reverb units of the 80s and 90s. While less computationally intensive than physical models, they may also sound unrealistic, depending on the algorithm and implementation — especially since they are unable to take the listener's orientation into account. (A minimal sketch of this kind of reverberator appears at the end of this section.)

Sampled Impulse Response Reverberation

Convolution reverb samples the impulse response from a specific real-world location such as a recording studio, stadium, or lecture hall. It can then be applied to a signal later, resulting in a signal that sounds as if it were played back in that location. This can produce some phenomenally lifelike sounds, but there are some drawbacks: sampled impulse responses rarely match in-game synthetic environments; they represent a fixed listener position and orientation; they are monophonic; and they are difficult to transition between different areas. Even with these limitations, they still provide high-quality results in many situations.

World Geometry and Acoustics

The “shoebox model” attempts to provide a simplified representation of an environment's geometry. It assumes no occlusion, equal frequency absorption on all surfaces, and six parallel walls at a fixed distance from the listener's head. Needless to say, this is a heavy simplification for the sake of performance, and as VR environments become more complex and dynamic, it may not scale properly. Some solutions exist today to simulate diffraction and complex environmental geometry, but support is not widespread and performance implications are still significant.

Environmental Transitions

Modeling a specific area is complex, but still relatively straightforward. Irrespective of the choice of model, however, there is a problem of audible discontinuities or artifacts when transitioning between areas. Some systems require flushing and restarting the entire reverberator, and other systems introduce artifacts as parameters are changed in real time.

Presence and Immersion

By creating audio that is on par with high-quality VR visuals, developers immerse the user in a true virtual world, giving them a sense of presence. Audio immersion is maximized when the listener is located inside the scene, as opposed to viewing it from afar. For example, a 3D chess game in which the player looks down at a virtual board offers less compelling spatialization opportunities than a game in which the player stands on the play field. By the same token, an audioscape in which moving elements whiz past the listener's head with auditory verisimilitude is far more compelling than one in which audio cues cut the listener off from the action by communicating that they're outside of the field of activity.


Note: While the pursuit of realism is laudable, it is also optional, as we want developers and sound designers to maintain creative control over the output.
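To make the Artificial Reverberations discussion above concrete, here is a minimal sketch of the kind of ad hoc building blocks those 80s and 90s digital reverb units used: a Schroeder-style feedback comb filter followed by an allpass diffuser. Real reverberators combine several combs in parallel and multiple allpasses in series; the delay lengths and gains here are illustrative parameters, not values from any shipping product.

    // Minimal Schroeder-style reverberator building blocks.
    #include <cstddef>
    #include <vector>

    class CombFilter {
    public:
        CombFilter(std::size_t delaySamples, float feedback)
            : buffer_(delaySamples, 0.0f), feedback_(feedback) {}

        float process(float in) {
            float delayed = buffer_[index_];
            buffer_[index_] = in + delayed * feedback_;   // recirculate the echo
            index_ = (index_ + 1) % buffer_.size();
            return delayed;
        }

    private:
        std::vector<float> buffer_;
        std::size_t index_ = 0;
        float feedback_;
    };

    class AllpassFilter {
    public:
        AllpassFilter(std::size_t delaySamples, float gain)
            : buffer_(delaySamples, 0.0f), gain_(gain) {}

        float process(float in) {
            float delayed = buffer_[index_];
            float out = -gain_ * in + delayed;            // diffuse without coloring
            buffer_[index_] = in + gain_ * delayed;
            index_ = (index_ + 1) % buffer_.size();
            return out;
        }

    private:
        std::vector<float> buffer_;
        std::size_t index_ = 0;
        float gain_;
    };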

Sound Design for Spatialization

Now that we've established how humans place sounds in the world and, more importantly, how we can fool people into thinking that a sound is coming from a particular point in space, we need to examine how we must change our approach to sound design to support spatialization.

Mono

Most spatialization techniques model sound sources as infinitely small point sources; that is, sound is treated as if it were coming from a single point in space as opposed to a large area or a pair of discrete speakers. As a result, sounds should be authored as monophonic (single channel) sources.

Avoid Sine Waves

Pure tones such as sine waves lack harmonics or overtones, which presents several issues:

• Pure tones do not commonly occur in the real world, so they often sound unnatural. This does not mean you should avoid them entirely, since many VR experiences are abstract, but it is worth keeping in mind.


• HRTFs work by filtering frequency content, and since pure tones lack that content, they are difficult to spatialize with HRTFs.
• Any glitches or discontinuities in the HRTF process will be more audible, since there is no additional frequency content to mask the artifacts. A moving sine wave will often bring out the worst in a spatialization implementation.

Use Wide Spectrum Sources

For the same reasons that pure tones are poor for spatialization, broad spectrum sounds work well by providing lots of frequencies for the HRTF to work with. They also help mask audible glitches that result from dynamic changes to HRTFs, pan, and attenuation. In addition to a broad spectrum of frequencies, ensure that there is significant frequency content above 1500 Hz, since this range is used heavily by humans for sound localization.

Low frequency sounds are difficult for humans to locate; this is why home theater systems use a monophonic subwoofer channel. If a sound is predominantly low frequency (rumbles, drones, shakes, et cetera), then you can avoid the overhead of spatialization and use pan/attenuation instead.

Avoid Real-time Format Conversions

Converting from one audio format to another can be costly and introduce latency, so sounds should be delivered in the same output format (sampling rate and bit depth) as the target device. For most PCs, this will be 16-bit, 44.1 kHz PCM, but some platforms may have different output formats (e.g., 16-bit, 48 kHz on Gear VR). Spatialized sounds are monophonic and should thus be authored as a single channel to avoid stereo-to-mono merging at run-time (which can introduce phase and volume artifacts). If your title ships with non-native format audio assets, consider converting to native format at installation or load time to avoid a hit at run-time.
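A minimal sketch of the offline pre-conversion suggested above: collapsing a stereo asset to the single mono channel a spatialized source expects, done at build or installation time rather than merging at run-time. The interleaved float input format and equal channel weighting are illustrative assumptions.

    // Minimal offline stereo-to-mono downmix for spatialized assets.
    #include <cstddef>
    #include <vector>

    std::vector<float> downmixStereoToMono(const std::vector<float>& interleavedStereo) {
        std::vector<float> mono(interleavedStereo.size() / 2);
        for (std::size_t i = 0; i < mono.size(); ++i) {
            float left  = interleavedStereo[2 * i];
            float right = interleavedStereo[2 * i + 1];
            mono[i] = 0.5f * (left + right);  // equal-weight sum avoids clipping
        }
        return mono;
    }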

Mixing Scenes for Virtual Reality

As with sound design, mixing a scene for VR is an art as well as a science, and the following recommendations may include caveats.

Creative Control

Realism is not necessarily the end goal! Keep this in mind at all times. As with lighting in computer environments, what is consistent and/or “correct” may not be aesthetically desirable. Audio teams must be careful not to back themselves into a corner by enforcing rigid notions of correctness on a VR experience. This is especially true when considering issues such as dynamic range, attenuation curves, and direct time of arrival.

Accurate 3D Positioning of Sources

Sounds must now be placed carefully in the 3D sound field. In the past, a general approximation of location was often sufficient, since positioning was accomplished strictly through panning and attenuation. The default location for an object might be its hips or where its feet meet the ground plane; if a sound is played from those locations, it will be jarring with spatialization, e.g. “crotch steps” or “foot voices”.


Directional Sources

The Oculus Audio SDK does not support directional sound sources (speakers, human voice, car horns, et cetera). However, higher level SDKs often model these using angle-based attenuation that controls the tightness of the direction. This directional attenuation should occur before the spatialization effect.

Area Sources

The Oculus Audio SDK does not support area sound sources such as waterfalls, rivers, crowds, and so on.

Doppler Effect

The Doppler effect is the apparent change of a sound's pitch as its source approaches or recedes. VR experiences can emulate this by altering the playback rate based on the relative speed of a sound source and the listener; however, it is very easy to introduce artifacts inadvertently in the process. The Oculus Audio SDK does not have native support for the Doppler effect, though some high-level SDKs do. (A simple pitch-ratio sketch appears at the end of this section.)

Sound Transport Time

In the real world, sound takes time to travel, so there is often a noticeable delay between seeing and hearing something. For example, you would see the muzzle flash from a rifle fired at you from 100 meters away roughly 290 ms before you would hear it. Modeling propagation time incurs some additional complexity and may paradoxically make things seem less realistic, as we are conditioned by popular media to believe that loud distant actions are immediately audible. The Oculus Audio SDK supports time-of-arrival.

Non-Spatialized Audio

Not all sounds need to be spatialized. Plenty of sounds are static or head relative, such as:

• User interface elements, such as button clicks, bleeps, transitions, and other cues
• Background music
• Narration
• Body sounds, such as breathing or heartbeats

Such sounds should be segregated during authoring, as they will probably be stereo, and during mixing, so they are not inadvertently pushed through the 3D positional audio pipeline.

Performance

Spatialization incurs a performance hit for each additional sound that must be placed in the 3D sound field. This cost varies depending on the platform. For example, on a high-end PC it may be reasonable to spatialize 50+ sounds, while you may only be able to spatialize one or two sounds on a mobile device. Some sounds may not benefit from spatialization even if placed in 3D in the world. For example, very low rumbles or drones offer poor directionality and could be played as standard stereo sounds with some panning and attenuation.

Ambiance

Aural immersion with traditional non-VR games was often impossible, since many gamers or PC users relied on low-quality desktop speakers, home theaters with poor environmental isolation, or gaming headsets optimized for voice chat. With headphones, positional tracking, and full visual immersion, it is now more important than ever that sound designers focus on the user's audio experience.


This means:

• Properly spatialized sound sources
• Appropriate soundscapes that are neither too dense nor too sparse
• Avoidance of user fatigue
• Suitable volume levels comfortable for long-term listening
• Room and environmental effects

Audible Artifacts

As a 3D sound moves through space, different HRTFs and attenuation functions may become active, potentially introducing discontinuities at audio buffer boundaries. These discontinuities will often manifest as clicks, pops, or ripples. They may be masked to some extent by reducing the speed of traveling sounds and by ensuring that your sounds have broad spectral content.

Latency

While latency affects all aspects of VR, it is often viewed as a graphical issue. However, audio latency can be disruptive and immersion-breaking as well. Depending on the speed of the host system and the underlying audio layer, the latency from buffer submission to audible output may be as short as 2 ms on high-performance PCs using high-end, low-latency audio interfaces, or, in the worst case, as long as hundreds of milliseconds. High system latency becomes an issue as the relative speed between an audio source and the listener's head increases. In a relatively static scene with a slow-moving viewer, audio latency is harder to detect.

Effects

Effects such as filtering, equalization, distortion, flanging, and so on can be an important part of the virtual reality experience. For example, a low-pass filter can emulate the sound of swimming underwater, where high frequencies lose energy much more quickly than in air, or distortion may be used to simulate disorientation.
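The following sketch makes the Doppler Effect discussion above concrete. It computes the pitch ratio to apply as a playback-rate multiplier from the radial velocities of the source and listener; the clamping range is an illustrative safeguard against artifacts, and this is not an Oculus Audio SDK feature.

    // Minimal Doppler pitch-ratio sketch.
    #include <algorithm>

    // Returns the factor by which to scale playback rate (1.0 = no shift).
    // Radial velocities are positive when source and listener move toward each other.
    double dopplerPitchRatio(double sourceRadialVelocity,   // m/s toward listener
                             double listenerRadialVelocity, // m/s toward source
                             double speedOfSound = 343.0) {
        double ratio = (speedOfSound + listenerRadialVelocity) /
                       (speedOfSound - sourceRadialVelocity);
        // Keep the shift within a sane range to avoid audible artifacts when
        // velocities are large or change abruptly.
        return std::clamp(ratio, 0.5, 2.0);
    }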

VR Audio Glossary

Definitions of technical VR audio terms.

Anechoic: Producing no echoes; very low or no reverberation.

Attenuation: A loss of energy; in acoustics, typically a reduction in volume.

Direct Sound: Sound that has traveled directly to the listener without reflecting (versus reverberant sound).

Early Reflections: Reflected sounds that arrive relatively soon at a listener's location (i.e., before late reflections).

Head-Related Impulse Response (HRIR): A formal characterization of the effect of sound interacting with the geometry of a particular human body. Used to create head-related transfer functions.

Head-Related Transfer Function (HRTF): A transformation of an acoustic signal using a head-related impulse response. Used to simulate the effects of interaction of a sound originating from a specific direction with the geometry of a particular human body.

Head Shadowing: The attenuation of sound caused by the head lying between an ear and the sound source.

Initial Time Delay: The interval between the arrival of a direct sound and its first reflection.

Interaural Level Difference (ILD): The difference in a sound's level or volume between the two ears.

Interaural Time Difference (ITD): The length of the interval between when a sound arrives at the first ear and when it arrives at the second ear.

Late Reflections: Reflected sounds that arrive relatively late at a listener's location (i.e., after early reflections).

Motion Parallax: When moving objects are farther from a perceiver, their apparent speed of travel decreases; for example, a moving airplane on the horizon appears to be traveling more slowly than a nearby car. The apparent rate of travel of an object can therefore be used as a distance cue.

Pinnae: The visible portion of the ear that lies outside the head.

Reverberant Sound: Sound that has reflected or reverberated before arriving at a listener's location (versus direct sound).

Reverberation: The reflection of sound off of a surface, or the temporary persistence of sound in a space caused by reflections.

Sound Localization: 1. The process of determining the location of a sound's origin; or 2. the suggestion of an object's location based on the manipulation of auditory cues.

Sound Spatialization: The representation of a sound within three-dimensional space.


Oculus Audio SDK Guide

This document describes how to install, configure, and use the Oculus Audio SDK. The Audio SDK consists of documentation, samples, plugins, source code, libraries, and tools to help developers implement immersive audio in their VR apps.

Thus far, much of the discussion about virtual reality has centered on visual aspects such as pixel density, latency, frame rate, and visual artifacts. However, audio characteristics reinforce visual immersion by providing compelling auditory cues which indicate the positions of objects around the viewer, including those that are outside their field of view.

We strongly recommend beginning with Introduction to Virtual Reality Audio for a technical discussion of the key concepts and terminology.

Note: If you are a video producer looking to create spatialized audio for a 360 video, this is the wrong guide. This guide is to help software developers add spatialized audio to VR apps. Instead, see Creating Spatial Audio for 360 Video using FB360 Spatial Workstation.

SDK Contents and Features

The Oculus Audio SDK provides tools to help VR developers incorporate high-quality audio into their apps and experiences. We include:

• OculusHQ sound rendering path, supporting a large number of sound sources and providing strong directional cues, early reflections, and late reverberations
• Plugins for game engines such as Unity3D
• Plugins for popular audio middleware such as FMOD and Wwise
• Native C/C++ API providing a low-level C/C++ SDK with spatialization and reverberation
• AAX plugin for use with Avid's Pro Tools
• VST plugin for Windows and OS X
• Tutorials and white papers

Requirements

Head Tracking

By tracking the listener's head position and orientation, we can achieve accurate 3D sound spatialization. As the listener moves or rotates their head, they perceive the sound as remaining at a fixed location in the virtual world. Developers may pass Oculus PC SDK ovrPosef structures to the Oculus Audio SDK for head tracking support. Alternatively, they can pass listener-space sound positions and no pose information for the same effect.

Headphones

The Oculus Audio SDK assumes that the end user is wearing headphones, which provide better isolation, privacy, portability, and spatialization than free-field speaker systems. When combined with head tracking and spatialization technology, headphones deliver an immersive sense of presence. For more on the advantages and disadvantages of headphones for virtual reality, please refer to Listening Devices in Introduction to Virtual Reality Audio.

Features

This section describes the features supported by the Oculus Audio SDK.

Supported Features

This section describes supported features.

Spatialization

Spatialization is the process of transforming monophonic sound sources to make them sound as though they originate from a specific desired direction. The Oculus Audio SDK uses head-related transfer functions (HRTFs) to provide audio spatialization through the C/C++ SDK and plugins.

Note: It is important that all sounds passed to Oculus Audio Spatializers are monophonic. Stereophonic rendering is handled by our spatialization and must not be applied on top of additional stereophonic rendering provided by your game engine or library.

Volumetric Sources

Sound sources can be given a radius which makes them sound volumetric. This spreads the sound out, so that as the source approaches the listener, and then completely envelops the listener, the sound is spread out over a volume of space. This is especially useful for larger objects, which would otherwise sound very small when they are close to the listener. For more information, please see this blog article: https://developer.oculus.com/blog/volumetric-sounds/

Near-Field Rendering

Sound sources in close proximity to a listener's head have properties that make some aspects of their spatialization independent of their distance. Our near-field rendering automatically approximates the effects of acoustic diffraction to create a more realistic representation of audio sources closer than 1 meter.

Environmental Modeling

HRTFs provide strong directional cues, but without room effects, they often sound dry and lifeless. Some environmental cues (for example, early reflections and late reverberation) are also important in providing strong cues about the distance to a sound source. The Audio SDK supports early reflections and late reverberations using a simple 'shoebox model', consisting of a virtual room centered around the listener's head, with four parallel walls, a floor, and a ceiling at varying distances, each with its own distinct reflection coefficient.

Oculus Ambisonics

Ambisonics is a multichannel audio format that represents a 3D sound field. It can be thought of as a skybox for audio, with the listener at the center. It is a computationally efficient way to play a pre-rendered or live-recorded scene. The trade-off is that ambisonic sounds offer less spatial clarity and exhibit more smearing than HRTF-processed point-source sounds. We recommend using ambisonics for non-diegetic sounds, such as music and ambiance.

Oculus has developed a novel method for the binaural rendering of ambisonics based on spherical harmonics that reduces the smearing, has better frequency response, and produces better externalization compared with the common virtual speaker-based methods. The Audio SDK supports first-order ambisonics in the AmbiX (ACN/SN3D) format.

Oculus offers an Ambisonics Starter Pack as a convenience for developers (available on our Download page). It includes several AmbiX WAV files licensed for use under Creative Commons. The files represent ambient soundscapes, including parks, natural environments with running water, indoor ventilation, rain, urban ambient sounds, and driving noises.

Unsupported Features

There are other aspects of a high-quality audio experience; however, these are often more appropriately implemented by the application programmer or a higher-level engine.

Occlusion

Sounds interact with a user's environment in many ways. Objects and walls may obstruct, reflect, or propagate a sound through the virtual world. The Oculus SDK only supports direct reflections and does not factor in the virtual world geometry. This problem needs to be solved at a higher level than the Oculus Audio SDK due to the requirements of scanning and referencing world geometry.

Doppler Effect

The Doppler effect is the perceived change in pitch that occurs when a sound source is moving at a rapid rate towards or away from a listener, such as the pitch change that is perceived when a car whooshes by. It is often emulated in middleware with a simple change in playback rate by the sound engine.

Creative Effects

Effects such as equalization, distortion, flanging, and so on can be used to great effect in a virtual reality experience. For example, a low-pass filter can emulate the sound of swimming underwater, where high frequencies lose energy much faster than in air; distortion may be used to simulate disorientation; a narrow band-pass equalizer can give a 'radio' effect to sound sources; and so on. The Oculus Audio SDK does not provide these effects. Typically, these would be applied by a higher-level middleware package either before or after the Audio SDK, depending on the desired outcome. For example, a low-pass filter might be applied to the master stereo buffers to simulate swimming underwater, but a distortion effect may be applied pre-spatialization for a broken-radio effect in a game.

Area and Directional Sound Sources

The Oculus Audio SDK supports monophonic point sources. When a sound is specified, it is assumed that the waveform data represents the sound as audible to the listener. It is up to the caller to attenuate the incoming sound to reflect speaker directional attenuation (e.g., someone speaking while facing away from the listener) and area sources such as waterfalls or rivers.
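Because this attenuation is left to the caller, one simple application-side approach in Unity is to scale a source's volume by how far the emitter is facing away from the listener. The sketch below is only an illustration of that idea, not an Oculus Audio SDK API; the component name, the rearLevel parameter, and the linear falloff are all assumptions.

```csharp
// Illustrative sketch only (not part of the Oculus Audio SDK): approximates
// speaker directivity by lowering an AudioSource's volume as the emitter
// faces away from the listener. All names here are hypothetical.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class DirectionalAttenuation : MonoBehaviour
{
    public Transform listener;                      // typically the main camera / AudioListener
    [Range(0f, 1f)] public float rearLevel = 0.3f;  // gain when facing directly away (assumed value)

    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (listener == null) return;

        Vector3 toListener = (listener.position - transform.position).normalized;
        // facing = 1 when pointed at the listener, 0 when pointed directly away.
        float facing = Mathf.Clamp01(0.5f * (Vector3.Dot(transform.forward, toListener) + 1f));
        source.volume = Mathf.Lerp(rearLevel, 1f, facing);
    }
}
```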

Sound Transport Time

Sound takes time to travel, which can result in a noticeable delay between seeing something occur and hearing it (e.g., you would see the muzzle flash from a rifle fired 100 m away roughly 300 ms before you would hear it). We do not model sound propagation time, as it would incur additional complexity, and it can have the paradoxical effect of making things seem less realistic, as most people are conditioned by popular media to believe that loud distant actions are immediately audible. Developers may delay a sound's onset appropriately to simulate that effect, if desired.


Attenuation and Reflections

Attenuation is a key component of game audio, but 3D spatialization with reflections complicates the topic. Accurate reflections are the most important feature in simulating distance, and to provide correct distance cues it is critical to have a natural balance between the direct path and reflections. We must consider the attenuation of all reflection paths reflected by the room model.

For a sound source close to the listener, the sound's reflections are barely audible, and the sound is dominated by the direct signal. A sound that is further away has more reverberant content, and its reflections are typically almost as loud as the direct signal.

This creates a challenge when using authored curves. If they do not match the internal curve, they will create conflicting distance cues. Consider the situation where the authored curve is more gradual than the internal curve: as the sound moves away from the listener, the reflections fall off faster, resulting in an apparently distant sound with no audible reflections. That is the opposite of what is expected.

The best way to achieve accurate distance cues is to use the Oculus attenuation model, as it guarantees that the reflections and direct signal are correctly balanced. If you do need to use authored curves, we recommend that you set the attenuation range min/max to match the authored curve as closely as possible.

Pitfalls and Workarounds

Spatialization and room effects greatly contribute to realism, but they come at a cost.

Performance

Spatialization incurs a performance hit for each additional sound that must be placed in the 3D sound field. This cost varies depending on the platform. For example, on a high-end PC it may be reasonable to spatialize 50+ sounds, while you may only be able to spatialize one or two sounds on a mobile device.

Some sounds may not benefit from spatialization even if placed in 3D in the world. For example, very low rumbles or drones offer poor directionality and could be played as standard stereo sounds with some panning and attenuation.

Audible Artifacts

As a 3D sound moves through space, different HRTFs and attenuation functions may become active, potentially introducing discontinuities at audio buffer boundaries. These discontinuities often manifest as clicks, pops, or ripples. They may be masked to some extent by reducing the speed of traveling sounds and by ensuring that your sounds have broad spectral content.

Latency

While latency affects all aspects of VR, it is often viewed as a graphical issue. However, audio latency can be disruptive and immersion-breaking as well. Depending on the speed of the host system and the underlying audio layer, the latency from buffer submission to audible output may be as short as 2 ms on high-performance PCs using high-end, low-latency audio interfaces, or, in the worst case, as long as hundreds of milliseconds. High system latency becomes an issue as the relative speed between an audio source and the listener's head increases. In a relatively static scene with a slow-moving viewer, audio latency is harder to detect.


Compatibility between VR and Non-VR Games

Few developers have the luxury of targeting VR headsets exclusively; most must also support traditional, non-VR games using external speakers and without the benefit of positional tracking. Weighing quality versus compatibility can be difficult and can add development time.

Platform Notes

The Oculus Audio SDK currently supports Windows 7+, Android (Gear VR), Mac OS X, and Linux. This section covers issues that may arise with different versions.

Component      Windows   Gear VR / Android   Mac   Linux
C/C++ SDK      yes       yes                 yes   yes
Wwise plugin   yes       yes                 yes   no
Unity plugin   yes       yes                 yes   TBD
FMOD plugin    yes       yes                 yes   yes
VST plugin     yes       no                  yes   no
AAX plugin     yes       no                  yes   no

• Windows: The Oculus Audio SDK supports Windows 7 and later, both 32- and 64-bit.
• Mac: The Oculus Audio SDK supports Mac OS X 10.9 and later, 32- and 64-bit.
• Linux: The Oculus Audio SDK has been ported to Linux Ubuntu 14 (64-bit).
• Android/Gear VR: The Oculus Audio SDK supports Android phones on the Gear VR platform.

Middleware Support

Very few Oculus developers will use the Oculus Audio C/C++ SDK directly. Most developers use a middleware framework, such as Audiokinetic Wwise or FMOD, and/or an engine such as Unity or Epic's Unreal. For this reason, we support middleware packages and engines commonly used by developers.

• Audiokinetic Wwise: Oculus provides a Wwise-compatible plugin for Windows. More information about this plugin can be found in the Oculus Spatializer for Wwise Integration Guide.
• FMOD: The Oculus Audio SDK supports FMOD on the Windows, Mac, and Android platforms. More information about this plugin can be found in the Oculus Spatializer for FMOD Integration Guide.
• Unity3D: The Oculus Audio SDK supports Unity 5 on Android, Mac OS X, and Windows. More information about this plugin can be found in the Oculus Native Spatializer for Unity on page 32.
• Unreal Engine: Epic's Unreal Engine 4 supports numerous different audio subsystems. The Wwise integration (available directly from Audiokinetic) has been tested with our Wwise Spatializer plugin (see above).

Oculus Hardware Capabilities

Each Oculus hardware platform has different audio capabilities.


Note: All Oculus VR headsets require headphones.

DK1 does not provide any audio output and thus relies on the end user's system for audio output.

DK2 also relies on the end user's system for audio output, but adds positional tracking.

The Oculus CV1 HMD has a built-in DAC (audio over USB connected to the headset) and ADC (for the microphone). The audio subsystem is class compliant and supports Windows, Mac, and Linux. The HMD also has integrated headphones and a microphone.

The Samsung Gear VR mobile product supports 16-bit, 48 kHz sound. Like DK1, it supports head orientation but not positional tracking. Headphone choice is dictated by the end user. Mobile performance is significantly lower than desktop PC performance, so try to minimize the number of audio sources.


Oculus Native Spatializer for Unity

Welcome to this guide to using the Oculus Native Spatializer plugin in Unity.

Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

This guide describes how to install and use the Oculus Native Spatializer plugin in Unity 5.2+ and in end-user applications.

The Oculus Native Spatializer Plugin (ONSP) is an add-on plugin for Unity that allows monophonic sound sources to be spatialized in 3D relative to the user's head location. The Native Oculus Spatializer is built on Unity's Native Audio Plugin, which removes redundant spatialization logic and provides a first-party HRTF.

Our ability to localize audio sources in three-dimensional space is a fundamental part of how we experience sound. Spatialization is the process of modifying sounds to make them localizable, so they seem to originate from distinct locations relative to the listener. It is a key part of creating a sense of presence in virtual reality games and applications.

For a detailed discussion of audio spatialization and virtual reality audio, we recommend reviewing our Introduction to Virtual Reality Audio guide before using the Oculus Native Spatializer. If you're unfamiliar with Unity's audio handling, be sure to review the Unity Audio guide.

Note: The Legacy Audio Spatializer for Unity 4 has been discontinued. The final version is available with Audio SDK v1.0.4 on the Downloads page; select the Audio category and version 1.0.4 to download.

General OSP Limitations

1. CPU usage increases when early reflections are turned on, and increases proportionately as room dimensions become larger.

Requirements and Setup

Requirements

• Windows 7/8/10
• Unity 5.2 Professional or Personal, or later. See Unity Compatibility and Requirements for details on our recommended versions.

Download and Setup

Note: We recommend removing any previously-imported versions of the OSP or ONSP before importing a new plugin. See Updating to Oculus Native Spatializer for Unity from previous OSP for Unity Versions below for instructions.


To download the ONSP and import it into a Unity project:

1. Download the Oculus Spatializer Unity package from the Audio Packages page.
2. Extract the zip.
3. Open your project in the Unity Editor, or create a new project.
4. Select Assets > Import Package > Custom Package….
5. Select OculusNativeSpatializer.unitypackage and import.
6. When the Importing Package dialog opens, leave all assets selected and click Import.

To turn on the Native Spatializer:

1. Go to Edit > Project Settings > Audio in the Unity Editor.
2. Select OculusSpatializer in the Spatializer Plugin drop-down setting in the AudioManager Inspector panel as shown below.

We recommend Rift developers set DSP Buffer Size to Best latency to select the minimum buffer size supported by the platform, reducing overall audio latency. Gear VR developers should set DSP Buffer Size to Good or Default to avoid audio distortion.

Updating to Oculus Native Spatializer for Unity from previous OSP for Unity Versions

1. Note the settings used in OSPManager in your project.
2. Replace OSPAudioSource.cs (from the previous OSP) on AudioSources with ONSPAudioSource.cs in /Assets/OSP.
3. Set the appropriate values previously used in OSPManager in the plugin effect found on the mixer channel. Note that the native plugin adds functionality, so you will need to adjust to this new set of parameters.
4. Remove OSPManager from the project by deleting OSPManager*.* from /Assets/OSP, except your newly-added ONSPAudioSource.cs.
5. Verify that OculusSpatializer is set in the Audio Manager and that Spatialization is enabled for that voice.


Use the functions on AudioSource to start, stop, and modify sounds as required.

Note: To instantiate a GameObject which includes both ONSPAudioSource.cs and AudioSource components, you must call the function void SetParameters(ref AudioSource source) (found within ONSPAudioSource) on the AudioSource component before you call the AudioSource Play() function. This will explicitly set the AudioSource parameters before the audio stream is handled. If you skip this step, the parameters will not be immediately set, and you may hear a slight 'jump' at the start of playback.
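A minimal sketch of that ordering follows; the prefab field and spawner class names are illustrative assumptions, while SetParameters and Play are the calls described in the note above.

```csharp
// Sketch: spawn a spatialized sound and push ONSP parameters onto the
// AudioSource before playback, so the first audio buffer is already correct.
// The prefab and class names below are illustrative assumptions.
using UnityEngine;

public class SpawnSpatializedSound : MonoBehaviour
{
    public GameObject soundPrefab;   // prefab with AudioSource + ONSPAudioSource attached

    public void PlayAt(Vector3 position)
    {
        GameObject go = (GameObject)Instantiate(soundPrefab, position, Quaternion.identity);

        AudioSource source = go.GetComponent<AudioSource>();
        ONSPAudioSource spatializer = go.GetComponent<ONSPAudioSource>();

        // Set spatializer parameters first, as described in the note above,
        // to avoid an audible 'jump' at the start of playback.
        spatializer.SetParameters(ref source);
        source.Play();
    }
}
```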

Exploring Oculus Native Spatializer with the Sample Scene

To get started, we recommend opening the supplied demonstration scene RedBallGreenBall, which provides a simple introduction to ONSP resources and examples of what the spatializer sounds like.

This simple scene includes a red ball and a green ball, which illustrate different spatializer settings. A looping electronic music track is attached to the red ball, and a short human voice sequence is attached to the green ball. The room model used to calculate reflections and reverb is visualized in the scene around the listener. Launch the scene in the Unity Game View, navigate with the arrow keys, and control the camera orientation with your mouse to quickly hear the spatialization effects.

To import and open RedBallGreenBall:

1. Create a new Unity project.
2. Import the OculusNativeSpatializer.unitypackage.
3. When the Importing Package dialog opens, leave all assets selected and click Import.
4. Enable the Spatializer as described in Download and Setup.
5. Open RedBallGreenBall in /Assets/scenes.

To preview the scene with a Rift:

1. Import and launch RedBallGreenBall as described above.
2. In Build Settings, verify that the PC, Mac and Linux Standalone option is selected under Platform.
3. In Player Settings, select Virtual Reality Supported.
4. Preview the scene normally in the Unity Game View.


To preview the scene in Gear VR (requires gamepad):

1. Be sure you are able to build and run projects on your Samsung phone (Debug Mode enabled, adb installed, etc.). See the Mobile SDK Setup Guide for more information.
2. Follow the setup steps at the top of this section.
3. In Build Settings:
   a. Select Android under Platform.
   b. Select Add Current to Scenes in Build.
4. In Player Settings, select Virtual Reality Supported.
5. Copy your osig to /Assets/Plugins/Android/assets.
6. Build and run your project.
7. Navigate the scene with a compatible gamepad.

Applying Spatialization

Attach the helper script ONSPAudioSource.cs, found in Assets/OSPNative/scripts, to an AudioSource. This script accesses the extended parameters required by the Oculus Native Spatializer. Note that all parameters native to an AudioSource are still available, though some values may not be used if spatialization is enabled on the audio source.

In this example, we look at the script attached to the green sphere in our sample RedBallGreenBall:

OculusSpatializerUserParams Properties

Spatialization Enabled
If disabled, the attached Audio Source will act as a native audio source without spatialization. This setting is linked to the corresponding parameter in the Audio Source expandable pane (collapsed in the above capture).

Reflections Enabled
Select to enable early reflections for the spatialized audio source. To use early reflections and reverb, you must select this value and add an OculusSpatializerReflection plugin to the channel where you send the AudioSource in the Audio Mixer. See Audio Mixer Setup below for more details.

Gain
Adds up to 24 dB of gain to the audio source volume, with 0 equal to unity gain.

Volumetric Radius
Expands the sound source to encompass a spherical volume up to 1000 meters in radius. For a point source, set the radius to 0 meters. This is the default setting.

Oculus Attenuation Settings

Enabled
If selected, the audio source uses an internal attenuation falloff curve controlled by the Minimum and Maximum parameters. If deselected, the attenuation falloff is controlled by the authored Unity Volume curve within the Audio Source Inspector panel. Note: We strongly recommend enabling internal attenuation falloff for a more accurate rendering of spatialization. The internal curves match the way the direct audio falls off as well as how the early reflections are modeled. For more information, see Attenuation and Reflections on page 29 in our Audio SDK Guide.

Range: Minimum
Sets the point at which the audio source amplitude starts attenuating, in meters. It also influences the reflection/reverb system, whether or not Oculus attenuation is enabled. Larger values result in less noticeable attenuation when the listener is near the sound source.

Range: Maximum
Sets the point at which the audio source amplitude reaches full volume attenuation, in meters. It also influences the reflection/reverb system, whether or not Oculus attenuation is enabled. Larger values allow for "loud" sounds that can be heard from a distance.

Audio Mixer Setup

Note: In Unity's terminology, a group is roughly the same as a channel. In this section, we use the terms interchangeably.

Unity 5 includes a flexible mixer architecture for routing audio sources. A mixer allows the audio engineer to add multiple channels, each with their own volume and processing chain, and to route an audio source to a channel. For detailed information, see Unity's Mixer documentation.

Shared Reflection and Reverb

To allow the reflection engine to be added within a scene, you must create a mixer channel and add the OculusSpatializerReflection plug-in effect to that channel.

1. Select the Audio Mixer tab in your Project View.
2. Select Add Effect in the Inspector window.
3. Select OculusSpatializerReflection.
4. Set the Output of your attached Audio Source to Master (SpatializerMixer).
5. Set reflection settings to globally affect spatialized voices.


Table 1: Mixer Settings

Reflections Engine On
Select to enable the early reflection system. For more information, see Attenuation and Reflections on page 29 in our Audio SDK Guide.

Late Reverberation
Select to enable global reverb, based on room size. Requires the reflection engine (Reflections Engine On) to be enabled.

Room Dimensions: Width / Height / Length
Sets the dimensions of the room model used to calculate reflections, in meters. The greater the dimensions, the further apart the reflections. Range: 0 - 200 m.

Wall Reflection Coefficients
Sets the percentage of sound reflected by each respective wall. At 0, the reflection is fully absorbed. At 1.0, the reflection bounces from the wall without any absorption. Caps at 0.97 to avoid feedback.

Global Scale (1 unit = 1 m)
The scale of positional values fed into the spatializer must be in meters. Some applications assign a different scale value to a unit. For such applications, use this field to adjust the scale for the spatializer. Unity defaults to 1 meter per unit. Example: for an application with a unit set to equal 1 cm, set the Global Scale value to 0.01 (1 cm = 0.01 m).

Note: High reflection values in small rooms may cause distortion due to volume overload. Sounds reflect more energetically in small rooms because the smaller walls absorb less sound, and shorter travel distances result in less air dispersion. Larger rooms therefore allow for higher reflection values without distortion. If you encounter distortion in rooms in the ballpark of 5 x 5 x 5 m or smaller, this could be a factor.

RedBallGreenBall Example

To see how this works in RedBallGreenBall, access the Audio Mixer by selecting the Audio Mixer tab in your Project View. Then select Master under Groups as shown below.


Select the green sphere in your Scene View. Note that the Output of the attached Audio Source vocal1 is set to our Master (SpatializerMixer):

You can now set reflection/reverberation settings to globally affect spatialized voices.

Dynamic Room Modeling

The Oculus Spatializer provides dynamic room modeling, which enables sound reflections and reverb to be generated based on a dynamically updated model of the current room within the VR experience and the user's position within that space.

Overview

Prior to the 1.22 release, the Oculus Spatializer could only use a simple reflection model based on a rectangular prism "shoebox". This approach is quite effective at filling the space and making it sound natural. With this approach, sound reflections and reverb are binaurally spatialized, i.e., the timing, volume, and other effects are different in each ear. This provides the right cues to reinforce the directionality of the sound.

This simple shoebox reflection system is designed for ease of use, and always places the listener at the center, with the dimensions of the box set by the sound designer. This allows for a simple workflow where everything is controlled in the middleware tools. The sound designer can determine the dimensions and reflection coefficients for the walls of the box in order to obtain more natural-sounding spatialization that fits as closely as possible to the VR experience.

However, because the room is a fixed size, this simple approach is somewhat limited. The listener is always placed in the center, even when they move near a wall or other fixed object within the experience. And reverb offers little value because it isn't dynamic.

The Oculus Spatializer now provides more dynamic capabilities to further increase the realism of the sound without greatly increasing the computational complexity or impacting the workflow. As of the 1.22 release, the Oculus Spatializer integrates with the game engine in order to fit the shoebox to the actual space within the VR experience, making it dynamically conform to the shape and size of the space. This enables the dimensions of the room to change dynamically, and enables the sound characteristics to change as the listener moves within the room.

In order to achieve this, the Spatializer uses raycasting within the game engine. To keep this simple, there is a default implementation that connects the Spatializer to the engine raycaster. It is designed to be as lightweight as possible, so only a small number of rays are cast. In addition, it maintains a history of previous raycast results and refines the estimation over time.

The Spatializer also includes a visualization option in the Unity Editor, enabling you to see the dimensions of the room and where the rays are hitting. The ability to visualize the raycasting is particularly useful for dealing with unintentional collisions with geometry that isn't meant to affect the sound, such as UI objects.

Using Dynamic Room Modeling

To use dynamic room modeling within the Unity Editor, attach the following script to a game object: Assets/OSPNative/scripts/OculusSpatializerUnity.cs

The easiest approach is to add this script to a static, empty game object in the scene. It will then activate the geometry engine, overriding the current implementation.
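The same attachment can also be done from code. The following is only a rough sketch: it assumes the component class is named after its file (OculusSpatializerUnity); the public variables described below are normally tuned in the Inspector rather than in script.

```csharp
// Sketch (assumption: the script's class name matches its file name,
// OculusSpatializerUnity). Creates an empty GameObject and attaches the
// dynamic room modeling component. In practice you would typically create
// this object in the editor and mark it static, as recommended above.
using UnityEngine;

public class DynamicRoomSetup : MonoBehaviour
{
    void Start()
    {
        GameObject roomModel = new GameObject("DynamicRoomModel");
        roomModel.AddComponent<OculusSpatializerUnity>();
    }
}
```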


The script exposes public variables that can be accessed after the game object with the script is added to the scene:

You can modify the following variables:

• Layer Mask: Geometry can be tagged with a given layer enum. The Spatializer will only use geometry that matches what has been selected in Layer Mask.
• Visualize Room: When turned on, you will see the rays hitting the geometry in the Unity Editor. Only geometry assigned to the layer(s) selected in the Layer Mask will be shown. You will also see the shoebox reverb model dynamically change to fit the current space as you move throughout the VR experience, and you will see the results from setting other variables, such as Rays Per Second and Max Wall Distance.
• Rays Per Second: Specifies the number of rays that are randomly sent out from the listener's position per second to approximate room size and shape. A larger value produces a more accurate approximation of the room characteristics, but requires more CPU resources.
• Room Interp Speed: Specifies the time it takes (in seconds) for the room to smoothly transition from one room approximation to the next. The larger the number, the slower the transition. If this value is too small, the reflections and reverb could sound erratic and jump around. If this value is too large, the reflections and reverb may seem to lag behind when moving from one space to another, especially where there are substantial differences in room size or shape.
• Max Wall Distance: The maximum length (in feet) that each ray will travel. If a wall is not hit within that range, it will not be used in approximating the room size.
• Ray Cache Size: The number of rays that are cached in order to approximate the room characteristics. The larger the cache, the more rays will be used to approximate the current room characteristics. If this value is too large, the sound may not transition quickly enough when moving from one space to another, especially where there are substantial differences in room size or shape. If this value is too small, the sound may be perceived as being too erratic.


• Dynamic Reflections Enabled: If you turn off Dynamic Reflections Enabled, reverb will persist, but reflections will be turned off. See "Reverb and Reflections" below for more information.
• Legacy Reverb: Dynamic room modeling provides a reverb effect which is smoother and more accurate than the legacy reverb effect; however, dynamic room modeling requires more CPU resources. You can use the legacy reverb model if you need to reduce CPU resource usage and the legacy reverb effect is satisfactory for your application.

Reverb and Reflections

Here is a brief explanation of the difference between reverb and reflections.

When a sound is generated, the first thing that happens is the listener hears the direct version of that sound. Imagine that we are in an anechoic chamber, which does not have reflections. All you will hear is the sound coming directly from the source; all reflections have been absorbed by the walls. It's a very dry sound, and it is unrealistic in the real world, since sounds always find something to bounce off of and come back to the listener. So the direct portion of the sound is the strongest signal the listener hears, and for our spatializer it's really the bread and butter of localizing a sound in 3D.

The second part of this is the reflections. When you are in a room, the sound will hit a wall, some of it will be absorbed, and some will bounce back into the room. When it passes the listener, that is called a first-order reflection. Imagine a room with six sides (like a box). The listener will first hear the direct part of the sound, followed by the next six reflections. These reflections are considered first order, since they are the immediate reflection off the wall. Now, these first-order reflections (all six of them) will bounce off the walls again. Each one will create another six reflections, and they will be much quieter than the first six reflections. That's another 6 * 6 = 36 reflections, called the second-order reflections. First- and second-order reflections are unique in that they are fairly distinct; they are really like an echo, with the sound being pretty much the same as the original. First, second, and even third order (36 * 6 = 216) are considered part of the reflection portion of the sound, which is the second most important piece of the Spatializer; it gives the listener a sense of the shape of the room.

Now, reverberation takes reflections one step further. After the third-order reflections, the bounces start getting fuzzy and undefined. What you end up with is a network of bounces that sounds like continuous noise. This is the reverb portion of the sound, and it is modeled differently from the reflection portion. If we were to simulate those higher-order bounces we would run out of CPU very quickly, so we use tricks to cut down on the CPU and approximate those high-order bounces. In our terminology, reflections are the first few bounces that still sound discrete, and reverb is the higher-order bounces that become indistinguishable from each other.

First-Party Audio Spatialization (Beta)

Unity v5.4.0b16 and later include a basic version of our Oculus Native Spatializer Plugin (ONSP) that makes it easy to apply basic spatialization (HRTF transforms) to audio point sources in your Unity project. For full functionality, you must import our standalone plugin into your project.

To activate the Unity built-in Oculus spatializer:

1. Open your project in the Unity Editor.
2. Select Edit > Project Settings > Audio.
3. In the Inspector, select OculusSpatializer in the Spatializer Plugin selection box.
4. Set Default Speaker Mode to Stereo.


HRTF transforms will be applied to any point sources as if you had imported the native plugin. For full functionality, you must download and import the standalone ONSP; see the next section for details.

Using the Standalone Oculus Native Spatializer Plugin

The standalone plugin provides configurable room reflections, ambisonic decoding, and various other audio settings in addition to the HRTF spatialization you get with the built-in Oculus spatializer. You can switch to the standalone plugin even if you previously activated Unity's built-in Oculus spatializer in Unity.

To import the standalone ONSP, see Requirements and Setup on page 32. After importing the standalone ONSP, its settings and effects override any other settings previously made with the built-in spatializer.


You must set the AudioManager settings as follows:

1. Set Default Speaker Mode to Stereo.
2. Set Spatializer Plugin to OculusSpatializer.
3. Set Ambisonic Decoder Plugin to OculusSpatializer.
4. If developing for Gear VR, set DSP Buffer Size to Good or Default to avoid audio distortion.

Playing Ambisonic Audio in Unity 2017.1 (Beta)

The Oculus Spatializer supports playing AmbiX-format ambisonic audio in Unity 2017.1. This is a beta feature.

For Unity 2017.1, the Oculus Spatializer supports ambisonic audio, letting you attach 4-channel AmbiX-format audio clips to game objects. Rotating either the headset (the AudioListener) or the audio source itself affects the ambisonic orientation. You can smooth the cross-fading between multiple ambient ambisonic audio sources in your scene by customizing the Volume Rolloff curve for each audio source.

Note: Ambisonics do not work in any version of Unity other than the Unity 2017.1 beta.

Adding Ambisonic Audio to a Scene

To add ambisonic audio to a scene in the Unity 2017.1 beta:

1. Import the OculusNativeSpatializer Unity package into your project.
2. Select OculusSpatializer for the ambisonic plugin in AudioManager:
   a. Click Edit > Project Settings > Audio.
   b. In the Inspector window, locate the AudioManager options.
   c. From Ambisonic Decoder Plugin, select OculusSpatializer.


Note: If OculusSpatializer is not available as an option, it is likely because Unity is using its built-in OculusSpatializer plugin instead of the newer version you imported into your project. To resolve this issue in Unity 2017.1, save your project and restart Unity.

3. Add the AmbiX-format ambisonic audio file to your project:
   a. Copy the audio file to your Unity assets.
   b. In the Project window, select your audio file asset.
   c. In the Inspector window, select the Ambisonic check box and then click Apply.

4. Create a GameObject to attach the sound to.
5. Add the ONSP Ambisonics Native script component to your GameObject.
6. Add an Audio Source component to your GameObject and configure it for your ambisonic audio file:


   a. In the AudioClip field, select your ambisonic audio file.
   b. In the Output field, select SpatializerMixer > Master.

ONSP Ambisonics Native Options

Use Virtual Speakers
Decodes ambisonics as an array of eight point-sourced and spatialized speakers, each located at a vertex of a cube around the listener. If the check box is not selected, ambisonics are decoded by OculusAmbi, our novel spherical harmonic decoder. OculusAmbi has a flatter frequency response, has less smearing, uses less compute resources, and externalizes audio better than virtual speakers. However, some comb filtering may become audible in broadband content such as wind and rushing water sounds. For broadband content, we recommend using the virtual speaker mode.

Ambisonic Sample Scene: YellowBall

For a quick demonstration of the Oculus beta support for ambisonic sound sources, open the YellowBall scene included in the OculusSpatializerNative package. Move the left stick on your controller to move closer to and farther from the audio source. The volume rolls off according to its attenuation curve. Turning your head left and right changes the ambisonic orientation accordingly.

Before you click Play in Unity, be sure that:

• In Edit > Project Settings > Player, Virtual Reality Supported is selected.


• In Edit > Project Settings > Audio, Spatializer Plugin and Ambisonic Decoder Plugin are set to OculusSpatializer.

Note: If OculusSpatializer is not available as an option, it is likely because Unity is using its built-in OculusSpatializer plugin instead of the newer version you imported into your project. To resolve this issue in Unity 2017.1, save your project and restart Unity.


Migrating from TBE 3DCeption to Oculus Spatialization

This guide describes how to migrate a Unity project from the Two Big Ears 3Dception spatializer to Unity's built-in Oculus Spatializer or the Oculus Native Spatializer Plugin (ONSP) that ships with the Oculus Audio SDK, using the Two Big Ears 3Dception_Example.unity scene found in the latest Non-Commercial 3Dception 1.2.4 package as an example.

Preparation

Be sure to use the latest supported version of Unity. For up-to-date version recommendations, see the Compatibility and Requirements section of our Unity Developer Guide.

Before attempting migration, open the 3DCeption_Example.unity scene in the 3Dception NonComm package and verify that the scene works as expected by running it and moving the player controller around. We recommend using headphones for this step.

Basic Migration using Unity's Built-In Oculus Spatializer

This section describes how to provide easy, basic spatialization to your Unity project using the basic spatializer included in some Unity versions. For more information, see First-Party Audio Spatialization (Beta) on page 41. Even if you intend to install the ONSP, we recommend going through this step first.

In the Unity Editor, go to Edit > Project Settings > Audio and change Spatializer Plugin from 3Dception to the OculusSpatializer provided by Unity (no need to install the Oculus integration yet).

There are two Game Objects within the Unity hierarchy with attached audio sources: one is under Robot, and the other is under AudioSource2. For both of these Game Objects:

1. Remove the TBE_SourceControl component.
2. In the AudioSource component, ensure that Spatialize is enabled.
3. In the AudioSource component, ensure that Spatial Blend is set fully to the right (to the value 1, 3D).
4. In the AudioSource 3D Sound Settings, ensure that Doppler Level is set to 0.

(A scripted equivalent of steps 2-4 is sketched at the end of this section.)

Note: The native Unity Doppler effect now works when using a spatializer provider. You may use Doppler as you see fit, though for the purposes of this guide, we recommend disabling it.

Now, run the scene to hear the spatialization take effect. Note the following limitations:

• The spatialization is direct-only, as the default Unity/Oculus spatializer provider does not provide room/reflection properties.
• Attenuation falloff defaults to using authored Unity volume curves. The built-in Oculus spatializer does not expose the ability to use the Oculus attenuation model.

These limitations can be addressed by installing the ONSP (see the next section).

Overall signal gain may be attenuated when using the Oculus spatializer due to differences in spatialization algorithms. Gain can be easily adjusted by running sounds through a native Unity channel. For a quick guide to the native Unity audio mixer, see Unity's Audio Mixer and Audio Mixer Groups tutorial.

If you are satisfied with the spatialization provided by the built-in Oculus spatializer, proceed to the Final Clean-Up section at the bottom of this guide. Otherwise, continue to the next section.
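For projects with many migrated sources, the per-source changes in steps 2-4 can also be applied from a script using standard Unity AudioSource properties. This is only a hedged sketch, not part of the ONSP or the 3Dception package:

```csharp
// Sketch: apply the per-source migration settings (Spatialize on,
// Spatial Blend = 1, Doppler Level = 0) to a given AudioSource.
using UnityEngine;

public static class MigrationHelper
{
    public static void ApplyOculusSpatializerSettings(AudioSource source)
    {
        source.spatialize = true;     // step 2: enable Spatialize
        source.spatialBlend = 1.0f;   // step 3: fully 3D
        source.dopplerLevel = 0.0f;   // step 4: disable Doppler for this guide
    }
}
```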


Advanced Migration using the Oculus Native Spatializer Plugin (ONSP)

This section describes how to add additional features and control by importing the Oculus Native Spatializer Plugin to your project. The ONSP is available for download from our Downloads page.

Begin by reviewing the rest of our Oculus Native Spatializer for Unity on page 32 Developer Guide and following the included installation instructions. Once installed, Unity will use the latest spatializer plugin found in Assets/Plugins, overriding the built-in Oculus spatializer described above. We recommend trying the example scene RedBallGreenBall to become familiar with the plugin features.

Add the ONSPAudioSource component found in OSPNative/scripts/ to the Robot and AudioSource2 Game Objects. This will add additional parameters on the AudioSource, including:

• Overall gain control
• Oculus attenuation (instead of native Unity curves)
• Min / max falloff range setting
• The ability to enable/disable reflections

You may now switch between the 3Dception provider and OculusSpatializer and adjust the values within the ONSPAudioSource component until they reasonably match 3Dception.

Another plugin is also exposed which can be added to a native Unity mixer channel. This plugin exposes the reflection engine, which allows sounds to be treated with early reflections and reverberation.

If you are satisfied with the spatialization provided by the ONSP and do not need additional control, proceed to the Final Clean-Up section at the bottom of this guide. Otherwise, continue to the next section.

Reflection Zones and Scene Morphing

This section describes the following optional steps for advanced users who have installed the ONSP:

• How to add the OculusSpatializerReflection plugin to an audio mixer channel,
• How to route sounds to that channel, and
• How to set up snapshots which can be triggered by room volumes with the same settings as the 3Dception demo.

To begin, be sure to review the Audio Mixer Setup section in the Applying Spatialization on page 35 page of our ONSP guide.

Unity's native audio mixer includes a snapshot feature which may be used to change mixer plugin parameters. For more information on how snapshots work, see Unity's Audio Mixer Snapshots tutorial.

The Oculus Spatializer integration includes the ONSPReflectionZone.cs helper script. You may add it to a Game Object with a Box Collider; it manages calling any snapshots set up within the audio mixer. See the ONSP sample scene RedBallGreenBall for an example application.

To replace 3Dception reverb zones, first ensure that a Box Collider is on the Game Object and verify that it is set to Is Trigger. Add ONSPReflectionZone.cs to the Game Object. Then open the audio mixer and create snapshots that describe the zone's reflection/reverb characteristics. Note that you can add transition times when triggering snapshots, which will help smooth the transitions from one zone to the next. Finally, assign the snapshot to the ONSPReflectionZone component. When the AudioListener enters that new zone, the snapshot will set the new reflection/reverb parameters.

Note: We recommend that you start with an INIT zone which covers the entire volume of your scene. Ideally, the values for INIT are set to remove all reverb/reflection state, which allows for a clean slate when entering/exiting zones. Also, be aware that there are limitations in the way ONSPReflectionZone.cs handles overlapping volumes. It is best to create zones that are either fully within or fully outside of each other.
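As an illustration of the zone setup described above, the following sketch creates a trigger volume and attaches the reflection zone component from code. The class name is assumed from the file name ONSPReflectionZone.cs, and the zone name, position, and size are invented examples; the mixer snapshot itself is assigned in the Inspector, since its field name is not documented here.

```csharp
// Sketch: build a reverb/reflection zone as described above.
// The snapshot describing this zone's reflection/reverb settings is
// assigned to the ONSPReflectionZone component in the Inspector.
using UnityEngine;

public class ReflectionZoneSetup : MonoBehaviour
{
    void Start()
    {
        GameObject zone = new GameObject("KitchenReflectionZone");   // example name
        zone.transform.position = new Vector3(0f, 1.5f, 4f);         // example placement

        BoxCollider volume = zone.AddComponent<BoxCollider>();
        volume.isTrigger = true;                   // zones are trigger volumes
        volume.size = new Vector3(5f, 3f, 4f);     // example zone dimensions

        zone.AddComponent<ONSPReflectionZone>();   // assumed class name from ONSPReflectionZone.cs
    }
}
```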


Experimentation is key when setting up zones with different snapshots. However, transitioning from TBE reverb zones to this new system should not be difficult, and it provides an effective way to model your reflection/reverb space.

Final Clean-Up

Remove all Game Objects and components (scripts) left over from the 3Dception integration:

1. Delete the 3Dception Global Game Object in the main Hierarchy.
2. Delete the TBE_Room_Properties component on the TBE_Room_1/2 Game Objects.
3. Delete the TBE_3DCeption folder in the /Assets folder.

Contact

If you have any questions about this procedure, please visit our Audio Developer Forum.

Managing Sound FX with Oculus Audio Manager

The Oculus Audio Manager provides sound FX management that is external to Unity scene files. This has audio workflow benefits and gives you the ability to group sound FX events together for greater flexibility.

The Oculus Audio Manager provides greater control over your audio compared to using AudioSource components:

• You can group sound FX together for volume control and other collective sound parameters and functions.
• You can trigger sound events by an external reference instead of a Unity scene object. The advantage of this is that, as an audio designer, you can develop and iterate on a sound event without interruption while other developers are actively working on the scene. You do not have to merge and resolve your changes with those of other developers because your changes are external to the scene.
• When firing a sound event, you have more variety in control options, for example, different volume curves that behave differently from the ones available in the stock AudioSource component.

The basic premise is that you create sound FX groups as collections of sound effects that share common parameters. Each sound effect you define is then a sound event that you can play back using the class SoundFXRef.

To create sound FX groups and events:

1. Add the script AudioManager.cs to a static game object.
2. In the Inspector window, click + under Sound FX Groups to add a new sound FX group.


3. Double-click the sound FX group's name to rename the group.
4. Select the sound FX group; the Properties and Sound Effects options for that group appear.
5. Expand Sound Effects and then click Add FX.


6. Expand the new sound FX and provide a name in the Name field.
7. Expand Sound Clips and then use the Size and Element controls to select the audio files.


Exploring the Audio Manager Sample Scene

Run the Test scene located in Assets/OVRAudioManager/Scenes to try out basic Audio Manager functionality. Browsing through the OVRAudioManager scene object should give you a basic understanding of the Audio Manager data architecture. Press 1 and 2 on your keyboard to trigger different sound events. Take a look at the example scene script TestScript.cs to see how we use a public SoundFXRef soundName field and soundName.PlaySoundAt(position) to trigger the sound events.
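The pattern used in TestScript.cs looks roughly like the following sketch. SoundFXRef and PlaySoundAt are the Audio Manager calls referenced above; the field name, key binding, and event name are illustrative assumptions.

```csharp
// Sketch: trigger an Audio Manager sound event from a script.
// The SoundFXRef field is exposed in the Inspector, where you pick a sound
// event defined in one of your AudioManager sound FX groups.
using UnityEngine;

public class PlayClickSound : MonoBehaviour
{
    public SoundFXRef clickSound;   // assign a sound event (e.g. a hypothetical "ui_click") in the Inspector

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1))
        {
            // Play the event at this object's position, as in TestScript.cs.
            clickSound.PlaySoundAt(transform.position);
        }
    }
}
```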


Oculus Spatializer for Wwise Integration Guide

Welcome to this guide to using the Oculus Spatializer plugin in Wwise.

Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

The Oculus Spatializer Plugin (OSP) is an add-on plugin for the Audiokinetic Wwise tool set that allows monophonic sound sources to be spatialized in 3D relative to the user's head location. This integration guide describes how to install and use the OSP in both the Wwise application and the end-user application.

Version Compatibility

Download the Oculus Spatializer Wwise package from the Audio Packages page.

Download Package Contents

The download package contains the following folders:

• Wwise2015 - Files for Wwise 2015.x, tested against Wwise 2015.1 on Windows.
  • Include - header file and sample code to integrate Wwise into a Windows app.
  • Win32 - library file for the 32-bit Windows Wwise Authoring Tool and apps.
  • x64 - library file for the 64-bit Windows Wwise Authoring Tool and apps.
• Wwise2016 - Files for Wwise 2016.x, tested against Wwise 2016.1 on Windows and macOS.
  • Android - library file to add to Android apps.
  • Include - header file and sample code to integrate Wwise into a Windows app.
  • Mac - library file to add to macOS apps.
  • Win32 - library file for the macOS Wwise Authoring Tool and the 32-bit Windows Wwise Authoring Tool and apps.
  • x64 - library file for the 64-bit Windows Wwise Authoring Tool and apps.
• Wwise2017 - Files for Wwise 2017.x, tested against Wwise 2017.1 on Windows.
  • Include - header file and sample code to integrate Wwise into a Windows app.
  • Win32 - library file for the 32-bit Windows Wwise Authoring Tool and apps.
  • x64 - library file for the 64-bit Windows Wwise Authoring Tool and apps.

The Include folder contains OculusSpatializer.h, which is used to integrate Wwise into a Windows app. It contains important registration values that the app must use to register the OSP within Wwise. The header file also includes (commented out) source code that shows how to register the plugin with the Wwise run-time.


General OSP Limitations

1. CPU usage increases when early reflections are turned on, and increases proportionately as room dimensions become larger.

Limitations Specific to the Wwise Integration

• The plugin may only be added to one bus. If you add the plugin to a second bus, you may experience some unwanted noise on the audio output.

Installing to the Wwise Authoring Tool

Installing the OSP plugin for Wwise lets you add the Oculus Spatializer as a Wwise Mixer Plug-in to use in your soundbanks.

Installing on Windows

1. Navigate to the download package folder that matches your version of Wwise.
2. If installing on 64-bit Windows, copy \x64\bin\plugins to Audiokinetic\Wwise{version}\Authoring\x64\Release\bin\plugins.
3. If installing on 32-bit Windows, copy \Win32\bin\plugins to Audiokinetic\Wwise{version}\Authoring\Win32\Release\bin\plugins.

Installing on macOS

The Oculus Spatializer plugin for macOS is only compatible with Wwise 2016.

1. Open the Finder, and then go to the Applications/Audiokinetic/Wwise{version} folder, where {version} is your Wwise 2016 version, for example: 2016.2.6098.
2. Control-click Wwise.app, and then select Show Package Contents.
3. Go to the Contents/SharedSupport/Wwise2016/support/wwise/drive_c/Program Files/Audiokinetic/Wwise/Authoring/Win32/Release/bin/plugins folder. This is your Wwise.app plugins folder.
4. From a new Finder window, copy the contents of the Wwise2016/Win32/bin/plugins folder to the Wwise.app plugins folder.
5. Open the Terminal application.
6. Enter cd /Applications/Audiokinetic/Wwise\ {version}/Wwise.app/Contents/SharedSupport/Wwise2016, where {version} is your Wwise 2016 version. Be sure to preserve the space in /Wwise\ {version}/.
7. Enter CX_ROOT=/Applications/Audiokinetic/Wwise\ {version}/Wwise.app/Contents/SharedSupport/Wwise2016 WINEPREFIX=/Applications/Audiokinetic/Wwise\ {version}/Wwise.app/Contents/SharedSupport/Wwise2016/support/wwise ./bin/wineprefixcreate --snapshot, where {version} is your Wwise 2016 version.


Adding Target Platform Plugins to Wwise Unity Projects

Add the Oculus Spatializer plugin after installing the Wwise Integration Package to your Unity projects.

Audiokinetic provides Wwise integration for Unity projects through the Wwise Integration Package, allowing Wwise to be used in Unity games (see Audiokinetic's documentation here). Before you can add Wwise sound banks that include Oculus spatialized audio to your Unity scene, you must add the appropriate Oculus Spatializer plugin to your Unity project. Each Unity target platform (Android, macOS, x86, x86_64) has its own plugin you must add.

x86 Target Platform

1. Navigate to the Oculus Spatializer Wwise download package folder that matches your version of Wwise.
2. Copy \Win32\bin\plugins\OculusSpatializerWwise.dll to the {Unity Project}\Assets\Wwise\Deployment\Plugins\Windows\x86\DSP\ folder.

x86_64 Target Platform

1. Navigate to the Oculus Spatializer Wwise download package folder that matches your version of Wwise.
2. Copy \x64\bin\plugins\OculusSpatializerWwise.dll to the {Unity Project}\Assets\Wwise\Deployment\Plugins\Windows\x86_64\DSP\ folder.

macOS Target Platform

macOS target platform support is only available for Wwise 2016 and 2017.

1. Navigate to the Wwise2016/Mac or Wwise2017/Mac folder in your Oculus Spatializer Wwise download package.
2. Copy libOculusSpatializerWwise.dylib to the {Unity Project}/Assets/Wwise/Deployment/Plugins/Mac/DSP/ folder.

Android Target Platform

Android target platform support is only available for Wwise 2016 and 2017.

1. Navigate to the Wwise2016\Android or Wwise2017\Android folder in your Oculus Spatializer Wwise download package.
2. Copy libOculusSpatializerWwise.so to the {Unity Project}\Assets\Wwise\Deployment\Plugins\Android\armeabi-v7a\DSP\ folder.

Wwise 2015

If you are using a Wwise 2015 version, you will need to rebuild the Wwise Unity Integration plugin. If you are using a later Wwise version, you do not need to rebuild it.

You must have the Wwise SDK installed, along with the corresponding version of the Unity integration (check that the WWISEROOT and WWISESDK environment variables are correct). Python 2.7 or 3.x is required, and your PATH environment variable must include the installation location. If you are building the Wwise integration source in Visual Studio 2013, you may need to modify the Library Directories paths in the AkSoundEngine project for each build config from _vc100 to _vc120.

To install:


1. Download the Wwise Unity Integration and source code for the relevant Wwise version (two separate downloads) from the Audiokinetic website.
2. In the Unity Editor, select Assets > Import > Custom Package and select the Wwise Integration to import it into your Unity project.
3. Save the project and close Unity.
4. Open the Wwise Unity Integration help doc in \Assets\Wwise\Documentation.
   a. Follow the instructions in the section Build the Integration code.
   b. Follow the instructions in the section Adding Licensed Plugins.
5. Add the contents of OculusSpatializer.h from the Oculus Audio SDK following the call to AK::SoundEngine::RegisterAllBuiltInPlugins().
6. Recompile the lib and deploy it to Unity Assets\Plugins as described in Adding Licensed Plugins.

How to Use the Oculus Spatializer in Wwise

1. Launch Wwise.
2. To add OSP to a bus, create a new audio bus and place it under one of the master buses.

Note: Mixer bus plugins may not be added to master buses.

3. Click on the newly-created bus and then click on the Mixer Plug-in tab in the Audio Bus Property Editor. Select the >>, find the Oculus Spatializer selection, and add a Default (Custom) preset:

Note: If the Mixer Plug-in tab is not visible, click the "+" tab and verify that mixer plugins are enabled (check box is selected) for buses.


4. Under the Mixer Plug-in tab, click on the "…" button at the right-hand side of the property window. This will open up the Effect Editor (global properties) for OSP:

Global Properties

The following properties are found within the OSP effect editor:

Bypass – Use native panning: Disables spatialization. All sounds routed through this bus receive Wwise native 2D panning.

Gain (+/- 24 dB): Sets a global gain for all spatialized sounds. Because the spatializer attenuates the overall volume, it is important to adjust this value so spatialized sounds play at the same volume level as non-spatialized (or native panning) sounds.

Global Scale (1 unit = 1 m): The scale of positional values fed into the spatializer must be set to meters. Some applications have different scale values assigned to a unit. For such applications, use this field to adjust the scale for the spatializer. Unity defaults to 1 meter per unit. Example: for an application with a unit set to equal 1 cm, set the Global Scale value to 0.01 (1 cm = 0.01 m).

Reflections Engine On: Enables early reflections. This greatly enhances the spatialization effect, but incurs a CPU hit.

Late Reverberation: If this field is set, a fixed reverberation calculated from the early reflection room size and reflection values is mixed into the output (see below). This can help diffuse the output and give a more natural-sounding spatialization effect. Reflections Engine On must be enabled for reverberation to be applied.

Shared Reverb Attenuation Range Min / Max: Controls the attenuation calculations for the spatial reverb. For more information, see Attenuation and Reflections on page 29.

Reflections Range Max: Range of the attenuation curve for reflections. This is the distance at which the reflections go silent, so it should roughly match the attenuation curve in Wwise. For more information, see Attenuation and Reflections.

Room Dimensions Width / Height / Length: Sets the dimensions of the room model used to calculate reflections. The greater the dimensions, the further apart the reflections. Value range is 1-200 meters for each axis.

Wall Reflection Coefficients: Sets the percentage of sound reflected by each wall specified for a room (Left/Right, Forward/Backward, Up/Down). At 0, the reflection is fully absorbed. At 1.0, the reflection bounces from the wall without any absorption. Capped at 0.97 to avoid feedback.

Note: High reflection values in small rooms may cause distortion due to volume overload. Sounds reflect more energetically in small rooms because the smaller walls absorb less sound, and shorter travel distances result in less air dispersion. Larger rooms therefore allow for higher reflection values without distortion. If you encounter distortion in rooms in the ballpark of 5 x 5 x 5 meters or smaller, this could be a factor.

Notes and Best Practices

• Up to 64 sounds running through the bus are spatialized. Subsequent sounds use Wwise native 2D panning until the spatialized sound slots are free to be reused.
• All global properties may be set to an RTPC, for real-time control within the user application (see the sketch following these notes).
• In the main menu, set your audio output configuration to a 2.1 or 2 Stereo Channel Configuration (Speakers or Headphones). The spatializer will not work on higher channel configurations.
• Note that the room model used to calculate reflections follows the listener's position and rotates with the listener's orientation. A future implementation of early reflections will allow the listener to freely walk around a static room.
• When using early reflections, be sure to set non-cubical room dimensions. A perfectly cubical room may create reinforcing echoes that can cause sounds to be poorly spatialized. The room size should roughly match the size of the room in the game so the audio reinforces the visuals.
• The shoebox model works best when simulating rooms. For large spaces and outdoor areas, it should be complemented with a separate reverb.
• IMPORTANT: When early reflections are enabled, you must ensure that the room size is set to be large enough to contain the sound. If a sound goes outside the room size (relative to the listener's position), early reflections will not be heard.
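As an illustration of the RTPC point above: the sketch below assumes you have created a Game Parameter in the Wwise project (given the hypothetical name OSP_Gain here) and bound it to the Gain property of the mixer plugin. AkSoundEngine comes from the Wwise Unity integration; this is an illustrative sketch, not part of the OSP itself.

using UnityEngine;

// Illustrative sketch only: "OSP_Gain" is a hypothetical Game Parameter name
// that you would bind to the Oculus Spatializer Gain property in Wwise.
public class OSPGainControl : MonoBehaviour
{
    [Range(-24.0f, 24.0f)]
    public float gainDb = 0.0f;

    void Update()
    {
        // AkSoundEngine is provided by the Wwise Unity integration.
        AkSoundEngine.SetRTPCValue("OSP_Gain", gainDb);
    }
}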

Sound Properties

For a sound to be spatialized, you must ensure that sounds are set to use the bus to which you added OSP:


Ensure that your sound positioning is set to 3D:

Upon setting the sound to the OSP bus, a Mixer plug-in tab will show up in the sound's Sound Property Editor:


Parameters

The following properties are applied per sound source.

Bypass Spatializer: Disables spatialization. Individual voices / actor-mixer channels may skip spatialization processing and go directly to native Wwise panning.

Reflections Enabled: Enables early reflections. This greatly enhances the spatialization effect, but incurs a CPU hit.

Oculus Attenuation Enabled: If selected, the audio source will use an internal amplitude attenuation falloff curve controlled by the Range Min/Max parameters.

Attenuation Range Min: Sets the distance at which the audio source amplitude starts attenuating, in meters. It also influences the reflection/reverb system, even if Oculus attenuation is disabled.

Attenuation Range Max: Sets the distance at which the audio source amplitude reaches its maximum attenuation, in meters. It also influences the reflection/reverb system, even if Oculus attenuation is disabled.

Volumetric Radius: Expands the sound source from a point source to a spherical volume. The radius of the sphere is defined in meters. For a point source, use a value of 0.

Treat Sound As Ambisonic: Treats the sound as ambisonic instead of applying spatialization. Recommended for ambient or environmental sounds, that is, any sound not produced by a visible actor in the scene. Note: The attached sound must be in AmbiX format. Please see Ambisonics in Supported Features on page 27 for more information.

Ambisonic Virtual Speaker Mode: Decodes ambisonics as an array of eight point-sourced and spatialized speakers, each located at the vertex of a cube around the listener. If the check box is not selected, ambisonics are decoded by OculusAmbi, our novel spherical harmonic decoder. OculusAmbi has a flatter frequency response, has less smearing, uses less compute resources, and externalizes audio better than virtual speakers. However, some comb filtering may become audible in broadband content such as wind and rushing water sounds. For broadband content, we recommend using the virtual speaker mode.


Notes and Best Practices

Currently, only mono (1-channel) and stereo (2-channel) sounds are spatialized. Any sounds with a higher channel count will not be spatialized. A stereo sound will be collapsed down to a monophonic sound by mixing both channels into a single channel and attenuating it. Keep in mind that by collapsing the stereo sound to a mono sound, phasing anomalies within the audio spectrum may occur. It is highly recommended that the input sound be authored as a mono sound. Spatialized sounds will not be able to use stereo spread to make a sound encompass the listener as it gets closer (this is a common practice for current spatialization techniques).

Integrating the Oculus Spatializer

This section is for programmers who are integrating the Wwise libraries and plugin registration within their application code base.

Add the commented-out code at the bottom of the OculusSpatializer.h file to the code where the Wwise run-time is being initialized. This step is only required for applications that use the PC-SDK. For applications that use Unity, please follow the standard Wwise/Unity integration steps for third-party plug-ins as defined by Audiokinetic. Please see: https://www.audiokinetic.com/library/edge/?source=Unity&id=pg__install__addlicensedplugins.html.

Copy OculusSpatializerWwise.dll, found within the \bin\plugins folder, into the folder where the Wwise-enabled application .exe resides. This allows the Wwise run-time to load the plugin when Wwise initialization and registration calls are executed. For example, if you are using UE4, place the plugin into the following folder: UE4\Engine\Binaries\.

The spatializer assumes that only one listener is being used to drive the spatialization. The listener is equivalent to the user's head location in space, so be sure to update it as frequently as possible. See the Wwise documentation for any caveats on placing listener updates on a thread other than the main thread.

Provided that the listener and sounds are properly updated within the application, the sounds that have been set to the OSP bus will have a greater sense of 3D presence!
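For Unity titles, keeping the listener in sync with the user's head usually just means making sure the Wwise listener component lives on the camera that tracks the HMD. The following is an illustrative sketch, not part of the OSP itself; AkAudioListener is the listener component shipped with the Wwise Unity integration.

using UnityEngine;

// Sketch: ensure the Wwise listener follows the HMD by living on the main camera.
// AkAudioListener is provided by the Wwise Unity integration.
public class EnsureVrListener : MonoBehaviour
{
    void Start()
    {
        var cam = Camera.main;
        if (cam != null && cam.GetComponent<AkAudioListener>() == null)
        {
            cam.gameObject.AddComponent<AkAudioListener>();
        }
    }
}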

OSP Version Migration in Wwise

To migrate to a new version of the Oculus Spatializer Plugin for Wwise, follow these steps:
1. Delete any existing versions of OculusSpatializer.dll and OculusSpatializer.xml.
2. Copy the new versions of OculusSpatializerWwise.dll and OculusSpatializerWwise.xml (32 bit or 64 bit) from the Audio SDK over the existing versions in your Wwise Authoring tool installation directory.
3. Copy the new version of OculusSpatializerWwise.dll (32 bit or 64 bit) over the existing version in your application directory.
4. Open the Wwise Authoring tool and generate sound banks.
5. Launch your application and load the newly generated sound banks.


Oculus Spatializer for FMOD Integration Guide Welcome to this guide to using the Oculus spatializer plugin in FMOD. Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps that is relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

The Oculus Spatializer Plugin (OSP) is an add-on plugin for FMOD Studio for Windows and Mac OS X that allows monophonic sound sources to be properly spatialized in 3D relative to the user's head location. This plugin requires FMOD Studio version 1.08.16 or later. This integration guide outlines how to install and use the OSP in both FMOD Studio and the end-user application.

General OSP Limitations
1. CPU usage increases when early reflections are turned on, and increases proportionately as room dimensions become larger.

Adding the OSP to your project in FMOD Studio 1.07.00 and later

Download the Oculus Spatializer FMOD package from the Audio Packages page.

Windows:
1. Navigate to the folder AudioSDK\Plugins\FMOD\x64.
2. Add the 64-bit version of OculusSpatializerFMOD.dll to Program Files\FMOD SoundSystem\FMOD Studio\plugins.

macOS:
1. Ctrl-click (right-click) on FMOD Studio.app and select Show Package Contents.
2. Copy libOculusSpatializerFMOD.dylib from AudioSDK/Plugins/FMOD/macub to FMOD Studio.app/Contents/Plugins.

Adding the OSP to your project in earlier versions of FMOD

Windows:
1. Navigate to the folder AudioSDK\Plugins\FMOD\Win32.
2. Copy the 32-bit OculusSpatializerFMOD.dll into the Plugins directory in your FMOD Studio project directory. (Create this directory if it does not already exist.)

macOS:
1. Navigate to the folder AudioSDK/Plugins/FMOD/macub.


2. Copy libOculusSpatializerFMOD.dylib into the Plugins directory in your FMOD Studio project directory. (Create this directory if it does not already exist.)

How to Use in FMOD Studio 1. Create a new event.

2. Select the master track.

3. Delete the 3D panner.

4. Add the Oculus Spatializer plugin.


Notes and Best Practices

• Up to 64 sounds running through the bus may be spatialized.
• Make sure that your Project Output Format is set to stereo in FMOD Studio (Edit → Preferences → Format → Stereo).
• Note that the room model used to calculate reflections follows the listener's position and rotates with the listener's orientation. A future implementation of early reflections will allow the listener to freely walk around a static room.
• When using early reflections, be sure to set non-cubical room dimensions. A perfectly cubical room may create reinforcing echoes that can cause sounds to be poorly spatialized. The room size should roughly match the size of the room in the game so the audio reinforces the visuals.
• The shoebox model works best when simulating rooms. For large spaces and outdoor areas, it should be complemented with a separate reverb.

Parameters

Sound Properties
Prefer mono sounds and/or set the master track input format to mono (by right-clicking on the metering bars on the left side).

Attenuation: Enables the internal distance attenuation model. If attenuation is disabled, you can create a custom attenuation curve using volume automation on a distance parameter.

Range Max: Maximum range for distance attenuation and reflections. Note that this affects reflection modeling even if Attenuation is disabled.


Volumetric Radius: Specifies the radius to be associated with the sound source, if you want the sound to seem to emanate from a volume of space rather than from a point source. Giving a source a radius spreads the sound out, so that as the source approaches and then completely envelops the listener, the sound is spread over a volume of space. This is especially useful for larger objects, which would otherwise sound very small when they are close to the listener. For more information, please see the blog article https://developer.oculus.com/blog/volumetric-sounds/.

Enable Reflections: If set to true, this sound instance will calculate reflections. Reflections take up extra CPU, so disabling them can be a good way to reduce the overall audio CPU cost. Reflections will only be applied if the Reflection Engine is enabled on the Oculus Spatial Reverb effect. For more information, see Attenuation and Reflections on page 29 in our Audio SDK Guide.

Oculus Spatial Reverb

Requires FMOD Studio 1.08 or later. Place the Oculus Spatial Reverb effect before the Fader on the Master Bus channel. Its parameters will affect all sounds in the project using the Oculus Spatializer Plugin.

1. From the main menu, select Window > Mixer.

2. In the space before the Fader, right-click (Cmd+click on Mac) and select Add Effect > Plug-in Effects > Oculus Spatial Reverb.

Note: There is no need to send to this reverb; the sends are set up internally inside the plugin. This effect does not perform any processing on the input signal; it simply mixes in the spatial reverb from its internal sends. This effect contains parameters that directly control the reverb as well as global settings that are shared across all instances of the spatializer.

Parameters

Refl. Engine: This global setting enables/disables the reflection engine (room model) for all sound sources, including early reflections and late reverberation. For more information, see Attenuation and Reflections on page 29 in our Audio SDK Guide.

Reverb: Enables the shared reverb output from the Oculus Spatial Reverb effect, based on room size.

Bypass All: This global setting bypasses processing in all instances of the Oculus Spatializer, Oculus Ambisonics, and Oculus Spatial Reverb effects in the project. May be used for A/B testing.

Global Scale: Specifies a scaling factor to convert game units to meters. For example, if the game units are described in feet, the Global Scale would be 0.3048. Default value is 1.0.

Room Width / Height / Length: Global settings that control the dimensions of the room model used for early reflections and reverb.

Refl. Left / Right / Up / Down / Front / Back: Global settings that control the reflectivity of the walls of the room model used for early reflections and reverb.

Range Min/Max: Controls the attenuation calculations for the spatial reverb.

Ambisonics in FMOD

The Oculus FMOD OSP supports Oculus Ambisonics in the AmbiX (ACN/SN3D) format. To apply Oculus Ambisonics to a sound file, select Add Effect > Plug-in Effects > Oculus Ambisonics.

Ambisonics are not officially supported by FMOD Studio 1.08, but its flexible bus architecture and multi-channel audio support allow ambisonics to work. However, there are two quirks to watch out for:

Head Tracking Only Works with an Oculus Effect on the Master Track

There is no head tracking if you don't have any Oculus effects on the master track. For example, if you are creating a sound scene composed solely of Oculus Ambisonics effects on audio tracks, the audio will not be spatialized as you turn or move your head. This limitation exists because FMOD only propagates the 3D positional data to effects that are on the master track. The workaround is to add the Oculus Spatializer effect to the master track even if you only play silence on that track. This works because positional data is shared between all the Oculus effects: the Oculus Spatializer effect on the master track will receive the FMOD positional data and then share it with the other Oculus effects on the audio tracks.

Avoid Putting the Oculus Ambisonics Effect on the Master Track

If you put a 4-channel ambisonic sound file on an audio track and the Oculus Ambisonics effect on the master track, it will not sound right. FMOD upmixes 4-channel ambisonic sound files to 5.1 at the output of the audio track. This mixes the channels together in a way that interferes with the sound field. There are two ways to work around this:

• Add the Oculus Ambisonics effect to the same Audio Track as the sound file. Notice in the screenshot below that the meters on the "In" at the left side of the track show 4 channels. This is the best approach if you wish to play back a single ambisonic sound or loop.


• Convert your 4-channel ambisonic sound files to pseudo-5.1 files by converting them to 6 channels. Insert two silent channels after the first two ambisonic channels, so that the third and fourth channels of the six-channel file are silent, as shown below. This prevents FMOD from upmixing and passes all six channels through to the Oculus Ambisonics effect, which knows to interpret that as ambisonic. This approach works well if you are playing back several ambisonic sounds in one event and want to mix them together before spatializing. The most common use case for this is an interactive music mix using ambisonics.


Note: You will have to make your project speaker mode 5.1 instead of stereo to make this work.

For more information on Oculus Ambisonics, see the Oculus Ambisonics section of Supported Features on page 27 in our Audio SDK Guide.

Installing with the FMOD Studio Unity Integration

The FMOD Studio Unity Integration is a Unity plugin which allows the use of FMOD in Unity games.

Compatibility

The Oculus Spatializer Plugin (OSP) for FMOD is compatible with the FMOD Studio Unity Integration for projects targeting Windows (32/64 bit), OS X, and Android. Two versions of the FMOD Studio Unity Integration are currently available: 2.0 and Legacy. The OSP for FMOD is compatible with:
• 2.0 (1.07.04 and higher)
• Legacy Version 1.07.03

2.0 Integration Installation

If you are migrating from the Legacy Integration, please follow FMOD's migration guide here: http://www.fmod.org/documentation/#content/generated/engine_new_unity/migration.html

Otherwise, take the following steps:


1. Follow the guide for setting up the 2.0 Integration here: http://www.fmod.org/documentation/#content/generated/engine_new_unity/overview.html
2. Follow the instructions for using the OSP in FMOD Studio here: https://developer.oculus.com/documentation/audiosdk/latest/concepts/osp-fmod-usage/
3. Open your project in the Unity Editor. Select Assets > Import > Custom Package, and select OculusSpatializerFMODUnity.unitypackage in AudioSDK\Plugins\FMOD\Unity to import it into your project.
4. In the Project tab of FMOD Settings, click the Add Plugin button, and enter OculusSpatializerFMOD in the new text field.

You should now be able to load and play FMOD Events that use the OSP in your Unity application runtime.

Legacy Integration Installation

1. Follow the instructions for setting up the Legacy Integration here: http://www.fmod.org/documentation/#content/generated/engine_unity/overview.html
2. Follow the instructions for using the OSP in FMOD Studio: https://developer.oculus.com/documentation/audiosdk/latest/concepts/osp-fmod-usage/
3. Open your project in the Unity Editor. Then select Assets > Import > Custom Package, and select OculusSpatializerFMODUnity.unitypackage in AudioSDK\Plugins\FMOD\Unity to import it into your project.
4. In the Project view, select the FMOD_Listener script, which should be attached to an object in the root of the scene. In the Unity Inspector view, increment the Size of Plugin Paths by one, and add ovrfmod in the new element.
5. OS X platform only: In FMOD_Listener.cs, in LoadPlugins(), modify the body of the foreach loop with the following code inside the OCULUS start/end tags:

foreach (var name in pluginPaths)
{
    // OCULUS start
    var path = pluginPath + "/";
    if (name.Equals("ovrfmod") &&
        (Application.platform == RuntimePlatform.OSXEditor ||
         Application.platform == RuntimePlatform.OSXPlayer ||
         Application.platform == RuntimePlatform.OSXDashboardPlayer))
    {
        path += (name + ".bundle");
        FMOD.Studio.UnityUtil.Log("Loading plugin: " + path);
    }
    else
    {
        path += GetPluginFileName(name);
        FMOD.Studio.UnityUtil.Log("Loading plugin: " + path);
#if UNITY_5 && (UNITY_64 || UNITY_EDITOR_64)
        if (!System.IO.File.Exists(path))
        {
            path = pluginPath + "/" + GetPluginFileName(name + "64");
        }
#endif
#if !UNITY_METRO
        if (!System.IO.File.Exists(path))
        {
            FMOD.Studio.UnityUtil.LogWarning("plugin not found: " + path);
        }
#endif
    }
    // OCULUS end

    uint handle;
    FMOD.RESULT res = sys.loadPlugin(path, out handle);
    ERRCHECK(res);
}

Now you should be able to load and play FMOD Events that use the OSP in your Unity application runtime.
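Once the plugin is registered with the integration, a spatialized event is triggered like any other 3D FMOD event; the integration passes the 3D attributes along to the effect. A minimal sketch (the event path event:/SFX/MySpatializedEvent is hypothetical; substitute an event whose master track uses the Oculus Spatializer):

using UnityEngine;

// Minimal sketch: plays a hypothetical FMOD event at this object's position.
// RuntimeManager is part of the FMOD Studio Unity Integration.
public class PlaySpatializedOneShot : MonoBehaviour
{
    // Hypothetical event path; replace with an event that uses the Oculus Spatializer.
    public string eventPath = "event:/SFX/MySpatializedEvent";

    void Start()
    {
        FMODUnity.RuntimeManager.PlayOneShot(eventPath, transform.position);
    }
}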


OSP Version Migration in FMOD

To migrate to a new version of the Oculus Spatializer Plugin for FMOD, it is recommended to follow these steps:
1. Copy the new version of OculusSpatializerFMOD.dll from the Audio SDK over the existing version in the Plugins folder of your FMOD Studio project directory.
2. If the Plugins folder contains a file named ovrfmod.dll, delete it and copy OculusSpatializerFMOD.dll into the folder.
3. Copy the new OculusSpatializerFMOD.dll (32-bit or 64-bit, as appropriate) from the Audio SDK over the existing version in your application directory.
4. Open your FMOD Studio project and build sound banks.
5. Launch your application and load the newly built banks.


Oculus VST Spatializer for DAWs Integration Guide This guide describes how to install and use the Oculus Spatializer VST plugin with the Oculus Rift. Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps that is relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

The Oculus Spatializer VST plugin for professional Digital Audio Workstations (DAWs) lets you spatialize monophonic sound sources in 3D relative to the user's head location and preview the soundscape. You can also record the mix to an ambisonic stream for later use. The VST plugin incorporates the same spatialization algorithms found in our other plugin formats (Wwise, FMOD, and Unity). Those formats are typically used to generate real-time spatialization within virtual environments. For the audio designer, the VST plugin comes in handy for setting up mixes within your favorite DAW, and for previewing what the mix sounds like prior to being spatialized in virtual reality.

Version Compatibility

This VST has been tested with various DAWs on Windows 7 and 8.1 (32-bit and 64-bit) and Mac OS X 10.8.5+. For more information, please see DAW-Specific Notes.

General OSP Limitations
1. CPU usage increases when early reflections are turned on, and increases proportionately as room dimensions become larger.

Limitations Specific to VST
• All parameters are assigned to MIDI controllers. However, most parameter ranges fall outside of the current MIDI mapping range of 0.0 - 1.0. Range settings for each parameter will be resolved in a future release of the plugin.
• Please see DAW-Specific Notes for information about each DAW tested with the spatializer.
• You must set your DAW sample rate to 44.1 kHz or 48 kHz for optimal fidelity. Note that values below 16 kHz or above 48 kHz will result in no sound.

Installation

Download the Oculus Spatializer DAW Mac or Oculus Spatializer DAW Win package from the Audio Packages page. Copy the relevant VST files to your system or to your DAW's VST installation directory.

PC

On Windows, some DAWs require you to specify custom locations for 32-bit and 64-bit VST plugins. If you have not already set up custom locations for your VST folders, we recommend using the following:


64-bit: C:\Program Files\Steinberg\VSTPlugins

32-bit: C:\Program Files (x86)\Steinberg\VSTPlugins

Mac

In OS X, VST plugins must be installed to either the global or user Library folder.

Global: /Library/Audio/Plug-Ins/VST

User: ~/Library/Audio/Plug-Ins/VST

Using the Plugin

We recommend assigning the Oculus Spatializer to your DAW tracks as an insert effect. Any sound that plays through this track will then be globally affected by the spatialization process.

VU Indicator

The green line above the 2D screen indicates the amplitude of detected audio signals from left to right in volume units (VU), providing a clear visual indication that signal is being sent to the spatializer.


HMD Tracking in DAW (Rift Only)

If an Oculus Rift is plugged into your DAW workstation when you load the plugin, a green arrow on the 2D screen will indicate the position and direction of the Rift relative to the spatialized audio source (shown above). Note: The Rift must be plugged in when you load the plugin, or the orientation arrow will not be visible. To reset the Rift to the default position (0,0,0) and orientation (pointing down the negative Z-axis), click Config in the upper-left and then click the Reset Rift button at the bottom of the window.

VST Options and Parameters

The audio tracks assigned to the Oculus Spatializer VST appear in the 2D view as colored dots. You can drag the dots in the 2D view to change their position in the soundscape and adjust other options and parameters. Click the XZ and XY buttons to change between top-down and front-facing 2D views.

Recording Options

You can render the spatialized sound sources in the Oculus Spatializer to an ambisonic stream:
1. Put the Oculus Spatializer into ARMED mode.
   a. Click REC OFF.
   b. Enter the .wav filename you want to render the ambisonic stream to.
2. Click or activate Play in your DAW.

Note: The position of the HMD does not influence the rotation or attenuation of the ambisonic stream. To cancel ARMED mode, click ARMED.


Config Options

Click Config to open the Configuration page. There are two tabs on this page, Instance and Global.

Instance Options

Label field: Label the track for easy identification in the 2D and HMD views.

Color picker tool: Change the color of the circle representing the track for easy identification in the 2D and HMD views.

Input: Select mono or AmbiX stream source.

Output: Sets the spatialization mode for the track, dependent on the Input.
• Binaural: If the input is Mono, renders output as a point-source spatialized sound. If the input is AmbiX, renders output using the Oculus ambisonic decoder.
• Ambisonic, OculusAmbi Mode: If the input is Mono, encodes the output into AmbiX 1st order format, and then decodes it using the Oculus ambisonic decoder. This is useful as a preview device to compare the audio decoding between binaural and ambisonic. If the input is AmbiX, renders output using the Oculus ambisonic decoder.
• Ambisonic, Virtual Speaker Mode: Output is rendered using a virtual array of eight point-sourced and spatialized speakers, each located at the vertex of a cube around the listener. If the input is Mono, encodes the output into AmbiX 1st order format first, and then decodes it using the virtual speaker mode. If the input is AmbiX, decodes the output using the virtual speaker mode.


Global Options

BPM: Activates a beacon in the HMD that pulsates in sync with the BPM.

Time Info: Displays the BPM and the current DAW measure and beat in the HMD.

Show XYZ axis: Displays the axes in the HMD.

Local Track Parameters

The top two rows of dials on the interface contain parameters that affect the individual track. Local track parameters are used to set up the location of a sound source in 3D space, and to shape the attenuation of the sound as it gets further away from the listener. These parameters are stored with the project when saving and loading.

Note: If you have a mouse with a scroll wheel, you can change the values in the interface by hovering over the dial and rotating the wheel.

Bypass: Prevents audio from being processed through the spatializer.

GAIN (dB) [0.0 - 24.0]: Increases processed signal volume (in decibels).

NEAR (m) [0.0 - 175.0]: Sets the distance from the listener at which a sound source starts controlling the attenuation curve (in meters). The attenuation curve approximates an inverse square and reaches maximum attenuation when it reaches the Far parameter. In the 2D grid display, this radius is represented by an orange disk around the sound position.

FAR (m) [0.0 - 175.0]: Sets the distance from the listener at which a sound source reaches its maximum attenuation value (in meters). In the 2D grid display, this radius is represented by a red disk around the sound position.

X/Y/Z Pos (m) [-100.0 - 100.0]: Sets the location of the sound relative to the listener (in meters). The coordinate system is right-handed, with the Y-axis pointing up and the Z-axis pointing toward the screen (a.k.a. the Oculus coordinate system).

Global Track Parameters

These parameters are global to all instances of the plugin within the DAW. Any parameter changed in one global parameter plugin instance will be reflected in all global parameter plugin instances, allowing the user to easily set global parameters for all tracks. These parameters are stored with the project when saving and loading.

REFLECTIONS: Toggles the reflection engine, as defined by the reflection global parameters. This enhances the spatialization effect but incurs a commensurate performance penalty.

REVERB: When enabled, a fixed reverberation is mixed into the output, providing a more natural-sounding spatialization effect. Based on room size and reflection values (see X/Y/Z Size and Left/Right, Forward/Backward, Up/Down Refl.). If X/Y/Z Size, Near, Far, or Refl. parameters are changed, reverb must be turned on and off again for the changes to take effect (this avoids hitching artifacts). Reflections must be enabled for reverb to be used.

X/Y/Z Size (m) [1.0 - 200.0]: Sets the dimensions of the theoretical room used to calculate reflections. The greater the dimensions, the further apart the reflections. In the 2D grid display, the room is represented by a cyan box (this will only display if Reflections are enabled).

LEFT/RIGHT, FORWARD/BACKWARD, UP/DOWN REFL. [0.0 - 0.97]: Sets the percentage of sound reflected by each wall in a room with the dimensions specified by the X/Y/Z SIZE parameters. At 0, the reflection is fully absorbed. At 1.0, the reflection bounces from the wall without any absorption. Capped at 0.97 to avoid feedback.

Note: High reflection values in small rooms may cause distortion due to volume overload. Sounds reflect more energetically in small rooms because the smaller walls absorb less sound, and shorter travel distances result in less air dispersion. Larger rooms therefore allow for higher reflection values without distortion. If you encounter distortion in rooms in the ballpark of 5 x 5 x 5 meters or smaller, this could be a factor.

Other Buttons and Toggles

ABOUT: Displays the current version of the VST (which matches the version of the Oculus Audio SDK being used). The Update button navigates to the latest Audio SDK on the Oculus developer site.

XZ/XY (toggle): Changes the 2D grid display from top-down (XZ) to front-face (XY). The head model in the center of the grid will change to indicate which view you are in, making it easier to understand the relationship between the sound location and head position.

SCALE (m) [20.0 - 200.0]: Sets the scale of the 2D grid display (in meters). This allows the user to have greater control over the sound position placement.

3D Visualizer

The Oculus 3D Audio Visualizer with HMD interface allows users to optionally visualize and manipulate sound parameters within VR, using either Oculus Touch or Xbox controllers. Using Touch, a user can control two sounds at the same time. The sound locations of all Visualizer channels are visible within the HMD display. Users can change the position of sounds relative to the listener, adjust volume attenuation parameters, and enable/disable room and wall parameters within VR.


Use

Note: Be sure your Oculus Rift software and firmware are up to date and running properly.

To start visualization of the VST parameters within the HMD, add the plugin to a track/channel, and then put on the HMD.


The Touch controllers are visualized as blue (left controller) and red (right controller) spheres which track with controller movement. The Xbox controller may be visualized as a selection sphere directly in front of the user by squeezing the left trigger; it is stationary, as the controller is not tracked.

Squeezing the trigger on your controller initiates the selection pointers. Some settings increase in intensity as you squeeze the trigger.

The Reset Rift button found in the Config window returns you to your initial position. If you move away from the audio track sources and want to 'teleport' back to the center of the audio space, use this button.

Use the Label field and Color picker tool in the Config Window to assign a label and color to the currently-selected instance of the spatializer. This is used to differentiate each audio source within the HMD.

A beats-per-minute (BPM) metronome sphere is located at the center of origin. Above the sphere is a message that indicates the current BPM and the current DAW measure:beat.

Moving the Camera

The sticks on the controllers control movement within the audio space. The left joystick moves your camera parallel to your viewing direction. The x-axis of the right joystick rotates you around the y-axis. The y-axis of the right joystick moves the camera up or down.


Left Stick: Pan left and right, move forwards and back.

Right Stick: Pan up and down, rotate left and right.

Modifying Audio Track Objects

Audio sources are initially displayed as yellow spheres in the HMD which modulate (or pulse) while audio is on. To select a sound, squeeze the trigger. If you are using a Touch controller, squeezing the trigger activates a laser beam you use to lock onto a source. If you are using the Xbox controller, a green sphere cursor will display at the center of the screen, which can be used to lock onto a source. After you lock onto a source, squeeze the grip to move it around and position it. Partial squeezes of the trigger cause the track to lag towards the position that it is being placed in. This can be used for smooth sound movement (for example, if you want to have softer curves when automating location).

Press the Stick on the Touch controller or the left joystick on the Xbox controller to cycle through the available setting controls. To change parameters, you must keep the index trigger squeezed to keep the object captured.

The first setting adjusts sound fall-off. Two spheres, each representing the near/far fall-off value, will appear. Your controller controls the values of each sphere and will no longer control the camera. The x-axis changes the near fall-off, and the y-axis changes the far fall-off. A floating message follows the direction of the selected sound, allowing the user to see the actual parameter values.

Click the joystick button again to cycle to distance and gain, the next parameter change mode. Use joystick up/down to change distance; the sound will move closer or farther away from you. Use joystick left/right to adjust gain; a green sphere will change, representing the attenuation of the sound (-90/+24 dB). A floating message follows the direction of the selected sound, allowing the user to see actual parameter values.

Press the Stick again to turn off parameter change mode, or release the Trigger to release the sound and return to movement mode.

Modifying Room Reflection Parameters

Use the left Touch or Xbox controller. The y button toggles reflections on/off and causes a dimly-colored grid cube to appear, representing the room. Each notch in the cube represents one meter. The x button enables reverb (brightly-colored).

If pressing the left trigger on either controller does not capture a sound but intersects with a room wall, that wall will turn bright green. Clicking the joystick will put it in room parameter mode, and it will stop controlling the camera. The selected wall will turn white, and a solid transparent wall will overlay the wall grid. The intensity of the opacity on that wall represents the reflection coefficient value of that wall. The joystick y-axis controls the size of the room. The x-axis controls the reflection coefficient. Note that you can select other walls without having to manually switch modes, provided the index finger is still down. Click the joystick button again to turn off room parameter mode, or release the index trigger to release the sound and return to movement mode.


Automation

Automation is supported for audio position only. In your DAW, enable automation record (varies with each DAW) and move the sounds around using your controller of choice. The sequence will be recorded and can be played back. Automated sequences are stored with the DAW project.

DAW-Specific Notes

This section discusses DAW-specific caveats and issues for the Oculus Spatializer Plugin. The Oculus Spatializer DAW plugin is a full stereo effect (Left/Right in, Left/Right out). It will only process the incoming Left channel and allow you to move this monophonic signal through 3D space. Some DAWs will work properly when assigning a stereo effect to a monophonic channel, while others will require workarounds. Up to 64 sounds running through the bus may be spatialized.

DAW compatibility (Windows / OS X) and additional notes:

Ableton Live 9.1.7: Yes.

Adobe Audition 8.1: Yes.

PreSonus Studio One 2.6.5: Partial. Mono track not supported. Use a stereo track; the plugin will collapse channels to mono automatically. You may also send the mono track to the plugin as a send/return effect instead of as an insert effect.

Reaper 6.5: Yes. Multicore usage must be turned off when rendering out an ambisonic stream. This can be set by navigating to Options > Preferences > Buffering and setting Audio reading/processing threads to 0.

Steinberg Nuendo 6.5 / Steinberg Cubase 8: Partial. Placing a stereo insert effect onto a mono track is not supported. Solution 1) Place your mono sound on a stereo track, with the OSP as an insert effect. Solution 2) Convert your mono source into a stereo source. Currently, the left channel of the source will be affected; there is no right channel selection or stereo-to-mono collapse feature in the plugin. Solution 3) Use a stereo send from a mono source.

Legal Notifications

VST is a trademark and software of Steinberg Media Technologies GmbH.


Oculus AAX Spatializer for DAWs Integration Guide This guide describes how to install and use the Oculus Spatializer AAX plugin with the Oculus Rift. Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps that is relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

The Oculus Spatializer AAX plugin is an add-on plugin for Avid's Pro Tools audio production platform. This plugin allows monophonic sound sources to be properly spatialized in 3D relative to the user's head location. The AAX plugin incorporates the same spatialization algorithms found in our other plugin formats (e.g., Wwise, Unity). Those formats are typically used to generate real-time spatialization within virtual environments. For the audio designer, the AAX plugin is useful for setting up mixes within the Pro Tools DAW (Digital Audio Workstation) and for hearing what the mix will sound like prior to being spatialized in virtual reality.

Version Compatibility

The AAX plugin has been tested in Pro Tools 11 (64-bit) on Windows 7 and 8, and OS X 10.10+.

General OSP Limitations
1. CPU usage increases when early reflections are turned on, and increases proportionately as room dimensions become larger.

Limitations Specific to AAX
• All parameters are assigned to MIDI controllers. However, most parameter ranges fall outside of the current MIDI mapping range of 0.0 - 1.0. Range settings for each parameter will be resolved in a future release of the plugin.
• You must set the Pro Tools sample rate to 44.1 kHz or 48 kHz for optimal fidelity. Note that values below 16 kHz or above 48 kHz will result in no sound.

Installation

Download the Oculus Spatializer DAW Mac or Oculus Spatializer DAW Win package from the Audio Packages page. Copy Oculus Spatializer.aaxplugin to the Avid Pro Tools Plug-Ins folder.

On a Mac: Macintosh HD/Library/Application Support/Avid/Audio/Plug-Ins


On a PC: C:\Program Files\Common Files\Avid\Audio\Plug-Ins

Using the Plugin

We recommend assigning the Oculus Spatializer as an insert effect to a mono track in Pro Tools. Any sound that plays through this track will then be globally affected by the spatialization process.

Track Parameters

Local Track Parameters

The top section of the plugin interface contains parameters that affect the individual track. Local track parameters are used to set up the location of a sound source in 3D space, and to shape the attenuation of the sound as it gets further away from the listener. These parameters are stored with the project when saving and loading.

Bypass: Prevents audio from being processed through the spatializer.

GAIN (dB) [0.0 - 24.0]: Increases processed signal volume (in decibels).

NEAR (m) [0.0 - 175.0]: Sets the distance from the listener at which a sound source starts controlling the attenuation curve (in meters). The attenuation curve approximates an inverse square and reaches maximum attenuation when it reaches the Far parameter. In the 2D grid display, this radius is represented by an orange disk around the sound position.

FAR (m) [0.0 - 175.0]: Sets the distance from the listener at which a sound source reaches its maximum attenuation value (in meters). In the 2D grid display, this radius is represented by a red disk around the sound position.

X/Y/Z Pos (m) [-100.0 - 100.0]: Sets the location of the sound relative to the listener (in meters). The coordinate system is right-handed, with the Y-axis pointing up and the Z-axis pointing toward the screen (a.k.a. the Oculus coordinate system).

SCALE (m) [20.0 - 200.0]: Sets the scale of the 2D grid display (in meters). This allows the user to have greater control over the sound position placement.

Global Track Parameters

These parameters are global to all instances of the plugin within the DAW. Any parameter changed in one global parameter plugin instance will be reflected in all global parameter plugin instances, allowing the user to easily set global parameters for all tracks. These parameters are stored with the project when saving and loading.

REFLECTIONS: Toggles the reflection engine, as defined by the reflection global parameters. This enhances the spatialization effect but incurs a commensurate performance penalty.

REVERB: When enabled, a fixed reverberation is mixed into the output, providing a more natural-sounding spatialization effect. Based on room size and reflection values (see X/Y/Z Size and Left/Right, Forward/Backward, Up/Down Refl.). If X/Y/Z Size, Near, Far, or Refl. parameters are changed, reverb must be turned on and off again for the changes to take effect (this avoids hitching artifacts). Reflections must be enabled for reverb to be used.

X/Y/Z Size (m) [1.0 - 200.0]: Sets the dimensions of the theoretical room used to calculate reflections. The greater the dimensions, the further apart the reflections. In the 2D grid display, the room is represented by a cyan box (this will only display if Reflections are enabled).

LEFT/RIGHT, FORWARD/BACKWARD, UP/DOWN REFL. [0.0 - 0.97]: Sets the percentage of sound reflected by each wall in a room with the dimensions specified by the X/Y/Z SIZE parameters. At 0, the reflection is fully absorbed. At 1.0, the reflection bounces from the wall without any absorption. Capped at 0.97 to avoid feedback.

Note: High reflection values in small rooms may cause distortion due to volume overload. Sounds reflect more energetically in small rooms because the smaller walls absorb less sound, and shorter travel distances result in less air dispersion. Larger rooms therefore allow for higher reflection values without distortion. If you encounter distortion in rooms in the ballpark of 5 x 5 x 5 meters or smaller, this could be a factor.


Other Buttons and Toggles

Note: If you are using a scrolling mouse, you may change the rotaries with it by placing the cursor over the rotary and scrolling up or down.

ABOUT: Displays the current version (which matches the version of the Oculus Audio SDK being used). The Update button navigates to the latest Audio SDK on the Oculus developer site.

XZ/XY (toggle): Changes the 2D grid display from top-down (XZ) to front-face (XY). The head model in the center of the grid will change to indicate which view you are in, making it easier to understand the relationship between the sound location and head position.

3D Visualizer

The Oculus 3D Audio Visualizer with HMD interface allows users to optionally visualize and manipulate sound parameters within VR, using either Oculus Touch or Xbox controllers. Using Touch, a user can control two sounds at the same time. The sound locations of all Visualizer channels are visible within the HMD display. Users can change the position of sounds relative to the listener, adjust volume attenuation parameters, and enable/disable room and wall parameters within VR.

Use

Note: Be sure your Oculus Rift software and firmware are up to date and running properly.

To begin visualization of the plugin parameters within the HMD, add the plugin to a channel.



Touch controllers are visualized as blue (left controller) and red (right controller) spheres which track with controller movement. The Xbox controller may be visualized as a selection sphere directly in front of the user by pressing the left index trigger; it is stationary, as the controller is not tracked.

Use the index triggers to initiate a selection pointer. Some settings respond to the strength of the pointer (see below for details). The strength value is controlled by the amount the trigger is pressed.

The Reset Rift button found in the About window tears down and re-initializes the HMD and returns users to an initial position. If you move away from the sound sources and want to 'teleport' back to the center of the audio space, use this button.

Press the Set Color button in the About window to bring up a color selector. This selector assigns the color to the sound location assigned to the currently-selected instance of the spatializer in the DAW. This is used to differentiate each sound within the HMD.

A beats-per-minute (BPM) metronome sphere is located at the center of origin. Above the sphere is a message that indicates the current BPM and the current DAW measure:beat.

Moving the Camera

Movement within the audio space is controlled via joysticks on either the Xbox controller or Touch controllers: the left joystick moves your camera parallel to your viewing direction. The x-axis of the right joystick rotates you around the y-axis. The y-axis of the right joystick moves the camera up or down.

Modifying Sound Objects

Audio sources are initially displayed as yellow spheres in the HMD which modulate (or pulse) while audio is on. To select a sound, squeeze the index-finger trigger. If you are using a Touch controller, a beam will shoot out that can be used to select a sound. If you are using the Xbox controller, a green sphere cursor will display at the center of the screen, which can be used to select a sound. Once you have locked onto a sound, you may move it around to position it. The joystick will stop functioning as a camera controller. Partially release the index-finger trigger to cause the sound to lag towards the position that it is being placed in. This can be used for smooth sound movement (e.g., if you want to have softer curves when automating location).

Click the joystick button of the Touch controller or the left joystick on the Xbox controller to cycle through the available setting controls. To change parameters, you must keep the index trigger pressed to keep the sound captured.

The first setting adjusts sound fall-off. Two spheres, each representing the near/far fall-off value, will appear. Your controller will now control the values of each sphere and will no longer control the camera. The x-axis changes the near fall-off, and the y-axis changes the far fall-off. A floating message follows the direction of the selected sound, allowing the user to see the actual parameter values.

Click the joystick button again to cycle to distance and gain, the next parameter change mode. Use joystick up/down to change distance; the sound will move closer or farther away from you. Use joystick left/right to adjust gain; a green sphere will change, representing the attenuation of the sound (-90/+24 dB). A floating message will follow the direction of the selected sound, allowing the user to see actual parameter values.

Click the joystick button again to turn off parameter change mode, or release the index trigger to release the sound and place you back into movement mode.


Modifying Room Reflection Parameters

Use the left Touch or Xbox controller. The y button toggles reflections on/off and causes a dimly-colored grid cube to appear, representing the room. Each notch in the cube represents one meter. The x button enables reverb (brightly-colored).

If pressing the left trigger on either controller does not capture a sound but intersects with a room wall, that wall will turn bright green. Clicking the joystick will put it in room parameter mode, and it will stop controlling the camera. The selected wall will turn white, and a solid transparent wall will overlay the wall grid. The intensity of the opacity on that wall represents the reflection coefficient value of that wall. The joystick y-axis controls the size of the room. The x-axis controls the reflection coefficient. Note that you can select other walls without having to manually switch modes, provided the index finger is still down. Click the joystick button again to turn off room parameter mode, or release the index trigger to release the sound and return to movement mode.

Automation

Automation is supported for audio position only. In your DAW, enable automation record (varies with each DAW) and move the sounds around using your controller of choice. The sequence will be recorded and can be played back. Automated sequences are stored with the DAW project.


Oculus Lip Sync Unity Integration Guide This guide describes how to install and use Oculus Lip Sync for Unity. Before reading this guide, be sure to review the Oculus Audio SDK Guide on page 26 for information on features, requirements, and integration steps that is relevant to all of our Audio SDK plugins. For general background information on audio spatialization, see our Introduction to Virtual Reality Audio on page 6.

Overview

This guide describes how to install and use the Oculus Lip Sync Unity integration. The Oculus Lip Sync Unity integration (OVRLipSync) is an add-on plugin and set of scripts used to sync avatar lip movements to speech sounds. OVRLipSync analyzes an audio input stream from a canned source or microphone input, and creates a set of values (called visemes) which may be used to animate the lips of an avatar.

A viseme is a gesture or expression of the lips and face that corresponds to a particular speech sound. The term is used, for example, when discussing lip reading, where it is analogous to the concept of a phoneme, and is a basic visual unit of intelligibility. In computer animation, visemes may be used to animate avatars so that they look like they are speaking.

OVRLipSync uses a repertoire of visemes to modify avatars based on a specified audio input stream. Each viseme targets a specified morph target in an avatar to influence the amount that target will be expressed on the model. Thus, realistic lip movement can be used to sync what is being spoken to what is being seen, enhancing the visual cues that one can use when populating an application with avatars (either controlled by a user locally or on a network, or for generating lip sync animations for NPC avatars via dialogue samples).

Our system currently maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. These visemes correspond to expressions typically made by people producing the speech sound by which they're referred; e.g., the viseme sil corresponds to a silent/neutral expression, PP appears to be pronouncing the first syllable in "popcorn," FF the first syllable of "fish," and so forth. These targets have been selected to give the maximum range of lip movement, and are agnostic to language. For more information on these 15 visemes and how they were selected, please read the following documentation: Viseme MPEG-4 Standard.

Note: OVRLipSync currently animates lip movements only.
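As a reading aid only, the 15 targets above can be written down as an ordered set. The enum below is a convenience sketch for this guide; the actual identifiers used by the OVRLipSync scripts may differ.

// Convenience sketch of the 15 viseme targets named in this guide, in order.
// The real OVRLipSync integration defines its own identifiers, which may differ.
public enum VisemeTarget
{
    sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, ou
}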

Requirements
OVRLipSync requires Unity 5.x Professional or Personal (or later), or Unity 4.x Professional, targeting Android or Windows platforms and running on Windows 7, 8, or 10. OS X 10.9.5 and later is also currently supported. See Unity Compatibility and Requirements for details on our recommended versions.
The Legacy Oculus Spatializer Plugin (OSP) for Unity is required (available from our Downloads page), and a minimal installation of this spatializer is included with the OVRLipSync download, providing the spatialization support necessary for this asset. The Legacy OSP included with this release can work alongside either the Legacy OSP for Unity or our Native OSP for Unity. It is provided to supply the functionality necessary for OVRLipSync and is not intended as a replacement for the full OSP for Unity (available from our Downloads page). The Native OSP for Unity does not currently support audio pre-processing, which the viseme analysis requires to obtain the highest possible signal strength from the input. Future versions of OVRLipSync will address this issue when pre-processing (or "insert" effects) becomes available within a native audio source.

Download and Setup
Note: We recommend removing any previously-imported versions of the OVRLipSync integration before importing a new plugin.
To download OVRLipSync and import it into a Unity project:
1. Download the Oculus Lipsync Unity package from the Audio Packages page.
2. Extract the zips.
3. Open your project in the Unity Editor, or create a new project.
4. In the Unity Editor, select Assets > Import Package > Custom Package….
5. Select LegacyOculusSpatializer.unitypackage in /Plugins/Unity. When the Importing Package dialog opens, leave all assets selected and click Import.
6. Select OVRLipSync.unitypackage in the OVRLipSync folder and import it. When the Importing Package dialog opens, leave all assets selected and click Import.

Using Lip Sync Integration
To use the Lip Sync integration, a scene must include the LipSyncInterface, the main interface to the OVRLipSync DLL. A prefab is included in the integration for convenience.
OVRLipSyncContext must be added to each GameObject which has the morph or texture target that you want to control. OVRLipSyncContextMorphTarget and OVRLipSyncContextTextureFlip are the scripts that bridge the viseme output from OVRLipSyncContext.
OVRLipSyncContextMorphTarget requires a Skinned Mesh Renderer, which should have blend targets assigned to it (see the prefab LipSyncTarget_Female for an example). The mesh should include all 15 visemes generated by OVRLipSyncContext - expand BlendShapes in the head_girl Inspector view to access them.


Each blend target from sil to ou represents a viseme generated by the viseme engine. You may view each one by setting the blend target for a single viseme to 100.0. Note that sil corresponds to the silent, i.e., neutral, expression, and setting it to 100 with all other values at 0 will have no visible effect.
You may use more than 15 blend shapes in a target, and in fact we recommend doing so in order to add facial expressions and blinking to the avatar. Notice that not every blend shape is a viseme - for example, blinkR and blinkL control the eyes.
Select LipSyncMorphTarget_Female under Prefabs and, in the Inspector, find the attached script OVR Lip Sync Context Morph Target and expand it to see a map of the viseme outputs to the blend shapes. A simplified sketch of the kind of mapping this script performs is shown below.
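To make the bridge concrete, here is a minimal sketch of that kind of mapping: one weight per viseme drives the corresponding blend shape on the Skinned Mesh Renderer. The visemeWeights array and the visemeToBlendTarget index map are illustrative assumptions for this example, not part of the shipped script, which exposes its mapping in the Inspector instead.

using UnityEngine;

// Illustrative sketch (not the shipped OVRLipSyncContextMorphTarget script) of
// mapping viseme weights onto blend shapes. The visemeWeights values (0..1, one
// per viseme, sil..ou) and the visemeToBlendTarget index map are assumptions.
public class VisemeToBlendShapeExample : MonoBehaviour
{
    public SkinnedMeshRenderer skinnedMesh;          // mesh containing the 15 viseme blend shapes
    public int[] visemeToBlendTarget = new int[15];  // blend shape index for each viseme

    // Call this with one weight per viseme, for example once per frame.
    public void ApplyVisemes(float[] visemeWeights)
    {
        for (int i = 0; i < visemeWeights.Length && i < visemeToBlendTarget.Length; i++)
        {
            // Unity blend shape weights are expressed in the 0..100 range.
            skinnedMesh.SetBlendShapeWeight(visemeToBlendTarget[i], visemeWeights[i] * 100.0f);
        }
    }
}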


Notice that Element 0 (which represents the sil viseme) has an index of two - this assigns which blend target in the model is to be influenced by the viseme blend value.
Now select LipSynchMorphTarget_RobotTextures in Prefabs to view OVR Lip Sync Context Texture Flip in the Unity Inspector, and expand Textures.


OVR Lip Sync Context Texture Flip requires the Material whose textures you wish to target, and a set of textures. These textures must be set within the Textures field, and each must match the texture you want to associate with a given viseme. The logic within the TextureFlip script chooses only one texture to use on a given frame and assigns it to the main material texture, which should be assigned to the model that uses the texture for drawing the avatar lips. This type of avatar is somewhat cartoon-like and is a good fit for scenes that include a large number of avatars, such as social scenes.
Other OVRLipSync Scripts
OVRLipSyncMicInput is for use with a GameObject which has an AudioSource attached to it. It takes input from any attached microphone and pipes it through the AudioSource. Note that an AudioSource must be available to use the OVRLipSyncContext script, as the system relies on the OnAudioFilterRead function to analyze the audio and return the viseme buffers which drive the morph or texture targets.
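The following sketch shows the callback plumbing this relies on: a script on the same GameObject as the AudioSource receives each audio buffer through Unity's OnAudioFilterRead callback. The viseme analysis itself is omitted here because it happens inside OVRLipSyncContext; this example only computes a simple signal level to show where the data becomes available.

using UnityEngine;

// Sketch of the OnAudioFilterRead hook that OVRLipSyncContext depends on. The
// actual viseme analysis is performed by the OVRLipSync plugin and is not
// reproduced here; this script only measures a rough signal level.
[RequireComponent(typeof(AudioSource))]
public class AudioFilterTap : MonoBehaviour
{
    private float currentLevel; // simple RMS level of the last buffer

    // Unity calls this on the audio thread for every buffer the AudioSource produces.
    void OnAudioFilterRead(float[] data, int channels)
    {
        float sum = 0.0f;
        for (int i = 0; i < data.Length; i++)
        {
            sum += data[i] * data[i];
        }
        currentLevel = Mathf.Sqrt(sum / data.Length);

        // OVRLipSyncContext hands buffers like this one to the lip sync engine and
        // receives back viseme weights that drive the morph or texture targets.
    }
}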


We recommend looking at the other scripts included with this integration. They will provide more insight as to what is possible with OVRLipSync. We include, for example, some helper scripts to facilitate easy on-screen (in VR) debugging.

Precomputing Visemes to Save CPU
You can save a lot of processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip synced animations in mobile apps, as there is less processing power available than on the Rift. We provide both a Unity tool for generating precomputed visemes from an audio source and a context called OVRLipSyncContextCanned. It works much the same as OVRLipSyncContext, but reads the visemes from a precomputed viseme asset file instead of generating them in real time.
Precomputing Viseme Assets from an Audio File
You can generate viseme asset files for audio clips that meet these requirements:
• The Preload Audio check box is selected.
• Compression Mode is set to Decompress on Load.
Note: You do not have to ship the audio clips with these settings, but you do need to have them set up as such to generate viseme asset files.
To generate a viseme asset file:
1. Select one or more audio clips in the Unity project window.
2. Click Tools > Oculus > Generate Lip Sync Assets.
The viseme asset files are saved in the same folder as the audio clips, with the file name audioClipName_lipSync.asset.
Playing Back Precomputed Visemes
1. On your Unity object, pair an OVRLipSyncContextCanned script component with both an Audio Source component and either an OVRLipSyncContextTextureFlip or an OVRLipSyncContextMorphTarget script component.
2. Drag the viseme asset file to the OVRLipSyncContextCanned component's Current Sequence field.
3. Play the source audio file on the attached Audio Source component.
A minimal scripted version of this setup is sketched below.
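The following is a minimal scripted version of the playback steps above, assuming the components described in this guide. The asset type name OVRLipSyncSequence and the field name currentSequence are assumptions standing in for the Current Sequence field shown in the Inspector; verify them against the scripts in your imported package.

using UnityEngine;

// Sketch of pairing OVRLipSyncContextCanned with an AudioSource and a morph target,
// then starting synchronized playback. OVRLipSyncSequence and currentSequence are
// assumed names; check the imported OVRLipSyncContextCanned script for the real ones.
[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(OVRLipSyncContextCanned))]
[RequireComponent(typeof(OVRLipSyncContextMorphTarget))]
public class CannedVisemePlayback : MonoBehaviour
{
    // Assign the audioClipName_lipSync.asset generated by
    // Tools > Oculus > Generate Lip Sync Assets.
    public OVRLipSyncSequence precomputedSequence;

    void Start()
    {
        OVRLipSyncContextCanned canned = GetComponent<OVRLipSyncContextCanned>();
        canned.currentSequence = precomputedSequence; // assumed field name

        // Playing the source clip on the attached AudioSource keeps the audio and
        // the precomputed visemes in sync.
        GetComponent<AudioSource>().Play();
    }
}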


Exploring Oculus Lip Sync with the Sample Scene
To get started, we recommend opening the supplied demonstration scene LipSync_Demo, located under Assets/OVRLipSync/Scenes. This scene provides an introduction to OVRLipSync resources and examples of how the library works.
How to use LipSync_Demo:

You can switch models between a morph target (our human avatar) and a texture flip-book target (a robot avatar), and also switch between microphone input and our provided sample audio clip, using the following controls.

Table 2: Keyboard Controls

Key   Control
1     Select Morph target, Mic input (default).
2     Select Flipbook target, Mic input.
3     Select Morph target, Audio Clip.
4     Select Flipbook target, Audio Clip.
5     Select Morph target, Precomputed Visemes.
6     Select Flipbook target, Precomputed Visemes.
L     Toggle loopback on/off to hear your voice with the mic input. Use headphones to avoid feedback (default is off).
      Decrease microphone gain (1-15).
      Increase microphone gain (1-15).

Action                          Control
Swipe Down                      Decrease microphone gain (1-15).
Swipe Up                        Increase microphone gain (1-15).
Swipe Forward / Swipe Backward  Cycle forward/backward through targets:
                                1. Morph target - mic input
                                2. Flipbook target - mic input
                                3. Morph target - audio clip input
                                4. Flipbook target - audio clip input
                                5. Morph target - pregenerated visemes
                                6. Flipbook target - pregenerated visemes
                                Audio clip input plays automatically.
Single Tap                      Toggle mic loopback on/off to hear your voice with the mic input.


To preview the scene in the Unity Editor Game View:
1. Import and launch LipSync_Demo as described above.
2. Play the LipSync_Demo scene in OVRLipSync > Scenes in the Unity Editor Game View.
To preview the scene with a Rift:
1. Import and launch LipSync_Demo as described above.
2. In Build Settings, verify that the PC, Mac & Linux Standalone option is selected under Platform.
3. In Player Settings, select Virtual Reality Supported.
4. Preview the scene normally in the Unity Game View.
To preview the scene in Gear VR:
1. Be sure you are able to build and run projects on your Samsung phone (Debug Mode enabled, adb installed, etc.). See the Mobile SDK Setup Guide for more information.
2. Import and launch LipSync_Demo as described above.
3. In Build Settings:
   a. Select Android under Platform.
   b. Add the current scene under Scenes in Build.
   c. Set Texture Compression to ASTC (recommended).
4. In Player Settings:
   a. Select Virtual Reality Supported.
   b. Specify the Bundle Identifier.
5. Copy your osig to /Assets/Plugins/Android/assets.
6. Build and run your project.
Note: In order to select targets, change the mic input level, and so on for Gear VR, you will need a compatible Bluetooth keyboard. If you do not have one available, you can experiment with these changes in the Unity Game View.


Oculus Audio SDK Profiler
This guide describes how to use the Oculus Audio SDK Profiler to gather real-time performance metrics and statistics from the Oculus Spatializer Plugin.

Overview
The Audio SDK Profiler provides real-time statistics and metrics to track audio performance in apps that use Oculus Spatializer plugins. The profiler collects analytics from an analytics server embedded within every Oculus Spatializer plugin (OSP). You can profile audio performance in both VR and non-VR applications, either running locally or remotely.
Limitations
• Analytics are only available for Unity, Wwise, FMOD, and Native OSP versions 1.18 or later.
• Remote profiling requires both nodes to be in the same subnet of the local area network.
• Profiling mobile or Gear VR apps requires a Wi-Fi connection.
• Port 2121 is the default port for the OSP server. To change the port, you must edit your OSP settings and then rebuild. See Activating Profiling in Your App below.

Setup
Installing the Profiler
Download the Oculus Audio Profiler for Windows package from the Audio Packages page. After downloading the package, extract the contents of the .zip file to the desired location.
Activating Profiling in Your App
We ship the Oculus Spatializer with the analytics server turned off. Before you can profile your app's audio, you must activate the Oculus Spatializer analytics server in your app.
Oculus Spatializer Plugins in Unity (Native, FMOD, Wwise)
1. Create an empty game object.
2. Add the appropriate script component to the game object:
   a. For the Unity Native Plugin, add ONSPProfiler.
   b. For the FMOD Unity Plugin, add OculusSpatializerFMOD.
   c. For the Wwise Unity Plugin, add OculusSpatializerWwise.
3. Select the Profiler Enabled check box.
4. (Optional) Change the network port if the default port of 2121 is not suitable for your use case.
The same setup can also be performed from a script, as sketched below.
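As a rough scripted equivalent of the Editor steps above, the sketch below creates the game object and attaches the profiler component at runtime, for the Unity Native plugin case. The field names profilerEnabled and port are assumptions standing in for the Profiler Enabled check box and port setting; check the ONSPProfiler (or OculusSpatializerFMOD / OculusSpatializerWwise) script in your project for the actual names.

using UnityEngine;

// Sketch of activating the Oculus Spatializer analytics server from a script,
// mirroring the Editor steps above. profilerEnabled and port are assumed field
// names; verify them against the profiler script shipped with your OSP version.
public class EnableAudioProfiler : MonoBehaviour
{
    void Awake()
    {
        GameObject go = new GameObject("OculusAudioProfiler");
        ONSPProfiler profiler = go.AddComponent<ONSPProfiler>();

        profiler.profilerEnabled = true; // equivalent to selecting the Profiler Enabled check box
        profiler.port = 2121;            // change if the default port is not suitable
    }
}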


Oculus Spatializer Plugin for Wwise
1. Call OSP_Wwise_SetProfilerEnabled(bool enabled);
2. (Optional) Change the network port if the default port of 2121 is not suitable for your use case by calling OSP_Wwise_SetProfilerPort(int port);
Oculus Spatializer Plugin for FMOD
1. Call OSP_FMOD_SetProfilerEnabled(bool enabled);
2. (Optional) Change the network port if the default port of 2121 is not suitable for your use case by calling OSP_FMOD_SetProfilerPort(int port);
Oculus Spatializer Plugin for Native C/C++ Apps
1. Call ovrAudio_SetProfilerEnabled(ovrAudioContext Context, int enabled);
2. (Optional) Change the network port if the default port of 2121 is not suitable for your use case by calling ovrAudio_SetProfilerPort(ovrAudioContext Context, int portNumber);

Profiling Spatialized Audio
Profiling Local Apps
To connect the profiler to a local app:
1. Start your app.
2. Start OculusAudioProfiler.exe.
3. Click Connect.
Profiling Gear VR and Other Remote Apps
Your app must be running on the same local area network as the computer running the Oculus Audio SDK Profiler. Additionally:
• Windows apps must have the Oculus Audio Profiler port allowed in the Windows Firewall. The default port is 2121.
• Gear VR and other mobile Android devices must be placed into Wi-Fi debugging mode. See Connecting adb via WIFI.
To connect the profiler to a remote app:
1. Obtain the IP address of the device running your app.
2. Start your app.
3. Start OculusAudioProfiler.exe.
4. Enter the IP address and port of the device. The default port is 2121.
5. Click Connect.

If you cannot connect to your remote app, try these troubleshooting tips:
• Make sure the port you are using is not blocked by your network settings or Windows Firewall.
• Make sure your computer and remote device are on the same LAN and subnet.
• If connecting to Gear VR or other mobile Android devices, try to connect the PC to the network over Wi-Fi so that both the device and the PC are connected to the same Wi-Fi network.


Reading Profiler Analytics

• OSP Version. The Oculus Audio SDK version of the connected OSP instance.


• Spatialized Sounds. The number of spatialized sounds currently processed by the OSP.

• Ambisonic Sounds. The number of ambisonic sounds currently processed by the OSP.
• Reverb and Reflections. The current reverb and reflections parameter settings.
• CPU %. Plots the estimated CPU usage of the process the OSP is running in. This is an estimated value and does not account for multi-processor architectures.
• Processed Sounds. Plots the total number of spatialized and ambisonic sounds processed by the OSP.


Oculus Audio Loudness Meter
This guide describes how to use the Oculus Audio Loudness Meter to profile the overall loudness of your app.

Overview
The Oculus Audio Loudness Meter measures the overall loudness of your app's audio mix. Loudness goes beyond simple peak level measurements, using integral functions and gates to measure loudness over time in LUFS units according to ITU standards (BS.1770-2).
In the interest of providing a consistent audio volume experience across all Oculus VR experiences, we ask that you set a target of -18 LUFS for Rift apps and -16 LUFS for Gear VR apps. If the loudness profile of your app exceeds these thresholds, adjust your audio mix until your app no longer exceeds them.
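For reference, the measurement behind these targets is (approximately) the gated loudness defined by ITU-R BS.1770: the signal is K-weighted, the mean-square power of each channel is computed over gated 400 ms blocks, and the result is expressed on a logarithmic scale. A sketch of the core relation, using the symbols from that standard, is:

L_K = -0.691 + 10 \log_{10} \left( \sum_{i} G_i \, z_i \right)

where z_i is the mean square of the K-weighted signal in channel i over the gated measurement interval and G_i is the per-channel weighting. The -70 LUFS figure mentioned later in this guide corresponds to the absolute gating threshold in BS.1770-2.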

Setup
Requirements
The Oculus Audio Loudness Meter runs only on Microsoft Windows. There are additional requirements for measuring loudness on Gear VR devices:
• a line-in audio jack on the computer
• a 3.5mm male-to-male stereo audio cable
Installing
Download the Oculus Audio Loudness Meter package from the Audio Packages page. After downloading the package, extract the contents of the .zip file to the desired location.

Measuring Loudness
The Loudness Meter continuously monitors the selected audio interface to compile an overall loudness profile for your app. The loudness computation begins as soon as the Loudness Meter detects an audio signal stronger than -70 LUFS. The longer you monitor your audio, the less fluctuation you will see in the integrated LUFS.
Measuring Rift Loudness
Rift apps should not exceed the loudness target of -18 LUFS. The observed LUFS value turns red if this threshold is exceeded.
1. Start your Rift app.
2. Set the app audio volume (if any) and the Rift audio volume to 100%.
3. Start OculusLoudnessMeter.exe.


4. On the Options menu, point to LUFS Threshold Target, and then click -18 (Rift).
5. Play a typical scene or level.
Measuring Gear VR Loudness
Gear VR apps should not exceed the loudness target of -16 LUFS. The observed LUFS value turns red if this threshold is exceeded.
1. Connect the stereo audio cable between the Gear VR headphone jack and your computer's line-in jack.
2. Set the Windows sound mixer line-in level to 100%.
3. Start your Gear VR app.
4. Set the Gear VR audio volume to 100%.
5. Verify that you can hear the Gear VR app audio through your current Windows playback device.
6. Start OculusLoudnessMeter.exe.
7. On the Options menu, point to Input, and then click the current Windows playback device.
8. On the Options menu, point to LUFS Threshold Target, and then click -16 (Gear VR).
9. Play a typical scene or level.

Resetting the Meter
Click RESET to discard the current loudness measurement and start over. Keep in mind that integrated LUFS are not calculated until the audio signal is stronger than -70 LUFS.
Measuring Momentary Loudness
Right-click the contents of the Loudness Meter to toggle momentary loudness measurement mode. This mode uses a 400 ms time interval for calculating loudness, and is therefore good for observing peaks in the audio mix while the audio is being analyzed. You may switch freely between momentary and integrated loudness measurement modes. Switching to momentary mode does not affect the integrated loudness that is continuously calculated in the background.


Release Notes
This section describes changes for each version release.

Audio SDK 1.25 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.25.0
New Features
• The behavior of all spatializer plugins was changed so that disabling per-source reflections also disables reverb for that source. This change applies to the following plugins: Oculus Spatializer DAW Mac, Oculus Spatializer DAW Win, Oculus Spatializer FMOD, Oculus Spatializer Native, Oculus Spatializer Unity, and Oculus Spatializer Wwise.
Bug Fixes
• Fixed a bug that caused the Oculus application to crash. This occurred when reverb was enabled while running on a Windows PC based on a Sandy Bridge Intel CPU and configured for Oculus Minimum Spec.
API Changes
• There are no breaking API changes in this release.

Audio SDK 1.24 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.24.0
New Features
• Spatializer plugin sizes reduced by half: A technique known as half floats was applied to the Unity, Wwise, and FMOD spatializer plugins, reducing the code size of those plugins by half.
Bug Fixes
• In the Unity Spatializer plugin, rotation for ambisonic sounds was incorrectly handled, so that when the listener turned their head to the right or left, the ambisonic sound field rotated with the listener. This has been fixed so that the ambisonic sound field is rotated in the opposite direction from the headset rotation. This makes it sound as if it is fixed in place, which is the proper effect.
• There was a typo in the Unity Spatializer plugin, where _AssignRaycastCallback was typed in as AssignRayCastCallback. This caused the function not to work properly, which broke the Dynamic Room Modeling feature in the Unity Spatializer plugin. This has been corrected.


API Changes
• There are no breaking API changes in this release.

Audio SDK 1.22 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.22.0
New Features
• Dynamic Room Modeling: The Oculus Native Spatializer plugin for Unity 5.2+ now supports Dynamic Room Modeling. This enables sound reflections and reverb to be generated based on a dynamically updated model of the current room (or space) within the VR experience, as well as the user's position within that room. For example, if the user moves from a small room to a larger room within the VR experience, the natural echoes and reverb associated with the larger room are automatically applied. These effects also change naturally as the user moves about within a room. This feature can be enabled by adding a script to an object within the scene, in the Unity Editor. You can then optionally configure a number of public variables, including: Layer Mask, Visualize Room, Rays Per Second, Room Interp Speed, Max Wall Distance, Ray Cache Size, Dynamic Reflections Enabled, and Legacy Reverb. The associated documentation has also been enhanced to more clearly explain the difference between sound reflections and reverb. For more information, please see Dynamic Room Modeling on page 39.
API Changes
• The new or updated API function calls associated with Dynamic Room Modeling include: ovrAudio_AssignRaycastCallback, ovrAudio_SetDynamicRoomRaysPerSecond, ovrAudio_SetDynamicRoomInterpSpeed, ovrAudio_SetDynamicRoomMaxWallDistance, ovrAudio_SetDynamicRoomRaysRayCacheSize, ovrAudio_UpdateRoomModel, ovrAudio_GetRoomDimensions, and ovrAudio_GetRaycastHits.

Audio SDK 1.20 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
Oculus DAW 1.20.0
Bug Fixes
• Fixed an issue where global parameters (such as reflection room size and room coefficients) were overwritten when a developer added a new spatializer instance to a project.
Oculus Spatializer Wwise 1.20.0
Bug Fixes
• Fixed an issue where allocating an ambisonic voice would cause the plug-in to crash if the binaural spatializer voice limit had been reached.


Audio SDK 1.19 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
Oculus DAW Win/Mac OSP 1.19.0
New Features
• Optimized 2D graphics to allow for multiple instances of plugin windows, which previously caused performance issues.
• Randomized the color of instance locations when new instances are added, making it easier to see and utilize them.
• Randomized the location of instances when they are added, enabling easier selection in both 2D and the DAW.
Oculus DAW Win OSP 1.19.0
Bug Fixes
• Fixed a shader compile crash on Windows 10 when an HMD was plugged in.
Oculus Spatializer Wwise 1.19.0
Bug Fixes
• Fixed ambisonic output, which was collapsing to mono.

Audio SDK 1.18 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
Oculus Audio Loudness Meter 1.0.0
Initial Release
Measures the overall loudness of your app's audio mix. Loudness goes beyond simple peak level measurements, using integral functions and gates to measure loudness over time in LUFS units. In the interest of providing a consistent audio volume experience across all Oculus VR experiences, we ask that you set a target of -18 LUFS for Rift apps and -16 LUFS for Gear VR apps.
Oculus Audio Profiler for Windows 1.18.0
Provides real-time audio statistics and performance metrics from apps that use the Oculus Spatializer Plugin.
Oculus Spatializer FMOD 1.18.0
New Features
• Added support for the Oculus Audio Profiler.
Oculus Spatializer Native 1.18.0
New Features
• Added support for the Oculus Audio Profiler.


Oculus Spatializer Unity 1.18.0
New Features
• Added support for the Oculus Audio Profiler.
Oculus Spatializer Wwise 1.18.0
New Features
• Added support for the Oculus Audio Profiler.
• Added Wwise 2017 platform target support for Android and macOS.
Bug Fixes
• The Treat Sound As Ambisonic check box now works properly.

Audio SDK 1.17 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.17.0
New Features
• Wwise 2016
  • Added plugin and Unity platform target support for macOS.
  • Added Unity platform target support for Android.
• Added plugin support for Wwise 2017.
Bug Fixes
• Fixed a reverb bug in all plugins causing it to sometimes fail to turn on when forced.
• Unity ONSP
  • ONSPAudioSource: Fixed the volumetric radius field and enabling/disabling reflections on an audio instance.
  • Fixed a bug causing room reflections to periodically fail to be recognized at startup of the Editor or application.

Audio SDK 1.16 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.16.0
New Features
• Added support for pre-computed visemes to Oculus Lip Sync for Unity.
• Improved the viseme engine on Rift to generate smoother and more accurate visemes.


• Added Oculus Audio Manager to provide sound FX management that is external to Unity scene files. This has audio workflow benefits and also gives you the ability to group sound FX events together for greater flexibility.
Bug Fixes
• Fixed an issue with ovrLipSyncDll_ProcessFrame where it did not properly analyze the audio signal.
API Changes
• Due to an API change in Unity 2017 beta, the 1.1.5 version of the Oculus ambisonic decoder is obsolete and does not work in Unity 2017 beta 9 or later. The 1.16.0 version of the ambisonic decoder is now the official ambisonic integration.

Audio SDK 1.1 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.1.5
New Features
• Added near-field rendering for more realistic spatialization of sound sources closer than 1 meter.
• Added volumetric rendering to sound sources to support sound emanating from a spherical volume instead of a point source. Set to 0m to render a point source.
• Added a new virtual speaker mode to decode ambisonics as an array of eight point-source spatialized speakers, each located at a vertex of a cube around the listener.
• In Wwise, the 3D position editor can now be used to preview spatialization.
• VST Plugin
  • Added a recording mode to render the spatialized soundscape to an AmbiX .wav file.
  • Added mouse support to the 2D view to move sound sources around.
  • Moved several options from the About dialog box to the Config dialog box.
  • You can now label a sound source for easy identification as well as change its color from the Config dialog box.
• Added beta ambisonic playback support for Unity version 2017.1 beta.
Bug Fixes
• Fixed an issue that could corrupt shared data when running DAWs on a multi-core computer system.
• Fixed an issue that could create spurious error messages in Steinberg Nuendo.
1.1.4
Bug Fixes
• Fixed an issue causing the spatializer to sometimes fail to initialize and output silence. This issue affected all middleware/engines and platforms.


1.1.3
The 1.1.3 release includes bug fixes for the FMOD and Wwise plugins. Our FMOD spatializer plugin now requires FMOD Studio version 1.08.16 or later.
Bug Fixes
• FMOD
  • Fixed incorrect listener position, which sometimes caused "glitchy" sound.
  • Added a workaround for an FMOD issue with not releasing DSP instances. This would sometimes cause sound to drop out.
• Wwise
  • Fixed an issue with Shared Attenuation Range parameters not being written to sound banks correctly.
1.1.2
The 1.1.2 release includes minor bug fixes and logging changes.
New Features
• Added more detailed error log messages for setting Shared Reverb Range parameters.
Bug Fixes
• FMOD
  • Fixed inverse solving of listener coordinates.
  • Increased the "inside head" zone where sound is placed in front of the listener. Sounds closer than 1 cm to the center of the head are placed in front to prevent glitches where sound jumps from side to side due to floating point error (exhibited in the FMOD plugin when reverse solving listener coordinates).
  • Fixed incorrect audio output from the OSP when the FMOD Studio project is set to surround output (i.e., 5.1) rather than stereo.
  • Fixed Shared Reverb Min/Max range parameter values not being set correctly.
1.1.1
The 1.1.1 release adds an updated version of the Oculus Native Spatializer Plugin for Unity and minor bug fixes.
Bug Fixes
• Unity ONSP: Version 1.0.4 of the Unity ONSP was inadvertently included with release 1.1.0. Audio SDK 1.1.1 includes the latest plugin.
• Wwise: The SDK now always sets Min/Max attachment parameter values. This allows reflection values to be modified even if we are using Wwise to author the direct curve.
• AAX and VST for DAWs: The visualizer now only updates the listener position when the HMD is mounted/worn.
1.1.0
Overview of Major Changes
The 1.1 release includes major improvements to the Audio SDK and some big new features, including Ambisonics support, shared reverb, and attenuation controls. It also includes tweaks to the HRTF to flatten frequency response and improve spatialization.
The VST and AAX plugins now feature 3D Audio Visualizers allowing users to visualize and manipulate sound parameters within VR.
We have discontinued the Legacy Audio Spatializer for Unity 4. If you still need that plugin, it is still available with Audio SDK v1.0.4 on the Downloads page - select the Audio category and version 1.0.4 to download.
The Oculus Spatializer for FMOD has been renamed OculusSpatializerFMOD.dll, and the Oculus Spatializer for Wwise has been renamed OculusSpatializerWwise.dll.
Note: Version 1.0.4 of the Unity ONSP was inadvertently included in this release. Version 1.1.x will be released soon.
New Features
• Wwise and FMOD: Added Ambisonics spatialization support using spherical harmonic-based rendering to provide accurate reproduction of Ambisonics. For more information, see the Oculus Ambisonics section under Supported Features on page 27 of our Audio SDK Guide.
• Added shared reverb to the FMOD, Wwise, and Unity plugins, moving all reverb processing to a single effect for more efficient processing.
• Added Attenuation Range min/max, providing control over the internal attenuation model used for early reflections. This allows better distance simulation, as the authored curve can match the internal curve, meaning reflections fall off naturally.
• AAX and VST: Added a 3D Audio Visualizer with HMD interface, allowing users to visualize and manipulate sound parameters within VR using Oculus Touch or Xbox controllers. For more information, see 3D Visualizer on page 85 (AAX) and 3D Visualizer on page 77 (VST).
• Added a visual representation of the Room Model to the Unity Native plugin.
API Changes
• Moved all FMOD global settings to the Oculus Spatial Reverb effect.
• The Oculus Spatializer for FMOD has been renamed OculusSpatializerFMOD.dll.
• The Oculus Spatializer for Wwise has been renamed OculusSpatializerWwise.dll.
• Renamed several parameters for consistency across plugins.
• Removed OculusFMODSpatializerSettings.h from the FMOD Plugin. All values are now available through the Oculus Spatializer.
• Deprecated OvrFMODGlobalSettings.cs from the FMOD Plugin.
• Added a Bypass Spatializer option to the Wwise Plugin.
• Renamed Disable Reflections to Enable Reflections in the FMOD, Wwise, and Unity plugins, and inverted the logic.
• Removed the Legacy Audio Spatializer for Unity 4.

Known Issues
• Unity ONSP: Version 1.0.4 of the Unity ONSP was inadvertently included in this release. Version 1.1.x will be released soon.
• FMOD Studio 1.8.xx: Putting the Oculus Ambisonics effect on the event master track does not provide optimal results. FMOD interprets the 4-channel ambisonics as quadraphonic and automatically up-mixes to 5.1 at the track output. To work around this issue:
  • Put the Oculus Ambisonics effect on the audio track so it has 4-channel input, OR
  • Set the project to 5.1 speaker mode and manually convert the 4-channel B-Format to 5.1 by leaving channels 3 (center) and 4 (LFE) silent to prevent the automatic upmix.


• Gear VR developers using Unity 5.3.4 or later, or using Unity 5.4.0b16 and later: Do not set DSP Buffer Size to Best in Audio Manager in the Inspector for now or you will encounter audio distortion. Set it to Good or Default instead.

Audio SDK 1.0 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
1.0.4
Overview of Major Changes
The Unity v5.4.0b16 Editor added a basic version of our Oculus Native Spatializer Plugin for Unity 5 (ONSP). It makes it trivially easy to add basic spatialization (HRTF transforms) to audio point sources in your Unity project. For more information, see First-Party Audio Spatialization (Beta) in our Oculus Native Spatializer for Unity guide.
Bug Fixes
• Fixed a bug in which audio sources were silent when attenuation mode was unset.
• Oculus Spatializer for Wwise Plugin
  • Fixed a crash when adding the Wwise 2017 spatializer to a Wwise project.
Known Issues
• Gear VR developers using Unity 5.3.4 or later, or using Unity 5.4.0b16 and later: Do not set DSP Buffer Size to Best in Audio Manager in the Inspector for now or you will encounter audio distortion. Set it to Good or Default instead.
1.0.3
New Features
• VST Integration
  • Added head tracking in the DAW.
  • Added a Volume Unit (VU) meter.
  • Added support for Adobe Audition v8.1 (Windows and OS X).
API Changes
• Oculus Native Spatializer for Unity
  • Log the ONSP version number when running the test scene.
  • Redballgreenball demo scene: changed the scale of audio parameters (room size and audio source curves) as well as visual geometry to reflect real-world scale.
Bug Fixes
• Fixed incorrect Ambisonics rotation when the listener head rotated.
• Fixed listener rotation for reflections and optimized performance by removing extra silence after reflections.
• Oculus Native Spatializer for Unity


  • Fixed a parameter setting issue in which volume changes and other settings were not set properly.
  • Fixed volume pop and buzzing anomalies when going in and out of Unity app focus.
  • Various crash fixes and CPU optimizations.
• VST
  • Fixed scrubbing and loss of early reflections in Adobe Audition and potentially other DAWs.
1.0.2
API Changes
• Native Spatializer for Unity: Set the void SetParameter(ref AudioSource source) function within ONSPAudioSource.cs to public. When instantiating a prefab with this component, please be sure to call this function (and pass a reference to the AudioSource component) before calling AudioSource.Play().
• Ambisonics API: Changed to right-hand coordinates to match the rest of the public Audio SDK.
Bug Fixes
• Fixed a bug causing reverb to turn on/off at random. Note that reverb settings in v1.0.1 plugins may not have been active.
• Reverb is now disabled when the per-sound setting to disable reflections is on.
• Fixed rejection of high reflection values from plugins: changed Audio SDK parameter validation to accept reflection coefficients 0-1, and clamp to 0.95.
• Wwise: Fixed incorrect detection of channel count on sounds sent to the OSP bus (previously this resulted in an n > 1 channel warning message, even though the sound is mono).
• Wwise: Temporary fix for sounds using multi-position mode - they now bypass the OSP and display a warning message in the profiler log, rather than playing spatialized but unattenuated.
• Unity Native: Fixed reversed z-axis when calculating the position of a sound relative to the listener.
Known Issues
• High reflection values in small rooms may cause distortion due to volume overload. See the parameter documentation in the appropriate OSP guide for more information.
1.0.1
Overview of Major Changes
This constitutes the first full release of the Oculus Audio SDK.
New Features
• OSP for FMOD: Added attenuation Range Min parameter.
• OSP for FMOD: Added support for FMOD Unity Integration 2.0.
API Changes
• Unity Native: Added a global scale parameter.
Bug Fixes
• FMOD: Fixed a plugin loading issue for FMOD Unity Integration (Legacy) on OS X.
• Unity Legacy: Microphone input can now be spatialized.


1.0.0-beta
Overview of Major Changes
This constitutes our beta release of Audio SDK 1.0. We have added an additional spatializer plugin for Unity based on Unity's Native Audio Plugin framework, and are maintaining our original Oculus Spatializer Plugin (OSP) for Unity for legacy development with Unity 4. Developers using Unity 5 should use the Oculus Native Spatializer Plugin.
The priority system has been removed in favor of middleware- or engine-implemented sound priority. We removed the frequency hint as improvements in the core engine made it redundant.
New Features
• Added Oculus Native Spatializer for Unity.
• Added support for using OSP for FMOD with the FMOD Studio Unity Integration.
• Added support for using OSP for Wwise with the Wwise Unity Integration.
API Changes
• Removed the priority system and frequency hint from all OSPs.
• Added falloff range near/far to the Unity Native OSP.
Bug Fixes
• Wwise: Fixed a potential crash bug in spatializer tail processing.

Audio SDK 0.11 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
0.11.3
Bug Fixes
• Fixed spurious warnings in the debug output window.
0.11
Overview of Major Changes
This release introduces the OculusHQ spatializer provider, which combines the quality of the former High Quality Provider with the performance of the Simple Provider. Plugins no longer require the selection of HQ or Simple paths. Old implementations use OHQ by default, with reflections enabled.
New Features
• Minor VST changes.
• Added AAX.
• Added Wwise 2015.1 support.
• Improved PC and Android performance.


Known Issues
• FastPath is currently not supported for Android. As of this release, it cannot be disabled in Unity 5.1, which will cause intermittent audio issues. To work around this, use Unity 4.6 until the next 5.1 patch release.

Audio SDK 0.10 Release Notes
This document provides an overview of new features, improvements, and fixes included in the latest version of the Oculus Audio SDK.
0.10.2
Overview of Major Changes
The Oculus Audio SDK consists of a set of plugins for popular middleware and engines; the Oculus Spatializer VST plugin for content authors; and documentation to assist developers that want to incorporate realistic spatialized audio in their VR-enabled applications and games. Currently the Oculus Audio SDK supports Mac, Windows, and mobile platforms, and provides integrations with:
• FMOD (Windows, Mac, and mobile)
• Audiokinetic Wwise (Windows)
• Unity 4.6 and later (Windows, Mac, and mobile)
The optional OVRAudio C/C++ SDK is available to qualified developers by contacting developer support directly.
New Features
• Unity Plugin
  • Works with Unity 4 Free.
  • Defaults to the 'slow' audio path for reliability.
• Wwise plugin
  • Removed dependency on the VS2013 CRTL.
• FMOD plugin
  • Significant crash bug/reliability improvements.
  • Added Mac support.
  • Removed dependency on the VS2013 CRTL.
• VST
  • Finalized user interface.
  • Now available for Mac.
• OVRAudio (internal only)
  • Changed from bool returns to error code returns.
  • Added debug output.
  • Added 16 kHz support.
  • Removed the Bass Boost option.


API Changes
• OVRAudio (internal only)
  • Added ovrAudio_SetAudioSourcePropertyf().
  • Added ovrAudio_SetUserConfig().
Bug Fixes
• Unity plugin
  • Removed the AndroidManifest, which was causing conflicts with users' existing manifests.
  • Fixed various bugs.
• Wwise plugin
  • Fixed various crash bugs.
Known Issues
• This is still a preview release, so expect a lot of bugs and other minor issues!


Audio SDK Developer Reference
The Oculus Audio SDK Developer Reference contains detailed information about the data structures and files included with the SDK.
For the latest reference, see Oculus Audio SDK Reference Manual 1.25.
For the Audio SDK 1.24 reference, see Oculus Audio SDK Reference Manual 1.24.
For the Audio SDK 1.22 reference, see Oculus Audio SDK Reference Manual 1.22.
For the Audio SDK 1.20 reference, see Oculus Audio SDK Reference Manual 1.20.
For the Audio SDK 1.19 reference, see Oculus Audio SDK Reference Manual 1.19.
For the Audio SDK 1.18 reference, see Oculus Audio SDK Reference Manual 1.18.
For the Audio SDK 1.17 reference, see Oculus Audio SDK Reference Manual 1.17.
For the Audio SDK 1.16 reference, see Oculus Audio SDK Reference Manual 1.16.
For the Audio SDK 1.1 reference, see Oculus Audio SDK Reference Manual 1.1.
For the Audio SDK 1.0 reference, see Oculus Audio SDK Reference Manual 1.0.


Audio Documentation Archive
This section provides links to legacy documentation. Select from the following:

Version   HTML                       PDFs
Latest    AUDIO SDK Documentation    Audio SDK Guide
1.24      AUDIO SDK Documentation    Audio SDK Guide
1.22      AUDIO SDK Documentation    Audio SDK Guide
1.20      AUDIO SDK Documentation    Audio SDK Guide
1.19      AUDIO SDK Documentation    Audio SDK Guide
1.18      AUDIO SDK Documentation    Audio SDK Guide
1.17      AUDIO SDK Documentation    Audio SDK Guide
1.16      AUDIO SDK Documentation    Audio SDK Guide
1.0       AUDIO SDK Documentation    Audio SDK Guide
0.11      AUDIO SDK Documentation    Audio SDK Guide
0.10      AUDIO SDK Documentation    Audio SDK Guide
