Blame Game EXP feat. Harry Shotta in Dolby Atmos
Blame Game EXP featuring Harry Shotta is an 18-minute music experience comprising four tracks and five cinematic audio scenes. The piece explores socio-economic issues such as crime, poverty, deprivation, and mental health. The story is told from different perspectives reflected in the lyrics, with narrative transitions that help the audience better understand the complexity and dynamics of the depicted relationships.
The aim of this collaboration was to bring together conventional music production and songwriting with narrative sound design and new-generation immersive audio technologies such as Dolby Atmos®, usually associated with cinema and gaming. The use of spatial sound design for the narrative components and the cutting-edge Dolby Atmos audio format for the music mixing brings the audience face to face with the stories unfolding throughout. Blame Game EXP showcases the true potential of immersive storytelling for on-demand music streaming.
We wanted to engulf our audience in music as well as in physical environments. The project utilizes a number of experimental techniques, including recordings made with ambisonic and wearable binaural microphones, which bring a new sound to the listening experience that is typically absent from conventional stereo music productions.
- Oliver Kadel, 2021
During this project we really enjoyed experimenting with a number of recording and mixing techniques. Some scenes were recorded with the Sennheiser AMBEO first-order ambisonic microphone, which essentially captured the scene like a 360 camera by recording positional information in all directions. This was subsequently decoded into 5.1 and then integrated into a 7.1.2 channel bed within Pro Tools.

Other scenes featured native binaural recordings made with a wearable Sonic Presence SP-15C microphone. Using pre-binauralized audio as an input source for the Atmos mix may sound, and indeed is, paradoxical. However, rules are there to be broken, and we highly encourage experimentation of any kind. Personally, I found this technique effective at portraying a scene from a first-person point of view, as a native binaural recording captured in action can be hard to replicate with object mixing alone while retaining the desired visceral quality. While it worked wonderfully for the headphone mix, the downside was that the binaural elements did not translate well to loudspeakers; ultimately, though, an optimal balance could be found to satisfy both delivery methods from a creative standpoint.

Lastly, some scenes were completely invented and brought to life with sound design and spatialization utilizing beds and objects. It was fun to mix and play with these formats within a single piece. As always, there is no right or wrong way to do anything - different ideas require an individual approach.
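To make the ambisonic step concrete, here is a minimal sketch of a first-order decode of the kind described above. It assumes AmbiX input (ACN channel order, SN3D normalization) and uses a deliberately simple sampling decoder; the function name and speaker angles are illustrative, and production decoders, including those used on this project, rely on more sophisticated, energy-preserving designs.

```python
import numpy as np

# Nominal horizontal azimuths (degrees, counterclockwise from front)
# for the full-range channels of a 5.1 bed; angles are illustrative.
SPEAKER_AZIMUTHS_51 = {"L": 30.0, "R": -30.0, "C": 0.0,
                       "Ls": 110.0, "Rs": -110.0}

def decode_foa_to_51(foa: np.ndarray) -> dict[str, np.ndarray]:
    """Decode a first-order ambisonic recording to 5.1 speaker feeds
    with a basic sampling decoder.

    foa: shape (4, num_samples) in AmbiX convention
         (ACN channel order W, Y, Z, X; SN3D normalization).
    """
    w, y, _z, x = foa
    feeds = {}
    for name, az_deg in SPEAKER_AZIMUTHS_51.items():
        az = np.deg2rad(az_deg)
        # Project the sound field onto each speaker direction. With
        # elevation 0, the height channel (Z) drops out of the decode.
        feeds[name] = 0.5 * (w + np.cos(az) * x + np.sin(az) * y)
    # LFE is left silent here; low end is usually bass-managed separately.
    feeds["LFE"] = np.zeros_like(w)
    return feeds
```

The resulting five full-range feeds can then be laid into the corresponding channels of a 7.1.2 bed, as was done here in Pro Tools.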
Our technical approach was to produce a spatial mix with channel beds and objects across the 7.1.2 Dolby Atmos layout. This format offers true 3D audio on loudspeakers and, thanks to binaural processing, on headphones. In comparison to legacy formats such as stereo and 5.1, the Dolby Atmos 7.1.2 channel layout, coupled with object-based audio, offers a height dimension, which can be reproduced even over headphones when the signals are filtered with a set of Head-Related Transfer Functions (HRTFs). An HRTF is a pair of filters (one per ear) that captures the spatial cues encoded by the human anatomy, including the size and shape of our ears and head. These filters can be applied in real time to impose those spatial cues onto any audio signal, such as an immersive music mix, and offer a level of immersion not possible with traditional stereo headphone reproduction.
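As a rough illustration of how an HRTF pair is applied, the sketch below convolves a mono element with the left- and right-ear head-related impulse responses (HRIRs, the time-domain form of an HRTF) for a single measured direction. The HRIR arrays are assumed to be loaded from a measurement set beforehand; a full renderer additionally selects or interpolates HRIRs per object position and sums the binauralized objects into the headphone mix.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono: np.ndarray,
                hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Impose the spatial cues of one measured direction onto a mono
    signal by convolving it with that direction's left- and right-ear
    HRIRs.

    Returns an array of shape (num_samples, 2) for headphone playback.
    """
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)
```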
The Dolby Atmos Renderer offers a selection of binaural render modes ('near', 'mid', and 'far'), which gives producers and mixing engineers control over how much spatial effect is applied to individual sound elements in their mix. In Blame Game, the spatial processing was emphasized on the narrative, diegetic components that bridge the musical tracks together, while for the music itself it was used conservatively to preserve the timbral fidelity and punchiness expected from Hip Hop. The human brain is incredibly adaptive and can familiarize itself with spatial cues within a short timeframe. Having control over how much, and when, the spatial effect is applied to individual sound elements or groups of them offers a very powerful set of tools: conservative "center-stage" mixing can be combined with immersive object arrangements that occupy the full three-dimensional space, which is not possible using traditional, non-spatial approaches. The result is a set of completely novel creative strategies to please and excite the listener. In the same way that arrangement and production decisions create contrast and dynamism between musical sections, spatial processing gives producers and mixing engineers a way to reveal new dimensions of the immersive auditory experience.
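The renderer's modes are discrete per-element settings rather than a continuous control, but the underlying idea of dosing spatial processing per element or group can be sketched as a simple dry/wet blend between an untouched "center-stage" version and a fully binauralized one. This is purely conceptual and not how the Dolby Atmos Renderer works internally:

```python
import numpy as np

def spatial_blend(dry: np.ndarray, wet: np.ndarray,
                  amount: float) -> np.ndarray:
    """Crossfade between an unprocessed 'center-stage' element (dry)
    and its binauralized version (wet), both shaped (num_samples, 2).

    amount=0.0 keeps the timbral fidelity and punch of the plain mix;
    amount=1.0 places the element fully in the 3D scene.
    """
    assert dry.shape == wet.shape
    amount = float(np.clip(amount, 0.0, 1.0))
    return (1.0 - amount) * dry + amount * wet
```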
Dolby Atmos is currently supported on Apple Music along with several other major music streaming platforms. We believe that in the immediate future more and more of the global audience will gain access to this content as the evolution and adoption of immersive audio continues across industries. Furthermore, we are very excited about the introduction of head-tracking on a number of consumer devices. This technology is typically built into earphones or headphones that work wirelessly over Bluetooth. Head-tracking provides three degrees of freedom (3DOF) of rotational movement relative to the center stage of the content, adding a whole new dynamic dimension to the listening experience that most regular listeners are not yet familiar with. We believe that over the next decade spatial and interactive audio will become a key differentiating factor and an indispensable aspect of immersive communication, storytelling, brand advertising, music streaming, and podcasting. It's a truly exciting landscape for modern music and other audio content creatives.
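Under the hood, 3DOF head-tracking typically works by counter-rotating the rendered sound field against the reported head orientation before binauralization. Below is a minimal sketch for the yaw axis on a first-order ambisonic scene; AmbiX channel order is assumed, sign conventions vary between implementations, and consumer renderers handle all three rotation axes plus smoothing of the sensor data.

```python
import numpy as np

def rotate_foa_yaw(foa: np.ndarray, head_yaw: float) -> np.ndarray:
    """Counter-rotate a first-order ambisonic scene (AmbiX order
    W, Y, Z, X) against the listener's head yaw (radians), so the
    mix stays anchored to the 'center stage' instead of turning
    with the head. Full 3DOF treats pitch and roll the same way.
    """
    w, y, z, x = foa
    a = -head_yaw  # the scene rotates opposite to the head
    x_rot = np.cos(a) * x - np.sin(a) * y
    y_rot = np.sin(a) * x + np.cos(a) * y
    return np.stack([w, y_rot, z, x_rot])
```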
The AvidPlay platform has been instrumental in the distribution of this piece. With distribution options currently limited, we wanted to maximize reach by making the content accessible on all platforms that support spatial audio with Dolby Atmos. AvidPlay is a one-stop shop covering all key metadata and distribution features; it currently supports Apple Music, TIDAL, and Amazon Music HD, as well as the rest of the streaming platforms that only support stereo playback. The project was released on the 8th of September 2021.
ARTIST: Harry Shotta
PRODUCERS: Macky Gee & Erb N Dub
ARTIST MANAGEMENT: David Ross
CREATIVE DIRECTOR: Oliver Kadel
MIXING ENGINEER: Oliver Kadel
LABEL: 1.618 Music
Special mention to Oliver Scheuregger, Mathew Neutra, Dylan Marcus and Emma Rees for their contributions and support across this project.
Harry Shotta
Harry Shotta is an award-winning international MC who has toured the globe since he exploded onto the Drum N Bass scene. He is a Guinness World Record holder, taking the title of 'most words on a song' from Eminem's "Rap God" in 2017 with his epic display of double-time speed rap on "Animal". Harry also wrote the first rap AR experience, Consequences, which was launched at the Future Of Storytelling Festival in New York in 2018. This was followed by a showcase sponsored by Bose at the Raindance Immersive Festival in 2019. Continuing his passion for weaving storytelling into musical soundscapes, Harry's latest journey into thought-provoking lyrical narratives finds him breaking down the story of a troubled boy over four different tracks.
Author: Oliver Kadel
https://www.1618digital.com
Oliver Kadel is an award-winning audio engineer and sound designer based in London, specializing in spatial and interactive audio for new-generation immersive media. Since founding 1.618 Digital in 2014, Oliver and his team have worked on audio for over 100 immersive projects. Alongside industry practice, Oliver lectures at the University of West London, teaching immersive audio on a master's program. Most recently, Oliver embarked on a PhD at the University of York AudioLab, researching the impact of spatial psychoacoustics on learning and training in Virtual Reality. In early 2018, Oliver launched the Immersive Audio Podcast, hosting industry experts and influential guests to discuss all areas of immersive audio and the XR industry. The podcast has been highly commended as a valuable source of information for industry professionals, academics, and students; it represents numerous segments of the industry and continues to grow its audience globally.