Startup puts the brain back into the audio experience
Amy J. Born | March 10, 2020
The experience of listening to live music is fundamentally different from listening to recorded music. Many of the factors that shape the live experience are absent when listening to, for example, tracks on a smartphone. Recreating that full sound is at the heart of IRIS, a London-based startup founded in 2018 that is dedicated to improving the way people listen to all types of audio.
"We looked at the audio industry and saw that sound quality had been neglected and in decline over the past 30 years," said Jacobi Anstruther, founder and CEO of IRIS. He explained that the push to make music available on smartphones and computers meant that quantity (getting lots of music on an easily accessible device) came at the expense of quality. He saw an opportunity, however, with the current boom in music streaming and podcasts. "'Listen well' is our company motto," he said. "How people listen is as important as what they listen to."
Anstruther explained that live music (or any live audio) contains a combination of the real and the imaginary. The real is the sound being produced, the actual music that goes directly to the ear, while the imaginary is everything that affects the experience and makes it unique to each listener: the shape and space of the room, where the listener sits, the person's history with the music, and the emotions each individual brings to it.
One key distinction is that the brain is actively engaged when listening live but passive when listening to a recording. "All of the imaginary element is missing from recorded music. Engineers try to solve the problem with frequency manipulation, EQing, to fill in what's missing, channeling to make it more full. But they are working with something that is limited from the start. What is missing is the way the brain engages," Anstruther said. "IRIS brings that missing piece back into the listening experience. We look at the space of the audio. We look at what happens in the brain when we listen to live music." He refers to this as "hacking the nervous system."
He explained that headphones and speakers perform their equalization (EQ) through a DSP (digital signal processor) that shapes the signal before it reaches the driver. The result of that processing is an artificial soundscape, one in which the listener is not active and is therefore less engaged, with a weaker emotional response. "IRIS takes the original MP3 file and simulates the imaginary piece to trick the brain into being in a live experience. You are getting the information as if you were in almost infinite space, as if you are sitting in every seat, engaging the brain back into a listening experience. The brain creates the missing information," he said.
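For readers unfamiliar with the conventional DSP equalization Anstruther is contrasting against, a minimal sketch may help. The example below implements a standard peaking-EQ biquad filter in plain Python, using the widely published Audio EQ Cookbook coefficient formulas. This illustrates generic frequency manipulation only, not IRIS's proprietary processing; the function names are my own.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Peaking-EQ biquad coefficients (Audio EQ Cookbook formulas).

    fs: sample rate in Hz, f0: center frequency in Hz,
    gain_db: boost/cut at f0, q: filter sharpness.
    """
    a = 10 ** (gain_db / 40)          # amplitude factor
    w0 = 2 * math.pi * f0 / fs        # center frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = 1 + alpha * a
    b1 = -2 * cos_w0
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * cos_w0
    a2 = 1 - alpha / a
    # Normalize so the output coefficient a0 becomes 1.
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad_filter(samples, coeffs):
    """Apply the biquad in direct form I:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    """
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

With a +6 dB boost centered at 1 kHz, a 1 kHz tone passed through the filter comes out roughly twice as loud, while a 100 Hz tone is left nearly untouched. Chains of such filters are what consumer audio gear typically uses for EQ; Anstruther's point is that this reshapes the spectrum of the recording rather than restoring the spatial cues of a live performance.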
The benefits of active listening go beyond the enjoyment of music. "We're limited on what we can retain because listening is so passive. Music and podcasts have become background noise. We retain below 17% of a podcast and almost nothing in 6 months," Anstruther said. He and his team are studying the benefits of active listening with top researchers in the field of human performance from Formula 1 racing, Mount Sinai, UCLA and Goldsmiths, University of London. While it is too early to make any claims, the initial findings are promising in terms of improved memory and retention, better sleep and relaxation, lower stress and anxiety, and overall well-being. Applications for the IRIS technology include education technology, podcasts and wellness, in addition to the music industry and streaming services.
The IRIS technology is available for licensing to content providers, hardware and software developers, venues, and anyone who wants to incorporate active listening into their product or space. Individuals can try the technology for themselves via an iPhone app that works in conjunction with Spotify. The app allows the listener to toggle IRIS on and off to experience the difference.
More information is available at irislistenwell.com.