Immersed in a virtual park and dodging incoming balls, participants in a new study have helped researchers unravel the role of sound in balance. The researchers discovered that helpful directional sound cues boosted accuracy in a balance task, and that this benefit held regardless of how the sounds were delivered. This suggests simpler, headphone-based systems could be widely used for balance testing and rehabilitation. The findings were published in Experimental Brain Research.
Previous research had shown that sounds play an important role in keeping our balance, whether we are standing still or moving. However, many earlier studies used either speakers or headphones to deliver these sound cues, leaving open the question of whether one method might be better than the other for practical use.
With applications ranging from clinical assessments to everyday safety, the team wanted to know if a simpler setup using common headphones could be just as effective as more complex loudspeaker arrangements. Their study also aimed to test whether making sounds feel external through a simulation method would further boost performance, especially in situations where visual information was absent.
“Since 2014, when I arrived at NYU, I have worked with Professor Ken Perlin and a team at the Ear Institute of Mount Sinai,” said study author Anat V. Lubetzky, an associate professor at New York University and director of the Physical Therapy Sensorimotor Lab.
“We have been using head-mounted displays as a method to expand our understanding of balance function and dysfunction, particularly how the brain uses sensory information for balance (sensory integration for postural control). What we know about balance depends on the tools we use to measure it, and we believed that HMDs could help us expand that, thanks to their ability to create more diverse and contextual visual environments.”
“Then in 2018, Dr. Maura Cosetti asked me if her patients with hearing loss have balance problems. That question has taken us on a journey to discover whether what we hear matters for balance. I partnered with Dr. Agnieszka Roginska, an NYU Professor of Music Technology, to develop a paradigm to answer these clinical questions. With funding from the Hearing Health Foundation, we developed new applications that combine auditory cues with visual cues, and then we received funding from the NIH to study this question in people with unilateral vestibular hypofunction, people with unilateral hearing loss, and healthy controls.”
“That work primarily focused on static balance rather than dynamic tasks, and I always wanted to also study the implications for dynamic balance,” Lubetzky continued. “In addition, studies that investigated the influence of sounds on balance typically used either headphones or loudspeakers, and we wanted to make a direct comparison. For sounds to be used clinically, we need a simple setup, so we’re trying to understand what’s the simplest setup that can still provide clinically meaningful information.”
For their study, the researchers recruited 24 healthy young adults, with an average age of 26. Participants wore a virtual reality headset that displayed a 60-second scene of a park. In this virtual park, balls were launched from a cannon toward the participant’s head, and the task was to dodge the balls by moving the upper body to the left or right while keeping the feet still.
The speed at which the balls were launched increased in waves throughout the 60-second scene, making the task progressively more challenging. The color of each ball was a visual cue, indicating whether participants should dodge left or right according to a pre-set rule (for example, red ball = dodge right, blue ball = dodge left, or vice versa, with the rule randomly changed to prevent participants from simply memorizing a sequence).
The experiment involved four different conditions, presented in a randomized order. In the ‘Visual-Silent’ condition, participants relied only on visual cues (the ball color) and there were no sounds. In the ‘Visual-Congruent’ condition, helpful sound cues were added; if the visual cue indicated dodging right, the sound cue would also come from the right, and vice versa. In the ‘Visual-Incongruent’ condition, the sound cues were unhelpful and misleading; the sound direction was random and didn’t match the visual cue. Finally, in the ‘Dark-Congruent’ condition, the park scene was completely dark, so participants had to rely solely on the congruent sound cues to guide their dodges.
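To make the cue logic concrete, here is a minimal sketch in Python of how the sound direction for a given trial could be derived from the condition and the required dodge direction. The condition names follow the article; the function, trial structure, and random seed are illustrative assumptions, not the study’s actual software.

```python
import random

# Illustrative sketch of the cue logic described above; condition names
# follow the article, everything else is an assumption for demonstration.
def sound_side(condition, dodge_side, rng):
    """Return which side a trial's sound cue comes from (None = silence)."""
    if condition == "Visual-Silent":
        return None                           # no sound at all
    if condition in ("Visual-Congruent", "Dark-Congruent"):
        return dodge_side                     # sound matches the required dodge
    if condition == "Visual-Incongruent":
        return rng.choice(["left", "right"])  # random, potentially misleading
    raise ValueError(f"unknown condition: {condition}")

rng = random.Random(7)
for cond in ["Visual-Silent", "Visual-Congruent", "Visual-Incongruent"]:
    print(cond, "->", sound_side(cond, dodge_side="right", rng=rng))
```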
To deliver the sound cues, the researchers tested four different setups, again presented in random order. The first setup used standard headphones, where spatial audio was delivered directly through the headphones. The second setup used loudspeakers, with 16 speakers placed around the room to create a multi-channel sound environment. The third setup was ‘passthrough,’ where participants wore headphones, but the headphones were inactive and all sounds were played through the loudspeakers in the room. This condition was designed to test the effect of headphone weight alone, without active sound delivery.
The final setup was ‘room simulation,’ where participants wore headphones that simulated the sound of the loudspeaker setup. This was achieved using specialized software to replicate the acoustics of the sound lab within the headphones, effectively making the headphone sounds feel as if they were coming from the room’s speakers.
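The article does not name the software, but a common way to implement this kind of room simulation is to convolve a ‘dry’ sound with binaural room impulse responses (BRIRs) measured in the actual room, so that headphones reproduce what a given loudspeaker would sound like there. The sketch below illustrates that general idea only; the sample rate, signal, and impulse responses are placeholder assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of binaural room simulation: convolving a dry cue with
# left/right BRIRs for one speaker position so headphones reproduce that
# speaker's sound in that room. All values here are toy assumptions.
fs = 48_000                                    # sample rate (Hz), assumed
dry_cue = np.random.randn(fs // 2)             # 0.5 s placeholder cue signal

# Hypothetical BRIRs; in practice these would be loaded from measurements
# of the actual sound lab, one pair per loudspeaker position.
brir_left = np.zeros(2048);  brir_left[40] = 1.0;  brir_left[400] = 0.3
brir_right = np.zeros(2048); brir_right[55] = 0.8; brir_right[420] = 0.25

left = fftconvolve(dry_cue, brir_left)         # signal for the left ear
right = fftconvolve(dry_cue, brir_right)       # signal for the right ear
binaural = np.stack([left, right], axis=1)     # 2-channel headphone output
print(binaural.shape)
```

In a full system, BRIRs would be measured for each of the 16 speaker positions and the renderer would typically compensate for head rotation; this toy version spatializes a single static source.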
Throughout the experiment, the researchers measured participants’ reaction time (how quickly they started to move their head to dodge after a ball was launched) and their accuracy (whether they dodged in the correct direction according to the cues). Participants also completed questionnaires before, during, and after the session to assess any motion sickness they experienced in the virtual reality environment.
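The article does not detail how movement onset was computed, but a simple and commonly used approach is a velocity threshold on the head-tracking trace: reaction time is the interval from ball launch to the first frame at which lateral head speed exceeds a small threshold. The sketch below is a hypothetical illustration of that approach, not the study’s actual analysis pipeline; the threshold, sample rate, and data are assumed.

```python
import numpy as np

def reaction_time(head_x, fs, launch_idx, vel_threshold=0.05):
    """Seconds from ball launch to onset of lateral head movement.

    head_x: lateral head position (meters), one sample per tracking frame
    fs: tracking sample rate (Hz)
    launch_idx: frame index at which the ball was launched
    vel_threshold: speed (m/s) counted as movement onset (assumed value)
    """
    velocity = np.abs(np.gradient(head_x) * fs)       # lateral speed, m/s
    moving = np.flatnonzero(velocity[launch_idx:] > vel_threshold)
    return moving[0] / fs if moving.size else None    # None = no dodge found

# Toy trial: head still for 0.3 s after launch, then a steady dodge right.
fs = 90                                               # typical HMD tracking rate
t = np.arange(0, 1.0, 1 / fs)
head_x = np.where(t < 0.3, 0.0, 0.2 * (t - 0.3))      # ramp = 0.2 m/s movement
print(reaction_time(head_x, fs, launch_idx=0))        # ~0.3 s
```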
The results of the study were surprising and went against the initial expectations. Contrary to the idea that headphone weight would slow responses, participants actually reacted faster when wearing headphones compared to loudspeakers in silent conditions. However, this difference disappeared when sound cues were introduced; with sound, reaction times were similar across all sound delivery methods.
Regarding accuracy, participants were better at dodging in the correct direction when provided with congruent sound cues, regardless of whether the sounds were delivered through headphones or loudspeakers. This indicated that people can use helpful auditory information to improve their balance reactions. Importantly, participants generally ignored the misleading incongruent sound cues, maintaining their accuracy at a level similar to when there were no sounds at all.
“This study confirmed that healthy young adults can use congruent auditory stimuli to enhance accuracy and disregard incongruent auditory stimuli such that accuracy is not harmed when performing a dynamic visual choice task,” Lubetzky told PsyPost. “This was true with headphones or loudspeakers and, since this was a replication study, this was the second sample in which we found the same thing.”
Finally, when participants had to rely solely on sound in the dark, their reaction times became faster, especially with loudspeakers, suggesting they were relying more heavily on auditory cues in the absence of visual information. However, accuracy in the dark was reduced compared to when visual cues were also available, indicating that while sound helps, it doesn’t fully compensate for the lack of sight in this task.
“When we designed the study we thought that confusing sounds would make people slower and cause more mistakes, and that helpful sounds would make them faster and more accurate,” Lubetzky said. “We found that any sounds made people faster and helpful sounds made them more accurate, but distracting sounds did not interfere with performance. This finding is fascinating to me because it means that healthy young adults can use sounds to improve performance if they’re helpful but can very easily ignore them if they’re not.”
“We saw that this effect is somewhat stronger with speakers than headphones, but we’re not quite sure why. We looked at whether the weight of the headphones matters, and the answer (at least in this study) was no. We thought we could create spatialized sounds in headphones that would be similar to speakers, but that did not do it either.”
Ultimately, the researchers aim to develop balance assessment tools that can be used in various settings and to create effective interventions that harness the power of sound to help people with balance problems and hearing loss.
“Our team aims to build, test and disseminate accurate and accessible assessments of balance that include all aspects of balance and can be used in the lab, clinic and homes,” Lubetzky explained.
“For people with vestibular loss, since our work and that of others has shown the importance of sounds for these patients, we would like to continue to develop assessments that allow clinicians to measure the role of sounds in balance control, and interventions that can help sensory integration. We think technology and team science can open the door to that.”
“For people with hearing loss, the finding that sounds are important for balance is only one piece of the puzzle of why people with hearing loss are at an increased risk for falls. My lab is looking to identify all the other mechanisms that drive this relationship and find effective interventions (whether hearing aids, cochlear implants, and/or rehabilitation and exercise).”
“This pilot study was funded by a seed award from the NYU Music and Audio Research Laboratory (MARL) center,” Lubetzky added. “I am extremely lucky to work with an incredible interdisciplinary team. At NYU: Computer Science (Prof. Ken Perlin, Dr. Zhu Wang), Music Technology (Prof. Agnieszka Roginska), and Applied Statistics (Prof. Daphna Harel). At The Ear Institute of Mount Sinai: Physical Therapy and vestibular rehabilitation (Dr. Jennifer Kelly), Neurotology and Otolaryngology (Dr. Maura Cosetti), and Audiology (Katherine Scigliano).
“This specific study also involved a talented group of students from diverse disciplines working together: Liraz Arie (PT, PhD), Yi Wu (PhD student, Music Technology), Delong Lin (MA, Music Technology), and Alvaro F. Olsen (PhD student, CREATE lab at NYU).”
“The structure of the team allows us to work on clinically important problems with the latest technology and with multiple perspectives. As a PhD in Rehabilitation Sciences and the director of NYU’s PhD program in Rehabilitation Sciences, I strongly believe that this is how science should be done.”
The study, “A detailed inquiry of the differences between headphones and loudspeakers influences on dynamic postural task performance,” was authored by Anat V. Lubetzky, Yi Wu, Delong Lin, Alvaro F. Olsen, Anjali Yagnik, Daphna Harel, and Agnieszka Roginska.