New research uncovers brain hierarchies in music perception

Ever heard a snippet of a song and instantly known what comes next? Or picked up the rhythm of a chorus after just a few notes? New research from the Center for Music in the Brain at Aarhus University and the Centre for Eudaimonia and Human Flourishing at the University of Oxford has found that our brains process music through a specific hierarchical activation of several regions. The findings, published in Nature Communications, provide new insights into the neural mechanisms underlying our ability to anticipate and identify familiar melodies.

While previous research has established the hierarchical organization of auditory perception, it has mostly focused on elementary auditory stimuli and automatic predictive processes. However, much less is known about how this information integrates with complex cognitive functions, such as consciously recognizing and predicting sequences over time. By investigating these mechanisms, the researchers aimed to uncover new insights into how our brains handle complex auditory tasks.

“My interest in this topic began during my multidisciplinary education. As a child, I was passionate about both science and football, but I eventually dedicated myself to studying classical guitar in depth. Between the ages of 18 and 22, I performed in several concerts and taught guitar. However, I realized that my childhood passion for science was calling me back,” said study author Leonardo Bonetti (@LeonardoBo92), an associate professor at Aarhus University and the University of Oxford.

“I transitioned first to studying psychology and then moved into neuroscience, with a particular interest in analytical methods. During my studies, I discovered that music could serve as a powerful tool to explore certain features of the brain that are challenging to understand with non-musical stimuli. This is because music consists of a series of hierarchical sounds arranged over time, making it an excellent means to investigate how the brain processes information consciously over extended periods.”

The study involved 83 participants between the ages of 19 and 63, all of whom had normal hearing and were predominantly university-educated. Participants were first introduced to a short musical piece, specifically the first four bars of Johann Sebastian Bach’s Prelude No. 2 in C Minor, BWV 847. They listened to this piece twice and were asked to memorize it.

Following this memorization phase, the participants completed an auditory recognition task while their brain activity was recorded using magnetoencephalography (MEG). MEG is a non-invasive imaging technique that captures the magnetic fields produced by neural activity, offering millisecond-level temporal resolution together with good spatial localization of cortical activity.
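For readers curious about what this kind of recording looks like in practice, the sketch below shows a typical way MEG data are cut into stimulus-locked epochs using the open-source MNE-Python library. The file name, trigger codes, and filter settings are illustrative assumptions, not parameters reported in the study.

```python
# A minimal sketch of standard MEG preprocessing with MNE-Python.
# The file name, event codes, and filter band are hypothetical placeholders,
# not details taken from the published study.
import mne

# Load a raw MEG recording (FIF is the native format of many MEG systems)
raw = mne.io.read_raw_fif("sub-01_task-recognition_meg.fif", preload=True)

# Band-pass filter to remove slow drifts and high-frequency noise
raw.filter(l_freq=0.1, h_freq=40.0)

# Find stimulus triggers recorded on the stimulus channel
events = mne.find_events(raw, stim_channel="STI 014")

# Cut the continuous data into epochs time-locked to each tone sequence
event_id = {"memorized": 1, "varied": 2}  # hypothetical trigger codes
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=2.5, baseline=(None, 0), preload=True)

# Average trials within each condition to obtain evoked responses
evoked_memorized = epochs["memorized"].average()
evoked_varied = epochs["varied"].average()
```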

The recognition task consisted of 135 five-tone musical sequences, some taken directly from the memorized piece and others systematically varied. These variations were introduced at different points in the sequence to observe how the brain responds to changes in familiar patterns.
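As a rough illustration of how such a stimulus set could be assembled, the short Python sketch below builds five-tone sequences from an "original" melody and introduces a variation at a chosen position. The note values and variation rule are invented for illustration; they are not the actual Bach-derived materials or the study's stimulus-generation procedure.

```python
import random

# Hypothetical five-tone excerpt encoded as MIDI note numbers (illustrative only;
# not the actual sequences extracted from Bach's Prelude No. 2 in C minor).
original = [60, 63, 67, 63, 60]

def make_varied_sequence(seq, change_from):
    """Return a copy of `seq` whose tones differ from the original
    starting at index `change_from` (a toy variation rule)."""
    varied = list(seq)
    for i in range(change_from, len(varied)):
        varied[i] += random.choice([-2, -1, 1, 2])  # shift by a step or two
    return varied

# Build a small stimulus set: some sequences identical, some varied at different points
stimuli = []
for trial in range(10):
    if trial % 2 == 0:
        stimuli.append(("memorized", list(original)))
    else:
        point = random.randint(1, 4)  # variation introduced at tone 2-5
        stimuli.append(("varied", make_varied_sequence(original, point)))

for label, seq in stimuli:
    print(label, seq)
```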

Bonetti and his colleagues found that when participants recognized the original memorized sequences, their brain activity followed a specific hierarchical pattern. This pattern began in the auditory cortex, the region responsible for processing basic sound information, and progressed to the hippocampus and cingulate gyrus, areas associated with memory and cognitive evaluation.

When variations were introduced into the sequences, the brain generated prediction errors. These errors started in the auditory cortex and then spread to the hippocampus, anterior cingulate gyrus, and ventromedial prefrontal cortex. Notably, the anterior cingulate gyrus and ventromedial prefrontal cortex exhibited their strongest responses when the variations were introduced.
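In predictive-coding terms, a "prediction error" is simply the mismatch between what the brain expects to hear next and what actually arrives. The toy sketch below, with made-up pitch values rather than the study's model, illustrates how such an error signal stays flat for a memorized sequence but grows once a varied tone departs from the template.

```python
import numpy as np

# Memorized template: expected pitches for the five-tone sequence (toy values)
predicted = np.array([60, 63, 67, 63, 60], dtype=float)

# Two incoming sequences: one identical, one varied from the third tone onward
heard_memorized = np.array([60, 63, 67, 63, 60], dtype=float)
heard_varied = np.array([60, 63, 65, 61, 58], dtype=float)

def prediction_error(predicted, heard):
    """Tone-by-tone mismatch between expectation and input (absolute difference)."""
    return np.abs(heard - predicted)

print("memorized:", prediction_error(predicted, heard_memorized))  # all zeros
print("varied:   ", prediction_error(predicted, heard_varied))     # error appears at tone 3
```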

The study also uncovered a consistent brain hierarchy characterized by feedforward and feedback connections: information flowed forward from the auditory cortices to the hippocampus and cingulate gyrus, while simultaneous feedback connections ran in the opposite direction.
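One intuitive way to think about feedforward versus feedback influence is to ask which region's activity leads the other in time. The sketch below uses a simple lagged cross-correlation on simulated signals to show that idea; it is a conceptual toy under invented assumptions, not the connectivity analysis the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lag = 5  # toy assumption: the "auditory" signal leads the "hippocampal" signal by 5 samples

# Simulate an "auditory cortex" signal and a delayed, noisier copy as "hippocampus"
auditory = rng.standard_normal(n)
hippocampus = np.roll(auditory, lag) + 0.5 * rng.standard_normal(n)

def lagged_correlation(x, y, max_lag=20):
    """Correlation of x with y shifted by each lag; the peak lag hints at which signal leads."""
    lags = range(-max_lag, max_lag + 1)
    return {k: np.corrcoef(x[max_lag:-max_lag],
                           np.roll(y, -k)[max_lag:-max_lag])[0, 1] for k in lags}

corr = lagged_correlation(auditory, hippocampus)
best_lag = max(corr, key=corr.get)
print(f"peak correlation at lag {best_lag} samples")  # a positive lag: auditory leads
```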

This hierarchical organization was consistent for both previously memorized and varied sequences, although the strength and timing of the brain responses varied. This suggests that while the overall structure of brain processing remains stable, the dynamics change depending on whether the sequence is familiar or novel.

“Our study shows that the brain processes music (and information over time) by activating several brain regions in a specific, hierarchical order,” Bonetti told PsyPost. “Initially, sensory regions like the auditory cortex handle basic sound features. Then, this information is passed to a larger network of regions that arguably analyze the sounds more deeply, including the relationships between them (such as musical intervals). This process helps the brain determine if the sequence of sounds is familiar or new.”

“This study not only explains how we perceive music but also provides insights into how the brain processes and recognizes information over time. On a practical level, future research could focus on studying this phenomenon in aging, both healthy and pathological (like dementia). By using music, advanced neuroscientific tools, and analytical methods, we might gain further understanding of dementia and memory disorders.”

Bonetti said the long-term goals of this research are to develop dementia screening tools based on brain responses to music and to enhance data collection methods by integrating MEG with intracranial recordings for a more comprehensive understanding of music memory mechanisms.

“By studying aging and dementia over time, I aim to develop screening tools based on brain responses during music recognition,” he explained. “These tools could predict the risk of older adults developing dementia.”

“Second, I want to expand our data collection methods. Currently, we use magnetoencephalography (MEG), which is a great non-invasive tool but lacks the ability to focus deeply within the brain. In the future, I plan to integrate MEG with intracranial recordings from electrodes implanted in epileptic patients. This combination will help us understand the brain mechanisms involved in music memory across a wider range of time and spatial scales.”

“I wish to thank very much the several foundations which are supporting our work, in particular Lundbeck Foundation, Carlsberg Foundation, the Danish National Research Foundation and the Linacre College of the University of Oxford,” Bonetti added.

The study, “Spatiotemporal brain hierarchies of auditory memory recognition and predictive coding,” was authored by L. Bonetti, G. Fernández-Rubio, F. Carlomagno, M. Dietz, D. Pantazis, P. Vuust, and M. L. Kringelbach.