Loud and clear: The neurological secrets of hearing in noisy environments


Researchers have found that the brain processes speech differently depending on how clearly it can be heard and whether we are focusing on it. The study, which combines neural recordings and computer modeling, shows that phonetic information is encoded differently when speech is drowned out by louder voices than when it is not.

 

Columbia University scientists have found that the brain encodes speech differently based on its clarity and our focus on it. This finding, involving separate processing of glimpsed and masked speech, could improve the accuracy of brain-controlled hearing aids.

Researchers led by Dr. Nima Mesgarani of Columbia University, USA, report that the brain processes speech in a crowded room differently depending on how easy it is to hear and whether we focus on it. Recently published in the open-access journal PLOS Biology, the study uses a combination of neural recordings and computer modeling to show that when we follow speech that is drowned out by louder voices, phonetic information is encoded differently than when that speech is easy to hear. The findings could help improve hearing aids that work by isolating attended speech.

 

It can be difficult to focus on a single speaker in a crowded room, especially when other voices are louder. But amplifying all sounds equally does little to improve the ability to isolate these hard-to-hear voices, and hearing aids that attempt to amplify only the attended speech are still too inaccurate for practical use.


Example of listening to someone speaking in a noisy environment. Credit: Zuckerman Institute, Columbia University (2023) (CC-BY 4.0)

To gain a better understanding of how speech is processed in these situations, researchers at Columbia University recorded neural activity from electrodes implanted in the brains of people with epilepsy as they underwent brain surgery. Patients were asked to attend to a single voice, which was sometimes louder than a competing voice (“glimpsed”) and sometimes quieter (“masked”).
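As a rough illustration of the glimpsed/masked distinction described above (a minimal sketch only; the frame length, energy measure, and variable names are assumptions, not the study's actual analysis), each moment of the attended voice could be labeled by comparing its energy with that of the competing voice:

```python
import numpy as np

def label_glimpsed_masked(target_audio, other_audio, sr=16000, frame_ms=20):
    """Label each frame 'glimpsed' if the attended talker is louder than
    the competing talker, and 'masked' if it is quieter (illustrative only)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = min(len(target_audio), len(other_audio)) // frame_len
    labels = []
    for i in range(n_frames):
        seg = slice(i * frame_len, (i + 1) * frame_len)
        # Root-mean-square energy of each talker within this frame
        target_rms = np.sqrt(np.mean(target_audio[seg] ** 2))
        other_rms = np.sqrt(np.mean(other_audio[seg] ** 2))
        labels.append("glimpsed" if target_rms > other_rms else "masked")
    return labels
```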

 

The researchers used the neural recordings to build predictive models of brain activity. The models showed that phonetic information about glimpsed speech was encoded in both primary and secondary auditory cortex, and that encoding of the attended speech was enhanced in secondary cortex. In contrast, phonetic information about masked speech was encoded only if it belonged to the attended voice. Finally, speech encoding occurred later for masked speech than for glimpsed speech. Because glimpsed and masked phonetic information appear to be encoded separately, focusing on decoding only the masked portion of attended speech could lead to improved auditory-attention decoding systems for brain-controlled hearing aids.
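To give a sense of what a predictive encoding model of this kind can look like, the sketch below fits a simple time-lagged linear (ridge-regression) model that predicts the response of one electrode from phonetic feature vectors. The data shapes, lag count, and function names are assumptions for illustration; the study's actual models are described in the PLOS Biology paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(phonetic_features, neural_response, max_lag=30, alpha=1.0):
    """Fit a time-lagged linear encoding model (illustrative sketch).

    phonetic_features: (n_samples, n_features) array of phonetic features
    neural_response:   (n_samples,) response from one recording electrode
    max_lag:           number of stimulus time lags to include
    """
    n_samples, n_features = phonetic_features.shape
    # Build a design matrix with lagged copies of the phonetic features
    lagged = np.zeros((n_samples, n_features * max_lag))
    for lag in range(max_lag):
        lagged[lag:, lag * n_features:(lag + 1) * n_features] = \
            phonetic_features[:n_samples - lag]
    model = Ridge(alpha=alpha)
    model.fit(lagged, neural_response)
    # Correlation between predicted and recorded responses gauges encoding strength
    predicted = model.predict(lagged)
    score = np.corrcoef(predicted, neural_response)[0, 1]
    return model, score
```

In practice, comparing such prediction scores across conditions (glimpsed vs. masked, attended vs. unattended) is one common way to test where and how strongly a feature is encoded.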

Vinay Raghavan, the lead author of the study, says: “When you listen to someone in a noisy place, your brain recovers what you missed when the background noise is too loud. Your brain can also pick up bits of speech you’re not focused on, but only when the person you’re listening to is quiet in comparison.”

Reference: “Distinct neural coding of glimpsed and masked speech in multitalker situations” by Vinay S. Raghavan, James O’Sullivan, Stephan Bickel, Ashesh D. Mehta, and Nima Mesgarani, 6 June 2023, PLOS Biology.
DOI: 10.1371/journal.pbio.3002128

This work was supported by the National Institutes of Health (NIH), National Institute on Deafness and Other Communication Disorders (NIDCD) (DC014279 to NM). The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

 

