
MIT researchers have developed computational models, derived from deep neural networks, that mimic the structure and function of the human auditory system. The work marks a significant step toward understanding how the human brain processes sound, and it could inform the design of better hearing aids, cochlear implants, and brain-machine interfaces. Deep neural networks are computational models that neuroscientists are increasingly using to investigate how the brain carries out particular tasks.

In the study, the team analyzed nine publicly available deep neural network models and built 14 more of their own, based on two different architectures, with each model trained to perform a single auditory task. They found that the models trained on more than one task, and on auditory input that included background noise, most closely mimicked the brain’s response patterns.
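To make the training setup concrete, the sketch below shows one plausible way to train a small network on a single auditory task while mixing background noise into its inputs. Everything here, including the network architecture, the cochleagram-like input shape, the word-recognition task, and the synthetic data, is an illustrative assumption rather than the study’s actual code.

```python
# Hypothetical sketch: training a small convolutional network on a
# word-recognition task from noisy cochleagram-like inputs. Shapes,
# names, and data are illustrative assumptions, not the study's code.
import torch
import torch.nn as nn

N_CLASSES = 10                   # assumed number of word labels
BATCH, FREQ, TIME = 8, 64, 128   # assumed cochleagram dimensions

class AudioNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def add_background_noise(clean: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Mix Gaussian noise into the input at a target signal-to-noise ratio."""
    signal_power = clean.pow(2).mean()
    noise = torch.randn_like(clean)
    noise_power = noise.pow(2).mean()
    scale = torch.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

model = AudioNet(N_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in data; real training would use labeled speech clips.
clean_batch = torch.randn(BATCH, 1, FREQ, TIME)
labels = torch.randint(0, N_CLASSES, (BATCH,))

# One training step on noise-corrupted inputs.
noisy_batch = add_background_noise(clean_batch, snr_db=0.0)
loss = loss_fn(model(noisy_batch), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Training on inputs degraded over a range of signal-to-noise ratios is a standard way to push a model toward the kind of noise robustness the study found brought models closer to brain-like behavior.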

The research supports the idea that the human auditory cortex is hierarchically organized, with processing divided into stages that support distinct computational functions. The researchers found that representations generated in the early stages of a model most closely resembled those seen in the primary auditory cortex, while representations generated in later stages corresponded more closely to those in regions beyond the primary cortex. Models trained on different tasks were better at replicating different aspects of audition.
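A common way to quantify this stage-by-stage correspondence is to fit a regularized linear mapping from each model layer’s activations to measured brain responses and score how well it predicts held-out data. The sketch below illustrates that idea with synthetic stand-in data; the layer names, shapes, and the use of ridge regression are assumptions for illustration, not a claim about the study’s exact analysis.

```python
# Illustrative sketch of layer-wise model-brain comparison: fit a ridge
# regression from a layer's activations to voxel responses and score
# cross-validated prediction accuracy per layer. Data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_sounds, n_voxels = 165, 200   # assumed: sounds presented, fMRI voxels

# Pretend activations from an early and a late model stage.
layer_activations = {
    "early_conv": rng.standard_normal((n_sounds, 512)),
    "late_fc":    rng.standard_normal((n_sounds, 512)),
}
voxel_responses = rng.standard_normal((n_sounds, n_voxels))

for name, acts in layer_activations.items():
    # Held-out predictions of every voxel's response from this layer.
    preds = cross_val_predict(Ridge(alpha=1.0), acts, voxel_responses, cv=5)
    # Median across voxels of the correlation between predicted and
    # measured responses; higher means the layer better explains the data.
    r = [np.corrcoef(preds[:, v], voxel_responses[:, v])[0, 1]
         for v in range(n_voxels)]
    print(f"{name}: median voxel predictivity r = {np.median(r):.3f}")
```

Under this kind of analysis, a hierarchical correspondence shows up as early layers scoring best on primary auditory cortex voxels and later layers scoring best on voxels beyond it.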

The team plans to use the findings to develop models that reproduce human brain responses more faithfully. They believe that a computer model capable of predicting brain responses and behavior could open many doors, and the work could ultimately contribute to better hearing aids, cochlear implants, and brain-machine interfaces.
