A study conducted by a team from MIT offers promising results in the development of computational models that simulate the structure and function of the human auditory system. These models have potential applications in improving hearing aids, cochlear implants, and brain-machine interfaces. The study, the largest of its kind to date, used deep neural networks trained to perform auditory tasks and found that most of them produced internal representations similar to the patterns observed in the human brain when listening to the same sounds.

One of the study's chief findings concerns how these networks were trained. The team discovered that models trained on auditory input that included background noise more closely mimicked the activation patterns of the human auditory cortex. This insight could provide a pathway for training more effective models.
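
The article does not describe the training pipeline itself, but in broad strokes, noise-augmented training means mixing a background-noise clip into each training waveform at a controlled signal-to-noise ratio before the model hears it. Below is a minimal, hypothetical sketch of that step in Python; the function name, the 16 kHz clip length, and the random stand-in waveforms are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch of background-noise augmentation: mix a noise clip
# into a training clip at a target signal-to-noise ratio (in dB).
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return `signal` with `noise` added at the requested SNR."""
    # Tile or trim the noise so it matches the signal length.
    if len(noise) < len(signal):
        noise = np.tile(noise, int(np.ceil(len(signal) / len(noise))))
    noise = noise[: len(signal)]

    # Scale the noise so that 10*log10(P_signal / P_noise) == snr_db.
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Example: a 1-second clip at 16 kHz corrupted with noise at 3 dB SNR.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a real speech waveform
babble = rng.standard_normal(16000)   # stand-in for recorded background noise
noisy_input = mix_at_snr(speech, babble, snr_db=3.0)
```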

The team used nine publicly available neural network models and created fourteen new ones of their own. Together, these models performed a variety of tasks, such as recognizing words, identifying speakers, discerning environmental sounds, and identifying musical genres; some were trained on multiple tasks. Each model was presented with natural sounds that had previously been used in human fMRI experiments. The team found that the internal representations generated by these models were indeed similar to the patterns exhibited by the human brain when hearing the same sounds.
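
The article does not spell out how model representations were compared with brain activity, but a common approach in this literature, and a reasonable sketch of the kind of analysis described, is to fit a cross-validated linear map from a model layer's activations to each fMRI voxel's responses and score the prediction on held-out sounds. The Python sketch below uses entirely synthetic stand-in data; the array shapes and variable names are illustrative assumptions.

```python
# Sketch of voxel-wise encoding analysis: predict fMRI responses to a set
# of sounds from a model layer's activations to the same sounds, and score
# the fit on held-out sounds. All data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 200
layer_activations = rng.standard_normal((n_sounds, n_units))  # model features
voxel_responses = rng.standard_normal((n_sounds, n_voxels))   # fMRI responses

X_tr, X_te, Y_tr, Y_te = train_test_split(
    layer_activations, voxel_responses, test_size=0.2, random_state=0)
reg = Ridge(alpha=1.0).fit(X_tr, Y_tr)   # one linear map, all voxels at once
Y_hat = reg.predict(X_te)

# Per-voxel R^2 on held-out sounds: high values mean the layer's
# representation linearly predicts that voxel's response pattern.
ss_res = ((Y_te - Y_hat) ** 2).sum(axis=0)
ss_tot = ((Y_te - Y_te.mean(axis=0)) ** 2).sum(axis=0)
r2_per_voxel = 1.0 - ss_res / ss_tot
print(f"median held-out R^2 across voxels: {np.median(r2_per_voxel):.3f}")
```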

The study provides strong support for the idea that the human auditory cortex is hierarchically organized, with processing divided into stages that support distinct computational functions. It was also observed that models trained on different tasks were better at replicating different aspects of audition. For instance, a model trained on a speech-related task more closely resembled speech-selective areas of the brain.
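
One way such a hierarchy claim can be probed, sketched below, is to ask which model layer best predicts each cortical region: if early layers win for primary auditory cortex and deeper layers win for non-primary regions, the model's depth mirrors the cortical processing stages. The region names, layer names, and scores in this sketch are hypothetical placeholders, not results from the study.

```python
# Hypothetical layer-to-region mapping: for each cortical region, find the
# model layer whose activations best predict its responses (e.g., by the
# cross-validated R^2 from an encoding analysis like the one above).
def best_layer(layer_scores: dict[str, float]) -> str:
    """Return the layer name with the highest prediction score."""
    return max(layer_scores, key=layer_scores.get)

# Placeholder scores for two regions across three layers of one model.
region_scores = {
    "primary_auditory_cortex":     {"conv1": 0.41, "conv4": 0.33, "fc7": 0.22},
    "non_primary_auditory_cortex": {"conv1": 0.18, "conv4": 0.35, "fc7": 0.44},
}
for region, scores in region_scores.items():
    print(f"{region}: best predicted by {best_layer(scores)}")
```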

With this study, the MIT team has taken a significant step toward machine learning models that more accurately reproduce human brain responses. Such advances could not only deepen our understanding of the brain's organization but also lead to better auditory devices and brain-machine interfaces. The ability to predict brain responses and behavior with computer models opens up vast possibilities in health care and neurotechnology.

The study was funded by several sources, including the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.
