
A new study by MIT researchers shows that computational models derived from machine learning, designed to mimic the human auditory system, could significantly aid the development of hearing aids, cochlear implants, and brain-machine interfaces. In the largest study yet of deep neural networks trained to perform auditory tasks, the researchers found that these models produce internal representations similar to those observed in the human brain during the same auditory tasks.

The researchers also found that models trained on auditory input that included background noise mirrored the activation patterns of the human auditory cortex more closely. The study therefore offers useful guidance on how best to train such models. The lead authors are Greta Tuckute, an MIT graduate student, and Jenelle Feather PhD '22; the open-access paper appears in PLOS Biology.

Deep neural networks (DNNs) are computational models consisting of many layers of information-processing units that can be trained on vast volumes of data to perform specific tasks. Neuroscientists have been exploring whether these systems can also be used to understand how the human brain performs certain functions.
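As a rough illustration of what such a layered model looks like in practice, the sketch below defines a small network that maps a sound's spectrogram to a category label. The architecture, layer sizes, and task here are illustrative assumptions only, not the models used in the study.

```python
# A minimal, hypothetical sketch of a layered neural network for an
# auditory task (e.g., classifying a sound from its spectrogram).
# Layer sizes and the task itself are illustrative, not from the study.
import torch
import torch.nn as nn

class SimpleAuditoryDNN(nn.Module):
    def __init__(self, n_classes=50):
        super().__init__()
        # Stacked convolutional stages: each stage transforms the output of
        # the previous one, producing progressively more abstract internal
        # representations of the sound.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, spectrogram):
        # spectrogram: (batch, 1, n_freq_bins, n_time_frames)
        h = self.features(spectrogram)
        return self.classifier(h.flatten(1))

model = SimpleAuditoryDNN()
dummy = torch.randn(4, 1, 128, 200)   # a batch of 4 placeholder spectrograms
print(model(dummy).shape)             # torch.Size([4, 50])
```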

In 2018, Josh McDermott, an associate professor of brain and cognitive sciences at MIT, reported similarities between the representations generated by a neural network trained on auditory tasks and those seen in fMRI scans of people listening to the same sounds. Since then, McDermott's research group has sought to evaluate a broader set of models to determine whether the ability to approximate neural representations is a general feature of such networks.
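One common way to make this kind of comparison is to fit a regularized linear mapping from a model layer's activations to each fMRI voxel's responses across a set of sounds, then score how well the mapping predicts held-out data. The sketch below illustrates that general idea with placeholder data; the array sizes, regression settings, and scoring choice are assumptions, not the study's exact procedure.

```python
# Illustrative sketch: predicting fMRI voxel responses from a model layer's
# activations with cross-validated ridge regression. The data are random
# placeholders, not the study's stimuli or measurements.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_sounds, n_units, n_voxels = 165, 512, 1000
layer_activations = np.random.randn(n_sounds, n_units)   # model responses to each sound
voxel_responses   = np.random.randn(n_sounds, n_voxels)  # fMRI responses to the same sounds

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, voxel_responses, test_size=0.2, random_state=0)

# Fit one regularized linear mapping from model units to all voxels at once.
mapping = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
predicted = mapping.predict(X_test)

# Score each voxel by the correlation between predicted and measured responses;
# higher values mean the layer's representation better accounts for that voxel.
scores = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel prediction correlation: {np.median(scores):.3f}")
```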

In the new study, the researchers examined nine publicly available deep neural network models trained for auditory tasks, and they created 14 models of their own based on two different architectures. They found that the representations of models trained on more than one task, and on auditory input that included background noise, most closely resembled those found in the human brain. The results also suggest that the human auditory cortex is organized hierarchically, with processing divided into stages that carry out distinct computational functions.
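For context, training on audio that includes background noise typically means mixing clean training waveforms with noise at a controlled signal-to-noise ratio. The sketch below shows one simple way to do that; the function, SNR value, and noise source are illustrative assumptions, not the study's actual augmentation pipeline.

```python
# Illustrative sketch of adding background noise to training audio by mixing
# a clean waveform with noise at a chosen signal-to-noise ratio (SNR).
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested SNR, then add it to `signal`."""
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale factor so that signal_power / scaled_noise_power equals the target SNR.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Example: a 1-second clean tone mixed with Gaussian noise at 10 dB SNR.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = mix_at_snr(clean, np.random.randn(sr), snr_db=10.0)
```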

The study also points to a future research direction: McDermott's lab aims to use the findings to build models that more effectively reproduce human brain responses. Successful models could help advance the design and functionality of hearing aids, cochlear implants, and brain-machine interfaces. The research was funded in part by the National Institutes of Health, the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and an MIT Friends of the McGovern Institute Fellowship. McDermott says the ultimate goal is a computer model that can predict brain responses and behavior.

