
Computational models that imitate how the human auditory system works may hold promise for developing technologies like enhanced cochlear implants, hearing aids, and brain-machine interfaces, a recent study from the Massachusetts Institute of Technology (MIT) suggests. The study focused on deep neural networks: machine-learning models loosely patterned on the basic structure of the human brain that learn from large amounts of data to perform tasks.

The MIT study found that deep neural networks trained on auditory tasks produced internal representations with properties similar to those generated by the human brain. Models trained on auditory input that included background noise mirrored the activation patterns of the human auditory cortex more closely. Previous studies by the same group of researchers had found that neural networks trained on auditory tasks produced internal representations resembling those seen in fMRI scans of people listening to sounds.

In the current study, the MIT researchers evaluated a broader set of models to determine whether most of them could mimic the way the human brain processes auditory input. They analyzed nine publicly available deep neural network models trained to perform auditory tasks, and they also built 14 models of their own using different architectures. The models were tested with the same natural sounds used in earlier human fMRI studies, and the researchers observed a similarity between the representations generated by the models and those measured in the human brain. Models trained on more than one task, and on input that included background noise, showed the closest resemblance.
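To make the model-to-brain comparison concrete: one common way to quantify how well a model's internal representations match fMRI data is to fit a regularized regression from a model layer's activations to voxel responses for the same sounds, then score held-out predictions. The sketch below is illustrative only, assuming synthetic data and a simple ridge-regression approach; the array names, shapes, and scoring choices are hypothetical and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: activations from one model layer and fMRI voxel
# responses to the same set of natural sounds (shapes are illustrative,
# not from the MIT study).
n_sounds, n_units, n_voxels = 165, 50, 100
layer_acts = rng.standard_normal((n_sounds, n_units))
# Simulated voxel responses: a linear readout of the layer plus noise.
voxel_resps = layer_acts @ rng.standard_normal((n_units, n_voxels)) * 0.1
voxel_resps += rng.standard_normal((n_sounds, n_voxels))

def ridge_predictivity(X, Y, alpha=1.0, n_train=120):
    """Fit ridge regression from model features X to voxel responses Y
    on a training split; return the mean Pearson correlation between
    predicted and actual responses on held-out sounds."""
    Xtr, Xte = X[:n_train], X[n_train:]
    Ytr, Yte = Y[:n_train], Y[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]),
                        Xtr.T @ Ytr)
    pred = Xte @ W
    # Correlate predicted vs. actual responses per voxel, then average.
    pz = (pred - pred.mean(0)) / pred.std(0)
    yz = (Yte - Yte.mean(0)) / Yte.std(0)
    return float((pz * yz).mean(0).mean())

score = ridge_predictivity(layer_acts, voxel_resps)
print(f"mean held-out voxel correlation: {score:.2f}")
```

A higher held-out correlation indicates that the layer's representation linearly predicts the brain responses well; repeating this per layer and per brain region is one way the kind of model-brain similarity described above can be scored.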

The study also found that the human auditory cortex appears to be organized hierarchically, processing information in stages. Models trained for particular tasks better replicated specific aspects of hearing: for example, models trained on a speech-related task more closely matched speech-selective brain areas. The researchers plan to use these findings to develop models that mimic human brain responses even more closely, which could eventually aid the improvement of devices like hearing aids, cochlear implants, and brain-machine interfaces. The ultimate goal is a computational model that accurately predicts brain responses and behavior.

The study was funded by several organizations, including the National Institutes of Health, the American Association of University Women, and Amazon. It was published in PLOS Biology.
