
Deep neural networks show promise as models for studying human auditory perception.

A new study by researchers at the Massachusetts Institute of Technology (MIT) has brought us closer to computational models that mimic the human auditory system, models that could guide the design of better hearing aids, cochlear implants, and brain-machine interfaces.

The research, the most extensive of its kind to date, showed that most deep neural network models trained on auditory tasks develop internal representations sharing properties with the representations the human brain generates when people listen to the same sounds. The researchers also found that training these models with background noise makes them better at mimicking the activation patterns of the human auditory cortex.
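As an illustration, one common way to include background noise during training is to mix each clean clip with recorded noise at a randomly chosen signal-to-noise ratio (SNR). The sketch below shows that kind of augmentation in Python with NumPy; the function name, SNR range, and clip lengths are illustrative assumptions, not details of the MIT training pipeline.

```python
# Illustrative noise augmentation: mix a clean clip with background noise at a target SNR.
import numpy as np

def mix_with_noise(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `signal` at the requested SNR (in dB)."""
    # Tile or trim the noise so it matches the signal length.
    if len(noise) < len(signal):
        noise = np.tile(noise, int(np.ceil(len(signal) / len(noise))))
    noise = noise[: len(signal)]

    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(signal_power / scaled_noise_power) equals snr_db.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Example usage: draw a random SNR for each training clip.
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000)        # stand-in for a 1-second clip at 16 kHz
background = rng.standard_normal(48000)  # stand-in for recorded background noise
noisy_clip = mix_with_noise(clip, background, snr_db=rng.uniform(-3, 12))
```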

Deep neural networks are computational models consisting of multiple layers of information-processing units that can be trained on massive volumes of data to perform specific tasks. A major focus of MIT's research was to examine whether the ability to simulate the neural representations seen in the human auditory system is a common characteristic of these models.
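For readers unfamiliar with these models, the sketch below shows a minimal multi-layer network for a toy auditory task: classifying sounds from spectrogram patches. The layer sizes, input shape, and number of classes are assumptions chosen for illustration, not the architectures evaluated in the study.

```python
# A toy multi-stage network for sound classification from log-mel spectrogram patches.
import torch
import torch.nn as nn

class ToyAuditoryNet(nn.Module):
    def __init__(self, n_mels: int = 64, n_frames: int = 100, n_classes: int = 50):
        super().__init__()
        # Each stage transforms the previous stage's representation of the sound.
        self.stages = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_mels // 4) * (n_frames // 4), 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, n_frames)
        return self.classifier(self.stages(spectrogram))

model = ToyAuditoryNet()
logits = model(torch.randn(8, 1, 64, 100))  # 8 random stand-in "spectrograms"
print(logits.shape)                          # torch.Size([8, 50])
```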

The researchers discovered that when models trained in noise were presented with natural sounds that had been used as stimuli in human fMRI experiments, they generated internal representations more similar to those produced by the human brain. Specifically, models that had been trained on more than one task, and on auditory input that included background noise, produced better-matching representations.
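Studies of this kind commonly quantify model-brain similarity by fitting a regularized linear regression from a model stage's activations to each fMRI voxel's responses across the same sounds, then scoring prediction accuracy on held-out sounds. The sketch below illustrates that general approach; the data shapes, train/test split, and ridge penalty are illustrative assumptions rather than the study's exact analysis.

```python
# Illustrative model-to-brain comparison: predict voxel responses from layer activations.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def voxel_prediction_score(layer_activations: np.ndarray,
                           voxel_responses: np.ndarray,
                           alpha: float = 1.0) -> float:
    """Median held-out correlation between predicted and measured voxel responses.

    layer_activations: (n_sounds, n_units) activations from one model stage.
    voxel_responses:   (n_sounds, n_voxels) fMRI responses to the same sounds.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        layer_activations, voxel_responses, test_size=0.2, random_state=0)
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Correlate prediction with measurement separately for each voxel.
    corrs = [np.corrcoef(y_pred[:, v], y_test[:, v])[0, 1]
             for v in range(y_test.shape[1])]
    return float(np.median(corrs))

# Example with random stand-in data: 165 sounds, 512 model units, 300 voxels.
rng = np.random.default_rng(0)
score = voxel_prediction_score(rng.standard_normal((165, 512)),
                               rng.standard_normal((165, 300)))
```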

The study also supported the theory that the human auditory cortex has a hierarchical structure, where processing is divided into different stages that support specific computational functions. The primary auditory cortex, for example, closely resembled representations generated in the models’ earlier stages, while other regions of the brain were more closely mirrored by the representations created in the models’ later stages.
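One simple way to probe such a hierarchy is to ask, for each brain region, which model stage predicts that region's responses best. The sketch below does this with a plain least-squares fit on stand-in data; the stage and region names, the scoring method, and the data shapes are all illustrative assumptions.

```python
# Illustrative hierarchy probe: find the model stage that best predicts each brain region.
import numpy as np

def fit_score(acts: np.ndarray, resp: np.ndarray) -> float:
    """Correlation between least-squares predictions and held-out responses."""
    half = len(acts) // 2
    weights, *_ = np.linalg.lstsq(acts[:half], resp[:half], rcond=None)
    pred = acts[half:] @ weights
    return float(np.corrcoef(pred.ravel(), resp[half:].ravel())[0, 1])

def best_stage_per_region(stage_acts: dict, region_resp: dict) -> dict:
    """Map each brain region to the model stage that predicts it best."""
    return {region: max(stage_acts, key=lambda s: fit_score(stage_acts[s], resp))
            for region, resp in region_resp.items()}

# With real data, the hierarchy account predicts early stages winning for primary
# auditory cortex and later stages for non-primary regions.
rng = np.random.default_rng(0)
stages = {f"stage_{i}": rng.standard_normal((100, 64)) for i in range(1, 6)}
regions = {"primary_auditory_cortex": rng.standard_normal((100, 30)),
           "speech_selective_region": rng.standard_normal((100, 30))}
print(best_stage_per_region(stages, regions))
```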

Moreover, the researchers found that models trained to perform different tasks were better at imitating different aspects of hearing. For instance, models trained on a speech-related task more closely resembled the speech-selective areas of the brain. Given the new findings, the MIT team plans to develop models that reproduce human brain responses even more successfully, which could aid in creating better hearing aids, cochlear implants, and brain-machine interfaces.

In addition to aiding our understanding of the human auditory system, these improved models may be able to predict brain responses and behavior, an idea that Josh McDermott, senior author of the study, believes could “open a lot of doors”. The study was funded by several organizations, including the National Institutes of Health and the Department of Energy Computational Science Graduate Fellowship.
