
A new study from MIT reveals that modern computational models based on machine learning, which mimic the structure and function of the human auditory system, are moving closer to the point where they could guide the design of improved hearing aids, cochlear implants, and brain-machine interfaces.

The MIT team’s research is the most extensive study to date of deep neural networks trained to perform auditory tasks. Deep neural networks are computational models comprising multiple layers of information-processing units, capable of analysing large volumes of data. The study shows that the internal representations these networks generate share characteristics with the representations the brain generates when hearing the same sounds.

These models, built through machine learning, can perform behaviours that were unachievable with older models, and in principle they can offer hypotheses about how the human brain carries out the same tasks. A neural network model performs a task by developing patterns of activation in response to an audio input, such as a word or other sound. Those patterns can then be compared with the activation patterns observed in fMRI scans of people listening to the same input.
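To make that comparison step concrete, here is a minimal sketch of representational similarity analysis (RSA), one standard way to quantify how well a model’s activation patterns match fMRI responses to the same sounds. The data shapes, the SciPy-based implementation, and the random stand-in data are assumptions for illustration; the article does not specify the study’s actual analysis pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_acts, fmri_resp):
    """Compare a model layer's representation with fMRI responses via RSA.

    model_acts: (n_sounds, n_units) activations from one model layer.
    fmri_resp:  (n_sounds, n_voxels) fMRI responses to the same sounds.
    Returns the Spearman correlation between the two representational
    dissimilarity matrices (condensed upper triangles).
    """
    model_rdm = pdist(model_acts, metric="correlation")  # pairwise dissimilarity between sounds
    brain_rdm = pdist(fmri_resp, metric="correlation")
    rho, _ = spearmanr(model_rdm, brain_rdm)
    return rho

# Hypothetical stand-in data: 165 sounds, 512 model units, 1000 voxels.
rng = np.random.default_rng(0)
activations = rng.normal(size=(165, 512))
voxel_responses = rng.normal(size=(165, 1000))
print(rsa_score(activations, voxel_responses))
```

A higher score means the model groups sounds more like the brain does; random stand-in data, as above, should score near zero.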

In 2018, MIT associate professor Josh McDermott and then-graduate student Alexander Kell found that neural networks trained to perform auditory tasks produced internal representations resembling those seen in fMRI scans of people hearing the same inputs.

Building on this, McDermott’s research group evaluated a larger set of neural network models to determine whether approximating the neural representations seen in the human brain is a common trait of such models.

In the latest study, the researchers assessed nine publicly available deep neural network models along with 14 models of their own, based on two different architectures. Most of the models were trained to perform a single task; two were trained to perform multiple tasks.
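For a concrete picture of what a multi-task model can look like, below is a PyTorch sketch of a network with a shared trunk feeding two task-specific heads. The architecture, layer sizes, input format, and output counts are illustrative assumptions, not the study’s actual models or tasks.

```python
import torch
import torch.nn as nn

class MultiTaskAudioNet(nn.Module):
    """Shared convolutional trunk over a spectrogram-like input,
    with one output head per task. Purely illustrative."""

    def __init__(self, n_task1_classes=500, n_task2_classes=40):
        super().__init__()
        self.trunk = nn.Sequential(                      # shared representation
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 64 * 4 * 4 features
        )
        self.head1 = nn.Linear(64 * 16, n_task1_classes)  # e.g., word recognition
        self.head2 = nn.Linear(64 * 16, n_task2_classes)  # e.g., a second auditory task

    def forward(self, spectrogram):
        shared = self.trunk(spectrogram)                 # both tasks share these features
        return self.head1(shared), self.head2(shared)

net = MultiTaskAudioNet()
batch = torch.randn(8, 1, 128, 128)  # 8 spectrogram "images"
logits1, logits2 = net(batch)
print(logits1.shape, logits2.shape)  # torch.Size([8, 500]) torch.Size([8, 40])
```

Training would combine the losses from both heads, so the shared trunk has to learn features useful for every task at once.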

When presented with sound stimuli previously used in human fMRI experiments, these models produced internal representations that resembled those created by the human brain. The representations that most closely matched the brain’s came from models trained on multiple tasks and on auditory input that included background noise.
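Exposing a model to background noise during training usually means mixing noise into clean audio at a controlled signal-to-noise ratio (SNR). The helper below shows one common way to do that; the function, the SNR range, and the stand-in waveforms are assumptions for illustration, not the authors’ training recipe.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Mix a noise waveform into a signal waveform at a target SNR in dB."""
    noise = noise[: len(signal)]
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12            # avoid division by zero
    # Scale noise so 10*log10(signal_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(1)
clean = rng.normal(size=16000)    # stand-in for 1 s of speech at 16 kHz
babble = rng.normal(size=16000)   # stand-in for background noise
noisy = mix_at_snr(clean, babble, snr_db=rng.uniform(-3.0, 12.0))  # random SNR per example
```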

The study also found evidence of hierarchical organization in the human auditory cortex, where processing is divided into stages that support distinct computational functions: representations from earlier stages of the models tended to match responses in the primary auditory cortex most closely, while representations from later stages better matched responses in regions beyond the primary cortex. In addition, models trained on different tasks replicated different aspects of audition.
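One way such a hierarchy shows up in analyses like these is that different model stages best predict different brain regions. The sketch below scores how well one layer’s activations predict a set of voxel responses using cross-validated ridge regression; comparing the scores layer by layer for primary versus non-primary auditory voxels is the general idea, though the exact regression procedure used in the study is not described in this article.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def layer_predictivity(layer_acts, voxel_resp):
    """Median cross-validated R^2 for predicting each voxel's response
    from one model layer's activations (both indexed by sound)."""
    scores = []
    for v in range(voxel_resp.shape[1]):
        r2 = cross_val_score(Ridge(alpha=1.0), layer_acts,
                             voxel_resp[:, v], cv=5, scoring="r2")
        scores.append(r2.mean())
    return float(np.median(scores))

# Hypothetical stand-in data: 165 sounds, 3 layers of 256 units, 50 voxels.
rng = np.random.default_rng(2)
acts_by_layer = {f"layer{i}": rng.normal(size=(165, 256)) for i in (1, 2, 3)}
primary_voxels = rng.normal(size=(165, 50))  # stand-in for primary auditory cortex
for name, acts in acts_by_layer.items():
    print(name, layer_predictivity(acts, primary_voxels))
```

Repeating this for voxels outside the primary auditory cortex and plotting predictivity against layer depth would show whether early layers favour early cortical stages, as a hierarchy predicts.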

These findings could be used to build models that are even more effective at reproducing human brain responses, which could in turn assist the development of advanced hearing aids, cochlear implants, and brain-machine interfaces.
