
Deep neural networks show potential as effective models of human auditory perception.

A new MIT study has found that machine-learning models that mimic the structure and function of the human auditory system could help improve the design of hearing aids, cochlear implants, and brain-machine interfaces. The research explored deep neural networks trained to perform auditory tasks and showed that these models generate internal representations that share properties with those seen in the human brain when it listens to the same sounds.

The researchers noted that models trained on auditory input that includes background noise more closely mimic the activation patterns of the human auditory cortex. Training in noise leading to better brain predictions makes sense, given that much real-world hearing takes place in noisy conditions. The team described the work as the most comprehensive comparison to date between these types of models and the auditory system.
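Training on noisy input typically means mixing background noise into clean audio at a controlled signal-to-noise ratio. The sketch below is a generic illustration of that kind of augmentation, not the study's actual pipeline; the signals here are synthetic stand-ins.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mixture has the requested signal-to-noise
    ratio in decibels, then add it to `signal`. A common augmentation
    when training audio models on noisy input."""
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    # Choose a scale so that sig_power / scaled_noise_power == 10**(snr_db / 10)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))  # stand-in "speech"
babble = rng.standard_normal(16000)                          # stand-in noise
noisy = mix_at_snr(speech, babble, snr_db=10.0)
```

Sweeping `snr_db` over a range of values during training exposes a model to both mildly and heavily degraded input.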

Deep neural networks are computational models composed of many layers of information-processing units, trained on large volumes of data to perform specific tasks. Such models can replicate human-like behavior at a scale not possible with earlier models. The MIT team trained neural networks to perform auditory tasks and found that their internal representations resembled those seen in fMRI scans of people listening to the same sounds.
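The layered architecture described above can be sketched in a few lines. The following toy network is purely illustrative, not the MIT team's model: a tiny feedforward network with random placeholder weights that processes an input feature vector and exposes each layer's internal representation, the kind of activations that would be compared against brain data.

```python
import numpy as np

def relu(x):
    """Rectified linear unit, a standard nonlinearity between layers."""
    return np.maximum(0.0, x)

class TinyAuditoryNet:
    """Minimal multilayer network. Weights are random placeholders,
    not trained parameters from any real study."""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(self, x):
        """Return the internal representation produced at every layer."""
        activations = []
        for w in self.weights:
            x = relu(x @ w)
            activations.append(x)
        return activations

# Example: a 64-dimensional spectral feature vector passed through three layers
net = TinyAuditoryNet([64, 128, 64, 32])
acts = net.forward(np.random.default_rng(1).standard_normal(64))
print([a.shape for a in acts])  # one representation per layer
```

In an actual study the input would be a sound's spectral features and the weights would come from training on a task such as word recognition.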

The study analyzed nine publicly available deep neural network models as well as 14 models built by the researchers themselves. Most of the studied models had been trained to perform a single task, such as recognizing words or identifying speakers, while two had been trained on multiple tasks. Models trained on more than one task, and on input that included background noise, produced representations most similar to those of the human brain.
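One common way such model-brain comparisons are scored is to fit a linear map from a model's activations to measured voxel responses and check how well the mapped activations predict held-out or actual responses. The sketch below illustrates that idea on toy data; the details are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

def prediction_score(model_acts, voxel_responses):
    """Fit a least-squares linear map from model activations to voxel
    responses, then return the correlation between predicted and actual
    responses. Higher values mean the model's representation better
    accounts for the brain data."""
    w, *_ = np.linalg.lstsq(model_acts, voxel_responses, rcond=None)
    predicted = model_acts @ w
    return np.corrcoef(predicted.ravel(), voxel_responses.ravel())[0, 1]

# Toy data: 100 "sounds", 32 model features, 10 "voxels".
# The voxel responses are built as a noisy linear function of the
# activations, so a good score is expected by construction.
rng = np.random.default_rng(0)
acts = rng.standard_normal((100, 32))
true_map = rng.standard_normal((32, 10))
voxels = acts @ true_map + 0.1 * rng.standard_normal((100, 10))
score = prediction_score(acts, voxels)
```

In practice such analyses also use regularization and held-out sounds to avoid overfitting; this minimal version omits both.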

The study reinforced the idea that the human auditory cortex has a hierarchical organization, in which processing is divided into stages that perform different computational functions. Models trained on different tasks replicated different aspects of audition accordingly; for instance, models trained on speech-related tasks more closely mimicked speech-selective areas.

The research could lead to computational models that predict brain responses and behavior. Beyond contributing to our understanding of brain organization, it could help in developing better hearing aids, cochlear implants, and brain-machine interfaces. The MIT lab anticipates building on this work to develop models that reproduce human brain responses more faithfully.
