
Deep neural networks show promise as models of the human auditory system.

A new MIT study marks a step toward computational models that mimic the human auditory system, which could improve the design of hearing aids, cochlear implants, and brain-machine interfaces. These models grow out of advances in machine learning. The study found that the internal representations generated by deep neural networks often mirror those of the human brain when both are exposed to the same sounds.

Models trained on audio that included background noise showed the closest match to activation patterns in the human auditory cortex. As Josh McDermott, the senior author of the study, puts it, “The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain.”

Deep neural networks are information-processing models that can be trained on vast amounts of data to perform specific tasks. What sets them apart is their ability to perform such tasks at a scale that was not previously attainable, which has sparked interest in whether they might also mirror how the brain processes sound.
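As a concrete illustration, the sketch below shows what a small deep neural network for one such auditory task might look like. It is a generic, hypothetical example written in PyTorch; the `WordRecognizer` name, the layer sizes, and the spectrogram input shape are illustrative assumptions, not the architecture used in the study.

```python
# A minimal, hypothetical deep neural network for an auditory task:
# classifying which word was spoken, given a log-mel spectrogram.
# Layer sizes and input shape are illustrative, not from the MIT study.
import torch
import torch.nn as nn

class WordRecognizer(nn.Module):
    def __init__(self, n_words: int = 100):
        super().__init__()
        # Stacked convolutions transform the spectrogram into
        # increasingly abstract internal representations.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_words)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mel_bins, n_time_frames)
        h = self.features(spectrogram).flatten(1)
        return self.classifier(h)  # logits over the word vocabulary

model = WordRecognizer(n_words=100)
dummy_input = torch.randn(8, 1, 64, 200)   # batch of 8 placeholder spectrograms
logits = model(dummy_input)                # shape: (8, 100)
```

The activations produced at each intermediate layer are the "internal representations" that researchers compare with brain responses.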

This line of research began in 2018, when McDermott and Alexander Kell reported similarities between the internal representations of their neural network and fMRI scans of people listening to the same sounds. In the new work, the researchers evaluated a much larger set of models to test whether this correspondence is a general feature of such models.

The study analyzed nine publicly available deep neural network models trained to perform auditory tasks, along with 14 additional models the researchers built themselves. Most of these models were trained on a single task, such as recognizing words, identifying the speaker, or classifying musical genre, while two were trained on multiple tasks.
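To make the single-task versus multi-task distinction concrete, here is a hedged sketch of a multi-task architecture: a shared backbone feeds separate output heads for word recognition, speaker identification, and genre classification. The head names and layer sizes are assumptions for illustration, not the specific models evaluated in the study.

```python
# A hypothetical multi-task auditory model: one shared "backbone" produces a
# common representation, and separate heads handle word recognition, speaker
# identification, and genre classification. Sizes are illustrative only.
import torch
import torch.nn as nn

class MultiTaskAuditoryModel(nn.Module):
    def __init__(self, n_words=100, n_speakers=50, n_genres=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output head per task, all reading the shared representation.
        self.word_head = nn.Linear(64, n_words)
        self.speaker_head = nn.Linear(64, n_speakers)
        self.genre_head = nn.Linear(64, n_genres)

    def forward(self, spectrogram):
        shared = self.backbone(spectrogram)
        return {
            "word": self.word_head(shared),
            "speaker": self.speaker_head(shared),
            "genre": self.genre_head(shared),
        }

outputs = MultiTaskAuditoryModel()(torch.randn(4, 1, 64, 200))
print({task: logits.shape for task, logits in outputs.items()})
```

A single-task model is essentially the same backbone with one head; in the multi-task case, the same internal representation has to support several tasks at once.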

When the team presented these models with the natural sounds used in human fMRI experiments, the models’ internal representations tended to resemble those of the human brain. Models trained on a specific task better replicated the corresponding aspects of human audition; for example, models trained on speech-related tasks most closely resembled the brain’s speech-selective regions.
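One widely used way to quantify this kind of resemblance is to ask how well a model stage’s activations can linearly predict each fMRI voxel’s response across a set of sounds, using cross-validated regularized regression. The sketch below illustrates that general approach with random placeholder data; the array sizes and the ridge-regression setup are assumptions, not necessarily the exact analysis pipeline used in the study.

```python
# Sketch of one common way to compare model activations with fMRI data:
# predict each voxel's response to a set of sounds from a model layer's
# activations via cross-validated ridge regression. Placeholder data only.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 300             # illustrative sizes
layer_acts = rng.standard_normal((n_sounds, n_units))   # model activations
voxel_resp = rng.standard_normal((n_sounds, n_voxels))  # fMRI responses

def voxel_prediction_scores(acts, resp, n_splits=5):
    """Cross-validated correlation between predicted and measured voxel
    responses, one score per voxel (median over folds)."""
    scores = np.zeros((n_splits, resp.shape[1]))
    folds = KFold(n_splits, shuffle=True, random_state=0).split(acts)
    for i, (train, test) in enumerate(folds):
        reg = RidgeCV(alphas=np.logspace(-2, 4, 7))
        reg.fit(acts[train], resp[train])
        pred = reg.predict(acts[test])
        # Correlate predicted vs. measured response across held-out sounds.
        for v in range(resp.shape[1]):
            scores[i, v] = np.corrcoef(pred[:, v], resp[test, v])[0, 1]
    return np.nanmedian(scores, axis=0)

scores = voxel_prediction_scores(layer_acts, voxel_resp)
print(f"median voxel prediction score: {np.median(scores):.3f}")
```

With real data, a layer whose activations predict a brain region’s responses well is said to resemble that region’s representation.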

The new research also reinforces the idea that the human auditory cortex is organized hierarchically, with processing divided into distinct computational stages. Representations from earlier model stages most closely resembled those of the primary auditory cortex, while later stages better matched regions beyond it.
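Given prediction scores like those in the previous sketch for every model stage and brain region, the hierarchical pattern can be summarized by asking which stage predicts each region best. The layer names, region labels, and numbers below are made-up placeholders used only to show that bookkeeping step.

```python
# Summarizing a hierarchical correspondence: for each brain region, find the
# model stage that predicts it best. All values here are hypothetical.
scores = {
    "conv1": {"primary_auditory_cortex": 0.41, "non_primary_cortex": 0.28},
    "conv3": {"primary_auditory_cortex": 0.38, "non_primary_cortex": 0.35},
    "conv5": {"primary_auditory_cortex": 0.30, "non_primary_cortex": 0.44},
}

for region in ["primary_auditory_cortex", "non_primary_cortex"]:
    best_layer = max(scores, key=lambda layer: scores[layer][region])
    print(f"{region}: best predicted by {best_layer} "
          f"(score {scores[best_layer][region]:.2f})")
# In the pattern described above, earlier stages would win for primary
# auditory cortex and later stages for regions beyond it.
```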

McDermott’s team is now using these findings to build models that replicate human brain responses even more accurately. The ultimate goal is a computational model that can predict both brain responses and behavior; success there could unlock advances in hearing aids, cochlear implants, and brain-machine interfaces.
