
Modern machine learning models are becoming increasingly adept at simulating the structure and function of the human auditory system, a development that could lead to improvements in devices like hearing aids, cochlear implants, and brain-machine interfaces.

A team at MIT conducted what is considered the largest study to date of deep neural networks trained for auditory tasks and found that most of these models develop internal representations that closely resemble the activity that occurs in the human brain when people listen to the same sounds. Furthermore, models trained on audio that included background noise replicated the activation patterns of the human auditory cortex more closely.
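To make the noise-training idea concrete, a common augmentation step is to mix background noise into each clean training clip at a controlled signal-to-noise ratio. The helper below is a minimal Python/PyTorch sketch of that step; the function name and formulation are assumptions for illustration, since the article does not describe the study's actual training pipeline.

```python
import torch

def add_background_noise(signal: torch.Tensor, noise: torch.Tensor,
                         snr_db: float) -> torch.Tensor:
    """Mix noise into a clean waveform at a target signal-to-noise ratio.

    Hypothetical helper illustrating noise augmentation; not the
    study's actual pipeline.
    """
    signal_power = signal.pow(2).mean()
    noise_power = noise.pow(2).mean()
    # Scale the noise so that 10 * log10(signal_power / scaled_noise_power)
    # equals the requested snr_db.
    scale = torch.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

clean = torch.randn(16000)   # stand-in for a one-second clip at 16 kHz
babble = torch.randn(16000)  # stand-in for recorded background noise
noisy = add_background_noise(clean, babble, snr_db=3.0)
```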

Deep neural networks are computational models made up of multiple layers of processing units trained on large amounts of data to perform specific tasks. Recently, researchers have begun investigating whether these models can emulate the way the human brain operates.
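As a rough illustration of that layered structure, the PyTorch sketch below stacks a few processing stages into a network that maps a raw audio waveform to word-class scores. The architecture, layer sizes, and 500-word vocabulary are invented for illustration; they are not the models evaluated in the study.

```python
import torch
import torch.nn as nn

# A toy deep network for an auditory task: each layer transforms the
# signal into a progressively more abstract representation, ending in
# a task-specific readout (here, scores over 500 hypothetical words).
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=64, stride=4),   # early stage: local acoustic features
    nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=32, stride=4),  # later stage: more abstract structure
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                      # collapse the time axis
    nn.Flatten(),
    nn.Linear(64, 500),                           # readout: one score per word class
)

waveform = torch.randn(8, 1, 16000)  # batch of 8 one-second clips at 16 kHz
logits = model(waveform)             # shape: (8, 500)
```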

In 2018, MIT team leader Josh McDermott and then-graduate student Alexander Kell found that when a neural network was trained to perform auditory tasks, such as identifying words in an audio signal, the internal representations it constructed mirrored those seen in fMRI scans of people listening to the same sounds.

In this latest study, the MIT team evaluated nine publicly available deep neural networks and trained 14 of their own. Most of these were trained on a single task, such as recognizing words or identifying the speaker, but two were trained to perform multiple tasks.

The researchers found that the networks’ internal representations often closely resembled the patterns generated in the human brain when exposed to the natural sounds used in human fMRI experiments. The models that showed the closest resemblance were those trained on multiple tasks and those trained on input that included background noise.
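The article does not spell out how resemblance was measured, but a standard approach in this literature is to fit a regularized linear regression from a model layer's activations to each voxel's fMRI responses and score the predictions on held-out sounds. The sketch below illustrates that idea under those assumptions; the study's exact metric may differ.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def voxel_prediction_score(layer_activations: np.ndarray,
                           voxel_responses: np.ndarray) -> float:
    """Correlation between predicted and measured responses of one voxel.

    layer_activations: (n_sounds, n_units) model features per sound.
    voxel_responses:   (n_sounds,) fMRI responses of one voxel.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        layer_activations, voxel_responses, test_size=0.2, random_state=0)
    reg = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
    return float(np.corrcoef(reg.predict(X_test), y_test)[0, 1])

# Synthetic demo: 200 sounds, 50 model units, one linearly predictable voxel.
rng = np.random.default_rng(0)
acts = rng.standard_normal((200, 50))
voxel = acts @ rng.standard_normal(50) + 0.5 * rng.standard_normal(200)
print(voxel_prediction_score(acts, voxel))  # close to 1.0
```

Repeating this comparison layer by layer and region by region is one way the hierarchical pattern described below can be exposed: earlier layers tend to best predict primary auditory cortex, and later layers the regions beyond it.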

The study also supports the theory that the human auditory cortex operates hierarchically, dividing processing into stages that serve different computational functions. For instance, the researchers found that representations in the earlier stages of their networks resembled those seen in the primary auditory cortex, while later stages more closely resembled activity in brain regions beyond primary auditory cortex.

These findings could be useful in revealing more about the brain’s organization and be instrumental in enhancing hearing aids, cochlear implants, and brain-machine interface devices. Eventually, the team hopes to create a computational model that can accurately predict brain responses and behavior.
