Deep neural networks show promising signs of being able to model human hearing.

In the largest study yet of deep neural networks trained to perform auditory tasks, MIT researchers found that these models generate internal representations resembling those of the human brain when exposed to the same sounds. Neural networks are computational models consisting of many layers of information-processing units that can be trained on large amounts of data to perform particular tasks, and they are increasingly used to model the function of the human brain. The researchers found that models trained on sound mixed with background noise more closely imitate the activation patterns of the human auditory cortex. The study also supports the idea that the human auditory cortex is organized hierarchically, with processing divided into stages that perform distinct computations. Models like these could help in the design of better hearing aids, cochlear implants, and brain-machine interfaces, and could help scientists understand how the brain is organized.
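To make the description above concrete, here is a minimal sketch, not any of the study's actual models, of what such a system looks like: a hypothetical two-stage convolutional network for an auditory classification task over spectrogram input, with background noise mixed into the training signal. The layer sizes, the task, and the noise level are all assumptions for illustration.

```python
# A minimal sketch (not the study's actual models) of a multi-layer
# network for an auditory task, written in PyTorch. Architecture,
# layer sizes, and task are illustrative assumptions.
import torch
import torch.nn as nn

class AudioNet(nn.Module):
    def __init__(self, n_classes: int = 100):
        super().__init__()
        # Stacked layers of information-processing units: each stage
        # transforms the previous stage's output, loosely mirroring
        # hierarchical, staged processing in the auditory cortex.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # early stage
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # later stage
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, freq_bins, time_frames)
        h = self.features(spectrogram).flatten(1)
        return self.classifier(h)

# Training on noisy input: mixing background noise into the clean
# signal is one simple way to expose a model to noisy sound.
model = AudioNet()
clean = torch.randn(8, 1, 128, 200)    # stand-in for real spectrograms
noise = 0.3 * torch.randn_like(clean)  # additive background noise
logits = model(clean + noise)
```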
The MIT study examined nine publicly available deep neural network models trained to perform auditory tasks, along with 14 models based on different architectures that the researchers developed themselves. Most of these models were trained to carry out a single task, such as recognizing words, identifying the speaker, recognizing environmental sounds, or identifying musical genre, while two were trained to perform multiple tasks. When the researchers presented these models with the same natural sounds that had been used as stimuli in human fMRI experiments, the representations generated by the models resembled those generated by the human brain. Moreover, models trained on different tasks were better at imitating different aspects of audition.
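The article does not detail how the model-to-brain comparison was carried out, but one common approach in studies of this kind is to fit a regularized linear regression from a model layer's activations to each fMRI voxel's response to the same sounds, then score how well the model predicts held-out responses. The sketch below illustrates that approach; the array shapes, the use of scikit-learn's Ridge, and the regularization strength are illustrative assumptions, not details from the study.

```python
# A sketch of one common way to compare model and brain representations:
# predict each fMRI voxel's response from a model layer's activations
# and evaluate on held-out sounds. Random data stands in for real
# activations and recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_sounds, n_units, n_voxels = 200, 64, 500
activations = np.random.randn(n_sounds, n_units)  # model layer outputs
voxels = np.random.randn(n_sounds, n_voxels)      # fMRI responses

X_tr, X_te, y_tr, y_te = train_test_split(
    activations, voxels, test_size=0.2, random_state=0
)
reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = reg.predict(X_te)

# Per-voxel correlation between predicted and measured responses:
# higher values mean the model's representation better matches
# that voxel's activity.
scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1]
          for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(scores):.3f}")
```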
The research team now plans to use these findings to develop models that reproduce human brain responses even more faithfully. A computational model that can predict brain responses, the researchers believe, could open many doors.
