
A team of researchers at the Massachusetts Institute of Technology (MIT) has been investigating computational models designed to mimic the structure and function of the human auditory system. They suggest that these models could eventually inform the design of better hearing aids, cochlear implants, and brain-machine interfaces.

In what the scientists describe as the largest study yet of deep neural networks trained on auditory tasks, they showed that most of these models generate internal representations that resemble those produced by the human brain in response to the same sounds. The study also yielded key insights into how such models should be trained to best approximate the brain's responses. For instance, models trained on auditory input that included background noise more closely matched the activation patterns seen in the human auditory cortex.
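As a rough illustration of what "training with background noise" can mean in practice, the sketch below mixes a clean waveform with noise at a target signal-to-noise ratio before it is fed to a model. This is a generic augmentation recipe, not the specific pipeline used in the study; the helper name and the 10 dB SNR are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a clean waveform with background noise at a target SNR in dB.

    Illustrative helper only, not the study's actual training pipeline.
    """
    # Tile or trim the noise so it matches the clean signal's length.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so clean_power / scaled_noise_power equals 10^(snr_db / 10).
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Example: a synthetic 1-second tone mixed with white noise at 10 dB SNR.
sr = 16000
t = np.arange(sr) / sr
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
noise = np.random.randn(sr)
noisy = mix_at_snr(clean, noise, snr_db=10.0)
```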

Deep neural networks are computational models built from many layers of information-processing units that can be trained on large volumes of data to perform specific tasks. They are widely used across many applications, and neuroscientists have begun to explore their potential as models of brain function.
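To make the term concrete, here is a minimal sketch of a deep neural network for an auditory task: a small PyTorch model that maps a spectrogram-like input to one of several sound classes. The layer sizes, input shape, and class count are arbitrary placeholders, not the architectures analyzed in the study.

```python
import torch
import torch.nn as nn

class SmallAudioNet(nn.Module):
    """A toy deep network: stacked information-processing layers mapping
    a spectrogram to class scores. Purely illustrative."""

    def __init__(self, n_mels: int = 64, n_frames: int = 100, n_classes: int = 10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),                       # (batch, n_mels * n_frames)
            nn.Linear(n_mels * n_frames, 512),  # first processing layer
            nn.ReLU(),
            nn.Linear(512, 128),                # second processing layer
            nn.ReLU(),
            nn.Linear(128, n_classes),          # class scores
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        return self.layers(spectrogram)

# One training step on random stand-in data.
model = SmallAudioNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

spec = torch.randn(8, 64, 100)        # batch of 8 fake spectrograms
labels = torch.randint(0, 10, (8,))   # fake class labels
loss = loss_fn(model(spec), labels)
loss.backward()
optimizer.step()
```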

The researchers analyzed numerous publicly available deep neural network models and also built 14 models of their own. Most were trained to perform a single task, such as recognizing words, identifying the speaker, recognizing environmental sounds, or identifying a musical genre. Two, however, were trained to perform multiple tasks simultaneously.
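One common way to train a single network on several tasks at once is a shared encoder with one output head per task, as in the sketch below. This is a generic multi-task layout that may or may not match the study's setup; the choice of tasks (word recognition and genre classification) and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskAudioNet(nn.Module):
    """Shared encoder with separate heads for two auditory tasks.
    An illustrative multi-task layout, not the study's architecture."""

    def __init__(self, input_dim: int = 6400, n_words: int = 500, n_genres: int = 20):
        super().__init__()
        self.encoder = nn.Sequential(   # shared representation used by both tasks
            nn.Linear(input_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
        )
        self.word_head = nn.Linear(256, n_words)    # word-recognition scores
        self.genre_head = nn.Linear(256, n_genres)  # genre-classification scores

    def forward(self, x: torch.Tensor):
        shared = self.encoder(x)
        return self.word_head(shared), self.genre_head(shared)

# The combined loss sums the per-task losses, so both tasks shape the shared encoder.
model = MultiTaskAudioNet()
x = torch.randn(4, 6400)
word_logits, genre_logits = model(x)
loss = (nn.functional.cross_entropy(word_logits, torch.randint(0, 500, (4,)))
        + nn.functional.cross_entropy(genre_logits, torch.randint(0, 20, (4,))))
loss.backward()
```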

The researchers found that the models' internal representations tended to resemble those generated in the human brain. The representations were most brain-like in models that had been trained on more than one task and in models trained on auditory input that included background noise.
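A standard way to quantify how well a model's internal representations match brain responses is to regress measured responses onto a layer's activations and score the predictions on held-out sounds. The article does not specify the study's exact metric, so the sketch below is only a plausible reading of "similarity"; the data shapes are stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Stand-in data: 200 sounds, a 256-unit model layer, and 50 brain measurement sites.
rng = np.random.default_rng(0)
layer_activations = rng.standard_normal((200, 256))
brain_responses = rng.standard_normal((200, 50))

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, brain_responses, test_size=0.25, random_state=0)

# Fit a ridge regression from model features to measured responses,
# then correlate predicted and measured responses at each site.
reg = Ridge(alpha=1.0).fit(X_train, y_train)
pred = reg.predict(X_test)
per_site_r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(y_test.shape[1])]
print(f"median prediction correlation: {np.median(per_site_r):.3f}")
```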

The study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages responsible for different computational functions. Models trained on different tasks reproduced different aspects of audition more effectively.

The researchers believe that models capable of accurately predicting brain responses and behavior could significantly advance neuroscience as well as related medical and technological fields. Future work at MIT will focus on building such models using the insights from this study.
