A study from MIT suggests that machine learning could help improve the design of hearing aids, cochlear implants, and brain-machine interfaces. The computational models in question are designed to simulate the structure and function of the human auditory system. The research is the largest study to date of deep neural networks trained to perform auditory tasks. The results showed that the internal representations these models produce resemble the activation patterns of the human brain when people listen to the same sounds.
Understanding how best to train these models was a significant outcome of the study. The team found that models trained with background noise more closely mirrored the activation patterns observed in the human brain. The models that most accurately mimicked those patterns were the ones trained on tasks such as recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genres.
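The article doesn't reproduce the training recipe, but the core idea of noise-augmented training can be illustrated with a short sketch. The Python function below is a hypothetical example, not the authors' actual pipeline: it mixes a background-noise clip into a speech clip at a chosen signal-to-noise ratio, as would typically be done on the fly before each clip is fed to a model.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix background noise into a speech clip at a target signal-to-noise ratio.

    Both inputs are 1-D float arrays at the same sample rate; the noise clip
    is tiled or truncated to match the speech length. (Illustrative sketch,
    not the study's actual data-augmentation code.)
    """
    # Match lengths by tiling and then truncating the noise.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]

    # Scale the noise so that 10 * log10(P_speech / P_noise) equals snr_db.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # avoid division by zero
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# During training, each clip would be mixed with a randomly chosen
# background noise at a randomly sampled SNR before reaching the model.
```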
Josh McDermott, the senior author of the study, emphasizes the importance of this research. He believes the study mainly suggests that machine learning can yield better models of the brain and, more importantly, that the findings give researchers a clearer direction for making further advances.
The researchers examined nine publicly available deep neural network models that had been trained for auditory tasks and built fourteen new models of their own. Across this set, the models' internal representations were compared with activation patterns in fMRI brain scans of people listening to the same sounds.
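The article doesn't show the analysis code, but a standard way to quantify this kind of model-brain similarity in the literature is to fit a cross-validated regression from a model layer's activations to fMRI voxel responses and score each voxel by how well held-out responses are predicted. The sketch below is an assumed, simplified version of that approach; the function name and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def voxel_prediction_score(activations: np.ndarray,
                           voxel_responses: np.ndarray,
                           n_splits: int = 5) -> np.ndarray:
    """Cross-validated correlation between predicted and measured voxel responses.

    activations:     (n_sounds, n_units) hidden-layer activations, one row per sound
    voxel_responses: (n_sounds, n_voxels) fMRI responses to the same sounds
    Returns one Pearson r per voxel, averaged over folds.
    (Illustrative sketch of a common encoding-model analysis, not the study's code.)
    """
    scores = np.zeros(voxel_responses.shape[1])
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(activations):
        # Ridge regression with the penalty chosen by internal cross-validation.
        model = RidgeCV(alphas=np.logspace(-3, 3, 13))
        model.fit(activations[train], voxel_responses[train])
        pred = model.predict(activations[test])
        actual = voxel_responses[test]

        # Pearson correlation per voxel between predicted and measured responses.
        pred_c = pred - pred.mean(axis=0)
        act_c = actual - actual.mean(axis=0)
        denom = np.sqrt((pred_c ** 2).sum(axis=0) * (act_c ** 2).sum(axis=0)) + 1e-12
        scores += (pred_c * act_c).sum(axis=0) / denom
    return scores / n_splits
```

Under this scheme, a model whose representations resemble the brain's will yield higher held-out correlations, so scores can be compared across models, layers, and brain regions.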
Notably, the study supports the idea that the human auditory cortex is organized hierarchically, with processing divided into stages that support distinct computational functions. Models trained on different tasks were better at reproducing specific aspects of audition; for example, models trained on speech-related tasks more closely resembled speech-selective brain regions.
The researchers plan to use their findings to create better models, contributing not only to our understanding of brain organization but also to the development of improved hearing aids, cochlear implants, and brain-machine interfaces.
Josh McDermott described the field's ultimate goal: a computer model that can predict brain responses and behavior. Achieving it would open up numerous possibilities in neuroscience and related areas. The research was funded by various organizations, including the National Institutes of Health and the Department of Energy.