Researchers from MIT have moved closer to creating computational models that mimic the structure and function of the human auditory system. Using machine learning, they developed models that could help improve hearing aids, cochlear implants, and brain-machine interfaces. The study showed that most deep learning models trained to perform auditory tasks generated internal representations that share properties with those seen in the human brain when it processes the same sounds.
The team also discovered that models trained on auditory input that included background noise more closely approximated the activation patterns of the human auditory cortex. Machine learning lets these models simulate auditory behavior on a far larger and more comprehensive scale than traditional models. By comparing the activation patterns in the models with fMRI scans of people listening to the same input, the researchers found significant similarities, evidence that these algorithms can approximate the neural representations formed in the human brain.
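To make the noise-training finding concrete, here is a minimal sketch of the kind of data augmentation it points to: mixing background noise into a training clip at a chosen signal-to-noise ratio. The helper name mix_at_snr, the SNR value, and the synthetic clips are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested signal-to-noise ratio."""
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = signal_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return signal + scaled_noise

# Toy usage: stand-ins for a 1-second speech clip and background noise.
rng = np.random.default_rng(0)
clip = rng.normal(size=16000)
noise = rng.normal(size=16000)
noisy_clip = mix_at_snr(clip, noise, snr_db=3.0)
```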
The models were then presented with natural sounds previously used in human fMRI experiments, and their internal representations resembled those generated by the human brain. The resemblance was strongest for models trained on multiple tasks and for models trained on auditory input that included background noise.
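One common way to quantify this kind of model-brain resemblance is representational similarity analysis (RSA), sketched below. The array shapes, the sound count, and the random data are illustrative assumptions; the study's actual analysis pipeline may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_sounds = 165                                   # a set of natural sounds
layer_acts = rng.normal(size=(n_sounds, 512))    # model layer activations
voxel_resps = rng.normal(size=(n_sounds, 2000))  # fMRI voxel responses

# Build representational dissimilarity matrices (RDMs): pairwise
# distances between the responses to every pair of sounds.
model_rdm = pdist(layer_acts, metric="correlation")
brain_rdm = pdist(voxel_resps, metric="correlation")

# A higher rank correlation between the two RDMs means the model
# organizes the sounds more like the auditory cortex does.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational similarity: {rho:.3f}")
```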
Another finding was that the human auditory cortex appears to have some degree of hierarchical organization, with different processing stages supporting distinct computational functions. Correspondingly, representations generated in earlier stages of the models more closely matched those seen in the primary auditory cortex, while representations from later stages better matched other brain regions. Moreover, models trained on different tasks were better at replicating different aspects of audition; for example, models trained on a speech-related task more closely resembled speech-selective brain areas.
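A stage-by-region version of the same comparison can expose this hierarchy: score each model stage against each brain region and see where every region's best match falls. The sketch below reuses the toy RSA setup; the stage names, regions, and data are all synthetic assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_sounds = 165

# Hypothetical activations from early, middle, and late model stages.
stages = {name: rng.normal(size=(n_sounds, 256))
          for name in ["early", "middle", "late"]}

# Hypothetical voxel responses for two regions of interest.
regions = {"primary_auditory": rng.normal(size=(n_sounds, 800)),
           "non_primary": rng.normal(size=(n_sounds, 800))}

for region_name, voxels in regions.items():
    brain_rdm = pdist(voxels, metric="correlation")
    scores = {}
    for stage_name, acts in stages.items():
        model_rdm = pdist(acts, metric="correlation")
        scores[stage_name], _ = spearmanr(model_rdm, brain_rdm)
    best = max(scores, key=scores.get)
    # Under the study's finding, primary auditory cortex would align best
    # with early stages and non-primary regions with later stages.
    print(region_name, "-> best-matching stage:", best)
```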
The researchers aim to use these findings to develop models that replicate human brain responses even more accurately, which could aid the development of better hearing aids, cochlear implants, and brain-machine interfaces. Their ultimate goal is a computer model that can predict both brain responses and behavior, opening many new opportunities in the field. The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an MIT Friends of McGovern Institute Fellowship, and other sources.