A team from the Massachusetts Institute of Technology (MIT) has found that machine learning (ML) models can effectively mimic and understand the human auditory system, potentially helping to improve technologies such as cochlear implants, hearing aids and brain-machine interfaces.
These findings are based on the largest-ever study of deep neural networks used to perform auditory tasks. The study concluded that these ML-informed models can generate internal representations very similar to those seen in the human brain during the processing of the same sounds.
Interestingly, the study also supports the theory that models trained on auditory inputs that include background noise better emulate the activation patterns of the human auditory cortex.
Josh McDermott, a senior author of the study and Associate Professor at MIT, emphasized the study’s comprehensiveness. He suggested that the results point to the potential of ML-derived models for better understanding and mimicking the human brain, and offered insights into how these models might be improved.
Deep neural networks are increasingly used across many applications because they can handle immense volumes of data, adapt to specific tasks, and simulate the processes of biological neural systems.
The MIT research group compared the activation patterns of the models to those seen in fMRI brain scans of individuals listening to the same inputs. The models were able to approximate neural representations seen in the human brain, with models trained on multiple tasks and on inputs containing background noise showing the closest similarity.
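The article does not specify how model–brain similarity was quantified, but a common approach in this line of research is representational similarity analysis: build a dissimilarity matrix over stimuli for both the model activations and the fMRI voxel responses, then correlate the two. The sketch below is a minimal illustration of that general idea, not the study's actual method; the function names, dimensions, and synthetic data are all hypothetical.

```python
import numpy as np

def rdm(responses):
    # Representational dissimilarity matrix: 1 minus the Pearson
    # correlation between response patterns for every pair of stimuli.
    # `responses` has shape (n_stimuli, n_features).
    return 1.0 - np.corrcoef(responses)

def representational_similarity(model_acts, brain_acts):
    # Correlate the upper triangles of the two RDMs (diagonal excluded),
    # yielding a single model-brain similarity score in [-1, 1].
    iu = np.triu_indices(model_acts.shape[0], k=1)
    return np.corrcoef(rdm(model_acts)[iu], rdm(brain_acts)[iu])[0, 1]

# Synthetic example: model units and brain voxels that share a common
# latent stimulus structure, plus independent noise (purely illustrative).
rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 20, 50, 30
latent = rng.normal(size=(n_stimuli, 5))
model_acts = latent @ rng.normal(size=(5, n_units)) + 0.1 * rng.normal(size=(n_stimuli, n_units))
brain_acts = latent @ rng.normal(size=(5, n_voxels)) + 0.1 * rng.normal(size=(n_stimuli, n_voxels))

score = representational_similarity(model_acts, brain_acts)
```

Because both synthetic response sets are driven by the same latent structure, the similarity score comes out well above chance, mirroring the qualitative finding that trained models produce brain-like representations.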
The study further supports the theory of hierarchical processing, where the brain divides tasks into stages that support distinct computational functions. This suggests that models trained on different tasks can better replicate specific aspects of human hearing.
The MIT team plans to leverage these insights to develop even more accurate models of the human auditory system. Besides boosting our knowledge of the auditory system’s structure, these models could significantly improve hearing-related medical devices.
The research was funded by several organizations, including the National Institutes of Health and an Amazon Fellowship from the Science Hub.
In conclusion, the MIT study represents a significant step toward more effective modeling of the human auditory system using ML and deep neural networks. Such efforts could lead to improvements in hearing aids and brain-machine interfaces, paving the way for more effective treatments for individuals with hearing impairment.