Scientists at MIT have made significant progress in developing computational models that emulate the human auditory system, work that could prove pivotal in improving hearing aids, cochlear implants, and brain-machine interfaces. The researchers used deep neural networks, a type of artificial intelligence (AI) loosely modelled on the brain, to conduct the most extensive study yet of such models trained on auditory tasks. Their findings revealed that many modern models form internal representations that closely mimic those the human brain generates when it interprets sounds. They also discovered that models trained on auditory input that included background noise were more successful at replicating how the human auditory cortex processes sounds.
According to Josh McDermott, professor of brain and cognitive sciences at MIT and senior author of the study, these machine-learning-derived models are a step in the right direction, providing valuable insight into how to get closer to an accurate model of the human brain.
Deep neural networks are widely used in many applications, and because they can perform tasks at an unprecedented scale, researchers have been exploring their potential to simulate human brain activity. The MIT team compared the internal representations these models generate in response to different sounds with the activity patterns observed in brain scans of people listening to the same stimuli.
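One common way to make such a comparison is to fit a linear mapping from a model's activations to measured brain responses and then score how well held-out responses are predicted. The sketch below illustrates that idea with placeholder data and a ridge regression; the array sizes, variable names, and random data are assumptions for illustration, not details from the study.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical data: activations from one model stage and measured brain
# responses, both for the same set of natural sounds.
n_sounds, n_units, n_voxels = 165, 512, 1000
rng = np.random.default_rng(0)
model_activations = rng.standard_normal((n_sounds, n_units))
voxel_responses = rng.standard_normal((n_sounds, n_voxels))

# Fit a cross-validated linear mapping from model features to each voxel,
# then test how well responses to held-out sounds are predicted.
X_train, X_test, y_train, y_test = train_test_split(
    model_activations, voxel_responses, test_size=0.2, random_state=0
)
regression = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X_train, y_train)
predictions = regression.predict(X_test)

# One correlation per voxel between predicted and measured responses;
# the median across voxels summarises how "brain-like" this model stage is.
per_voxel_r = [
    np.corrcoef(predictions[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)
]
print(f"median prediction correlation: {np.median(per_voxel_r):.3f}")
```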
For the study, the researchers analysed several deep neural network models that had been trained for auditory tasks and also built new models with different architectures. They found that models trained on more tasks, and on stimuli that included background noise, matched brain responses more closely. This training regime mirrors real-world listening, in which people routinely hear sounds against a backdrop of noise.
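In practice, training on noisy input can be as simple as mixing a background-noise recording into each clean training clip at a chosen signal-to-noise ratio. The snippet below is a minimal sketch of that kind of augmentation; the sampling rate, clip lengths, and SNR range are illustrative assumptions rather than parameters from the paper.

```python
import numpy as np

def add_background_noise(clean, noise, snr_db):
    """Mix a noise clip into a clean waveform at a chosen signal-to-noise ratio."""
    # Match lengths by tiling or truncating the noise recording.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    # Scale the noise so the clean-to-noise power ratio equals snr_db.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Example: mix a clip with noise at a randomly drawn SNR for each training example.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a 1-second clip at 16 kHz
noise = rng.standard_normal(8000)     # stand-in for a background-noise recording
noisy_input = add_background_noise(speech, noise, snr_db=rng.uniform(-3, 12))
```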
Moreover, the study supports the idea that the human auditory system processes sound hierarchically, with distinct processing stages supporting distinct functions. The researchers found that the neural network models replicated this hierarchical organisation: representations in earlier model stages most closely resembled those of the primary auditory cortex, while later stages better matched regions beyond the primary cortex.
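One way to picture how such a correspondence might be read off: compute a brain-prediction score for every model stage and every brain region, then check which stage predicts each region best. The stage names and scores below are made up for the example, not results from the study.

```python
import numpy as np

# Hypothetical per-stage prediction scores (e.g. median correlations from a
# regression analysis), one row per model stage, one column per brain region.
stages = ["conv1", "conv3", "conv5", "fc1", "fc2"]
regions = ["primary auditory cortex", "non-primary auditory cortex"]
scores = np.array([
    [0.42, 0.21],
    [0.51, 0.30],
    [0.47, 0.41],
    [0.35, 0.48],
    [0.28, 0.45],
])

# A hierarchical correspondence would show earlier stages best predicting
# primary cortex and later stages best predicting regions beyond it.
for j, region in enumerate(regions):
    best = stages[int(np.argmax(scores[:, j]))]
    print(f"{region}: best-predicting stage = {best}")
```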
The team also discovered that models trained on different tasks were better at reproducing different aspects of audition. For instance, models trained on speech-related tasks more accurately replicated brain responses to speech.
Looking ahead, the researchers aim to develop models that predict human brain responses even more accurately. Beyond deepening knowledge of how the brain is organised, such models could also contribute to more advanced hearing aids, cochlear implants, and brain-machine interfaces.
McDermott envisages a computer model that can predict brain responses and behaviour, and believes achieving this goal could open numerous opportunities in his field. The research was supported by the National Institutes of Health, along with other fellowships and grants for brain-science research.