A study by Massachusetts Institute of Technology (MIT) researchers has indicated that computational models that perform auditory tasks could speed up the development of improved hearing aids, cochlear implants, and brain-machine interfaces. The study, the largest yet conducted on deep neural network models trained to perform hearing-related tasks, found that most of these models mimic the internal workings of the human brain when responding to the same auditory inputs. The team also discovered that models trained with background noise mixed into their training audio generated auditory processing patterns more akin to those of the human brain.
Researchers have been exploring whether computational models, specifically deep neural networks built from multiple layers of information-processing units, can be used to mimic certain functions of the human brain. Such models can handle data volumes, and exhibit behaviours in response to stimuli, that echo elements of natural human processing in a way not previously possible.
The MIT researchers investigated the activation patterns generated by deep neural network models in response to audio inputs such as natural sounds or spoken words. These patterns could then be compared with those seen in fMRI scans of the human brain responding to the same stimuli. Previous research from 2018 had noted an approximate correspondence between model and brain activation patterns.
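The paper's exact comparison pipeline is not spelled out here, but a common way to relate model activations to fMRI data is cross-validated regularised regression: predict each voxel's response from a layer's unit activations and score the prediction on held-out sounds. The sketch below assumes that general approach; the function name and the use of scikit-learn's RidgeCV are illustrative choices, not details taken from the study.

```python
# Minimal sketch (not the authors' actual pipeline): relate one model
# layer's activations to fMRI voxel responses to the same sounds using
# cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def voxelwise_prediction_score(layer_activations, voxel_responses, n_splits=5):
    """Predict each voxel's response from model-layer activations.

    layer_activations: (n_sounds, n_units) array of unit activations.
    voxel_responses:   (n_sounds, n_voxels) array of fMRI responses.
    Returns one Pearson correlation per voxel between predicted and
    held-out measured responses.
    """
    n_voxels = voxel_responses.shape[1]
    preds = np.zeros_like(voxel_responses)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(layer_activations):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(layer_activations[train_idx], voxel_responses[train_idx])
        preds[test_idx] = model.predict(layer_activations[test_idx])
    scores = np.zeros(n_voxels)
    for v in range(n_voxels):
        scores[v] = np.corrcoef(preds[:, v], voxel_responses[:, v])[0, 1]
    return scores
```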
The latest study saw the researchers broaden their investigation to include nine publicly available models and 14 developed by the team itself. The comparison showed that models trained to recognise multiple types of sound, and those trained on audio mixed with background noise, were the most likely to mimic human brain activity.
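The study's noise-augmentation procedure is not detailed here, but a standard way to train a model "in noisy environments" is to mix background-noise recordings into each training clip at a randomly chosen signal-to-noise ratio. The following is a minimal sketch of that idea; the mix_with_noise helper and the SNR range are assumptions for illustration only.

```python
# Minimal sketch (assumed approach, not the authors' training code):
# mix background noise into training audio at a random SNR.
import numpy as np

def mix_with_noise(speech, noise, snr_db):
    """Mix a noise clip into a speech clip at the given SNR in dB."""
    noise = np.resize(noise, speech.shape)           # match lengths
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so the mixture reaches the requested SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: augment a training clip with noise at an SNR drawn from -10..10 dB.
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000)    # stand-in for one second of 16 kHz audio
noise = rng.standard_normal(16000)   # stand-in for a background-noise recording
noisy_clip = mix_with_noise(clip, noise, snr_db=rng.uniform(-10, 10))
```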
These new data support the established premise that the human auditory cortex operates in a hierarchical fashion. Through the successive stages of sound processing, the brain areas that are activated tend to mirror the stages of the computational models' responses to the same inputs, with earlier model stages aligning with earlier cortical processing and later stages with later processing. The research team's findings point towards the future development of a computer model that can predict brain responses and behaviour, with potential applications including better hearing aids, cochlear implants, and brain-machine interfaces.
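One hedged way to picture the hierarchical correspondence described above: for each brain region, find the model stage whose activations best predict its responses. If the mapping is hierarchical, primary auditory cortex should be best predicted by earlier stages and higher-order regions by later ones. The function and the example scores below are hypothetical, not results from the study.

```python
# Minimal sketch (assumed analysis): find the best-predicting model stage
# for each brain region from per-stage prediction scores, e.g. scores from
# voxelwise_prediction_score above averaged over a region's voxels.
import numpy as np

def best_stage_per_region(scores_by_stage):
    """scores_by_stage maps region name -> list of scores, one per stage."""
    return {region: int(np.argmax(stage_scores))
            for region, stage_scores in scores_by_stage.items()}

# Hypothetical per-region scores across six model stages:
example = {
    "primary auditory cortex": [0.30, 0.42, 0.38, 0.33, 0.30, 0.28],
    "non-primary auditory cortex": [0.20, 0.28, 0.33, 0.40, 0.45, 0.44],
}
print(best_stage_per_region(example))
# {'primary auditory cortex': 1, 'non-primary auditory cortex': 4}
```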