
A study from the Massachusetts Institute of Technology (MIT) has advanced the development of computational models based on the structure and function of the human auditory system. The findings suggest that these machine-learning-derived models could be used to improve hearing aids, cochlear implants and brain-machine interfaces.

The study is the largest to date of deep neural networks trained to perform auditory tasks. It shows that such models generate internal representations similar to those the human brain produces when listening to the same sounds. The researchers also found that models trained on auditory input with background noise more closely matched the activation patterns of the human auditory cortex.

A distinctive aspect of the study is its broad comparison of these models with the human auditory system, suggesting that machine-learning-derived models could offer a better understanding of how the brain functions and is organised. The study indicates that the most effective models are those trained on more than one task and on auditory input that includes background noise.

Deep neural network models, now used in a wide variety of applications, consist of multiple layers of information-processing units that are trained on large volumes of data to perform specific tasks, and they may reflect how the human brain carries out comparable tasks. Notably, these machine-learning-based models can mediate behaviours at a scale that was not possible with earlier types of models.
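As a rough illustration of what "multiple layers of information-processing units" means in practice, the sketch below shows a small layered network with a task-specific readout in PyTorch. The layer sizes, input format (a time-frequency "cochleagram") and class count are assumptions for illustration, not the architectures used in the study.

```python
import torch
import torch.nn as nn

# Illustrative layered model: a stack of processing units (convolutions)
# followed by a task-specific readout, e.g. word recognition over N classes.
# All sizes below are made-up placeholders, not the study's architectures.
class AuditoryNet(nn.Module):
    def __init__(self, n_classes=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.readout = nn.Linear(128, n_classes)

    def forward(self, cochleagram):
        # Input: a time-frequency representation of the sound, shape (batch, 1, freq, time)
        x = self.features(cochleagram)
        return self.readout(x.flatten(1))

# Usage: one forward pass over a batch of two illustrative inputs.
net = AuditoryNet()
logits = net(torch.randn(2, 1, 64, 100))
```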

The processing units in a neural network produce activation patterns in response to each audio input they receive, such as a word or another natural sound. These patterns can be compared with fMRI scans of people listening to the same sounds, making it possible to assess how closely the models approximate the neural representations seen in the human brain.
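One common way to make this kind of comparison is to fit a regularised linear regression from a model layer's activations to measured fMRI voxel responses and evaluate the predictions on held-out sounds. The sketch below is illustrative only: the array shapes, random data and RidgeCV settings are assumptions, not the study's actual analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical arrays: activations from one model layer and fMRI responses
# for the same set of sounds. Shapes and values are placeholders.
n_sounds, n_units, n_voxels = 165, 512, 1000
layer_activations = np.random.randn(n_sounds, n_units)   # model layer outputs
voxel_responses = np.random.randn(n_sounds, n_voxels)    # fMRI responses per voxel

# Fit a cross-validated ridge regression and score on held-out sounds;
# higher held-out correlation means the layer better predicts that voxel.
X_tr, X_te, y_tr, y_te = train_test_split(
    layer_activations, voxel_responses, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
pred = model.predict(X_te)
voxel_corrs = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(voxel_corrs):.3f}")
```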

The researchers analysed nine publicly available deep neural networks and created 14 of their own based on two different architectures. Some of these models were specifically designed to perform single tasks (recognising words, identifying speakers, recognising environmental sounds and identifying music genres) while others were trained to perform multiple tasks.
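To illustrate the difference between single-task and multi-task training, the hypothetical sketch below pairs one shared trunk of processing units with a separate readout ("head") per task. The task names, layer sizes and class counts are assumptions for illustration, not the architectures or tasks used in the study.

```python
import torch
import torch.nn as nn

# Illustrative multi-task variant: a shared feature trunk with one head per task.
class MultiTaskAuditoryNet(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(              # shared processing stages
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feature_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({             # task-specific readouts (made-up sizes)
            "word": nn.Linear(feature_dim, 500),      # word recognition
            "speaker": nn.Linear(feature_dim, 200),   # speaker identification
            "genre": nn.Linear(feature_dim, 40),      # music genre
        })

    def forward(self, cochleagram, task):
        return self.heads[task](self.trunk(cochleagram))

# Usage: one forward pass per task on the same (batch, 1, freq, time) input.
net = MultiTaskAuditoryNet()
x = torch.randn(2, 1, 64, 100)
word_logits = net(x, task="word")
```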

The study supports the idea of a hierarchical organisation in the human auditory cortex, in which processing is divided into stages that support distinct computational functions. In line with this, models trained on different tasks were better at replicating different aspects of audition, a finding that may aid the development of more refined models that reproduce human brain responses more closely.

Such models could prove instrumental in developing better hearing aids, cochlear implants and brain-machine interfaces. Ultimately, the researchers believe that a computational model able to accurately predict brain responses and behaviour could open up many opportunities for advances in the field. McDermott's lab, which led the study, now plans to use these findings to build models that match human brain responses even more closely.
