Recent advances in machine learning (ML) and artificial intelligence (AI) are being applied across numerous fields thanks to increased computing power, extensive data access, and improved ML techniques. Researchers from MIT and Harvard University have used these advances to shed light on how the brain responds to language, using AI models to drive and suppress activity in the brain's language network.
Language comprehension is supported by a predominantly left-lateralized network of brain areas, including parts of the frontal and temporal lobes. The mechanisms behind these processes are not yet fully understood, but large language models (LLMs), which generate human-like language after training on vast amounts of text, offer a promising tool for studying them.
The researchers used a GPT-style LLM to build an encoding model that predicts the brain's responses to arbitrary sentences. The encoding model, built on last-token sentence embeddings from GPT2-XL, was trained on brain responses to a diverse set of sentences collected from five participants. It achieved a held-out prediction correlation of r = 0.38, indicating a moderate but reliable relationship between predicted and observed responses.
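The core idea of such an encoding model is simple: map each sentence's embedding vector to a measured brain response with a regularized linear regression, then evaluate the prediction correlation on held-out sentences. The sketch below illustrates this with synthetic data in place of real GPT2-XL embeddings and fMRI measurements; the sizes, the ridge penalty, and the closed-form ridge solver are illustrative choices, not the study's actual pipeline.

```python
import numpy as np

# Illustrative encoding-model sketch: predict a scalar "brain response" per
# sentence from its embedding via ridge regression. Real embeddings would
# come from the last-token hidden state of GPT2-XL (1600 dims); here we
# substitute synthetic data so the example is self-contained.
rng = np.random.default_rng(0)
n_sentences, embed_dim = 1000, 64                  # toy sizes
X = rng.normal(size=(n_sentences, embed_dim))      # stand-in sentence embeddings
true_w = rng.normal(size=embed_dim)                # hypothetical true mapping
y = X @ true_w + rng.normal(scale=5.0, size=n_sentences)  # noisy responses

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Train/test split: fit on 800 sentences, score on the remaining 200,
# analogous to the held-out correlation (r = 0.38) reported in the study.
train, test = slice(0, 800), slice(800, None)
w = ridge_fit(X[train], y[train], alpha=10.0)
y_pred = X[test] @ w
r = np.corrcoef(y[test], y_pred)[0, 1]
print(f"held-out correlation r = {r:.2f}")
```

In practice one such regression is fit per voxel or per region of interest, and the ridge penalty is chosen by cross-validation on the training sentences.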
Further tests using alternative methods for obtaining sentence embeddings, as well as embeddings from a different LLM architecture, maintained high predictive performance, indicating that the model is robust. The model also produced accurate predictions when applied to anatomically defined language regions.
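One common family of such alternatives is the pooling strategy: instead of taking the hidden state at the last token, one can average hidden states across all tokens. The snippet below illustrates these two pooling choices on a synthetic matrix of per-token hidden states; the dimensions are arbitrary stand-ins, not values from the study.

```python
import numpy as np

# Illustrative comparison of two ways to collapse per-token hidden states
# into one sentence embedding. The study's main model used last-token
# pooling, with alternatives tested for robustness.
rng = np.random.default_rng(1)
n_tokens, hidden_dim = 12, 8                        # toy sentence length / width
hidden_states = rng.normal(size=(n_tokens, hidden_dim))

last_token_embedding = hidden_states[-1]            # last-token pooling
mean_embedding = hidden_states.mean(axis=0)         # mean pooling

# Both strategies yield a single fixed-size vector per sentence,
# so either can feed the same downstream encoding regression.
print(last_token_embedding.shape, mean_embedding.shape)
```

Because both strategies produce a vector of the same dimensionality, swapping one for the other leaves the rest of the encoding pipeline unchanged.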
The study’s findings have significant implications for neuroscience research and practical applications. In particular, they open new avenues for studying language processing and, potentially, for treating disorders that affect language. Using LLMs as models of human language processing may also inform natural language processing technologies, such as chatbots and virtual assistants.
In summary, this study represents considerable progress in better understanding the interplay between AI and the human brain. LLMs offer a unique pathway to unravel the intricacies of language processing and create innovative approaches for influencing neural activity.