Using an artificial language network, MIT neuroscientists have found that sentences with unusual grammar or unexpected meanings tend to strongly activate the brain’s key language processing centers. In contrast, straightforward sentences cause only minimal engagement of these regions, as do nonsensical sequences of words.
The researchers discovered this by analyzing how the brain's language network, measured with functional magnetic resonance imaging (fMRI), responded as human participants read 1,000 distinct sentences drawn from diverse sources. They also fed the same sentences to an AI language model and then trained a mapping model to relate the AI model's internal activations to the human brain responses. This mapping model was then used to predict how the human language network would respond to new sentences, based on the AI model's activations for those sentences.
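To make the mapping step concrete, the sketch below shows one common way such an encoding model can be built: a ridge regression from a language model's sentence embeddings to the measured fMRI response of the language network. The random placeholder data, embedding dimension, and regularization strength are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of an encoding ("mapping") model: ridge regression from an
# AI language model's sentence embeddings to the language network's fMRI
# response. The random placeholder data, embedding dimension, and alpha are
# illustrative assumptions, not the authors' exact pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_train, embed_dim = 1000, 768                 # 1,000 training sentences, as in the study
X = rng.normal(size=(n_train, embed_dim))      # stand-in for language-model sentence embeddings
y = rng.normal(size=n_train)                   # stand-in for mean fMRI response per sentence

mapping_model = Ridge(alpha=1.0)               # linear map: embeddings -> brain response
cv_r2 = cross_val_score(mapping_model, X, y, cv=5, scoring="r2")
mapping_model.fit(X, y)

# Predict how the language network would respond to unseen candidate sentences.
X_candidates = rng.normal(size=(10_000, embed_dim))
predicted_response = mapping_model.predict(X_candidates)
```

The cross-validation scores indicate how well the fitted map generalizes to held-out sentences before it is trusted to rank entirely new ones.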
The researchers then tested these predictions in three new human participants, presenting 500 new sentences predicted to maximally stimulate the language network (‘drive’ sentences) along with sentences expected to elicit minimal engagement (‘suppress’ sentences). The results matched the predictions, validating this novel ‘closed-loop’ method of modulating brain activity.
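Given such predicted responses, the choice of drive and suppress sentences can be pictured as a simple ranking over a candidate pool, as in the hypothetical sketch below; the pool, the predictions, and the per-condition count are placeholders, not the study's actual stimuli or selection procedure.

```python
# Hypothetical illustration of selecting 'drive' and 'suppress' sentences:
# rank a candidate pool by the mapping model's predicted language-network
# response and keep the extremes. The pool, predictions, and per-condition
# count k are placeholders, not the study's actual stimuli.
import numpy as np

rng = np.random.default_rng(1)
candidate_sentences = [f"candidate sentence {i}" for i in range(10_000)]  # placeholder pool
predicted_response = rng.normal(size=len(candidate_sentences))            # from the mapping model

k = 100                                           # illustrative per-condition count
order = np.argsort(predicted_response)
suppress_sentences = [candidate_sentences[i] for i in order[:k]]   # lowest predicted activation
drive_sentences = [candidate_sentences[i] for i in order[-k:]]     # highest predicted activation
```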
The team further found that linguistic complexity and surprisal, a measure of how unexpected or improbable a sentence is, shape how strongly the language network responds. Sentences at either extreme, whether very simple or overly complex and nonsensical, evoke minimal activity, while sentences that require some effort to interpret elicit the strongest responses in the language network.
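Surprisal is typically estimated as the negative log-probability a language model assigns to the words of a sentence. The snippet below sketches one way to compute an average per-token surprisal with an off-the-shelf GPT-2 model; GPT-2 and the bits-per-token convention are assumptions for illustration, not necessarily the model or metric used in the study.

```python
# Sketch of one common surprisal estimate: the average negative log-probability
# (in bits per token) that a pretrained language model assigns to a sentence.
# GPT-2 is used here purely for illustration.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_surprisal(sentence: str) -> float:
    """Average per-token surprisal of `sentence` under GPT-2, in bits."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = model(ids, labels=ids).loss          # mean negative log-likelihood (nats/token)
    return nll.item() / math.log(2)                # convert nats to bits

# An ordinary sentence should score lower than a grammatical-but-odd one.
print(sentence_surprisal("The children played in the park after school."))
print(sentence_surprisal("Colorless green ideas sleep furiously."))
```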
In the next phase of the study, the researchers plan to test these conclusions in speakers of languages other than English and to investigate what types of stimuli might activate language-processing regions in the brain's right hemisphere. The study, whose lead author is MIT graduate student Greta Tuckute and whose senior author is Associate Professor Evelina Fedorenko, has been published in Nature Human Behaviour.