Do LLMs predict the next words like our brain does?

Brain recordings show patterns of next-word predictability consistent with those of language models

Language comprehension involves continuous anticipation of upcoming words and linguistic content. However, it is not clear whether the brain anticipates language based on word predictability in a way that resembles the patterns observed in large language models (LLMs). The latest publication by the Responsible AI Innovation group uses neuroimaging techniques to investigate word-class-specific neural responses during continuous speech perception. Brain patterns captured with electroencephalography (EEG) and magnetoencephalography (MEG) were related to word-class-level predictability and representational structure in a large language model.
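To make the idea of relating brain activity to model-derived predictability concrete, here is a minimal, purely illustrative sketch: a linear regression of a per-word neural measure (for instance, an EEG amplitude at a fixed latency) on per-word surprisal. The arrays are synthetic placeholders, not data from the study, and the analysis is a generic stand-in for the authors' actual pipeline.

```python
# Illustrative sketch only: regress a per-word neural measure on per-word
# LM surprisal. All values below are synthetic placeholders, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_words = 500
surprisal = rng.gamma(shape=2.0, scale=3.0, size=n_words)        # bits (placeholder)
eeg_amplitude = 0.4 * surprisal + rng.normal(0.0, 2.0, n_words)  # µV (placeholder)

# Ordinary least squares: amplitude ~ intercept + slope * surprisal.
X = np.column_stack([np.ones(n_words), surprisal])
beta, *_ = np.linalg.lstsq(X, eeg_amplitude, rcond=None)
r = np.corrcoef(X @ beta, eeg_amplitude)[0, 1]
print(f"slope = {beta[1]:.3f} µV/bit, fit correlation r = {r:.2f}")
```

In a real analysis, the neural measure would come from time-locked EEG/MEG responses to each word, and the regression would typically control for confounds such as word frequency and length.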

Analysis of word-class-specific predictability and representational structure in the transformer-based language model Llama provides a computational reference frame that complements the neural findings at the level of word classes. These findings highlight the power of neuroscience-informed studies in unraveling the predictive, syntactic, and semantic mechanisms that underlie language comprehension in humans and its encoding in LLMs.
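As a rough illustration of how word-class-level predictability can be read out of a causal language model, the sketch below computes per-word surprisal and averages it by part-of-speech class. GPT-2 stands in for Llama so the example runs without gated model weights; the model choice, the spaCy tagger, and all names here are assumptions rather than the authors' code.

```python
# Minimal sketch: per-word surprisal from a causal LM, grouped by POS class.
import torch
import spacy
from collections import defaultdict
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")               # stand-in for Llama
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
nlp = spacy.load("en_core_web_sm")          # assumes this spaCy model is installed

def word_surprisals(text):
    """Surprisal (bits) of each word, summed over its subword tokens."""
    enc = tok(text, return_tensors="pt", return_offsets_mapping=True)
    ids = enc.input_ids[0]
    offsets = enc.offset_mapping[0].tolist()
    with torch.no_grad():
        logits = lm(ids.unsqueeze(0)).logits[0]
    # -log2 P(token_t | tokens_<t); the sentence-initial token has no
    # left context, so it contributes nothing.
    logprobs = torch.log_softmax(logits[:-1], dim=-1)
    nll = -logprobs[torch.arange(len(ids) - 1), ids[1:]] / torch.log(torch.tensor(2.0))
    results = []
    for w in nlp(text):
        if w.is_punct or w.is_space:
            continue
        end = w.idx + len(w)
        # A subword token belongs to the word its end offset falls inside.
        s = sum(nll[i - 1].item() for i, (_, b) in enumerate(offsets)
                if i > 0 and w.idx < b <= end)
        results.append((w.text, w.pos_, s))
    return results

by_class = defaultdict(list)
for word, pos, s in word_surprisals("The listener quietly anticipated the next word."):
    by_class[pos].append(s)
for pos, vals in sorted(by_class.items()):
    print(f"{pos:5s} mean surprisal = {sum(vals) / len(vals):.2f} bits")
```

Summing subword surprisals per word follows from the chain rule of probability, so the word-level values can be aggregated directly into per-class averages, for example nouns versus function words.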

This work is the result of a collaboration with Achim Schilling and Patrick Krauss from FAU Erlangen-Nürnberg and Universitätsmedizin Mannheim, with the support of the EELISA European University.

Kölbl, N., Rampp, S., Kaltenhäuser, M. et al. Prediction, syntax and semantic grounding in the brain and large language models. Sci Rep 16, 8728 (2026). https://doi.org/10.1038/s41598-026-41532-0