How can our brains understand the constant influx of notifications we get? (Rawpixel.com/Shutterstock)
In a nutshell
- Your brain can detect sentence structure in just 125 milliseconds, showing how quickly we process written language when words appear all at once.
- Even when grammar is incorrect or the sentence is nonsensical, the brain still recognizes it as a sentence, as long as the word order follows a familiar pattern.
- Our brains read more like we view scenes than listen to speech. Just as we instantly grasp the gist of a visual scene, we also rapidly extract sentence structure from brief flashes of text, highlighting how well-adapted we are to today’s fast-paced, text-heavy world.
NEW YORK — With constant notification pings, our brains are always being bombarded with fleeting text. These brief encounters with written language barely register in our conscious awareness, yet somehow, we understand their meaning instantly.
New research published in Science Advances reveals that our brains can detect sentence structure in as little as 125 milliseconds, faster than you can blink. The study shows that when multiple words appear simultaneously, our brains rapidly identify basic phrase structure, even before processing the actual meaning.
The researchers from New York University wanted to understand the extent to which our brains can comprehend language from a quick glance, similar to how we can instantly grasp visual scenes.
When a notification pops up on your phone or you pass a sign while driving, you barely have time to look at it, yet somehow you grasp its message. It's this everyday phenomenon that inspired the researchers to investigate how our brains accomplish it.
Lightning-Fast Language Processing

While we typically think of language comprehension as a sequential process, especially with spoken words that unfold one sound at a time, our brains appear to have evolved a special capacity for processing written text in parallel, taking in multiple words simultaneously.
To investigate this phenomenon, the research team flashed three-word sentences like “nurses clean wounds” to participants for just 300 milliseconds, about as long as an eye blink. Using magnetoencephalography (MEG), which measures magnetic fields produced by electrical activity in the brain, they recorded neural responses to these quick-flash sentences.
The results revealed sentence-sensitive activity in the left temporal cortex starting just 125 milliseconds after seeing the sentences. This brain region, known to be critical for language processing, became active remarkably quickly, faster than most estimates of even single-word visual processing.
The researchers explain that processing words letter by letter isn’t fundamentally required for word recognition. Rather, sequential processing only happens because of limitations in our sensory and motor systems, particularly when speaking.
Structure First, Meaning Later
But what exactly does the brain detect in those first crucial milliseconds? To find out, the researchers cleverly manipulated their sentence stimuli in various ways.
When they introduced grammar errors, like “nurses cleans wounds” (where the verb doesn’t agree with the subject), the brain still recognized these as sentences rather than random word lists. The same held true when they switched word orders to create implausible meanings like “wounds clean nurses.” The brain still identified these as sentences, despite their nonsensical meaning.
However, when they introduced more complex structures by moving words around to create relative clauses like “wounds nurses clean,” the rapid sentence detection disappeared. This suggests that our lightning-fast sentence processing works best with simple, expected word orders, without rearrangement or displacement.
Participants weren’t instructed to interpret the sentences; they were simply asked to identify whether two sequentially presented word groups matched. Yet their brains automatically detected sentence structure within a fraction of a second.
Parallel Processing in the Brain

Just as we can instantly grasp the “gist” of a visual scene (like recognizing you’re looking at a kitchen before identifying individual appliances), our brains appear to rapidly extract the structural “gist” of a sentence before fully processing its meaning.
Scene perception works better when objects are in their expected locations (like an airplane in the sky rather than underwater), and similarly, sentence perception works better with words in their expected order.
This suggests that when the brain gets to “decide” the order of language processing operations, without being forced to follow the step-by-step flow of spoken language, it focuses on sentence structure first, before figuring out what the words mean.
It seems our neural machinery is remarkably well-adapted to our modern, text-saturated environment, where reading at a glance isn’t just convenient; it’s essential. Every time a notification flashes briefly on your screen, your brain performs a complex linguistic feat in less time than it takes to blink.
Paper Summary
Methodology
The researchers conducted magnetoencephalography (MEG) recordings of 36 native English speakers (29 included in final analysis after removing noisy data) as they viewed three-word stimuli flashed for 300 milliseconds. Participants completed a simple matching task, indicating whether a second stimulus matched the first. The stimuli included subject-verb-object (SVO) sentences like “nurses clean wounds,” lists of related nouns like “hearts lungs livers,” and various manipulated versions of the SVO sentences including agreement errors, thematic role reversals, and relative clauses. The researchers analyzed the MEG data for neural activity that differed between sentences and non-sentence controls, focusing on a time window of 100-500 milliseconds after stimulus presentation.
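The core contrast in the methodology, comparing average evoked activity between sentence and non-sentence trials inside a 100-500 millisecond window, can be illustrated with a toy sketch. Everything here is invented for illustration: the single-sensor data, the trial counts, and the 0.5-unit "sentence" deflection added after 125 ms are simulated stand-ins, not the study's data; a real MEG analysis would use a dedicated toolbox such as MNE-Python.

```python
import numpy as np

rng = np.random.default_rng(0)

SFREQ = 1000                        # samples per second (1 ms resolution)
times = np.arange(0, 700) / SFREQ   # 0-700 ms after stimulus onset, in seconds

def mean_window_amplitude(trials, times, start=0.100, stop=0.500):
    """Average signal across trials, then across the 100-500 ms analysis window."""
    mask = (times >= start) & (times < stop)
    evoked = trials.mean(axis=0)    # average over trials -> shape (n_times,)
    return evoked[mask].mean()

# Simulated single-sensor recordings: (n_trials, n_times), arbitrary units.
# Sentence trials get a small extra deflection starting ~125 ms after onset,
# mimicking the reported sentence-sensitive response.
noise = lambda n: rng.normal(0.0, 1.0, size=(n, times.size))
sentence_trials = noise(60) + 0.5 * (times >= 0.125)
list_trials = noise(60)

diff = (mean_window_amplitude(sentence_trials, times)
        - mean_window_amplitude(list_trials, times))
print(f"sentence-minus-list window amplitude: {diff:.2f}")
```

With 60 trials per condition the noise averages out, so the printed difference lands close to the injected 0.5-unit effect; the study's actual analysis compared such condition differences across sensors and time points rather than a single averaged window.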
Results
The study found that SVO sentences elicited stronger neural activity than noun lists in the left temporal cortex starting around 125 milliseconds after stimulus onset. This sentence-sensitive neural activity persisted even when sentences contained agreement errors or implausible meaning due to role reversals. However, the activity disappeared when sentences were transformed into more complex relative clause structures. Behaviorally, participants responded faster and more accurately when matching grammatical SVO sentences compared to lists of related nouns, demonstrating a “sentence superiority effect.” The researchers concluded that at-a-glance language comprehension begins with rapid detection of basic phrase structure following canonical word order, independent of meaning or grammatical details.
Limitations
The study used an average brain template rather than individual magnetic resonance imaging, which limits the precision of neural localization. The researchers acknowledge that their findings are tentative and raise questions about how this processing might differ across languages, particularly those with free word order. The study also doesn’t fully address whether sentence processing is inherently serial even when presented in parallel, which would suggest that serial processing is an intrinsic property of language.
Funding and Disclosures
The research was supported by the National Science Foundation award #2335767 and award G1001 from NYUAD Institute, New York University Abu Dhabi. The authors declared no competing interests.
Publication Information
The study “Language at a glance: How our brains grasp linguistic structure from parallel visual input” by Jacqueline Fallon and Liina Pylkkänen was published in Science Advances (volume 10, article eadr9951) on October 23, 2024. Both researchers are affiliated with the Department of Psychology at New York University, with Pylkkänen also associated with the Department of Linguistics, and Fallon with the Department of Psychology and Neuroscience at the University of Colorado Boulder.







