
(Photo by Alessandro de Leo on Shutterstock)

In A Nutshell

  • Talking on the phone while driving may delay eye movements by 20 to 100+ milliseconds across all phases of a gaze shift
  • Listening to podcasts, audiobooks, and radio produces no measurable delays
  • Even hands-free phone conversations could interfere with how quickly eyes spot and track objects. That’s because the brain networks for speech production compete with those controlling eye movements
  • Looking downward takes longer for everyone, and talking makes this delay even worse

Drivers who feel guilty about streaming podcasts during their commute may be worrying about the wrong thing. A new study found that passively listening to audio content produced no measurable delays in how quickly people could move their eyes to spot targets on a screen. Answering questions aloud, however, slowed eye movements and made them less accurate.

Researchers at Fujita Health University in Japan tested 30 adults as they performed rapid eye movements under three conditions: while answering questions aloud, while listening to audio recordings, and while performing the task with no distractions. Answering questions clearly lengthened the time eyes took to move to and settle on targets, and it tended to delay the start of eye movements. Listening looked essentially like no distraction in this task.

The laboratory findings suggest a distinction between different types of audio engagement, though whether this translates directly to actual driving scenarios remains to be tested. It also raises the question of whether even hands-free phone conversations should be considered a form of dangerous distracted driving.

That hands-free phone conversation while you drive could be more dangerous than you realize. (Photo by fast-stock on Shutterstock)

Why Talking Affected Eye Movement Control

What the brain does during talking versus listening makes all the difference. When people listen to audio content, they process incoming information without generating verbal responses, potentially leaving mental resources available for controlling eye movements. Answering questions requires retrieving information, planning what to say, and coordinating the physical act of speech, all processes that may compete with visual attention for brain power.

Research team leader Shintaro Uehara designed an experiment to measure how these different types of audio engagement affect the mechanics of looking. Participants sat in front of a computer screen while an eye tracker monitored their gaze 60 times per second. Their job was simple: move their eyes as quickly and accurately as possible from a center point to a red target appearing in one of eight locations around the screen.
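
To make the setup concrete, here is a minimal Python sketch of how eight equally spaced target positions around a central fixation point could be computed. The screen size, the center-to-target distance, and the function name are illustrative assumptions; the paper's exact display geometry is not reproduced here.

    import math

    # Hypothetical layout: eight targets at 45-degree steps around the
    # screen center. The resolution and eccentricity below are assumed
    # placeholders, not values taken from the study.
    CENTER = (960, 540)       # assumed 1920x1080 screen, center in pixels
    ECCENTRICITY_PX = 300     # assumed center-to-target distance in pixels

    def target_positions(center=CENTER, radius=ECCENTRICITY_PX):
        """Return eight target positions: right, upper-right, up, and so on."""
        positions = []
        for i in range(8):
            angle = math.radians(45 * i)              # 0, 45, ..., 315 degrees
            x = center[0] + radius * math.cos(angle)
            y = center[1] - radius * math.sin(angle)  # screen y grows downward
            positions.append((round(x), round(y)))
        return positions

    print(target_positions())

On each trial, one of these eight positions would light up at random while the tracker sampled gaze 60 times per second.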

During the talking condition, researchers posed questions requiring thoughtful answers. Some came from standard intelligence tests, asking about factual knowledge like world capitals. Others probed personal memory, asking what participants had worn the previous day or when they had gone to sleep. The researchers used this structured question-and-answer format rather than free-flowing conversation to limit the range of content. For the listening condition, researchers played recordings from a famous Japanese novel and told participants to focus on understanding the content.

How Talking Affected Eye Movement Timing

Three timing measurements captured different phases of the eye movement process. Reaction time measured how long participants took to start moving their eyes after the target appeared. Movement time tracked how long the eye traveled from starting position to target. Adjusting time recorded how long participants needed to stabilize their gaze on the target.
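
To illustrate how these three intervals can be read off a raw gaze trace, here is a minimal Python sketch that segments a single trial sampled at 60 Hz. The movement threshold, the target-zone radius, the stability criterion, and the function name are assumptions for illustration, not the authors' actual analysis pipeline.

    import numpy as np

    def segment_gaze_trial(gaze_xy, target_xy, onset_idx,
                           fs=60.0, move_thresh_px=20, zone_radius_px=40):
        """Split one trial's gaze trace into reaction, movement, and
        adjusting phases. gaze_xy is an (n, 2) array of gaze samples;
        onset_idx is the sample at which the target appeared. All
        thresholds are illustrative placeholders."""
        target_xy = np.asarray(target_xy, dtype=float)
        start_pos = gaze_xy[onset_idx]

        # Reaction time: target onset until gaze first leaves the start point.
        moved = np.linalg.norm(gaze_xy - start_pos, axis=1) > move_thresh_px
        move_start = onset_idx + int(np.argmax(moved[onset_idx:]))

        # Movement time: leaving the start until first entering the target zone.
        in_zone = np.linalg.norm(gaze_xy - target_xy, axis=1) < zone_radius_px
        arrive = move_start + int(np.argmax(in_zone[move_start:]))

        # Adjusting time: arrival until gaze stays inside the target zone
        # for the rest of the trial (a crude stability criterion).
        settled = arrive
        for i in range(arrive, len(gaze_xy)):
            if in_zone[i:].all():
                settled = i
                break

        ms = 1000.0 / fs  # one sample at 60 Hz is about 16.7 milliseconds
        return {"reaction_ms": (move_start - onset_idx) * ms,
                "movement_ms": (arrive - move_start) * ms,
                "adjusting_ms": (settled - arrive) * ms}

Averaging these per-trial values within each condition would yield the kind of condition-level numbers reported below.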

When talking, participants took an average of 280 milliseconds to begin moving their eyes toward the target, about 20 milliseconds longer than while listening or doing the task undistracted. This difference was modest and only a statistical trend, but the delays grew larger in the later phases of each gaze shift.

Eyes took 260 milliseconds on average to travel to the target while talking, but only 142 milliseconds while listening and 161 milliseconds with no distraction. Nearly double the time.

The most striking difference appeared in stabilization time. Participants needed an average of 1,227 milliseconds to lock their gaze on the target while talking, more than twice the 493 milliseconds required while listening.

Listening and doing the task with no distraction produced virtually identical results across all measurements. Talking stood apart.

It may be best to avoid asking your driver questions while on the road. (© Syda Productions – stock.adobe.com)

Brain Networks and Eye Movement Control

The researchers propose that talking and eye movement control draw on overlapping brain networks, particularly in the frontal and parietal regions. These areas help plan and execute voluntary eye movements toward specific locations. When simultaneously engaged in language production, these networks may have less capacity available for managing precise, rapid eye movements.

The authors note that the listening clips may not have demanded sustained attention in the way real-world audio sometimes does, and they suggest a more demanding listening task could potentially interfere with gaze timing. Participants may not have paid close attention to the novel recordings, which could explain why listening produced no measurable delays.

Earlier brain imaging research offers some support for competition between language and visual control systems. One study using functional MRI found that language comprehension while viewing driving scenes reduced activity in brain areas responsible for spatial processing.

Looking Down Takes Longer

Beyond the talking versus listening distinction, participants consistently took longer to initiate eye movements toward targets in the lower visual field, particularly at positions requiring downward and sideways movements. This downward bias appears in classical vision research dating back decades.

Objects requiring downward gaze carry particular weight in driving scenarios. Children entering the street, animals crossing the road, debris that’s fallen from vehicles, and potholes all require looking below the horizon line. The compound effect of natural downward delays plus talking-induced delays could prove especially dangerous for detecting these hazards.

What the Lab Findings May Mean for Real-World Driving

Visual information provides an estimated 90 percent of what drivers need for safe vehicle operation. Earlier research has established that phone conversations while driving, whether on hand-held or hands-free devices, roughly quadruple crash risk compared with undistracted driving. The impairment can match or exceed that of driving while legally drunk.

Researchers have typically attributed these elevated crash risks to divided attention and slower physical reactions, such as delayed braking. The new laboratory study points to another possible mechanism: answering questions aloud interfered with gaze timing in this controlled task. One reason the authors pursued this research is that prior driving studies suggest hands-free phone calls can still impair responses, raising questions about how the act of talking might affect early visual steps.

The distinction between listening and answering questions in this controlled laboratory task offers some preliminary observations, though with important caveats. The study tested simple eye movements to clear targets on a screen, not the complex visual demands of actual driving. The listening task may not have required the level of attention that engaging audio content sometimes demands. More difficult or engaging listening material might produce different results.

Answering questions aloud created measurable interference with eye movement control in this laboratory setting. The controlled conditions suggest talking interferes with basic visuomotor processes, though how this translates to real roads with their unpredictable hazards remains an open question requiring further research.

Still, the clear distinction between talking and listening in this controlled setting suggests that not all audio engagement carries the same risk. The act of speaking appears to be what interferes with visual control, not just having your ears occupied.

Paper Notes

Limitations

The experiment measured eye movements in a laboratory setting rather than during actual driving. Participants performed a simplified visual task with clear targets on a computer screen, which differs substantially from the unpredictable visual environment of real-world driving. Whether these laboratory findings translate to actual driving performance remains unknown.

The study used a specific type of structured question-and-answer format, with researchers asking predetermined questions that participants answered aloud. Natural conversations that flow more freely between topics might produce different levels of interference. Emotionally charged discussions or casual chat could affect eye movements differently. The study did not test actual phone conversations, either hand-held or hands-free.

The research did not directly measure the cognitive load experienced by participants. The listening condition involved exposure to a novel recitation, but participants may not have paid close attention to the content. A more engaging listening task that required answering questions afterward might have produced different results. The authors explicitly note this limitation and caution against broadly concluding that all listening tasks would be free from interference.

The reaction time comparisons between talking and the other conditions showed trends (p=0.07 for talking vs. listening; p=0.09 for talking vs. control) but did not reach conventional statistical significance thresholds after correction for multiple comparisons. The movement time and adjusting time differences were statistically significant.

All participants were relatively young adults with an average age of 22.6 years. Results may differ for older drivers or teenagers.

Funding and Disclosures

The authors received no specific funding for this work. The study was conducted using existing facilities and equipment at Fujita Health University. The authors declared no competing interests.

Publication Details

Title: Talking-associated cognitive loads degrade the quality of gaze behavior

Authors: Takuya Suzuki (Department of Rehabilitation, Fujita Health University Hospital and Graduate School of Health Sciences, Fujita Health University, Aichi, Japan), Takaji Suzuki (Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan; current address: Department of Occupational Therapy, Faculty of Health Sciences, Kinjo University, Ishikawa, Japan), Shintaro Uehara (Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan)

Journal: PLOS One | Published: October 6, 2025 | DOI: 10.1371/journal.pone.0333586 | Study Period: November 7, 2019 to August 13, 2020 | Ethics Approval: Ethics Review Committee of Fujita Health University (approval numbers: HM18-369 [original] and HM20-073 [revised]) | Open Access: This article is distributed under the terms of the Creative Commons Attribution License | Corresponding Author: Shintaro Uehara (shintaro.uehara@gmail.com)
