
(Photo by Jopwell from Pexels)

Keeping a few key questions in mind may help detect bad-faith arguments and half-baked ideas.

In A Nutshell

  • Swedish researchers tested whether written instructions alone could help people evaluate political arguments better, without any teachers or classroom time
  • Participants who received repeated reminders about asking critical questions (Is this expert qualified? Is this consequence likely?) became modestly better at identifying weak arguments
  • The effect was small but consistent across different political topics, from economic policy to immigration debates
  • Rejecting bad arguments may matter more for democracy than accepting good ones, since weak proposals that gain support tend to end debate prematurely

Differentiating good arguments from bad ones can seem to require years of education or training. Swedish researchers, however, found a simple way to help people spot a faulty line of thought: after reading the same short instructions multiple times, participants became modestly better at identifying flawed political arguments, with no classroom time required.

The secret wasn’t sophisticated training or complex reasoning exercises. Participants who received repeated reminders about how to evaluate arguments became modestly more discerning, judging weak claims more harshly while going easier on strong ones, compared with those who saw the instructions just once or not at all. The finding suggests that reinforcement, even without a teacher or discussion, can sharpen how people evaluate political claims.

Testing Whether Written Instructions Can Improve Argument Evaluation

Researchers from the University of Gothenburg recruited 1,219 Swedish adults and asked them to evaluate six political arguments on topics ranging from economic policy to immigration. Each person saw three strong arguments and three weak arguments, presented in random order.

One group got no special instructions. They simply rated the arguments based on their own judgment. A second group received written guidance teaching them to ask critical questions: Does this cited expert actually have relevant knowledge in the field? Is this predicted consequence actually likely to occur? A third group received the same instructions, repeated as reminders throughout the task.

The arguments were carefully designed. For arguments citing experts, quality hinged on whether the expert possessed relevant expertise. A criminology professor discussing crime policy represented a strong argument; an anthropology professor making the same claims represented a weak one. For arguments predicting consequences, strong versions included plausible causal connections while weak versions merely repeated claims without real reasoning.

The more participants read the instructions, the more critically they approached the topics being discussed. (Credit: Gorgev on Shutterstock)

Repetition Helped People Spot Weak Arguments More Effectively

Participants who saw the instructions just once became more skeptical overall, rating both strong and weak arguments lower than the control group. The repeated-instruction group showed the clearest pattern: they were especially tough on weak arguments while being only somewhat more critical of strong ones.

Reading the same simple instructions multiple times appeared to embed the critical questions more firmly in participants’ minds. The effect held across every political topic the researchers tested: non-ideological issues, traditional left-right economic debates, and divisive topics like crime and immigration.

Why Getting More Critical of Bad Arguments Matters

Lead researcher Henrik Friberg-Fernros and colleagues argue this pattern matters more than it might seem. From a democratic perspective, helping citizens identify flawed arguments may actually matter more than ensuring they fully appreciate strong ones.

Here’s why: weak proposals that gain acceptance tend to close deliberation prematurely—the issue feels settled. Strong proposals facing too much skepticism typically remain open for continued debate, giving them more chances to eventually succeed. The long-term cost of accepting bad arguments may be worse than temporarily rejecting good ones.

The instructions themselves were straightforward. For arguments citing experts, participants learned to ask whether the authority actually possessed relevant knowledge in the field. For arguments about consequences, they learned to assess whether an action would actually produce the claimed outcome. Simple questions, repeatedly reinforced.

Previous research on improving argument evaluation has typically involved classroom settings with face-to-face instruction. This study suggests that even brief written guidance, when reinforced through repetition, can produce measurable improvements without any interpersonal teaching.

The approach has limits. The study, published in Argumentation, focused on Swedish adults and two specific argument types, and the improvements were modest rather than dramatic. Still, the findings point to a practical path forward. If repeated exposure to simple written instructions can help people become more discerning consumers of political arguments, the approach could reach entire populations without requiring costly educational programs. Sometimes improving critical thinking is less about teaching complex skills and more about reinforcing simple questions people can learn to ask.


Paper Notes

Limitations

The research measured outcomes (how people rated arguments) rather than the underlying reasoning process, so researchers couldn’t observe exactly how the instructions changed thinking. The study focused on Swedish adults and tested only two types of arguments, so whether similar approaches work across different cultures or argument structures needs further testing. Improvements were modest—participants showed better discrimination but didn’t achieve perfect evaluation.

Funding and Disclosures

The work was supported by the Riksbankens Jubileumsfond (grant M18-0310:1), a Swedish research foundation. The authors declared no competing interests.

Publication Details

Authors: Henrik Friberg-Fernros (University of Gothenburg), Sebastian Lundmark (University of Gothenburg), Nora Theorin (University of Gothenburg), Jakob Ahlbom (Stockholm University), and Henrik Ekengren Oscarsson (University of Gothenburg). The study, “Can People be Made More Rational? Testing Whether People’s Ability to Assess Arguments Can be Enhanced,” was published in the journal Argumentation on December 6, 2025. DOI: 10.1007/s10503-025-09683-y. Materials and analysis code are publicly available at https://osf.io/zhjws.

