
In A Nutshell
- Fact-checks that repeat scary disease warnings while debunking false cures can accidentally make people trust the bogus remedies more, not less
- Fear-based corrections inflate people’s confidence that fake treatments work, without actually making them feel more threatened by the disease
- People who see health issues as personally relevant are more vulnerable to this backfire effect after reading corrections
- Effective corrections should focus on why the remedy doesn’t work rather than rehearsing frightening disease symptoms
Sarah scrolls past a Facebook post warning that vitamin deficiency causes bleeding gums and a weakened immune system. The post claims multivitamins can fix it. Two days later, she sees a fact-check from a major health organization: “Myth Busted: Vitamin deficiency is serious and causes bleeding gums, but multivitamins don’t improve overall health.” The correction repeats the scary symptoms, mentions the supplements, then says they don’t work.
A week later, Sarah’s at the pharmacy. She tosses a bottle of multivitamins in her basket.
This scenario isn’t from the study itself, but it illustrates a pattern the research helps explain. New findings shed light on why certain health myths are especially hard to eliminate. When fact-checkers debunk false health remedies by repeating the frightening disease warnings that came with them, they can accidentally make people trust the bogus treatments more, not less. In some cases, the correction starts to function like a commercial for the lie.
Researchers at Tianjin Normal University ran two experiments with 180 people total (54 undergraduates in the first study, then 126 adults with a broader age range in the second) to figure out why health misinformation sticks even after it’s been corrected. What they found should worry anyone trying to fight medical myths online: context matters as much as content. Pair a false cure with a scary disease, and corrections that mention both elements actually strengthen belief in the fake remedy.
How Fear-Hijacked Corrections Backfire
Scientists tested this by having people read health stories in three different formats. In one condition, neutral health facts came with a simple correction (the control). In another, a false remedy claim was followed by a correction, but without any scary disease info. The third condition delivered the full misinformation package: frightening health warnings plus a miracle cure claim, followed by the same kind of correction.
The results were striking. When corrections included fear appeals (mentioning serious symptoms like bleeding gums or weakened immunity), people were significantly more likely to believe the debunked remedy and to say they would buy the products. Corrections without the scary context had little measurable effect.
Even worse, personal relevance amplified the effect. People who thought the health issue applied to them were more likely to ignore corrections and trust the false cure. Age also played a role: older participants in the first experiment showed greater reliance on debunked claims when making health judgments, though this sample skewed younger overall and didn’t include many middle-aged or elderly adults.
A second experiment with 126 adults dug into why this happens. Researchers measured two things: how much people believed the fake remedy would work, and how threatened they felt by the disease. Surprisingly, fear-based corrections inflated confidence in the fake treatments without making people feel any more threatened by the disease; the fear context didn’t raise perceived threat, it inflated perceived efficacy. Reading a correction that said “vitamin deficiency causes bleeding gums, but multivitamins don’t help” made people more convinced that multivitamins work: the exact opposite of what fact-checkers intended.

Why Your Brain Trusts the Lie
When you first encounter a post claiming garlic prevents COVID or lemon water cures cancer, your brain builds a simple story: scary problem, easy solution. That narrative feels complete. It makes sense. Later, when a correction shows up saying “COVID is dangerous and spreads through respiratory droplets, but garlic doesn’t prevent it,” your brain doesn’t necessarily become more afraid of COVID. Instead, the correction reinforces the problem-solution pairing. Paradoxically, that makes the solution (even a fake one) seem more credible, because the brain prioritizes finding a way to fix the problem over the technical details of why the fix is fake.
The solution sticks because the fear appeal creates an emotional charge around the entire narrative. In psychological terms, the fear-solution pairing strengthens the mental model built around the false remedy. Your brain encoded that fake remedy within an emotionally charged framework about disease, and emotion is powerful memory glue. When you think about the health issue later (maybe when you’re shopping or making medical decisions), that emotionally tagged memory of the false cure pops up automatically. The neutral, boring correction? It barely registers.
This explains why simply saying “that’s false” almost never works with health misinformation. But it gets worse when corrections inadvertently recreate the exact fear framework that made the lie persuasive in the first place.
What This Means for Fighting Health Lies
Social media algorithms love posts that combine health scares with miracle solutions. They get clicks, shares, engagement. Platforms boost them without recognizing they’re often peddling dangerous nonsense. During COVID-19, claims that garlic, alcohol, or megadoses of vitamin C prevented infection spread faster than accurate public health information, potentially undermining vaccination efforts and proper medical care.
False health remedies cause real damage. People delay legitimate treatment, waste money on useless products, or experience side effects from unregulated supplements. The broader misinformation ecosystem has documented extreme cases: people who’ve drunk bleach to “cure” autism in children, cancer patients who chose coffee enemas over chemotherapy, diabetics who stopped insulin because social media said cinnamon works better. While these specific examples aren’t from this study, they illustrate the real-world stakes of health misinformation that resists correction.
Current fact-checking strategies assume that providing full context helps people understand why something is false. Explain the disease, present the false claim, debunk it with science. Thorough, right? But with health misinformation embedded in fear appeals, thoroughness backfires. Every time a correction rehearses the scary symptoms, it breathes new life into the fake cure attached to those symptoms.
The findings, published in Acta Psychologica, suggest that corrections may work better when they focus on dismantling why the remedy doesn’t work, ideally without reminding people what terrible things might happen if they don’t find a solution. Instead of “Disease X causes scary symptoms, but Remedy Y doesn’t help,” try “Remedy Y has no proven benefits and may cause harm.”
Health product marketing should also face more scrutiny. Ads that describe frightening diseases followed by product benefits use the same psychological manipulation as misinformation. Even when companies make technically legal claims, pairing them with fear messaging exploits cognitive vulnerabilities that regulations haven’t caught up with yet.
The backfire effect isn’t inevitable, but avoiding it requires rethinking how we debunk health lies. Corrections that repeat fear appeals don’t correct; they advertise. Strip away the scary context, focus on why the fake remedy fails, and maybe the truth will finally stick.
Disclaimer: This article discusses research findings about health misinformation and should not be construed as medical advice. Always consult qualified healthcare professionals for medical concerns and treatment decisions.
Paper Notes
Study Limitations
Several limitations affect the generalizability and interpretation of these findings:
- Experimental materials presented misinformation and corrections within the same text, unlike real-world scenarios where corrections typically appear separately from the initial misinformation, often after considerable time has passed.
- The study used text-only stimuli, while social media misinformation often includes images, videos, and interactive elements that may influence persuasiveness differently.
- All materials came from Chinese rumor-debunking platforms, and cultural factors may affect how people in other countries process health misinformation.
- Experiment 1 focused on undergraduate students, and while Experiment 2 expanded to a broader adult sample, middle-aged and elderly populations (who may be more vulnerable to health misinformation) were still underrepresented.
- The study examined only simple negation as a correction method, not more sophisticated debunking strategies that might prove more effective.
- The within-subjects design, while controlling for individual differences, meant participants encountered multiple health stories in sequence, which could differ from processing a single piece of misinformation in isolation.
The authors also note that the mediation findings regarding perceived efficacy should be considered exploratory because the experimental design didn’t use factorial manipulation to isolate the independent effects of threat and efficacy. Future research employing factorial designs could provide stronger evidence for the dual pathways proposed by the Extended Parallel Process Model.
Funding and Disclosures
This work was supported by the Philosophy and Social Science Research Planning Project of Tianjin (General Project), Grant No. TJXL24-001. All authors declared no competing interests. During manuscript preparation, the authors used GPT-4o-mini and Grammarly to improve readability and language, then reviewed and edited the content and take full responsibility for the published article.
Publication Details
Authors: Xuying Wang, Xiaokang Jin, Yifan Yu, Hua Jin (corresponding author) | Affiliations: Faculty of Psychology, Tianjin Normal University, China; Key Research Base of Humanities and Social Sciences of the Ministry of Education, Academy of Psychology and Behavior, Tianjin Normal University, China | Journal: Acta Psychologica | Volume/Issue: Volume 264 (2026) | Article Number: 106418 | DOI: https://doi.org/10.1016/j.actpsy.2026.106418 | Received: 15 May 2025 | Revised: 20 January 2026 | Accepted: 3 February 2026 | Published Online: 7 February 2026 | License: Open access under CC BY-NC license