
Generative artificial intelligence is becoming increasingly difficult to detect. (Image by feeling lucky on Shutterstock)

In a nutshell

  • Americans believe only 41% of online content is accurate and created by humans, with three-quarters reporting their trust in the internet is at an all-time low.
  • When tested, only 30% of people could correctly identify AI-written content, showing how difficult it’s become to distinguish between human and artificial writing.
  • 82% of Americans want businesses to be legally required to disclose when they use AI in marketing, customer service, or content creation.

NEW YORK — Americans’ confidence in online content has hit rock bottom. Most people now believe the majority of what they see on the internet isn’t trustworthy, according to a nationwide poll.

The survey of 2,000 adults by Talker Research paints a concerning picture. Americans believe only 41% of online content is accurate, factual, and made by humans. They think 23% is completely false and purposely inaccurate or misleading, while 36% falls somewhere in between. Three-quarters of respondents say they trust the internet less today than ever before.

About 78% of respondents agree that the internet has “never been worse” when it comes to differentiating between what’s real and what’s artificial. This skepticism has grown as AI-generated material becomes increasingly prevalent.

Spotting Fake Content

The average American encounters information they know or suspect was generated by AI about five times per week, with 15% indicating it’s more than 10 times.

When asked where they most commonly notice artificial content, respondents identified:

  • Social media posts (48%)
  • News articles (34%)
  • Chatbot interactions (32%)

Those polled believe that 50% of the news stories and articles they come across online have some element of AI, whether in images or written content.

The research, commissioned by World.org, uncovered a troubling reality: when tested, only 30% of participants could correctly identify which business reviews were written by humans versus AI. Of the three options written by people, two ranked at the very bottom of the list, demonstrating how challenging it has become to recognize genuine human writing.

The explosion of generative AI platforms like ChatGPT has made Americans even more skeptical of the information they read online. (© irissca – stock.adobe.com)

Real-World Consequences

This erosion of trust affects how consumers behave online. With 80% of Americans relying on reviews when choosing businesses to support, this uncertainty undermines consumer confidence.

Consumers reported being less likely to patronize companies using:

  • Bot-written reviews (62%)
  • AI customer service representatives (50%)
  • AI-generated images (49%)

Nearly half (46%) of respondents have purchased something that ended up not being what was advertised. Of those, 24% weren’t able to get a refund or return the item.

Rebecca Hahn, Chief Communications Officer of Tools for Humanity, developers of World ID, described the situation directly: “Trust in the internet hasn’t just declined — it’s collapsed under an avalanche of AI-generated noise. The internet has become a house of mirrors where 78% of Americans can no longer distinguish real from artificial.”

Finding Solutions

Respondents said the most stressful situation in which to figure out whether they're dealing with a person or a chatbot is speaking with a customer service representative (43%), followed by booking lodging or hotels (23%) and sending money through a third-party app (22%).

People have developed their own verification methods. As one respondent explained, “I often ask open-ended questions or test for human-like responses, such as asking for personal opinions or experiences.” Additionally, 24% will Google or search for the entity online to verify their human status, while 23% ask for a phone or video call.

The vast majority of Americans (82%) agree that businesses and vendors should be legally required to disclose whether AI is used in their marketing, content, customer service, or websites.

Hahn noted the urgency of finding solutions: “Being able to prove you’re human online is becoming as essential as having an email address was twenty years ago. Our survey shows Americans are desperate for tools that restore confidence in digital interactions. We’re pioneering a new paradigm where human verification becomes a foundational layer of the internet — simple, secure, and universally accessible. This isn’t just about solving today’s trust crisis; it’s about building tomorrow’s internet where human-to-human connection remains at the heart of everything we do.”


About the Research: Talker Research surveyed 2,000 general-population Americans between March 28 and March 31, 2025. The survey, commissioned by World, used traditional online access panels and programmatic sampling with incentives for completion. Quality-control measures removed speeders (respondents completing the survey in less than one-third of the median interview time), inappropriate responses, Captcha-identified bots, and duplicate submissions flagged through digital fingerprinting. The survey was only available to individuals with internet access.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better than) field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor


10 Comments

  1. Thermos says:

A lot of well-known sites are using AI-generated videos, which is so ridiculous and obvious. The AI-generated person talking and trying to lip sync is utterly childish. Please STOP that. I don't bother to watch anymore.

  2. Benjamin K says:

Hilarious that right in the sidebar is a bunch of clickbait junk "news stories" "from our partners." Yeah, we ain't ever getting out of the muck. A quick buck is too much to pass up. And it's all funded by clickbait garbage, AI or not. That's why the internet sucks. You're part of the problem.

  3. Big Fatback says:

The mainstream media already made everyone suspicious long before LLMs. These lying propagandists and agents of a nameless foreign country attempting a coup of the USA are the real problem.

    This country is falling apart and we’re still talking about (realistic) cat memes. Be real.

  4. Jim Johnson says:

I used to watch some cute dog stories or human interest stories on YouTube. I now see that many of them are AI-generated, and I have quit those types of videos entirely.

  5. Diana says:

I have encountered errors and mistruths many times… to the point of argument, with the AI finally admitting I was right. This is frightening. And it took stressful work to prove and support, until it eventually flip-flopped like it knew all along.
We are in trouble. (Yes, on different platforms.)

  6. Blue Centaur says:

    Note how this article never claims to have been written by a human, just “reviewed” by one. Also, a human writer is just as capable of writing something untrustworthy on the internet as an AI. By the way, I used ChatGPT to write this comment.

  7. Laus Deo Beware says:

    Don’t bother commenting. Some AI will throw it in the trash anyway. Unless it’s “okay” with some controller.

  8. Wade says:

    I never believed anything anyway. Behind every smile is another shoe waiting to drop.

  9. SydneyRossSinger says:

The tendency for AI to feed you what you are already interested in has also caused polarization of the culture, with people getting biased information that often ignores any alternative viewpoints. I also wonder if people still believe what AI language models say, like ChatGPT. These language models are extremely biased toward the mainstream consensus and steer people toward accepting mainstream beliefs. This causes non-mainstream or alternative viewpoints to be falsely discredited, ignored, or suppressed. Sometimes the AI will outright lie and fabricate information, which it will admit when you challenge it, but it will then revert to its programmed bias. Those in the know, who see the mainstream narrative as biased, can see through these narrative gatekeepers. Unfortunately, most people accept what AI tells them without critical reflection, just as they accept Wikipedia as truth. The next generation growing up with AI as a given in their lives will have a different relationship with AI than pre-AI generations.

  10. Chanmac says:

It was already out of control… media ads wrapped as articles, algorithm chasing, photo editing.
Outright mainstream or alt media… don't blame AI lol. And bots and AI aren't the same thing either.
If anything, in a few more years it will be AI that can sort this all out.