AI can ‘lie and BS’ just like humans, but can’t match our intelligence

CINCINNATI — As artificial intelligence continues to dominate the headlines, more and more people find themselves wondering if they’ll one day be out of a job because of an AI-powered robot. However, Anthony Chemero, a professor of philosophy and psychology at the University of Cincinnati, contends that popular conceptions of today’s AI’s intelligence have largely been muddled by linguistics. Put another way, Prof. Chemero explains that while AI is indeed intelligent, it simply cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”

To start, the report details how ChatGPT and other AI systems are large language models (LLMs) that are “trained” using massive amounts of data mined from the internet. Importantly, much of that information shares the biases of the people who posted the data in the first place.

“LLMs generate impressive text, but often make things up whole cloth,” Chemero states in a university release. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”


The creators of LLMs call it “hallucinating” when the programs make things up, but Prof. Chemero claims “it would be better to call it ‘bullsh*tting.’” Why? LLMs work by constructing sentences through the repeated addition of the most statistically likely next word. These programs don’t know or care if what they are producing is actually true.
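That “most statistically likely next word” mechanism can be illustrated with a toy bigram model, a drastic simplification of a real LLM. The tiny corpus and counts below are invented purely for illustration; the point is that the generator picks words by frequency, with no notion of truth.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the web-scale data an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word — with no
    regard for whether the resulting sentence is true."""
    return follows[word].most_common(1)[0][0]

# Generate by repeatedly appending the most likely continuation.
sentence = ["the"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

The output is grammatical-looking text assembled purely from word-frequency statistics, which is Chemero’s point: nothing in this loop knows or cares what the sentence means.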

Even worse, with a little prodding, he adds, anyone can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”

Prof. Chemero stresses that LLMs are not intelligent in the way humans are intelligent because humans are embodied, meaning we’re living beings who are always surrounded by other living beings, as well as material and cultural environments.

“This makes us care about our own survival and the world we live in,” he notes, commenting that LLMs aren’t really in the world and don’t actually care about, well, anything.

Ultimately, the main takeaway here is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Prof. Chemero concludes, adding, “things matter to us. We are committed to our survival. We care about the world we live in.”

The study is published in Nature Human Behaviour.
