Human brain and artificial intelligence

(Image by Shutterstock AI Generator)

CAMBRIDGE, England — Imagine scrolling through your social media feed when your AI assistant chimes in: “I notice you’ve been feeling down lately. Should we book that beach vacation you’ve been thinking about?” The eerie part isn’t that it knows you’re sad — it’s that it predicted your desire for a beach vacation before you consciously formed the thought yourself. Welcome to what some experts believe will be known as the “intention economy,” a way of life for consumers in the not-too-distant future.

A new paper by researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence warns that large language models (LLMs) like ChatGPT aren’t just changing how we interact with technology; they’re laying the groundwork for a new marketplace where our intentions could become commodities to be bought and sold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” says co-author Dr. Yaqub Chaudhary, a visiting scholar at the Centre, in a statement.

For decades, tech companies have profited from what’s known as the attention economy, where our eyeballs and clicks are the currency. Social media platforms and websites compete for our limited attention spans, serving up endless streams of content and ads. But according to Chaudhary and co-author Dr. Jonnie Penn, we’re witnessing early signs of something potentially more invasive: an economic system that could treat our motivations and plans as valuable data to be captured and traded.

What makes this potential new economy particularly concerning is its intimate nature. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary explains.

AI assistant on smartphone
(Image by TippaPatt on Shutterstock)

Early signs of this emerging marketplace are already visible. Apple’s new “App Intents” developer framework for Siri includes protocols to “predict actions someone might take in future” and suggest apps based on these predictions. OpenAI has openly called for “data that expresses human intention… across any language, topic, and format.” Meanwhile, Meta has been researching “Intentonomy,” developing datasets for understanding human intent.

Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. While currently limited to gaming, this technology demonstrates the potential for AI systems to understand and influence human intentions through natural conversation.

Major tech companies are positioning themselves for this potential future. Microsoft has partnered with OpenAI in what the researchers describe as “the largest infrastructure buildout that humanity has ever seen,” investing over $50 billion annually from 2024 onward. The researchers suggest that future AI assistants could have unprecedented access to psychological and behavioral data, often collected through casual conversation.

The researchers warn that unless regulated, this developing intention economy “will treat your motivations as the new currency” in what amounts to “a gold rush for those who target, steer, and sell human intentions.” This isn’t just about selling products — it could have implications for democracy itself, potentially affecting everything from consumer choices to voting behavior.

An intention economy’s targets could extend far beyond vacation planning or shopping habits. The researchers argue we must consider the likely impact on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of unintended consequences.

Perhaps the most unsettling aspect of the intention economy isn’t its ability to predict our choices, but its potential to subtly guide them. As our AI assistants become more sophisticated at anticipating our needs, we must ask ourselves: In a world where our intentions are commodities, how many of our choices will truly be our own?

Paper Summary

Methodology

The researchers conducted a comprehensive analysis of corporate announcements, technical literature, and emerging research on large language models to identify patterns suggesting the development of an intention economy. They examined statements from key tech industry figures, analyzed research papers (including unpublished preprints on arXiv), and studied the technical capabilities of systems like Meta’s CICERO and various LLM applications.

Results

The study found clear evidence of major tech companies positioning themselves to capture and monetize user intentions through LLMs. They identified specific technological developments enabling this shift, including improved natural language processing, psychological profiling capabilities, and infrastructure investments. The research also revealed how companies are already developing tools to circumvent traditional privacy protections.

Limitations

The researchers acknowledge that many of their observations are based on emerging trends and corporate statements rather than long-term empirical data. Additionally, some of the research papers they cite are still undergoing peer review. The full impact of these technologies remains somewhat speculative.

Discussion and Takeaways

The paper argues that the intention economy represents a significant evolution beyond the attention economy, with potentially far-reaching implications for privacy, autonomy, and democracy. The researchers emphasize the need for sustained scholarly, civic, and regulatory scrutiny of these developments. They particularly highlight the risks of personalized persuasion at scale and the potential for manipulation of democratic processes.

Funding and Disclosures

The research was conducted at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. The authors declared no financial or non-financial conflicts of interest.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor