“You said you were feeling overworked, shall I book you that movie ticket we talked about?” According to Dr Yaqub Chaudhary and Dr Jonnie Penn, two researchers at the University of Cambridge, artificial intelligence (AI) giants will soon be selling our “intentions,” after the winners of targeted advertising (Google, Facebook) commercialised our attention.
Researchers at the UK university’s Leverhulme Centre for the Future of Intelligence (LCFI) believe that, if the authorities are not careful, there will be “a gold rush for those who target, steer, and sell human intentions.”
The two experts argue, in an article published in the Harvard Data Science Review on December 31, that ChatGPT, Mistral AI, Gemini, Claude and other generative AI tools could soon “predict and influence our decisions at an early stage, and sell these ‘intentions’ in real time to companies that can meet those needs, even before we have made our decision.”
An auction system
Suppose a company or group wants to boost sales of a product, promote a service, or favor a candidate in a presidential election. It would simply pay the AI giants, through an auction system similar to the one that already exists for our personal data, to “direct” a conversational agent’s dialogue towards the product or service that won the auction.
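To make the mechanism concrete, here is a minimal sketch, in Python, of what such an intention auction could look like if it were modeled on the second-price auctions already used in programmatic advertising. Everything here (the Bid structure, the run_auction function, the sample advertisers and prices) is a hypothetical illustration, not a system described by the researchers or by any AI company.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str       # who wants the conversation steered
    target_intent: str    # the inferred user intention being bought
    amount_eur: float     # willingness to pay for this intention

def run_auction(inferred_intent, bids):
    """Second-price auction, as in ad exchanges: the highest bidder on
    the inferred intention wins but pays the runner-up's price."""
    relevant = sorted(
        (b for b in bids if b.target_intent == inferred_intent),
        key=lambda b: b.amount_eur,
        reverse=True,
    )
    if not relevant:
        return None
    winner = relevant[0]
    price = relevant[1].amount_eur if len(relevant) > 1 else winner.amount_eur
    return winner, price

# Hypothetical example: the assistant has inferred that an overworked
# user intends to plan an evening out.
bids = [
    Bid("cinema_chain", "plan_evening_out", 0.40),
    Bid("streaming_service", "plan_evening_out", 0.35),
    Bid("airline", "book_flight", 0.90),
]
winner, price = run_auction("plan_evening_out", bids)
print(f"{winner.advertiser} wins at {price:.2f} EUR; the chatbot's reply "
      "is then steered towards its offer.")
```

The second-price rule mirrors how today’s real-time ad auctions price attention; the researchers’ point is that the same plumbing could one day price intentions instead.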
Since the advent of ChatGPT and the ensuing craze for generative AI, these tools have indeed had access to “vast quantities of intimate psychological and behavioral data, collected via informal and conversational dialogue,” the researchers note. “What people say in conversations, how they say it, and what kind of inferences can be made in real time are far more intimate than simple records of online interactions,” they add.
And for several months now, AI tools have already been seeking to “elicit, infer, collect, record, understand, predict, and ultimately manipulate and market the plans and goals of human beings.” The two authors thus write that, behind “the considerable investments” and the “sensational speeches on the future of LLMs,” the “central ambition” of the AI giants is to use generative AI to “infer human preferences, intentions, motivations and other psychological and cognitive attributes.”
As evidence, they point to a blog post dated November 9, 2023, in which OpenAI, the company behind ChatGPT, declared itself “interested in large-scale datasets that reflect human society (…). We’re specifically looking for data that expresses human intent (e.g., long-form writing or conversations rather than disconnected snippets), in any language, on any topic, and in any format.”
A week later, at the OpenAI developer conference, Miqdad Jaffer, then director of product at Shopify (now at OpenAI), made similar remarks about capturing user intent. Nvidia’s CEO, Jensen Huang, has also explained that LLMs could be used to understand intention and desire.
A possible “social manipulation on an industrial scale”
In concrete terms, conversational agents could record, over long periods, “behavioral and psychological data that signal intention” and that lead up to decisions. “While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” the two researchers write. This data would then be classified and correlated with online history, age, gender, vocabulary, political orientation, and even with how easily a user can be persuaded, and therefore manipulated.
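As a rough illustration of the kind of classification the researchers describe, here is a short hypothetical sketch: persistent intention signals are filtered out from fleeting ones and joined to the profile attributes (age, political orientation, persuadability) they would be correlated with. All the names and fields below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class IntentSignal:
    user_id: str
    intent: str         # e.g. "buy_running_shoes"
    persistence: float  # 0.0 = fleeting, 1.0 = stable across sessions

@dataclass
class UserProfile:
    user_id: str
    age: int
    political_leaning: str
    persuasion_style: str   # e.g. "responds to scarcity framing"

def sellable_intents(signals, profiles, threshold=0.7):
    """Keep only persistent intentions (the profitable ones, per the
    researchers) and attach the profile data each would be correlated with."""
    return [
        (s, profiles[s.user_id])
        for s in signals
        if s.persistence >= threshold and s.user_id in profiles
    ]
```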
This is enough to worry the researchers, who believe it is high time to reflect on the likely impact of such a market in intentions on our democratic norms, “including free and fair elections, a free press and fair competition in the marketplace.” In their view, we are indeed on the cusp of possible “social manipulation on an industrial scale.”
