ChatGPT still has a bright future ahead of it. For several months, the startup Builder.ai had positioned itself as a symbol of a new generation of artificial intelligence: more reliable and, above all, more natural in its responses. Its virtual assistant, called Natasha, was presented as a feat of artificial intelligence, to the point of earning the company Microsoft's backing and a valuation of nearly $1.5 billion.
Reality, however, soon caught up with the promise. Behind the technological claims were, in fact, 700 Indian engineers posing as a chatbot to answer users' queries. A fiasco that is no longer an exception in the market.
Natasha, the all-too-human AI
Builder.ai's promise was enticing: letting anyone create an application as easily as ordering a pizza, thanks to a virtual assistant that was supposed to automate software generation. The proposition logically attracted many investors, including Microsoft, ever quick to bet on the next technological prodigy.
Except that, as early as 2019, doubts had emerged about the reality of Builder.ai's technical prowess. An article in the Wall Street Journal already pointed out the weakness of its AI technology, largely compensated for by human intervention. The deception finally came to light in May 2025, when an audit revealed that the company had made a habit of inflating its revenues: instead of the announced $220 million, Builder.ai had reportedly generated only $50 million.
An investigation by the Times of India ultimately revealed that almost all of the work attributed to Natasha was actually carried out by 700 Indian engineers, who responded to customer requests manually while maintaining the illusion of extensive automation. Testimony from former employees confirmed that the system relied on precise instructions to mimic the immediacy and relevance of a chatbot: artificial delivery times, standardized responses, and a ban on technical jargon that might betray the human nature of the service. Everything was designed to sustain the illusion of a perfect chatbot.
An investigation opened
The discovery of this fraud precipitated the fall of Builder.ai. Unable to meet its financial obligations – including $85 million owed to Amazon and $30 million to Microsoft for cloud services – the startup laid off nearly 1,000 employees and filed for insolvency in the United Kingdom, India, and the United States. US authorities opened a federal investigation, demanding access to the company's accounts and customer lists.
While it has made headlines across the AI industry, the Builder.ai case is part of a broader trend of AI washing: overstating or falsifying artificial intelligence capabilities to capture the attention of investors and customers. This practice, which ranges from marketing hype to outright deception, affects many sectors, from finance to consumer goods, and is beginning to attract regulators' attention. AI has become a selling point, and it is no longer uncommon to see it wielded liberally in marketing pitches.