
Criminals have already widely adopted AI to scam you, this study proves it

A new report shows how artificial intelligence tools are being misused. Malicious uses are on the rise, ranging from malware creation to political manipulation and fake job offers, and even inexperienced people can now carry out complex attacks.

Artificial intelligence is now part of everyday life. It can write texts, create images, or generate code. But while these abilities are useful in many areas, they are also misused by malicious individuals. A growing number of cases show how AI is becoming a tool in the hands of cybercriminals, allowing them to work faster or, as has been reported with Gemini, to carry out cyberattacks.

This is what a recent study conducted by Anthropic, the company behind the chatbot Claude, reveals. Its report, published in April 2025, details several cases of illegal use of this AI model. Low-skilled people were able to create advanced malware. Others generated fake political content to manipulate social media, or produced fake job offers at large scale.

Claude was used to create viruses, manipulate voters, and simulate recruiters

According to the report, one malicious actor used Claude to analyze credentials from hacked security cameras. Another transformed a simple open-source kit into malware capable of facial recognition. A third case shows how the model was used to generate content for paid political influence operations, via hundreds of automated social media accounts. These bots interacted with tens of thousands of real accounts, in multiple languages, to push certain messages.

Anthropic also identified online recruitment fraud. Scammers posed as companies and used Claude to write professional-sounding messages, relying on the AI to polish their style so the texts appeared credible in English. The company has since blocked the accounts responsible, but warns that these abuses will continue to grow. In the absence of a clear legal framework, the only defenses remain internal analyses and voluntary reporting. The risk no longer comes only from experts, but from anyone with access to a powerful chatbot.
