ChatGPT's capabilities have not gone unnoticed by bad actors. Hackers and rogue states are using the chatbot to orchestrate influence operations, disinformation campaigns, and even cyberattacks, with Russia, China, Iran, and North Korea proving particularly fond of OpenAI's bot.
Malicious Operations Using ChatGPT
OpenAI has released a new report on malicious uses of ChatGPT. Since the beginning of the year, the company has identified and dismantled 10 campaigns, including the "Sneer Review" operation, likely organized by China, aimed at generating pro-Chinese political commentary on social media.
China is also believed to be behind the "Uncle Spam" operation, which aimed to exacerbate political tensions in the United States by spreading contradictory messages designed to sow confusion. Also worth noting is the "Helgoland Bite" operation, in which Russia attempted to sway German public opinion in favor of the far-right AfD party and to criticize NATO and the United States on X (formerly Twitter) and Telegram.
ChatGPT can also be used to run good old-fashioned scams, such as one operating out of Cambodia. The bot produced messages in several languages offering money for fake odd jobs, such as liking posts, as part of a pyramid scheme in which victims were asked to pay money upfront.
After identifying these influence campaigns and scams, OpenAI systematically bans the accounts linked to them. The company is also using these cases to strengthen its detection systems and refine its models in an effort to limit future abuse.
Source: OpenAI