Less content removed on Facebook and Instagram, fewer moderation errors, and no increase in "problematic" or harmful posts: this is how Meta, the parent company of Instagram, Facebook, and WhatsApp, presents the very first assessment of its new rules for handling false, misleading, or illicit content shared on its platforms. In a report published Thursday, May 29, on its website, the digital giant argues that its new moderation policy, announced last January, is bearing fruit.
According to figures provided by Meta, its users were exposed to one or two pieces of hateful content for every 10,000 posts viewed during the first three months of 2025, compared to two or three at the end of last year. In "most problem areas," the proportion of posts that violate its usage guidelines (such as hateful content, harassment, etc.) "has remained largely unchanged," the American company claims.
Moderation errors halved, according to Meta
This favorable self-assessment is likely to surprise experts – as well as the Oversight Board of Facebook's parent company itself – who feared that hateful or illicit messages would multiply on Instagram and Facebook after Meta's change of course last January. Five months before this report, Mark Zuckerberg's group explained that it was entrusting the moderation of Facebook and Instagram to its users in order to "restore freedom of expression" on its platforms. Following the Cambridge Analytica scandal, Facebook had sought to demonstrate good faith by implementing a fact-checking program that relied in particular on news agencies. Deemed too politically oriented, this program was replaced in the United States by community notes, similar to the system on X, Elon Musk's platform.
Under this new system, users can add notes or corrections to content likely to contain false or misleading information. At the same time, Meta removed topics such as immigration, sexual identity, and gender from its moderation rules. These changes now allow Instagram and Facebook users to publish posts deemed "hateful" by human rights defenders – particularly posts targeting immigrants or people who identify as transgender. Meta now authorizes "claims of mental illness or abnormality when based on gender or sexual orientation," notes the American outlet Wired.
To justify these changes, the group's founder, Mark Zuckerberg, argued that the company's fact-checking system produced "too many errors and too much censorship." According to Meta's report, those errors have since been cut in half, although no explanation is given of how this figure was calculated. The American giant nevertheless promises that its future reports "will include metrics on our mistakes so people can track our progress."
What can we conclude from this report issued by Meta, with figures provided by... Meta?
In detail, the report shows that fewer posts were deleted in the first three months of 2025 than in previous periods. Despite this overall decrease, Meta notes two categories in which problematic content became more frequent. Cases of online bullying and harassment rose slightly in the first quarter of 2025 compared with the last three months of 2024, from 6 to 7 views per 10,000 posts viewed.
Similarly, violent content, described as content "glorifying violence or celebrating the suffering or humiliation of others" on Facebook and Instagram, rose from 6 to 7 views per 10,000 posts at the end of 2024 to 9 views per 10,000 in early 2025. While this report offers a first overview of the situation since the change in Meta's moderation system in January 2025, it remains a document issued by Meta, with figures provided exclusively by the American giant and not independently verified by third-party organizations.
Researchers at Northeastern University (Boston, United States) have also shown that moderation on social networks often comes too late, with contentious posts already seen by a significant number of users before being removed. A study published on April 28, 2025, in the Journal of Online Trust and Safety examined moderation on Facebook in 2023, a year when moderation was stricter than under the current system. According to the authors, posts removed for violating the social network's rules had already reached at least three-quarters of the audience they would ultimately attract by the time they were taken down.
