Disinformation on X: a tool supposed to stop it becomes its best ally

Since Elon Musk took over X (formerly Twitter), one of the platform's main focuses has been the fight against misinformation, a mission entrusted to the community through Community Notes. The initiative, intended to turn every user into a potential moderator, rests on a sound idea: decentralize fact-checking to make it more representative and less biased. But a recent analysis by Bloomberg Opinion serves as a stark reminder of the system's limitations.

Of the 1.1 million notes submitted, fewer than 10% are actually displayed to users. This staggering figure is explained by the way the tool works: for a note to become visible, it must be rated helpful by contributors with divergent political opinions. Conceived as a safeguard against ideological echo chambers, this requirement backfires: on the most sensitive subjects (politics, international conflicts), cross-partisan consensus is rare, and notes vanish into limbo before they can shed any light on the debate.
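To see why so few notes clear the bar, here is a minimal Python sketch of a cross-viewpoint visibility rule. It is an illustrative simplification, not X's actual algorithm: the open-source Community Notes scorer relies on matrix factorization over the full rating history, whereas the viewpoint labels, thresholds, and names used below (Rating, note_is_visible, min_per_side) are assumptions chosen for readability.

```python
# Minimal sketch of a "bridging" visibility rule, loosely inspired by how
# Community Notes requires agreement across viewpoints. The production system
# scores notes with matrix factorization; the labels and thresholds here are
# illustrative assumptions, not X's API.
from dataclasses import dataclass


@dataclass
class Rating:
    rater_id: str
    viewpoint: str   # e.g. "left" or "right", inferred from past rating behavior
    helpful: bool


def note_is_visible(ratings: list[Rating],
                    min_per_side: int = 3,
                    min_helpful_share: float = 0.7) -> bool:
    """Show a note only if raters from *both* viewpoint clusters find it helpful."""
    by_side: dict[str, list[Rating]] = {}
    for r in ratings:
        by_side.setdefault(r.viewpoint, []).append(r)

    if len(by_side) < 2:
        return False  # no cross-viewpoint signal at all -> the note stays hidden

    for side_ratings in by_side.values():
        if len(side_ratings) < min_per_side:
            return False  # not enough raters from this side
        helpful_share = sum(r.helpful for r in side_ratings) / len(side_ratings)
        if helpful_share < min_helpful_share:
            return False  # this side does not agree the note is helpful

    return True


# On a contested topic, one side rarely crosses the agreement threshold,
# which is exactly the bottleneck described above:
ratings = (
    [Rating(f"a{i}", "left", True) for i in range(5)]
    + [Rating(f"b{i}", "right", i < 2) for i in range(5)]  # only 2 of 5 agree
)
print(note_is_visible(ratings))  # False: the note stays in limbo
```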

A tool gagged by its own architecture

This is where the problem lies. Far from calming the conversation, the approval mechanism acts as an opaque filter that prevents factually sound content from ever seeing the light of day. In the case of the war in Ukraine, for example, more than 40% of the notes that were initially validated were subsequently removed, particularly those concerning statements relayed by Russian officials or media. These deletions, while understandable in the context of information warfare, raise a fundamental question: how legitimate is a system in which facts validated by peers can disappear without a clear explanation?

Added to this is a problem of contributor qualification. The bar for joining the Community Notes program is deliberately low, in the name of democratization. But this openness weakens the whole system: without robust safeguards, the reliability of contributions fluctuates dangerously, and the verification mechanisms in place are still far from compensating for that variability.

A political divide that sabotages trust

The unease goes beyond X. A study by the Pew Research Center highlights a growing distrust of institutional fact-checkers, particularly pronounced among Republican voters. Nearly 70% of them believe that these organizations are politically biased, compared to 30% of Democrats. This asymmetry poisons the very perception of what constitutes established fact.

More broadly, it is the media narrative itself that is being challenged: two-thirds of Americans believe that the media take sides in their political or social coverage. In such a climate, no tool, however well-designed, can achieve consensus. And certainly not an automated or community moderation system.

Towards a facade of moderation?

The figures speak for themselves: between 80% and 90% of Community Notes are rejected, even when independent evaluations deem them relevant. The promise of more horizontal moderation is colliding with a technical and ideological reality: too little transparency, too much political friction, too much disillusionment.

As X and other platforms seek to cut the costs of human moderation, the temptation is great to delegate everything to the algorithm... or to the crowd. But the case of Community Notes shows that without a clear editorial line and without expert oversight, disinformation always finds a blind spot in which to thrive.

Disinformation doesn't die from being ignored. It takes root, patiently, in silence.
