Almost everything you say to Meta AI is out there on the internet.

This is a security flaw that raises serious questions about Meta's generative AI development. Almost everything you ask Meta AI on Instagram, Facebook, and WhatsApp can end up in clear text on a feed accessible to everyone on the internet. The cause is the recent rollout of the Meta AI app and its Discover feed, designed to "share and explore how others use AI." Under the guise of a social network, the American giant's AI platform is pushing many users to involuntarily publish their exchanges on a public feed, visible to anyone.

Your private conversations aren't private

By default, your conversations are not shared publicly. But after each interaction with the AI, the platform invites you to share your conversation on Discover, leading to the unencrypted publication of text, images, and even audio recordings, accessible to everyone on the internet. The seriousness of the problem lies in the nature of the content exposed: intimate questions about gender identity, requests for suggestive images, and shared medical, legal, or professional documents, sometimes associated with pseudonyms or profile pictures that allow the user to be identified. According to the BBC, at least one publicly shared request made it possible to precisely identify the user behind it, thanks to their profile picture.

A design flaw, not a bug

Meta displays a warning when sharing, but it is often ignored or misunderstood. The sharing mechanism, which closely mimics familiar social media flows, does not always clearly distinguish a private post from a public one. As a result, hundreds of private conversations have been exposed, while most users believe they are interacting in complete confidentiality with a personal assistant.

Meta states that conversations are private by default and that users must explicitly choose to make them public. However, the app's confusing design and lack of clear guidance lead to massive confusion, compounded by the fact that the chatbot's activity can be linked to an Instagram or Facebook account.

And in Europe?

In Europe, the situation is complex. While Meta was forced to delay the full rollout of Meta AI due to regulators' data protection requirements, the company still plans to use users' public content to train its AI unless users explicitly object. Private messages are not used for training unless shared with Meta AI via the dedicated feature. But the line between private and public remains blurred for many users, especially since the procedures for making queries fully private, or for objecting to the use of one's data, remain hard to find and activate.

What solutions are there to protect your privacy?

  • Systematically check your privacy settings and avoid sharing sensitive content via Meta AI.
  • Do not link your Instagram or Facebook account to the Meta AI app if the profile is public.
  • Use the settings options to delete queries or make them private.
  • Exercise your right to object to the use of your data for AI training, via the forms Meta offers in Europe.

Despite the warnings, the very design of Meta AI encourages the involuntary exposure of personal data, posing a major risk to user security. Until education and transparency are strengthened, caution is required: avoiding Meta AI for any sensitive or confidential requests seems the most reasonable and least risky option.
