In a move intended to ease privacy concerns, Meta Platforms announced that its new AI assistant has been trained solely on public Facebook and Instagram posts, deliberately excluding private messages and chats. Despite the company's claims of respecting user privacy, the announcement has been met with skepticism and a fair amount of criticism from the public. Here's what you need to know about Meta's latest foray into artificial intelligence and the surrounding controversies.
The AI Training Ground
Nick Clegg, Meta’s President of Global Affairs, revealed in a recent interview that the new Meta AI assistant does not use private data for its training. The assistant is built on Llama 2, a large language model, and a new model called Emu that generates images in response to text prompts.
“We’ve tried to exclude datasets that have a heavy preponderance of personal information,” Clegg said, adding that LinkedIn was deliberately not used as a data source because of privacy concerns.
User Responses: Less than Enthusiastic
The reaction from the online community has been less than favorable. Comments on Reddit ranged from calling the assistant a “propaganda machine” to expressing concerns about the reliability of its data output. Others were outright skeptical about Meta’s claim of using only public data, with one user remarking, “Public only eh, yeah right.”
Legal and Ethical Implications
The tech world has seen a recent uptick in lawsuits against companies such as Meta, OpenAI, and Google for training their AI models on information scraped from the internet without permission. On this front, Clegg acknowledged that he expects a “fair amount of litigation” over the use of copyrighted materials.
New Features and Future Developments
In addition to the AI assistant, Meta is developing several other AI-based features, such as generative AI stickers, image restyling features, and an AI sandbox for advertisers. While these features are still under development, they point to Meta’s significant investment in AI technologies.
The Public Trust Deficit
Despite these technological advancements, Meta's announcement hasn't done much to bridge the public trust deficit. While the company has placed restrictions on what its AI assistant can generate, such as a ban on creating photorealistic images of public figures, these measures haven't eased public concerns about privacy and data misuse.
Meta's announcement raises more questions than it answers, particularly around the ethical use of data for training AI models. While the company is bullish on the transformative potential of AI, it still has a long way to go in winning public trust, as the skeptical reception of the announcement makes clear. As AI becomes more deeply integrated into our digital lives, companies like Meta will need to strike a balance between innovation and ethical considerations.