Meta, the parent company of Facebook, recently announced the release of its new AI language model, LLaMA. Unlike Microsoft’s Bing or OpenAI’s ChatGPT, the model was not intended for public use; instead, it is an open research release available to AI researchers who request access. Meta believes that by “democratizing access” to AI, it can encourage further research into language models and help mitigate issues of bias, toxicity, and the generation of misinformation.

However, just a week after its release, LLaMA was leaked online via 4chan, sparking debate about the dangers of making such technology freely accessible. Some worry that the leak will enable malicious uses, such as more personalized spam and phishing attempts. Others argue that open access is necessary to develop safeguards for AI systems, and that similarly capable language models have already been made public without causing significant harm.

LLaMA is not intended for the average internet user: it is a “raw” AI system that requires considerable technical expertise to get up and running, as the sketch below illustrates. Furthermore, the model has not been fine-tuned for conversation like ChatGPT or Bing. Fine-tuning adapts a general-purpose model to a more specific task, such as following conversational instructions, and is a necessary step in creating a user-friendly product. LLaMA is still an extremely powerful tool, however, released in four sizes with differing computational demands. The 13 billion-parameter version outperforms OpenAI’s 175 billion-parameter GPT-3 on numerous language-model benchmarks.
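To give a sense of what “raw” means in practice, here is a minimal sketch of how a researcher might load such a model for plain text completion with the Hugging Face `transformers` library. This assumes the weights have already been converted to the `transformers` checkpoint format; the local path is hypothetical.

```python
# Minimal sketch: running a raw causal language model with Hugging Face
# transformers. The path below is hypothetical, and the weights are
# assumed to already be in the transformers checkpoint format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/llama-13b"  # hypothetical path to converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit in GPU memory
    device_map="auto",          # spread layers across available hardware
)

# A raw model only continues text; it is not tuned to follow instructions.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the raw model simply continues the prompt rather than answering it the way a chat assistant would; closing that gap is exactly what conversational fine-tuning does.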

The LLaMA leak raises questions about an ongoing ideological struggle in the wider world of AI: the battle between “closed” and “open” systems. Advocates of openness argue for greater access to AI research and models, while proponents of closed systems believe this technology needs to be doled out more cautiously. For those who want more openness, the LLaMA leak is a blessing. However, the leak could also erode trust between companies and academics, making organizations more reluctant to share their models in the future.

Overall, the LLaMA leak presents a complicated issue that requires careful consideration. While the leak creates real potential for misuse, open access to AI language models is necessary to develop safeguards and improve AI systems. The future of AI research depends on striking a balance between open and closed approaches, and on ensuring that language models are developed and released responsibly.