Today, the world’s top AI companies — Anthropic, Google, Microsoft, and OpenAI — announced the formation of the Frontier Model Forum, a new industry body that will work on the safe and responsible development of frontier AI models by advancing technical evaluations and benchmarks and developing a public library of solutions to support industry best practices and standards. In the coming months, the Frontier Model Forum will establish an Advisory Board, representing a diversity of backgrounds and perspectives, to help guide its strategy and priorities.
According to the Frontier Model Forum, the following are its objectives:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The Frontier Model Forum is also open to other organizations. To become a member, an organization must meet the following conditions:
- Develop and deploy frontier models (large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks).
- Demonstrate strong commitment to frontier model safety, including through technical and institutional approaches.
- Be willing to contribute to advancing the Frontier Model Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative.
The Frontier Model Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.
You can read the comments from the founding members of the Frontier Model Forum below.
Kent Walker, President, Global Affairs, Google & Alphabet said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone.”
Brad Smith, Vice Chair & President, Microsoft said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Anna Makanju, Vice President of Global Affairs, OpenAI said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Dario Amodei, CEO, Anthropic said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”