Mark Zuckerberg, CEO of Meta, today provided an update on some of the next-generation cloud infrastructure Meta has developed to power new AI experiences.
First, Meta has designed a new data center optimized for AI. The new design supports liquid-cooled AI hardware for large-scale training and inference, along with a high-performance network built to connect large-scale superclusters.
Second, Meta’s Research SuperCluster (RSC) now has 16,000 GPUs linked by a high-speed interconnect, and Meta claims it is one of the fastest supercomputers in the world. Meta is using RSC to train its large language models (LLMs) as well as what it describes as the world’s first AI translation system for oral languages. Because Meta built the supercomputer in-house, it was able to tune the system for training its next-generation foundation models.
Third, Meta revealed its first-generation custom silicon chip, MTIA (Meta Training and Inference Accelerator). MTIA is designed to power the AI recommendation systems already used by Instagram and Facebook, so they can show users the most relevant content even faster.
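The announcement does not detail how these recommendation systems work, but embedding-based recommenders commonly score candidate items against a user representation and rank by score; that dot-product scoring step is the kind of dense compute an accelerator like MTIA targets. A minimal toy sketch (all names, dimensions, and data here are hypothetical, not Meta's actual system):

```python
import numpy as np

def rank_items(user_emb: np.ndarray, item_embs: np.ndarray) -> np.ndarray:
    """Return candidate item indices ordered from highest to lowest score."""
    scores = item_embs @ user_emb   # one dot product per candidate item
    return np.argsort(-scores)      # indices sorted by descending score

# Hypothetical example: a 16-dim user embedding and 100 candidate items.
rng = np.random.default_rng(0)
user = rng.normal(size=16)
items = rng.normal(size=(100, 16))
top5 = rank_items(user, items)[:5]  # the five best-scoring candidates
print(top5)
```

In production systems this ranking runs over far larger candidate sets and richer models, which is why dedicated inference silicon pays off.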
Fourth, CodeCompose is Meta’s generative AI coding assistant, built to help Meta’s engineers as they write code. GitHub, owned by Microsoft, offers a similar product, GitHub Copilot, which is now available to any organization.
Meta also plans to build more AI tools to support its development teams across the entire software development process, and it may open source some of these tools as well.