Dubbed the AI Research SuperCluster (RSC), the system has taken several hundred people two years to build, including researchers from partners Nvidia, Penguin Computing and Pure Storage.
Meta, which announced the news in a blog post Monday, said its research team is currently using the supercomputer to train AI models in natural-language processing and computer vision for research.
The aim is to boost capabilities to one day train models with more than a trillion parameters on data sets as large as an exabyte, which is roughly equivalent to 36,000 years of high-quality video.
Meta CEO Mark Zuckerberg said: "The experiences we're building for the metaverse require enormous compute power and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more."
Meta's AI supercomputer houses 6,080 Nvidia graphics-processing units, putting it fifth among the fastest supercomputers in the world.
By mid-summer, when the AI Research SuperCluster is fully built, it will house some 16,000 GPUs, becoming the fastest AI supercomputer in the world, Meta said. The company declined to comment on the location of the facility or the cost.
Eventually, the supercomputer will help Meta's researchers build AI models that can work across hundreds of languages, analyse text, images and video together and develop augmented reality tools, the company said.
The technology already helps Meta spot harmful content, and it is intended to help the company's researchers develop artificial-intelligence models that think like the human brain and support rich, multidimensional experiences in the metaverse.
Meta vice president of AI Jerome Pesenti said: "In the metaverse, it's one hundred per cent of the time, a 3-D multi-sensorial experience, and you need to create artificial-intelligence agents in that environment that are relevant to you."