Meta has a giant new AI supercomputer to shape the metaverse

On Monday, Facebook’s parent company Meta revealed that it has built one of the world’s fastest supercomputers, a behemoth known as the Research SuperCluster, or RSC. With 6,080 graphics processing units packaged into 760 Nvidia DGX A100 systems, it ranks among the fastest machines built for artificial intelligence tasks, according to Mark Zuckerberg, who founded Facebook and now leads Meta.

Meta’s processing power is comparable to that of the Perlmutter supercomputer, which has more than 6,000 GPUs and ranked as the world’s fifth-fastest supercomputer as of last year. By expanding RSC to 16,000 GPUs this year, Meta expects a performance boost of about 2.5 times, roughly in line with the added hardware (16,000 ÷ 6,080 ≈ 2.6).

Meta, one of the top AI research organizations, hopes its investment in RSC will pay off by helping to build out the company’s latest priority: a virtual world it calls the metaverse. The machine could also help Meta develop a language-translation program powerful enough to simultaneously translate speech for a large group of people who each speak a different language.

“The experiences we’re building for the metaverse require enormous compute power,” Meta CEO Mark Zuckerberg said in a statement. “RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages and more.”

Artificial intelligence, or AI, is powered by machine-learning algorithms that can learn to handle different tasks. These algorithms are revolutionary because they learn about the world by being exposed to real-world data rather than by being explicitly programmed. For example, AI can learn what a cat looks like by analyzing thousands of photos of cats, rather than having someone directly describe its fur and whiskers, eyes and ears.
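To make that concrete, here is a minimal sketch of that kind of supervised training, written in PyTorch as an illustrative choice (the data, model and sizes below are hypothetical stand-ins, not Meta’s code):

```python
# A toy supervised learner: it figures out "cat vs. not-cat" from
# labeled examples instead of hand-written rules about fur and whiskers.
import torch
import torch.nn as nn

# Hypothetical stand-in data: 64 tiny 32x32 RGB "photos" with 0/1 labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # two classes: cat / not-cat
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to labels
    loss.backward()                        # learn from the labeled examples
    optimizer.step()
```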

Accumulating vast amounts of labeled data and applying machine learning to it is one way artificial intelligence can master human tasks, but these efforts are costly and time-consuming.

RSC also provides a different path, one aimed at a particularly thorny AI problem: self-supervised learning. AI models today are trained on carefully annotated data. For example, stop signs are labeled in the photos used to train autonomous-vehicle AI, and a transcript accompanies the audio used to train speech-recognition AI.

The more difficult task of self-supervised training uses raw, unlabeled data instead. So far, that’s an area in which humans still have an edge over computers.
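A minimal sketch of the self-supervised idea, again in PyTorch with hypothetical stand-in data: no labels exist, so the model hides part of each input and learns to reconstruct it, letting the raw data supply its own training signal.

```python
# Self-supervised training on unlabeled data: mask part of each input
# and train the model to predict the hidden part from the rest.
import torch
import torch.nn as nn

raw = torch.randn(64, 32)           # unlabeled examples; no annotations
mask = torch.rand(64, 32) < 0.25    # hide roughly a quarter of each input

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    corrupted = raw.masked_fill(mask, 0.0)    # input with pieces hidden
    recon = model(corrupted)
    loss = ((recon - raw)[mask] ** 2).mean()  # error only on hidden pieces
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```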

Meta and other AI proponents have shown that training AI models with ever-larger data sets produces better results. Running those models is far cheaper than training them, which is why an iPhone can unlock when it recognizes your face without connecting to a data center packed with servers.
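The split shows up clearly in code. In this hedged PyTorch sketch (the model and input are hypothetical stand-ins for a trained face-recognition system), inference is just one forward pass with no gradients, cheap enough to run on a phone:

```python
# Inference with an already-trained model: no gradients, no optimizer,
# just a single cheap forward pass.
import torch
import torch.nn as nn

model = nn.Linear(128, 2)   # stand-in for a trained face-recognition model
model.eval()                # inference mode, not training mode

face_embedding = torch.randn(1, 128)  # hypothetical on-device input
with torch.no_grad():                 # skip gradient bookkeeping entirely
    score = model(face_embedding)     # decide: unlock or not
```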

Supercomputer designers customize their machines by balancing memory use, GPU performance, CPU performance, power consumption and internal data pathways. Today’s artificial intelligence applications rely on the GPU, which was originally developed for accelerating graphics but is now used in many other computing tasks.
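The GPU’s appeal is easy to see in practice. In this sketch (PyTorch again, assuming a CUDA-capable GPU is present), the same matrix multiply moves from the CPU to the GPU with one line, where thousands of parallel cores churn through it:

```python
# The same matrix multiply on CPU and, if available, on a GPU, which
# executes it across thousands of parallel cores.
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b  # runs on the CPU

if torch.cuda.is_available():
    c_gpu = (a.cuda() @ b.cuda()).cpu()  # offload to the GPU, copy back
```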
