Since the beginning of this year, there has been a lot of hype, skepticism, cynicism, and confusion surrounding the concept of the metaverse.
For some, it has added to the confusion of an already elusive world of augmented reality and mixed reality. But for the well-initiated, the metaverse is a landmark moment in the extended reality world; a world approaching the ‘second life’ that many have long predicted.
News that some of the world’s top tech firms are rapidly developing AI supercomputers has further fueled that anticipation.
But what will the entry of supercomputers mean for the metaverse and virtual reality — and how can we manage it responsibly?
Simply put, a supercomputer is a computer with a very high level of performance. That performance, which far outclasses any consumer laptop or desktop PC available off the shelf, can, among other things, be used to process vast quantities of data and draw key insights from it. These machines are massively parallel arrangements of computers, or processing units, which can perform the most complex computing operations.
Whenever you hear about supercomputers, you’re likely to hear the term FLOPS — “floating point operations per second.” FLOPS is a key measure of performance for these top-end processors.
Floating-point numbers, in essence, are those with decimal points, including very long ones. These decimal numbers are key when processing large quantities of data or carrying out complex operations on a computer, and this is where FLOPS comes in as a measurement. It tells us how a computer will perform when handling these complicated calculations.
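To make the measurement concrete, here is a minimal sketch (the function name and operation count are illustrative assumptions, not from the article) that times a loop of floating-point multiply-add operations and divides operations by elapsed seconds, which is all FLOPS is. Interpreted Python runs many orders of magnitude below a supercomputer's peak, but the arithmetic of the metric is identical.

```python
import time

def estimate_flops(n_ops: int = 1_000_000) -> float:
    """Crudely estimate floating-point throughput by timing a loop of
    multiply-add operations. Illustrative only: pure Python is far slower
    than the hardware's true peak, but the FLOPS calculation is the same."""
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        acc = acc * x + 0.5  # one multiply plus one add = 2 floating-point ops
    elapsed = time.perf_counter() - start
    return (2 * n_ops) / elapsed  # operations performed / seconds taken

print(f"~{estimate_flops() / 1e6:.0f} MFLOPS in interpreted Python")
```

A machine like Fugaku is rated in petaflops (10^15 FLOPS), roughly a billion times beyond what this loop will report.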
The supercomputer market is expected to grow at a compound annual growth rate of about 9.5% from 2021 to 2026. Increasing adoption of cloud computing and cloud technologies will fuel this growth, as will the need for systems that can ingest larger datasets to train and operate AI.
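The compound annual growth rate quoted above is straightforward to project forward. The sketch below uses a hypothetical base market size of 100 index units (the article gives no absolute figure) and applies the roughly 9.5% rate over the five years from 2021 to 2026.

```python
def project_market(size_now: float, cagr: float, years: int) -> float:
    """Project a market size forward at a compound annual growth rate."""
    return size_now * (1 + cagr) ** years

# Hypothetical base of 100 index units growing at ~9.5% from 2021 to 2026:
print(round(project_market(100, 0.095, 5), 1))  # ≈ 157.4, i.e. ~57% total growth
```

In other words, a 9.5% CAGR implies the market grows by more than half over the forecast window, not merely by 9.5%.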
The industry has been booming in recent years, with landmark achievements helping to build public interest, and companies all over the world are now striving to outpace one another with their own supercomputer projects.
In 2008, IBM’s Roadrunner was the first to break the one-petaflop barrier, meaning it could process one quadrillion floating-point operations per second. According to one study, the Fugaku supercomputer, based at the RIKEN Center for Computational Science in Kobe, Japan, is the world’s fastest machine, capable of 442 petaflops, or 442 quadrillion operations per second.
In late January, Meta announced on social media that it would be developing an AI supercomputer. If Meta’s prediction holds, it will one day be the world’s fastest supercomputer.
Its sole purpose? Running the next generation of AI algorithms.
The first phase of its construction is already complete, and the second phase is expected to be finished by the end of 2022. At that point, Meta’s supercomputer will contain some 16,000 GPUs, and the company has promised that it will be able to train AI systems with more than a trillion parameters on datasets as large as an exabyte, or one thousand petabytes.
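The unit scales involved here are easy to lose track of, so the short sketch below spells them out. It assumes decimal (SI) prefixes, as the article does, and assumes 32-bit (4-byte) parameters purely for illustration; Meta has not specified a storage format.

```python
# Decimal (SI) unit scales used in the article
PETA = 10**15
EXA = 10**18
TERA = 10**12

# One exabyte expressed in petabytes
print(EXA // PETA)  # 1000, i.e. one thousand petabytes

# Fugaku's reported speed as raw operations per second
print(442 * PETA)   # 442 quadrillion floating-point ops/sec

# A trillion-parameter model stored as 32-bit floats (4 bytes each, assumed)
params = 10**12
print(params * 4 // TERA)  # 4 terabytes just to hold the weights
```

So even a trillion-parameter model occupies only a few terabytes; the exabyte figure refers to the training data, which is some 250,000 times larger than the model itself under these assumptions.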
While these numbers are impressive, what does this mean for the future of AI?
Meta has promised a host of revolutionary uses for its supercomputer, from ultrafast gaming to instant, seamless translation of mind-bendingly large quantities of text, images and videos at once. Imagine a group of people speaking different languages and communicating with one another seamlessly in real time. It could also be used to scan huge quantities of images or videos for harmful content, or to identify one face within a huge crowd of people.
The computer will also be key to developing next-generation AI models; it will power the metaverse and serve as a foundation on which future metaverse technologies can rely.
But the implications of all this power mean that there are serious ethical considerations for the use of Meta’s supercomputer, and for supercomputers more generally.
The World Economic Forum’s Centre for the Fourth Industrial Revolution, in partnership with the UK government, has developed guidelines for more ethical and efficient government procurement of artificial intelligence (AI) technology. Governments across Europe, Latin America and the Middle East are piloting these guidelines to improve their AI procurement processes.
Our guidelines not only serve as a handy reference tool for governments looking to adopt AI technology, but also set baseline standards for effective, responsible public procurement and deployment of AI – standards that can be eventually adopted by industries.
We invite organizations that are interested in the future of AI and machine learning to get involved in this initiative. Read more about our impact.
New technologies have always demanded societal conversations about how they should be used — and how they should not. Supercomputers are no different in this regard.
While AI has been brilliant at solving some of the world's large and complex problems, flaws still remain. These flaws are often not caused by the AI algorithms themselves; instead, they are a direct result of the data that is fed into AI systems.
If the data fed into systems has a bias, then the result of an AI calculation is bound to carry that bias — and, if the metaverse and virtual reality do become a ‘second life,’ then are we bound to carry with us the flaws, prejudices and biases of the first life?
The age of AI also brings with it key questions about human privacy and the privacy of our thoughts.
To address these concerns, we must seriously examine our interaction with AI. When we look at the ethical structures of AI, we must ensure its usage is transparent, explainable, bias-free, and accountable.
We must be able to explain why a certain calculation or process was initiated in the first place, what exactly happened when the AI ran it, make sure there was no initial human bias against any group or idea, and be clear about who should be held accountable for the results of a calculation.
It remains to be seen whether these supercomputers and the companies producing them will ensure that these four key areas are consistently and transparently addressed. But it will become all the more pressing as they continue to wield more power and influence over our lives — both online and in the real world.
The surge in supercomputing will accelerate parallel computing and enable use cases that operate at the speed of thought. We see a future where a combination of supercomputers and intelligent software runs on a hybrid cloud, feeding parts of a computational workflow to a quantum computer, a form of computing that experts believe has the capacity to exceed even the fastest supercomputers.
What remains to be seen is how this era will fuel the next generation of metaverse experiences.
Arunima Sarkar, Lead, Artificial Intelligence and Machine Learning, World Economic Forum
Nikhil Malhotra, Chief Innovation Officer, Tech Mahindra
The views expressed in this article are those of the authors alone and not the World Economic Forum.
© 2022 World Economic Forum