In response to the recently approved Online News Act in Canada, Meta, formerly known as Facebook, has announced its plans to discontinue access to news on Facebook and Instagram for Canadian users. The legislation, known as Bill C-18, requires internet giants to negotiate compensation agreements with news publishers for the use of their content, including posting or linking to it.
In a blog post, Meta stated, “We are confirming that news availability will be ended on Facebook and Instagram for all users in Canada prior to the Online News Act (Bill C-18) taking effect.” The company has been vocal about its stance on the legislation since it was first proposed in 2021. Last year, Meta threatened to block the sharing of Canadian news content unless the government made amendments to the legislation. Earlier this month, Meta began blocking news on its platforms for some Canadian users. With the bill approved by the Senate and awaiting royal assent, which is considered a formality, Meta is prepared to follow through on its previous warnings.
Canadian Heritage Minister Pablo Rodriguez expressed his disagreement with Meta’s decision in a tweet, emphasizing that Meta is currently not obligated to comply with the act. Rodriguez stated, “Facebook knows very well that they have no obligations under the act right now,” and questioned who would stand up for Canadians against tech giants if the government does not.
It is noteworthy that Meta is not the only internet giant unhappy with the legislation. Earlier this year, Google conducted tests that restricted access to news content for some Canadian users. The company said it had been trying to find a mutually agreeable solution and wanted to avoid an unfavorable outcome. Google has proposed various solutions throughout the process, seeking to address concerns and facilitate increased investment in the Canadian news ecosystem. However, the company says none of its concerns have been addressed and considers Bill C-18 unworkable, while remaining committed to working urgently with the government to find a way forward.
Canada’s legislation bears similarities to a law passed in Australia in 2021. Meta had temporarily removed news content from the platform in Australia following the law’s passage but eventually restored it after the Australian government amended the legislation to allow for extended negotiation time between the platform and publishers.
Meta Platforms’ AI research division has unveiled Voicebox, a groundbreaking machine learning model capable of generating speech from text. Unlike traditional text-to-speech models, Voicebox is remarkably versatile, tackling tasks such as editing, noise removal, and style transfer without having been specifically trained for them.
The model’s training process employed a unique methodology devised by Meta researchers. Although Meta has refrained from releasing Voicebox over ethical concerns about potential misuse, the initial findings are highly encouraging and hold tremendous potential for a wide range of future applications.
Voicebox is a generative model that can synthesize speech in six languages: English, French, Spanish, German, Polish, and Portuguese. Like large language models, it has been trained on a very general task that can be used for many applications. But while LLMs try to learn the statistical regularities of words and text sequences, Voicebox has been trained to learn the patterns that map voice audio samples to their transcripts.
Such a model can then be applied to many downstream tasks with little or no fine-tuning. “The goal is to build a single model that can perform many text-guided speech generation tasks through in-context learning,” Meta’s researchers write in their paper (PDF) describing the technical details of Voicebox.
The model was trained with Meta’s “Flow Matching” technique, which is more efficient and generalizable than the diffusion-based learning methods used in other generative models. The technique enables Voicebox to “learn from varied speech data without those variations having to be carefully labeled.” Without the need for manual labeling, the researchers were able to train Voicebox on 50,000 hours of speech and transcripts from audiobooks.
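Flow matching, which Meta researchers published as a standalone technique, trains a network to regress the velocity of a simple path from noise to data, rather than learning a step-by-step denoising process. For intuition only, here is a minimal PyTorch sketch of the conditional flow-matching objective on random stand-in data; the tiny MLP and feature sizes are assumptions for demonstration, not Voicebox's actual Transformer architecture or training pipeline.

```python
import torch
import torch.nn as nn

# Toy stand-in for Voicebox's generator: a small MLP that learns the
# velocity field v(x_t, t). The real model is a large Transformer over
# speech features conditioned on text.
class VelocityNet(nn.Module):
    def __init__(self, dim=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t, t):
        # Condition on time by concatenating t to each feature vector.
        return self.net(torch.cat([x_t, t], dim=-1))

model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1000):
    x1 = torch.randn(64, 80)       # stand-in for real speech feature frames
    x0 = torch.randn_like(x1)      # noise samples
    t = torch.rand(64, 1)          # random time in [0, 1] per sample
    x_t = (1 - t) * x0 + t * x1    # point on the straight-line noise-to-data path
    target = x1 - x0               # constant velocity along that path
    loss = ((model(x_t, t) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At generation time, such a model starts from noise and integrates the learned velocity field from t = 0 to t = 1 with an ODE solver, which is part of why the approach can use fewer steps than typical diffusion samplers.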
The model uses “text-guided speech infilling” as its training goal, which means it must predict a segment of speech given its surrounding audio and the complete text transcript. Basically, it means that during training, the model is provided with an audio sample and its corresponding text. Parts of the audio are then masked and the model tries to generate the masked part using the surrounding audio and the transcript as context. By doing this over and over, the model learns to generate natural-sounding speech from text in a generalizable way.
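To make that objective concrete, the sketch below shows how one such training example could be constructed: a contiguous span of audio frames is hidden while the transcript stays fully visible, and the hidden frames become the prediction target. This is a hypothetical preprocessing helper for illustration; Voicebox's actual masking policy and feature pipeline have not been released in this form.

```python
import torch

def make_infilling_example(audio_feats: torch.Tensor, mask_frac: float = 0.3):
    """Mask a random contiguous span of audio frames for infilling training.

    audio_feats: (T, D) acoustic features aligned with a known transcript.
    Returns the masked features, the binary mask, and the hidden target span.
    """
    T = audio_feats.shape[0]
    span = max(1, int(T * mask_frac))
    start = torch.randint(0, T - span + 1, (1,)).item()
    mask = torch.zeros(T, dtype=torch.bool)
    mask[start:start + span] = True

    visible = audio_feats.clone()
    visible[mask] = 0.0            # hide the span from the model
    target = audio_feats[mask]     # frames the model must regenerate
    return visible, mask, target

# During training, the model would receive (visible, mask, transcript_tokens)
# and be optimized to predict `target`: text-guided speech infilling.
feats = torch.randn(500, 80)       # e.g. 500 frames of 80-dim features
visible, mask, target = make_infilling_example(feats)
```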
Replicating voices across languages, editing out mistakes in speech, and more
Unlike generative models that are trained for a specific application, Voicebox can perform many tasks that it has not been trained for. For example, the model can use a two-second voice sample to generate speech for new text. Meta says this capability can be used to bring speech to people who are unable to speak or customize the voices of non-playable game characters and virtual assistants.
Voicebox also performs style transfer in different ways. For example, you can provide the model with two pairs of audio and text. It will use the first audio sample as a style reference and modify the second to match the voice and tone of the reference. Interestingly, the model can do the same thing across different languages, which could be used to “help people communicate in a natural, authentic way — even if they don’t speak the same languages.”
The model can also do a variety of editing tasks. For example, if a dog barks in the background while you’re recording your voice, you can provide the audio and transcript to Voicebox and mask out the segment with the background noise. The model will use the transcript to generate the missing portion of the audio without the background noise.
The same technique can be used to edit speech. For example, if you have misspoken a word, you can mask that portion of the audio sample and pass it to Voicebox along with a transcript of the edited text. The model will generate the missing part with the new text in a way that matches the surrounding voice and tone.
One of the interesting applications of Voicebox is voice sampling. The model can generate various speech samples from a single text sequence. This capability can be used to generate synthetic data to train other speech processing models. “Our results show that speech recognition models trained on Voicebox-generated synthetic speech perform almost as well as models trained on real speech, with 1 percent error rate degradation as opposed to 45 to 70 percent degradation with synthetic speech from previous text-to-speech models,” Meta writes.
Voicebox has limits too. Since it has been trained on audiobook data, it does not transfer well to conversational speech that is casual and contains non-verbal sounds. It also doesn’t provide full control over different attributes of the generated speech, such as voice style, tone, emotion, and acoustic condition. The Meta research team is exploring techniques to overcome these limitations in the future.
Meta, under the guidance of its chief AI scientist Yann LeCun, has achieved a significant milestone in the development of deep learning systems capable of learning world models with minimal human intervention. The company has recently released the inaugural version of I-JEPA, a cutting-edge machine learning (ML) model that acquires abstract representations of the world through self-supervised learning on images.
Early evaluations have demonstrated I-JEPA’s exceptional performance across various computer vision tasks. Moreover, the model exhibits remarkable efficiency, demanding just a fraction of the computing resources required by other state-of-the-art models during training. In a testament to their commitment to fostering collaboration and advancement, Meta has made the training code and model open source, and they are set to showcase I-JEPA at the prestigious Conference on Computer Vision and Pattern Recognition (CVPR) next week.
The launch of I-JEPA marks a significant step toward the realization of LeCun’s long-standing vision. By leveraging self-supervised learning on vast amounts of unlabeled data, I-JEPA autonomously learns abstract representations of the world, gradually developing a deep understanding of its intricacies. This capability holds tremendous potential for advancing the field of computer vision and revolutionizing various domains that heavily rely on visual data analysis.
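Conceptually, the objective is to predict the representation of a hidden region of an image from the representation of its visible context, comparing predictions in latent space rather than pixel space. The sketch below illustrates that joint-embedding predictive idea with toy MLP encoders and a mean-pooled prediction; the real I-JEPA uses Vision Transformers, multi-block masking with positional information, and an exponential-moving-average target encoder, so treat this strictly as a conceptual outline rather than Meta's implementation.

```python
import copy
import torch
import torch.nn as nn

dim, n_patches = 64, 16
context_encoder = nn.Sequential(nn.Linear(dim, 128), nn.GELU(), nn.Linear(128, 128))
predictor = nn.Sequential(nn.Linear(128, 128), nn.GELU(), nn.Linear(128, 128))
target_encoder = copy.deepcopy(context_encoder)  # in I-JEPA: an EMA copy, no gradients
for p in target_encoder.parameters():
    p.requires_grad = False

patches = torch.randn(n_patches, dim)            # toy image as 16 patch vectors
target_idx = [10, 11, 12, 13]                    # hidden block to predict
context_idx = [i for i in range(n_patches) if i not in target_idx]

ctx = context_encoder(patches[context_idx])      # encode the visible context
pred = predictor(ctx.mean(0, keepdim=True))      # predict the hidden block's embedding
with torch.no_grad():
    tgt = target_encoder(patches[target_idx]).mean(0, keepdim=True)

loss = nn.functional.mse_loss(pred, tgt)         # compare representations, not pixels
loss.backward()
```

Because the loss lives in representation space, the model can ignore unpredictable pixel-level detail, which is one reason this style of training can be cheaper than generative reconstruction.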
Early tests have demonstrated I-JEPA’s prowess across a range of computer vision tasks, showcasing its ability to extract meaningful insights from complex images. Whether it’s object recognition, scene understanding, or image generation, the model consistently delivers impressive results, surpassing existing benchmarks. The breakthrough lies not only in its performance but also in its efficiency. I-JEPA significantly reduces the computational burden, requiring just a fraction of the resources consumed by contemporary models during training. This efficiency paves the way for accelerated research, wider adoption, and more accessible development of advanced computer vision systems.
Meta’s commitment to open collaboration and knowledge sharing is evident in their decision to open-source the training code and model for I-JEPA. By making these resources freely available to the research and development community, Meta encourages innovation and collaboration, fostering a collective effort to push the boundaries of computer vision. This move is expected to facilitate further advancements, as researchers and practitioners can build upon the foundation laid by I-JEPA, unlocking new possibilities and fueling breakthroughs in various real-world applications.
The upcoming presentation of I-JEPA at the renowned CVPR conference highlights the significance of this achievement within the computer vision community. It serves as a platform for Meta to showcase the potential of their self-supervised learning model, garner feedback from experts, and inspire further research and exploration. By sharing their findings and engaging with the community, Meta aims to stimulate dialogue, collaboration, and collective progress in the pursuit of more intelligent and capable computer vision systems.
In conclusion, Meta’s release of I-JEPA represents a significant advancement in the realm of deep learning and computer vision. The model’s ability to learn abstract representations of the world through self-supervised learning on images heralds a new era of autonomous knowledge acquisition. With exceptional performance across computer vision tasks and impressive computational efficiency, I-JEPA opens doors to enhanced visual understanding and analysis. By open-sourcing the training code and model, Meta invites collaboration and aims to accelerate advancements in the field. As I-JEPA takes the stage at CVPR, the excitement and anticipation within the computer vision community are palpable, underscoring the transformative potential of this groundbreaking achievement.
Meta has officially confirmed the rumors circulating about its plans to launch a new social media platform named Threads to compete with Twitter; the app could be released as early as the end of June.
During a companywide meeting, Meta’s chief product officer, Chris Cox, showcased several screenshots of the upcoming app, revealing its close integration with Instagram. Threads will utilize Instagram’s account system to populate users’ information, allowing them to sign up and log in using their Instagram credentials.
Internal documents obtained by The Verge suggest that the project was internally referred to as “Project 92.” The chosen name, Threads, indicates a focus on Twitter-style threads, enabling users to provide additional context through a series of connected posts.
Notably, Threads will also feature compatibility with other social media platforms such as Mastodon and Bluesky, further expanding its reach and potential user base.
With Meta’s entry into the competitive social media landscape, alongside the rising popularity of alternative platforms like Bluesky and Mastodon, the launch of Threads is poised to introduce a new player in the realm of online social networking.
The Elon Musk factor
Ever since Musk took over Twitter and made it a private company, many celebrities and prominent figures have left the app. The reasons for their exodus have ranged from reduced oversight to an increase in hate speech. Elton John left Twitter in December, saying that Twitter is allowing “misinformation to flourish unchecked.” Jim Carrey, who had 19 million followers on the app, deactivated his profile as well. Others who followed were Whoopi Goldberg, Shonda Rhimes, Gigi Hadid, and Jameela Jamil.
In a direct jibe at Twitter CEO Elon Musk, Cox said, “We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run, that they believe that they can trust and rely upon for distribution.” With a focus on “safety, ease of use, reliability,” Meta wants to make sure that users have a “stable place to build and grow their audiences,” he added.
Meta has been meeting with content creators and public figures ahead of launching the platform. Cox revealed that Meta is in talks with the likes of Oprah Winfrey, the Dalai Lama, and DJ Slime to convince them to use the app.
In a strategic move just days before Apple’s anticipated entry into the mixed reality headset market, Mark Zuckerberg took the stage to unveil Meta’s Quest 3 headset. As the dominant player in the virtual reality headset space, Meta faces the prospect of formidable competition from Apple this year.
While Zuckerberg has been vocal about Meta’s ambitions to shape the future of the internet through the metaverse, Apple has quietly been working on its own vision for the digital world.
Reports indicate that Apple has faced two delays in announcing its mixed reality headset due to the device not meeting the company’s stringent standards for design and functionality.
As tech enthusiasts and Apple fans eagerly await the upcoming Worldwide Developers Conference (WWDC) next week, Meta is strategically aiming to capture attention and maintain a competitive edge with the unveiling of its Quest 3 headset.
What to expect from Meta’s Quest 3?
With Apple looking set to jump into the AR/VR segment, Meta needs to up its game, and the Quest 3 is an obvious attempt to do just that. For starters, the device is 40 percent thinner than its predecessor, while graphics performance has been doubled.
Meta is moving away from being just a VR headset company by adding three cameras on the front, giving users a connection to the real world around them. Smartly, it will also leverage these cameras to let users play virtual games on a tabletop, increasing the ways the headset can be used.
The company has added a depth sensor to this headset and dropped the halos around the controllers to make them feel more natural. The device is priced at $499, and Meta is dropping the prices of its other headsets to stir up demand after seeing a dip in sales over the past year.
Meta is perhaps hopeful that the rumored price of $3,000 for Apple’s upcoming headset will serve as a deterrent for many buyers, who will pick its pocket-friendly offering instead.
However, Apple’s track record of entering markets with the right offering at the right time makes one wonder whether Meta is missing a trick by trying to keep its device light on the pocket. Reports suggest that Apple could pack 4K displays inside its headset, leaving nothing to chance when it comes to user experience.
WWDC will perhaps give the world its first glimpse of Apple’s vision for mixed-reality headsets, and it would not be a surprise if others fall severely short of what Apple achieves. Apple did it with the smartphone and the smartwatch, and mixed reality could be its next big offering.
Meta is reportedly in talks with a company called Magic Leap with an eye to a partnership that could see Meta developing its augmented reality (AR) headset in the future.
According to the Financial Times, the two are negotiating a multi-year intellectual property (IP) and manufacturing alliance. The report’s timing is significant for a few reasons.
Meta is facing investor pressure to demonstrate the results of its substantial investments in pursuing CEO Mark Zuckerberg’s vision for the future of computing, namely the so-called “Metaverse.” And this, many experts believe, could become a huge thing in the future.
“Facebook has been pushing the use case for its social possibilities in particular, whereby groups of friends can ‘meet up’ and watch a film together or watch a live performer. You’re able to see the live movements and reactions of your friends around you, and as the AR, VR, and haptic technologies improve, the level of definition on that will mean it really will feel like you’re sitting together as a group. So that stands to be something of a game-changer.”
The company does not anticipate generating profits from its metaverse projects for a few more years. Meanwhile, it is spending approximately $10 billion each year on its “Reality Labs” division. Additionally, many expect Apple to enter the AR headset market at its upcoming WWDC developer conference next month.
This, among other things, could well be the driving force behind this development. There is limited information available about the negotiations, but sources suggest that a partnership between Magic Leap and Meta could soon be a reality. However, it is unlikely that the two companies will jointly develop a headset. Instead, the deal may involve Magic Leap sharing some of its optical technology with Meta. Additionally, there is a possibility that Meta could receive assistance from Magic Leap in the manufacturing of their devices.
This partnership would enable Meta to produce more VR headsets domestically, which is becoming increasingly important as U.S. companies aim to reduce their reliance on China. Magic Leap told the Financial Times that partnerships were becoming a “significant line of business and growing opportunity for Magic Leap.”
Last year, Magic Leap’s CEO Peggy Johnson wrote a blog post entitled “What’s Next for Magic Leap,” in which she shared the company’s future plans and said that the company had “received an incredible amount of interest from across the industry to license our IP and utilize our patented manufacturing process to produce optics for others seeking to launch their own mixed-reality technology.”
Meta, a leading tech company, has developed new AI models that were trained using the Bible to recognize and generate speech in over 1,000 languages. The company aims to employ these algorithms in efforts to preserve languages that are at risk of disappearing.
Currently, there are approximately 7,000 languages spoken worldwide. To empower developers working with various languages, Meta is making its language models publicly available through GitHub, a popular code hosting service. This move encourages the creation of diverse and innovative speech applications.
The newly developed models were trained on two distinct datasets. The first dataset contains audio recordings of the New Testament Bible in 1,107 languages, while the second dataset comprises unlabeled New Testament audio recordings in 3,809 languages. By leveraging these comprehensive datasets, Meta’s research scientist, Michael Auli, explains that the models can be utilized to build speech systems with minimal data.
While languages like English possess extensive and reliable datasets, the same cannot be said for smaller languages spoken by limited populations, such as those spoken by only 1,000 individuals. Meta’s language models provide a solution to this data scarcity, enabling the development of speech applications for languages lacking adequate resources.
The researchers assert that their models can not only converse in over 1,000 languages but also recognize more than 4,000. Furthermore, when compared to rival models like OpenAI’s Whisper, Meta’s version exhibited a significantly lower error rate while covering more than 11 times as many languages.
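For developers who want to experiment, transcription with the released checkpoints might look roughly like the sketch below. It assumes the Hugging Face Transformers port of the MMS speech recognition model; the checkpoint name, the adapter helpers, and the language code are assumptions to verify against Meta's GitHub release rather than guaranteed APIs.

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Assumed Hub checkpoint for the multilingual MMS ASR model.
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS swaps in small per-language adapters; "fra" (French) is one example code.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

waveform = np.zeros(16000, dtype=np.float32)  # stand-in for 1 s of 16 kHz audio
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)[0]         # greedy CTC decoding
print(processor.decode(ids))
```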
However, the scientists acknowledge that the models may occasionally mistranscribe specific words or phrases. Additionally, their speech recognition models displayed a slightly higher occurrence of biased words compared to other models, albeit only by a marginal increase of 0.7%.
Chris Emezue, a researcher at Masakhane, an organization focused on natural-language processing for African languages, expressed concerns about the use of religious text, such as the Bible, as the basis for training these models. He believes that the Bible carries inherent biases and misrepresentations, which could impact the accuracy and neutrality of the models’ outputs.
This development poses an important question: Is Meta’s advancement in language models a step forward, or does its utilization of religious text for training introduce controversial elements that hinder its overall impact? The conversation around the ethical considerations and potential biases involved in training language models remains ongoing.
Meta, formerly known as Facebook, has been at the forefront of artificial intelligence (AI) for over a decade, utilizing it to power their range of products and services, including News Feed, Facebook Ads, Messenger, and virtual reality. With the increasing demand for more advanced and scalable AI solutions, Meta recognizes the need for innovative and efficient AI infrastructure.
At the recent AI Infra @ Scale event, a virtual conference organized by Meta’s engineering and infrastructure teams, the company made several announcements regarding new hardware and software projects aimed at supporting the next generation of AI applications. The event featured Meta speakers who shared their valuable insights and experiences in building and deploying large-scale AI systems.
One significant announcement was the introduction of a new AI data center design optimized for both AI training and inference, the primary stages of developing and running AI models. These data centers will leverage Meta’s own silicon, the Meta Training and Inference Accelerator (MTIA), a chip specifically designed to accelerate AI workloads across diverse domains, including computer vision, natural language processing, and recommendation systems.
Meta also unveiled the Research Supercluster (RSC), an AI supercomputer that integrates a staggering 16,000 GPUs. This supercomputer has been instrumental in training large language models (LLMs), such as the LLaMA project, which Meta had previously announced in February.
“We have been tirelessly building advanced AI infrastructure for years, and this ongoing work represents our commitment to enabling further advancements and more effective utilization of this technology across all aspects of our operations,” stated Meta CEO Mark Zuckerberg.
Meta’s dedication to advancing AI infrastructure demonstrates their long-term vision for utilizing cutting-edge technology and enhancing the application of AI in their products and services. As the demand for AI continues to evolve, Meta remains at the forefront, driving innovation and pushing the boundaries of what is possible in the field of artificial intelligence.
Building AI infrastructure is table stakes in 2023
Meta is far from being the only hyperscaler or large IT vendor that is thinking about purpose-built AI infrastructure. In November, Microsoft and Nvidia announced a partnership for an AI supercomputer in the cloud. The system benefits (not surprisingly) from Nvidia GPUs, connected with Nvidia’s Quantum 2 InfiniBand networking technology.
A few months later in February, IBM outlined details of its AI supercomputer, codenamed Vela. IBM’s system is using x86 silicon, alongside Nvidia GPUs and ethernet-based networking. Each node in the Vela system is packed with eight 80GB A100 GPUs. IBM’s goal is to build out new foundation models that can help serve enterprise AI needs.
Not to be outdone, Google has also jumped into the AI supercomputer race with an announcement on May 10. The Google system uses Nvidia GPUs along with custom-designed infrastructure processing units (IPUs) to enable rapid data flow.
What Meta’s new AI inference accelerator brings to the table
Meta is now also jumping into the custom silicon space with its MTIA chip. Custom-built AI inference chips are not a new thing, either: Google has been building out its tensor processing unit (TPU) for several years, and Amazon has had its own AWS Inferentia chips since 2018.
For Meta, the need for AI inference spans multiple aspects of its operations for its social media sites, including news feeds, ranking, content understanding and recommendations. In a video outlining the MTIA silicon, Meta research scientist for infrastructure Amin Firoozshahian commented that traditional CPUs are not designed to handle the inference demands from the applications that Meta runs. That’s why the company decided to build its own custom silicon.
“MTIA is a chip that is optimized for the workloads we care about and tailored specifically for those needs,” Firoozshahian said.
Meta is also a big user of the open source PyTorch machine learning (ML) framework, which it originally created. Since 2022, PyTorch has been under the governance of the Linux Foundation’s PyTorch Foundation effort. Part of the goal with MTIA is to have highly optimized silicon for running PyTorch workloads at Meta’s large scale.
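To make “PyTorch workloads” concrete, recommendation inference typically pairs large sparse embedding lookups with a small dense network. The toy sketch below shows that pattern; the model, sizes, and data are illustrative assumptions, not anything Meta has published about its production systems.

```python
import torch
import torch.nn as nn

class TinyRecModel(nn.Module):
    """Toy recommendation-style model: sparse ID embeddings plus a dense MLP."""
    def __init__(self, n_ids=10_000, emb_dim=32, n_dense=16):
        super().__init__()
        self.emb = nn.EmbeddingBag(n_ids, emb_dim, mode="sum")  # sparse lookups
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim + n_dense, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, ids, dense):
        return torch.sigmoid(self.mlp(torch.cat([self.emb(ids), dense], dim=1)))

model = TinyRecModel().eval()
ids = torch.randint(0, 10_000, (256, 20))  # 256 requests, 20 sparse features each
dense = torch.randn(256, 16)               # dense per-request features
with torch.no_grad():                      # inference only: no gradients needed
    scores = model(ids, dense)             # click-probability-style scores
```

Inference accelerators like MTIA aim to serve exactly this mix of memory-bound embedding lookups and compute-bound matrix multiplies at lower cost per query than general-purpose CPUs.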
The MTIA silicon is a 7nm (nanometer) process design and can provide up to 102.4 TOPS (Trillion Operations per Second). The MTIA is part of a highly integrated approach within Meta to optimize AI operations, including networking, data center optimization and power utilization.
The discussion surrounding open-source AI is reaching new levels of intensity within the realm of Big Tech, fueled by recent developments involving Google and Meta.
According to a report from CNBC on Tuesday evening, Google’s latest large language model (LLM) PaLM 2 is said to utilize nearly five times more text data for training compared to its predecessor. However, Google had initially claimed that PaLM 2 was smaller in size while employing a more efficient technique. Notably, Google did not disclose specific details about the training data’s size or other relevant information.
While a Google spokesperson declined to comment on the CNBC report, several Google engineers expressed their dissatisfaction with the leak and were eager to voice their opinions. In a tweet that has since been removed, Dmitry (Dima) Lepikhin, a senior staff software engineer at Google DeepMind, directed strong language towards the individual responsible for leaking PaLM 2 details, stating, “whoever leaked PaLM2 details to cnbc, sincerely fuck you!”
Additionally, Alex Polozov, a senior staff research scientist at Google, shared his thoughts in what he described as a “rant,” highlighting the concerns regarding increased siloing of research brought about by such leaks.
Lucas Beyer, a Google AI researcher based in Zurich, echoed similar sentiments, expressing his dismay not only at the potential accuracy of the token count but also at the broader impact of the leak. Beyer emphasized the erosion of trust and respect resulting from such incidents, which could ultimately lead to more guarded communication, reduced openness over time, and a less favorable work and research environment.
“I'm so pissed I'm going to take this rant public. WTF are you trying to accomplish with leaks? Is it just the ego thrill of importance? 100s of Googlers work *hard* to keep publishing & scientific collabs alive. And you just make precedent to silo it all,” Polozov wrote.
The leaked information has stirred up further debate and intensified the ongoing conversation surrounding open-source AI, with implications that extend beyond the specific details of PaLM 2. The incident raises questions about the delicate balance between transparency and the protection of intellectual property in the fast-paced world of AI development.
Meta’s LeCun: “The platform that will win will be the open one”
Not in response to the Google leak — but in coincidental timing — Meta chief AI scientist Yann LeCun did an interview focusing on Meta’s open-source AI efforts with the New York Times, which published this morning.
The piece describes Meta’s release of its LLaMA large language model in February as “giving away its AI crown jewels” — since it released the model’s source code to “academics, government researchers and others who gave their email address to Meta [and could then] download the code once the company had vetted the individual.”
“The platform that will win will be the open one,” LeCun said in the interview, later adding that the growing secrecy at Google and OpenAI is a “huge mistake” and a “really bad take on what is happening.”
In a Twitter thread, VentureBeat journalist Sean Michael Kerner pointed out that Meta has “actually already gave away one of the most critical AI/ML tools ever created — PyTorch. The foundational stuff needs to be open/and it is. After all, where would OpenAI be without PyTorch?”
Meta’s take on open source is nuanced
But even Meta and LeCun will only go so far in terms of openness. For example, Meta made LLaMA’s model weights available to academics and researchers on a case-by-case basis — including Stanford for its Alpaca project — but those weights were subsequently leaked on 4chan. It was that leak, not Meta’s official release (which did not permit commercial use), that first gave developers around the world full access to a GPT-level LLM.
VentureBeat spoke to Meta last month about the nuances of its take on the open- vs. closed-source debate. Joelle Pineau, VP of AI research at Meta, said in our interview that accountability and transparency in AI models is essential.
“More than ever, we need to invite people to see the technology more transparently and lean into transparency,” she said, explaining that the key is to balance the level of access, which can vary depending on the potential harm of the model.
“My hope, and it’s reflected in our strategy for data access, is to figure out how to allow transparency for verifiability audits of these models,” she said.
On the other hand, she said that some levels of openness go too far. “That’s why the LLaMA model had a gated release,” she explained. “Many people would have been very happy to go totally open. I don’t think that’s the responsible thing to do today.”
LeCun remains outspoken on AI risks being overblown
Still, LeCun remains outspoken in favor of open-source AI, and in the New York Times interview argued that the dissemination of misinformation on social media is more dangerous than the latest LLM technology.
“You can’t prevent people from creating nonsense or dangerous information or whatever,” he said. “But you can stop it from being disseminated.”
And while Google and OpenAI may become more closed with their AI research, LeCun insisted he — and Meta — remain committed to open source, saying “progress is faster when it is open.”
The once-promising Metaverse, a technology that aimed to immerse users in a disorienting video-game-like world, has met its demise after being abandoned by the business world, despite being just three years old.
Born in 2021 when Facebook rebranded to Meta, the capital-M Metaverse drew inspiration from the movie “Tron” (1982) and the video game “Second Life” (2003). Its grand entrance captivated the tech industry and became a strategy to impress Wall Street investors. However, despite the initial hype, the lack of a coherent vision for the product ultimately led to its downfall. The tech industry swiftly shifted its attention to the more promising realm of generative AI, leaving the Metaverse behind.
Now consigned to the graveyard of failed ideas within the tech industry, the Metaverse’s short-lived existence and ignominious demise serve as a glaring indictment of the very industry that birthed it.
Ultimate Assurance
The Metaverse, as proclaimed by Mark Zuckerberg, was touted as the future of the internet. With a glitzy promotional video accompanying his name-change announcement, Zuckerberg promised a future where seamless interaction in virtual worlds would become the norm. Users would have the ability to “make eye contact” and feel as if they were physically present in the same room. This immersive experience was presented as a grand vision for the future. However, these lofty promises created sky-high expectations that the actual technology failed to fulfill.
One of the challenges faced by the Metaverse was its acute identity crisis. While Zuckerberg spoke passionately about it being a vision that spans multiple companies and the successor to the mobile internet, he struggled to articulate the fundamental business problems that the Metaverse aimed to solve. The concept of virtual worlds and interacting with digital avatars has existed since the late 1990s, but Zuckerberg’s one tangible product, the VR platform Horizon Worlds, did not provide a clear roadmap or a compelling vision. As a result, the Metaverse’s conceptual development remained stagnant, and the media’s portrayal of its future often bordered on unrealistic and irresponsible, with promises of billions of users and significant financial transactions without a clear value proposition or compelling reason for users to embrace the technology.
A high-flying life
The inability to define the Metaverse in any meaningful way didn’t get in the way of its ascension to the top of the business world. In the months following the Meta announcement, it seemed that every company had a Metaverse product on offer, despite it not being obvious what the Metaverse was or why they should offer one.
Microsoft CEO Satya Nadella would say at the company’s 2021 Ignite Conference that he couldn’t “overstate how much of a breakthrough” the Metaverse was for his company, the industry, and the world. Roblox, an online game platform that has existed since 2004, rode the Metaverse hype wave to an initial public offering and a $41 billion valuation. Of course, the cryptocurrency industry took the ball and ran with it: The people behind the Bored Ape Yacht Club NFT company conned the press into believing that uploading someone’s digital monkey pictures into VR would be the key to “master the Metaverse.” Other crypto pumpers even successfully convinced people that digital land in the Metaverse would be the next frontier of real-estate investment. Even businesses that seemed to have little to do with tech jumped on board. Walmart joined the Metaverse. Disney joined the Metaverse.
Despite Zuckerberg’s obsession with the Metaverse, the tech never lived up to the hype.
Companies’ rush to get into the game led Wall Street investors, consultants, and analysts to try to one-up each other’s projections for the Metaverse’s growth. The consulting firm Gartner claimed that 25% of people would spend at least one hour a day in the Metaverse by 2026. The Wall Street Journal said the Metaverse would change the way we work forever. The global consulting firm McKinsey predicted that the Metaverse could generate up to “$5 trillion in value,” adding that around 95% of business leaders expected the Metaverse to “positively impact their industry” within five to 10 years. Not to be outdone, Citi put out a massive report that declared the Metaverse would be a $13 trillion opportunity.
A brutal downfall
In spite of all this hype, the Metaverse did not lead a healthy life. Every business idea and rosy market projection was built on the vague promises of a single CEO. And when people were actually offered the opportunity to try it out, nobody used the Metaverse.
Decentraland, the most well-funded, decentralized, crypto-based Metaverse product (effectively a wonky online world you can “walk” around), only had around 38 daily active users in its “$1.3 billion ecosystem.” Decentraland would dispute this number, claiming that it had 8,000 daily active users — but that’s still only a fraction of the number of people playing large online games like “Fortnite.” Meta’s much-heralded efforts similarly struggled: By October 2022, Mashable reported that Horizon Worlds had less than 200,000 monthly active users — dramatically short of the 500,000 target Meta had set for the end of 2022. The Wall Street Journal reported that only about 9% of user-created worlds were visited by more than 50 players, and The Verge said that it was so buggy that even Meta employees eschewed it. Despite the might of a then-trillion-dollar company, Meta could not convince people to use the product it had staked its future on.
But the Metaverse was officially pulled off life support when it became clear that Zuckerberg and the company that launched the craze had moved on to greener financial pastures. Zuckerberg declared in a March update that Meta’s “single largest investment is advancing AI and building it into every one of our products.” Meta’s chief technology officer, Andrew Bosworth, told CNBC in April that he, along with Mark Zuckerberg and the company’s chief product officer, Chris Cox, were now spending most of their time on AI. The company has even stopped pitching the Metaverse to advertisers, despite spending more than $100 billion in research and development on its mission to be “Metaverse first.” While Zuckerberg may suggest that developing games for the Quest headsets is some sort of investment, the writing is on the wall: Meta is done with the Metaverse.
Did anyone learn their lesson?
While the idea of virtual worlds or collective online experiences may live on in some form, the capital-M Metaverse is dead. It was preceded in death by a long line of tech fads like Web3 and Google Glass. It is survived by newfangled ideas like the aforementioned generative AI and the self-driving car. Despite this long lineage of disappointment, let’s be clear: The death of the Metaverse should be remembered as arguably one of the most historic failures in tech history.
I do not believe that Mark Zuckerberg ever had any real interest in “the Metaverse,” because he never seemed to define it beyond a slightly tweaked Facebook with avatars and cumbersome hardware. It was the means to an increased share price, rather than any real vision for the future of human interaction. And Zuckerberg used his outsize wealth and power to get the whole of the tech industry and a good portion of the American business world into line behind this half-baked idea.
The fact that Mark Zuckerberg has clearly stepped away from the Metaverse is a damning indictment of everyone who followed him, and anyone who still considers him a visionary tech leader. It should also be the cause for some serious reflection among the venture-capital community, which recklessly followed Zuckerberg into blowing billions of dollars on a hype cycle founded on the flimsiest possible press-release language. In a just world, Mark Zuckerberg should be fired as CEO of Meta (in the real world, this is actually impossible).
Zuckerberg misled everyone, burned tens of billions of dollars, convinced an industry of followers to submit to his quixotic obsession, and then killed it the second that another idea started to interest Wall Street. There is no reason that a man who has overseen the layoffs of tens of thousands of people should run a major company. There is no future for Meta with Mark Zuckerberg at the helm: It will stagnate, and then it will die and follow the Metaverse into the proverbial grave.