Microsoft Has Completed The First Step To Building A Quantum Supercomputer

Microsoft is leading the race in artificial intelligence (AI) models and has also set its sights on the future of computing. In an announcement made on Wednesday, the Redmond, Washington-headquartered company unveiled a roadmap for building a quantum supercomputer within the next 10 years.

Quantum computing has been in the news in recent weeks for beating classical supercomputers at complex math problems and computing at speeds far beyond what conventional machines can manage. Scientists acknowledge, however, that these demonstrations relied on noisy physical qubits, which are not error-free.

Microsoft places today’s quantum computers at what it calls the foundational level. According to the software giant, these computers need upgrades in the underlying technology, much as early computing machines did as they moved from vacuum tubes to transistors and then to integrated circuits before taking their current form.

Logical qubits

In its roadmap, Microsoft suggests that as an industry, quantum computing needs to move on from noisy physical qubits to reliable logical qubits since the former cannot reliably run scaled applications.

Microsoft suggests bundling hundreds to thousands of physical qubits into one logical qubit to add redundancy and reduce error rates. Since qubits are prone to interference from their environment, improving their stability is essential to making them more reliable.
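
To make the redundancy idea concrete, the toy Python sketch below uses a classical repetition code with majority-vote decoding. It is only an analogy for how bundling many noisy components can suppress the effective error rate; it is not Microsoft’s actual error-correction scheme, and the noise probabilities and bundle sizes are purely illustrative.

```python
import random

def noisy_copy(bit: bool, p_error: float) -> bool:
    """Return the bit, flipped with probability p_error (a stand-in for physical-qubit noise)."""
    return bit ^ (random.random() < p_error)

def logical_readout(bit: bool, n_physical: int, p_error: float) -> bool:
    """Encode one logical bit as n_physical noisy copies and decode by majority vote."""
    votes = sum(noisy_copy(bit, p_error) for _ in range(n_physical))
    return votes > n_physical / 2

def logical_error_rate(n_physical: int, p_error: float, trials: int = 10_000) -> float:
    """Estimate how often the majority vote recovers the wrong value."""
    errors = sum(not logical_readout(True, n_physical, p_error) for _ in range(trials))
    return errors / trials

if __name__ == "__main__":
    # More redundancy -> lower effective (logical) error rate, using the same noisy parts.
    for n in (1, 9, 101, 1001):
        print(f"{n:>5} physical copies -> logical error rate ~{logical_error_rate(n, 0.05):.2e}")
```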

Reliable logical qubits can then be scaled up to tackle complex problems that urgently need solving. However, since there is currently no standard measure of how reliably a quantum computer performs its calculations, the company has proposed a new metric, reliable Quantum Operations Per Second (rQOPS), to fill that gap.

Microsoft claims that the Majorana-based qubit it announced last year is highly stable but also difficult to create. The company has published its progress in a peer-reviewed paper in the journal Physical Review B.

Platform to accelerate discovery

Image: When quantum computing will reach the supercomputer stage (Microsoft)

Microsoft estimates that the first quantum supercomputer will need to deliver at least one million rQOPS with an error rate of 10⁻¹², or one error in every trillion operations, to provide valuable input on scientific problems. However, today’s quantum computers deliver an rQOPS value of zero, meaning the industry as a whole has a long way to go before we see the first quantum supercomputer.
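
As a rough back-of-the-envelope sketch of that scale, the snippet below assumes the publicly described relationship rQOPS ≈ (number of logical qubits) × (logical clock rate in Hz); the clock rates plugged in are hypothetical values, not Microsoft figures.

```python
# Back-of-the-envelope sketch of the rQOPS target described above.
# Assumption: rQOPS ~= (logical qubits) x (logical clock rate in Hz); sample rates are illustrative.
TARGET_RQOPS = 1_000_000        # threshold Microsoft cites for the first quantum supercomputer
TARGET_ERROR_RATE = 1e-12       # at most one error per trillion reliable operations

def required_logical_qubits(logical_clock_hz: float, target_rqops: float = TARGET_RQOPS) -> float:
    """Logical qubits needed at a given logical clock rate to reach the target rQOPS."""
    return target_rqops / logical_clock_hz

for clock_hz in (1_000, 10_000, 100_000):   # hypothetical logical clock rates
    print(f"{clock_hz:>7} Hz logical clock -> ~{required_logical_qubits(clock_hz):,.0f} logical qubits")
```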

Rather than taking decades, Microsoft wants to build this supercomputer in a matter of years and has now launched its Azure Quantum Elements platform to accelerate scientific discovery. The platform will enable organizations to leverage the latest breakthroughs in high-performance computing (HPC), AI, and quantum computing to make advances in chemistry and materials science and to build the next generation of quantum computers.

The company is also extending its Copilot services to Azure Quantum, where researchers will be able to use natural language processing to solve complex problems of chemistry and materials science. Copilot can help researchers query quantum computers and visualize data using an integrated browser.

Microsoft’s competitors in this space are Google and IBM, which have also unveiled their quantum capabilities.

NVIDIA to Build Israel’s Most Potent AI Supercomputer

NVIDIA, the world’s top-ranking chip firm, is pouring hundreds of millions into building Israel’s most powerful artificial intelligence (AI) supercomputer, Israel-1. The move comes in response to a surge in demand for AI applications, according to the company’s announcement on Monday.

Set to be partly operational by the end of 2023, Israel-1 is expected to deliver up to eight exaflops of AI computing, placing it among the fastest AI supercomputers worldwide. To put that into perspective, a single exaflop can perform a quintillion (that’s 18 zeros) calculations every second.
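
For a sense of that scale, the short sketch below converts the eight-exaflop figure into operations per second; the workload size used for the timing example is purely illustrative.

```python
# Quick sanity check of the exaflop figures quoted above.
EXA = 10**18                       # one exaflop = 10^18 operations per second
israel1_peak_ops = 8 * EXA         # up to eight exaflops of AI compute claimed for Israel-1

workload_ops = 10**21              # hypothetical workload: a sextillion operations
print(f"{workload_ops / israel1_peak_ops:.0f} seconds at peak")   # -> 125 seconds
```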

Super-AI

According to Gilad Shainer, Senior Vice President at NVIDIA, the upcoming supercomputer in Israel will be a game-changer for the thriving AI scene in the country. Shainer highlighted the extensive collaboration between NVIDIA and 800 startups nationwide, involving tens of thousands of software engineers.

Shainer emphasized the significance of large Graphics Processing Units (GPUs) in the development of AI and generative AI applications, stating, “AI is the most important technology in our lifetime.” He further explained the growing importance of generative AI, noting the need for robust training on large datasets.

The introduction of Israel-1 will provide Israeli companies with unprecedented access to a supercomputer resource. This high-performance system is expected to accelerate training processes, enabling the creation of frameworks and solutions capable of tackling more complex challenges.

The potential of powerful computing resources is evident in projects like OpenAI’s ChatGPT, which utilized thousands of NVIDIA GPUs. ChatGPT’s conversational capabilities showcase what is possible when robust computing resources are brought to bear.

The Israel-1 system was developed by the team from Mellanox, the Israeli chip designer that NVIDIA acquired in 2019 for nearly $7 billion, outbidding Intel Corp.

While the primary focus of the new supercomputer is NVIDIA’s Israeli partners, the company remains open to expanding its reach. Shainer revealed, “We may use this system to work with partners outside of Israel down the road.”

In other news, NVIDIA recently announced a partnership with the University of Bristol in Britain. Their collaboration aims to build a new supercomputer powered by an innovative NVIDIA chip, positioning NVIDIA as a competitor to chip giants Intel and Advanced Micro Devices Inc.

IBM’s Quantum Leap: The Future Holds a 100,000-Qubit Supercomputer

IBM aims for an unprecedented quantum computing advance: a 100,000-qubit supercomputer built in collaboration with leading universities, with global impact.

During the G7 summit in Hiroshima, Japan, IBM unveiled an ambitious $100 million initiative, joining forces with the University of Tokyo and the University of Chicago to construct a massive quantum computer boasting an astounding 100,000 qubits. This groundbreaking endeavor intends to revolutionize the computing field and unlock unparalleled possibilities across various domains.

Despite already holding the record for the largest quantum computing system with a 433-qubit processor, IBM’s forthcoming machine signifies a monumental leap forward in quantum capabilities. Rather than seeking to replace classical supercomputers, the project aims to synergize quantum power with classical computing to achieve groundbreaking advancements in drug discovery, fertilizer production, and battery performance.

IBM’s Vice President of Quantum, Jay Gambetta, envisions this collaborative effort as “quantum-centric supercomputing,” emphasizing the integration of the immense computational potential of quantum machines with the sophistication of classical supercomputers. By leveraging the strengths of both technologies, this fusion endeavors to tackle complex challenges that have long remained unsolvable. The initiative holds the potential to reshape scientific research and make significant contributions to the global scientific community.

Strides made for technological advancement

While significant progress has been made, the technology required for quantum-centric supercomputing is still in its infancy. IBM’s proof-of-principle experiments have shown promising results, demonstrating that integrated circuits based on CMOS technology can control cold qubits with minimal power consumption.

However, further innovations are necessary, and this is where collaboration with academic research institutions becomes crucial.

IBM’s modular chip design serves as the foundation for housing that many qubits. Because no single chip can accommodate the sheer number of qubits required, interconnects are being developed to transfer quantum information between modules.

IBM’s “Kookaburra,” a multichip processor with 1,386 qubits and a quantum communication link, is currently under development and anticipated for release in 2025. Additionally, the University of Tokyo and the University of Chicago actively contribute their expertise in components and communication innovations, making their mark on this monumental project.

As IBM embarks on this bold mission, it anticipates forging numerous industry-academic collaborations over the next decade. Recognizing the pivotal role of universities, Gambetta highlights the importance of empowering these institutions to leverage their strengths in research and development.

With the promise of a quantum-powered future on the horizon, the journey toward a 100,000-qubit supercomputer promises to unlock previously unimaginable scientific frontiers, revolutionizing our understanding of computation as we know it.

Empowering an AI-First Future: Meta Unveils New AI Data Centers and Supercomputer

Meta, formerly known as Facebook, has been at the forefront of artificial intelligence (AI) for over a decade, utilizing it to power their range of products and services, including News Feed, Facebook Ads, Messenger, and virtual reality. With the increasing demand for more advanced and scalable AI solutions, Meta recognizes the need for innovative and efficient AI infrastructure.

At the recent AI Infra @ Scale event, a virtual conference organized by Meta’s engineering and infrastructure teams, the company made several announcements regarding new hardware and software projects aimed at supporting the next generation of AI applications. The event featured Meta speakers who shared their valuable insights and experiences in building and deploying large-scale AI systems.

One significant announcement was the introduction of a new AI data center design optimized for both AI training and inference, the primary stages of developing and running AI models. These data centers will leverage Meta’s own silicon, the Meta Training and Inference Accelerator (MTIA), a chip specifically designed to accelerate AI workloads across diverse domains, including computer vision, natural language processing, and recommendation systems.

Meta also unveiled the Research Supercluster (RSC), an AI supercomputer that integrates a staggering 16,000 GPUs. This supercomputer has been instrumental in training large language models (LLMs), such as the LLaMA project, which Meta had previously announced in February.

“We have been tirelessly building advanced AI infrastructure for years, and this ongoing work represents our commitment to enabling further advancements and more effective utilization of this technology across all aspects of our operations,” stated Meta CEO Mark Zuckerberg.

Meta’s dedication to advancing AI infrastructure demonstrates their long-term vision for utilizing cutting-edge technology and enhancing the application of AI in their products and services. As the demand for AI continues to evolve, Meta remains at the forefront, driving innovation and pushing the boundaries of what is possible in the field of artificial intelligence.

Building AI infrastructure is table stakes in 2023

Meta is far from being the only hyperscaler or large IT vendor that is thinking about purpose-built AI infrastructure. In November, Microsoft and Nvidia announced a partnership for an AI supercomputer in the cloud. The system benefits (not surprisingly) from Nvidia GPUs, connected with Nvidia’s Quantum-2 InfiniBand networking technology.

A few months later in February, IBM outlined details of its AI supercomputer, codenamed Vela. IBM’s system is using x86 silicon, alongside Nvidia GPUs and ethernet-based networking. Each node in the Vela system is packed with eight 80GB A100 GPUs. IBM’s goal is to build out new foundation models that can help serve enterprise AI needs.

Not to be outdone, Google has also jumped into the AI supercomputer race with an announcement on May 10. The Google system uses Nvidia GPUs along with custom-designed infrastructure processing units (IPUs) to enable rapid data flow.

What Meta’s new AI inference accelerator brings to the table

Meta is now also jumping into the custom silicon space with its MTIA chip. Custom-built AI inference chips are not a new thing, either: Google has been building out its tensor processing unit (TPU) for several years, and Amazon has had its own AWS Inferentia chips since 2018.

For Meta, the need for AI inference spans multiple aspects of its operations for its social media sites, including news feeds, ranking, content understanding and recommendations. In a video outlining the MTIA silicon, Meta research scientist for infrastructure Amin Firoozshahian commented that traditional CPUs are not designed to handle the inference demands from the applications that Meta runs. That’s why the company decided to build its own custom silicon.

“MTIA is a chip that is optimized for the workloads we care about and tailored specifically for those needs,” Firoozshahian said.

Meta is also a big user of the open source PyTorch machine learning (ML) framework, which it originally created. Since 2022, PyTorch has been under the governance of the Linux Foundation’s PyTorch Foundation effort. Part of the goal with MTIA is to have highly optimized silicon for running PyTorch workloads at Meta’s large scale.
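
As a rough, device-agnostic illustration (not MTIA’s actual software stack, which the article does not detail), the PyTorch sketch below shows how the same model code typically targets whatever accelerator the runtime exposes; here it simply falls back to CUDA or the CPU.

```python
import torch
import torch.nn as nn

# Device-agnostic PyTorch sketch: the model code stays the same regardless of the
# accelerator underneath. (MTIA's own backend is not shown; this falls back to CUDA/CPU.)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8)).to(device)
batch = torch.randn(32, 128, device=device)   # e.g. features for 32 candidate items to rank

with torch.no_grad():
    scores = model(batch)
print(scores.shape)   # torch.Size([32, 8])
```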

The MTIA silicon is a 7nm (nanometer) process design and can provide up to 102.4 TOPS (Trillion Operations per Second). The MTIA is part of a highly integrated approach within Meta to optimize AI operations, including networking, data center optimization and power utilization.

Google Says Its AI Supercomputer Is Faster And Greener Than The Nvidia A100 Chip

Alphabet Inc’s Google on Tuesday released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google has designed its own custom chip called the Tensor Processing Unit, or TPU. It uses those chips for more than 90 per cent of the company’s work on artificial intelligence training, the process of feeding data through models to make them useful at tasks like responding to queries with human-like text or generating images.

The Google TPU is now in its fourth generation. Google on Tuesday published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its own custom-developed optical switches to help connect individual machines.

Improving these connections has become a key point of competition among companies that build AI supercomputers because so-called large language models that power technologies like Google’s Bard or OpenAI’s ChatGPT have exploded in size, meaning they are far too large to store on a single chip.

The models must instead be split across thousands of chips, which must then work together for weeks or more to train the model. Google’s PaLM model – its largest publicly disclosed language model to date – was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.
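
As rough arithmetic on the figures in that description (treating the 4,000-chip count as approximate), the run works out to several hundred thousand TPU-days of compute:

```python
# Rough scale of the PaLM training run described above, using the article's figures.
chips_per_supercomputer = 4_000
supercomputers = 2
days = 50

tpu_days = chips_per_supercomputer * supercomputers * days
print(f"~{tpu_days:,} TPU-days")   # -> ~400,000 TPU-days
```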

Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping avoid problems and tweak for performance gains.

“Circuit switching makes it easy to route around failed components,” Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. “This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model.”

While Google is only now releasing details about its supercomputer, it has been online inside the company since 2020 in a data centre in Mayes County, Oklahoma. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.

In the paper, Google said that for comparably sized systems, its supercomputer is up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia’s A100 chip that was on the market at the same time as the fourth-generation TPU.

Google said it did not compare its fourth-generation to Nvidia’s current flagship H100 chip because the H100 came to the market after Google’s chip and is made with newer technology.

Google hinted that it might be working on a new TPU that would compete with the Nvidia H100 but provided no details, with Jouppi telling Reuters that Google has “a healthy pipeline of future chips.”