Neuralink Will Test Its Chip in Human Trials This Year, Says Elon Musk

Neuralink, the biotech venture led by Elon Musk, expects to begin its human trials later this year, the billionaire entrepreneur said in France last week. Musk did not reveal details such as the number of participants in the trial during his talk at the VivaTech event in Paris, Reuters reported.

Launched in 2016, Neuralink is yet another moonshot project from Musk, this one aiming to link the human brain to a computer. Musk’s ideal application for the technology is to enable a paraplegic person to walk again. So far, the company has only demonstrated the technology in monkeys, which have been able to play video games.

Biotechnology is nothing like making electric vehicles or managing a social media platform, and Musk has found it challenging to get the technology to work in the ways he intended. The news of human trials is exciting for many of Musk’s followers, keen to get a chip embedded into their brains.

Not the first time

Well-known for setting highly ambitious deadlines, Musk has claimed on at least four previous occasions that Neuralink was set to begin human trials. Not surprisingly, though, none of these announcements led to anything of substance.

Instead, employees at his company have reported to the media that such announcements have led to increased pressure at work and even botched surgeries on animal subjects. On one such occasion in 2021, 25 pigs were implanted with wrong-sized devices and had to be subsequently euthanized.

In recent years, the company has also come under the scrutiny of the Department of Transportation for allegedly transporting implant chips with dangerous pathogens on them without appropriate containment measures.

The animal experiments conducted by the company are crucial for generating data needed to obtain approval for human trials.

FDA gives Neuralink the nod

Neuralink has previously approached the U.S. Food and Drug Administration for approval to test its chip in humans. However, the agency rejected its application on multiple grounds, such as the device’s lithium battery, the risk of the implant’s wires migrating, and whether the chip could be safely extracted from the brain.

Last month, though, the FDA finally gave its nod to Neuralink even as investigations into the alleged mistreatment of animal subjects continue. The approval has led to a sudden increase in the company’s valuation, now estimated at around $5 billion, based on privately executed stock trades.

Neuralink’s technology, though, has a long way to go: the company must now provide authorities with more details of its work while demonstrating significant advantages for the implant. The device could take a minimum of ten years to become commercially available.

Additionally, it has to contend with competition in a space where former team members have floated rival products. Synchron, one such competitor, offers a device that does not require open-brain surgery to implant and received FDA approval for human trials two years ago, leaving Musk’s company with plenty of catching up to do.

New Tech Promises Smartphones That Can Actually Rival Professional Cameras

Scientists have unveiled a new technology based on hybrid meta-optics that can take high-grade photos with none of the bulk of conventional cameras.

Most modern cameras, especially those intended for professional use, leverage multiple interchangeable lenses that help the photographer obtain the best image for any particular scenario, albeit at the cost of compactness and portability.

Scientists have been researching flat optics made of meta-structures to achieve high-grade image quality while not compromising on portability.

Meta-structures are engineered materials built from repeating patterns that are smaller than the wavelength of the phenomenon they are designed to manipulate, allowing them to steer light in ways natural materials cannot. The invisibility cloak is one of their best-known proposed applications.

Modern smartphones and other smart devices instead rely on computational imaging, using software to mask the shortcomings of their compact optical setups, to produce high-quality images and video.
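
To illustrate what that software side typically involves (a generic textbook approach, not the pipeline used in the study), the sketch below applies a classic Wiener deconvolution in NumPy; the Gaussian point-spread function and the noise parameter are stand-in assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Sharpen a blurred image given the optics' point-spread function (PSF)
    using a frequency-domain Wiener filter."""
    # Embed the PSF in a full-size array and center it at the origin
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(pad)      # optical transfer function of the lens
    G = np.fft.fft2(blurred)  # spectrum of the captured image
    # Boost frequencies the optics attenuated, damped by the noise term `nsr`
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# Toy usage: a 15x15 Gaussian blur kernel standing in for the lens's blur
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
psf /= psf.sum()
blurred = np.random.rand(128, 128)  # placeholder for a captured frame
restored = wiener_deconvolve(blurred, psf)
print(restored.shape)
```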

The team’s findings, published in the peer-reviewed scientific journal Science Advances, explore combining optics made from metamaterials and software.

Translating theoretical delights to the real world

While these meta-optics theoretically have a high ceiling, the difficulty researchers face while modeling the complex interactions between light and all the optical components often results in a design that delivers mediocre image quality by conventional standards.

The team addressed this drawback with a “hardware in the loop” strategy, experimenting with actual lenses and sensors instead of a purely software model, a move it credits with reducing processing and memory demands.

In the experiments, the researchers utilized a combination of hybrid meta-optics and computational imaging techniques to capture photographs of objects located at distances ranging from 0.5 to 1.8 meters. 

[Figure: Configuration for lens-only and designed hybrid systems. Science Advances/Pinilla et al.]

The hybrid meta-optics employed in their setup included a standard refractive lens 4.5 millimeters thick, coated with a 500 µm-thick quartz meta-optic film. The surface of the film was patterned with square silicon nitride pillars 700 nanometers tall.

The team observed that these single-lens devices captured full-color pictures rivaling those from a Sony Alpha 1 III mirrorless camera, the Japanese behemoth’s flagship offering, paired with a Sony 85 mm f/1.8 lens (SEL85F18), while occupying under 1 percent of the volume of Sony’s system.

Future Research and Application

“This hardware-in-the-loop methodology is able to produce better optics compared with the state-of-the-art,” said Vladimir Katkovnik, a senior research fellow at Tampere University, Finland, and co-author of the study. 

“I believe the most impactful application at the moment is the design of a new generation of customized cameras for smartphones,” said lead study author Samuel Pinilla of the Science and Technology Facilities Council in Harwell, England, while noting the devices’ potential in biomedical applications.

The team suggests that developing meta-optics wider than the current 5 mm could deliver even better image quality by collecting more light.

Perhaps the day when pocket-sized cameras rival DSLRs and top mirrorless models isn’t too far off.

Intel to Begin Shipping its 12-Qubit Quantum Processor

In a major announcement for the technology industry, U.S. chipmaker Intel has revealed that its highly anticipated 12-qubit quantum processor is now ready for shipment, giving it a significant edge over its competitors. According to a report by Ars Technica, the quantum processor will be dispatched to select research laboratories across the United States.

In recent months, Intel has faced growing scrutiny as companies increasingly turned to Nvidia for their artificial intelligence (AI) requirements or opted for in-house chip designs for upcoming products. Furthermore, with rivals like IBM pushing the boundaries of quantum computing, the pressure on Intel has mounted. Just this week, Interesting Engineering reported on IBM’s quantum computer surpassing a supercomputer in solving complex mathematical problems. In this competitive landscape, the news of Intel’s quantum processor comes as a potential game-changer that could revitalize the company’s position.

By delivering its 12-qubit quantum processor ahead of schedule, Intel aims to reassert its prominence and solidify its position in the market. The successful shipment of this advanced technology could provide Intel with the much-needed boost it requires to stay at the forefront of the evolving technology landscape.

Intel’s quantum processor

Intel’s quantum offering has been dubbed Tunnel Falls, in keeping with the company’s habit of naming processors after bodies of water. Unlike its competitors, Intel has been working to build silicon-based qubits, which, in its view, will make the eventual transition from silicon chips to quantum chips as effortless as possible.

Intel’s qubits are, therefore, tiny quantum dots that trap individual electrons to store information. This makes Intel’s job tougher, since it has to get both the hardware and the software right for its quantum processor.

By shipping its quantum processors to research laboratories, Intel hopes to enlist more hands and brains to work out what it will take to make its quantum processors useful to everybody.

Currently, the processor needs a dilution refrigeration system to bring temperatures down to near absolute zero before it can begin work. Intel’s partnerships with research laboratories seek remedies for such real-world problems of quantum computers, while the company uses its fabrication expertise to build better quantum chips with qubit counts on par with its competitors’.

Dropping the ‘i’ to gain visibility

The Santa Clara, California-based company is also working on its branding to remain visible amidst the cloud of chipmakers that have sprung up over the years. In a recent move, Intel has decided to drop the ‘i’ from its processor nomenclature and simply call them Core 3/5/7/9 in the future.

[Figure: Intel’s new branding for its processors. Intel]

This is being done to make it difficult for people to shorten the processor name, company executives told The Verge. Instead, users will have to call them Core 3 or Core 5 processors in the near future, allowing the company to differentiate its latest flagship chips, which will also carry the Ultra branding.

Intel has surely made life a bit more complicated for those keen on knowing the generation of the processor they are investing in. Users will have to dig deeper to see whether they are buying devices with the latest chips or leftover inventory from the previous year.

Until the quantum range of processors becomes available, there are only the Intel, Core, and Core Ultra ranges for die-hard users. Intel says tiering within these ranges will continue in the future as well.

NVIDIA Unveils ‘Grace Hopper’: Next-Gen CPU+GPU Chip for AI Models

NVIDIA, renowned for its advancements in artificial intelligence (AI), has introduced its latest CPU+GPU chip, Grace Hopper, which promises to usher in the next era of AI models and chatbots.

While traditionally known for their role in accelerating graphics rendering for computer games, graphics processing units (GPUs) have demonstrated significantly higher computing power compared to central processing unit (CPU) chips. This led tech companies to adopt GPUs for training AI models due to their ability to perform multiple calculations simultaneously, in parallel.
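
As a toy illustration of that difference (not NVIDIA-specific code), the snippet below contrasts an element-by-element loop, the way a single CPU core steps through work, with a single vectorized expression of the kind a GPU framework spreads across thousands of threads.

```python
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# CPU-style view: one multiply-add at a time, stepping through the data serially
serial = np.empty_like(a)
for i in range(a.size):
    serial[i] = a[i] * b[i] + 1.0

# GPU-style view: the whole array expressed as one operation; a GPU maps each
# element to its own thread, which is why neural-network training, built from
# huge matrix multiplications, fits GPUs so well
parallel = a * b + 1.0

assert np.allclose(serial, parallel)
```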

In 2020, NVIDIA introduced the A100 GPU chip, which proved instrumental in training early iterations of conversational chatbots and image generators. However, within just a short span, the highly advanced H100 Hopper chips have emerged as essential components in data centers that power popular chatbots like ChatGPT. Now, NVIDIA has unveiled a groundbreaking chip that integrates both CPU and GPU capabilities.

The Grace Hopper chip represents a significant leap forward, combining the strengths of CPU and GPU technologies to enhance AI model training and performance. Its introduction marks a new milestone in the ongoing development of AI hardware, enabling more efficient and powerful computing capabilities for AI-related applications.

As the AI landscape continues to evolve, NVIDIA’s Grace Hopper chip aims to play a pivotal role in driving advancements in AI models and chatbot technologies, propelling the field toward unprecedented possibilities.

What are Grace Hopper chips from Nvidia?

According to a press release, Nvidia has created its new chip by combining its Hopper GPU platform with the Grace CPU platform (both named after Grace Hopper, a pioneer of computer programming). The two chips have been connected using Nvidia’s NVLink chip-to-chip (C2C) interconnect technology.

Dubbed GH200, the superchip has 528 GPU tensor cores and supports up to 480 GB of CPU RAM and 96 GB of GPU RAM. The GPU memory bandwidth on the GH200 is 4 TB per second, twice that of the A100 chips.

The superchip also boasts a 900 GB/s coherent memory interface, seven times faster than the latest generation of PCIe, which only became available this year. Along with running all Nvidia software such as the HPC SDK, Nvidia AI, and Omniverse, the GH200 offers 30 times higher aggregate memory bandwidth than the A100 chips.

What will the chips be used for?

Nvidia, well on its way to becoming a trillion-dollar company, expects the GH200 chips to be used for giant-scale AI and high-performance computing (HPC) applications. At this point in time, one can only imagine AI models and chatbots that are faster and more accurate being built with this superior technology.

The company also plans to use them to build a new exaflop supercomputer capable of performing 10^18 floating point operations per second (FLOPS). Two hundred fifty-six of the GH200 chips will be put together to function as one large GPU and have 144 TB of shared memory, about 500 times that of the A100.
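
A quick back-of-the-envelope check of those figures, using only the numbers quoted in this article and assuming the exaflop rating is spread evenly across the chips:

```python
# Sanity-check the exaflop supercomputer figures quoted above
total_flops = 1e18                   # one exaflop = 10**18 floating point operations per second
num_chips = 256
print(total_flops / num_chips)       # ~3.9e15, i.e. roughly 4 petaflops per GH200

shared_memory_gb = 144 * 1000        # 144 TB of shared memory, in GB
print(shared_memory_gb / num_chips)  # ~562 GB per chip, consistent with 480 GB CPU + 96 GB GPU RAM
```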

“Generative AI is rapidly transforming businesses, unlocking new opportunities, and accelerating discovery in healthcare, finance, business services, and many more industries,” said Ian Buck, vice president of accelerated computing at NVIDIA, in a press release. “With Grace Hopper Superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.”

Global hyperscalers and supercomputing centers in the U.S. and Europe will get access to the GH200-powered systems later this year, the release added.

China’s Quantum Computer, Jiuzhang, Claims to Be 180 Million Times Faster for AI Tasks

Jiuzhang, a quantum computer developed by a team led by Pan Jianwei, has made headlines with the claim that it can process artificial intelligence (AI) tasks at a staggering speed, 180 million times faster than traditional computing methods, the South China Morning Post reported. Pan, widely recognized as the “father of quantum” in China, has spearheaded the project.

While the United States currently leads the global rankings of the TOP500 supercomputers, China has been steadily advancing its expertise in quantum computing, the next frontier of computational technology. Unlike classical computers that operate using binary bits (either 0 or 1), quantum computing leverages quantum bits, or qubits, which can exist in multiple states simultaneously.

By harnessing the unique properties of qubits, quantum computers have the potential to process vast amounts of information in parallel, offering exponential computational speedup compared to classical computers.
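
In standard notation, a single qubit holds a weighted superposition of both basis states, and a register of n qubits carries 2^n complex amplitudes at once, which is where this parallelism comes from (a textbook formulation, not specific to any one machine):

```latex
% A single qubit: a superposition of the two basis states
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1

% An n-qubit register: 2^n complex amplitudes evolved simultaneously
\lvert \Psi \rangle = \sum_{x \in \{0,1\}^{n}} c_{x} \lvert x \rangle,
\qquad \sum_{x} \lvert c_{x} \rvert^{2} = 1
```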

China’s progress in quantum computing, exemplified by the achievements of Juizhang, highlights the nation’s growing presence in this cutting-edge field. With quantum computing’s promise of revolutionizing various domains, including AI, China’s advancements contribute to the global race for quantum supremacy.

How fast is China’s Jiuzhang?

China’s Jiuzhang first shot to fame in 2020, when the research team led by Pan performed Gaussian boson sampling in 200 seconds. The same task on a conventional supercomputer would take an estimated 2.5 billion years.

Quantum computing is still in its infancy, and researchers worldwide have only begun testing how these systems work and how they can be used in the future. Pan Jianwei’s team, however, decided to use “noisy intermediate-scale” quantum computers to solve real-world problems.

They put Jiuzhang to the test by implementing two algorithms commonly used in AI: random search and simulated annealing. These algorithms can challenge even supercomputers, and the researchers decided to use 200,000 samples for the task.
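
For readers unfamiliar with the second of those algorithms, here is a minimal, generic simulated-annealing loop (a textbook sketch, not the team’s implementation; the toy objective function is purely illustrative):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t_start=10.0, t_end=1e-3, alpha=0.95):
    """Generic simulated annealing: accept worse moves with a probability
    that shrinks as the 'temperature' is cooled toward zero."""
    x, t = x0, t_start
    best = x
    while t > t_end:
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept regressions with probability exp(-delta/t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= alpha  # cooling schedule
    return best

# Toy usage: find the minimum of a bumpy 1-D function
cost = lambda x: x ** 2 + 3 * math.sin(5 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbor, x0=random.uniform(-5, 5)))
```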

At current technological levels, even the fastest supercomputer would take an estimated 700 seconds to go through each sample and a total of about five years of computing time to process all the samples the researchers had in mind. In sharp contrast, Jiuzhang took less than a second to process them. That’s 180 million times faster than the fastest supercomputer on the planet today.
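
Those headline figures are easy to sanity-check from the numbers quoted in the report; the sub-second quantum runtime below is an assumed round value, since only “less than a second” is given:

```python
samples = 200_000
seconds_per_sample = 700                      # classical estimate quoted above
classical_seconds = samples * seconds_per_sample
print(classical_seconds / (3600 * 24 * 365))  # ~4.4 years, matching the "five years" figure

quantum_seconds = 0.8                         # assumption: "less than a second"
print(classical_seconds / quantum_seconds)    # ~1.75e8, on the order of 180 million
```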

Advantages of using the Jiuzhang quantum computer

The US has also been working on quantum computers and has found that the subatomic particles involved in the computing process are prone to error when exposed to even the slightest disturbance from their surroundings. This is why quantum computers are operated in isolated environments and at extremely low temperatures.

Jiuzhang, on the other hand, uses light as the physical medium for its calculations, and the researchers say it does not need to operate at extremely low temperatures.

The team deliberately used some of the advanced algorithms in use today to showcase the advantages of quantum computing. The research shows that even early-stage “noisy” quantum computers offer a distinct advantage over classical computers.

The computations achieved by Jiuzhang could also help researchers apply the technology in areas such as data mining, biological information, network analysis, and chemical modeling research, the research team said.

The research findings were published in the peer-reviewed journal Physical Review Letters last month.

Intel and Microsoft Collaborate to Democratize AI with Meteor Lake Chips

Intel showcased details of its upcoming Meteor Lake line of PC processors during Microsoft’s Build 2023 conference. With a “chiplet” system-on-chip (SoC) design, Intel aims to deliver advanced intellectual properties (IPs) and segment-specific performance while maintaining lower power consumption. Meteor Lake will introduce Intel’s first PC platform with a built-in neural VPU, enabling efficient AI model execution.

The integrated VPU will collaborate with existing AI accelerators on the CPU and GPU, allowing for accessible and impactful AI features for PC users. Intel asserts that its product is at the forefront of the AI trend, positioning Meteor Lake as a key player.

At Computex, Intel disclosed that the VPU in Meteor Lake is derived from Movidius’s third-generation Vision Processing Unit (VPU) design. By leveraging this acquisition from 2016, Intel aims to establish itself as an AI market leader. Although specific performance figures and VPU specifications have not been revealed, it is anticipated that Intel’s VPU will surpass Movidius’s previous throughput rating of 1 TOPS (tera operations per second).

As the VPU is integrated into the SoC, AI capabilities will be a standard feature across all Meteor Lake SKUs, rather than a differentiating factor. Intel seeks to achieve similar energy efficiency levels as smartphone SoCs, enabling tasks like dynamic noise suppression and background blur.

Collaborating closely with Microsoft, Intel aims to scale Meteor Lake and Windows 11 across the ecosystem. Through partnerships and leveraging the ONNX Runtime—an open-source library for deploying machine learning models—Intel plans to optimize AI model execution on the Windows platform.
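
As a rough sketch of what client-side model execution through that stack can look like (a generic ONNX Runtime call in Python, not Intel’s Meteor Lake-specific integration; the model file and input shape are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Load an exported model; execution providers let the runtime pick a local
# accelerator (plain CPU here; vendor-specific providers can slot in instead).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder image tensor

outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```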

Intel envisions shifting server-based AI workloads to client devices, offering benefits such as reduced costs, lower latency, and enhanced privacy. By pursuing this vision, Intel aims to gain a competitive advantage in the market.

Joystick-Controlled Video Capsule Promises to Replace Traditional Endoscopy

In the United States, approximately seven million patients undergo endoscopy procedures annually. However, a team of researchers from George Washington University (GWU) has recently developed an innovative magnetically-controlled ingestible video capsule that promises to revolutionize the field of endoscopy. Unlike existing ingestible capsules, this groundbreaking technology grants doctors precise control over its movement within the digestive system, marking a significant advancement in diagnostic capabilities.

Limitations of Existing Ingestible Capsules: While ingestible video capsules already exist, they suffer from a significant drawback—once they enter the stomach, doctors have no means of controlling their movement. These capsules rely solely on gravity and the natural peristaltic motions of the gut to navigate through the body. This lack of control hinders their effectiveness and limits the range of diagnostic procedures that can be performed using this technology.

Drawbacks of Traditional Endoscopy Procedures: On the other hand, conventional tube-based endoscopy procedures are highly invasive, costly, and time-consuming, often requiring anesthesia. This becomes particularly burdensome for patients experiencing severe stomach pain or those diagnosed with conditions such as stomach cancer, who must undergo multiple appointments for a comprehensive endoscopy examination. These limitations necessitate the development of a less invasive and more patient-friendly alternative.

The GWU Breakthrough: In response to these challenges, the GWU research team has introduced a revolutionary technique that empowers patients to take a capsule and receive an immediate diagnosis. Furthermore, doctors can easily control the movement of the proposed video capsule using a joystick, enhancing diagnostic accuracy and procedural flexibility.

Andrew Meltzer, a professor of emergency medicine at GW School of Medicine and one of the researchers involved in the project, explains the potential impact of magnetically-controlled capsules: “These capsules could serve as a quick and easy screening tool for health issues in the upper gastrointestinal tract, such as ulcers or stomach cancer.” The ability to swiftly screen for such conditions using this non-invasive method holds tremendous promise for early detection and timely intervention.

How does the new endoscopy capsule work?

Endoscopy is a medical procedure in which a doctor uses a camera-equipped device (a tube or a capsule) to look at the upper part of the digestive tract in humans. It allows doctors to detect anomalies in the body that lead to acute stomach pain, ulcers, gastric cancer, internal bleeding, and various other diseases. 

During their study, the GWU researchers wanted to develop a less invasive, hassle-free, and easily controllable endoscopy method. Their primary aim was to combine the capabilities of a tube-based endoscopy and the ease of a video capsule.

To achieve this feat, they created an ingestible capsule with a camera system that is steered by external magnets. Hand-held joysticks control the movement of the magnets, so instead of relying on gravity and gut flow, the capsule moves at the will of the doctor operating the joystick.

The researchers performed an interesting experiment to test their magnetically-controlled video capsules against the traditional endoscopy method. They used both techniques to examine 40 patients with stomach-related health problems.

“The doctor could direct the capsule to all major parts of the stomach with a 95 percent rate of visualization,” without causing any pain. Moreover, “No high-risk lesions were missed with the new method, and 80 percent of the patients preferred the capsule method to the traditional endoscopy,” the researchers note.

This is the first study to demonstrate magnetically-controlled endoscopy in the US.

Limitations of magnet-driven endoscopy

Meltzer reveals that his team often encounters patients with severe ulcers or bleeding problems. Traditional endoscopy is not viable for many such patients because it might escalate their condition.

The proposed ingestible capsule doesn’t carry any such risks. It’s possibly the least invasive endoscopy technique. However, it does have some limitations. For instance, the video capsule currently can’t perform a biopsy of the lesions it identifies in the stomach.

Additionally, before giving the capsules to patients for endoscopy, doctors must first become proficient with the joystick controls to guarantee complete safety. Training thousands of physicians already accustomed to traditional endoscopic techniques will be challenging. 

The researchers believe that an AI-based program could allow the capsules to move autonomously. However, the AI might also make the capsules very expensive, and the doctors will still require some training to supervise the operations. 

Meltzer suggests this is a first step toward a better and faster endoscopy procedure, noting the need to conduct more trials involving many more patients to uncover additional pros and cons of the capsules.

AnX Robotica, a Texas-based AI company, owns the capsule design. Hopefully, this innovation will play a crucial role in making endoscopy more accessible, feasible, and safer for patients across the globe.

The study is published in the journal iGIE.    

NVIDIA to Build Israel’s Most Potent AI Supercomputer

NVIDIA, the world’s top-ranking chip firm, is pouring hundreds of millions of dollars into building Israel’s most powerful artificial intelligence (AI) supercomputer, Israel-1. The move comes in response to a surge in demand for AI applications, as per the company’s announcement on Monday.

Set to be partly operational by year-end 2023, Israel-1 is expected to deliver up to eight exaflops of AI computing, placing it among the fastest AI supercomputers worldwide. Putting that into perspective, a single exaflop can perform a quintillion (that’s 18 zeros) calculations every second.

Super-AI

According to Gilad Shainer, Senior Vice President at NVIDIA, the upcoming supercomputer in Israel will be a game-changer for the thriving AI scene in the country. Shainer highlighted the extensive collaboration between NVIDIA and 800 startups nationwide, involving tens of thousands of software engineers.

Shainer emphasized the significance of large Graphics Processing Units (GPUs) in the development of AI and generative AI applications, stating, “AI is the most important technology in our lifetime.” He further explained the growing importance of generative AI, noting the need for robust training on large datasets.

The introduction of Israel-1 will provide Israeli companies with unprecedented access to a supercomputer resource. This high-performance system is expected to accelerate training processes, enabling the creation of frameworks and solutions capable of tackling more complex challenges.

An example of the potential of powerful computing resources is evident in projects like ChatGPT by OpenAI, which utilized thousands of NVIDIA GPUs. The conversational capabilities of ChatGPT showcase the possibilities when leveraging robust computing resources.

The development of the Israel-1 system was undertaken by the former Mellanox team, an Israeli chip design firm that NVIDIA acquired in 2019 for nearly $7 billion after outbidding Intel Corp.

While the primary focus of the new supercomputer is NVIDIA’s Israeli partners, the company remains open to expanding its reach. Shainer revealed, “We may use this system to work with partners outside of Israel down the road.”

In other news, NVIDIA recently announced a partnership with the University of Bristol in Britain. Their collaboration aims to build a new supercomputer powered by an innovative NVIDIA chip, positioning NVIDIA as a competitor to chip giants Intel and Advanced Micro Devices Inc.

IBM’s Quantum Leap: The Future Holds a 100,000-Qubit Supercomputer

IBM Aims for Unprecedented Quantum Computing Advancement: A 100,000-Qubit Supercomputer Collaboration with Leading Universities and Global Impact.

During the G7 summit in Hiroshima, Japan, IBM unveiled an ambitious $100 million initiative, joining forces with the University of Tokyo and the University of Chicago to construct a massive quantum computer boasting an astounding 100,000 qubits. This groundbreaking endeavor intends to revolutionize the computing field and unlock unparalleled possibilities across various domains.

Despite already holding the record for the largest quantum computing system with a 433-qubit processor, IBM’s forthcoming machine signifies a monumental leap forward in quantum capabilities. Rather than seeking to replace classical supercomputers, the project aims to synergize quantum power with classical computing to achieve groundbreaking advancements in drug discovery, fertilizer production, and battery performance.

IBM’s Vice President of Quantum, Jay Gambetta, envisions this collaborative effort as “quantum-centric supercomputing,” emphasizing the integration of the immense computational potential of quantum machines with the sophistication of classical supercomputers. By leveraging the strengths of both technologies, this fusion endeavors to tackle complex challenges that have long remained unsolvable. The initiative holds the potential to reshape scientific research and make significant contributions to the global scientific community.

Strides made for technological advancement

While significant progress has been made, the technology required for quantum-centric supercomputing is still in its infancy. IBM’s proof-of-principle experiments have shown promising results, demonstrating that integrated circuits based on CMOS technology can control cold qubits with minimal power consumption.

However, further innovations are necessary, and this is where collaboration with academic research institutions becomes crucial.

IBM’s modular chip design serves as the foundation for housing many qubits. With an individual chip unable to accommodate the sheer scale of qubits required, interconnects are being developed to facilitate the transfer of quantum information between modules.

IBM’s “Kookaburra,” a multichip processor with 1,386 qubits and a quantum communication link, is currently under development and anticipated for release in 2025. Additionally, the University of Tokyo and the University of Chicago actively contribute their expertise in components and communication innovations, making their mark on this monumental project.

As IBM embarks on this bold mission, it anticipates forging numerous industry-academic collaborations over the next decade. Recognizing the pivotal role of universities, Gambetta highlights the importance of empowering these institutions to leverage their strengths in research and development.

With the promise of a quantum-powered future on the horizon, the journey toward a 100,000-qubit supercomputer promises to unlock previously unimaginable scientific frontiers, revolutionizing our understanding of computation as we know it.

Nvidia Aims to Achieve Trillion-Dollar Milestone as Leading Chipmaker

Silicon Valley’s Nvidia Corp is set to become the first chipmaker to reach a valuation of $1 trillion, following a stunning sales forecast and soaring demand for its artificial intelligence (AI) processors.

Nvidia’s shares soared by 23% on Thursday morning in New York after it announced an $11 billion sales forecast for the next quarter, exceeding Wall Street’s estimates by more than 50%.

Nvidia is on track to become the first trillion-dollar chipmaker

This announcement added a whopping $170 billion to Nvidia’s market value, more than the entire value of Intel or Qualcomm. According to Bloomberg, this incredible increase constitutes the most significant one-day gain for a US stock ever.

With its market cap sitting at $927.2 billion, Nvidia is now edging closer to joining the exclusive trillion-dollar club that includes the likes of Apple, Microsoft, Alphabet, Amazon, and Saudi Aramco.

The Big Chip-Makers in Town

Nvidia’s recent successes are attributed to the skyrocketing demand for cutting-edge tech across various industries. The firm’s H100 processor is in high demand from big tech companies and a new wave of AI startups such as OpenAI and Anthropic.

These startups have raised billions in venture funding over recent months, putting Nvidia in a strong position in the growing AI market.

“Our chips and allied software tools are the picks and shovels of a generational shift in AI,” said Geoff Blaber, CEO of CCS Insight. “Nvidia provides a comprehensive toolchain that no other company currently matches,” he added.

The AI hype doesn’t stop with Nvidia; shares of AMD, a firm that produces specialized chips for AI, jumped 8% in early trading.

Microsoft and Google saw shares climb too. However, not everyone shared in the excitement. Intel’s shares fell 5% in early trading due to its perceived lagging in the AI transition.

Last year’s worries about a potential slowdown in cloud spending following a tech boom during the pandemic have been replaced by a frenzied enthusiasm for a new generation of AI. Pioneers in this space include chatbots like OpenAI’s ChatGPT and Google’s Bard.

However, even as tech giants like Amazon, Google, Meta, and Microsoft invest in their own AI chips, analysts say few can match Nvidia’s technological advantage.

Nvidia CEO Jensen Huang emphasizes that Nvidia is well-positioned for the AI revolution, thanks to 15 years of steady investment and production expansion.

“With generative AI becoming the primary workload of most of the world’s data centers… it’s apparent now that a data center’s budget will shift very dramatically towards accelerated computing,” Huang stated.

Despite past market fluctuations with earlier AI technologies and cryptocurrencies, Nvidia’s current success is a testament to the company’s resilience and potential. As it stands, Nvidia is in the right place at the right time, poised to lead the next generation of AI innovation.