
Nvidia Unveils ‘Chat with RTX’: Its Next Game-Changer in AI Technology

Nvidia is once again making waves in the tech world with its latest innovation: ‘Chat with RTX.’ Fresh off the success of their RTX 2000 Ada GPU launch, Nvidia is now venturing into the realm of AI-centric applications, and the early buzz surrounding ‘Chat with RTX’ is hard to ignore, especially among users with Nvidia’s RTX 30 or 40 series graphics cards.

Yesterday, Nvidia had heads turning with the introduction of the RTX 2000 Ada GPU. Today, they’re back in the spotlight with ‘Chat with RTX,’ an application designed to harness the power of newer Nvidia graphics cards, specifically the RTX 30 or 40 series.

If you’re onboard the tech train, get ready for an immersive AI experience that puts your computer in control of handling complex AI tasks effortlessly.

This groundbreaking application transforms your computer into a powerhouse, seamlessly managing the heavy lifting of AI-related functions. It is custom-built for tasks ranging from analyzing YouTube videos to deciphering dense documents.

The best part? You only need an Nvidia RTX 30 or 40-series GPU to embark on this AI adventure, making it an irresistible proposition for those already equipped with Nvidia’s latest graphics technology.

Time-Saving Capabilities with ‘Chat with RTX’

The allure of ‘Chat with RTX’ lies in its potential to save time, particularly for individuals dealing with vast amounts of information. Imagine swiftly extracting the essence of a video or pinpointing crucial details within a stack of documents.

It aims to be your go-to AI assistant for such scenarios, joining the ranks of other prominent chatbots like Google’s Gemini or OpenAI’s ChatGPT, but with a distinctive Nvidia touch.

That said, ‘Chat with RTX’ is not without imperfections. When functioning optimally, it adeptly guides you through critical sections of your content. Its true prowess shines when tackling documents, effortlessly navigating PDFs and other files and extracting vital details almost instantaneously.

For anyone familiar with the overwhelming task of sifting through extensive reading material for work or school, ‘Chat with RTX’ could be a game-changer.

Yet, like any innovation, ‘Chat with RTX’ is a work in progress. Setting it up requires patience, and it can be resource-intensive. Some wrinkles still need smoothing out: for instance, it does not retain memory of previous inquiries, so each question must be asked from scratch, without context from earlier ones.

Nevertheless, given Nvidia’s pivotal role in the ongoing AI revolution, these quirks are likely to be addressed swiftly as ‘Chat with RTX’ evolves.

Looking Ahead: The Future of AI Interaction

As we eagerly await the refinement of ‘Chat with RTX,’ the application provides a glimpse into the future of AI interactions. Nvidia, renowned for its trailblazing efforts in the AI field, appears poised to push the boundaries further and shape the future of AI assistance.

While ‘Chat with RTX’ may have some rough edges at present, it represents a promising stride forward in AI integration. Keep an eye out as Nvidia continues to lead the charge in driving innovation. Stay tuned for updates on ‘Chat with RTX’ and the exciting possibilities it holds.

Nvidia Unveils GH200 GraceHopper: Next-Gen Superchips for Complex AI Workloads

In a recent press release, Nvidia, the world’s foremost supplier of chips for artificial intelligence (AI) applications, introduced its latest breakthrough: the next generation of superchips, designed to tackle the most intricate generative AI workloads. The platform, named GH200 GraceHopper, boasts an unprecedented feature: it is the world’s first processor equipped with HBM3e memory.

Combining Power: The Birth of GH200 GraceHopper

Nvidia’s GH200 GraceHopper superchip is the result of merging two distinct platforms: the Hopper platform, housing the graphics processing unit (GPU), and the Grace CPU platform, which handles general-purpose processing. Both platforms are named in honor of computer programming pioneer Grace Hopper, and they have now been seamlessly amalgamated into a single superchip.

From Graphics to AI: The Evolution of GPUs

Historically, GPUs have been synonymous with high-end graphic processing in computers and gaming consoles. However, their immense computational capabilities have found new applications in fields like cryptocurrency mining and AI model training.

Powering AI through Collaborative Computing

Notably, Microsoft’s Azure and OpenAI have harnessed Nvidia’s chips to build substantial computing systems. By employing Nvidia’s A100 chips and creating infrastructures to distribute the load of large datasets, Microsoft facilitated the training of GPT models, exemplified by the popular ChatGPT.

Nvidia’s Pursuit of AI Dominance

Nvidia, the driving force behind chip production, now seeks to independently construct large-scale data processing systems. The introduction of the Nvidia MGX platform empowers businesses to internally train and deploy AI models, underscoring Nvidia’s commitment to AI advancement.

The GH200 GraceHopper: A Leap Forward in Superchip Technology

Nvidia’s achievement in crafting the GH200 superchip can be attributed to its proprietary NVLink technology, which facilitates chip-to-chip (C2C) interconnections. This innovation grants the GPU unfettered access to the CPU’s memory, resulting in a robust configuration that offers a substantial 1.2 TB of high-speed memory.

Unveiling the HBM3e Processor

The GH200 GraceHopper is distinguished by being the first processor to include HBM3e memory, which is 50% faster than its predecessor, HBM3. In a single server setup featuring 144 Neoverse cores, a staggering eight petaflops of AI performance can be achieved. With a combined bandwidth of 10TB/sec, the GH200 platform can process AI models that are 3.5 times larger, 3 times faster than previous Nvidia platforms could.
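As a rough sense of scale, the quoted eight petaflops per server implies that 125 such servers would be needed to reach one exaflop of aggregate AI performance. A back-of-the-envelope sketch in Python (the per-server figure comes from the article; the aggregation and the perfect-scaling assumption are ours):

```python
# Illustrative scaling arithmetic based on the article's per-server figure.
PETA = 10**15
EXA = 10**18

per_server_flops = 8 * PETA          # eight petaflops of AI performance per GH200 server
servers_per_exaflop = EXA / per_server_flops

print(servers_per_exaflop)           # 125.0 servers, assuming perfect linear scaling
```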

Nvidia’s Unrivaled Market Position

Having briefly entered the $1 trillion valuation echelon earlier in the year, Nvidia commands over 90% of the market share in chip supply for AI and related applications. The demand for GPUs extends beyond training AI models to their operational execution, and this demand is poised to escalate as AI integration becomes commonplace. Evidently, not only chip manufacturers such as AMD, but also tech giants like Google and Amazon, are actively developing their offerings in this burgeoning sector.

Charting a Technological Course: GH200’s Arrival

The unveiling of the GH200 GraceHopper superchip solidifies Nvidia’s status as the premier technology provider. Anticipated to be available for users in Q2 2024, these groundbreaking chips promise to reshape the landscape of AI processing, further establishing Nvidia’s dominance in the industry.

NVIDIA Unveils ‘Grace Hopper’: Next-Gen CPU+GPU Chip for AI Models

NVIDIA, renowned for its advancements in artificial intelligence (AI), has introduced its latest CPU+GPU chip, Grace Hopper, which promises to usher in the next era of AI models and chatbots.

While traditionally known for their role in accelerating graphics rendering for computer games, graphics processing units (GPUs) have demonstrated significantly higher computing power compared to central processing unit (CPU) chips. This led tech companies to adopt GPUs for training AI models due to their ability to perform multiple calculations simultaneously, in parallel.

In 2020, NVIDIA introduced the A100 GPU chip, which proved instrumental in training early iterations of conversational chatbots and image generators. However, within just a short span, the highly advanced H100 Hopper chips have emerged as essential components in data centers that power popular chatbots like ChatGPT. Now, NVIDIA has unveiled a groundbreaking chip that integrates both CPU and GPU capabilities.

The Grace Hopper chip represents a significant leap forward, combining the strengths of CPU and GPU technologies to enhance AI model training and performance. Its introduction marks a new milestone in the ongoing development of AI hardware, enabling more efficient and powerful computing capabilities for AI-related applications.

As the AI landscape continues to evolve, NVIDIA’s Grace Hopper chip aims to play a pivotal role in driving advancements in AI models and chatbot technologies, propelling the field toward unprecedented possibilities.

What are Grace Hopper chips from Nvidia?

According to a press release, Nvidia has created its new chip by combining its Hopper GPU platform with the Grace CPU platform (both named after Grace Hopper, a pioneer of computer programming). The two chips have been connected using Nvidia’s NVLink chip-to-chip (C2C) interconnect technology.

Dubbed GH200, the super chip pairs 528 GPU tensor cores with 480 GB of CPU RAM and 96 GB of GPU RAM. The GPU memory bandwidth on the GH200 is 4TB per second, twice that of the A100 chips.

The super chip also boasts a 900GB/s coherent memory interface, seven times faster than the latest-generation PCIe, which only became available this year. Along with running all Nvidia software such as the HPC SDK, Nvidia AI, and Omniverse, the GH200 has 30 times higher aggregate memory bandwidth than the A100 chips.
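To make the 4TB-per-second figure concrete, here is a small illustrative calculation (the 96 GB and 4TB/s numbers come from the paragraphs above; treating a full read of GPU memory as the workload is our own simplification):

```python
GB = 10**9
TB = 10**12

gpu_memory_bytes = 96 * GB       # GH200 GPU RAM, per the article
bandwidth_bytes_per_s = 4 * TB   # GH200 GPU memory bandwidth, per the article

# Time to stream the entire GPU memory once, ignoring all overheads.
sweep_seconds = gpu_memory_bytes / bandwidth_bytes_per_s
print(sweep_seconds)             # 0.024 s, i.e. roughly 41 full sweeps per second
```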

What will chips be used for?

Nvidia, well on its way to becoming a trillion-dollar company, expects the GH200 chips to be used for giant-scale AI and high-performance computing (HPC) applications. At this point in time, one can only imagine AI models and chatbots that are faster and more accurate being built with this superior technology.

The company also plans to use them to build a new exaflop supercomputer capable of performing 10^18 floating point operations per second (FLOPS). Two hundred fifty-six of the GH200 chips will be put together to function as one large GPU with 144 TB of shared memory, about 500 times that of the A100.
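The 144 TB shared-memory figure is broadly consistent with the per-chip numbers quoted earlier in this article (480 GB of CPU RAM plus 96 GB of GPU RAM per GH200). A quick sanity check, assuming those per-chip figures apply to the supercomputer’s nodes:

```python
GB = 10**9
TB = 10**12

cpu_ram = 480 * GB    # per-chip CPU memory, per the article
gpu_ram = 96 * GB     # per-chip GPU memory, per the article
chips = 256

aggregate_tb = chips * (cpu_ram + gpu_ram) / TB
print(aggregate_tb)   # 147.456 TB, close to the quoted 144 TB
```

The roughly 2% gap between 147.456 TB and the quoted 144 TB plausibly reflects memory reserved by the system or rounding in the press material.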

“Generative AI is rapidly transforming businesses, unlocking new opportunities, and accelerating discovery in healthcare, finance, business services, and many more industries,” said Ian Buck, vice president of accelerated computing at NVIDIA, in a press release. “With Grace Hopper Superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.”

Global hyperscalers and supercomputing centers in the U.S. and Europe will get access to the GH200-powered systems later this year, the release added.

Nvidia and MediaTek Collaborate to Unveil Next-Generation AI-Powered In-Car Systems

As the demand for advanced in-car entertainment and communication systems continues to grow, Nvidia and MediaTek have announced a strategic partnership to introduce next-generation solutions that leverage artificial intelligence (AI) to enhance the driving experience.

Under the partnership, MediaTek will develop SoCs (system-on-a-chip) that integrate Nvidia’s GPU (graphics processing unit) chipset, which offers advanced AI and graphics capabilities. The collaboration aims to create a comprehensive, one-stop-shop for the automotive industry, delivering intelligent, always-connected vehicles that meet evolving consumer needs.

According to Rick Tsai, CEO of MediaTek, this partnership will enable the development of “the next generation of intelligent, always-connected vehicles.” With this collaboration, Nvidia and MediaTek are poised to transform the in-car infotainment experience, enabling drivers to stream video, play games, and interact with their vehicles using cutting-edge AI technology.

Partnership to widen the market for both players

Nvidia has a range of GPU solutions for computers and servers, and SoCs for automotive and robotic applications. Now, the firm hopes to cover broader markets by having MediaTek integrate its GPU chipset into automotive SoCs. Through the cooperation with MediaTek, Nvidia will gain better access to the $12 billion market for infotainment SoCs.

Nvidia will be able to offer its “DRIVE OS, DRIVE IX, CUDA, and TensorRT software technologies on these new automotive SoCs to enable connected infotainment and in-cabin convenience and safety functions.” This will make in-vehicle infotainment options available to automakers on the Nvidia DRIVE platform.

Automakers have been employing NVIDIA’s technology for infotainment systems, graphical user interfaces, and touchscreens for well over a decade to help modernize their car cockpits. According to the statement, the capabilities of MediaTek’s Dimensity Auto platform are expected to see a marked improvement from NVIDIA’s core competencies in AI, cloud, graphics technology, the software ecosystem, and advanced driver assistance systems.

MediaTek’s Dimensity Auto platform enables smart multi-displays, high-dynamic range cameras, and audio processing, allowing drivers and passengers to engage with cockpit and infotainment systems easily. According to Reuters, Nvidia has until now centered its efforts on high-end premium automakers, whereas MediaTek, with its roots in the Android smartphone chip industry, sells its Dimensity Auto technology to mass-market, cost-efficient automakers. The collaboration is set to benefit all car classes, from luxury to entry-level, offering new user experiences, improved safety, and new connected services.

“By integrating the NVIDIA GPU chiplet into its automotive offering, MediaTek aims to enhance the performance capabilities of its Dimensity Auto platform to deliver the most advanced in-cabin experience available in the market,” the statement added. The platform also has Auto Connect, a function that uses high-speed telematics and Wi-Fi networking to guarantee that drivers stay wirelessly connected. The partnership plans to release its first offering by the end of 2025.

NVIDIA to Build Israel’s Most Potent AI Supercomputer

NVIDIA, the world’s top-ranking chip firm, is pouring hundreds of millions into building Israel’s most powerful artificial intelligence (AI) supercomputer, Israel-1. The move comes as a response to a surge in demand for AI applications, as per the company’s announcement on Monday.

Set to be partly operational by year-end 2023, Israel-1 is expected to deliver up to eight exaflops of AI computing, placing it among the fastest AI supercomputers worldwide. To put that into perspective, a single exaflop can perform a quintillion (that’s a 1 followed by 18 zeros) calculations every second.
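That exaflop arithmetic is easy to make concrete. A short sketch (the eight-exaflop peak comes from the announcement; the 10^21-operation workload is a purely hypothetical example):

```python
EXA = 10**18

israel1_flops = 8 * EXA   # claimed peak AI compute of Israel-1
workload_ops = 10**21     # hypothetical job: a thousand exa-operations

seconds = workload_ops / israel1_flops
print(seconds)            # 125.0 seconds at peak throughput
```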


According to Gilad Shainer, Senior Vice President at NVIDIA, the upcoming supercomputer in Israel will be a game-changer for the thriving AI scene in the country. Shainer highlighted the extensive collaboration between NVIDIA and 800 startups nationwide, involving tens of thousands of software engineers.

Shainer emphasized the significance of large Graphics Processing Units (GPUs) in the development of AI and generative AI applications, stating, “AI is the most important technology in our lifetime.” He further explained the growing importance of generative AI, noting the need for robust training on large datasets.

The introduction of Israel-1 will provide Israeli companies with unprecedented access to a supercomputer resource. This high-performance system is expected to accelerate training processes, enabling the creation of frameworks and solutions capable of tackling more complex challenges.

An example of the potential of powerful computing resources is evident in projects like ChatGPT by OpenAI, which utilized thousands of NVIDIA GPUs. The conversational capabilities of ChatGPT showcase the possibilities when leveraging robust computing resources.

The development of the Israel-1 system was undertaken by the former Mellanox team, an Israeli chip design firm that NVIDIA acquired in 2019 for nearly $7 billion, outbidding Intel Corp.

While the primary focus of the new supercomputer is NVIDIA’s Israeli partners, the company remains open to expanding its reach. Shainer revealed, “We may use this system to work with partners outside of Israel down the road.”

In other news, NVIDIA recently announced a partnership with the University of Bristol in Britain. Their collaboration aims to build a new supercomputer powered by an innovative NVIDIA chip, positioning NVIDIA as a competitor to chip giants Intel and Advanced Micro Devices Inc.

NVIDIA Strengthens Portfolio to Offer More AI Products and Services

Chipmaker NVIDIA has unveiled a range of Artificial Intelligence (AI) products as it strives to stay ahead of the game and join the trillion-dollar valuation club alongside Apple, Microsoft, and Amazon. The announcement closely follows a market rally that saw NVIDIA’s stock surge by over 25 percent last week.

While NVIDIA was once primarily known for manufacturing chips for gaming enthusiasts, it now occupies a central position in the AI frenzy that has captivated the world. The company’s graphic processing units (GPUs) have become an essential component of AI tools, with its A100 and H100 chips gaining widespread recognition, particularly through the popularity of tools like ChatGPT.

Notably, NVIDIA recently revealed its sales forecast for the upcoming quarter, projecting a figure of $11 billion. This estimate surpassed Wall Street’s expectations by more than 50 percent, leading to a significant surge in the company’s stock value and bringing its market cap tantalizingly close to $1 trillion.

NVIDIA’s new lineup

Nvidia CEO Jensen Huang unveiled a new line-up of AI products and services, which also included a supercomputer platform called DGX GH200. The platform is expected to help companies create products as powerful as ChatGPT, which require high amounts of computing power.

It has previously been reported how companies like Microsoft stitched together chips to create a computing system that could cater to the needs of OpenAI, the creator of ChatGPT.

It now appears that Nvidia will itself provide a platform that companies like Microsoft, Meta, or Google can use. This is also an attempt to keep users hooked on Nvidia’s chips, even as alternatives are being developed in the market.

Additionally, the company will also build its own supercomputers that customers can directly use. These will be located in Taiwan, The Straits Times reported.

Nvidia is also keen to address the issue of the slow speed of data movement inside data centers and will deploy its new networking system, dubbed Spectrum X, at a data center in Israel to demonstrate its effectiveness.

Nvidia has also partnered with advertising firm WPP to leverage the power of AI in advertising. WPP will deploy Nvidia’s Omniverse to create “virtual twins” of products, which can then be manipulated to create custom ads for customers while reducing costs.

Nvidia also plans to expand its offering beyond chips for its hardcore base of users: gamers. The company will now deploy its ACE services, which will improve the gaming experience by addressing problems with non-player characters, or NPCs.

The ACE service will use information from the game’s main characters and use AI to create more natural responses than the scripted, repetitive lines typical of NPCs, the company said. The service is currently under testing to ensure that the responses are not offensive or inappropriate.

Nvidia Aims to Achieve Trillion-Dollar Milestone as Leading Chipmaker

Silicon Valley’s Nvidia Corp is set to become the first chipmaker to reach a valuation of $1 trillion, following a stunning sales forecast and soaring demand for its artificial intelligence (AI) processors.

Nvidia’s shares soared by 23% on Thursday morning in New York after it announced an $11 billion sales forecast for the next quarter, exceeding Wall Street’s estimates by more than 50%.

Nvidia is on track to becoming the first trillion-dollar chipmaker

This announcement added a whopping $170 billion to Nvidia’s market value, more than the entire value of Intel or Qualcomm. According to Bloomberg, this incredible increase constitutes the most significant one-day gain for a US stock ever.

With its market cap sitting at $927.2 billion, Nvidia is now edging closer to joining the exclusive trillion-dollar club that includes the likes of Apple, Microsoft, Alphabet, Amazon, and Saudi Aramco.

The Big Chip-Makers in Town

Nvidia’s recent successes are attributed to the skyrocketing demand for cutting-edge tech across various industries. The firm’s H100 processor is in high demand among big tech companies and a new wave of AI startups such as OpenAI and Anthropic.

These startups have raised billions in venture funding over recent months, putting Nvidia in a strong position in the growing AI market.

“Our chips and allied software tools are the picks and shovels of a generational shift in AI,” said Geoff Blaber, CEO of CCS Insight. “Nvidia provides a comprehensive toolchain that no other company currently matches,” he added.

The AI hype doesn’t stop with Nvidia; shares of AMD, a firm that produces specialized chips for AI, jumped 8% in early trading.

Microsoft and Google saw shares climb too. However, not everyone shared in the excitement. Intel’s shares fell 5% in early trading due to its perceived lagging in the AI transition.

Last year’s worries about a potential slowdown in cloud spending following a tech boom during the pandemic have been replaced by a frenzied enthusiasm for a new generation of AI. Pioneers in this space include chatbots like OpenAI’s ChatGPT and Google’s Bard.

However, even as tech giants like Amazon, Google, Meta, and Microsoft invest in their own AI chips, analysts say only some can match Nvidia’s technological advantage.

Nvidia CEO Jensen Huang emphasizes that Nvidia is well-positioned for the AI revolution, thanks to 15 years of steady investment and production expansion.

“With generative AI becoming the primary workload of most of the world’s data centers… it’s apparent now that a data center’s budget will shift very dramatically towards accelerated computing,” Huang stated.

Despite past market fluctuations with earlier AI technologies and cryptocurrencies, Nvidia’s current success is a testament to the company’s resilience and potential. As it stands, Nvidia is in the right place at the right time, poised to lead the next generation of AI innovation.

As Per Google: Its AI Supercomputer Is Faster and Greener Than the Nvidia A100 Chip

Alphabet Inc’s Google on Tuesday released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.

Google has designed its own custom chip called the Tensor Processing Unit, or TPU. It uses those chips for more than 90 per cent of the company’s work on artificial intelligence training, the process of feeding data through models to make them useful at tasks like responding to queries with human-like text or generating images.

The Google TPU is now in its fourth generation. Google on Tuesday published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its own custom-developed optical switches to help connect individual machines.

Improving these connections has become a key point of competition among companies that build AI supercomputers because so-called large language models that power technologies like Google’s Bard or OpenAI’s ChatGPT have exploded in size, meaning they are far too large to store on a single chip.

The models must instead be split across thousands of chips, which must then work together for weeks or more to train the model. Google’s PaLM model – its largest publicly disclosed language model to date – was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.
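Those PaLM numbers translate into a striking amount of hardware time. An illustrative calculation (chip counts and duration taken from the paragraph above; utilization, failures, and restarts are ignored):

```python
chips_per_pod = 4000   # approximate size of each TPU v4 supercomputer
pods = 2               # PaLM was split across two of them
days = 50              # reported training duration

chip_days = chips_per_pod * pods * days
chip_hours = chip_days * 24

print(chip_days)   # 400000 chip-days
print(chip_hours)  # 9600000 chip-hours
```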

Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping avoid problems and tweak for performance gains.

“Circuit switching makes it easy to route around failed components,” Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. “This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model.”

While Google is only now releasing details about its supercomputer, it has been online inside the company since 2020 in a data centre in Mayes County, Oklahoma. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.

In the paper, Google said that for comparably sized systems, its supercomputer is up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia’s A100 chip that was on the market at the same time as the fourth-generation TPU.
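Taken together, the two ratios also imply how the systems’ power draw compares, under the simplifying assumption that “power-efficient” means performance per watt:

```python
speedup = 1.7        # TPU v4 system speed relative to the A100 system, per the paper
perf_per_watt = 1.9  # TPU v4 power-efficiency advantage, per the paper

# If perf_tpu = 1.7 * perf_a100 and (perf/watt)_tpu = 1.9 * (perf/watt)_a100,
# then watts_tpu / watts_a100 = speedup / perf_per_watt.
power_ratio = speedup / perf_per_watt
print(round(power_ratio, 2))  # 0.89: the TPU system draws ~11% less power while running faster
```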

Google said it did not compare its fourth-generation to Nvidia’s current flagship H100 chip because the H100 came to the market after Google’s chip and is made with newer technology.

Google hinted that it might be working on a new TPU that would compete with the Nvidia H100 but provided no details, with Jouppi telling Reuters that Google has “a healthy pipeline of future chips.”