Snowflake and Nvidia have joined forces to give businesses a platform within the Snowflake Data Cloud for developing customized generative artificial intelligence (AI) applications using their own proprietary data. The collaboration was announced at Snowflake Summit 2023.
The integration pairs Nvidia’s NeMo platform, which provides large language models (LLMs) and GPU-accelerated computing, with Snowflake’s data platform, letting enterprises use the data in their Snowflake accounts to build custom LLMs for generative AI services such as chatbots, search, and summarization.
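The announced integration is designed to keep that data inside the Data Cloud, but the raw material is the same tabular and text data enterprises already query today. As a minimal sketch of what that proprietary data looks like in practice, the snippet below uses the standard Snowflake Python connector to pull a batch of text records that could serve as training material for model customization; the credentials, warehouse, database, table, and column names are hypothetical placeholders, not part of the announced product.

```python
# Hypothetical sketch: gathering proprietary text from a Snowflake account
# as raw material for LLM customization. Connection parameters, table, and
# column names are placeholders. The announced NeMo integration runs inside
# the Data Cloud, so in practice this data would not need to be extracted.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse="ANALYTICS_WH",    # hypothetical warehouse
    database="ENTERPRISE_DATA",  # hypothetical database
    schema="SUPPORT",            # hypothetical schema
)

try:
    cur = conn.cursor()
    # Hypothetical table of customer support transcripts to adapt a model on.
    cur.execute("SELECT ticket_id, transcript FROM SUPPORT_TICKETS LIMIT 1000")
    training_texts = [row[1] for row in cur.fetchall()]
finally:
    conn.close()

print(f"Collected {len(training_texts)} documents for model customization.")
```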
According to Manuvir Das, Nvidia’s head of enterprise computing, the partnership stands out by letting customers customize generative AI models in the cloud to meet their specific enterprise requirements. They can work with their proprietary data to build cutting-edge generative AI applications without moving the data out of the secure Data Cloud environment, which reduces cost and latency while maintaining data security.
Jensen Huang, founder and CEO of Nvidia, highlighted the significance of data in developing generative AI applications that understand the unique operations and voice of each company. He stated that together, Nvidia and Snowflake will create an AI factory that empowers enterprises to transform their valuable data into custom generative AI models, all within the cloud platform they use to run their businesses.
The collaboration between Snowflake and Nvidia opens up new opportunities for enterprises to leverage their proprietary data, which can range from hundreds of terabytes to petabytes of raw and curated business information. This data can be utilized to create and refine custom LLMs, enabling the development of business-specific applications and services.
Nvidia’s Das believes that enterprises using customized generative AI models trained on their proprietary data can maintain a competitive advantage over those relying on vendor-specific models. Customizing models allows applications to leverage institutional knowledge: the accumulated information about a company’s brand, voice, policies, and operational interactions with customers.
Building a capable LLM requires three things: abundant data, a robust model, and accelerated computing, and the Snowflake-Nvidia collaboration covers all three. Nvidia’s NeMo, a cloud-native enterprise platform, lets users build, customize, and deploy generative AI models with billions of parameters. Snowflake will host and run NeMo within the Snowflake Data Cloud, enabling customers to develop and deploy custom LLMs for generative AI applications.
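The article does not show NeMo’s APIs, so the sketch below uses the Hugging Face transformers library purely as a stand-in to illustrate the general shape of customizing a pretrained LLM on proprietary text. The base model, documents, and training settings are placeholder assumptions; a real deployment would run NeMo-based training on GPU-accelerated infrastructure inside the Data Cloud rather than this toy setup.

```python
# Illustrative stand-in only: the announced integration uses Nvidia NeMo inside
# the Snowflake Data Cloud. This sketch shows the general fine-tuning flow
# with Hugging Face transformers; the model and data are placeholders.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder base model; a real deployment would use a far larger LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder stand-in for proprietary business documents from the Data Cloud.
documents = [
    "Policy: refunds are issued within 14 days of purchase...",
    "Support transcript: customer asked about enterprise licensing...",
]
encodings = tokenizer(documents, truncation=True, padding=True, max_length=512)

class TextDataset(torch.utils.data.Dataset):
    """Wraps tokenized documents for causal language-model fine-tuning."""
    def __init__(self, encodings):
        self.encodings = encodings
    def __len__(self):
        return len(self.encodings["input_ids"])
    def __getitem__(self, idx):
        return {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TextDataset(encodings),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("custom-llm")
```

The resulting checkpoint is what the article calls a custom LLM: a base model adapted to a company’s own documents, which can then back chatbot, search, or summarization services.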
Nvidia also announced its commitment to providing accelerated computing and a comprehensive suite of AI software as part of the collaboration. The company is working on integrating the Nvidia AI engine into Snowflake’s Data Cloud through substantial co-engineering efforts.
Generative AI is considered one of the most transformative technologies of our time, potentially impacting nearly every business function. Das believes it is a multi-trillion-dollar opportunity that has the potential to transform every industry as enterprises begin to build and deploy custom models using their valuable data. Nvidia, as a platform company, aims to assist partners and customers in leveraging AI’s power with accelerated computing and full-stack software designed to meet the unique needs of various industries.