Amazon Advances in Generative AI with Custom Chips and Tools


While Microsoft and Google have held the spotlight in the field of generative AI, Amazon, quietly driven by its founder Jeff Bezos, has been making significant strides in enabling its customers to engage directly with this cutting-edge technology. Inside an unassuming building in Austin, Texas, Amazon engineers are developing two distinct families of microchips built specifically for training and running AI models, according to a report by CNBC.

Generative AI burst onto the global stage with the launch of OpenAI’s ChatGPT last year. Microsoft reacted swiftly, capitalizing on its existing collaboration with OpenAI to integrate the AI model’s capabilities into its products. Yet the landscape is poised for a substantial shift in how the technology is used. Amazon, which commands a roughly 40 percent share of the cloud computing market, attributes that untapped potential to the scarcity of tools that let businesses harness their existing data and train models with it.

Amazon executives told CNBC that enterprises are not inclined to migrate their cloud data to Microsoft merely because it currently leads in generative AI. Amazon has therefore chosen to invest in tools that let businesses work directly with the data they already store in its cloud.

Amazon’s Tool Arsenal

Rather than simply letting users deploy language models like GPT on its cloud servers, Amazon has built its own family of large language models, called “Titan.” The suite is complemented by a service called “Bedrock,” tailored for generative AI applications. Bedrock gives users access not only to Amazon’s models but also to a range of models from external providers such as Anthropic, Stability AI, and AI21 Labs.
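To make that concrete, here is a minimal sketch of what invoking a Titan model through Bedrock could look like using the AWS SDK for Python (boto3). The model ID, request schema, and response fields shown are assumptions for illustration, not details confirmed by the article:

```python
# Minimal sketch: calling a Titan text model through Bedrock via boto3.
# The model ID and request/response schema below are assumptions and may
# differ from the live Bedrock API; verify against the AWS documentation.
import json

import boto3

# Bedrock exposes model inference through a dedicated runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed Titan model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "inputText": "Summarize last quarter's sales trends in two sentences.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
    }),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

Because Bedrock fronts third-party models as well, switching to an Anthropic or AI21 Labs model would in principle be a matter of changing the model ID and payload format.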

However, Amazon’s ambitions don’t end with providing its own large language models (LLMs). The company recognizes that its models may not fit every use case, and it therefore aims to give users the autonomy to select the model that best matches their specific application, as the sketch below illustrates.
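As a further hypothetical illustration of that flexibility, Bedrock’s control-plane client can enumerate the foundation models on offer so an application can choose among them; the client and field names below are assumptions:

```python
# Minimal sketch: listing the foundation models available through Bedrock.
# The client name and response fields are assumptions; check the boto3 docs.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for summary in bedrock.list_foundation_models()["modelSummaries"]:
    # Entries span Amazon's own Titan models and third-party ones from
    # providers such as Anthropic, Stability AI, and AI21 Labs.
    print(summary["providerName"], summary["modelId"])
```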

Showcasing the Power of In-House Chips

Amazon appears to be following a broader Silicon Valley trend in which companies increasingly bypass traditional chip manufacturers in favor of designing their own silicon. This strategic shift isn’t new for Amazon: almost a decade ago, the company integrated its custom-designed “Nitro” chips into its cloud infrastructure. With more than 20 million Nitro chips deployed, roughly one per AWS server, Amazon already has a substantial custom-silicon footprint.

In 2018, Amazon unveiled Graviton, an Arm-based server chip that competes with x86 offerings from industry giants like AMD and Intel. Around the same time, Amazon began developing AI-focused chips, a move aimed at challenging Nvidia’s preeminence in that domain.

Branded “Trainium” and “Inferentia,” Amazon’s AI chips are named for their respective roles in training and running models. Inferentia, now in its second generation, is built for low-cost, high-throughput inference. Trainium, meanwhile, delivers a 50 percent improvement in price performance over other ways of training models within the AWS ecosystem, according to Amazon executives speaking to CNBC. Confident in the appeal of these offerings, Amazon envisions companies choosing its chips to train models on their own data rather than handing that data to OpenAI.
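For a sense of the developer workflow, here is a minimal sketch of compiling a PyTorch model for these chips with the AWS Neuron SDK. The torch_neuronx package and its trace() API are assumptions based on the Neuron SDK’s PyTorch integration and are not described in the article:

```python
# Minimal sketch: ahead-of-time compiling a PyTorch model for Neuron cores
# (Inferentia/Trainium). torch_neuronx and trace() are assumptions based on
# the AWS Neuron SDK's PyTorch integration; verify against the Neuron docs.
import torch
import torch_neuronx  # assumed Neuron extension, preinstalled on Inf2/Trn1 instances


class TinyClassifier(torch.nn.Module):
    """Toy model standing in for a real workload."""

    def __init__(self) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier().eval()
example_input = torch.rand(1, 128)

# Compile for the Neuron cores, then save the artifact for low-latency serving.
neuron_model = torch_neuronx.trace(model, example_input)
torch.jit.save(neuron_model, "tiny_classifier_neuron.pt")
```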

With an integrated infrastructure and a robust suite of tools at its disposal, Amazon is positioned not only to catch up with the likes of Google and Microsoft but potentially to overtake them.