The Spending Crisis of Cloud-Based Generative AI

The rapid adoption of generative AI is prompting organizations to allocate unprecedented resources. Developing and deploying generative AI systems is no longer a discretionary choice; it has become a top priority for boards and executive leadership. That raises a pressing question: how do we fund these initiatives, whether they run in the cloud or not?

For those who have previously managed budgets, the figures can be daunting. IT executives are now bracing for 2023 generative AI budgets that are 3.4 times larger than originally anticipated. However, a mere 15% of technology executives expect to cover this expansion with entirely new funding.

Robbing Peter to Pay Paul

So, where is this influx of funding coming from? Few companies are sitting on untouched cash reserves, and as a result, 33% of technology executives are devising strategies to reallocate resources from other segments of their IT portfolios. This includes 37% of tech leaders who plan to divert generative AI funding from their broader AI investment pool.

The cost of generative AI extends beyond cloud fees for running these systems; it encompasses staffing expenses. The repercussions of generative AI on labor and cloud spending are expected to be extensive, with considerable investments required to identify, train, and retain the right talent for deploying generative AI systems. These individuals will command significantly higher salaries compared to those managing conventional systems – the very systems facing budget cuts.

CEOs must gain a clear grasp of how high-impact projects will efficiently utilize resources, enabling them to budget for the associated costs. It’s highly likely that this will give rise to numerous cautionary tales. Some enterprises may trim their budgets too drastically, inadvertently alienating the very individuals who are currently driving their business. I’ve witnessed this scenario unfold during past technological transitions, where the detriments to different facets of the company outweighed the benefits of the new technologies.

This is why I’ve consistently declined CIO positions when offered.

The Human Element in Generative AI and the Cloud

The cost of staffing could potentially undermine your AI strategy, and it should be your foremost concern. Currently, there are at least 20 open positions for every qualified candidate. This situation might improve as the market matures and more individuals take advantage of training and self-learning opportunities. However, the challenge remains that companies require in-house expertise to gain a competitive edge in generative AI in the cloud, and they may struggle to secure it in a timely manner.

For those wondering about the sought-after skills, they encompass data science, engineering, and design thinking. While understanding specific generative AI systems on a particular cloud platform is important, the focus should be on skills that can transcend these systems. Relying solely on a candidate with expertise limited to a single cloud provider will only yield limited benefits.

Navigating the Dual Challenges of Funding and Talent Acquisition

As we continue to expand AI in the cloud over the next few years, project failures won’t be attributed to technology falling short of expectations. Instead, the culprits will be underfunding and the inability to source the right talent—essentially, the same reasons traditional cloud projects fail. However, in this context, the consequences could be five times more severe, considering the nature of generative AI and our current stage of development.

I have a few recommendations, naturally. First, question whether generative AI systems are genuinely necessary. We are already seeing generative AI misapplied to straightforward business systems it does nothing to improve. It delivers the most value in systems that genuinely need large language models and can return substantial ROI through cost savings and strategic advantages.

Furthermore, although most generative AI deployments will run in the cloud, we need to actively evaluate all platform options, including on-premises data centers, to determine the most efficient operational approach. Once again, we need impartial architecture assessments to make decisions that may seem unconventional amid the prevailing hype but are ultimately the best choices for the business.

We’ve navigated similar challenges with previous trending technologies, such as client/server, the internet, service-oriented architecture, and cloud computing. Generative AI, given its capabilities, is poised to be a significant differentiator for enterprises that harness its potential. When that much potential is attainable, these kinds of challenges will always emerge.