
ServiceNow Expands Generative AI Capabilities

ServiceNow, the low-code enterprise automation company, has announced an expansion of its Now Platform's generative AI capabilities. The new features include case summarization and text-to-code, aimed at boosting speed, productivity, and overall value for customers across industries.

The purpose of these generative AI capabilities is to alleviate repetitive tasks and enhance productivity. The case summarization feature automatically extracts crucial information from IT, HR, and customer service cases, streamlining the resolution process. On the other hand, text-to-code converts natural language text prompts into executable code for the ServiceNow platform, providing developers with an efficient way to create code.

Jon Sigler, VP of Now Platform at ServiceNow, highlighted the significance of these advancements, stating that by integrating generative AI into their platform and workflow offerings, they aim to help customers drive business value from a single source. The ServiceNow large language model (LLM) was specifically designed to comprehend the Now Platform, workflows, automation use cases, and processes, ensuring accuracy, performance, and increased trust for ServiceNow use cases.

Strategic partnerships with tech giants such as Nvidia and Hugging Face played a crucial role in the exclusive development of ServiceNow’s LLM. These alliances expedited the integration of enterprise generative AI capabilities by combining cutting-edge research, advanced AI infrastructure, and expert domain knowledge.

The company’s collaboration with Nvidia and Accenture also led to the creation of the AI Lighthouse program, allowing customer companies to swiftly develop their own generative AI applications without lengthy assessment and procurement processes. Through this initiative, customers gain access to ServiceNow’s enterprise automation platform and engine, Nvidia’s AI supercomputing and software, and Accenture’s consulting and deployment services.

Case summarization and text-to-code are expected to significantly improve enterprise productivity. Case summarization enhances efficiency by surfacing essential information from case details, prior interactions, and resolutions, leading to faster hand-offs between internal teams and streamlined resolutions for customers and employees. Text-to-code, meanwhile, gives developers a time-efficient way to generate code for routine commands, improving code hygiene, accuracy, and quality.

The integration of generative AI technology into the Now Platform makes enterprise coding more accessible and effective for developers, regardless of their coding expertise. ServiceNow's text-to-code LLM is a fine-tuned version of StarCoder, the code-generation model ServiceNow co-developed with Hugging Face through the BigCode project, further trained on ServiceNow's own enterprise data. This specialized LLM is expected to boost the productivity of users across the ServiceNow platform throughout their organizations.

Looking ahead, ServiceNow is actively exploring how generative AI can enhance the efficiency of their sales teams in onboarding and addressing product-related inquiries promptly. Additionally, they aim to accelerate employee growth and career development by implementing generative AI. The company’s future strategy involves integrating generative AI throughout the Now platform, enabling customers to operate with intelligence at scale and fostering gen AI-powered innovation across all aspects of their businesses.

In conclusion, ServiceNow’s expansion of generative AI capabilities brings significant advancements to its Now platform, promising increased productivity, streamlined processes, and intelligent automation for its customers.

Microsoft Unveils Advanced AI Fabric for Moody’s Utilizing Generative Technology

Moody’s, a renowned global player in financial risk assessment, has joined forces with Microsoft to integrate generative AI into its enterprise operations. The collaboration will leverage the Microsoft Azure OpenAI service to unlock research information and enhance risk assessment capabilities at Moody’s. One of the initial deployments will be Moody’s CoPilot, an internal tool that empowers the company’s 14,000 employees worldwide to easily access and query data and research using large language models (LLMs).

In addition to AI advancements, Moody’s is embracing the Microsoft Fabric data management platform, introduced recently, to streamline data management for AI and analytics purposes.

Nick Reed, Chief Product Officer at Moody’s, emphasized the benefits of the new generative AI tools, stating, “Users will leverage the technology to access tailored risk data and insights drawn from across Moody’s vast body of risk data, analytics, and research.”

Moody’s decision to adopt generative AI aligns with a growing trend in various industries, including financial services. Just last month, JPMorgan revealed plans for an investment service similar to ChatGPT.

Reed explained that Moody’s has already integrated traditional AI technologies into its solutions to scale and accelerate informed decision-making in risk assessment. However, the evaluation of generative AI became necessary when the rapid advancements in the field indicated that it could further harness the power of Moody’s proprietary data, analytics, and research to deliver new value and opportunities to customers.

By combining Moody's proprietary data, analytics, and research with Microsoft's generative AI technology, Moody's CoPilot aims to seamlessly merge insights from different risk areas, such as credit risk, ESG exposure, and supply chain management. The goal is to eliminate existing silos and provide users with comprehensive risk information and insights.

Compliance, security, and enterprise AI are vital considerations in Moody’s partnership with Microsoft. Bill Borden, Corporate VP of Worldwide Financial Services at Microsoft, highlighted the importance of integrating generative AI with existing processes while meeting the strict security and compliance requirements of Moody’s. Microsoft’s established foundation in these areas positions it well to support financial service firms in their digital transformation journey, with a deep understanding of global regulations and robust controls and governance models.

In addition to leveraging the Microsoft Azure OpenAI service, Moody’s is utilizing the newly announced Microsoft Fabric data technology. Fabric enables Moody’s users to simplify data viewing and analysis by consolidating multiple data sources. With a wide range of proprietary risk data in areas such as credit, ESG, commercial real estate, and supply chain, Moody’s is exploring various use cases of Fabric to optimize its data strategies.

Microsoft’s Fabric serves as an integrated platform for horizontal data capabilities, providing customers with enhanced data management, governance, data cataloging, and valuable insights to support their data strategies in industries like banking, capital markets, and insurance.

MongoDB Enhances Atlas Platform for Modern Application Development, Announces Google Cloud Partnership for Generative AI

At its annual developer conference in New York, MongoDB unveiled several new capabilities for its Atlas platform, aiming to simplify the process of building modern applications for enterprises. The company’s President and CEO, Dev Ittycheria, highlighted the features as a means of supporting customers in running large-scale, mission-critical workloads with enhanced scalability and flexibility. MongoDB also announced industry-specific offerings and a partnership with Google Cloud to expedite the adoption of generative AI and enable the creation of innovative applications.

The key announcements from MongoDB’s event include significant improvements to Atlas, the fully-managed data platform that facilitates the development and deployment of scalable applications. Notably, Atlas now incorporates AI-powered vector search, enabling semantic search capabilities for text, images, audio, and video data. This opens doors to applications like text-to-image search and seamless integration of generative AI. Additionally, the introduction of stream processing allows developers to extract real-time insights from high-velocity and high-volume streaming data, enabling dynamic behavior adjustments and informed business actions.
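As a concrete illustration, semantic search against Atlas Vector Search is expressed as an aggregation pipeline. The Python sketch below builds such a pipeline; the index name, field path, and embedding values are hypothetical placeholders, and the stage shape follows MongoDB's documented `$vectorSearch` syntax, which you should verify against the current Atlas docs.

```python
# Sketch: an Atlas Vector Search aggregation pipeline.
# Index name, field path, and the query embedding are illustrative assumptions.

def build_semantic_search_pipeline(query_vector, limit=5):
    """Return an aggregation pipeline that ranks documents by the
    similarity of their 'embedding' field to query_vector."""
    return [
        {
            "$vectorSearch": {
                "index": "default_vector_index",  # placeholder index name
                "path": "embedding",              # field holding the stored vectors
                "queryVector": query_vector,
                "numCandidates": 100,             # candidates considered before ranking
                "limit": limit,
            }
        },
        # Project only the fields the application needs, plus the relevance score.
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_semantic_search_pipeline([0.12, -0.38, 0.95])
# Against a real pymongo collection backed by a configured vector index:
# results = db.articles.aggregate(pipeline)
```

The same pipeline shape works for text, image, or audio embeddings; only the model that produces `queryVector` changes.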

MongoDB introduced Atlas Search Nodes, which provide dedicated resources for scaling search workloads independently of the database. Moreover, Atlas now supports querying data in Microsoft Azure Blob Storage through MongoDB Atlas Online Archive and Atlas Data Federation, expanding the service beyond its previous compatibility with AWS.

In collaboration with Google Cloud, MongoDB launched an AI initiative that integrates Google Cloud’s Vertex AI large language models (LLMs) with MongoDB Atlas. This integration empowers developers to accelerate their workflows and build new classes of generative AI applications, such as semantic search, classification, outlier detection, AI-powered chatbots, and text summarization. Kevin Ichhpurani, Corporate Vice President for Global Ecosystem and Channels at Google Cloud, expressed excitement about the initiative, emphasizing its potential for developers to create innovative applications and experiences that add significant business value.

MongoDB’s commitment to supporting generative AI development extends to its “AI Innovators” program, designed to aid organizations in building AI-powered solutions. The program offers up to $25,000 in MongoDB Atlas credits, partnership opportunities within the MongoDB ecosystem, and go-to-market support. It caters to both early-stage startups and established organizations with an existing customer base.

Another notable announcement was the introduction of Atlas for Industries. MongoDB plans to provide its data platform in industry-specific packages, beginning with Atlas for financial services. This program grants enterprises in the financial sector access to architectural design reviews, technology partnerships, and industry-specific knowledge accelerators, enabling rapid adoption and application development tailored to the industry’s unique challenges. MongoDB intends to expand this offering to manufacturing, automotive, insurance, healthcare, retail, and other industries throughout the year.

Lastly, MongoDB made its Relational Migrator generally available, offering a faster and easier migration process from legacy relational database technologies to MongoDB Atlas. The tool analyzes legacy databases, generates new data schemas and code automatically, and seamlessly migrates data to MongoDB Atlas with no downtime. Currently, it supports transfers from Oracle, Microsoft SQL Server, MySQL, and PostgreSQL.

With these advancements and strategic collaborations, MongoDB aims to empower developers and enterprises to leverage its Atlas platform, unleash the power of data and software, and build next-generation applications that shape the future of their businesses.

AWS Invests $100 Million to Help Customers Succeed with Generative AI

AWS has recently unveiled its plans to invest $100 million in the establishment of the AWS Generative AI Innovation Center. The initiative aims to facilitate and expedite the progress of generative AI innovation, deployment, and success for enterprise customers and partners worldwide.

The AWS Generative AI Innovation Center will consist of a dedicated team of strategists, data scientists, engineers, and solutions architects. These experts will collaborate closely with customers, guiding them through the process of developing tailored solutions that leverage AWS gen AI services. Notable offerings include Amazon CodeWhisperer and the recently announced Amazon Bedrock cloud service. The latter enables developers to build and scale generative AI chatbots and other applications in the cloud, using internal organizational data to refine performance across leading pretrained large language models (LLMs) from providers such as Anthropic, AI21 Labs, and Stability AI, as well as two new LLMs from Amazon's Titan model family.
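To make the Bedrock workflow concrete, the sketch below assembles an invocation payload for a Titan text model. The model ID and body schema follow Bedrock's documented Titan text format, but treat both as assumptions to check against the current AWS documentation; the actual network call is shown commented out since it requires AWS credentials.

```python
import json

# Sketch: building a request for a Titan text model via Amazon Bedrock.
# Model ID and payload schema are assumptions to verify against AWS docs.

def build_titan_request(prompt, max_tokens=256, temperature=0.2):
    """Return a (model_id, serialized_body) pair for bedrock-runtime."""
    body = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    }
    return "amazon.titan-text-express-v1", json.dumps(body)

model_id, body = build_titan_request("Summarize this support case in two sentences.")
# With AWS credentials configured, the invocation would look like:
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId=model_id, body=body)
```

Swapping in a model from Anthropic or AI21 Labs changes only the model ID and the body schema; the `invoke_model` call itself stays the same.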

Sri Elaprolu, a senior leader in generative AI at AWS, emphasized the significance of proactive assistance for customers, stating that many customers expressed a strong desire to embark on their gen AI journey but lacked the necessary guidance. Elaprolu added that numerous existing AWS customers in the AI/ML space, as well as other sectors, had reached out seeking prescriptive advice. As AWS already boasts more than 100,000 customers utilizing its AI and machine learning services, exploring generative AI is viewed as a logical extension for these users.

This $100 million investment follows AWS’s recent announcement of Amazon Bedrock and further contributes to the intensifying competition in the cloud AI arena over the past year. According to Gartner analyst Sid Nag, Amazon’s move was long overdue, especially in response to the buzz and excitement generated by generative AI developments from competitors like Google and Microsoft. Nag emphasized the cloud providers’ advantage in handling data-intensive generative AI due to their expansive cloud computing and storage capabilities. He also highlighted Bedrock’s role in providing a usability meta-layer for foundational models on AWS and underlined Amazon’s focus on providing secure environments for organizations to employ generative AI.

In addition to the investment, the AWS Generative AI Innovation Center will offer complimentary workshops, engagements, and training opportunities. These resources aim to assist customers in envisioning and defining use cases that deliver maximum value for their businesses, drawing on best practices and industry expertise.

Elaprolu provided examples of companies leveraging gen AI on AWS, such as Fox, which utilizes natural language generation to create captivating storytelling content during live sports telecasts, and Ricoh, a workplace solutions provider that employs natural language generation to aid internal teams in crafting sales proposals.

Looking ahead, Elaprolu anticipates that more customers will publicly share their generative AI initiatives on AWS in the coming weeks and months. AWS is eager to play a pivotal role in supporting customers as they harness the integration of generative AI to drive business value.

Generative AI Boom: Unlocking a Wave of Success for Businesses

Generative AI (Gen AI) is the buzzword of the year, gripping the global tech ecosystem. Leading VC Sequoia declared that gen AI could “generate trillions of dollars of economic value,” and thousands of businesses, from Microsoft to Fiat, have raced to integrate the technology as a way to speed up productivity and deliver more value for customers.

Like Web3 before it, any nascent sector such as generative AI brings with it plenty of predictions about just how big it will become. The global AI market is currently worth $136.6 billion, with some estimating that it will grow by 40% over the next eight years. Even amid an overall slowdown in VC dealmaking, Gen AI has been the exception, with AI-assisted startups making up over half of VC investment in the last year.

However, although generative AI tools are attracting headlines and otherwise frugal VCs' money, and while some of the first movers have developed nifty AI tools that respond to critical pain points, how many of these will go on to become long-term businesses? Most that have monetized have stumbled into becoming businesses rather than doing so as part of any long-term strategy, so what will they do if and when they need to scale to meet demand?

There’s a lot that Gen AI startups still have to do to take this captivating technology and actually turn it into a sustainable business. In this article, I’ll explain where generative AI startups can start if they want to turn this short-term hype into long-term growth so they don’t miss a potentially huge market opportunity. 

Hype ≠ Success 

There are many hurdles standing between Gen AI startups and long-term profitability.

First, it’s difficult to take a new technology and actually turn it into something profitable. While Gen AI tech is certainly impressive, it’s unclear how to monetize it or integrate it into a profitable business model. So far, some of the most successful AI startups have used the tech to boost operational efficiency — like Observe.ai, which automates repetitive processes that drive revenue and retention — or to help with language processing and content creation, like AI copywriting assistant Jasper.ai. But you can only have so many AI chatbots. Emerging Gen AI startups will have to carve out their own niches if they want to be successful.

AI companies will also find it hard to maintain a competitive edge. Many AI startups are already struggling to differentiate themselves in an incredibly crowded market, and for every one entrepreneur with an innovative use case, there are ten more riding the wave with no destination in mind — presenting a “solution” without a clear idea of the problem it seeks to solve. There are already 130 Gen AI startups in Europe alone, and the chances of all of these companies reaching long-term profitability are slim. 

Finally, AI is still a nascent technology, with big questions about ethics, misinformation, and national security still to be answered. AI companies looking to streamline workflows will have to address concerns about third-party software accessing potentially sensitive internal data before they can be widely adopted, while startups leveraging the speed and efficiency of Gen AI must put sufficient guardrails in place to address the dystopian concern that these “machines” could come to replace up to a quarter of our jobs.

Riding the generative AI wave: How to turn short-term hype into long-term growth  

To tackle the above hurdles, generative AI startups serious about building long-term businesses need to adopt some basic principles. It’s true the AI market is particularly frothy with investor cash at the moment, but that is an outlier in wider VC sentiment. Given the recent market downturn, investors are keener than ever to see examples of real, rather than projected, growth and are scrutinizing whether recipients of their money are built on scalable business foundations. 

These are the key things Gen AI startups looking to turn hype into growth should consider: 

  • Focus on customer need: It’s very easy to get carried away with the potential of Gen AI technology, but the magic happens when that potential is applied in a way that clearly solves a known and understood customer problem. Step one should always be identifying that problem, then working your way up from there. 
  • Plan for global scale: Most of the startups we have seen launch using Gen AI are pursuing product-led growth. They often have a low monthly cost and serve an individual user. If these companies are serious about scaling, that requires being able to sell globally. More markets mean more buyers, more revenue, and quicker growth. With more money in the bank, you can extend the runway and be better insulated from individual shocks and market fluctuations. 
  • Build a monetization thesis: The automation Gen AI provides can remove a huge amount of manual effort, and pricing can be difficult to get right given the cost of the underlying infrastructure. It’s important to decide your value metric, then test and refine it to arrive at the correct price point. If customer need is the beating heart of a business, the monetization thesis is the means to keep that heart beating. 
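The monetization point above can be made concrete with a toy unit-economics check: derive the minimum price a value metric must carry once inference costs and a target gross margin are fixed. Every number below is an illustrative assumption, not data from this article.

```python
# Toy unit-economics check for a usage-priced Gen AI product.
# All figures are illustrative assumptions.

def monthly_price_floor(tokens_per_user, cost_per_1k_tokens, target_gross_margin):
    """Minimum monthly price per user that still hits the target gross margin."""
    inference_cost = tokens_per_user / 1000 * cost_per_1k_tokens
    return inference_cost / (1 - target_gross_margin)

# A user consuming 500k tokens/month at $0.002 per 1k tokens costs $1.00 to serve;
# a 75% gross-margin target implies charging at least $4.00/month.
floor = monthly_price_floor(500_000, 0.002, 0.75)
```

Testing and refining the value metric then amounts to re-running this calculation as real usage data replaces the assumptions.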

Ultimately, success will boil down to two things:

  1. Effective monetization:

No technology, regardless of hype, will sell itself, so it’s important to identify the relevant Gen AI revenue streams and then package them in the right way to make them profitable. Effective monetization will ultimately rely on three main pillars: increasing revenues, reducing costs (particularly important given the infrastructure costs of serving generative models), and reducing risk. Ensuring a clear line of sight to these value levers is essential, as they will impact the bottom lines of adopting companies in a significant way. Once you have all three, the money will follow.

  2. Overcoming potential barriers to growth and scaling sustainably: 

In the same way that AWS accelerated the speed and lowered the cost of building a startup, ChatGPT enables complex automation with human-like chat interfaces at the click of a button. As many AI startups are thin application layers built on top of deep but existing infrastructure, they can be brought to market very fast via a freemium or low-cost model.
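A "thin application layer" of this kind often amounts to little more than a freemium gate wrapped around a model API call. The sketch below stubs out the model call entirely; the quota size, function names, and response shapes are all hypothetical illustrations of the pattern, not any particular product.

```python
# Minimal freemium gate around a (stubbed) LLM call.
# FREE_TIER_REQUESTS, call_model, and the response shapes are hypothetical.

FREE_TIER_REQUESTS = 20

def call_model(prompt):
    # Stand-in for a real chat-completions API call to an external provider.
    return f"[generated reply to: {prompt}]"

def handle_request(user_usage, prompt):
    """Serve free-tier users up to the quota, then ask them to upgrade."""
    used = user_usage.get("requests", 0)
    if used >= FREE_TIER_REQUESTS and not user_usage.get("paid"):
        return {"ok": False, "error": "free tier exhausted, please upgrade"}
    user_usage["requests"] = used + 1
    return {"ok": True, "reply": call_model(prompt)}

usage = {"requests": 19, "paid": False}
first = handle_request(usage, "Draft a welcome email")  # 20th request: allowed
second = handle_request(usage, "Draft another")         # 21st request: blocked
```

Almost all of the engineering effort in such a product goes into what sits around this gate, such as billing, localization, and fraud handling, rather than into the model call itself.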

This is perfect for a self-serve approach, where companies show the value of their product through usage rather than sales-assisted pitches, which means those companies riding the AI wave will grow much quicker than usual. However, it also means they will hit internationalization obstacles earlier, leaving them to trip over operational hurdles like localization of currency and payment methods and dealing with fraud. A comprehensive payment infrastructure is key to any successful Gen AI business, as it allows the business to scale rapidly as it grows. 

The road ahead 

While Gen AI has the potential to generate billions or even trillions of dollars in economic value, there are still genuine questions about how many of these first-movers will go on to create household-name businesses and how many will eventually fade with the hype.

At Paddle, we have seen the growth curves of thousands of software businesses, tracking nearly $30 billion of ARR. And we have seen clear growth in the segment of businesses built on GPT and on DALL-E 2, OpenAI’s image-generation model.

When building on APIs like this, the path to a product is rapid, so the real battleground becomes distribution and monetization. We have seen a significant increase in these businesses becoming global by default, selling via a self-serve process to thousands of people across multiple markets at a low price point. Those that become successful are the ones that shift as much value as possible toward those first customer interactions.

Ambitious Gen AI startups wanting to create a truly global business therefore need to focus on three things: identifying a clear need or problem; planning for expansion into new markets to grow revenue; and building a monetization thesis, then testing and refining it to determine the right price point. 

While generative AI may be the shiny new thing in tech, the principles underpinning its success are the same as for any software innovation. Nail these core principles, and Gen AI startups will be able to pave the road to long-term success.