
Adopting Cloud Smart: The New Era in IT Architecture

The era of “Cloud First” has evolved, giving way to a more nuanced approach known as Cloud Smart. In this shifting landscape of IT architecture, hybrid cloud, with a mix of on-premises and off-premises solutions, has become the default choice. It’s not merely a transitional phase en route to “cloud maturity” but rather a preferred state for many IT leaders and organizations.

Hybrid cloud’s appeal lies in its flexibility, enabling organizations to leverage existing data center infrastructure while harnessing the advantages of the cloud. This approach optimizes costs and extends on-premises IT capabilities, making it an attractive and sustainable solution.

Moreover, hybrid cloud is gaining popularity among predominantly on-premises organizations eager to tap into the latest cloud technologies. As businesses increasingly rely on advanced technologies like AI for faster and more efficient data processing and analysis, the cloud offers a scalable and cost-effective solution without the need for significant hardware investments, all while addressing cybersecurity concerns.

However, navigating this transition requires careful planning. Rushing into the cloud can lead to hasty decisions that result in negative returns on investment. Some organizations make the mistake of migrating the wrong workloads to the cloud, necessitating a costly backtrack.

In addition to financial setbacks, organizations that fail to adopt a well-thought-out cloud strategy find themselves unable to keep pace with the exponential growth of data. Rather than enhancing efficiency and productivity, they risk falling behind their competitors and missing out on the potential benefits of a successful cloud migration.

One common pitfall is the failure to involve infrastructure teams in the migration process, leading to a disjointed solution that hampers performance. Cloud projects are often spearheaded by software architects who may overlook the critical infrastructure aspects that impact performance. It’s crucial to strike the right balance by aligning infrastructure and software architecture teams, fostering better communication to optimize hybrid cloud deployments.

The urgency to address these challenges is pressing, given the increasing demand for hybrid cloud solutions. Over three-quarters of enterprises now use multiple cloud providers, with one-third having more than half of their workloads in the cloud. Moreover, both on-premises and public cloud investments are expected to grow, with substantial spending on public cloud services projected by Gartner.

The Growing Demand for Hybrid Cloud

Hybrid cloud empowers organizations to harness the advantages of both public and private clouds, providing flexibility in hosting workloads. This flexibility optimizes resource allocation and enhances cloud infrastructure performance, contributing to cost savings.

Furthermore, hybrid cloud allows organizations to leverage the security benefits of both public and private clouds, offering greater control and advanced security approaches as needed. Many organizations also turn to hybrid cloud to rein in escalating monthly public cloud bills, especially when dealing with cloud sprawl and storage costs.

The “pay as you go” model is a boon, provided organizations understand how to manage it effectively, particularly in the case of long-lived and steadily growing storage needs.

In conclusion, “Cloud First” is giving way to “Cloud Smart.” This shift acknowledges the importance of optimizing the on-premises and cloud-based IT infrastructure. A “Cloud Smart” architectural approach empowers enterprises to design adaptable, resilient solutions that align with their evolving business needs. Striking the right balance between on-premises and cloud solutions ensures optimal performance, reliability, and cost-efficiency, ultimately driving better long-term outcomes for organizations.

Google Unveils Innovations in BigQuery, Revolutionizing Data Collaboration

Greetings, tech aficionados! We’re thrilled to share some groundbreaking news that’s about to revolutionize the way teams handle data. If you’re all about cutting-edge technology and innovative solutions, you’re in for a treat, courtesy of Google.

At the highly anticipated annual Cloud Next conference, the internet giant unveiled an array of major enhancements for its fully managed, serverless data warehouse, BigQuery. These improvements are set to foster a unified experience, linking data and workloads seamlessly. And that’s not all – Google also divulged plans to infuse AI into the platform and utilize its generative AI collaborator to amplify the efficiency of teams deciphering insights from data.

Gerrit Kazmaier, Vice President and General Manager for data and analytics at Google, perfectly summed it up in a blog post: “These innovations will help organizations harness the potential of data and AI to realize business value — from personalizing customer experiences, improving supply chain efficiency, and helping reduce operating costs, to helping drive incremental revenue.”

Now, before we dive into the specifics, a quick heads-up: most of these capabilities are currently in preview and aren’t yet generally available to customers. But let’s explore the exciting developments nonetheless!

BigQuery Studio: A Unified Data Hub

Google is taking data management to the next level by introducing BigQuery Studio within its BigQuery framework. This powerful feature offers users a single integrated interface for tasks ranging from data engineering and analytics to predictive analysis.

Until now, data teams had to juggle an assortment of tools, each catering to a specific task – a process that often hindered productivity due to the constant tool-switching. With the advent of BigQuery Studio, Google is simplifying this journey. Data teams can now utilize an all-inclusive environment to discover, prepare, and analyze datasets, as well as run machine learning (ML) workloads.

A spokesperson from Google stated, “BigQuery Studio provides data teams with a single interface for your data analytics in Google Cloud, including editing of SQL, Python, Spark and other languages, to easily run analytics at petabyte scale without any additional infrastructure management overhead.”
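To ground that, here is a minimal sketch of the kind of analytics workflow BigQuery Studio centralizes, using the standard google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical placeholders, and running it assumes authenticated Google Cloud credentials.

```python
# A minimal sketch of running SQL analytics against BigQuery from Python.
# Project, dataset, and table names below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # assumes authenticated credentials

query = """
    SELECT product_id, SUM(revenue) AS total_revenue
    FROM `my-analytics-project.sales.orders`
    GROUP BY product_id
    ORDER BY total_revenue DESC
    LIMIT 10
"""

# client.query() submits the job; .result() blocks until the query completes.
for row in client.query(query).result():
    print(row.product_id, row.total_revenue)
```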

BigQuery Studio is already in preview, with enterprises like Shopify actively testing its capabilities. This innovation comes packed with enhanced support for open-source formats, performance acceleration features, and cross-cloud materialized views and joins in BigQuery Omni.

Expanding Horizons for Data Teams

But that’s not where Google’s innovation journey ends. The tech giant is bridging the gap between BigQuery and Vertex AI foundation models, including PaLM 2. This integration empowers data teams to scale SQL statements against large language models (LLMs) seamlessly. Furthermore, new model inference capabilities and vector embeddings in BigQuery are set to help teams run LLMs efficiently on unstructured datasets.

Kazmaier emphasized, “Using new model inference in BigQuery, customers can run model inferences across formats like TensorFlow, ONNX and XGBoost. In addition, new capabilities for real-time inference can identify patterns and automatically generate alerts.”
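As a rough illustration of what in-warehouse inference looks like, the sketch below issues a BigQuery ML ML.PREDICT query through the Python client. The dataset, model, and input table names are placeholders, and the model is assumed to already be registered or imported into BigQuery.

```python
# A hedged sketch of batch model inference inside BigQuery via ML.PREDICT.
# The dataset, model, and input table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT *
    FROM ML.PREDICT(
        MODEL `my_dataset.imported_churn_model`,   -- e.g., an imported XGBoost or TensorFlow model
        TABLE `my_dataset.new_customer_events`
    )
"""

for row in client.query(sql).result():
    print(dict(row))  # each row carries the input columns plus the model's prediction columns
```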

And brace yourselves, because Google is taking another stride by integrating its generative AI-powered collaborator, Duet AI, into the arsenal of tools like BigQuery, Looker, and Dataplex. This integration introduces natural language interaction and automatic recommendations, promising heightened productivity and extended accessibility.

Remember, this integration is still in its preview phase, and we’re eagerly awaiting further updates on its general availability.

The Google Cloud Next event is set to run through August 31, offering ample time for tech enthusiasts to delve deeper into these remarkable developments. Keep your eyes peeled for more insights and exciting updates from Google as they continue to reshape the landscape of data collaboration and AI integration. Stay tuned!

The Shift Towards Centralized Cloud Security: Addressing Security Silos in a Complex Landscape

The 2023 Cloud Security Report, supported by Fortinet, presents insights from a comprehensive survey of 752 cybersecurity professionals across diverse industries and geographies. A substantial 90% of respondents expressed a preference for a centralized cloud security platform that can uniformly configure and manage security protocols across various cloud deployments. That sentiment is hardly surprising.

Within the realm of cloud computing, the emergence of security silos stands as a formidable challenge. The problem typically arises when organizations rely exclusively on the native security tools of a particular cloud provider. When multiple cloud providers are employed, as in most multicloud arrangements, three to five security silos are virtually certain to emerge.

The Pervasiveness of Security Silos

In practice, it is not uncommon to encounter even greater numbers of these security silos, given that many enterprises erect security domains around clusters of applications – a circumstance particularly prevalent within a single cloud. When multiplied by the count of leveraged clouds, the complexity engendered becomes overwhelming, inefficient, and gravely unsafe. A preponderance of breaches capitalizes on this predicament, with misconfigurations emerging as the primary conduit for attacks.

The notion of centralized security solutions has historical precedence in dealing with complex distributed systems. These solutions began surfacing around two decades ago, but many suffered from a “lowest common denominator” approach, endeavoring to offer a subset of security services applicable across diverse platforms. Invariably, this approach led to suboptimal functionality across all platforms due to the dissonance between what was needed and what was provided. Consequently, these solutions saw limited adoption, with native security offerings becoming the norm.

Similar Challenges in the Era of Multicloud

Analogous challenges now manifest within multicloud environments, resulting in the proliferation of security silos. The intricate nature of this conundrum itself begets security vulnerabilities, necessitating a holistic resolution in the form of centralized security capable of comprehensively addressing cloud-based systems via a unified abstraction and automation stratum. This concept aligns with what the industry terms the “supercloud” or “metacloud.”

Centralized Cloud Security: Key Advantages

The motivations behind CIOs’ pursuit of centralized security are underpinned by tangible advantages. A unified platform or abstraction empowers organizations to manage security measures cohesively: uniform security policies can be enacted, access controls configured, and user activities monitored across a spectrum of cloud environments. This consolidated strategy streamlines security management, mitigates complexity, and enhances the ability to detect potential security threats, which together account for roughly 80% of the benefits of centralized security.

A Speedy Response to Threats

Centralized cloud security confers rapid response capabilities, facilitating swift identification and mitigation of security risks across the entire cloud infrastructure in the event of security incidents. Prompt actions in the face of security breaches serve to curtail their impacts.

Eliminating Duplication and Complexity

The allure of the “supercloud” or “metacloud” lies in its ability to dismantle security silos by reducing redundancy and complexity. The centralized security approach obviates the need for discrete implementations catering to each cloud-hosted application or service. This streamlined approach diminishes redundant efforts, simplifies security architectures, and ultimately yields cost savings.

Scalability and Agility as Cornerstones

Centralized cloud security solutions are designed to accommodate expansive growth requirements. Organizations can effortlessly scale their cloud infrastructure while maintaining consistent security measures. Moreover, the ability to effect changes is enhanced, as adjustments only need to be implemented within a singular platform.

Challenges on the Road to Centralization

However, there are noteworthy challenges associated with transitioning to centralized security for those entrenched in existing security silos. This shift comes with a high price tag, risks, and time commitments. While a phased migration from one security silo to a centralized platform is feasible, the prospect of selecting a single platform remains complex. The probable outcome entails integrating a suite of technologies encompassing governance, financial operations (finops), encryption, identity management, and more, to attain an optimal solution.

Furthermore, the proficiency required for executing this transition is not universally present among security professionals. While they might grasp the concept and potential benefits through articles like this one, the intricacies of executing the 30 to 40 steps necessary for a successful deployment might pose a formidable challenge. This predicament has emerged as a predominant gripe among enterprises embarking on the journey to centralize their security services, whether in cloud environments or beyond.

A Necessity for the Future

Nevertheless, the imperative remains unchanged. Most enterprises are destined to undertake this transformation at some juncture. The escalating risks and costs associated with cloud security render this a non-negotiable progression, as avoiding it could result in untenable consequences. The pivotal message is to intervene before matters deteriorate beyond salvage.

Amazon Unveils AWS HealthScribe

Amazon has unveiled a new platform called AWS HealthScribe at its annual AWS Summit conference in New York. This platform offers AI tools to assist clinicians in transcribing and analyzing their conversations with patients. The goal is to create transcripts, extract important details, and generate summaries from doctor-patient discussions, which can then be incorporated into electronic health record (EHR) systems.

AWS HealthScribe’s machine learning models can convert these transcripts into patient notes, making it easier for healthcare professionals to document their interactions with patients. This, in turn, can provide valuable insights for analysis and improve the consultation experience.
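For orientation, here is a heavily hedged sketch of what submitting a HealthScribe job might look like with boto3. The operation and parameter names follow the Transcribe/HealthScribe API as documented at launch and should be verified against the current SDK; the S3 locations and IAM role are placeholders.

```python
# A hedged sketch of submitting a HealthScribe transcription/summarization job with boto3.
# Operation and parameter names should be checked against current AWS documentation;
# bucket names, file URIs, and the IAM role ARN are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-2023-08-01",
    Media={"MediaFileUri": "s3://example-clinic-audio/visit-2023-08-01.wav"},
    OutputBucketName="example-clinic-notes",
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ExampleHealthScribeRole",
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
)

# Poll the job; once complete, the output bucket holds the transcript and the draft
# clinical note for clinician review before anything reaches the EHR.
status = transcribe.get_medical_scribe_job(MedicalScribeJobName="visit-2023-08-01")
print(status["MedicalScribeJob"]["MedicalScribeJobStatus"])
```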

The platform’s generative AI capabilities are powered by Amazon Bedrock. While generative AI can introduce biases and inaccuracies, HealthScribe aims to limit potential mistakes by restricting its scope to two medical specialties for now: general medicine and orthopedics. Additionally, clinicians can review and finalize notes before they are added to the EHR.

One concern with automated speech recognition programs is their ability to handle diverse accents and vernaculars. HealthScribe’s effectiveness in this area remains to be seen. However, Amazon highlights its focus on security and privacy aspects. The platform does not retain customer data after processing requests, encrypts data during transit and storage, and does not use the inputs and outputs to train its AI models.

HealthScribe is “HIPAA eligible,” meaning it can be made compliant with HIPAA requirements, the U.S. law safeguarding personal health information. Healthcare software providers who work with Amazon can achieve compliance by signing a business associate addendum.

In addition to HealthScribe, Amazon also introduced AWS HealthImaging, a service designed for storing, transforming, and analyzing medical imaging data at a large scale. This service enables dynamic pricing for data storage, potentially reducing the total cost of ownership for medical imaging storage by up to 40%.

Currently, several companies, including 3M Health Information Systems, Babylon Health, and ScribeEMR, are already using HealthScribe to streamline their healthcare processes. HealthImaging is available in various AWS regions, aiming to enhance medical imaging management for healthcare organizations.

DataStax Introduces Vector Search to Astra DB, Expanding AI/ML Capabilities

DataStax, a leading data platform vendor, has made its entry into the vector database space with the announcement of vector search availability in its flagship Astra DB cloud database.

Known for its contributions to the open-source Apache Cassandra database, DataStax offers Astra DB as a commercially supported Database-as-a-Service (DBaaS) solution. While Cassandra has traditionally been a NoSQL database, it has expanded its capabilities over the years to accommodate various data types and use cases, including AI/ML.

Throughout 2023, DataStax has been focusing on advancing its platform for AI/ML, evident in its acquisition of AI feature engineering vendor Kaskada in January. The integration of Kaskada’s technology into DataStax’s Luna ML service, launched in May, demonstrated the company’s commitment to enhancing its AI/ML capabilities.

The addition of vector support to Astra DB further strengthens DataStax’s AI/ML offerings, providing organizations with a trusted and widely deployed database platform suitable for both traditional workloads and newer AI workloads.

The vector capability was initially showcased on Google Cloud Platform in June and is now generally available on Amazon Web Services (AWS) and Microsoft Azure as well.

Ed Anuff, Chief Product Officer at DataStax, stated that Astra DB is now as much a native vector database as any other in the field.

Vector databases play a fundamental role in AI/ML operations by storing content as vector embeddings, which are numerical representations of data. Vectors are an effective way to represent the semantic meaning of content and find broad applications in large language models (LLMs) and content retrieval tasks.
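To illustrate the idea in miniature, the toy example below compares vector embeddings with cosine similarity using NumPy. The three-dimensional vectors are made up for readability; real embeddings produced by a model typically have hundreds or thousands of dimensions.

```python
# Toy illustration of vector embeddings and similarity search (not any vendor's API).
# Real embeddings come from an embedding model and have far more than 3 dimensions.
import numpy as np

documents = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.2, 0.8, 0.1]),
    "return window": np.array([0.85, 0.15, 0.05]),
}
query = np.array([0.88, 0.12, 0.02])  # embedding of "how do I return an item?"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by semantic closeness to the query vector.
for name, vec in sorted(documents.items(), key=lambda kv: cosine(query, kv[1]), reverse=True):
    print(f"{name}: {cosine(query, vec):.3f}")
```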

The vector database space encompasses various approaches and vendors. Purpose-built vendors like Pinecone and the open-source Milvus vector database are popular options. Additionally, some existing database platforms, such as MongoDB and PostgreSQL, have incorporated vector search as an overlay or extension.

In the case of DataStax, vector search employs vector columns as a native data type in Astra DB. This allows users to query and search vectors just like any other data type.
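As a sketch of what that looks like in practice, the example below declares a vector column in CQL and runs an approximate-nearest-neighbor query through the Python Cassandra driver. The secure connect bundle path, token, keyspace, and table names are placeholders, and the exact index DDL may differ between Astra DB and open-source Cassandra.

```python
# A hedged sketch of using a native CQL vector column from the Python Cassandra driver.
# The secure connect bundle path, token, keyspace, and table names are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-bundle.zip"},
    auth_provider=PlainTextAuthProvider("token", "AstraCS:placeholder-application-token"),
)
session = cluster.connect("demo_keyspace")

# A vector column declared as a native CQL data type (dimension 3 for brevity).
session.execute("""
    CREATE TABLE IF NOT EXISTS products (
        id int PRIMARY KEY,
        description text,
        embedding vector<float, 3>
    )
""")
# A storage-attached index enables approximate-nearest-neighbor (ANN) search on the column.
session.execute(
    "CREATE CUSTOM INDEX IF NOT EXISTS products_embedding_idx ON products (embedding) "
    "USING 'StorageAttachedIndex'"
)

# Query vectors like any other data type, ordered by similarity to the query embedding.
rows = session.execute(
    "SELECT id, description FROM products ORDER BY embedding ANN OF [0.1, 0.2, 0.3] LIMIT 5"
)
for row in rows:
    print(row.id, row.description)
```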

While the vector database capabilities are being introduced to Astra DB before they are available in the open-source Cassandra project, Anuff clarified that the feature will be included in the upcoming Cassandra 5.0 release later in the year. As a commercial vendor, DataStax can integrate the feature into its platform earlier, providing Astra DB with vector capabilities ahead of the open-source release.

Cassandra’s architecture supports extensible data types, enabling the database to incorporate additional native data types over time. As a result, vectors, along with any other data, seamlessly integrate with Cassandra’s distributed index system. This unique architecture allows for the scalability of vectorized data without constraints on the number of vectorized rows.

DataStax’s Astra DB now also supports native integration with the open-source LangChain technology, a common approach for building AI-powered applications that leverage multiple LLMs. This integration enables developers to generate responses by feeding Astra DB’s vector search results into LangChain models. It simplifies the process of building real-time agents that not only make predictions but also provide recommendations based on vector search results from Astra DB and connected LangChain models.
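A rough sketch of that pattern follows, assuming LangChain’s Cassandra vector store and an OpenAI-style embedding/LLM pair (any supported model would do), and reusing the authenticated `session` from the previous example; class and parameter names follow LangChain’s 2023-era API.

```python
# A hedged sketch of wiring Astra DB vector search into a LangChain retrieval chain.
# Keyspace and table names are placeholders; `session` is an authenticated driver session.
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import Cassandra

vectorstore = Cassandra(
    embedding=OpenAIEmbeddings(),
    session=session,            # Astra DB / Cassandra driver session from the previous example
    keyspace="demo_keyspace",
    table_name="product_docs",
)

# The retriever feeds the most similar vectors into the LLM to ground its answer.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("Which products mention vector search?"))
```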

Anuff emphasized that the availability of vector capabilities on the platform marks a significant advancement toward realizing generative AI for enterprise users. With customers expressing interest in deploying generative AI in production, DataStax is ready to support their requirements and is excited about the possibilities it presents.

Exploring Interplanetary Internet: Mars’ Potential Connection to Earth’s Web

As humanity’s aspirations turn towards Mars, one pressing question arises: will future Mars explorers have interplanetary internet? Is it possible for the internet to one day reach across space, connecting humans on Earth and Mars? These intriguing queries find some answers in a recent paper, published on the preprint server arXiv, which proposes a novel method for establishing an internet communications network for the Red Planet. The key ingredient in beaming data to Mars lies in the realm of edge computing.

Presently, NASA relies on the Deep Space Network (DSN) to communicate with Martian satellites and rovers. However, this process often takes hours for image and video transfer. On the other hand, astronauts aboard the International Space Station have web access, albeit at a speed comparable to dial-up internet.

The paper, titled ‘Can Orbital Servers Provide Mars-Wide Edge Computing?’, introduces the concept of utilizing edge computing—leveraging distributed servers—to enable rapid streaming and other applications for data transfer to Mars. The researchers propose constructing a system akin to SpaceX’s Starlink constellation, albeit on a smaller scale.

To implement edge computing on Mars, significant infrastructure would be required. The scientists suggest deploying a constellation of satellites around the Red Planet: nine satellites in each of nine distinct orbital planes, for a total of 81 satellites. These satellites would establish communication links, enabling redundant backups of data. With this approach, multiple landing sites on Mars could simultaneously connect with several satellites. For crewed missions to Mars, ground servers would facilitate faster data retrieval.

Undoubtedly, such an endeavor would entail substantial costs. Hence, the researchers advocate for a gradual build-up of the satellite network over time. Initial missions aimed at laying the groundwork for crewed Mars missions could involve launching a few satellites to initiate the process.

Interestingly, this proposal shares similarities with Aquarian Space, a startup exploring the concept of Solnet. Aquarian’s vision entails creating a high-speed delivery satellite network, capable of speeds up to 100 megabits per second. With $650,000 in seed funding received last year, Aquarian aims to develop a broadband internet network that could connect Earth, the Moon, and potentially even Mars.

The quest to establish an interplanetary internet paves the way for unprecedented connectivity between Earth and Mars. While challenges remain, the prospect of bridging the internet divide across vast cosmic distances holds tremendous potential for future explorations and collaborations between the two worlds.

Enhanced Features Coming to iOS Chrome: Built-in Lens, Maps, and Calendar Integration

Google announced today that Chrome on iOS is getting a few new features, including built-in Lens support that will allow users to search using just their cameras. Although you can already use Lens in Chrome on iOS by long-pressing an image you find while browsing, you will soon also be able to use your camera to search with new pictures you take and existing images in your camera roll.

The company says the new integration is launching in the coming months. For context, Google Lens lets you search with images to do things like identify plants and translate languages in real time.

Google also announced that when you see an address in Chrome on iOS, you no longer need to switch apps to look it up on a map. The company says now when you press and hold a detected address in Chrome, you will see the option to view it on a mini Google Maps right within Chrome.

In addition, users can now create Google Calendar events directly in Chrome without having to switch apps or copy information over manually. You just need to press and hold a detected date, and select the option to add it to your Google Calendar. Chrome will automatically create and populate the calendar event with important details like time, location and description.

Lastly, Google announced that users can now translate a portion of a page by highlighting the text and selecting the Google Translate option.

“As our AI models improve, Chrome has gotten better at detecting a webpage’s language and suggesting translations,” the company wrote in a blog post. “Let’s say you’re planning to visit a museum in Italy, but the site’s in Italian and you don’t speak the language. Chrome will automatically offer to translate the museum’s website into your preferred language.”

Cloudflare Unveils Observatory: Enhancing Web Monitoring Capabilities

Cloudflare has introduced Observatory, a performance monitoring approach that incorporates real user monitoring (RUM) data. This integration allows customers to understand their website’s performance from the perspective of visitors. RUM data is captured through Cloudflare’s Browser Insights feature, which embeds JavaScript “beacons” in web pages to collect data. It provides insights into users’ browsers and devices, recording metrics such as page load times, response times, and user interactions.

Described as “a single pane of glass,” Observatory captures customer experiences across different environments and network conditions, offering site owners a comprehensive view of performance and recommendations for improvement.

Cloudflare’s CTO, John Graham-Cumming, explained, “Website performance always significantly impacts user experience, particularly at a time when customers are tightening their spending due to inflation and rising living costs. Observatory answers two critical questions: how fast is my website, and how can I make it faster? We want to demonstrate that users don’t need to be experts in web performance to provide their visitors with an exceptional user experience.”

Observatory incorporates Google Lighthouse, a tool for evaluating web performance, and allows regional testing, enabling customers to simulate website performance in different parts of the world. The available testing options vary depending on the customer’s Cloudflare plan, with Pro customers, for example, being able to set up five recurring tests from five different locations for their most important page.

Graham-Cumming emphasized that Observatory was developed based on customer feedback and addresses their pain points. The platform provides tools for easy performance measurement, informed decision-making, and effective communication of results to stakeholders.

Observatory’s recommendations leverage information gathered from Lighthouse, RUM testing, and regional testing to identify issues and suggest tailored Cloudflare product settings for implementing fixes. For instance, the platform may recommend image resizing for a specific page and guide customers towards the relevant Cloudflare product.

Customers can access these recommendations directly within the Cloudflare dashboard, with convenient links to audits.

While other platforms offer similar suggestions and testing for website optimization, Graham-Cumming argues that Observatory stands out for its streamlined approach compared to many solutions in the market.

He stated, “Most competitive platforms are overly complicated and have a steep learning curve, relying on outdated technologies or confusing user interfaces. We designed Observatory to simplify and enhance website speed for a global audience. We understand that only a few companies have dedicated teams for each aspect of the business, such as compression or webpage optimization. However, Observatory eliminates the learning curve and provides customers with a clear understanding of their website’s real-world performance.”

Observatory seamlessly integrates with Cloudflare’s web analytics and development tools, serving as a conduit to its premium products and services. This launch could provide the boost Cloudflare needs following its recent earnings report, which showed a revenue shortfall and prompted a downward revision of its full-year guidance.

Snowflake Launches Data Cloud for Government and Education

Snowflake, the prominent data giant, has expanded its product lineup by introducing a specialized offering called the government and education data cloud. This new solution is specifically designed for public-sector agencies at various levels of government, as well as educational institutions. It provides organizations with a fully managed package, enabling them to swiftly adopt the Snowflake data platform. With this offering, teams can easily consolidate their data assets and develop industry-specific applications, ranging from predictive capabilities to historical trend analysis reports.

This latest launch signifies Snowflake’s entrance into the realm of industry-specific clouds, marking the seventh distinct industry cloud in its portfolio. The company aims to attract a wider customer base in the enterprise sector while simultaneously competing against industry rivals like Databricks.

How Does Snowflake’s Government and Education Data Cloud Help?

Just like the previous industry clouds, the government and education data cloud from Snowflake brings three different elements together: the company’s core cross-cloud data platform to consolidate structured, semi-structured and unstructured data; its own and partner-delivered prebuilt solutions; and industry-specific datasets.

These elements ensure teams get ready-made templates in one place to jumpstart their Snowflake data cloud instance and quickly coordinate datasets, plus everything else needed for downstream applications targeting different vertical-specific use cases. Without them, teams have to start from scratch and put together everything using their own know-how to get things up and running.

From the perspective of government agencies and educational institutions, Snowflake’s new data cloud will be particularly handy, as most organizations in these sectors struggle with the challenge of disparate data assets and find it difficult to unify, exchange and collaborate on them for improving citizen and student outcomes. When using the dedicated data cloud, they will be able to combine data sources to create holistic 360-degree views of stakeholders, as well as securely share data across teams, between agencies, and externally with the public or the private sector.

“With Snowflake, organizations have the data they need to drive meaningful change in their communities, including coordinating hurricane relief efforts, intervening when a student is at risk for falling behind, and improving community or patient health across public health systems,” Jeff Frazier, global head of public sector at Snowflake, said while sharing some of the use cases.

So, what solutions and capabilities do orgs get?

With the government and education data cloud, users get Snowflake-powered industry-specific applications from the likes of PowerSchool and Merit, datasets from providers such as Carto and Vantage Point Consulting, and add-on integrations and out-of-the-box solutions from data infrastructure leaders like AWS, Collibra and Immuta.

In addition, the package includes pre-built solutions from consulting companies like Booz Allen Hamilton, Deloitte and Plante Moran to help solve for top-priority use cases, including enabling decision dominance for federal customers, modernizing applications for optimal mission outcomes, and providing care for people experiencing homelessness.

While Frazier didn’t specify how many enterprises are using the industry cloud, he did note that the list of industry customers includes K-12 schools and higher education institutions as well as public-sector organizations, including the state of Montana and the city of Tacoma, Washington. 

“Tacoma used Snowflake to unite 25 distinct lines of business. This has resulted in 22-times growth in the number of users across the city with data access and has allowed the city to achieve greater financial transparency with citizens, and power other programs such as utility bill relief for citizens who were experiencing financial hardship during the pandemic,” Frazier told VentureBeat.

The Race for Industry Clouds

The launch of the government and education data cloud is another effort from Snowflake to simplify access to its offerings and build up its customer base across different sectors.

“This year, we responded to customer demand for dedicated industry data clouds with the launch of our telecom data cloud, manufacturing data cloud, and now our government and education data cloud. We’re excited to focus on growing these industry ecosystems and helping customers modernize their respective industries,” Frazier said.

It must be noted that the company is not alone with this strategy. Databricks, Snowflake’s rival, has also been launching industry-specific lakehouses for different verticals. Snowflake first made the move in September 2021 with its financial services data cloud, while Databricks joined the fray in January 2022 with its lakehouse for retail.

As of now, Snowflake has a total of seven distinct data clouds, while Databricks has five industry offerings.

Google Transforms the Cloud with AI for Developers and Users Alike

During the Google I/O conference, the extensive integration of generative AI throughout the Google portfolio took center stage. The highlight of the event was the unveiling of PaLM 2, a powerful large language model (LLM) that promises to revolutionize various Google services. Notably, the cloud sector is set to receive a significant enhancement through deep integration with generative AI.

Google is introducing a new interface, powered by Duet AI, a Google technology, which leverages the PaLM 2 model as its foundation. This integration aims to empower cloud developers and users, fostering increased productivity and efficiency within the cloud environment. By combining the capabilities of generative AI and cloud services, Google is poised to deliver transformative advancements that redefine the cloud computing experience.

How Duet AI could completely change how cloud is managed and developed

For new and experienced users of the cloud alike, there can often be a lot of complexity, which can lead to confusion about how to execute certain types of operations.

Google Cloud’s Richard Seroter joked that when an individual buys a new car, they usually just get in and drive, without needing to first read the manual. Cloud doesn’t work the same way: users typically need to read some documentation — and there is a lot of documentation to go through.

The goal with Duet AI is to bring a conversational experience to the process of learning how to best deploy code and manage applications in the cloud. So instead of a user scrolling through StackOverflow answers, Google search results or YouTube videos, the user can simply ask a question and get an answer right in the cloud console.

“If I can pull good practices, including getting started and improving expert practices, into an in-console chat, that can steer me to some of the right places, I think it’s gonna be really powerful for people who feel intimidated by this giant powerful, awesome cloud experience,” Seroter said.

Duet AI was trained on Google Cloud data to optimize deployment

The modern cloud consists of many different options for developers to consider for app deployment, including different types of containers as well as virtual machines. 

Seroter said that the complexity of cloud deployment is why Google had to fine-tune Duet AI specifically with information about Google Cloud.

“So we found all of our docs, which is well over a million pages of docs, not to mention every code sample we’ve written, every reference application, every blog post and every YouTube video transcript,” Seroter said.

Rather than just relying on generic information that PaLM 2 might have, Duet AI has the right specific contextual information to provide accurate responses about Google Cloud.

The future of Duet AI in the cloud is ‘day two’ operations and SRE

The initial rollout of Duet AI for Google Cloud focuses on developers and will expand in the coming months to what are called “day two” operations, or ongoing cloud management (in software development parlance, developing and deploying code is typically a “day one” operation, while ongoing maintenance is “day two”).

Seroter said that future iterations of Duet AI for Google Cloud will help organizations with site reliability engineering (SRE) and architecture-level best practices that keep cloud applications running on day two and beyond. Going a step further, Seroter sees a future where Duet AI can also help with cloud cost optimization, helping organizations be more efficient in how they deploy and manage cloud infrastructure and applications.

“AI is the new interface for the cloud,” Seroter said. “It’s not just sitting outside the cloud, this is infused into the cloud experience.”