Google Expands TensorFlow Open-Source Tooling to Accelerate Machine Learning Development

Google made significant announcements in the field of artificial intelligence (AI) at its Google I/O event, with the launch of PaLM 2, a large language model, being the highlight of the day. However, the company had more AI news to share during the event.

Google is introducing a range of updates and enhancements to its open-source machine learning (ML) technology, focusing on the growing TensorFlow ecosystem. TensorFlow, led by Google, offers ML tools that empower developers to build and train models effectively.

One of the notable updates is the introduction of DTensor, a technology aimed at enhancing ML training through parallelism. By distributing computation and data across devices, it improves the efficiency of training and scaling models.

Google is also offering a preview release of the TF Quantization API, which helps optimize models for resource efficiency, ultimately reducing development costs.
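The Quantization API is only in preview, so details may change, but the general technique it builds on, post-training quantization, is easy to sketch in plain Python. The function names below are illustrative and not part of TensorFlow:

```python
# Illustrative sketch of post-training quantization: map float weights
# onto 8-bit integers with a scale and zero point, then dequantize.
# These helpers are hypothetical, not the TF Quantization API.

def quantize(weights, num_bits=8):
    """Affine-quantize a list of floats to signed integers."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant weights
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, 0.0, 0.5, 3.1]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# The int8 copy needs a quarter of float32's storage, at a small accuracy cost.
```

The resource savings come from storing and moving 8-bit integers instead of 32-bit floats; the reconstruction error is bounded by the quantization step size.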

Within the TensorFlow ecosystem, the Keras API suite plays a crucial role, providing deep learning capabilities in Python on top of the core TensorFlow framework. Google is introducing two new tools within Keras: KerasCV for computer vision (CV) applications and KerasNLP for natural language processing (NLP) tasks.

Alex Spinelli, Google’s Vice President of Product Management for Machine Learning, emphasized the company’s commitment to driving new capabilities, efficiency, and performance through open-source strategies. While Google continues to integrate exceptional AI and ML into its products, the company also aims to uplift the broader developer community by creating opportunities and advancements in the open-source space.

Google’s announcements at Google I/O showcase its dedication to innovation and collaboration in the AI and ML domains, providing developers with the tools and technologies necessary to build powerful and efficient models.

TensorFlow remains the ‘workhorse’ of machine learning at Google

In an era where large language models (LLMs) are all the rage, Spinelli emphasized that it’s now more critical than ever to have the right ML training tools.

“TensorFlow is still today the workhorse of machine learning,” he said. “It is still … the fundamental underlying infrastructure [in Google] that powers a lot of our own machine learning developments.”

To that end, the DTensor updates will provide more “horsepower” as the requirements of ML training continue to grow. DTensor introduces more parallelization capabilities to help optimize training workflows.
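DTensor’s real API lives inside TensorFlow, but the data-parallel idea it generalizes can be shown framework-free: split a batch across workers, compute a gradient per shard, and average the results (the “all-reduce” step). Everything below is an illustrative sketch, not the DTensor API:

```python
# Conceptual data parallelism (one of the ideas behind tools like DTensor):
# shard a batch across workers, compute per-shard gradients, average them.
# Plain-Python illustration with simulated workers, not TensorFlow code.

def gradient(w, batch):
    """Gradient of mean squared error for the model y = w * x over one shard."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, data, num_workers, lr=0.01):
    """One training step with the batch split evenly across simulated workers."""
    shard_size = len(data) // num_workers  # assumes len(data) divides evenly
    shards = [data[i * shard_size:(i + 1) * shard_size] for i in range(num_workers)]
    grads = [gradient(w, shard) for shard in shards]  # runs in parallel in practice
    avg_grad = sum(grads) / len(grads)                # the "all-reduce" step
    return w - lr * avg_grad

data = [(x, 3.0 * x) for x in range(1, 9)]  # target weight: 3.0
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, data, num_workers=4)
```

With equal-sized shards, the averaged gradient is identical to the full-batch gradient, which is why data parallelism scales training without changing the result.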

Spinelli said that ML overall is only getting hungrier for data and compute resources. As such, finding ways to improve performance, so that more data can be processed to serve increasingly large models, is extremely important. The new Keras updates will provide even more power, with modular components that let developers build their own computer vision and natural language processing capabilities.

Still more power will come to TensorFlow thanks to the new JAX2TF technology. JAX is an AI research framework widely used at Google as a computational library to build technologies such as the Bard AI chatbot. With JAX2TF, models written in JAX will be more easily usable within the TensorFlow ecosystem.

“One of the things that we’re really excited about is how these things are going to make their way into products — and watch that developer community flourish,” he said.

PyTorch vs TensorFlow

While TensorFlow is the workhorse of Google’s ML efforts, it’s not the only open-source ML training library.

In recent years the open-source PyTorch framework, originally created by Facebook (now Meta), has become increasingly popular. In 2022, Meta contributed PyTorch to the Linux Foundation, creating the new PyTorch Foundation, a multi-stakeholder effort with an open governance model.

Spinelli said that what Google is trying to do is support developer choice when it comes to ML tooling. He also noted that TensorFlow isn’t just an ML framework; it’s a whole ecosystem of tools for ML that can help support training and development for a broad range of use cases and deployment scenarios.

“This is the same set of technologies, essentially, that Google uses to build machine learning,” Spinelli said. “I think we have a really competitive offering if you really want to build large-scale high-performance systems and you want to know that these are going to work on all the infrastructures of the future.”

One thing Google apparently will not be doing is following Meta’s lead and creating an independent TensorFlow Foundation organization.

“We feel pretty comfortable with the way it’s developed today and the way it’s managed,” Spinelli said. “We feel pretty comfortable about some of these great updates that we’re releasing now.”

Google Releases New Generative AI Products and Features for Google Cloud and Vertex AI

During its annual developer conference, Google I/O 2023, Google made several exciting announcements, introducing a range of new products and features that provide customers with access to innovative generative AI capabilities and expanded options for utilizing and fine-tuning custom models.

One notable highlight is the introduction of three new foundation models available within Vertex AI, Google Cloud’s comprehensive machine learning platform. These models include Codey, a text-to-code model designed to assist developers with code completion, generation, and chat functionalities. Additionally, Imagen, a text-to-image model, empowers customers to generate and edit high-quality images to meet various business needs. Lastly, Chirp, a speech-to-text model, enables organizations to engage with customers and constituents more inclusively by supporting their native languages.

These new generative AI offerings from Google further expand the capabilities of Google Cloud and Vertex AI, providing customers with powerful tools to enhance their development processes, generate visual content, and facilitate effective communication with diverse audiences. The announcements at Google I/O 2023 reinforce Google’s commitment to driving innovation and advancing the field of artificial intelligence.

New tools for generative AI

Google’s new offerings — which in total include three brand-new foundation models, an Embeddings API, and a unique tuning feature — aim to empower developers and data scientists with more capabilities to build generative AI applications more quickly.

The first of the new foundation models released today, Codey, aims to accelerate software development by providing real-time code completion and code generation. Perhaps best of all, it can be customized to a user’s own codebase. The model supports more than 20 coding languages and is able to streamline a wide variety of coding tasks. It essentially helps developers ship products faster, generating code based on natural language prompts, and offers code chat for assistance with debugging and documentation.

Imagen, the second foundation model, helps organizations generate and edit high-quality images for a wide variety of use cases. This text-to-image model simplifies the creation and editing of images at scale, offering low latency and enterprise-grade data governance capabilities.

In one of the most exciting capabilities launched today, mask-free edit allows users to make changes to a generated image through natural language processing. This essentially means you can have a conversation with the user interface about how to generate the perfect photo, continuously iterating on the output. The model also offers image upscaling and captioning in over 300 languages. Users can quickly generate production-ready images, while built-in content moderation ensures safety.

The third foundation model, Chirp, focuses on enhancing customer engagement through speech-to-text. Trained on millions of hours of audio, Chirp supports more than 100 languages, with additional languages and dialects being added today. Chirp is a new version of Google’s 2 billion-parameter speech model that now boasts 98% accuracy in English and up to 300% relative improvement in languages with fewer than 10 million speakers.

Finding new relationships in data

To complement its new foundation models, Google introduced the Embeddings API for text and images, which is now available in Vertex AI as well. This API converts text and image data into multi-dimensional numerical vectors that map semantic relationships, which allows developers to create more engaging apps and user experiences. Applications range from powerful semantic search and text classification functionality to Q&A chatbots based on an organization’s data.
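What developers typically do with such vectors is nearest-neighbor search under cosine similarity. Here is a minimal, framework-free sketch; the tiny hand-made vectors stand in for real API output, which has hundreds of dimensions:

```python
import math

# Toy semantic search over embedding vectors. Real embeddings come from a
# service such as the Vertex AI Embeddings API; these hand-made 3-D vectors
# just illustrate the mechanics of ranking by cosine similarity.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

corpus = {
    "refund policy":   [0.9, 0.1, 0.0],
    "store locations": [0.1, 0.9, 0.1],
    "return an item":  [0.8, 0.2, 0.1],
}

def search(query_vec, corpus):
    """Rank documents by cosine similarity to the query embedding."""
    return sorted(corpus, key=lambda doc: cosine_similarity(query_vec, corpus[doc]),
                  reverse=True)

query = [0.85, 0.15, 0.05]  # pretend embedding for "how do I get my money back?"
ranked = search(query, corpus)
```

Because semantically related texts map to nearby vectors, the refund-related documents rank above the unrelated one even though no keywords match.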

Another standout feature of Vertex AI’s update is reinforcement learning from human feedback (RLHF), which Google claims makes Vertex AI the first end-to-end machine learning platform among hyperscalers to offer RLHF as a managed service. This feature enables organizations to incorporate human feedback to train a reward model for fine-tuning foundation models, making it particularly useful in industries where accuracy and customer satisfaction are crucial.
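The core mechanism behind RLHF, fitting a reward model from pairwise human preferences, can be illustrated without any ML framework. The sketch below uses a Bradley-Terry model in plain Python; the responses and preference pairs are made up, and none of this reflects Vertex AI’s actual managed-service implementation:

```python
import math

# Toy reward model in the spirit of RLHF: humans compare pairs of responses,
# and we fit a scalar reward per response so that preferred responses score
# higher (a Bradley-Terry model trained by gradient ascent). Purely
# illustrative; the data is hypothetical.

responses = ["a", "b", "c"]
# (winner, loser) pairs from imaginary human raters
preferences = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]

rewards = {r: 0.0 for r in responses}
lr = 0.1
for _ in range(500):
    for winner, loser in preferences:
        # P(winner preferred) under Bradley-Terry = sigmoid(r_winner - r_loser)
        p = 1.0 / (1.0 + math.exp(rewards[loser] - rewards[winner]))
        # Gradient ascent on the log-likelihood of the observed preference
        rewards[winner] += lr * (1.0 - p)
        rewards[loser] -= lr * (1.0 - p)

ranked = sorted(responses, key=rewards.get, reverse=True)
```

In full RLHF the reward model is a neural network over text, and its scores are then used to fine-tune the foundation model; the preference-fitting step shown here is the part human feedback drives.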

Google’s new generative AI advancements are poised to revolutionize the development landscape, offering developers and data scientists an increasingly sophisticated toolset for leveraging AI in the cloud. With these new foundation models and tools, the possibilities for innovation and responsible AI development are virtually limitless.

Google Opens Up About PaLM 2, Its New Generative AI LLM

Google has commenced its annual I/O conference with a strong emphasis on advancing artificial intelligence (AI) across its various domains, with a particular spotlight on PaLM 2.

Google I/O has traditionally served as a developer conference, covering a wide range of topics. However, this year’s event stands out as AI takes center stage in almost every aspect. Google aims to establish itself as a frontrunner in the market, even as competitors like Microsoft and OpenAI enjoy the success of ChatGPT.

The cornerstone of Google’s endeavors is its newly introduced PaLM 2, a large language model (LLM). PaLM 2 will provide the backbone for at least 25 Google products and services, which will be extensively discussed in sessions at I/O. These include Bard, Workspace, Cloud, Security, and Vertex AI.

Originally launched in April 2022, the initial version of PaLM (Pathways Language Model) served as Google’s foundational LLM for generative AI. According to Google, PaLM 2 significantly enhances the company’s generative AI capabilities in meaningful ways.

During a roundtable press briefing, Zoubin Ghahramani, VP of Google DeepMind, emphasized Google’s mission to make information universally accessible and useful. He highlighted how AI has accelerated this mission, providing opportunities to gain a deeper understanding of the world and create more helpful products.

As Google showcases PaLM 2 and its far-reaching implications at the I/O conference, it solidifies its commitment to advancing AI and harnessing its potential to improve user experiences and product functionality.

Putting state-of-the-art AI in the ‘palm’ of developers’ hands with PaLM 2

Ghahramani explained that PaLM 2 is a state-of-the-art language model that is good at math, coding, reasoning, multilingual translation and natural language generation. 

He emphasized that it’s better than Google’s previous LLMs in nearly every way that can be measured. That said, one way that previous models were measured was by the number of parameters. For example, in 2022 when the first iteration of PaLM was launched, Google claimed it had 540 billion parameters for its largest model. In response to a question posed by VentureBeat, Ghahramani declined to provide a specific figure for the parameter size of PaLM 2, only noting that counting parameters is not an ideal way to measure performance or capability.

Ghahramani instead said the model has been trained and built in a way that makes it better. Google trained PaLM 2 on the latest Tensor Processing Unit (TPU) infrastructure, which is Google’s custom silicon for machine learning (ML) training. 

PaLM 2 is also better at AI inference. Ghahramani noted that by bringing together compute, optimal scaling and improved dataset mixtures, as well as improvements to the model architectures, PaLM 2 is more efficient for serving models while performing better overall.

In terms of improved core capabilities for PaLM 2, there are three in particular that Ghahramani called out:

Multilinguality: The new model has been trained on over 100 spoken-word languages, which enables PaLM 2 to excel at multilingual tasks. Going a step further, Ghahramani said that it can understand nuanced phrases in different languages including the use of ambiguous or figurative meanings of words rather than the literal meaning.

Reasoning: PaLM 2 provides stronger logic, common sense reasoning, and mathematics than previous models. “We’ve trained on a massive amount of math and science texts, including scientific papers and mathematical expressions,” Ghahramani said.

Coding: PaLM 2 also understands, generates and debugs code and was pretrained on more than 20 programming languages. Alongside popular programming languages like Python and JavaScript, PaLM 2 can also handle older languages like Fortran.

“If you’re looking for help to fix a piece of code, PaLM 2 can not only fix the code, but also provide the documentation you need in any language,” Ghahramani said. “So this helps programmers around the world learn to code better and also to collaborate.”

PaLM 2 is one model powering 25 applications from Google, including Bard

Ghahramani said that PaLM 2 can adapt to a wide range of tasks, and at Google I/O the company has detailed how it supports 25 products that impact just about every aspect of the user experience.

Building off the general-purpose PaLM 2, Google has also developed Med-PaLM 2, a model for the medical profession. For security use cases, Google has trained Sec-PaLM. Google’s ChatGPT competitor, Bard, will now also benefit from PaLM 2’s power, providing an intuitive prompt-based user interface that anyone can use, regardless of their technical ability. Google’s Workspace suite of productivity applications will also get an intelligence boost, thanks to PaLM 2.

“PaLM 2 excels when you fine-tune it on domain-specific data,” Ghahramani said. “So think of PaLM 2 as a general model that can be fine-tuned to achieve particular tasks.”

Alphabet to Unveil AI Advancements at Its Google I/O Event, Bard Could Get Bigger

Alphabet Inc., the parent company of Google, is gearing up to make a significant splash in the field of artificial intelligence (AI) at the highly anticipated Google I/O conference commencing on May 10. With AI being a prominent topic, Alphabet finds itself in a position where discussing AI is imperative. At the same time, the company faces pressure to demonstrate its leadership in the domain, lest it relinquish control to the formidable OpenAI-Microsoft partnership.

Following the remarkable success of ChatGPT, Microsoft solidified its collaboration with OpenAI through a substantial multi-year, multi-billion-dollar investment. Microsoft has been actively integrating AI into its extensive range of products, showcasing its determination to excel in the AI landscape.

In contrast, Alphabet has been perceived as lagging behind, encountering some challenges with the launch of its Bard AI and struggling with a sluggish rollout. However, the upcoming event presents an opportunity for Alphabet to reverse this narrative as it prepares to unveil numerous AI updates.

By showcasing its latest advancements at the conference, Alphabet aims to demonstrate its commitment to AI innovation and reclaim its position as a prominent player in the industry. The company seeks to impress attendees and industry observers alike with its refreshed AI offerings and showcase its ability to compete in the ever-evolving AI landscape.

As the conference unfolds, all eyes will be on Alphabet as it strives to make significant strides in AI and reclaim its foothold in the face of rising competition from the OpenAI-Microsoft collaboration.

What to expect at Google I/O?

According to CNBC, Alphabet is expected to focus on AI and how its products “help people reach their full potential.” According to documents seen by the media company, Google will demonstrate “generative experiences” in Bard and Search using AI.

This will likely include using Bard to demonstrate its utility in coding, math, and logic, showing that the AI is on par with its OpenAI counterpart. The CNBC report also said that Google would showcase the AI’s expertise in following prompts in Korean and Japanese.

The expertise in multiple languages comes from PaLM 2, an improved iteration of Google’s general-use large language model that will be unveiled at the event. PaLM 2 supports more than 100 languages.

In March this year, the company also launched an experimental tool, a much more powerful version of Bard. Internally, Google has been working on a “multimodal Bard” that uses a larger data set, and has also tested versions dubbed “Big Bard” and “Giant Bard.”

Much like Microsoft, Google is also expected to improve user experiences after incorporating AI into its products like Sheets, Slides, and Meet, which it began rolling out to limited sets of users starting March this year.

Google is also expected to update users on image recognition in Google Lens and allow users to search using camera and voice.

Open-Source AI Massive Threat to Google and OpenAI

According to a leaked internal document written by a senior Google engineer, neither OpenAI nor Google is likely to come out on top in the race for AI dominance. The document has been circulating in Silicon Valley for several months and was recently made public by consulting firm Semi-Analysis. While OpenAI gained fame last year with its ChatGPT conversational AI chatbot, Google has been working in the AI domain for over a decade and was previously thought to be the leader. However, an AI arms race has since ensued between the two companies, with both vying for supremacy.

In April, Google engineer Luke Sernau published a document internally that has since been widely circulated privately. Sernau does not believe that either company will ultimately emerge as the AI leader if they continue down this path. He notes that while these firms have been squabbling, open-source AI has surged ahead, citing examples such as large language models that can run on a smartphone and personal AI that can be fine-tuned on a laptop in one evening.

Sernau wrote that AI models developed by private organizations still held the edge, but not for long. Open-source models were closing in on the results achieved by corporations spending billions of dollars, at a fraction of the cost. Open-source technology was also iterating much faster, with new iterations arriving in weeks rather than the months it takes corporations.

Sernau highlighted that the giant models used by Google and OpenAI were the main reason why their progress was being slowed down, while the open-source community had discovered LLaMa from Meta, which was much smaller and easier to work with. The engineer emphasized the need for Google to shift to smaller models and learn from the open-source community, which is more nimble and can be quickly iterated upon.

Ultimately, if better AI models become available for free, clients will not pay to use inferior models from companies such as Google or OpenAI. Thus, it is essential for these companies to shift their focus and learn from the open-source community to stay competitive.

Google Says Goodbye to Passwords With Passkeys Launch

In a major announcement, Google has revealed that it is now offering support for passkeys across all its platforms. With this update, users will be able to enjoy a passwordless sign-in experience on websites and apps using a fingerprint, facial recognition, or a local PIN, without the need to enter a password or complete 2-step verification (2SV).

To set up a passkey, users can log in to a website or app using their existing username and password, and then opt to create a passkey that can be stored in a solution like Google Password Manager for future logins.

Compared to traditional passwords, passkeys are much more secure and resistant to credential theft, phishing, and social engineering scams. This makes them a safer and more convenient alternative, especially considering how even the most tech-savvy users can be fooled by phishing attempts and other scams.

In their official blog post, Google software engineers Arnar Birgisson and Diana K. Smetters noted that “passkeys are a more convenient and safer alternative to passwords.” With broader support for passwordless sign-in options, Google accounts are now more resistant to identity-based attacks, offering users greater peace of mind and protection online.

Password-based security inefficient for modern enterprise

The release comes as the weaknesses of password-based security are becoming increasingly apparent, with hackers leaking more than 721 million passwords online last year. Vendors including Microsoft and Apple have committed to developing a common passwordless sign-in standard. 

While existing technologies like multi-factor authentication (MFA) have helped enhance online account security, they haven’t fully addressed the risk of credential theft. SMS-based verification remains susceptible to SIM-swap attacks, and additional authentication steps add inconvenience for end users.

Passwordless login options like passkeys, which enable users to log in with biometric data, provide a user-friendly alternative that decreases the likelihood of a successful account takeover attempt.

AI Pioneer Geoffrey Hinton Quits Google, Warns Against Rapid AI Development

One of the pioneers in the development of the deep learning models that have become the basis for tools like ChatGPT and Bard has quit Google to warn against the dangers of scaling AI technology too fast.

In an interview with the New York Times on Monday, Geoffrey Hinton – a 2018 recipient of the Turing Award – said he had quit his job at Google to speak freely about the risks of AI.

He told NYT journalist Cade Metz that part of him now regrets his life’s work, explaining how tech giants like Google and Microsoft had become locked in an AI competition that may be impossible to stop.

“Look at how it was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.”

As companies improve their AI systems, he said, they become increasingly dangerous: “It is hard to see how you can prevent the bad actors from using it for bad things”.

While chatbots today tend to complement human workers, it would not be long before they replaced a number of human roles. “It takes away the drudge work,” he said. “It might take away more than that.”

Perhaps more concerning, the article talked about how AI systems can learn unexpected behavior from the vast amounts of data they analyze, and what that might mean when AI not only generates computer code, but also deploys it.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

After publication of the interview, Hinton was keen to clarify that he had not intended to criticize his old employer, tweeting: “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Back in 1986, Hinton, David Rumelhart and Ronald J. Williams wrote a highly cited paper that popularised the backpropagation algorithm for training multi-layer neural networks, which loosely mimic how biological brains learn.
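Backpropagation itself is just the chain rule applied through a network, layer by layer. As a minimal illustration (one sigmoid neuron, one training example, plain Python):

```python
import math

# Minimal backpropagation: one sigmoid neuron trained on a single example.
# The forward pass computes the prediction; the backward pass applies the
# chain rule to push the error signal back to the weight and bias.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0          # weight and bias
x, target = 1.5, 1.0     # one training example
lr = 0.5

for _ in range(300):
    # forward pass
    z = w * x + b
    y = sigmoid(z)
    loss = (y - target) ** 2
    # backward pass: chain rule, dloss/dw = dloss/dy * dy/dz * dz/dw
    dloss_dy = 2 * (y - target)
    dy_dz = y * (1 - y)
    dloss_dw = dloss_dy * dy_dz * x
    dloss_db = dloss_dy * dy_dz
    # gradient descent step
    w -= lr * dloss_dw
    b -= lr * dloss_db
```

Deep learning frameworks automate exactly this bookkeeping across millions of parameters and many layers; the 1986 insight was that the chain rule makes the whole computation efficient.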

For the last 10 years, the 75-year-old British-Canadian has divided his time between the University of Toronto and Google, which acquired his AI startup DNNresearch in 2013.

Big Tech Boosts AI, Cuts Jobs

US tech giants such as Alphabet, Microsoft, Amazon, and Meta are increasing their large language model (LLM) investments as a show of their dedication to harnessing the power of artificial intelligence (AI) while cutting costs and jobs.

Since the launch of OpenAI’s ChatGPT chatbot in late 2022, these businesses have put their AI models on steroids to compete in the market, CNBC reported on Friday.

All the recently released quarterly reports by these tech behemoths show their efforts to increase AI productivity in the face of growing economic worries.

A significant amount of data and processing power are needed for generative AI programs to replicate human-like outputs like text, code excerpts, and computer-generated graphics. 

Tech titans and AI investments

During their respective earnings calls, the CEOs of Alphabet, Microsoft, Amazon, and Meta all discussed their plans and monetary investments for developing and deploying AI applications.

Sundar Pichai, CEO of Alphabet, acknowledged the demand to produce AI products and underlined the incorporation of generative AI developments to improve search capabilities.

Beyond search, Google uses AI to improve ad conversion rates and fend off “toxic text.” Pichai noted the company’s ties with Nvidia for powerful processors, as well as cooperation between its two main AI teams, Brain and DeepMind.

Microsoft’s Teams teleconferencing system, Office program, and Bing search engine all use OpenAI’s GPT technology. 

Citing Bing’s doubled downloads following the integration of a chatbot, CEO Satya Nadella emphasized that AI will drive revenue growth and increase app penetration. Microsoft’s expenditure on sizable data centers for AI applications will demand a substantial sum of money.

Andy Jassy, the CEO of Amazon, showed interest in generative AI, highlighting the recent developments that provide game-changing possibilities. 

Although Amazon primarily sells access to AI technology, it plans to use its resources as one of the few businesses capable of making the necessary infrastructure investments, developing its own LLMs and creating data center chips for machine learning.

Jassy noted Amazon Web Services’ aspirations to create tools for developers and enhance user experiences, including Alexa.

Along with Meta’s emphasis on the metaverse, CEO Mark Zuckerberg underscored the value of AI. He highlighted the company’s shift toward generative foundation models and its use of machine learning for recommendations.

The AI initiatives from Meta will have an impact on a variety of products, including conversation features in Facebook Messenger and WhatsApp, as well as tools for creating images for Facebook and Instagram. 

In addition, Zuckerberg discussed the company’s expenditures in enlarging data centers for AI infrastructure as well as the possibilities of AI agents, such as the automation of customer service.

AI booms as tech job cuts loom

All the major tech companies like Alphabet, Microsoft, Amazon, and Meta are making significant investments in massive language models and artificial intelligence to improve their products and user experiences. 

According to the CNBC report, these tech behemoths are investing enormous resources to be on the cutting edge of this quickly developing industry because they see the revolutionary potential of AI. 

While AI generated positive media coverage, the loss of tech jobs also caused heartbreak. 

According to a Crunchbase News count, 136,569 employees at tech companies headquartered in the US, or with a sizable US workforce, have been let go in a wave of layoffs so far in 2023. In 2022, public and private tech enterprises in the US cut more than 93,000 jobs.

Google Bard Gets A Slate Of New Coding And Debugging Features

Back in March 2023, Google opened up its first conversational AI chatbot, Google Bard, to a wider set of users. In its initial iteration, Bard was extremely limited in capability — especially compared to its chief rival ChatGPT. However, Google has promised that Bard will gradually see a series of updates that will eventually make it as capable as ChatGPT (or even better) in the days to come. 

Just as Google promised, the company has constantly been updating Bard with new capabilities. Late last month, Bard incorporated something known as PaLM (Pathways Language Model), which improved the chatbot’s response to math and logic-related questions. When this happened, Google also announced that its next major goal would be to equip the chatbot with the capability to write and debug code.

Google confirmed on April 21, 2023 that it had fulfilled its promise to update Bard. The update added the ability to generate new code, debug code, and provide explanations to users. In the blog post announcing the update, Google confirmed that the ability to help with programming and developmental tasks was one of the top-requested features among Bard users. 

Google also confirmed that Bard is gaining the capability to natively export Python code to Google Colab without any copy-pasting. As of now, Bard supports coding in more than 20 programming languages, with the notable ones being C++, Go, Java, JavaScript, Python, and TypeScript.

Coding using Google Bard: Everything you need to know

In the blog post announcing the changes, Google Group Product Manager Paige Bailey confirmed that the new changes also endow Bard with the capability to explain code snippets, making it easier for learners and novice users to understand code. Experienced coders can also use this feature to understand the expected output of a chosen block of code.

Bailey further explained that Bard can assist in debugging code that is not functioning as intended. The process is as simple as typing, “this code didn’t work, please fix it.” Bard will then try to debug the code and explain what went wrong.

In addition to these features, Bard can now attribute the source of code, particularly if it is drawn from an existing open-source project. Another key area where Bard can help coders is its ability to optimize an existing piece of code. Again, all the user needs to do is ask Bard to make the code faster.

Even with these newfound capabilities, Google still insists that Bard is in an experimental stage and could occasionally produce unexpected outputs and sub-optimal code. The company cautions that developers shouldn’t rely wholly on code generated by Bard without verifying it themselves.

Google Forms AI Dream Team By Merging ‘Brain’ and ‘DeepMind’ Projects

Alphabet and Google CEO Sundar Pichai has announced that Google’s Brain and DeepMind artificial intelligence (AI) projects are being combined into one. The former is a homegrown Google AI initiative, while the latter was a 2014 acquisition.

According to Pichai, Demis Hassabis, CEO of DeepMind, will head the new Google DeepMind and “lead the development of our most capable and responsible general AI systems.” Jeff Dean, a co-founder of the Brain team and former senior vice president of Google Research and Health, will serve as chief scientist for Google Research and Google DeepMind.

“Together, in close collaboration with our fantastic colleagues across the Google Product Areas, we have a real opportunity to deliver AI research and products that dramatically improve the lives of billions of people, transform industries, advance science, and serve diverse communities,” Hassabis writes in a memo to employees. “By creating Google DeepMind, I believe we can get to that future faster. Building ever more capable and general AI, safely and responsibly, demands that we solve some of the hardest scientific and engineering challenges of our time,” he added.

This is interesting, as Google and DeepMind have occasionally clashed. DeepMind apparently failed in a long-running effort to break away from Google in 2021, as the tech giant pressured it to turn its research into products. Google, though, is likely looking to consolidate its research teams as it forges ahead with its foray into the AI sector.

It also comes off the back of Bard’s seemingly botched release. In March, Google released early access to Bard, a competitor to Bing Chat and ChatGPT. However, as many technology outlets have reported, Bard has so far been far less capable than its competitors. Although Pichai has stated that updates are on the way, it has been reported that Google employees criticized the product before its introduction and encouraged management not to make it available.

According to reports, Google is also pouring significant resources into Magi, a set of new AI-powered search features, in response to Microsoft’s close partnership with OpenAI on Bing Chat, an AI chatbot integrated with the Bing search engine that Google sees as a threat. Magi’s task force, established this year, comprises more than 160 people.

“We’ve been an AI-first company since 2016, because we see AI as the most significant way to deliver on our mission,” Pichai wrote in the announcement. “The pace of progress is now faster than ever before. To ensure general AI’s bold and responsible development, we’re creating Google DeepMind to help us build more capable systems more safely and responsibly,” he added.