
Google Delays Gemini AI Launch

Google has chosen to delay the launch of its highly anticipated Gemini AI model, intended to compete with OpenAI’s GPT-4, until the following year. According to sources cited by The Information, Google CEO Sundar Pichai made the decision to postpone the scheduled launch events in California, New York, and Washington due to performance issues in languages other than English.

Gemini, designed as a multimodal AI model capable of comprehending and generating text, images, and various data types, has encountered challenges in multilingual functionality. In comparison to GPT-4, Gemini falls short in this aspect, prompting Google engineers to recognize the need for further improvement. While smaller versions of Gemini are undergoing testing, the development of the full-scale Gemini model is still in progress.

This isn’t the first instance of a delay for Gemini; earlier reports indicated a pushback for the cloud version of the model. Consequently, AI-driven products like the Bard chatbot, expected to benefit from Gemini enhancements, will now face a delay until the following year.

Google initially unveiled Gemini at its I/O event, emphasizing its impressive multimodal capabilities and efficiency in tool and API integrations. The company planned to offer Gemini in various sizes, including a mobile-friendly “Gecko” version, with the goal of attracting third-party developers.

The key question remains when Gemini will be seamlessly integrated into Google’s services, such as Bard, Search, and Workspace.

Gemini’s Role in Shaping the Future of Internet Information Flow

The significance of Gemini for Google lies in its potential to demonstrate the company’s ability to rival or surpass OpenAI, shaping a new internet landscape where information flow transitions from traditional search and the World Wide Web to chatbots.

Gemini’s success would also challenge the industry perception that GPT-4 is the ultimate benchmark, showcasing that there is still room for breakthroughs in underlying Transformer technology and scaling principles. While Google holds an advantage in data and computing, its ability to capitalize on this advantage has been hindered, in part, by Microsoft’s partnership with OpenAI.

Since March 2023, no company, whether a major tech player or an innovative startup, whether operating with closed or open source models, has managed to release a model comparable to GPT-4. Instead, the market is flooded with language models at the GPT-3.5 level, a standard that now seems easily attainable.

GPT-4’s advanced capabilities stem from its larger, more complex, and more expensive architecture. Rather than relying on a single large model, GPT-4 is reported to use a Mixture of Experts: a set of interconnected specialist models with a routing mechanism that activates only the experts relevant to each input. It is speculated that Google’s Gemini is built on a similar concept. OpenAI’s CEO, Sam Altman, has hinted at a timeline for the release of GPT-5, expected to be even more advanced.
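For intuition, here is a minimal Python sketch of the mixture-of-experts idea. Every detail in it (the expert count, the top-k gating rule, the dimensions) is an invented toy for illustration, not a description of GPT-4’s or Gemini’s actual internals:

```python
import numpy as np

# Toy mixture-of-experts layer: a router scores each expert for the input,
# and only the top-k experts actually run. All sizes are arbitrary.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

router = rng.normal(size=(d_model, n_experts))                     # gating weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                        # one score per expert
    chosen = np.argsort(scores)[-top_k:]       # keep only the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                   # softmax over the chosen experts
    # Only the selected experts compute, which is how MoE models grow total
    # parameter count without per-token compute growing at the same rate.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.normal(size=d_model)).shape)  # (16,)
```

The routing is also where the cost pressure comes from: total parameters scale with the number of experts, so training and serving the full ensemble remains expensive even though each input only touches a few experts.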

However, the intricate architecture of these models also comes with a high cost for inference. In response, OpenAI is striving to lower prices with models like GPT-4 Turbo, even if it means compromising on some aspects of quality.

ChromeOS 117 Unveiled: A Closer Look at the Latest Features and Customization Options for Chromebook Users

Google has unveiled ChromeOS 117 for Chromebooks, introducing an array of exciting new features. This update boasts the highly anticipated Material You design, a customizable window-switching panel, and a seamless integration that allows users to join video calls directly from the calendar view.

One of the standout features of ChromeOS 117 is the introduction of customizable Material You design elements. Users can now personalize their Chromebook experience by selecting a wallpaper and a color palette. These choices will be reflected across various interface components, including quick settings, the desktop, and window headers. This customization not only enhances aesthetics but also brings a sense of individuality to the user experience. Additionally, ChromeOS 117 introduces a revamped quick settings menu featuring larger buttons and slider bars, reminiscent of the slide-down settings menu found in Android 13 on Pixel phones.

Effortless Multitasking

To further streamline multitasking, the update introduces a novel window organizer. Users can effortlessly arrange their open windows by either pressing the Everything button + Z or by hovering over the “Maximize” icon on an app window. This functionality empowers users to organize their workspace by splitting, partially viewing, or fully expanding app windows, or even allowing them to float above other windows.

ChromeOS users have long enjoyed quick access to the calendar view from the bottom bar. ChromeOS 117 elevates this convenience by enabling the ability to join video meetings directly from the calendar view. This enhancement simplifies the process of attending scheduled meetings, reducing unnecessary navigation and clicks.

In a bid to enhance battery life, ChromeOS 117 introduces adaptive charging. This feature can be activated through the Settings menu under Device > Power > Adaptive charging. Once enabled, the Chromebook will intelligently charge to 80% and then utilize machine learning to adapt to the user’s unplugging habits, gradually reaching a full charge of 100%. This approach not only conserves energy but also prolongs battery life.
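To picture the behavior described above, here is a speculative toy model in Python: charge normally up to 80%, hold, and resume only when a naive prediction says the user is about to unplug. This is not ChromeOS’s actual algorithm; the hold threshold, top-off constant, and predictor are all placeholders:

```python
from datetime import datetime, time, timedelta

HOLD_LEVEL = 80                          # percent to pause charging at
TOP_OFF_TIME = timedelta(hours=1)        # assumed time to go from 80% to 100%

def predict_unplug(history: list[time]) -> time:
    """Naive stand-in predictor: average the recent unplug times of day."""
    minutes = sum(t.hour * 60 + t.minute for t in history) // len(history)
    return time(minutes // 60, minutes % 60)

def should_charge(level: int, now: datetime, history: list[time]) -> bool:
    if level < HOLD_LEVEL:
        return True                      # always charge up to the hold level
    expected = datetime.combine(now.date(), predict_unplug(history))
    return now >= expected - TOP_OFF_TIME  # top off just before the usual unplug

history = [time(8, 0), time(8, 30), time(7, 45)]                 # past unplug times
print(should_charge(85, datetime(2023, 9, 20, 7, 30), history))  # True
```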

Dynamic Personalization

Beyond these headline features, ChromeOS 117 offers several other notable enhancements. Users can now select shared albums from Google Photos for rotating wallpapers, making personalization even more dynamic. Searching for GIFs is made easier with the integration of GIF search within the emoji picker. For those interested in creative endeavors, the update supports time-lapse recording through the webcam. Additionally, users can access essential system information such as RAM usage, power status, and OS version directly from the launcher’s search results.

In conclusion, ChromeOS 117 is a significant update that brings a host of customization options, productivity enhancements, and power-saving features to Chromebook users. With Material You design and streamlined multitasking, Google continues to refine the ChromeOS experience, making it more user-centric and efficient.

Google Unveils Exciting Updates in Vertex AI

Hey there, tech enthusiasts and AI aficionados! The AI scene is buzzing with excitement as Google takes center stage at its Google Next event to announce a slew of updates across its portfolio, all harnessed by the incredible power of generative AI. Let’s dive into the latest advancements that are set to reshape the AI landscape.

Vertex AI Shines with Enhancements and New Features

At the heart of Google’s AI efforts lies Vertex AI, a platform that’s receiving significant upgrades to enhance developer experiences and bolster foundational models. These changes are bound to make waves in the AI community.

One of the headliners of the event is the expansion of Google’s PaLM 2 large language model (LLM). This model, initially introduced at the Google I/O conference earlier this year, is receiving a boost in terms of language support and token length. The Codey code generation LLM and the Imagen image generation LLMs are also getting updates to enhance their performance and output quality.

A Closer Look at PaLM 2’s Evolution

Google’s PaLM 2 is stealing the show with these updates. Since its initial announcement, the LLM has gained broader language support, bringing the total to an impressive 38 languages, including Arabic, Chinese, Japanese, German, and Spanish. But that’s not all: the token length is also getting a major expansion, growing from 4,000 to a whopping 32,000 tokens. This means PaLM 2 can now handle longer-form documents, a significant stride forward.
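For developers, the longer context mostly surfaces as a different model selection. Below is a hedged sketch using the Vertex AI Python SDK; the model name text-bison-32k and the parameter values are best-effort assumptions from around this announcement, and older SDK releases expose the same classes under vertexai.preview.language_models:

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

# Project and location are placeholders for your own GCP settings.
vertexai.init(project="your-project-id", location="us-central1")

# Assumed name for the 32k-token PaLM 2 text variant on Vertex AI.
model = TextGenerationModel.from_pretrained("text-bison-32k")

long_document = open("contract.txt").read()  # now fits in a single prompt
response = model.predict(
    f"Summarize the key obligations in this document:\n\n{long_document}",
    max_output_tokens=1024,
    temperature=0.2,
)
print(response.text)
```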

Elevating Code Development and Image Generation

The excitement doesn’t stop there. The Codey text-to-code LLM is getting a facelift, boasting a remarkable 25% improvement in code generation quality. Partners like GitLab are leveraging this model to assist developers in predicting and completing lines of code, generating test cases, and even explaining complex code segments.

The Imagen text-to-image LLM is also receiving upgrades, and one feature stands out – “style tuning.” This nifty addition allows users to create images that align perfectly with their brand guidelines or creative vision, using as few as 10 reference images. With this feature, you can infuse your brand’s style into the generated images, giving them a unique and cohesive look.

Expanding the Model Lineup with Llama 2

While PaLM 2 is Google’s flagship foundation model, the company is going the extra mile by providing access to third-party LLMs on Google Cloud. This aligns with the industry trend of supporting multiple foundation models. In this spirit, Google is introducing Meta’s recently released Llama 2. Using reinforcement learning from human feedback (RLHF), users can further tune Llama 2 on their enterprise data, ensuring results that are more relevant and precise.

Vertex AI Extensions: Bridging Models and Real-World Data

But let’s not forget that the true magic happens when foundation models are connected to real-world data, enabling tangible actions. This is where Vertex AI Extensions come into play. These fully managed developer tools act as bridges, connecting models to real-world data via APIs and enabling them to perform real-world actions.

With Vertex AI Extensions, developers can create powerful generative AI applications that encompass everything from digital assistants to custom search engines and automated workflows. This paves the way for innovative solutions that seamlessly integrate AI into various aspects of daily life.

Final Thoughts

Google’s Google Next event has certainly left us in awe of the advancements in AI, particularly within the realm of generative AI. With updates to Vertex AI, enhancements to foundational models like PaLM 2, and the introduction of exciting features like style tuning and Llama 2, the possibilities are limitless. As AI continues to evolve and empower enterprises, we can’t wait to see the groundbreaking applications that will emerge from these developments. Stay tuned for a future where AI transforms the way we interact with technology and data!

Google Introduces “Results About You” Privacy Update

In a recent blog post, Google has unveiled new updates to its privacy tools, making it easier for users to control the information that appears in their search results. The search giant has introduced additional features to the “results about you” tool, which allows users to remove search results containing personal information such as phone numbers, home addresses, or email addresses, thereby adding an extra layer of online privacy protection.

Results About You Tool: A Closer Look

The “results about you” tool, initially launched last year, has now been upgraded with a user-friendly dashboard that promptly alerts individuals whenever search results containing their personal information are detected. With just a few taps, users can swiftly request Google to remove these results, thereby safeguarding their privacy effectively.

This update is reminiscent of a feature introduced by Google One earlier in the year. The feature involved scanning the broader web to identify instances where user information might have been compromised in data breaches. On the other hand, the “results about you” tool takes a proactive approach by searching for and eliminating personal information from search results, providing an added layer of privacy protection.

To access the tool, users can tap their profile photo within the Google app and select “results about you.” Alternatively, Google has also created a dedicated webpage for this purpose. As of now, the tool is available in English for users in the United States, but Google has plans to expand its availability to other languages and regions in the near future.

Google Enhances Privacy

In another important update, Google has revised its policy on removing explicit photos of individuals from search results. While it has long offered the option to remove non-consensual explicit images, the policy has been extended to encompass consensual imagery as well. For instance, if someone has previously uploaded explicit content of themselves to a website but subsequently decided to delete it, they can now request Google to remove it from search results if it has been reuploaded elsewhere without their consent. Notably, this policy does not apply to content that is still being sold or monetized.

It’s important to understand that removing explicit content from Google Search does not erase it entirely from the web. However, the removal process can make it significantly more challenging for people to stumble upon such content. For detailed instructions on how to use this feature, users can search for “request removals” in the Google help center.

Updates to SafeSearch and Parental Controls

Google is rolling out updates to its parental controls and SafeSearch feature. From this month onward, explicit imagery, such as adult or graphic violent content, will be automatically blurred in search results, following an earlier announcement. Users can disable SafeSearch blurring in their settings, unless it has been locked by a school network admin or guardian on their account.

Lastly, Google is enhancing access to parental controls from the Search interface. By typing queries like “Google parental controls” or “Google family link,” users will see an information box explaining how to adjust their account settings or their child’s account settings more conveniently.

With these recent updates, Google is striving to empower users with greater control over their personal information and content visibility while reinforcing its commitment to online privacy and safety.

Enhancing User Safety: Google Rolls Out ‘Unknown Tracker Alerts’ for Android Users

Google is taking a significant step to enhance user safety by introducing a new safety feature known as “Unknown Tracker Alerts” for Android users. The feature, which was initially announced at the Google I/O developer event, is aimed at detecting potential stalkers who might be using Bluetooth tracking devices like Apple AirTags to track unsuspecting individuals.

Starting today, Android users will receive automatic alerts if an unknown Bluetooth device is detected traveling with them. This could indicate that someone is attempting to stalk them using a tracking device. To bolster security, users will also have the option to manually scan their surroundings for potential trackers using their Android device. If a tracking device is found, the user will be guided on the next steps to take.

The need for this safety feature arose due to the misuse of Bluetooth tracking, with reports of people employing AirTags for stalking and illegal activities, such as tracking vehicles for potential theft. In response to these concerns, Apple had taken measures to address privacy issues with AirTags, but these changes did not directly benefit Android users.

However, in May, Apple and Google jointly announced their plan to develop an industry-wide specification to alert users about unwanted tracking from Bluetooth devices. The finalized specification is expected to be ready by the end of the year.

Taking proactive steps to protect Android users, Google introduced improvements to its Find My Device network and initiated alerts regarding potential trackers traveling with them. This custom implementation seeks to safeguard Android users ahead of the official joint specification. Apple, on the other hand, has opted to wait for the joint spec’s implementation rather than rolling out its own custom version in the meantime.

The new Unknown Tracker Alerts feature will send notifications to Android users if an unknown tracker is detected in their vicinity. Users can then view a map of where the tracker was last seen and even play a sound to help locate the device. Additionally, if the device is found, users can obtain more information about the owner by bringing the tracker near the back of their phone.

The safety feature also provides guidance on how to disable the Bluetooth device entirely, ensuring the owner can no longer track the user or receive future updates from the tracker.

Unknown Tracker Alerts

For added control, users can manually scan their surroundings for potential Bluetooth trackers by accessing the “Unknown Tracker Alerts” option under “Safety & Emergency” in Android’s Settings. This manual scan takes around 10 seconds to complete and offers tips on what to do if a tracker is found, eliminating the need to wait for automatic alerts.
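Trackers like these are ordinary Bluetooth Low Energy advertisers, so a scan of this kind amounts to listening for advertisements and flagging unfamiliar devices that keep reappearing. The sketch below is not Google’s implementation; it uses the cross-platform bleak library purely to illustrate the idea, and the seen-repeatedly heuristic is invented for the example:

```python
# pip install bleak
import asyncio
from collections import Counter
from bleak import BleakScanner

async def repeated_devices(scans: int = 3, threshold: int = 3) -> list[str]:
    """Flag BLE addresses that show up in every scan (illustrative only).

    A device that travels with you keeps reappearing across scans, which is
    roughly the signal a tracker-alert system looks for.
    """
    seen: Counter[str] = Counter()
    for _ in range(scans):
        for device in await BleakScanner.discover(timeout=5.0):
            seen[device.address] += 1
        await asyncio.sleep(10)  # real systems spread scans across a journey
    return [addr for addr, n in seen.items() if n >= threshold]

if __name__ == "__main__":
    print(asyncio.run(repeated_devices()))
```

In practice, trackers rotate their Bluetooth addresses for privacy, so real detectors key on stabler identifiers in the advertisement payload; standardizing that is part of what the joint Apple-Google specification is meant to address.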

Furthermore, Google had previously announced plans to update its Find My Device network to help users locate other missing items, such as headphones, phones, luggage, and keys, through third-party Bluetooth tracker tags. This feature would also support popular tracker brands like Tile, Chipolo, and Pebblebee, as well as audio devices like Pixel Buds and headphones from Sony and JBL. However, this update has been put on hold as Google collaborates with Apple to finalize the joint unwanted tracker alert specification.

Google has decided to delay the rollout of the Find My Device network until Apple implements the necessary protections for iOS, reflecting the companies’ commitment to addressing user safety and security concerns jointly.

Google’s Genesis: An AI Tool for News Writing Raises Concerns Among Journalists

Google has engaged in discussions with news organizations including The New York Times, The Washington Post, and News Corp, the owner of The Wall Street Journal. The purpose of these meetings is to introduce an AI tool named Genesis, designed to produce and write news stories. According to an exclusive report by The New York Times, Google aims to promote journalism productivity through this tool.

Despite Google’s assurance that the AI tool is meant to assist journalists rather than replace them, it has sparked concerns among industry professionals. Some executives present during the pitch expressed discomfort with the AI’s lack of understanding of the effort required to produce accurate news stories.

While AI can support journalists with research tasks, it may struggle to provide credible and original reporting value. The risk of misinformation spreading is a significant concern, as large language models can sometimes produce incorrect information with confidence.

As the media landscape shifts toward AI-generated content and crowdsourced news, the importance of investigative journalism and fact-checking becomes even more critical.

Several media organizations, including Insider, NPR, and The Times, have already started exploring ways to integrate AI tools into their newsrooms. Google’s spokeswoman, Jenn Crider, clarified that the AI tool is intended to handle menial tasks, such as generating headline options, rather than replacing human journalists.

While some argue against outright rejection of Genesis, citing past instances where technology has transformed aspects of journalism, others highlight potential pitfalls. For instance, Joshua Benton, an American journalist and founder of Nieman Journalism Lab, conducted an experiment using ChatGPT, an AI language model. The results revealed that AI-generated reports can be problematic, marred by purple prose, racist passages, and ethical lapses.

In conclusion, the introduction of AI tools like Genesis may have the potential to aid journalists in their work, but the concerns surrounding misinformation and ethical issues warrant cautious consideration. The evolution of technology in journalism has shown both promise and challenges, making it essential for the industry to strike a balance between embracing innovation and upholding journalistic integrity.

Google Unveils New Accessibility and Learning Features at ISTE Expo

Google showcased its latest accessibility and learning features during the International Society for Technology in Education (ISTE) expo. The new features introduced include an expanded reading mode, integration of sign language interpreters in Google Meet, and AI-powered question suggestions for educational assignments.

Building on its AI-powered tools integrated into consumer products like Search, Gmail, and Sheets, Google has now incorporated AI-generated questions into assignments related to YouTube videos. Teachers can customize the questions or modify the suggestions provided by the AI. The company is currently accepting applications to test this feature in English, with plans to extend support to Spanish, Portuguese, Japanese, and Malay.

In March, Google introduced Reading Mode, which allows users to focus on text by removing distracting elements such as videos and images. Initially available only for Chrome browsers on ChromeOS, Google announced that the feature will soon be accessible to all Chrome users.

Furthermore, Google announced that screen reader users will be able to convert images to text in PDFs using Chrome browsers on Chromebooks. However, copying the text from these converted PDFs may not be supported.

To enhance readability, Google introduced new fonts designed for Arabic, Cyrillic, and Latin systems. These optically variable fonts adapt their design for different sizes, improving legibility across various platforms.

Google Meet, the company’s video conferencing platform, introduced a tile-pairing feature. When enabled, this feature highlights both tiles when a participant speaks, facilitating seamless connection with a sign language interpreter.

In addition, Google Meet will soon incorporate features like polls and Q&A during livestreams for classrooms subscribed to the “Teaching and Learning Upgrade” or “Education Plus” plans.

While Google offers its Workspace for Education for free, advanced security, learning tools, device management, and analytics are available through the Standard tier, priced at $3 per student per year, and the Plus tier, priced at $5 per student per year.

With these new accessibility and learning features, Google aims to empower educators and students by providing innovative tools that enhance engagement and foster inclusivity within the learning environment.

Enhanced Features Coming to iOS Chrome: Built-in Lens, Maps, and Calendar Integration

Google announced today that Chrome on iOS is getting a few new features, including built-in Lens support that will allow users to search using just their cameras. Although you can already use Lens in Chrome on iOS by long-pressing an image you find while browsing, you will soon also be able to use your camera to search with new pictures you take and existing images in your camera roll.

The company says the new integration is launching in the coming months. For context, Google Lens lets you search with images to do things like identify plants and translate languages in real time.


Google also announced that when you see an address in Chrome on iOS, you no longer need to switch apps to look it up on a map. The company says that when you press and hold a detected address in Chrome, you will now see the option to view it in a mini Google Maps view right within Chrome.

In addition, users can now create Google Calendar events directly in Chrome without having to switch apps or copy information over manually. You just need to press and hold a detected date, and select the option to add it to your Google Calendar. Chrome will automatically create and populate the calendar event with important details like time, location and description.
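The detect-and-populate flow is easy to picture: find a date in the page text, then prefill an event around it. This toy Python sketch uses dateutil to show the concept; it is not Chrome’s actual detector, and the event fields are invented placeholders:

```python
# pip install python-dateutil
from dateutil import parser

snippet = "The exhibit opens on September 28, 2023 at the downtown gallery."

# fuzzy_with_tokens ignores the non-date words and returns the parsed date.
when, _skipped = parser.parse(snippet, fuzzy_with_tokens=True)

event = {
    "title": "New event",        # placeholder; Chrome infers real details
    "start": when.isoformat(),
    "description": snippet,
}
print(event["start"])  # 2023-09-28T00:00:00
```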


Lastly, Google announced that users can now translate a portion of a page by highlighting text and selecting the Google Translate option.

“As our AI models improve, Chrome has gotten better at detecting a webpage’s language and suggesting translations,” the company wrote in a blog post. “Let’s say you’re planning to visit a museum in Italy, but the site’s in Italian and you don’t speak the language. Chrome will automatically offer to translate the museum’s website into your preferred language.”

Try On: Google’s New AI Tool for Enhanced Online Shopping Experience

The advent of online retail has completely transformed the way we shop, offering the convenience of shopping from home and saving valuable time, particularly during the pandemic. However, purchasing clothing online, despite its numerous benefits, has often come with its fair share of frustrations.

According to a recent online survey conducted by Google, a significant portion of online shoppers, approximately forty-two percent, expressed a sense of disconnect with the models showcasing the clothing, feeling that they were not adequately represented. Furthermore, a staggering fifty-nine percent reported dissatisfaction with their online clothing purchases, as the items appeared different in reality compared to how they appeared online.

In an effort to address these concerns and enhance the virtual shopping experience, Google has introduced a groundbreaking generative artificial intelligence (AI) tool called “Try On.” The announcement of this tool was made on Wednesday, and it has already been made available to select shoppers in the United States.

The “Try On” tool empowers shoppers to select models that resonate with them on a personal level, offering a greater sense of representation and relatability. This can be achieved by simply utilizing the “Try On” badge displayed alongside Google Search results.

Google has emphasized that the implementation of this innovative tool aims to deliver a more personalized and meaningful shopping experience for users. In addition to model selection, the tool provides advanced features, such as refined options and filters, enabling shoppers to curate their search results and create a truly relatable shopping journey.

With the introduction of the “Try On” tool, Google seeks to bridge the gap between online shoppers and the fashion items they desire, ensuring a more accurate representation and aligning the virtual experience with the reality of the products. This development is poised to alleviate the frustrations associated with online clothing shopping and empower consumers to make informed and satisfying purchase decisions from the comfort of their own homes.

Diversifying online retail


Lilian Rincon, senior director of product in Google’s shopping sector, says that Google selected diverse models ranging from sizes XXS to 4XL representing different skin tones (using the Monk Skin Tone Scale as a guide), body shapes, ethnicities and hair types.

“Our new generative AI model can take just one clothing image and accurately reflect how it would drape, fold, cling, stretch, and form wrinkles and shadows on a diverse set of real models in various poses,” she said.

As of yesterday, US shoppers can virtually try on products using the AI models from a broad collection of fashion labels, including Anthropologie, Everlane, H&M, and LOFT.

The tool is powered by Google’s Shopping Graph, a comprehensive data set of products and sellers.

The technology can scale to more brands and items over time, the product director says.  

Currently, the AI feature exclusively allows shoppers to preview women’s tops within product listings from an assortment of brands.

However, Google is set to launch a variety of options later this year, including men’s tops. 

AI dominance

Additionally, the new refine options allow users to pick colors, styles, and patterns, made possible by machine learning and the company’s new visual matching algorithm.
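Google hasn’t said how its visual matching algorithm works. As generic intuition only, systems like this typically embed product images as vectors and rank catalog items by similarity; the sketch below uses random stand-ins for a real image encoder’s embeddings:

```python
import numpy as np

# Hypothetical catalog: each product ID maps to an image-embedding vector.
rng = np.random.default_rng(1)
catalog = {f"top_{i}": rng.normal(size=128) for i in range(1000)}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similar_items(query_id: str, k: int = 5) -> list[str]:
    """Rank every other product by cosine similarity to the query item."""
    q = catalog[query_id]
    ranked = sorted(catalog, key=lambda pid: cosine(q, catalog[pid]), reverse=True)
    return [pid for pid in ranked if pid != query_id][:k]

print(similar_items("top_0"))
```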

“Unlike shopping in a store, you’re not limited to one retailer. You’ll see options from stores across the web,” Rincon said.

According to CNN, Google’s move to incorporate AI comes in response to the wave of new AI-powered tools such as ChatGPT.

“At the Google I/O developer conference last month, the company spent more than 90 minutes teasing a long list of AI announcements, including expanding access to its existing chatbot Bard and bringing new AI capabilities to Google Search,” reports CNN.

With AI growing rapidly, other companies such as Shopify, Instacart, eBay, and even Amazon have entered the sphere. CNN says that Amazon is experimenting with AI to summarize customer feedback about products on its site.

As per this amusing summary of the Google I/O keynote from The Verge, AI did feature rather prominently at the event.

Google’s DeepMind: Optimized Algorithms Not Trained on Human Code

Google’s DeepMind AI group has released a reinforcement learning tool that can develop extremely optimized algorithms. It does this without first being trained on human code examples because it is set up to treat programming as a game.

This is according to a report by Ars Technica published on Thursday.

DeepMind’s systems had already taught themselves how to play games, conquering titles as varied as chess, Go, and StarCraft. The software was effective at learning by itself, discovering options that maximized a score through approaches to the games that humans hadn’t thought of.

Removing the need for human models

Today, large language models write effective code because they have been trained on human-written examples. However, this training means they are unlikely to produce something humans haven’t done previously.

That’s why, to squeeze more performance out of well-understood algorithms, it’s best not to start from human code. The question that surfaces is: how do you get an AI to identify a truly new and unique approach?

Programmers at DeepMind decided to replicate the approach they used with chess and Go, transforming code optimization into a game. They engineered algorithms that treated the latency of the code as a score and tried to minimize that score, resulting in software with the ability to write tight, highly efficient code.
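The core reframing is simple: the reward is negative runtime, and the moves are changes to the code. Below is a deliberately tiny caricature of that loop in Python, timing candidate implementations and keeping the fastest; AlphaDev itself searches over sequences of x86 assembly instructions with a learned policy, which this sketch does not attempt:

```python
import random
import timeit

def sort_builtin(xs):
    return sorted(xs)

def sort_bubble(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

candidates = [sort_builtin, sort_bubble]
data = [random.random() for _ in range(500)]

def score(fn):
    # "Latency as a game score": faster code earns a higher (less negative) reward.
    return -timeit.timeit(lambda: fn(data), number=50)

best = max(candidates, key=score)
print(best.__name__)  # sort_builtin wins on measured latency
```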

They did this through a complex AI system called AlphaDev that consists of several distinct components. Its representation function tracks the overall performance of the code as it’s developed, including the general structure of the algorithm and the use of x86 registers and memory.

Benefits of DeepMind’s New System

The main advantage of this new system is that its training doesn’t have to involve any code examples, as it generates its own code examples and then proceeds to evaluate them. Through this system, it collects information about combinations of instructions that are effective in sorting, reported Ars Technica.

In January 2023, Google Research and DeepMind launched Med-PaLM, a large language model aligned to the medical domain.

The software was meant to generate safe and helpful answers in the medical field. It combines HealthSearchQA, a new free-response dataset of medical questions commonly searched online, with six existing open-question answering datasets covering professional medical exams, research, and consumer queries.

Meanwhile, last month, Demis Hassabis, the CEO of DeepMind, said artificial general intelligence (AGI), a machine intelligence that can comprehend the world as humans do, might be developed “within a decade.”