Neuralangelo: NVIDIA’s New AI Model Turns 2D Videos into 3D Structures

Leading artificial intelligence (AI) company and chip manufacturer Nvidia has introduced Neuralangelo, the latest addition to its lineup of AI tools.

Meet Neuralangelo, an advanced AI model designed to revolutionize the transformation of 2D video clips into highly detailed 3D structures. Powered by neural networks and cutting-edge algorithms for 3D reconstruction, Neuralangelo has the ability to generate virtual replicas of real-world objects with astonishing realism.

The name Neuralangelo pays homage to the renowned Italian sculptor and painter, Michelangelo, whose artistic brilliance during the Renaissance era produced iconic works such as the sculpture of David and the breathtaking paintings adorning the Sistine Chapel ceiling.

In a remarkable demonstration, Neuralangelo showcases its capabilities by recreating a diverse range of objects, from the timeless beauty of Michelangelo’s David to the ordinary yet familiar sight of a flatbed truck.

Through the utilization of Neuralangelo, the boundaries between the 2D and 3D realms are seamlessly bridged. This breakthrough AI model opens up endless possibilities for industries such as architecture, entertainment, and virtual reality, enabling the creation of immersive experiences and accurate virtual representations of physical objects.

Nvidia’s Neuralangelo marks a significant milestone in the field of AI-driven video transformation, propelling us into an era where the conversion of flat footage into captivating 3D structures is now within reach. With its remarkable capabilities, Neuralangelo is poised to reshape the way we perceive and interact with visual content, ushering in a new era of virtual creativity and innovation.

The AI model emerged from a study conducted in collaboration between the NVIDIA research team and Johns Hopkins University in Maryland, U.S.

Neuralangelo is one of nearly 30 projects by NVIDIA Research to be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), taking place June 18-22 in Vancouver. The papers span topics including pose estimation, 3D reconstruction, and video generation, said the company in a blog.

Multiple Images Observed From Different Viewpoints

The AI-powered model observes the depth, shape, and size of the characters or objects in a 2D video from multiple angles. Neuralangelo first creates a coarse initial 3D representation of the scene, then optimizes the render to bring out intricate details and textures.
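
The coarse-to-fine schedule described above (fit a rough representation first, then refine it at higher resolution against the source observations) can be illustrated with a deliberately simplified optimization loop. This is a toy NumPy sketch of the schedule only, not NVIDIA's actual neural surface representation; the 1D "surface" and the gradient-descent fitting are invented for illustration.

```python
import numpy as np

def coarse_to_fine_fit(target, levels=3, steps=200, lr=0.5):
    """Toy coarse-to-fine reconstruction: fit a 1D 'surface' to
    observations, starting at low resolution and refining upward.
    Illustrates the optimization schedule only, not Neuralangelo's
    actual neural surface method."""
    n = target.size
    est = np.zeros(n // 2 ** (levels - 1))
    for level in range(levels):
        wanted = n // 2 ** (levels - 1 - level)
        if est.size != wanted:
            # upsample the previous level's estimate to the next resolution
            est = np.repeat(est, 2)[:wanted]
        # downsample the observations to match the current resolution
        factor = n // est.size
        obs = target[: est.size * factor].reshape(est.size, factor).mean(axis=1)
        for _ in range(steps):
            grad = est - obs          # gradient of 0.5 * ||est - obs||^2
            est = est - lr * grad
    return est

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
surface = np.sin(x)                   # ground-truth "scene"
noisy = surface + 0.05 * rng.standard_normal(64)
recon = coarse_to_fine_fit(noisy)
err = float(np.abs(recon - surface).mean())
```

Each level fits against a blurrier version of the observations, so the early passes capture overall shape cheaply and the final pass only has to recover fine detail.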

Creative professionals can then use the 3D outcome in design applications, editing them further for use in art, video game development, and robotics, said the company in a blog. It also equips users with the capability of creating digital twins of the real world using ubiquitous mobile devices.

Many are wondering what this means for the gaming industry, where Nvidia’s graphics cards lead the market. The company recently announced the Nvidia RTX 4060 Ti, following the RTX 4070.

“The 3D reconstruction capabilities Neuralangelo offers will be a huge benefit to creators, helping them recreate the real world in the digital world,” said Ming-Yu Liu, senior director of research and co-author of the paper, in the blog. 

“This tool will eventually enable developers to import detailed objects — whether small statues or massive buildings — into virtual environments for video games or industrial digital twins,” he added.

One Twitter user described it as ‘photogrammetry on steroids.’ The neural surface reconstruction methods used in Neuralangelo have shown potential in overcoming ambiguous observations such as large areas of homogeneous color, repetitive texture patterns, and strong color variations. Photogrammetry is a technique that uses photos as the primary medium for the measurement of physical objects.

The concept behind Neuralangelo is not new. NVIDIA research last year created NVIDIA 3D MoMa, which allows architects, designers, and game developers to import objects into a graphics engine for digital manipulation.

Very Powerful AI May Be Banned, Warns UK Govt Adviser

As the development of artificial intelligence (AI) continues at an accelerated pace, experts and prominent figures in the field, often referred to as AI ‘godfathers,’ are raising concerns about the potential risks it poses to privacy, human rights, and overall safety.

In response to these concerns, the United Kingdom government, in collaboration with the European Union and the United States, is taking steps to regulate this transformative technology. A member of the UK government’s non-statutory AI Council, Marc Warner, CEO of AI company Faculty, has expressed the view that highly powerful artificial general intelligence (AGI) systems may ultimately need to be banned.

Warner, a respected member of the AI Council, an independent committee providing guidance to the UK Government on the AI ecosystem, discussed the concept of AGI in an interview with the BBC. AGI refers to systems that surpass human intelligence, possessing the ability to reason, plan, and learn from experience at a level equal to or potentially exceeding human capabilities.

Expressing valid concerns, Warner emphasized that AGI poses much greater worries and requires an entirely different set of rules. He highlighted the significance of human intelligence in our position of prominence on this planet and questioned the safety implications of creating objects that are as intelligent as or even surpass human intelligence, without a solid scientific justification.

On the other hand, Warner suggested that narrow AI systems, which are designed for specific tasks like text translation or machine learning-based identification of bacteria, could be regulated similarly to existing technologies.

However, AGI systems have the potential to match or surpass human intelligence across various tasks. Warner called for prudent decision-making regarding AGI, emphasizing the need for strong limitations on the amount of computing power that can be applied arbitrarily to such systems.

Warner is also a signatory to the Center for AI Safety statement, which advocates for action to mitigate the risks of potential human extinction due to AI. Notable figures who have also signed the statement include Geoffrey Hinton, an AI pioneer, Yoshua Bengio, a renowned AI scientist and professor, Sam Altman, CEO of OpenAI, Bill Gates, co-founder of Microsoft, Dario Amodei, CEO of Anthropic, Demis Hassabis, CEO of Google DeepMind, and others.

While the EU Artificial Intelligence Act, one of the earliest attempts to regulate AI, is still undergoing legislative processes, European Commissioner Margrethe Vestager stated that it would take two to three years for the various pieces of legislation to come into effect. She stressed the urgency of addressing the rapid technological acceleration in AI.

Europe is leading the way in articulating regulations to govern AI in a safe manner, ahead of the United States. CEOs of prominent AI companies have called for the establishment of rules to manage this powerful technology. It is now more crucial than ever for countries like the U.S. to step up their efforts if they wish to be actively involved in international AI governance discussions.

AI ‘Godfather’ Professor Yoshua Bengio Expresses Concerns over Technology’s Rapid Evolution

Following Geoffrey Hinton’s recent warning about the potential dangers of artificial intelligence (AI), another prominent figure in the field, Professor Yoshua Bengio, has expressed his concerns regarding the pace at which technology is advancing.

In an interview with the BBC, Bengio, known as one of the ‘godfathers’ of machine learning, revealed that he feels “lost” in regard to his life’s work. As a Canadian computer scientist and professor at the University of Montreal, Bengio is renowned for his groundbreaking contributions to AI, particularly in the area of deep learning.

While acknowledging the emotional challenges faced by those deeply involved in AI, Bengio emphasized the importance of persevering and engaging in discussions to foster collective thinking.

Notably, Geoffrey Hinton, another prominent AI figure, recently resigned from his position at Google to freely address the risks associated with AI.

Bengio’s remarks come on the heels of a statement released by the Center for AI Safety (CAIS), a research nonprofit, cautioning about the potential existential threats posed by artificial intelligence. The statement, signed by Bengio, Hinton, Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and other notable AI scientists and figures, emphasizes the need to prioritize mitigating the risks of AI-induced extinction alongside other global-scale concerns such as pandemics and nuclear warfare.

According to CAIS, as AI continues to advance, it could potentially contribute to catastrophic risks. The organization’s blog highlights various ways in which AI systems could pose significant dangers, including the potential use of AI as a political weapon. OpenAI’s CEO echoed this sentiment during a recent appearance before a Senate committee, expressing concerns about AI’s interference with election integrity.

The collective voices of influential figures like Bengio and Hinton, along with organizations like CAIS, underscore the imperative of addressing the risks associated with AI as it progresses further into the future.

Calls for regulation

Professor Bengio further told the BBC that all AI companies must be registered. “Governments need to track what they’re doing, they need to be able to audit them, and that’s just the minimum thing we do for any other sector like building airplanes or cars or pharmaceuticals.”

“We also need the people close to these systems to have a kind of certification… we need ethical training here. Computer scientists don’t usually get that, by the way,” he added.

Countries across the globe are grappling with how to regulate AI, as its full potential remains unknown. U.S. President Joe Biden and Vice President Kamala Harris met earlier this month with tech industry leaders such as Altman, Anthropic CEO Dario Amodei, Microsoft CEO Satya Nadella, and Google CEO Sundar Pichai to address the risks associated with AI and the responsibility their respective companies need to take to ensure safety and privacy.

Baidu Invests $140 Million to Foster Generative AI Startups

Baidu Inc., China’s internet search leader, is investing 1 billion yuan ($140 million) to incubate Chinese startups that focus on generative AI. As reported by Bloomberg, the move makes Baidu a part of the global investment wave centering on ChatGPT-like services.

Baidu into Generative AI Startups

According to a statement released by Baidu, investments to foster projects utilizing its Ernie AI model could reach up to 10 million yuan each. Venture investors, including IDG Capital, will evaluate pitches from founders, who will then create demo products before a decision on seed funding.

This incubation program merges one of China’s top startup investment firms with its leading player in AI development. It represents another leap forward in China’s aggressive AI investment strategy, catalyzed by the media buzz surrounding the debut of OpenAI’s ChatGPT.

Baidu countered with its own version of ChatGPT, dubbed Ernie Bot, sparking competition among rivals such as Alibaba Group Holding Ltd., Tencent Holdings Ltd., and SenseTime Group Inc. to unveil competing platforms. Baidu, in partnership with venture capitalists, will evaluate pitches from aspiring founders who plan to use Ernie to expand their services.

The Role of Ernie AI in China’s Burgeoning AI Industry

But it’s too soon to say if Ernie Bot could reach the “killer app” status, much like Tencent’s omnipresent WeChat. China’s leading internet regulator has stated that generative AI tools must pass security reviews before deployment. Furthermore, US sanctions have deprived Chinese tech firms of the best chips to train their AI models, potentially widening the divide between services like Ernie Bot and their Western counterparts.

Baidu CEO Robin Li unveiled Ernie Bot in March via a pre-recorded demo, leaving investors and analysts somewhat underwhelmed. While the Chinese chatbot received positive reviews among selected testers, Baidu’s shares still show a roughly 20% drop from their February high. Despite these challenges, the company remains optimistic and committed to redefining the AI landscape with its latest project.

Gaming Revolutionized: The Power of AI in Game Development

In recent years, the gaming industry has witnessed a remarkable transformation, largely driven by the emergence of artificial intelligence (AI) technology. The influence of AI in game development, however, has been present since its early days. Initially focused on creating unbeatable game-playing programs, AI has now expanded its reach to revolutionize various aspects of game design and development.

Game developers today harness the power of AI to enhance multiple facets of their creations. One prominent area where AI excels is in improving photorealistic effects, leading to visually stunning and immersive game environments. By analyzing vast amounts of data and employing sophisticated algorithms, AI enables developers to create virtual worlds that rival reality itself.

Another groundbreaking application of AI in game development lies in the generation of game content. AI algorithms can autonomously produce diverse and engaging game levels, characters, and narratives. This capability not only saves time and resources for developers but also ensures that players are constantly presented with fresh and exciting experiences.

AI also plays a crucial role in balancing in-game complexities. By monitoring player behavior and analyzing gameplay patterns, AI algorithms can dynamically adjust difficulty levels, ensuring an optimal and challenging experience for players of all skill levels. This adaptability keeps gamers engaged and prevents them from becoming bored or frustrated.

Moreover, AI provides the much-needed “intelligence” to Non-Playing Characters (NPCs). These AI-controlled entities can now exhibit advanced decision-making capabilities, adapting their behavior to the player’s actions and creating more realistic and immersive gameplay interactions. Whether it’s realistic enemy AI in a first-person shooter or intelligent companions in a role-playing game, AI-driven NPCs contribute to a more dynamic and engaging gaming experience.

Looking ahead, the future of gaming intelligence holds even more exciting possibilities. AI can be employed to analyze player behavior and preferences on a deeper level, allowing game developers to personalize gameplay experiences and deliver targeted content. This level of customization ensures that each player feels uniquely immersed in the game world, fostering a strong sense of connection and enjoyment.

AI-Powered Game Engines

Game engines are software frameworks that game developers use to create and develop video games. They provide tools, libraries, and frameworks that allow developers to build games faster and more efficiently across multiple platforms, such as PC, consoles, and mobile devices.

AI is revolutionizing game engines by allowing for the creation of more immersive and dynamic environments. Rather than manually coding a game engine’s various components, such as the physics engine and graphics rendering engine, developers can use neural networks to train the engine to create these components automatically. This can save time and resources while creating more realistic and complex game worlds.

Additionally, AI-powered game engines use machine learning algorithms to simulate complex behaviors and interactions and generate game content, such as levels, missions, and characters, using Procedural Content Generation (PCG) algorithms.

Other use cases of AI in game engines include optimizing game performance and balancing game difficulty, making games more engaging and challenging for players.

One example of an AI-powered game engine is GameGAN, which uses a combination of neural networks, including LSTM, Neural Turing Machine, and GANs, to generate game environments. GameGAN can learn the difference between static and dynamic elements of a game, such as walls and moving characters, and create game environments that are both visually and physically realistic. 

AI-driven Game Design

Game design involves creating the rules, mechanics, and systems defining the gameplay experience. AI can play a crucial role in game design by providing designers with tools to create personalized and dynamic experiences for players.

One way AI can be used in game design is through procedural generation. Procedural generation uses algorithms to automatically create content, such as levels, maps, and items. This allows for a virtually infinite amount of content to be made, providing players with a unique experience each time they play the game. AI-powered procedural generation can also consider player preferences and behavior, adjusting the generated content to provide a more personalized experience.
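Seeded procedural generation like this can be sketched in a few lines. The grid format, wall density, and function name below are invented for illustration; a production generator would add connectivity checks, item placement, and preference-driven tuning on top of the same idea.

```python
import random

def generate_level(seed, width=10, height=6, wall_chance=0.25):
    """Minimal procedural level generator: a seeded grid of walls ('#')
    and floor ('.'), with a guaranteed clear border.  The same seed
    always reproduces the same level; a new seed gives a new one."""
    rng = random.Random(seed)
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            edge = y in (0, height - 1) or x in (0, width - 1)
            row.append('.' if edge else
                       '#' if rng.random() < wall_chance else '.')
        grid.append(''.join(row))
    return grid

level_a = generate_level(seed=42)
level_b = generate_level(seed=42)   # identical to level_a
level_c = generate_level(seed=7)    # a different layout
```

Seeding is what makes "virtually infinite content" shareable: a level is fully described by its seed, so players can trade seeds instead of level files.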

Another way AI can be used in game design is through player modeling. By collecting data on how players interact with the game, designers can create player models that predict player behavior and preferences. This can inform the design of game mechanics, levels, and challenges to better fit the player’s needs.
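At its simplest, a player model is just learned statistics over interaction data. The event names and frequency-based "favourite" below are illustrative stand-ins; real player models use richer features and trained predictors.

```python
from collections import Counter

def build_player_model(events):
    """Toy player model: tally interaction events and estimate the
    player's preferences as simple frequencies.  A designer could use
    the result to bias content generation toward what a player enjoys."""
    counts = Counter(events)
    total = sum(counts.values())
    prefs = {kind: n / total for kind, n in counts.items()}
    favourite = max(prefs, key=prefs.get)
    return prefs, favourite

# One session's worth of (hypothetical) logged player actions.
session = ["combat", "explore", "combat", "craft", "combat", "explore"]
prefs, favourite = build_player_model(session)
```

Even this crude model supports the design loop described above: if "combat" dominates a player's log, the game can surface more combat encounters for that player.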

AI can also be used to create more intelligent and responsive Non-Player Characters (NPCs) in games.

Using natural language processing (NLP) and machine learning techniques, NPCs can interact with players in more realistic and engaging ways, adapting to their behavior and providing a more immersive experience.

Furthermore, AI can analyze player behavior and provide game designers with feedback, helping them identify areas of the game that may need improvement or adjustment. This can also inform the design of future games, as designers can use the insights gained from player behavior to inform the design of new mechanics and systems.

AI and Game Characters

Artificial Intelligence is critical in developing game characters – the interactive entities players engage with during gameplay.

In the past, game characters were often pre-programmed to perform specific actions in response to player inputs. However, with the advent of AI, game characters can now exhibit more complex behaviors and respond to player inputs in more dynamic ways.

One of the most significant advances in AI-driven game character development is using machine learning algorithms to train characters to learn from player behavior.

Machine learning algorithms allow game developers to create characters that adapt to player actions and learn from their mistakes. This leads to more immersive gameplay experiences and can help make a greater sense of connection between players and game characters.

Another way that AI is transforming game characters is through the use of natural language processing (NLP) and speech recognition. These technologies allow game characters to understand and respond to player voice commands. For example, in Mass Effect 3, players can use voice commands to direct their team members during combat.

AI is also used to create more realistic and engaging game character animations. By analyzing motion capture data, AI algorithms can produce more fluid and natural character movements, enhancing the overall visual experience for players.

AI and Game Environments

AI can also generate specific game environments, such as landscapes, terrain, buildings, and other structures. By training deep neural networks on large datasets of real-world images, game developers can create highly realistic and diverse game environments that are visually appealing and engaging for players.

One method for generating game environments is using generative adversarial networks (GANs). GANs consist of two neural networks – a generator and a discriminator – that work together to create new images that resemble real-world images.

The generator network creates new images, while the discriminator network evaluates the realism of these images and provides feedback to the generator to improve its output.
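The generate–score–adjust cycle between the two networks can be caricatured with scalars instead of images. In this toy sketch the "generator" is a single shift parameter and the "discriminator" is replaced by a distance-to-real-data score; it illustrates only the adversarial feedback loop, not a real deep GAN.

```python
import numpy as np

rng = np.random.default_rng(1)
real_data = rng.normal(loc=3.0, scale=0.5, size=1000)  # stand-in for "real images"

gen_shift = 0.0
for step in range(500):
    # "Generator": produce fake samples from the current parameters.
    fake = rng.normal(loc=gen_shift, scale=0.5, size=64)
    # "Discriminator feedback": fakes score better the closer they land
    # to where the real data lives.
    score_grad = np.mean(real_data) - np.mean(fake)
    # "Generator update": move the output distribution toward higher scores.
    gen_shift += 0.05 * score_grad

# After training, the fake distribution sits on top of the real one.
final_gap = abs(gen_shift - float(np.mean(real_data)))
```

A real GAN replaces both the score and the shift with neural networks trained jointly, but the loop structure (generate, get scored, nudge the generator) is the same.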

Another method for generating game environments is through the use of procedural generation. Procedural generation involves creating game environments through mathematical algorithms and computer programs. This approach can create highly complex and diverse game environments that are unique each time the game is played.

AI can also dynamically adjust game environments based on player actions and preferences. For example, in a racing game, the AI could adjust the difficulty of the race track based on the player’s performance, or in a strategy game, it could change the difficulty based on the player’s skill level.
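The core of such dynamic difficulty adjustment is a small feedback rule. The step size and clamp range below are arbitrary illustrative choices, not values from any shipped game.

```python
def adjust_difficulty(difficulty, won, step=0.1, lo=0.1, hi=1.0):
    """Toy dynamic difficulty adjustment: nudge difficulty up after a
    win and down after a loss, clamped to a sane range so the game
    never becomes trivial or impossible."""
    difficulty += step if won else -step
    return max(lo, min(hi, difficulty))

# A mostly winning player sees the game get steadily harder.
d = 0.5
for result in [True, True, True, False, True]:
    d = adjust_difficulty(d, result)
```

Production systems feed richer signals into the same loop (completion times, deaths per checkpoint, accuracy) rather than a bare win/loss bit.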

AI and Game Narrative

AI can also be used to enhance the narrative in video games. Traditionally, human writers have developed game narratives, but AI can assist with generating narrative content or improving the overall storytelling experience.

Natural language processing (NLP) techniques can be used to analyze player feedback and adjust the narrative in response. For example, in a game with branching dialogue options, AI could analyze player dialogue choices and change the story accordingly.
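Underneath any branching narrative is a graph of scenes whose edges are player choices. The scenes and choice names below are invented for illustration; an AI writer would generate or re-weight branches in a structure like this based on player feedback.

```python
def next_scene(story, scene, choice):
    """Toy branching narrative: the story is a plain dict graph, and the
    player's dialogue choice selects the next node."""
    return story[scene]["choices"][choice]

story = {
    "gate":    {"text": "A guard blocks the gate.",
                "choices": {"bribe": "inside", "threaten": "dungeon"}},
    "inside":  {"text": "You slip into the castle.", "choices": {}},
    "dungeon": {"text": "You wake in a cell.", "choices": {}},
}

outcome = next_scene(story, "gate", "bribe")
```

Because the story is data rather than code, generated content (new nodes, new edges) can be added at runtime without touching the traversal logic.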

Another use of AI in game narratives is to generate new content. This can include generating unique character backstories, creating new dialogue options, or even generating new storylines. 

AI and Game Testing

Game testing, another critical aspect of game development, can be enhanced by AI. Traditional game testing involves hiring testers to play the game and identify bugs, glitches, and other issues. However, this process can be time-consuming and expensive, and human testers may not always catch all the problems.

The other alternative is the use of scripted bots. Scripted bots are fast and scalable, but they lack the complexity and adaptability of human testers, making them unsuitable for testing large and intricate games.

AI-powered testing can address these limitations by automating many aspects of game testing, reducing the need for human testers, and speeding up the process. 

Reinforcement Learning (RL) is a branch of machine learning that enables an AI agent to learn from experience and make decisions that maximize rewards in a given environment.

In a game-testing context, the AI can take random actions and receive rewards or punishments based on the outcomes, such as earning points. Over time, it can develop an action policy that yields the best results and effectively test the game’s mechanics.
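The trial-and-error loop above maps directly onto tabular Q-learning. This sketch uses a made-up five-cell "game" where reaching the last cell scores a point; it demonstrates the generic RL method, not any specific studio's testing tooling.

```python
import random

random.seed(0)
n_states = 5
actions = [1, -1]                        # move right / move left
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for episode in range(300):
    s = 0
    for _ in range(20):
        if random.random() < eps:
            a = random.choice(actions)                       # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])    # exploit
        s2 = min(n_states - 1, max(0, s + a))
        reward = 1.0 if s2 == n_states - 1 else 0.0          # "earn a point"
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2
        if reward:
            break                                            # goal reached

# The learned greedy policy in every non-goal state should be "move right".
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
```

An RL-based tester scales the same idea up: the "actions" become controller inputs, and states where the agent gets stuck or finds unintended rewards are exactly the spots worth a bug report.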

Machine learning algorithms can also identify bugs and glitches in the game. The algorithm can analyze the game’s code and data to identify patterns that indicate a problem, such as unexpected crashes or abnormal behavior. This can help developers catch issues earlier in the development process and reduce the time and cost of fixing them.
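Pattern-based bug detection can be as simple as flagging statistical outliers in telemetry. The frame-time data and z-score rule below are an illustrative stand-in for the learned detectors the paragraph describes.

```python
import statistics

def flag_anomalies(frame_times, z_threshold=2.5):
    """Toy telemetry check: flag frames whose duration is a statistical
    outlier, a crude stand-in for ML-based detection of hitches and
    abnormal behavior in game telemetry."""
    mean = statistics.fmean(frame_times)
    stdev = statistics.pstdev(frame_times)
    if stdev == 0:
        return []
    return [i for i, t in enumerate(frame_times)
            if abs(t - mean) / stdev > z_threshold]

# Steady ~16 ms frames with one 400 ms hitch -- the kind of spike that
# should surface early in development rather than in players' hands.
times = [16.1, 16.0, 16.2, 15.9, 16.1, 400.0, 16.0, 16.2, 16.1, 16.0]
suspects = flag_anomalies(times)
```

Real pipelines replace the z-score with trained models over many signals (crash logs, memory use, physics state), but the workflow is the same: collect telemetry, score it, and route outliers to developers.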

The Future of AI in Game Development

The gaming industry has always been at the forefront of technological advancements, and artificial intelligence (AI) is no exception.

In recent years, AI has played an increasingly important role in game development, from improving game mechanics to enhancing game narratives and creating more immersive gaming experiences.

As AI technology continues to evolve, the possibilities for its application in game development are expanding rapidly.

Here are some areas in which AI is expected to shape the future of the gaming industry:

Automated Game Design:

One of the most exciting prospects of AI in game development is automated game design.

By training AI models on large datasets of existing games, it could be possible to create new games automatically without human intervention. AI algorithms could generate game mechanics, levels, characters, and more, potentially significantly reducing development time and costs.

However, this technology is still in its infancy, and whether AI-generated games can replicate the creativity and originality of human-designed games remains to be seen.

Data Annotation:

Data annotation is the process of labeling data to train AI models. In the gaming industry, data annotation can improve the accuracy of AI algorithms for tasks such as object recognition, natural language processing, and player behavior analysis. This technology can help game developers better understand their players and improve gaming experiences.

Audio or Video Recognition based Games:

Another exciting prospect for AI in game development is audio or video-recognition-based games. These games use AI algorithms to analyze audio or video input from players, allowing them to interact with the game using their voice, body movements, or facial expressions.

This technology can potentially create entirely new game experiences, such as games that respond to players’ emotions or games that are accessible to players with disabilities.

Conclusion

AI has already significantly impacted the gaming industry and is poised to revolutionize game development in the coming years.

With the help of AI, game developers can create more engaging and immersive games while reducing development time and costs. AI-powered game engines, game design, characters, environments, and narratives are already enhancing the gaming experience for players.

Decision trees, reinforcement learning, and GANs are transforming how games are developed. The future of AI in gaming is promising with the advent of automated game design, data annotation, and audio or video recognition-based games.

As AI technology advances, we can expect game development to become even more intelligent, intuitive, and personalized to each player’s preferences and abilities.

Nvidia and MediaTek Collaborate to Unveil Next-Generation AI-Powered In-Car Systems

As the demand for advanced in-car entertainment and communication systems continues to grow, Nvidia and MediaTek have announced a strategic partnership to introduce next-generation solutions that leverage artificial intelligence (AI) to enhance the driving experience.

Under the partnership, MediaTek will develop SoCs (system-on-a-chip) that integrate Nvidia’s GPU (graphics processing unit) chipset, which offers advanced AI and graphics capabilities. The collaboration aims to create a comprehensive, one-stop-shop for the automotive industry, delivering intelligent, always-connected vehicles that meet evolving consumer needs.

According to Rick Tsai, CEO of MediaTek, this partnership will enable the development of “the next generation of intelligent, always-connected vehicles.” With this collaboration, Nvidia and MediaTek are poised to transform the in-car infotainment experience, enabling drivers to stream video, play games, and interact with their vehicles using cutting-edge AI technology.

Partnership to widen the market for both players

Nvidia has a range of GPU solutions for computers and servers, as well as SoCs for automotive and robotic applications. By having MediaTek integrate its GPU chipset into automotive SoCs, Nvidia hopes to cover broader markets and gain better access to the $12 billion market for infotainment SoCs.

Nvidia will be able to offer its “DRIVE OS, DRIVE IX, CUDA, and TensorRT software technologies on these new automotive SoCs to enable connected infotainment and in-cabin convenience and safety functions.” This will make in-vehicle infotainment options available to automakers on the Nvidia DRIVE platform.

Automakers have been employing NVIDIA’s technology for infotainment systems, graphical user interfaces, and touchscreens for well over a decade to help modernize their car cockpits. According to the statement, the capabilities of MediaTek’s Dimensity Auto platform are to see a marked improvement using NVIDIA’s core competencies in AI, cloud, graphics technology, and the software ecosystem in combination with NVIDIA’s advanced driver assistance systems. 

MediaTek’s Dimensity Auto platform enables smart multi-displays, high-dynamic-range cameras, and audio processing, allowing drivers and passengers to engage easily with cockpit and infotainment systems. According to Reuters, Nvidia has until now centered its efforts on high-end premium automakers, while MediaTek, with its roots in the Android smartphone chip industry, sells its Dimensity Auto technology to mass-market, cost-efficient automakers. The collaboration is therefore set to benefit all car classes, from luxury to entry-level, offering new user experiences, improved safety, and new connected services.

According to the announcement, “by integrating the NVIDIA GPU chiplet into its automotive offering, MediaTek aims to enhance the performance capabilities of its Dimensity Auto platform to deliver the most advanced in-cabin experience available in the market.” The platform also includes Auto Connect, a function that uses high-speed telematics and Wi-Fi networking to keep drivers wirelessly connected. The partnership plans to release its first offering by the end of 2025.

NVIDIA to Build Israel’s Most Potent AI Supercomputer

NVIDIA, the world’s top-ranking chip firm, is pouring hundreds of millions into building Israel’s most powerful artificial intelligence (AI) supercomputer, Israel-1. The move comes in response to a surge in demand for AI applications, as per the company’s announcement on Monday.

Set to be partly operational by year-end 2023, Israel-1 is expected to deliver up to eight exaflops of AI computing, placing it among the fastest AI supercomputers worldwide. To put that into perspective, a single exaflop can perform a quintillion – that’s 18 zeros – calculations every second.

Super-AI

According to Gilad Shainer, Senior Vice President at NVIDIA, the upcoming supercomputer in Israel will be a game-changer for the thriving AI scene in the country. Shainer highlighted the extensive collaboration between NVIDIA and 800 startups nationwide, involving tens of thousands of software engineers.

Shainer emphasized the significance of large Graphics Processing Units (GPUs) in the development of AI and generative AI applications, stating, “AI is the most important technology in our lifetime.” He further explained the growing importance of generative AI, noting the need for robust training on large datasets.

The introduction of Israel-1 will provide Israeli companies with unprecedented access to a supercomputer resource. This high-performance system is expected to accelerate training processes, enabling the creation of frameworks and solutions capable of tackling more complex challenges.

An example of the potential of powerful computing resources is evident in projects like ChatGPT by OpenAI, which utilized thousands of NVIDIA GPUs. The conversational capabilities of ChatGPT showcase the possibilities when leveraging robust computing resources.

The development of the Israel-1 system was undertaken by the former Mellanox team, an Israeli chip design firm that NVIDIA acquired in 2019 for nearly $7 billion, outbidding Intel Corp.

While the primary focus of the new supercomputer is NVIDIA’s Israeli partners, the company remains open to expanding its reach. Shainer revealed, “We may use this system to work with partners outside of Israel down the road.”

In other news, NVIDIA recently announced a partnership with the University of Bristol in Britain. Their collaboration aims to build a new supercomputer powered by an innovative NVIDIA chip, positioning NVIDIA as a competitor to chip giants Intel and Advanced Micro Devices Inc.

Google Introduces Advanced Generative Technology for Users in the US

Google has announced that its world-leading search bar will now feature generative AI for users in the United States. Called the “Google Search Generative Experience,” or SGE for short, the rollout began on the morning of May 25. Google Search users will also get access to Google’s “Search Labs,” though you’ll need to sign up for a waitlist to be among the first users of the new services.

Recently unveiled at Google I/O 2023, Google SGE is an innovative integration of conversational AI into the traditional search experience. If you’ve ever used Bing AI, you’ll find that Google’s product is familiar, but it does have its own unique properties too.

According to a preview on Engadget, Google’s AI-powered search still utilizes the same input bar as before, rather than a separate chatbot field like Bing. However, the generative AI results now appear in a shaded section beneath the search bar (but above sponsored results) and above the standard web results. A button on the top right of the AI results allows users to expand the snapshot, adding cards that display sourced articles. Also, users can ask follow-up questions by simply tapping a button below the results.

Google describes the snapshot as “key information to consider, with links to dig deeper.” Imagine a slice of Bard that has been integrated, relatively seamlessly, into the Google search you’re already familiar with.

“This experiment is our first step in adding generative AI to Search, and we’ll be making many updates and improvements over time. As we continue to reimagine how we can make it even more natural and intuitive to find what you’re looking for, we’re excited for you to test out these new capabilities and share feedback along the way,” says Google.

As previously mentioned, Google is also expanding access to its “Search Labs,” which now include “Code Tips” and “Add to Sheets” functions. Again, like its generative AI, these are currently only available in the US.

“Code Tips” uses large language models to provide guidance for more efficient and effective coding. This feature allows aspiring developers to ask questions about programming languages like C, C++, Go, Java, JavaScript, Kotlin, Python, and TypeScript, as well as tools such as Docker, Git, and shells. “Add to Sheets,” on the other hand, allows users to insert search results directly into Google’s spreadsheet application. Simply tapping the Sheets icon next to a search result displays a list of recent documents, from which users can select the one they want to attach the result to.

Pretty neat, we must say.

To join the “Search Labs” waitlist, simply click the Labs icon (a beaker symbol) in a new tab while using Chrome on a desktop, or within the Google search app on Android or iOS. It’s important to note that the company has not disclosed the timeline and scope of wider availability, so those outside the US will just have to wait a little longer.

Pictory.ai

Pictory.ai is an innovative platform that leverages artificial intelligence (AI) to enhance and transform images. Through its advanced AI algorithms, it offers a range of powerful editing and enhancement capabilities: users can automatically remove backgrounds from images, upscale low-resolution images while preserving quality, apply stylistic filters, and perform other editing tasks with ease. A user-friendly interface makes the platform accessible to both professional designers and casual users seeking to improve their visuals. With these AI-driven tools, Pictory.ai aims to empower users to create visually appealing and engaging content.

Pictory.ai offers a range of capabilities and features:

  1. Background Removal: Pictory utilizes AI technology to automatically remove backgrounds from images, allowing you to isolate the main subject or replace the background with a different one.
  2. Image Upscaling: The platform can upscale low-resolution images while preserving details and improving overall image quality. This is particularly useful when you need to enlarge images without sacrificing clarity.
  3. Style Transfer: Pictory.ai enables you to apply artistic filters and styles to your images. You can transform your photos into various artistic styles, such as impressionist paintings, sketches, or other unique visual effects.
  4. Image Enhancement: Enhance your images using Pictory.ai’s AI algorithms. You can improve brightness, contrast, saturation, and other aspects to achieve optimal image quality and visual appeal.
  5. Noise Reduction: The platform can reduce noise and artifacts in images, resulting in cleaner and crisper visuals.
  6. Object Removal: With Pictory.ai, you can remove unwanted objects or elements from images seamlessly, making it easier to achieve a cleaner and more professional look.
  7. Image Retouching: The platform provides tools for retouching and refining images, allowing you to enhance skin tones, remove blemishes, and perform other touch-up tasks.

URL: https://pictory.ai/

Revolutionary AI Ballet Set to Grace Theatres Worldwide

The Leipzig Opera House is set to host the world’s first AI ballet, titled Fusion, from May 29 to July 8, 2023. This groundbreaking production utilizes generative AI to influence every aspect of the performance, including choreography, music composition, and costume design. Fusion is helmed by acclaimed speech artist and musician Harry Yeff, known professionally as Reeps100.

The exciting news was reported by Wallpaper on Friday. Fusion showcases an impressive musical score composed and directed by Harry Yeff, in collaboration with associate composers Gadi Sassoon and Teddy Riley. The choreography, infused with AI elements, is masterfully crafted by Mario Schroder, while the stage design and costumes are envisioned by Paul Zoller.

Drawing inspiration from Plato’s concept of the divided self, Fusion explores the journey towards harmony. The AI technology employed in the ballet takes Yeff’s own voice as a catalyst for the performance, accompanied by generative synthetic voices. Reports indicate that Yeff dedicated over a thousand hours to training with voice and AI technology, achieving a machine-like mastery of his vocal expression.

An augmented voice

“My voice is now augmented as a result of hundreds of hours of training with A.I. – I am able to reach speeds and depths I didn’t believe were possible. I am a living breathing augmentation, soon there will be many more of me,” Yeff told Wallpaper.

“As a neuro-divergent director and coming from a working-class background, this feels like a moment to be trusted to fuse so many worlds into one work. It’s a sign that there is more openness for new kinds of expertise to be celebrated, regardless of where you come from.”

The Leipzig Ballet will be performing to Yeff’s voice. The performance artists have a long history of tackling radical new ideas, making them the ideal troupe to support this new initiative. The question now is how audiences will react to this novel form of art.

Will they embrace an AI-based performance, or will they see it as an invasion of what was once a purely human phenomenon? With AI slowly creeping in everywhere, it was arguably only a matter of time before the technology reached the performing arts.