OpenAI, the creator of ChatGPT, has announced ten $100,000 grants for anyone with good ideas on how artificial intelligence (AI) can be governed to help address bias and other concerns. The grants will be awarded to recipients who present the most compelling answers to some of the most pressing questions around AI, such as whether it should be allowed to express opinions on public figures.
This comes in light of arguments around whether AI systems such as ChatGPT may have a built-in prejudice because of the data they are trained on (not to mention the opinions of human programmers behind the scenes). Reports have revealed instances of discriminatory or biased results generated by AI technology. There is a growing apprehension that AI, when working alongside search engines like Google and Bing, might generate misleading information with great conviction.
OpenAI, backed by a significant $10 billion investment from Microsoft, has long been a proponent of responsible AI regulation. However, the organization recently expressed apprehension regarding proposed rules in the European Union (EU) and even hinted at the possibility of withdrawing support. OpenAI’s CEO, Sam Altman, stated that the current draft of the EU AI Act appears to be overly restrictive, although there are indications that it might undergo revisions. “They are still discussing it,” Altman mentioned in an interview with Reuters.
Reuters noted that the $100,000 grants offered by OpenAI might not go far in the current market, where most AI engineers earn salaries exceeding $100,000 and exceptional talent can command compensation surpassing $300,000. Nevertheless, OpenAI emphasized the importance of ensuring that AI systems benefit humanity as a whole and are designed to be inclusive. “To take an initial step in this direction,” OpenAI stated in a blog post, “we are launching this grant program.”
Altman, a prominent advocate for AI regulation, has been updating ChatGPT and image-generator DALL-E. However, he recently expressed concerns about potential risks associated with AI technology during his appearance before a U.S. Senate subcommittee. Altman emphasized that if something were to go wrong, the consequences could be significant.
Recently, Microsoft joined the call for comprehensive regulation of AI. However, the company remains committed to integrating the technology into its products and competing with other major players like OpenAI, Google, and various startups to deliver AI solutions to consumers and businesses.
AI’s potential to enhance efficiency and reduce labor costs has piqued the interest of almost every sector. However, there are also concerns that AI might spread misinformation or factual inaccuracies, which industry experts call “hallucinations.”
There have been instances where AI has also been involved in creating popular hoaxes. For example, a recent fake image of an explosion near the Pentagon caused a momentary impact on the stock market. Although there have been numerous requests for stricter regulations, Congress has been unsuccessful in enacting new laws that significantly limit the power of Big Tech.
In the race to extend their AI-powered app ecosystems, Microsoft recently made an announcement at Build that highlighted their plans to expand Copilot applications and adopt a standardized approach for plugins. This standard, introduced by their partner OpenAI for ChatGPT, enables developers to create plugins that seamlessly interact with APIs from various software and services. The expansion encompasses ChatGPT, Bing Chat, Dynamics 365 Copilot, Microsoft 365 Copilot, and the new Windows Copilot.
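OpenAI's published plugin standard centers on a small manifest file, ai-plugin.json, which points the model at an OpenAPI description of a service so it can call that service's API. As a rough illustration, here is what such a manifest might look like, expressed as a Python dict; the field names follow OpenAI's public plugin documentation, but the service itself and all its URLs are hypothetical.

```python
# A minimal ChatGPT plugin manifest (ai-plugin.json), shown as a Python
# dict for illustration. The "Acme To-Do" service and its URLs are
# made up; the field names follow OpenAI's published manifest format.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Acme To-Do",            # name shown to end users
    "name_for_model": "acme_todo",             # how the model refers to the plugin
    "description_for_human": "Manage your to-do list.",
    "description_for_model": "Plugin for adding, listing, and deleting to-do items.",
    "auth": {"type": "none"},                  # no authentication for this sketch
    "api": {
        "type": "openapi",
        # The OpenAPI spec tells the model which endpoints exist
        # and what parameters they accept.
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(plugin_manifest["name_for_model"])  # acme_todo
```

Because the manifest delegates the API description to a standard OpenAPI spec, the same plugin can in principle be surfaced across ChatGPT, Bing Chat, and the various Copilots without per-product integration work.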
However, experts caution that this endeavor poses significant challenges for Microsoft. Google, during its I/O event, revealed plans to make Bard compatible with additional apps and services, both from Google itself (such as Docs, Drive, Gmail, and Maps) and from third-party partners like Adobe Firefly.
“When it comes to APIs, as opposed to hardware-dependent applications or apps, establishing a dominant position becomes much more difficult,” noted Whit Andrews, Vice President and Distinguished Analyst at Gartner Research, in an interview with VentureBeat. He further explained that if other companies develop APIs that are equally capable, the switching cost for users becomes less significant.
The competition between Microsoft and Google in the AI app ecosystem is poised to intensify as they vie for developer adoption and user loyalty. The ability to seamlessly integrate with a wide range of apps and services will play a crucial role in shaping the success of these platforms. As the battle unfolds, it will be intriguing to witness how developers and users embrace these AI-powered ecosystems and the unique advantages they bring to the table.
Microsoft is enjoying a head start
Andrews emphasized that Microsoft certainly has a head start and three key advantages.
First, Microsoft has an “extraordinary” first-mover advantage as OpenAI’s partner. “So the more they can establish familiarity and appeal, the more they can generate a defensible value,” he said.
In addition, without a moat, brand strength will also be an important driver, he explained. “With the intense value of Microsoft’s brand, that’s why things have to move so fast for Microsoft to have the best possible outcome.”
Finally, Microsoft, with its tremendous developer community, has the opportunity to grab market share and familiarity. “Microsoft attracts developers better than anybody else,” said Andrews. “So if you’re Microsoft, you lean on that this week [at Build]. Can you present your developers, your faithful, with the opportunities to participate in this extraordinary AI world that they will find attractive and familiar?” Microsoft needs to be synonymous in the developer’s mind with access to easy artificial intelligence-powered functionality, he added: “That means growth needs to be explosive — every developer in the Microsoft family needs to say to themselves, ‘I’ll start by looking there.’”
‘An impressive, all-out assault’ has limits
According to Matt Turck, a VC at FirstMark, Microsoft’s AI app ecosystem and plugin framework is an “impressive, all-out assault by Microsoft to be top of mind for developers around the world who want to build with AI.”
Microsoft is certainly pushing hard to lead the space and reap ROI on its multi-billion dollar investment in OpenAI, Turck told VentureBeat. But he said it “remains to be seen whether the world is ready to live in a Microsoft-dominated AI world” and suspects there will be “stiff resistance,” particularly on the enterprise side — where many want to leverage open source and multi-agents for customization, and will also want to protect their data from going out to a cloud provider (in this case, Azure).
Andrews agreed that it’s too early to know whether Microsoft will prevail — or if the AI app and plugin ecosystem will even flourish. “For lots of consumer users, ChatGPT is pretty amazing for what it does right now, and there might be problems with plugins that conflict with each other, things might begin to get a little challenging. The value of a plugin demands education, explanation and usage.”
Harder to implement effective controls and safeguards
Other experts point out that the growth of the app ecosystem will make it even harder to develop effective controls and safeguards in an era when AI regulation is becoming a top priority.
“The main concern in my mind is a distribution of accountability between the third parties and the entity that provides the source LLM,” Suresh Venkatasubramanian, professor of computer science at Brown University and former White House policy advisor, told VentureBeat in a message.
While he said there is also an opportunity if the companies providing the LLM service are willing and able to establish more controls, “I don’t see that happening any time soon. To me, this continues to reinforce the importance of guardrails ‘at the point of impact’ where people are affected.”
Artificial Intelligence (AI) has seamlessly integrated into almost every aspect of our lives, and search engines are no exception.
Just recently, the emergence of ChatGPT, a sophisticated language model developed by OpenAI, demonstrated the potential of AI in generating human-like text and engaging in meaningful conversations. This breakthrough laid the foundation for AI integration in search engines.
Leading the search engine industry, Google has introduced an AI-powered update to its core search product, aiming to strengthen its competitiveness against Microsoft’s Bing search, which utilizes OpenAI technology.
Google already offers its own AI chatbot, Bard, but Google AI Search leverages AI to enhance the precision and relevance of search results, making it the preferred choice for informational queries and for locating specific information online.
On the other hand, Bard, with its chatbot persona and conversational capabilities, is specifically designed for creative collaboration. It enables users to engage in human-like conversations and harness AI-generated assistance for tasks like writing code.
As Google and its competitors continue to innovate in AI-powered search, it becomes essential to explore the advantages and limitations of Google AI Search and Bard, as well as their similarities, differences, and use cases. By examining their unique features and capabilities, we can gain valuable insights into how these AI tools can enhance our access to information in today’s digital era.
The Evolution of Search Engines
Before we dive into the day’s discussion, let’s take a short trip down memory lane and review the history of search engines over the past decades, as they evolved alongside the rapid advancement of technology.
From the early days of basic keyword-based searches to the emergence of AI-powered search engines, search engines have revolutionized how we navigate the vast expanse of the internet.
The birth of search engines can be traced back to 1990, when the first search tool, “Archie,” appeared. Developed by Alan Emtage, it made it possible to search through a site’s file directories. Afterward came Gopher, which made it possible to browse online databases and text files, and Veronica, a service from the University of Nevada System Computing Services that provided searches across Gopher menus.
After the creation of the World Wide Web, there were advances such as the WWW Virtual Library, created by Tim Berners-Lee, and the initial iteration of Yahoo. But these weren’t search engines as we know them. They were human-assembled catalogs of helpful web links. They used simple indexing techniques to organize and retrieve information. These primitive search tools were limited in their capabilities and often struggled to deliver relevant results.
As the internet expanded exponentially, search engines underwent a significant transformation with the introduction of web crawlers: automated programs, known as robots or spiders, that request webpages and report their findings back to a database.
In 1994, WebCrawler, an early crawler-based search engine, employed crawling technology to index the full text of web pages, allowing users to search for specific keywords across various websites. This marked a significant milestone in the evolution of search engines. By 1995, Lycos had become the first search engine to index more than a million pages.
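The kind of keyword index these early crawlers built can be sketched as an inverted index: a mapping from each word to the set of pages that contain it. The toy pages below are hardcoded stand-ins for content a real crawler would fetch over HTTP.

```python
from collections import defaultdict

# Toy stand-ins for pages a crawler might have fetched; a real crawler
# requests pages over HTTP and follows links to discover new URLs.
pages = {
    "example.com/a": "cheap flights to paris",
    "example.com/b": "paris travel guide and hotels",
    "example.com/c": "guide to cheap hotels",
}

# Build an inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    """Return pages containing every word of the query (AND semantics)."""
    words = query.split()
    if not words:
        return []
    return sorted(set.intersection(*(index[w] for w in words)))

print(search("cheap hotels"))  # ['example.com/c']
```

Keyword-era engines like early Yahoo! and AltaVista worked on essentially this principle, with far larger indexes and Boolean operators layered on top.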
In the subsequent years, we witnessed the dominance of search engines like Yahoo! and AltaVista, which adopted a keyword-based search approach. Users were required to input specific keywords or phrases to retrieve relevant results. AltaVista also gave users the first successful Boolean search options.
In 1998, Google burst onto the scene, introducing a groundbreaking algorithm called PageRank. This innovation revolutionized search engines by ranking web pages based on relevance and popularity. Google’s efficient indexing methods and emphasis on delivering high-quality search results propelled it to become the dominant search engine worldwide.
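The core idea of PageRank (a page is important if important pages link to it) can be sketched with a few lines of power iteration. The three-page link graph and the damping factor of 0.85 below are standard textbook illustration choices, not Google's production system.

```python
# A simplified PageRank via power iteration on a tiny hand-made link
# graph. This illustrates the idea behind Google's 1998 algorithm,
# not its production implementation.
links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85
ranks = {page: 1.0 / len(links) for page in links}  # start uniform

for _ in range(50):  # iterate until the ranks stabilize
    new_ranks = {}
    for page in links:
        # Sum the rank contributed by every page linking to this one,
        # divided by how many outgoing links each of those pages has.
        incoming = sum(
            ranks[src] / len(outs)
            for src, outs in links.items() if page in outs
        )
        new_ranks[page] = (1 - damping) / len(links) + damping * incoming
    ranks = new_ranks

# C receives links from both A and B, so it ends up ranked highest.
print(max(ranks, key=ranks.get))  # C
```

The ranks form a probability distribution (they sum to 1), modeling a "random surfer" who mostly follows links but occasionally jumps to a random page.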
Over the years, search engines have evolved significantly, incorporating increasingly complex algorithms to provide more accurate and relevant search results.
More recently, AI-powered search engines have taken search to a new level. These search engines utilize machine learning algorithms to analyze vast amounts of data, learning from user behavior and feedback to deliver personalized and highly relevant results.
Google is now transforming its traditional search functionalities with generative AI. During the 2023 Google I/O, Google Search AI was announced.
With this new tool, Google Search aims to provide users with more conversational and contextually relevant answers instead of a traditional list of links.
The generative AI in Google Search, known as Search Generative Experience (SGE), is an experiment that adds AI-powered snapshots of key information to the search results. The AI snapshots will give users a text response to search queries and other relevant information.
Google also introduced a Conversational mode, allowing users to ask follow-up questions and engage in a more interactive dialogue with the search engine. This feature, reminiscent of Microsoft’s Bing Chat AI, enables users to refine their search queries and obtain more specific and tailored information.
The SGE experiment is being rolled out, and interested users in the United States can sign up for the Google Labs SGE experiment waitlist to participate and explore the new AI-powered search experience. As this experiment progresses, users can anticipate a more dynamic, personalized, and engaging search journey powered by AI technology.
Google’s demonstration at I/O offers a glimpse into the approaching future of search, where AI-driven search engines are poised to become the go-to resource for users.
Benefits of Using Google Search AI
As Google integrates AI technology to enhance the user search experience, here are a few reasons why you might want to give Google Search AI a try.
Improved Understanding and Insights: Google Search AI will help users understand topics faster. Rather than manually sifting through vast information on the Internet, Google Search AI will provide relevant and concise summaries, allowing users to understand key points and gain new insights quickly.
Streamlined Shopping Experience: Google Search AI aims to facilitate shopping decisions. When searching for a product, users receive a snapshot highlighting essential factors to consider and presenting relevant products. This will include comprehensive product descriptions with reviews, ratings, prices, and images.
Enhanced Decision Making: Google Search AI will help in making decisions. Whether choosing a destination for a family vacation or a course to study at the university, Google Search AI will provide users with sufficient information to make a good decision more quickly and efficiently.
Conversational Search: With Google Search AI, you can ask follow-up questions and interact with the search engine much as you would with a chatbot.
Stay Updated: Google’s AI-powered search has access to vast amounts of information, ensuring you have the latest and most accurate information.
Limitations of Search AI
With Google’s incorporation of generative AI and LLMs into its Search AI, there are certain limitations to be aware of. These limitations primarily stem from the experimental nature of the Search Generative Experience (SGE) and the inherent characteristics of the underlying models.
Here are some notable limitations and challenges:
Misinterpretation: In some cases, SGE may identify relevant information to support its snapshot but could misinterpret language, resulting in a slight change in the meaning of the output.
Hallucination: Google’s SGE occasionally provides inaccurate or ‘made up’ information or misrepresents facts and insights.
Bias: Google’s SGE aims to corroborate responses with high-quality resources. This could introduce biases in the highly ranked results, similar to those observed in traditional search results.
Opinionated content implying persona: Although Google’s SGE is designed to maintain a neutral and objective tone, sometimes its output may reflect opinions on the web that could give an impression of the model displaying a persona.
Duplication or contradiction with existing Search features: Since SGE is integrated alongside other search results, its output may appear contradictory to additional information on the search results page.
Google acknowledges these limitations and continues to refine and improve the models through ongoing updates and fine-tuning.
As SGE evolves, these limitations should be addressed to enhance the overall search experience and mitigate any potential drawbacks of generative AI in Search.
What is Bard?
Bard is an AI chatbot developed by Google, similar to the popular ChatGPT.
With Bard, users can tap into its creative capabilities and utilize its vast knowledge to generate code snippets, solve math problems, and more. It’s like having a helpful companion or a virtual problem solver.
Like the Search AI, Bard is powered by Google’s advanced large language model (LLM), PaLM 2. It does not return ranked web results the way traditional Google Search does, yet it shines in its ability to generate human-like text in response to prompts.
You can engage in conversations with Bard, and it will respond with informative and comprehensive answers, drawing from its extensive training on a massive amount of text data.
Bard AI defines itself as “I am Bard, a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.”
Benefits of Using Bard
Here are a few benefits of using Bard AI:
Updated Information: Unlike other AI chatbots, Bard leverages the power of Google Search to provide up-to-date information from the web. This feature proves invaluable for research purposes and gathering the most recent data on various topics.
Human-Like Conversations: Bard excels in understanding natural language prompts, whether entered through text or spoken commands. It engages in conversations that closely resemble human interactions, making it a user-friendly chatbot. Its conversational capabilities rival those of ChatGPT and Bing Chat.
Specific Generative Capabilities: Bard is capable of creative writing. It can generate content in diverse styles and formats, from news articles and blog posts to letters and email messages.
Voice Command Support: Google Bard accepts voice commands, making it more convenient and accessible. Users can utilize the microphone option to input prompts to the chatbot. This feature differentiates it from OpenAI’s ChatGPT, which lacks native voice command support.
Limitations of Bard
Despite the benefits of Bard AI, there are a few limitations:
Creativity Limitations: While Bard possesses creative writing abilities, it is not always consistently creative. Some of its responses may lack originality or may not directly address the questions asked. It can produce ambiguous or irrelevant answers or unoriginal content.
No Citations: Bard can generate factual information and provide relevant answers, which can be helpful for research purposes. However, Bard does not cite its sources or provide links to validate the data it generates, so users are tasked with verifying the information provided by Bard.
Inconsistencies: Bard may provide inconsistent and incorrect responses, confusing users. Users should be aware of these inconsistencies and carefully evaluate the reliability of any information received from Bard.
Hallucinations: Bard has been criticized, including by Google employees, for providing not only false answers to queries but also dangerous advice. Bard has also been found to be less useful than Bing or ChatGPT in some tests.
Comparison of Google AI Search and Bard
Google AI Search and Bard are two distinct AI-powered tools developed by Google. While they share some similarities, they also have notable differences in functionalities and use cases.
Key Similarities between Google AI Search and Bard:
AI-Powered: Both Google AI Search and Bard utilize artificial intelligence to enhance the search experience and generate relevant information.
Conversational Abilities: Both tools have conversational capabilities, allowing users to ask questions and receive detailed responses.
Integration with Google’s Advanced LLM: Both leverage Google’s large language model (LLM) technology to generate human-like text responses.
Information Retrieval: Both Google AI Search and Bard aim to retrieve relevant information and provide answers to user queries. They can provide factual information, summaries, and insights on various topics.
Real-Time Internet Access: Unlike other AI chatbots, Google AI Search and Bard AI can access real-time information. Hence, they can provide access to up-to-date information from the web.
Key Differences between Google AI Search and Bard:
Search Functionality: Google AI Search primarily provides contextually relevant answers to search queries by adding AI-powered snapshots to the search results. In contrast, Bard is a chatbot that generates human-like text based on user prompts rather than returning a ranked list of web results.
Use Cases: Google AI Search is designed for traditional search purposes, such as finding information, making purchase decisions, and general research. Conversely, Bard is more suitable for creative collaboration, generating code snippets, creative writing, and engaging in human-like conversations.
Use Cases for Each Tool:
Google AI Search: It is better suited for finding information, making purchase decisions, conducting research, and obtaining contextually relevant answers to various queries.
Bard: It is well-suited for creative collaboration, generating code snippets, solving math problems, creative writing, obtaining informative summaries of factual topics, and more.
Which Tool Is Better for Different Search Queries:
Google AI Search is better for traditional information-seeking queries, such as factual information, product searches, or general research. For creative purposes, code generation, creative writing, or engaging in human-like conversations, Bard is more suitable.
The Future of AI-Powered Search
The future of AI-powered search engines holds tremendous potential and is poised to transform how we discover and interact with information online. As AI-powered search tools advance, search engines will become more intelligent, personalized, and engaging, providing search experiences that are highly tailored to individual needs.
One key aspect of the future of AI-powered search engines is the integration of natural language processing (NLP) capabilities. NLP allows search engines to understand and interpret user queries in a more nuanced, contextual way. Instead of relying solely on keywords, search engines will be able to comprehend the intent behind user queries, leading to more accurate and relevant search results.
Another important trend is the use of generative AI models in search engines. These models can generate human-like responses and even create original content. This opens up possibilities for more interactive and conversational search experiences, where users can engage in dynamic dialogues with AI-powered assistants to refine their search queries and receive tailored recommendations.
Personalization will also play a significant role in the future of AI-powered search engines. Search engines can deliver highly personalized search results as they gather more data about users’ preferences, behaviors, and past interactions. This will enable search engines to anticipate users’ needs, provide recommendations based on their interests, and offer a more customized browsing experience.
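Personalized ranking of this kind can be sketched as re-scoring results against a profile of topics a user has engaged with. Everything below (the profile, the results, and the boost weight) is hypothetical, intended only to show the shape of the idea.

```python
# A minimal sketch of personalized re-ranking: results matching topics
# the user has engaged with before get a score boost. The profile,
# results, and weighting scheme are all hypothetical.
user_profile = {"travel": 0.8, "photography": 0.5}  # topic -> interest weight

results = [
    {"title": "Camera lens buying guide", "relevance": 0.70, "topics": ["photography"]},
    {"title": "Stock market update",      "relevance": 0.75, "topics": ["finance"]},
    {"title": "Hiking trails in Peru",    "relevance": 0.65, "topics": ["travel"]},
]

def personalized_score(result, profile, boost=0.3):
    # Base relevance plus a boost proportional to the user's interest
    # in the result's topics.
    interest = sum(profile.get(t, 0.0) for t in result["topics"])
    return result["relevance"] + boost * interest

ranked = sorted(results, key=lambda r: personalized_score(r, user_profile),
                reverse=True)
print([r["title"] for r in ranked])
```

Note how the hiking result overtakes the nominally more relevant finance result for this particular user, which is exactly the tension with privacy the next paragraphs discuss: the boost only works because the engine knows the user's interests.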
However, along with the opportunities, there are also challenges that AI-powered search engines will face in the future. Privacy concerns will become even more critical as search engines collect and process vast amounts of user data. Striking a balance between delivering personalized experiences and respecting user privacy will be crucial.
Additionally, ensuring transparency and accountability in AI algorithms will be a crucial challenge. As AI models become more complex and sophisticated, it becomes increasingly important to understand how they make decisions and to address potential biases or ethical concerns that may arise.
Robots are rapidly advancing in intelligence and capabilities with each passing day. However, there remains a significant challenge that they continue to face – comprehending the materials they interact with.
Consider a scenario where a robot in a car garage needs to handle various items made from the same material. It would greatly benefit from being able to discern which items share similar compositions, enabling it to apply the appropriate amount of force.
Material selection, the ability to identify objects based on their material, has proven to be a difficult task for machines. Factors such as object shape and lighting conditions further complicate the matter, since the same material can appear different under different conditions.
Nevertheless, researchers from MIT and Adobe Research have made remarkable progress by leveraging the power of artificial intelligence (AI). They have developed a groundbreaking technique that empowers AI to identify all pixels in an image that represent a specific material.
What sets this method apart is its exceptional accuracy, even when faced with objects of varying shapes, sizes, and lighting conditions that may deceive human perception. None of these factors trick the machine-learning model.
This significant breakthrough brings us closer to a future where robots possess a profound understanding of the materials they interact with. Consequently, their capabilities and precision are substantially enhanced, paving the way for more efficient and effective robotic applications.
The development of the model
To train their model, the researchers used “synthetic” data—computer-generated images created by modifying 3D scenes to produce varied images with different material appearances. Surprisingly, the developed system seamlessly works with natural indoor and outdoor settings, even those it has never encountered before.
Moreover, this technique isn’t limited to images but can also be applied to videos.
For example, once a user identifies a pixel representing a specific material in the first frame, the model can subsequently identify objects made from the same material throughout the rest of the video.
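The selection step described here can be sketched as a similarity search over per-pixel features. In the MIT/Adobe work, a neural network learns a feature vector for every pixel such that pixels of the same material end up with similar features; below, the feature vectors are tiny hand-made stand-ins and the cosine threshold is an arbitrary illustrative choice.

```python
import math

# Sketch of material selection by per-pixel feature similarity. The
# features here are hand-made stand-ins (the real system learns them
# with a neural network trained on synthetic scenes). Pixels 0 and 2
# represent the same material under different shading.
features = [
    [0.90, 0.10, 0.00],  # pixel 0: "wood"
    [0.00, 0.20, 0.95],  # pixel 1: "metal"
    [0.85, 0.15, 0.10],  # pixel 2: "wood", differently lit
    [0.10, 0.90, 0.20],  # pixel 3: "fabric"
]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_material(features, query_idx, threshold=0.9):
    """Return indices of pixels whose features are cosine-similar to the
    user-selected query pixel, above `threshold`."""
    query = features[query_idx]
    return [i for i, f in enumerate(features) if cosine(f, query) >= threshold]

print(select_material(features, query_idx=0))  # [0, 2]
```

For video, the same query features can simply be compared against every pixel of each subsequent frame, which is why one click in the first frame suffices to track a material through the clip.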
The potential applications of this research are vast and exciting.
Beyond its benefits in scene understanding for robotics, this technique could enhance image editing tools, allowing for more precise manipulation of materials.
Additionally, it could be integrated into computational systems that deduce material parameters from images, opening up new possibilities in fields such as material science and design.
One intriguing application is material-based web recommendation systems. For example, imagine a shopper searching for clothing from a particular fabric.
By leveraging this technique, online platforms could provide tailored recommendations based on the desired material properties.
Prafull Sharma, an electrical engineering and computer science graduate student at MIT and the lead author of the research paper, emphasizes the importance of knowing the material with which robots interact.
Even though two objects may appear similar, they can possess different material properties.
Sharma explains that their method enables robots and AI systems to select all other pixels in an image made from the same material, empowering them to make informed decisions.
As AI advances, we can look forward to a future where robots are intelligent and perceptive of the materials they encounter.
The collaboration between MIT and Adobe Research has brought us closer to this exciting reality.
Artificial Intelligence (AI) has come a long way in the past few decades, and we now live in a world filled with exciting AI technologies.
Specialized algorithms and machine learning techniques have been developed to process vast amounts of data and make predictions based on patterns. We have also seen the emergence of AI chatbots like ChatGPT, smart home devices, virtual assistants like Siri and Google Assistant, and many more.
But here’s the thing: AI is still pretty limited. It can only do what we humans tell it to do, and it’s not great at handling tasks it hasn’t seen before.
That’s where artificial general intelligence (AGI) would come in – it would be like the superstar of the AI world. AGI would be the type of AI that can learn and reason like we humans do, which means it would have the potential to solve complex problems and make decisions independently.
Imagine having an AI system that can actually figure things out independently – now that’s something worth getting excited about!
While AGI is still in its early stages of development, it has the potential to revolutionize numerous industries, including healthcare, finance, transportation, and manufacturing. With AGI, medical research could lead to more accurate diagnoses and personalized treatments, while transportation systems could become more efficient and safer, leading to fewer accidents and less road congestion.
In this article, we will delve into the fascinating world of artificial general intelligence. We’ll explore its history, its potential impact on society, and the ethical and regulatory implications of its use.
What is artificial general intelligence (AGI)?
Artificial general intelligence (AGI) is a theoretical form of AI that can learn and reason like humans, potentially solving complex problems and making decisions independently. However, definitions of AGI vary, as there is no agreed-upon definition of human intelligence; experts from different fields define it from different perspectives.
However, those working on the development of AGI aim to replicate the cognitive abilities of human beings, including perception, understanding, learning, and reasoning, across a broad range of domains.
Unlike other forms of AI, such as narrow or weak AI, which are designed to perform specific tasks, AGI would perform a wide range of tasks, adapt to new situations, and learn from experience. AGI would reason about the world, form abstract concepts, and generalize knowledge from one domain to another. In essence, AGI would behave like humans without being explicitly programmed to do so.
Here are some of the key characteristics that would make AGI so powerful:
Access to vast amounts of background knowledge: AGI would tap into an extensive pool of knowledge on virtually any topic. This information would allow it to learn, adapt quickly, and make informed decisions.
Common sense: AGI would understand the nuances of everyday situations and respond accordingly. It could reason through scenarios that have not been explicitly programmed and use common sense to guide its actions.
Transfer learning: AGI could transfer knowledge and skills learned from one task to other related tasks.
Abstract thinking: AGI could comprehend and work with abstract ideas, enabling it to tackle complex problems and develop innovative solutions.
Understanding of cause and effect: By grasping and using cause-and-effect relationships, AGI could predict the consequences of its decisions and take proactive measures to achieve its goals.
The main difference between AGI and other forms of AI is the scope of their capabilities. While other forms of AI are designed to perform specific tasks, AGI would have the potential to perform a wide range of tasks, similar to humans.
The history of AGI
The quest for AGI has been a long and winding road. It began in the mid-1950s when the early pioneers of AI were brimming with optimism about the prospect of machines being able to think like humans. They believed that AGI was possible and would exist within a few decades. However, they soon discovered that the project was much more complicated than they had anticipated.
During the early years of AGI research, there was a palpable sense of excitement. Herbert A. Simon, one of the leading AI researchers of the time, famously predicted in 1965 that machines would be capable of doing any work a human can do within twenty years. This bold claim inspired the creation of the infamous character HAL 9000 in Arthur C. Clarke’s sci-fi classic 2001: A Space Odyssey (and the movie version by Stanley Kubrick).
However, the optimism of the early years was short-lived. By the early 1970s, it had become evident that researchers had underestimated the complexity of the AGI project.
Funding agencies became increasingly skeptical of AGI, and researchers were pressured to produce useful “applied AI” systems. As a result, AI researchers shifted their focus to specific sub-problems where AI could produce verifiable results and commercial applications.
Although AGI research was put on the back burner for several decades, it resurfaced in the late 1990s when Mark Gubrud used the term “artificial general intelligence” to discuss the implications of fully automated military production and operations. Around 2002, Shane Legg and Ben Goertzel reintroduced and popularized the term.
Despite renewed interest in AGI, many AI researchers today claim that intelligence is too complex to be completely replicated in the short term. Consequently, most AI research focuses on narrow AI systems widely used in the technology industry. However, a few computer scientists remain actively engaged in AGI research, and they contribute to a series of AGI conferences.
The potential impact of AGI
Picture this: a world where machines can solve some of the most complex problems, from climate change to cancer. A world where we no longer have to worry about repetitive, menial tasks because intelligent machines take care of them and many higher-level tasks. This, and more, is the potential impact of AGI.
The benefits and opportunities of AGI are endless. With its ability to process large amounts of data and find patterns, AGI could help us solve problems that have long baffled us. For instance, it could help us develop new drugs and treatments for chronic diseases like cancer. It could also help us better understand the complexities of climate change and find new ways to mitigate its effects.
AGI could also improve human life in countless ways. Automating tedious and dangerous tasks could free up our time and resources to focus on more creative and fulfilling pursuits. It could also revolutionize industries such as transportation and logistics by making them more efficient and safer. In short, AGI can change our lives and work in ways we can’t imagine.
However, there are also risks and challenges associated with the development of AGI. One of the biggest concerns is the displacement of jobs, as machines take over tasks previously done by humans. This could lead to economic disruption and social unrest – or a world where the only jobs left are either very high-level roles or menial jobs requiring physical labor. There are also significant ethical concerns, such as the possibility of machine bias in decision-making and the potential for misuse of AGI by those with malicious intent.
Public figures, including Elon Musk, Steve Wozniak, and Stephen Hawking, have endorsed the view that AI poses an existential risk for humanity. Similarly, AI researchers like Stuart J. Russell, Roman Yampolskiy, and Alexey Turchin support the basic thesis of AI’s potential threat to humanity.
Sharon Zhou, the co-founder of a generative AI company, believes that AGI is advancing faster than we can process, and we must consider how we use this powerful technology.
There are also safety risks associated with AGI, particularly if it becomes more advanced than human intelligence. Such machines could be dangerous if they develop goals incompatible with human values – for example, an AGI tasked with combating global warming that decides the best solution is to eliminate the cause: humans.
Therefore, it’s essential to approach AGI development cautiously and establish proper regulations and safeguards to mitigate these risks.
The ethics of AGI
As AGI continues to make strides, it's becoming increasingly important to consider the ethical implications of this technology. One of the primary concerns is whether AGI can learn and understand human ethics.
One worry is that if AGI is left unchecked, machines may make decisions that conflict with human values, morals, and interests. To avoid such issues, researchers must train the system to prioritize human life, understand and explain moral behavior, and respect individual rights and privacy.
Another ethical concern with AGI is the potential for bias in decision-making. If the data sets used to train AGI systems are biased, the resulting decisions and actions may also be biased, leading to unfair treatment or discrimination – something we are already seeing with weak AI. It is therefore crucial to ensure that the data sets used to train AGI are diverse, representative, and free from bias.
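One modest but concrete way teams already guard against the dataset bias described above is to audit group representation before training. A minimal sketch – the records, field names, and threshold are all hypothetical:

```python
from collections import Counter

# Hypothetical audit: check how each demographic group is represented
# in a training set before using it. Records and threshold are made up.
records = [
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "B"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
shares = {g: n / total for g, n in counts.items()}

# Flag any group that falls below a chosen representation threshold.
underrepresented = [g for g, s in shares.items() if s < 0.4]
```

A real audit would look at many attributes at once and at outcomes, not just counts, but even this simple check surfaces skew before it is baked into a model.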
Furthermore, there is the issue of responsibility and accountability. Who will be held accountable if AGI makes a decision that harms humans or the environment? Establishing clear guidelines and regulations for developing and using AGI is crucial to ensure accountability and responsibility.
The issue of job displacement is another concern with AGI. As AI becomes more intelligent, it will take over tasks previously done by humans, leading to job displacement and economic disruption.
Regulation and governance will play a critical role in ensuring responsible AI. Governments and organizations must work together now to establish ethical guidelines and standards for the development and use of AGI. This includes creating mechanisms for accountability and transparency in machine decision-making, ensuring that AGI is developed ethically and without bias, and establishing safeguards to protect human safety, jobs, and well-being.
The future of AGI
The future of AGI development is a topic of much debate and speculation among experts in the field. While some believe that AGI is inevitable and will arrive sooner rather than later, others are skeptical about the possibility of ever achieving true AGI.
One potential outcome of AGI development is the creation of Artificial Super Intelligence (ASI), which refers to an AI system capable of surpassing human intelligence in all areas. Some experts believe that once AGI systems learn self-improvement, they can operate at a rate humans cannot control, leading to the eventual development of ASI.
However, there are concerns about the potential implications of ASI for society and the workforce. English physicist and author Stephen Hawking warned of the dangers of developing full artificial intelligence, stating that it could spell the end of the human race, as machines would eventually redesign themselves at an ever-increasing rate, leaving humans unable to compete.
Some experts, like inventor and futurist Ray Kurzweil, believe that computers will achieve human levels of intelligence soon (Kurzweil believes this will be by 2029) and that AI will then continue to improve exponentially, leading to breakthroughs that enable it to operate at levels beyond human comprehension and control.
Recent developments in generative AI have brought us closer to realizing the vision of AGI. User-friendly generative AI interfaces like ChatGPT have demonstrated an impressive ability to understand human text prompts and answer questions on a seemingly limitless range of topics, although this is still based on interpreting data produced by humans. Image generation systems like DALL-E have likewise upended the visual landscape, generating realistic images from nothing more than a scene description – again, based on work by humans.
Despite these developments, the limitations and dangers of today's AI systems are already well known to users. As a result, AGI development will likely continue to be a hotly debated topic, with significant implications for the future of work and society.
Conclusion
Artificial general intelligence (AGI) can potentially revolutionize the world as we know it. From advancements in medicine to space exploration and beyond, AGI could solve some of humanity’s most pressing problems.
However, the development and deployment of AGI must be approached with caution and responsibility. We must ensure that these systems are aligned with human values and interests and do not threaten our safety and well-being.
With continued research and collaboration among experts in various fields, we can strive towards a future where AGI benefits society while mitigating potential risks.
The future of AGI is an exciting and rapidly evolving field, and it is up to us to shape it in a way that serves humanity’s best interests.
Meta, a leading tech company, has developed new AI models that were trained using the Bible to recognize and generate speech in over 1,000 languages. The company aims to employ these algorithms in efforts to preserve languages that are at risk of disappearing.
Currently, there are approximately 7,000 languages spoken worldwide. To empower developers working with various languages, Meta is making its language models publicly available through GitHub, a popular code hosting service. This move encourages the creation of diverse and innovative speech applications.
The newly developed models were trained on two distinct datasets. The first dataset contains audio recordings of the New Testament Bible in 1,107 languages, while the second dataset comprises unlabeled New Testament audio recordings in 3,809 languages. By leveraging these comprehensive datasets, Meta’s research scientist, Michael Auli, explains that the models can be utilized to build speech systems with minimal data.
While languages like English possess extensive and reliable datasets, the same cannot be said for smaller languages spoken by limited populations, such as those spoken by only 1,000 individuals. Meta’s language models provide a solution to this data scarcity, enabling the development of speech applications for languages lacking adequate resources.
The researchers assert that their models can not only converse in over 1,000 languages but also recognize more than 4,000. Furthermore, when compared to rival models like OpenAI's Whisper, Meta's version exhibited a significantly lower error rate despite covering more than 11 times as many languages.
However, the scientists acknowledge that the models may occasionally mistranscribe specific words or phrases, and that their speech recognition models produced biased words slightly more often than other models – a marginal increase of 0.7%.
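The article does not name the metric, but speech recognition error rates like those compared above are conventionally reported as word error rate (WER): the word-level edit distance between a model's transcript and a reference transcript, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sit" for "sat") and one deletion ("the") out of
# six reference words gives a WER of 2/6.
wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")
```

Note that WER can exceed 1.0 when a transcript contains many spurious insertions, which is why comparisons across models use the same reference set.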
Chris Emezue, a researcher at Masakhane, an organization focused on natural-language processing for African languages, expressed concerns about the use of religious text, such as the Bible, as the basis for training these models. He believes that the Bible carries inherent biases and misrepresentations, which could impact the accuracy and neutrality of the models’ outputs.
This development poses an important question: Is Meta’s advancement in language models a step forward, or does its utilization of religious text for training introduce controversial elements that hinder its overall impact? The conversation around the ethical considerations and potential biases involved in training language models remains ongoing.
Organizations like the European Union (EU) are taking the lead in formulating new regulations for AI, which could potentially establish a global standard. However, the enforcement of these regulations is expected to be a time-consuming process that spans several years.
“In the absence of specific regulations, governments can only resort to the application of existing rules,” stated Massimiliano Cimnaghi, a European data governance expert at consultancy BIP, in a statement to Reuters.
As a result, regulators are turning to already-established laws, such as data protection regulations and safety measures, to tackle concerns related to personal data protection and public safety. The necessity for regulation became evident when national privacy watchdogs across Europe, including the Italian regulator Garante, took action against OpenAI’s ChatGPT, accusing the company of violating the EU’s General Data Protection Regulation (GDPR).
In response, OpenAI implemented age verification features and provided European users with the ability to block their data from being used to train the AI model.
However, this incident prompted additional data protection authorities in France and Spain to initiate investigations into OpenAI’s compliance with privacy laws.
Consequently, regulators are striving to apply existing rules that encompass various aspects, including copyright, data privacy, the data utilized to train AI models, and the content generated by these models.
In the European Union, proposals for the AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, exposing them to potential legal challenges. However, proving copyright infringement may not be straightforward, as Sergey Lagodinsky, a politician involved in drafting the EU proposals, explains.
“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not directly plagiarizing someone else’s material, it doesn’t matter what you trained yourself on.”
Regulators are now urged to “interpret and reinterpret their mandates,” says Suresh Venkatasubramanian, a former technology advisor to the White House. For instance, the U.S. Federal Trade Commission (FTC) has used its existing regulatory powers to investigate algorithms for discriminatory practices.
Similarly, French data regulator CNIL has started exploring how existing laws might apply to AI, considering provisions of the GDPR that protect individuals from automated decision-making.
As regulators adapt to the rapid pace of technological advances, some industry insiders call for increased engagement between regulators and corporate leaders.
Harry Borovick, general counsel at Luminance, a startup that utilizes AI to process legal documents, expresses concern over the limited dialogue between regulators and companies.
He believes that regulators should implement approaches that strike the right balance between consumer protection and business growth, as the future hinges on this cooperation.
While the development of regulations to govern generative AI is a complex task, regulators worldwide are taking steps to ensure the responsible use of this transformative technology.
Sci-fi author Tim Boucher recently shared his remarkable achievement of creating 97 books within a span of nine months, thanks to the assistance of artificial intelligence (AI). In an article published by Newsweek, Boucher revealed that he utilized various AI tools to bring his vision to life.
Boucher employed the AI image generator called Midjourney to illustrate the contents of his books. For brainstorming and text generation, he relied on ChatGPT and Anthropic’s Claude. Each of his novels ranged from 2,000 to 5,000 words and included an impressive 40 to 140 AI-generated images. The author mentioned that, on average, it took him approximately six to eight hours to create and publish a book using AI tools, although some could be completed in as little as three hours.
To make his creations available to the public, Boucher opted to sell his books online, with prices ranging from $1.99 to $3.99 per copy. In his article for Newsweek, he expressed his appreciation for the role AI played in his creative process, noting that it significantly boosted his productivity while maintaining consistent quality. Furthermore, AI tools enabled him to delve into intricate world-building with unparalleled efficiency.
The market has recently witnessed a surge in AI-generated novels. In February, ChatGPT was credited as the author or coauthor of over 200 titles listed in Amazon’s bookstore. The genres that garnered the most attention were AI guides and children’s books.
One example of AI-assisted book creation comes from Ammaar Reshi, a product-design manager at a San Francisco-based financial-tech company. Reshi revealed that he utilized ChatGPT and Midjourney to write and illustrate a children’s book titled “Alice and Sparkle” in just 72 hours. The book stirred controversy on Twitter, however, drawing backlash from creatives who raised concerns about the use of AI image generators and the perceived quality of the writing.
The advent of AI in the realm of book creation has opened up new possibilities, allowing authors like Tim Boucher to expand their creative output. While these developments have sparked debates within the artistic community, it is undeniable that AI continues to reshape the landscape of literary expression.
During OpenAI CEO Sam Altman’s testimony before a Senate panel, many became acquainted with SoundExchange, a Washington, DC-based nonprofit organization established two decades ago to collect royalties from digital music platforms and distribute them to music creators.
Senator Marsha Blackburn (R-TN) questioned Altman regarding the compensation of songwriters and musicians when their works are utilized by AI companies. She emphasized that the Nashville music community should have the authority to decide if their copyrighted songs and images are used for training these AI models. Blackburn inquired whether a system similar to SoundExchange could be employed for the collection and distribution of funds to compensate artists.
Although Altman claimed to be unfamiliar with SoundExchange, he acknowledged the importance of ensuring that content creators benefit from AI technology.
Michael Huppe, President and CEO of SoundExchange, as well as an adjunct professor in music law at Georgetown University, expressed his satisfaction with Blackburn’s remarks. He acknowledged the rapidly evolving landscape, where AI-generated songs mimicking popular artists can go viral, platforms like the one launched by Grimes enable anyone to create AI-generated songs using her voice, and AI is used to release songs featuring deceased artists like Notorious B.I.G.
Huppe commended Senator Blackburn for her forward-thinking approach in recognizing the necessity of allowing the creative class to actively participate and be fairly compensated within this new technological landscape. He emphasized that AI is here to stay, underscoring the importance of compensating and protecting the work of artists in this realm.
Not just about artists — even the NFL is concerned
How AI development affects creative workers is not just about the music industry, Huppe emphasized. He pointed to the March launch of the Human Artistry Campaign, a set of principles that outline the responsible use of AI to “support human creativity and accomplishment with respect to the inimitable value of human artistry and expression.” The campaign, he said, has been joined by over 100 organizations representing songwriters, musicians, authors, literary agents, publishers, voice actors and photographers — as well as non-artistic entities like sports organizations, including the Major League Baseball Players Association and the NFL Players Association.
Why sports? “Many players profit off their name, image and likeness,” said Huppe. “So this isn’t just about copyright when we talk about what happens [with AI]. It’s also how generative AI — whether text, images, audio or video — can capitalize on those who have built up their brand and persona. You have someone else trying to capitalize on that without permission.”
Creative class “getting louder” about AI
The bottom line, Huppe said, is that how AI uses creators’ work should be their choice. “It’s about fairness and control, so that the creative class can’t just have these things taken away from them.”
Huppe pointed out that there is already a nascent marketplace developing of people licensing their works for AI, such as how OpenAI licensed images from Shutterstock to train its models. “You can imagine a world where that starts to be the norm,” he said, “where there’s an organized licensing structure and ethical AI companies can know what’s allowed to be scraped and what’s off-limits … and where they share part of their profits with the creative community.”
With other industries pushing back on generative AI — including lawsuits filed by visual artists, striking Hollywood writers and unionizing journalists — and celebrities like Justine Bateman and Sting speaking out, Huppe said the creative class “is getting louder as we speak.”
Music, he said, has often been like “the marines on the beach” when it comes to dealing with new technologies that ultimately affect all industries: “There’s almost no industry that doesn’t have the risk of being really impacted by generative AI. It’s on everybody’s mind.”
According to a recent survey conducted by automation software firm UiPath, a substantial majority of workers (approximately 60%) believe that AI-powered automation solutions can mitigate burnout and significantly improve job satisfaction. Moreover, 57% of respondents view employers that integrate business automation to support their employees and streamline operations more favorably than employers that do not.
As workloads intensify, 28% of individuals report taking on extra responsibilities due to layoffs or hiring freezes. A full 29% of workers worldwide experience burnout. This is fueling an escalating dependence on AI tools for alleviation.
The automation generation
These factors are contributing to the emergence of what has been called the “automation generation” — professionals who proactively adopt automation and AI to enhance collaboration, foster creativity and boost productivity, regardless of age or demographic.
These individuals actively seek technologies that enhance their professional and personal lives, as they strive to avoid feeling dehumanized.
One of the survey’s primary revelations is that 31% of respondents actively employ business automation solutions in their workplaces.
The automation generation subgroup believes they have the resources and support they need (87%) to carry out their responsibilities effectively. Furthermore, 83% of these workers believe that business automation solutions can effectively mitigate burnout and enhance job satisfaction.
“With more than half of respondents stating they believe automation can address burnout and improve job fulfillment, it is clear that AI-powered business automation technology is already positively impacting business and technical workers and helping them to reduce time spent on repetitive tasks and focus on more critical and gratifying work,” said Brigette McInnis-Day, chief people officer at UiPath.
She emphasized that this assertion is reinforced by the fact that among the respondents who are already using business automation solutions, 80% believe that these solutions enable them to perform their jobs more effectively, and 79% hold a more positive perception of employers that implement business automation than of those that don’t.
The survey, administered in March 2023, was conducted in partnership with Researchscape. It garnered online responses from 6,460 executives worldwide. Topline results were weighted to ensure representation of each country’s GDP, with the following distribution: U.S. (55%), Japan (10%), Germany (9%), India (8%), United Kingdom (7%), France (6%), Australia (4%) and Singapore (2%).
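Weighting topline results so each country contributes in proportion to a target share (here, its share of GDP) rather than its raw respondent count is a standard reweighting step. A minimal sketch with invented figures, not the survey's actual data:

```python
# Hypothetical sketch of GDP-weighting per-country survey results.
# Target shares and per-country agreement rates are invented; a real
# topline would cover all surveyed countries.
weights = {"US": 0.55, "Japan": 0.10, "Germany": 0.09}     # target shares
agree_rate = {"US": 0.60, "Japan": 0.50, "Germany": 0.70}  # raw % agreeing

# Weighted topline: each country's rate scaled by its target share,
# normalized so the shares used sum to 1.
total_weight = sum(weights.values())
topline = sum(weights[c] * agree_rate[c] for c in weights) / total_weight
```

The effect is that a country that is over-sampled relative to its target share no longer dominates the headline percentage.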
Tackling office workloads through AI
The survey reveals that workers worldwide are increasingly embracing automation and AI-powered tools to tackle mundane tasks.
Specifically, respondents expressed their desire for automation to assist in tasks such as data analysis (52%), data input/creation (50%), IT/technical issue resolution (49%) and report generation (48%).
When questioned about the sources of their burnout and work fatigue, respondents highlighted working beyond scheduled hours (40%), pressure from managers and leadership (32%), and excessive time dedicated to tactical tasks (27%) as the primary causes.
“AI-powered automation emerges as a solution to alleviate these leading causes of burnout, enabling workers to swiftly and effortlessly locate and analyze data while streamlining repetitive and time-consuming tasks,” said McInnis-Day.
Workers of the automation generation emphasize flexibility, career advancement and focused work time. In terms of where automation tools impact their jobs, respondents expressed the desire for enhanced flexibility in their work environments (34%), allocated time for acquiring new skills (32%) and dedicated hours for critical tasks (27%).
“Unlike the previous defining generational categories, the automation generation encompasses all ages and demographics,” explained McInnis-Day. “It is the professionals embracing AI to be more collaborative, creative and productive as well as using these technologies to deliver more satisfying, positive workplace experiences, enrich their personal lives and prevent them from overall feeling like robots themselves. They are looking for a renewed and revived sense of purpose in their work — and automation is helping them realize that.”
Not surprisingly, the survey revealed that younger employees are more receptive to these new technologies. Majorities of Generation Z (69%), Millennials (63%) and Generation X (51%) respondents firmly believe that automation has the potential to enhance their job performance.
“Among the workers surveyed, 31% of respondents said they were already utilizing business automation solutions (of this group, 39% were Millennials and 42% were Gen Z). Additionally, of the 31% already using business automation solutions, 87% feel they have the resources and support needed to do their job effectively,” added McInnis-Day. “The findings prove that employees using AI-powered automation believe in its ability to advance their careers and support work-life balance.”
The growing demand for automation and AI-powered tools
According to McInnis-Day, persistent economic uncertainty and the need for organizations to accomplish more with fewer resources will drive increasing demand for automation and AI-powered tools. She said that companies that adopt an open and adaptable approach to deploying AI are best positioned to attract skilled employees who can contribute to their success.
“The top resource the automation generation identified as the key aspect that would help them do their jobs better and/or advance was technical tools and software,” she said. “Fifty-eight percent of respondents indicated they were looking for these technology tools to help them respond to today’s economic and labor market pressures.”
She advises business leaders to equip their workers with AI-powered automation tools to thrive in an automation-first world and alleviate resource constraints.
“These survey results provide compelling evidence that incorporating AI-powered automation across the organization is not only a wise investment but also aligns with employees’ preferences,” she said. “With workloads on the rise and employees seeking careers that offer a healthy work-life balance, the integration of AI-powered automation becomes crucial in delivering more fulfilling and positive workplace experiences.”