OpenAI CEO Sam Altman expressed concern about the potential for artificial intelligence (AI) to interfere in elections during his testimony before a Senate panel on Tuesday, calling for rules and guidelines requiring disclosure from companies that provide AI models.
This marked Altman’s first appearance before Congress, where he advocated for stringent licensing and testing requirements for the development of AI models in the United States. When asked about the specific AI models that should require licensing, Altman suggested that any model capable of persuading or manipulating people’s beliefs should meet a high threshold for regulation.
Altman further asserted that companies should be free to decide whether their data is used for AI training, a topic already under discussion in Congress, though he said material available on the public web should generally be considered fair game for training. On OpenAI’s own business model, he did not rule out advertising but said he leaned toward subscriptions.
The OpenAI CEO’s testimony highlighted the growing concerns surrounding the potential misuse of AI in electoral processes, emphasizing the need for proactive measures to address these challenges and ensure the integrity of democratic systems.
Top technology CEOs convened
Altman’s testimony was one of many at the Senate as the White House invited top technology CEOs to address AI concerns with U.S. lawmakers seeking to further the technology’s advantages, while limiting its misuse.
“There’s no way to put this genie in the bottle. Globally, this is exploding,” said Senator Cory Booker, a lawmaker concerned with how best to regulate AI.
Altman’s warnings about AI and elections come at a time when companies large and small have been competing to bring AI to market, with billions of dollars at play. But experts everywhere have warned that the technology may worsen societal harms such as prejudice and misinformation.
Some have even gone so far as to speculate AI could end humanity itself.
The White House is taking these concerns seriously, convening relevant authorities and executives to try to ensure that the worst-case scenarios do not come to pass.
Claude AI, the ChatGPT-rival from Anthropic, can now comprehend a book containing about 75,000 words in a matter of seconds. This is a huge leap forward for chatbots as businesses seek technology that can churn out large pieces of information quickly.
Since the launch of ChatGPT, companies such as Bloomberg and JPMorgan Chase have looked to leverage the power of AI to make better sense of the finance world. Where building such capabilities has taken them months, Anthropic says Claude can digest comparable volumes of text in seconds.
How Anthropic supercharged its AI
In computing terms, a token is a fragment of text used to simplify data processing. The number of tokens that a large language model (LLM) can process at a given time is called its context window, which works like short-term memory.
An average human can read 100,000 tokens’ worth of text in about five hours. However, that is only the reading time; more time would be needed to remember and analyze the information.
OpenAI’s GPT-4 LLM has a context window of 4,096 tokens (~3,000 words) when used with ChatGPT, rising to 32,768 tokens via the GPT-4 API. Claude’s context window was previously about 9,000 tokens, but the company has now increased it to 100,000 tokens (roughly 75,000 words).
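As a rough illustration of how word counts map to these limits (the exact mapping varies by tokenizer; the 0.75 words-per-token figure below is a common rule of thumb, not a precise constant):

```python
# Rule of thumb: 1 token corresponds to roughly 0.75 English words.
# Exact counts depend on the model's tokenizer; this is only an estimate.

def estimate_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Approximate the number of tokens needed for a given word count."""
    return round(word_count / words_per_token)

def fits_in_context(word_count: int, context_window: int) -> bool:
    """Check whether a text of `word_count` words fits in a context window."""
    return estimate_tokens(word_count) <= context_window

# A 75,000-word book needs about 100,000 tokens -- Claude's new window size.
print(estimate_tokens(75_000))            # 100000
print(fits_in_context(75_000, 100_000))   # True
print(fits_in_context(75_000, 32_768))    # False: too big for the GPT-4 API window
```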
To demonstrate how this improves the AI’s performance, Anthropic loaded the entire text of The Great Gatsby (72,000 tokens) with one line modified from the original. The AI was tasked with spotting the difference, which it did in just 22 seconds, the company claimed in a press release.
This might not sound very impressive to those who have used word processors to find differences between two texts. Where AI trumps word processors is the ability to answer questions about the text and analyze it in depth.
Anthropic is targeting businesses that need large numbers of documents processed, which can feed those documents to its AI and ask Claude specific questions about them. Like any chatbot, Claude can be prompted to look for specific information and return results, as a human assistant would.
Anthropic also used Claude to process the transcript of a six-hour podcast recording, summarize it, and answer questions about it. The company is confident the same approach can be applied to financial reports and legal documents, as well as to improving code or answering technical questions.
Claude AI with the improved context window is available via the Claude API, for which there is a waitlist.
The European Parliament has made a noteworthy stride in formulating a regulatory framework to govern the use of artificial intelligence (AI) within Europe. The draft AI Act has garnered favorable votes from key committees in the Parliament, delineating restrictions on AI deployment while still fostering innovation. This response comes in light of the rapid advancement of ChatGPT and similar generative AI systems, which have demonstrated the benefits and opportunities afforded by advanced technology, but have also raised concerns about the potential dangers stemming from the dissemination of fabricated content.
The inception of the AI Act dates back to 2021, with the objective of regulating any product or service that employs an AI system. Classifying AI into four tiers based on risk, the Act imposes more stringent rules and demands greater transparency and accuracy for higher-risk applications. The aim is to ensure responsible development and utilization of AI, steering clear of a society controlled by AI.
An integral facet of the AI Act is its commitment to balancing the protection of fundamental rights with legal certainty for businesses, while fostering innovation in Europe. Policymakers acknowledge that AI can be put to both positive and negative uses, and they judge the associated risks too significant to leave unaddressed. Consequently, the AI Act aims to prevent the exploitation of AI to create a surveillance state or perpetuate discrimination against specific groups.
The AI Act will impose a ban on the use of remote facial recognition technology, with limited exceptions for countering and preventing specific terrorist threats. This significant measure arises from concerns surrounding the potential misuse of facial recognition technology in building a surveillance society. Additionally, the Act will prohibit the use of policing tools aimed at pre-determining crime occurrences and perpetrators, as lawmakers believe such tools are inherently discriminatory and violate human rights.
Another noteworthy inclusion in the AI Act is the classification of generative AI systems, including ChatGPT, as high-risk systems. Consequently, these systems will be subject to the same level of scrutiny and regulation as other high-risk applications like self-driving cars and medical devices. The decision to include generative AI within the purview of the AI Act reflects policymakers’ apprehensions regarding the potential misuse of this technology in generating harmful fabricated content.
The AI Act represents a significant stride forward in the regulation of AI within Europe. However, it is important to note that the parliamentary committees have reached an agreement that represents merely the initial step in a lengthy and bureaucratic process, which could span several years before the European Union’s 27 member states adopt it as law. Moreover, implementing the AI Act is expected to face significant challenges, particularly in terms of enforcing regulations and overseeing AI systems.
In today’s increasingly complex and geographically dispersed organizations, encompassing remote teams and diverse knowledge systems, the challenge of tracking down crucial data across the entire enterprise knowledge ecosystem has become a formidable task. Consequently, employees are experiencing the negative consequences of this knowledge access challenge, leading to reduced productivity and waning engagement within the workforce.
During the recent VB Spotlight event titled “The Impact of Generative AI on Enterprise Search: A Game-Changer for Businesses,” Phu Nguyen, the head of digital workplace at Pure Storage, emphasized the detrimental effects of this issue. He highlighted that employees are feeling frustrated due to the inability to locate the information they need, ultimately resulting in diminished engagement levels and decreased productivity.
To shed light on potential solutions, the event brought together industry experts including Jean-Claude Monney, a digital workplace, technology, and knowledge management advisor, and Eddie Zhou, the founding engineer specializing in intelligence at Glean. The panelists discussed the emergence of a revolutionary advancement in workplace-specific search tools, which harness the power of generative AI. These innovative tools aim to provide employees with comprehensive access to the knowledge they require, along with its contextual relevance, regardless of their location within the organization.
By leveraging generative AI, organizations can overcome the limitations of traditional search methods and unlock a wealth of information that was previously challenging to navigate. This transformative technology enables employees to swiftly and efficiently access the precise knowledge they need, empowering them to make informed decisions and perform their tasks effectively.
Moreover, the contextual understanding offered by generative AI allows users to grasp the interconnectedness of information across different departments and teams. This comprehensive perspective fosters collaboration, facilitates cross-functional problem-solving, and encourages knowledge sharing within the organization.
The adoption of generative AI-powered search tools marks a significant leap forward in the quest to streamline knowledge access within enterprises. As organizations embrace this game-changing technology, they can alleviate the frustration experienced by employees, enhance productivity, and ultimately drive higher levels of engagement throughout the workforce.
The evolution of enterprise search
Traditional enterprise search can’t reach all the knowledge in an organization, which is spread out in multiple systems. It can mine structured knowledge, such as the data found in Jira, Confluence, intranets and sales portals, but unstructured knowledge, the information communicated through IM, Teams, Slack, and email, has been uncharted territory, difficult to corral in any helpful contextual way, Nguyen adds.
“The paradigm of knowledge management has changed significantly,” he says. “How do you have a system that can look at both structured and unstructured data and provide you with the answers that you’re ultimately looking for? Not the information that you need, but the answer that you’re looking for.”
Solutions that integrate with multiple systems and utilize generative AI can address these challenges, and help employees find the information they need to perform their jobs effectively, no matter where that knowledge resides.
“Companies are now building searches specifically for the workplace, built for internal searches that work across your internal system,” Nguyen explains. “Most importantly, they’re built on a knowledge graph that returns a search that’s more relevant to your employees. This is all very exciting for us because we think of this as part of our employee information center strategy. Previously it was just an intranet and our support portal, but now we have this workplace search that can connect information across multiple systems inside our organization.”
How organizations can leverage generative AI
There are three major ways companies can leverage generative AI, and they’re game changers, Monney says. First, he says, are the benefits that an NLP interface brings.
“Time to knowledge is a new business currency,” says Monney. “What we’ve seen with generative AI is this quantum leap in user experience. ChatGPT has democratized ways to talk to a system and get very succinct responses.”
At home, users have grown accustomed to the ease and convenience of natural language interfaces like Alexa and Siri; generative AI brings that user experience to the workplace, giving workers not just an enterprise search tool, but a digital knowledge assistant, he adds. It enables employees to find not just information but precise answers quickly, boosting productivity and efficiency, especially in complex decision-making scenarios. Generative AI also has the potential to go beyond answering individual questions and assist in more complex decision journeys, providing users with synthesized and relevant information without the need for explicit queries.
Generative AI can also automate repetitive tasks and streamline workflows — for example, chat bots that are powered by generative AI can handle customer service inquiries, product recommendations, or simply assist with booking appointments. That frees time for more complex tasks and greatly increases productivity.
Lastly, these generative AI solutions can be precisely refined for industry-specific and case-specific use. Companies can add their own corpus of knowledge to the large language models that generative AI uses, to improve relevance and the time to knowledge.
Bringing generative AI into the workplace
“To bring this technology into the workplace is not an easy thing,” Zhou cautions. It requires a knowledge model, which is composed of three pillars. The first is company knowledge and context. An off-the-shelf model or system, without being properly connected to the right knowledge and the right data, will not be functional, correct, or relevant.
“You need to build generative AI into a system that has the company knowledge and context,” he explains. “That allows for this trusted knowledge model to form out of the combination of these things. Search is one such method that can deliver this company knowledge and context, in conjunction with generative AI. But it’s one of several.”
The second pillar of the trusted knowledge model is permissioning and data governance: being aware, as a user interfaces with a product and a system, of what information they should and should not have access to.
“We speak of knowledge in the company as if it’s free-flowing currency, but the reality is that different users and different employees in a company have access to different pieces of knowledge,” he says. “That’s objective and clear when it comes to documents. You might be part of a group alias which has access to a shared drive, but there are plenty of other things that a given person should not have access to, and in the generative setting it’s incredibly important to get this right.”
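A minimal sketch of the idea Zhou describes (the names and data structures here are hypothetical illustrations, not Glean’s actual implementation): retrieved documents are filtered against the querying user’s group memberships before any text reaches the language model.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # groups that may read it

def permission_filter(results, user_groups):
    """Keep only documents the querying user is allowed to see.

    This must run *before* retrieved text is handed to the language model,
    so the model never sees (and so can never leak) restricted content.
    """
    return [d for d in results if d.allowed_groups & user_groups]

# Hypothetical example: a search returns three hits, but the user belongs
# only to the 'engineering' group.
hits = [
    Document("d1", "Roadmap draft", {"engineering"}),
    Document("d2", "Salary bands", {"hr"}),
    Document("d3", "Oncall runbook", {"engineering", "sre"}),
]
visible = permission_filter(hits, {"engineering"})
print([d.doc_id for d in visible])  # ['d1', 'd3']
```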
The third and final one is referenceability. As the product interface has evolved, users need to build a trust with the system, and be able to verify where the system is pulling information from.
“Without that kind of provenance, it’s hard to build trust, and it can lead to runaway factuality errors and hallucinations,” he says – especially in an enterprise system where each user is accountable for their decisions.
The emerging possibilities of generative AI
Generative AI means moving from questions to decisions, Zhou says, decreasing time to knowledge. Basic enterprise search might turn up a series of documents to read, leaving the user to dig out the information they need. With augmented answer-first enterprise search, the user doesn’t ask those questions individually; instead, they can express the underlying journey, the overall decisions that need to be made, and the LLM agent brings it all together.
“This generative technology, when we pair it with search, and not just single searches, it gives us the ability to say, ‘I’m going on a business trip to X. Tell me everything I need to know,’” he says. “An LLM agent can go and figure out all the information I might need and repeatedly issue different searches, collect that information, synthesize it for me and deliver it to me.”
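The pattern Zhou describes, an agent decomposing one broad request into several searches and synthesizing the results, can be sketched roughly as follows. The `llm` and `search` callables are stand-ins for a real language model and search backend; everything here is a hypothetical illustration, not a real product API.

```python
def plan_business_trip(destination, llm, search):
    """Decompose one broad request into several searches, then synthesize.

    `llm(prompt)` and `search(query)` are hypothetical stand-ins for a real
    language model and an enterprise search backend.
    """
    # 1. Ask the model which individual searches the request implies.
    queries = llm(
        f"List the internal searches needed to prepare a business trip to "
        f"{destination}, one per line."
    ).splitlines()

    # 2. Issue each search and collect the results.
    findings = {q: search(q) for q in queries if q.strip()}

    # 3. Have the model synthesize one briefing from everything found.
    evidence = "\n".join(f"{q}: {r}" for q, r in findings.items())
    return llm(f"Write a trip briefing for {destination} from:\n{evidence}")
```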
For more on the ways that generative AI and large language models can transform how knowledge is accessed and used in enterprises, the types of use cases and more, don’t miss this VB Spotlight!
Artificial intelligence (AI) is quickly becoming a common tool in a wide range of industries, including business, finance, and medicine. Due to the many different types of AI platforms and applications available, the possibilities for their use are endless. For example, AI can be used to detect anomalies in bank transactions to help spot fraudulent activities. AI platforms can also be used as diagnostic tools, to help pharmaceutical companies with drug discovery, and to aid doctors in spotting tumors that might otherwise be missed. And that is just the beginning.
However, the education industry still has some way to go before it has harnessed the full potential of AI. Ideas include using AI to make education more engaging and personalized, improve accessibility, complement individual learning styles, and enhance the learning experience for both the teacher and the student. In addition to improving the learning experience for students, AI could be used to help teachers save time and resources by automating tasks such as checking answer sheets and other administrative tasks.
In this article, we will take a look at ten examples of AI technology that have the potential to revolutionize the education industry and explore some organizations that are using the technology to improve performance in education.
1. Personalized learning
One theory in pedagogy is that everyone has a different learning style. Some are more visual learners, some are more aural learners, while others are more kinesthetic learners, etc. While this theory has been hotly debated, it is generally agreed that people do tend to learn in different ways – whether that involves different work and study styles, learning at different paces, or finding some subjects and concepts easier than others. Given this, it makes sense to personalize the learning experience, doesn’t it? But, if a school or teacher has to personalize lesson plans for every student, it would be impossible – there is simply not enough time. Enter – personalized learning using AI.
One of the strengths of AI is that it is capable of analyzing large amounts of data quickly and finding patterns, making it a perfect tool for developing personalized learning. AI can be used to devise individual lessons around a particular subject quickly. AI-based learning systems might also be able to give teachers detailed information about students’ learning styles, abilities, and progress and provide suggestions for how to customize their teaching methods to students’ individual needs. For example, suggesting more advanced work for some students and extra attention for others.
Additionally, AI could be used to predict results more accurately, thereby helping teachers understand whether their lesson planning will meet targets for learning.
AI also helps with planning, scheduling, and producing lessons for students, making the experience unique and rewarding. This could also free up time for teachers, who can then concentrate on high-value tasks, such as working with students.
For example, a number of universities have tested the use of chatbots for repetitive tasks that would normally be done by a professor or faculty member – such as providing answers to questions frequently asked by students. Both Staffordshire University in the U.K. and Georgia Tech have developed chatbots that offer 24/7 assistance to students.
Duolingo uses adaptive learning to enhance the user learning experience. (Image: Duolingo)
2. Adaptive learning
Adaptive learning, or adaptive teaching, is an educational method in which AI is used to customize resources and learning activities to cater to the unique needs of each learner. This is especially useful in online learning.
This is done via rigorous analysis of a student’s performance data, after which the pace and difficulty of the course material are adjusted by the AI algorithm in order to optimize the learning process.
This method not only optimizes learning but can also save time and resources by removing unnecessary repetition and focusing on the concepts or areas that a student might be struggling with. The teacher can provide support wherever the student needs and the student can learn at a pace they are comfortable with.
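A toy version of the adjustment loop described above (real adaptive-learning systems use far richer student models; the thresholds here are purely illustrative assumptions):

```python
def adjust_difficulty(level: int, recent_scores: list) -> int:
    """Move lesson difficulty up or down based on recent performance.

    Illustrative thresholds: consistently high scores advance the learner,
    consistently low scores step back to reinforce fundamentals.
    """
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85:              # mastering the material: advance
        return level + 1
    if avg < 0.60:               # struggling: step back and reinforce
        return max(1, level - 1)
    return level                 # right pace: stay put

print(adjust_difficulty(3, [0.9, 0.95, 0.88]))  # 4
print(adjust_difficulty(3, [0.4, 0.55, 0.5]))   # 2
print(adjust_difficulty(3, [0.7, 0.75]))        # 3
```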
Many companies are incorporating adaptive learning to improve the way content is delivered. One popular example is Duolingo, a language-learning app that provides listening, reading, and speaking exercises for learning around 40 different languages. The app uses AI to help ensure that lessons are paced and leveled for each student according to their performance.
3. Automated grading
Grading assignments and exams is one of the most time-consuming tasks in education. With the help of machine learning algorithms, AI tools can evaluate essays, multiple-choice tests, and programming assignments with great accuracy and efficiency, thereby saving teachers a lot of time.
A computer doing these tasks not only saves time but also ensures consistency in scoring, potentially eliminating biases, including unconscious bias, that teachers may have, and reducing human error in the correction process. The AI tool can also provide personalized feedback to students and teachers. This can help students improve in problem areas and enables students to take ownership of their learning.
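For objective formats the mechanics are simple. A minimal multiple-choice grader with per-question feedback might look like the following sketch (essay or code scoring would require trained models; this is purely illustrative):

```python
def grade(answers: dict, key: dict) -> dict:
    """Score a multiple-choice submission against an answer key.

    Returns the score plus the questions to revisit. Applying the same key
    to every submission is what guarantees consistent scoring.
    """
    wrong = [q for q, correct in key.items() if answers.get(q) != correct]
    return {
        "score": (len(key) - len(wrong)) / len(key),
        "review_topics": wrong,  # personalized feedback: what to revisit
    }

key = {"q1": "b", "q2": "d", "q3": "a"}
result = grade({"q1": "b", "q2": "c", "q3": "a"}, key)
print(result)  # score of 2/3, with 'q2' flagged for review
```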
Although automated grading powered by AI has a lot of advantages, bias may exist, even in AI. This is because machine learning algorithms are trained on data, which itself may have underlying biases. Therefore, this is still a field requiring more research to make the technology bias-free.
For example, according to a 2021 article published in OxJournal, China has been using AI auto-grading platforms at increasing scale, with around 1 in 4 schools in the country testing a machine learning auto-grading platform that can also offer suggestions on submitted work.
4. Intelligent tutoring systems
Intelligent tutoring systems (ITS) are computer systems powered by machine learning algorithms that provide personalized and adaptive lesson plans based on each student’s learning needs and pace. Like the AI tools above, ITSs analyze student data to understand learning patterns, which they then use to provide customized suggestions, feedback, and exercises suited to the individual needs of each student.
ITSs are helpful to both students and teachers: they allow teachers to monitor students’ progress and modify their teaching approach to deliver lessons effectively. ITSs can help students learn at their own pace while providing support when necessary and challenging them when they are ready to learn more advanced concepts.
A study by the U.S. Department of Education found that existing ITSs can improve student literacy by improving their reading comprehension and writing skills. However, implementation of the systems in a classroom remains a challenge. To overcome this, natural language processing techniques have been suggested for use in scoring student responses.
Despite the challenges faced by these systems, students have had some positive responses to the use of ITSs. Another study found that students find ITSs easy to use and learn, although not necessarily fun.
5. Smart content creation
Creating lesson plans is one of the greatest challenges for a teacher, as each student has unique requirements based on the way they learn and understand concepts. The term “smart content creation” describes the use of AI to automate and enhance the generation of educational content. The AI platforms can provide detailed insight by analyzing student data to create personalized and engaging educational material.
This is then used to create customized environments depending on various learning outcomes. The students can then choose the lesson plan that aligns with their requirements. AI can help to generate interactive quizzes, simulations, and experiments, via chatbots, augmented or virtual reality, which can then be used in the customized environment to enhance the learning experience.
The biggest and most successful demonstration of this is Coursera. It uses AI to curate multiple educational and professional courses that can help the learner. Teachers can also suggest appropriate courses based on a student’s learning performance, pace, and individual requirements.
6. Learning analytics
Combing through large amounts of student data is a tedious task, but it can provide valuable insights into a student’s learning and performance. AI-powered automated analytics makes this challenging, time-consuming analysis much easier and faster.
Teachers can use the data to track student performance and engagement as well as to make timely interventions and provide additional support to students who require it. Similarly, students can also use it to track their performance and learning and use it to ask for additional help if they need it. The University of Michigan has a dashboard called My Learning Analytics that allows students to visualize and track their grade distribution, assignment planning, and resources.
However, there are also potential issues with the implementation of learning analytics in the education sector. A study published in 2022 highlights ethical and privacy issues, data collection, and data analysis as potentially challenging implementation problems for learning analytics. While the latter two concerns can also be addressed with the use of AI, there are still significant ethical concerns that will have to be dealt with.
7. Virtual assistants
Many administrative tasks, such as lesson planning and organizing schedules, can be automated thanks to the power of AI. Virtual assistants take on laborious, repetitive activities, freeing up teachers’ valuable time to focus on essential duties like giving lectures and interacting with students.
Additionally, virtual assistants can provide customized feedback to students, monitor their progress, and provide additional resources based on a student’s individual needs. Using AI-powered virtual assistants can help teachers streamline administrative work and focus on making the learning experience engaging for students.
A study in SpringerOpen even found a correlation between students who used virtual assistants, such as chatbots, and their academic performance. They found that students who interacted with chatbots outperformed those who interacted with the course teacher in terms of academic performance. The study was conducted on 68 undergraduate students in Ghana and made a positive case for the use of AI tools, such as virtual assistants, in the education sector.
NLP is a technique to make computer systems understand human language. (Image: Wikimedia Commons)
8. Natural language processing
Natural language processing (NLP) is a field of AI that deals with making computer systems that can understand and interpret human languages. NLP has many different applications, such as text generation, chatbots, and information extraction, among many others. One of the most popular uses of NLP is in large language models, such as ChatGPT, developed by OpenAI.
ChatGPT may be used by students to help with homework, prepare for an exam, or simply satisfy their curiosity while learning. Teachers can also use ChatGPT to prepare lesson plans and check assignments for grammar and information. As the popularity of the software has risen, more and more students are using this resource. And although it may seem like there are no downsides to this technology, many people think otherwise.
Students should not treat ChatGPT as the answer to all their homework questions, and teachers should not treat it as an absolute authority on human knowledge. As mentioned in this study, it should be viewed more as an assistive technology that responds to societal values and needs. Other concerns also exist, such as bias, out-of-date knowledge, plagiarism, and its use as an aid in cheating.
Other technologies that use NLP, like the automated essay grading systems covered earlier in the article, face similar issues. Future developments in the education sector should address these concerns when deploying NLP technologies.
9. Predictive modeling
Similar to learning analytics, AI-powered predictive modeling deals with analyzing large amounts of data, which is then used to predict various outcomes, such as student performance. This information is valuable to teachers, parents, institutions, governments, and students as they can greatly help with the learning experience and setting benchmarks. This can help teachers offer timely guidance to students based on the student’s predicted performance and on their previous test or exam results.
Data-driven analysis is an important tool to have in education as it can improve individual student performance and give them additional support when needed, overall enriching their learning experience. It is also of value to governments for use in planning educational goals. A study on community college students used predictive modeling to identify at-risk students based on several key variables. This helped them to drive interventions to help these students.
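A highly simplified illustration of the approach (real systems fit models such as logistic regression to historical student outcomes; the predictor weights below are made-up assumptions for the sake of the example):

```python
def at_risk_score(attendance: float, avg_grade: float, logins_per_week: float) -> float:
    """Combine a few predictors into a 0-1 risk score.

    Weights are illustrative, not fitted; a production system would learn
    them from historical outcomes (e.g. via logistic regression).
    """
    risk = (
        0.4 * (1 - attendance)                        # missed classes raise risk
        + 0.4 * (1 - avg_grade)                       # low grades raise risk
        + 0.2 * max(0.0, 1 - logins_per_week / 5)     # low engagement raises risk
    )
    return min(1.0, risk)

def flag_for_intervention(students: dict, threshold: float = 0.5) -> list:
    """Return IDs of students whose risk score crosses the threshold."""
    return [sid for sid, feats in students.items() if at_risk_score(*feats) >= threshold]

students = {
    "s1": (0.95, 0.85, 6),   # attending, passing, engaged
    "s2": (0.40, 0.50, 1),   # likely at risk
}
print(flag_for_intervention(students))  # ['s2']
```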
10. Immersive technologies
Immersive technologies, such as augmented reality (AR) and virtual reality (VR), have become increasingly popular over the past few years. AR overlays computer-generated content onto real-world objects, enhancing a user’s perception of reality, while VR is a simulated virtual environment that the user can experience as if it were real. These technologies are best known for gaming and the metaverse but have huge potential in the education sector.
Students can use immersive technologies to interact with the learning material to improve their understanding of complex concepts and overall enrich the learning experience. VR, in particular, has many promising applications, such as creating labs where students can conduct chemistry experiments or virtually dissect animals. AR can enable students to study stars and galaxies up close, allowing them to engage with physical things and providing them with more hands-on and experiential learning.
An article published by the Information Technology and Innovation Foundation (ITIF) explained that AR/VR technologies can reduce the learning curve for students. They also mention that AR/VR technologies can help teachers enhance STEM courses, medical simulations, arts and humanities materials, and technical education. AR/VR technologies are already being used in several institutions, such as Arizona State University (ASU), which has collaborated with Dreamscape Immersive to create Dreamscape Learn. ASU students even created a time travel experience using this technology.
Conclusion
And there you have it – 10 of the most promising examples of AI improving the education sector. While AI provides numerous advantages for both teachers and students, it’s crucial to keep in mind that it also has certain disadvantages.
One limitation of AI is that it cannot replace human interaction and empathy, which are essential in the teaching and learning process. Additionally, as the article already discussed, AI algorithms can perpetuate biases. And finally, there are always concerns about data privacy and security when it comes to AI. As a result, it is crucial to integrate AI into education, but doing so requires careful consideration of both its potential advantages and drawbacks.
The use of AI in education holds a lot of potential and could even revolutionize the way future generations of students learn.
During the Google I/O conference, Google announced a forthcoming feature that will enable users to determine if an image is AI-generated. This new functionality, set to launch this summer, leverages hidden information embedded within the image.
As part of its focus on AI, Google unveiled a range of products and features. In a blog post, Cory Dunton, Google’s product manager for search, emphasized the importance of having the complete picture when assessing the reliability of information or images.
The new tool, named “About this image,” provides users with additional details about when the image was initially indexed by Google, its original sources, and whether it has appeared on news or fact-checking websites. Accessible through various methods such as clicking on the three dots above an image in search results, using Google Lens, or swiping up in the Google app, this feature aims to empower users with more context.
As Google prepares to launch its own text-to-image generator, the company commits to including data that indicates if an image was created by AI. By adding markup to the original files, Google intends to provide viewers with the necessary context when encountering AI-generated images outside its platforms. Additionally, image publishers like Shutterstock and Midjourney will introduce similar labels in the upcoming months.
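The mechanics of such provenance labeling can be illustrated with a small sketch. The snippet below assumes a metadata field named `DigitalSourceType` in the spirit of the IPTC photo-metadata convention; the exact markup Google and partner publishers will embed has not been detailed, so all names here are illustrative:

```python
# Minimal sketch of provenance labeling via embedded image metadata.
# "DigitalSourceType"/"trainedAlgorithmicMedia" follow the IPTC convention
# for synthetic media; the markup actually used by Google is an assumption.

AI_GENERATED = "trainedAlgorithmicMedia"

def label_image(metadata: dict) -> dict:
    """Return a copy of the metadata with an AI-provenance tag added."""
    tagged = dict(metadata)
    tagged["DigitalSourceType"] = AI_GENERATED
    return tagged

def is_ai_generated(metadata: dict) -> bool:
    """Check whether the embedded metadata marks the image as AI-made."""
    return metadata.get("DigitalSourceType") == AI_GENERATED

camera_photo = {"Make": "Pixel 7", "DateTime": "2023:05:10 12:00:00"}
synthetic = label_image({"Software": "hypothetical-text-to-image"})

print(is_ai_generated(camera_photo))  # False
print(is_ai_generated(synthetic))     # True
```

A viewer that understands the tag can then surface the context Google describes, regardless of where the image is encountered.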
While AI image generators like Midjourney, alongside OpenAI’s DALL-E, have gained recognition, there have been concerns about the misuse of such technology. Midjourney faced scrutiny for creating fake images depicting Donald Trump’s arrest, highlighting the potential ethical implications associated with AI-generated content.
Google’s image verification feature aims to enhance transparency and enable users to make more informed judgments about the authenticity and reliability of images in an AI-driven landscape.
In November 2022, OpenAI unleashed ChatGPT, setting a benchmark for conversational AI. Since then, Google has been playing catch-up with its own tool, Bard. Now, at the Google I/O conference, Google has announced significant upgrades to Bard to compete with ChatGPT.
Initially launched with limited availability and a waitlist, Bard encountered challenges in gaining traction. Now, Google is removing the waitlist and opening Bard to a global audience, aiming to broaden its reach and impact.
Google also unveiled several advancements to outpace ChatGPT, including multi-language support, visual responses, export functionality, and new integrations. These enhancements are designed to provide users with an enhanced and more versatile conversational AI experience.
During a Google I/O keynote, Sissie Hsiao, VP and GM of Google Assistant and Bard, highlighted the transformative impact of large language models and the team’s dedication to rapid improvement and iterative development of Bard.
With these updates, Google aims to position Bard as a competitive alternative to ChatGPT, leveraging its own advancements and capabilities in the field of conversational AI.
The term “bard” is a word used to describe a storyteller and is a moniker that is also commonly associated with famous English playwright William Shakespeare.
Bard’s words aren’t written by Shakespeare, or any other human (at least, not directly), but rather are generated from Google’s newest large language model (LLM) PaLM 2, which was also announced at today’s Google I/O event.
PaLM 2 provides Bard with significantly enhanced generative AI capabilities that exceed the initial functionality that Bard launched with earlier this year.
“With PaLM 2, Bard’s math, logic and reasoning skills made a huge leap forward, underpinning its ability to help developers with programming,” Hsiao said. “Bard can now collaborate on tasks like code generation, debugging and explaining code snippets.”
With code generation, Bard is also going a step further in its bid to outpace OpenAI. Hsiao said that starting next week, Bard will integrate precise code citations to help developers understand exactly where code snippets have come from.
What good is a Bard if you can’t share its work?
Another limitation of the original Bard was that responses and generated content remained in Bard, but that’s also about to change.
Hsiao announced that, starting today, Bard is adding export actions for Gmail and Google Docs, making it easy to integrate generated content. Going a step further, she announced that more extensibility is coming to Bard with the launch of tools and extensions.
“As you collaborate with Bard, you’ll be able to tap into services from Google and extensions with partners to let you do things never before possible,” Hsiao said.
Bard going multilingual
English isn’t the only language that Google’s users speak, and soon it won’t be the only language that Bard supports either.
The plan is for Bard to support 40 different languages, starting today with Japanese and Korean, with more to come in the following months.
“It’s amazing to see the rate of progress so far with more advanced models. So many new capabilities and the ability for even more people to collaborate with Bard,” Hsiao said.
Informatica, a known provider of end-to-end data management solutions, today debuted Claire GPT, a generative AI tool to simplify different aspects of data handling.
Announced at the company’s annual conference in Las Vegas, the offering allows enterprise users to consume, process, manage and analyze data through plain natural language prompts. It will be integrated with Informatica’s Intelligent Data Management Cloud (IDMC) and begin to roll out in the second half of 2023.
The news comes as leading industry vendors, including many in the data space, continue to look at large language models (LLMs) as a way to make their products accessible to a broader spectrum of users.
Interact with data via natural language
Effective data management is essential for business success, but given the tsunami of data that enterprises now face, manual approaches to managing it are no longer viable. They take a lot of time, resources and effort, and not every individual within the organization has the technical know-how for the job.
Claire GPT from Informatica aims to address this gap with a text-to-IDMC interface where users can enter simple natural language prompts to discover, interact with and manage their data assets.
Claire GPT. Image source: Informatica.
While the solution is yet to roll out widely, the company says it will support multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation.
“It leverages a multi-LLM architecture, using public LLMs for non-sensitive queries (like intent classification, where we use LLMs to identify the user intent — metadata exploration, data exploration, pipeline creation, etc.) and fine-tuned Informatica-hosted LLMs that generate data management artifacts,” Amit Walia, CEO of Informatica, told VentureBeat. He claims that the solution can help experienced data users, such as engineers, analysts and scientists, realize up to an 80% reduction in time spent on key data management tasks.
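The multi-LLM routing Walia describes can be sketched roughly as follows. A public LLM handles the non-sensitive step (classifying the user's intent), and a privately hosted, fine-tuned model produces the actual artifact; every function name below is a hypothetical stand-in, with the classifier reduced to a keyword heuristic:

```python
# Illustrative sketch of a multi-LLM architecture: a public LLM classifies
# intent, while a hosted model generates data-management artifacts so that
# sensitive data never leaves the platform. All names are hypothetical.

def classify_intent(prompt: str) -> str:
    """Stand-in for a public-LLM intent classifier (keyword heuristic)."""
    p = prompt.lower()
    if "pipeline" in p:
        return "pipeline_creation"
    if "metadata" in p or "schema" in p:
        return "metadata_exploration"
    return "data_exploration"

def hosted_generate(intent: str, prompt: str) -> dict:
    """Stand-in for a fine-tuned, privately hosted model that emits a
    data-management artifact; sensitive work stays in this step."""
    return {"intent": intent, "artifact": f"<{intent} plan for: {prompt}>"}

def handle(prompt: str) -> dict:
    intent = classify_intent(prompt)        # public LLM: non-sensitive
    return hosted_generate(intent, prompt)  # hosted LLM: sensitive work

result = handle("Create a pipeline that loads orders into the warehouse")
print(result["intent"])  # pipeline_creation
```

The design choice is the split itself: only the classification prompt, which contains no customer records, ever reaches the public model.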
Pairing with Claire
Informatica has designed the new conversational data management experience by pairing its enterprise-scale AI engine Claire with GPT capabilities. Claire processes 54 trillion transactions monthly, which, the company says, helps ground the chatbot’s answers in reality rather than hallucination.
“Ask ChatGPT to help you design and script a data pipeline, and it will. But that pipeline might not work. The problem is that LLMs in themselves lack any semblance of governance. They are black boxes that dodge questions about lineage. They emit errors and make things up. So, to really capture the productivity benefits of LLMs in a consistent way, you must put [them] in a governed setting. Pairing GPT capabilities with Informatica’s Claire platform creates the possibility that data teams can improve productivity with AI while maintaining governance and control,” said Kevin Petrie, vice president of research at Eckerson Group.
Notably, Informatica is not the only player leveraging generative AI in such a way. Salesforce recently launched SlackGPT, combining Slack’s internal knowledge with LLMs, while observability major New Relic has launched Grok, an AI assistant that monitors software for performance issues and helps fix them.
What’s more at Informatica World?
Along with Claire GPT, Informatica also debuted new Claire-driven data management capabilities, including inferred data lineage, autogenerated classifications, multicolumn completeness analysis and automapping.
The company also took to the stage to announce IDMC for Environmental, Social and Governance, as well as “Cloud Data Integration for PowerCenter (CDI-PC)” to help customers migrate on-premises PowerCenter assets to IDMC. With CDI-PC, Informatica claims, enterprises will be able to move to the cloud up to six times faster, reuse 100% of PowerCenter artifacts and assets in the cloud, and realize anticipated cost savings of up to 20 times.
Claire GPT is currently in the private preview stage and is expected to see a wider rollout in the second half of 2023. The same goes for CDI-PC.
Google has commenced its annual I/O conference with a strong emphasis on advancing artificial intelligence (AI) across its various domains, with a particular spotlight on PaLM 2.
Google I/O has traditionally served as a developer conference, covering a wide range of topics. However, this year’s event stands out as AI takes center stage in almost every aspect. Google aims to establish itself as a frontrunner in the market, even as competitors like Microsoft and OpenAI enjoy the success of ChatGPT.
The cornerstone of Google’s endeavors is its newly introduced PaLM 2, a large language model (LLM). PaLM 2 will provide the backbone for at least 25 Google products and services, which will be extensively discussed in sessions at I/O. These include Bard, Workspace, Cloud, Security, and Vertex AI.
Originally launched in April 2022, the initial version of PaLM (Pathways Language Model) served as Google’s foundational LLM for generative AI. According to Google, PaLM 2 significantly enhances the company’s generative AI capabilities in meaningful ways.
During a roundtable press briefing, Zoubin Ghahramani, VP of Google DeepMind, emphasized Google’s mission to make information universally accessible and useful. He highlighted how AI has accelerated this mission, providing opportunities to gain a deeper understanding of the world and create more helpful products.
As Google showcases PaLM 2 and its far-reaching implications at the I/O conference, it solidifies its commitment to advancing AI and harnessing its potential to improve user experiences and product functionality.
Putting state-of-the-art AI in the ‘palm’ of developers’ hands with PaLM 2
Ghahramani explained that PaLM 2 is a state-of-the-art language model that is good at math, coding, reasoning, multilingual translation and natural language generation.
He emphasized that it’s better than Google’s previous LLMs in nearly every way that can be measured. That said, one way that previous models were measured was by the number of parameters. For example, in 2022 when the first iteration of PaLM was launched, Google claimed it had 540 billion parameters for its largest model. In response to a question posed by VentureBeat, Ghahramani declined to provide a specific figure for the parameter size of PaLM 2, only noting that counting parameters is not an ideal way to measure performance or capability.
Ghahramani instead said the model has been trained and built in a way that makes it better. Google trained PaLM 2 on the latest Tensor Processing Unit (TPU) infrastructure, which is Google’s custom silicon for machine learning (ML) training.
PaLM 2 is also better at AI inference. Ghahramani noted that by bringing together compute, optimal scaling and improved dataset mixtures, as well as improvements to the model architectures, PaLM 2 is more efficient for serving models while performing better overall.
In terms of improved core capabilities for PaLM 2, there are three in particular that Ghahramani called out:
Multilinguality: The new model has been trained on more than 100 languages, which enables PaLM 2 to excel at multilingual tasks. Going a step further, Ghahramani said that it can understand nuanced phrases in different languages, including idiomatic and figurative uses of words rather than just their literal meanings.
Reasoning: PaLM 2 provides stronger logic, common sense reasoning, and mathematics than previous models. “We’ve trained on a massive amount of math and science texts, including scientific papers and mathematical expressions,” Ghahramani said.
Coding: PaLM 2 also understands, generates and debugs code and was pretrained on more than 20 programming languages. Alongside popular programming languages like Python and JavaScript, PaLM 2 can also handle older languages like Fortran.
“If you’re looking for help to fix a piece of code, PaLM 2 can not only fix the code, but also provide the documentation you need in any language,” Ghahramani said. “So this helps programmers around the world learn to code better and also to collaborate.”
PaLM 2 is one model powering 25 applications from Google, including Bard
Ghahramani said that PaLM 2 can adapt to a wide range of tasks, and at Google I/O the company has detailed how it supports 25 products that impact just about every aspect of the user experience.
Building off the general-purpose PaLM 2, Google has also developed Med-PaLM 2, a model for the medical profession. For security use cases, Google has trained Sec-PaLM. Google’s ChatGPT competitor, Bard, will now also benefit from PaLM 2’s power, providing an intuitive prompt-based user interface that anyone can use, regardless of their technical ability. Google’s Workspace suite of productivity applications will also get an intelligence boost, thanks to PaLM 2.
“PaLM 2 excels when you fine-tune it on domain-specific data,” Ghahramani said. “So think of PaLM 2 as a general model that can be fine-tuned to achieve particular tasks.”
Low-code automation and integration platform Tray.io today announced the launch of Merlin AI, a natural language automation feature on its platform. With Merlin AI, large language models (LLMs) can be transformed into complete business processes without exposing customer data to LLMs or mandating LLM training.
Merlin AI empowers employees and developers to construct, refine and enhance workflows without requiring IT or engineering participation, reducing integration time from weeks or months to minutes. The Tray.io platform combines flexible, scalable automation, support for sophisticated business logic, and built-in generative AI capabilities to generate automated workflows.
“Merlin AI leverages OpenAI models and works seamlessly with Tray.io’s connector, workflow and API technologies, as well as other platform capabilities, to automatically translate natural language inputs — prompts or requests written in plain English — into sophisticated workflows,” Rich Waldron, cofounder and CEO at Tray.io, told VentureBeat. “Anyone can use Merlin to develop fully baked workflows to execute day-to-day tasks or retrieve information for specific business questions. It completely removes the learning curve for building automated workflows.”
The Tray.io platform’s generative AI capabilities, together with data transformation, authentication mechanisms, and backing for advanced business logic, allow users to construct comprehensive integrations with natural language processing (NLP). In addition, Merlin AI can automate intricate tasks, including aggregating or transferring data across systems, constructing automated workflows and addressing inquiries.
The company claims that the Tray platform is the first iPaaS (integration-platform-as-a-service) solution to provide generative AI capabilities accessible to all users.
“No other iPaaS on the market has native generative AI capabilities that anyone can use to securely automate complex business processes,” Alistair Russell, co-founder and CTO at Tray.io, told VentureBeat. “Unlike other applications that interface with LLMs, the operational capabilities of Merlin and the underlying Tray platform are self-contained, meaning Merlin only needs to fetch small pieces of information from the LLM on an as-needed basis during the integration building process. As a result, customer data is never exposed or sent to the LLM.”
The company said that Merlin employs a blend of OpenAI models, including GPT-3.5, GPT-4 and Whisper, to handle distinct components of the natural language automation flow.
“Each of the models provides varying levels of capabilities, speed, and fine-tuning, which Merlin selects to ensure the best user experience,” said Russell.
Streamlining complex workflows through generative AI
The company believes that Merlin AI’s release marks the beginning of a new age in automation, given that it eliminates the need for IT and engineering participation. This not only frees those teams to focus on other priorities, but also increases the pace of innovation.
“Merlin … increases the pace of innovation because your line-of-business teams are no longer relying on scarce technical resources,” Russell told VentureBeat. “What makes Merlin so valuable compared to LLMs alone is that it can act on the query outputs. Merlin is giving the LLM ‘brain’ a Tray ‘body,’ which can take action on the query and build the integration required to complete the business process. It does this without passing customer data back to the LLM and requires no further training to execute complex business tasks.”
Furthermore, the system operates throughout the customer’s entire software stack.
“This is very different to most GPT-related chatbot announcements that, at best, will only be able to take pre-defined actions within its application,” said Russell.
Merlin AI empowers users to automate intricate workflows in two ways. First, through conversation, Merlin can construct and refine sophisticated automations spanning multiple systems. For instance, given a natural language input, such as a request to add a new data enrichment source to a lead lifecycle management process, Merlin can identify the appropriate connector from the Tray connector library, trigger the required authentications, execute the query, and ensure that the results are properly incorporated into the rest of the process.
Second, Merlin AI can perform assignments on a user’s behalf without directly interacting with the workflows, establishing an entirely new interface for resolving business problems.
“In this case, a CMO seeking to optimize social media investments can directly query Merlin to identify the top lead sources for the largest ‘closed won’ accounts by revenue and cross-reference the results with LinkedIn followers. The CMO never needs to see the complexity of the integrations that Merlin is building on the Tray platform; they simply get the data they need to make a more informed business decision,” said Tray.io’s Waldron.
The company clarified that it does not send customer data through a third-party LLM. Instead, the LLMs are used to create workflows that can be executed within the Tray.io platform entirely, ensuring that no data is exposed or shared with the LLM.
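The pattern can be sketched as follows: the LLM is asked only to produce a workflow *definition*, and the platform then executes that definition locally against the data. The tiny spec format and interpreter below are illustrative assumptions, not Tray.io's actual representation:

```python
# Hedged sketch of "LLM builds the workflow, platform runs it": the model
# returns a spec (no customer records in the prompt), and execution
# happens locally. Step names and the interpreter are assumptions.

def llm_build_workflow(request: str) -> list[dict]:
    """Stand-in for the LLM call: returns a workflow spec, not data."""
    return [
        {"op": "filter", "field": "status", "equals": "closed_won"},
        {"op": "sum", "field": "revenue"},
    ]

def execute(workflow: list[dict], records: list[dict]):
    """Run the spec inside the platform; records never reach the LLM."""
    rows = records
    for step in workflow:
        if step["op"] == "filter":
            rows = [r for r in rows if r[step["field"]] == step["equals"]]
        elif step["op"] == "sum":
            return sum(r[step["field"]] for r in rows)
    return rows

crm = [
    {"status": "closed_won", "revenue": 120},
    {"status": "open", "revenue": 300},
    {"status": "closed_won", "revenue": 80},
]
spec = llm_build_workflow("Total revenue of closed-won accounts")
print(execute(spec, crm))  # 200
```

Because only the request text is sent out and only a spec comes back, the customer records in `crm` stay entirely inside the executing platform.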
Leveraging OpenAI models to enhance iPaaS
The company explained that Merlin enhances the OpenAI LLM by acting on its output, and claims that while LLMs can provide intelligent responses to questions rapidly, they do not take any action once they have responded.
“The burden is immediately returned to the person who asked the question, and it is their responsibility to take often complex actions on the response to achieve the desired outcome. Merlin can take that response and carry out the action on the user’s behalf,” said Waldron. “With the release of Merlin, Tray.io is the first iPaaS offering with generative AI that anyone, regardless of their technical expertise, can use to automate complex workflows.”
The Tray platform incorporates contemporary technology standards, enabling LLMs to generate code without requiring a comprehensive understanding of the platform or any Tray connectors. This crucial capability allows any user, including the integration builder, to leverage the potential of AI. Moreover, because Merlin is a core part of the Tray platform, a product not designed this way would struggle to replicate the experience.
“Tray.io provides a suite of powerful automation infrastructure accessible via APIs and low-code, coupled with the fact that Merlin AI is core to the Tray platform. Merlin can tap into Tray’s wide array of automation services via APIs to carry out actions using natural language on the user’s behalf. This opens up the infinite possibilities of automation to the entire workforce,” Russell explained. “By asking Merlin AI — like you would ask a colleague — you can obtain answers at the point of decision and automate critical business tasks.”
Tray.io believes that organizations are struggling with siloed information and multiple niche SaaS apps in each department, making automation and integration more critical than ever. Traditionally, organizations have turned to modern, elastic iPaaS vendors to “glue” their systems together and ensure that their data runs smoothly with the rest of the organization.
“Embracing digital transformation has been critical in the ‘real-time’ cloud-based reality we find ourselves in today. However, this movement comes with the consequence of application and data overload,” added Waldron. “Merlin AI takes this to an entirely new level because, for the first time, these issues can be solved faster, more accurately, and by a wider variety of people within the business through a natural language interface.”
Merlin enables users to input their requests and parameters and subsequently constructs a workflow with the necessary business logic. Once completed, the low-code visual builder will display all the required steps for review and modifications.
Tackling IT bottlenecks through AI
Waldron said that the scarcity of resources to combat the consequences of the mass adoption of cloud-based internal tooling is the biggest bottleneck in delivering critical digital initiatives.
“Merlin AI is the knight in shining armor for IT and developer teams — it provides AI support on their projects, enabling them to work faster, with greater accuracy than ever before,” he said. “In addition, a core element of the Tray platform is governance and security — which assures IT that it is safe for them to allow less technical users to leverage automation because there are established rules for application and data access that govern its use.”
According to Waldron, the new release unlocks the full potential of automation and makes building automated workflows more accessible to all employees by tapping into the power of AI through a natural language interface.
He believes that with Merlin AI, even individuals without technical expertise can build complete integrations solely using NLP, radically simplifying the automation-building process. Today, complex integrations that span multiple applications are often needed to fulfill even routine requests for information or business-process changes.
“What seems simple to the requester, such as adding a new step in a company’s order-to-cash process, requires someone else, likely a developer who has a completely different set of priorities, to develop complex business logic and build and test the integration required to deliver that ‘simple’ business process change,” Waldron said. “With Tray Merlin AI, the requester can ask Merlin to do tasks in natural language, just as they would have asked the developer.”