UN AI Adviser Warns About the Destructive Use of Deepfakes

Neil Sahota, an artificial intelligence (AI) expert and adviser to the United Nations, recently raised concerns about the increasing threat posed by highly realistic deepfakes. In an interview with CTVNews.ca on Friday, Sahota highlighted the risks associated with these manipulated media creations.

Sahota described deepfakes as digital replicas or mirror images of real-world individuals, often created without their consent and for malicious purposes, primarily aimed at deceiving or tricking others. The emergence of deepfakes has resulted in various instances of fake content going viral, encompassing a wide range of topics, including political simulations and celebrity endorsements.

While famous individuals have often been the primary targets, Sahota emphasized that ordinary civilians are also vulnerable to this form of manipulation. He noted that deepfakes initially gained traction through the distribution of revenge porn, highlighting the importance of remaining vigilant.

To identify manipulated media, Sahota advised individuals to pay attention to subtle inconsistencies in video and audio content. Signs to watch out for include unusual body language, odd shadowing effects, and discrepancies in the spoken words. By maintaining a vigilant eye and questioning the authenticity of media, individuals can become better equipped to identify potential deepfake content.

As deepfake technology continues to advance, Sahota’s warnings serve as a reminder of the critical need to exercise caution and skepticism when consuming digital media, as well as the urgent need for proactive measures to address the risks associated with deepfakes.

Not enough

Sahota also argued that policymakers are currently not doing enough to educate the public about the many dangers of deepfakes and how to spot them. He recommended implementing a content verification system that would use digital tokens to authenticate media and identify deepfakes.

“Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you’re having any kind of non-in-person engagement, you have a way to verify,” he told CTVNews.ca.

“That’s kind of what’s starting to happen at the UN-level. Like, how do we authenticate conversations, authenticate video?”

Yellow AI Launches YellowG, A Generative AI Platform

Yellow AI, a leading conversational AI company, has unveiled YellowG, a state-of-the-art conversational artificial intelligence (AI) platform designed specifically for automation. Leveraging generative AI and enterprise GPT capabilities, Yellow AI aims to empower enterprises across various industries by offering tailored solutions that streamline complex workflows, enhance existing processes, and foster innovation.

The platform features an advanced multi-large language model (LLM) architecture, continuously trained on billions of conversations. Yellow AI asserts that this architecture ensures exceptional scalability, speed, and accuracy, enabling businesses to unleash the full potential of the platform.

According to Yellow AI, integrating AI-driven chatbots built on platforms like YellowG into customer and employee experiences across multiple channels can significantly elevate levels of automation for businesses. The company claims that such integration not only leads to substantial operational cost reductions but also enables up to 90% automation within the first 30 days.

“Our groundbreaking platform eliminates the need for setup time, providing instant usage as soon as a bot is built,” said Raghu Ravinutala, CEO and co-founder of Yellow AI, in an interview with VentureBeat. “With robust, enterprise-level security, it ensures maximum safety through a combination of centralized global and proprietary LLMs. Our real-time generative AI approach is specifically designed to drive enterprise conversations. This means YellowG can dynamically generate workflows and effortlessly handle complex scenarios.”

YellowG’s zero setup time, coupled with its robust security measures, positions it as a game-changer in the conversational AI landscape. By harnessing the power of generative AI, the platform empowers enterprises to automate processes, enhance customer interactions, and navigate intricate business scenarios with ease. Yellow AI is dedicated to delivering cutting-edge conversational AI solutions that drive efficiency, productivity, and growth for businesses of all sizes.

AI with human touch

The new tool empowers users to generate runtime workflows and make real-time decisions using dynamic AI agents, said Ravinutala. Moreover, it adds a unique human touch to AI conversations by demonstrating near-human empathy while maintaining an impressively low hallucination rate close to zero.

In addition to its multi-LLM architecture, YellowG utilizes enterprise data and industry-specific knowledge to navigate complex scenarios. The chatbot’s capacity to comprehend the context of conversations enables it to provide personalized responses that are finely tailored to specific use cases.

“The YellowG workflow generator is powered by the ‘dynamic AI agent,’ our orchestrator engine that harnesses the power of multiple LLMs,” said Ravinutala. “It utilizes knowledge from our proprietary platform data, the anonymized historical record of customer interactions and enterprise data.”

Yellow AI claims a response intent accuracy rate of more than 97%. In addition, the company asserts its capability to learn from extensive volumes of data, enabling it to generate responses to even the most intricate queries that traditional conversational AI platforms may find challenging.

Automating business workflows through generative AI

When a customer’s message enters the conversational interface, YellowG promptly analyzes it to decipher the request and develop a strategic plan for fulfilling their goal. Subsequently, generative AI interacts with the enterprise system to retrieve all relevant data necessary for processing the user’s request.

Leveraging this data, the platform utilizes an LLM orchestration layer to formulate and fine-tune the AI bot’s response. This ensures accurate alignment between the generated response, the obtained information and the customer’s initial request.

YellowG implements responsible AI practices during the post-processing stage by rigorously examining security, compliance and privacy measures. After that review, it delivers responses exhibiting human-like characteristics, showcasing exceptional accuracy and virtually no hallucinations.
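The request flow described above can be sketched in a few lines. This is purely an illustrative outline of the stages the article names (analyze, retrieve, orchestrate, post-process); every function and name below is invented for the example and is not part of any actual Yellow AI API.

```python
# Hypothetical sketch of the described pipeline; all names are illustrative,
# not Yellow AI's actual implementation.

def analyze_intent(message: str) -> str:
    # Toy intent detection; a real system would use an LLM here.
    return "refund" if "refund" in message.lower() else "general"

def redact_pii(text: str) -> str:
    # Placeholder for the security/compliance post-processing stage.
    return text

def handle_message(message: str, enterprise_system: dict) -> str:
    # 1. Analyze the request to infer the user's goal.
    intent = analyze_intent(message)

    # 2. Retrieve the enterprise data needed to fulfill it.
    context = enterprise_system.get(intent, {})

    # 3. An orchestration layer formulates a draft response from the data.
    draft = f"Regarding '{intent}': {context.get('answer', 'no data found')}"

    # 4. Post-process: apply security, compliance and privacy checks.
    return redact_pii(draft)
```

In a real deployment, steps 1 and 3 would each call out to one or more LLMs, which is where the multi-LLM orchestration the company describes would live.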

“All the while, it remains focused on achieving the business objectives,” said Ravinutala. “Our multi-LLM architecture combines centralized LLMs’ intelligence with the precision and security of proprietary LLMs.”

Real-time generative AI

By integrating advanced AI and natural language processing (NLP) technologies, the platform provides customers with a human-like experience. The company said that the platform generates responses that are not pre-scripted by utilizing real-time generative AI, resulting in a more natural and seamless conversation flow.

“Our platform has been designed to detect and interpret the emotional tone and sentiment expressed in the customer’s message,” Ravinutala explained. “It can recognize various emotions such as frustration, confusion, happiness or the need for assistance, allowing it to adapt responses and provide the emotional support that one would typically expect from a human agent. This empathetic interaction establishes a deeper level of understanding, assuring customers that their sentiments are truly acknowledged.”

A prominent feature of YellowG is its capability to adapt to the customer’s unique communication style and requirements. For example, whether a customer prefers brief and concise answers or requires more comprehensive explanations, YellowG can adjust its responses accordingly.

The platform’s AI agent also leverages real-time analysis of the user’s responses to guide the conversation, resulting in highly personalized and tailored interaction.

Zero setup for instant LLM incorporation

YellowG’s zero setup feature empowers it to ingest and analyze its customers’ documents and websites. This comprehensive integration of knowledge enables the platform to deliver instant answers to any inquiries that fall within the scope of these resources.

“For customers with extensive knowledge repositories, this capability alone allows us to deliver a high level of automation from day one,” said Ravinutala.

Furthermore, the platform’s no-code solutions facilitate seamless connectivity with customer APIs, enabling the implementation of static workflows that unlock a new realm of automation. However, the company said it’s important to note that static workflows have limitations when handling fluid conversations, often imposing rigid conversational flows on users.

“To overcome this limitation, we have implemented dynamic runtime workflows that adapt based on user input,” Ravinutala added. “This approach empowers us to automate a significantly large number of customer queries.”

Ravinutala said the company has successfully developed proprietary data-trained LLMs in-house for various domains and use cases, including document Q&A, contextual history and summarization.

Yellow AI’s primary focus is tackling complex end-user-facing scenarios within customer support, marketing and employee experience where real-time decision-making is crucial. Ultimately, the goal is to leverage LLMs during runtime to redefine and enhance end-user experiences.

“One such use case that we solved using an in-house model is summarization for situations that demand fast response times,” he said. “We have also created a proprietary context model that empowers our dynamic AI agents to understand the conversation’s context more accurately.”

Safeguarding customer data through security compliance

According to the company, YellowG is engineered to be genuinely multi-cloud and multi-region, adhering to the most stringent security standards and compliance requirements. In addition, it implements rigorous measures to conceal Personally Identifiable Information (PII) from third-party LLMs, effectively safeguarding customer data.

Moreover, the platform meets the criteria set forth by SOC 2 Type 2 certification. This certification attests that YellowG’s systems and processes are purposefully designed to protect customer data while maintaining exemplary levels of security and privacy.

“To enhance data access control, Yellow AI employs a role-based access control (RBAC) system, giving customers the ultimate authority to define access privileges,” said Ravinutala. “Every message exchanged through our platform is encrypted at rest using AES 256 encryption and in transit using TLS 1.2 and above.”
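The RBAC model Ravinutala mentions reduces to a simple mapping from roles to permitted actions. The sketch below is a minimal illustration of that idea, assuming an invented role set; it is not Yellow AI's implementation.

```python
# Minimal role-based access control (RBAC) sketch. The roles and actions
# here are hypothetical examples, not Yellow AI's actual scheme.

ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "configure"},
    "agent":  {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In practice the customer, not the vendor, would define the role-to-permission mapping, which is the "ultimate authority" the quote refers to.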

Empowering an AI-First Future: Meta Unveils New AI Data Centers and Supercomputer

Meta, formerly known as Facebook, has been at the forefront of artificial intelligence (AI) for over a decade, utilizing it to power their range of products and services, including News Feed, Facebook Ads, Messenger, and virtual reality. With the increasing demand for more advanced and scalable AI solutions, Meta recognizes the need for innovative and efficient AI infrastructure.

At the recent AI Infra @ Scale event, a virtual conference organized by Meta’s engineering and infrastructure teams, the company made several announcements regarding new hardware and software projects aimed at supporting the next generation of AI applications. The event featured Meta speakers who shared their valuable insights and experiences in building and deploying large-scale AI systems.

One significant announcement was the introduction of a new AI data center design optimized for both AI training and inference, the primary stages of developing and running AI models. These data centers will leverage Meta’s own silicon called the Meta training and inference accelerator (MTIA), a chip specifically designed to accelerate AI workloads across diverse domains, including computer vision, natural language processing, and recommendation systems.

Meta also unveiled the Research Supercluster (RSC), an AI supercomputer that integrates a staggering 16,000 GPUs. This supercomputer has been instrumental in training large language models (LLMs), such as the LLaMA project, which Meta had previously announced in February.

“We have been tirelessly building advanced AI infrastructure for years, and this ongoing work represents our commitment to enabling further advancements and more effective utilization of this technology across all aspects of our operations,” stated Meta CEO Mark Zuckerberg.

Meta’s dedication to advancing AI infrastructure demonstrates their long-term vision for utilizing cutting-edge technology and enhancing the application of AI in their products and services. As the demand for AI continues to evolve, Meta remains at the forefront, driving innovation and pushing the boundaries of what is possible in the field of artificial intelligence.

Building AI infrastructure is table stakes in 2023

Meta is far from being the only hyperscaler or large IT vendor that is thinking about purpose-built AI infrastructure. In November, Microsoft and Nvidia announced a partnership for an AI supercomputer in the cloud. The system benefits (not surprisingly) from Nvidia GPUs, connected with Nvidia’s Quantum-2 InfiniBand networking technology.

A few months later in February, IBM outlined details of its AI supercomputer, codenamed Vela. IBM’s system is using x86 silicon, alongside Nvidia GPUs and ethernet-based networking. Each node in the Vela system is packed with eight 80GB A100 GPUs. IBM’s goal is to build out new foundation models that can help serve enterprise AI needs.

Not to be outdone, Google has also jumped into the AI supercomputer race with an announcement on May 10. The Google system is using Nvidia GPUs along with custom designed infrastructure processing units (IPUs) to enable rapid data flow. 

What Meta’s new AI inference accelerator brings to the table

Meta is now also jumping into the custom silicon space with its MTIA chip. Custom-built AI inference chips are not new, either: Google has been building its tensor processing unit (TPU) for several years, and Amazon has had its AWS Inferentia chips since 2018.

For Meta, the need for AI inference spans multiple aspects of its operations for its social media sites, including news feeds, ranking, content understanding and recommendations. In a video outlining the MTIA silicon, Meta research scientist for infrastructure Amin Firoozshahian commented that traditional CPUs are not designed to handle the inference demands from the applications that Meta runs. That’s why the company decided to build its own custom silicon.

“MTIA is a chip that is optimized for the workloads we care about and tailored specifically for those needs,” Firoozshahian said.

Meta is also a big user of the open source PyTorch machine learning (ML) framework, which it originally created. Since 2022, PyTorch has been under the governance of the Linux Foundation’s PyTorch Foundation effort. Part of the goal with MTIA is to have highly optimized silicon for running PyTorch workloads at Meta’s large scale.

The MTIA silicon is a 7nm (nanometer) process design and can provide up to 102.4 TOPS (trillion operations per second). The MTIA is part of a highly integrated approach within Meta to optimize AI operations, including networking, data center optimization and power utilization.

Zoom Makes a Big Bet on AI with Investment in Anthropic

In a significant move towards harnessing the power of generative AI, Zoom has announced its collaboration with AI startup Anthropic. Building on their existing partnership with OpenAI, the enterprise communication company revealed its plans to integrate Anthropic’s innovative Claude AI assistant into Zoom’s productivity platform. To solidify this collaboration, Zoom’s global investment arm has also made an undisclosed financial investment in Anthropic, which is supported by Google.

This strategic partnership aligns with Zoom’s federated approach to AI and comes at a time when competitors like Microsoft are actively incorporating AI-driven capabilities into Teams, Google is integrating AI into Workspace, and Salesforce is focusing on SlackGPT.

However, Zoom has outlined its initial objective of incorporating Claude within its omnichannel contact center offerings before expanding its integration to other segments of the platform. The specific timeline and execution details for the broader integration have not been disclosed at this time.

How exactly will Claude help in Zoom Contact Center?

Zoom’s Contact Center is a video-first support hub that improves customer support for enterprises. It includes multiple products, including Zoom Virtual Agent and Zoom Workforce Management.

With the Anthropic partnership, Zoom plans to integrate Claude across the entire Contact Center portfolio to build self-service features that not only improve end-user outcomes but also enable superior agent experiences. 

For instance, it will be able to understand customers’ intent from their inputs and guide them to the best solution, and provide actionable insights that managers can use to coach agents.

“Anthropic’s Constitutional AI model is primed to provide safe and responsible integrations for our next-generation innovations, beginning with the Zoom Contact Center portfolio,” said Smita Hashim, chief product officer at Zoom. “With Claude guiding agents toward trustworthy resolutions and powering self-service for end users, companies will be able to take customer relationships to another level.”

Moving ahead, Zoom Contact Center will also use Claude to provide the right resources to agents, enabling them to deliver improved customer service, a company spokesperson told VentureBeat. They added that Claude’s capabilities will be expanded across the Zoom platform — which includes Team Chat, Meetings, Phone and Whiteboard — but did not share specific details.

The federated approach

The partnership with Anthropic is Zoom’s latest move in its federated approach to AI, where it is using its own proprietary AI models along with those from leading AI companies and select customers’ own models.

“With this flexibility to incorporate multiple types of models, our goal is to provide the most value for our customers’ diverse needs. These models are also customizable, so they can be tuned to a given company’s vocabulary and scenarios for better performance,” Hashim said in a blog post.

Zoom has already been working with OpenAI for IQ, its conversational intelligence product. In fact, back in March, Zoom announced multiple AI-powered capabilities for the product with OpenAI, including the ability to generate draft messages and emails and provide summaries for chat threads. The capabilities started rolling out for select customers in April.

OpenAI CEO Warns Senate About AI Interfering with Elections

OpenAI CEO Sam Altman expressed his concerns regarding the potential interference of artificial intelligence (AI) in elections during his testimony before a Senate panel on Tuesday. Altman called for rules and guidelines governing disclosure from companies providing AI models, underscoring his apprehension about the issue.

This marked Altman’s first appearance before Congress, where he advocated for stringent licensing and testing requirements for the development of AI models in the United States. When asked about the specific AI models that should require licensing, Altman suggested that any model capable of persuading or manipulating people’s beliefs should meet a high threshold for regulation.

Altman further asserted that companies should have the freedom to choose whether their data is used for AI training, a topic already under discussion in Congress. He said that material available on the public web should generally be considered fair game for AI training. On OpenAI’s business model, the executive did not rule out advertising, but leaned toward a subscription-based approach.

The OpenAI CEO’s testimony highlighted the growing concerns surrounding the potential misuse of AI in electoral processes, emphasizing the need for proactive measures to address these challenges and ensure the integrity of democratic systems.

Top technology CEOs convened

Altman’s testimony was one of many before the Senate, as the White House invited top technology CEOs to address AI concerns with U.S. lawmakers seeking to further the technology’s advantages while limiting its misuse.

“There’s no way to put this genie in the bottle. Globally, this is exploding,” said Senator Cory Booker, a lawmaker concerned with how best to regulate AI.

Altman’s warnings about AI and elections come at a time when companies large and small have been competing to bring AI to market, with billions of dollars at play. But experts everywhere have warned that the technology may worsen societal harms such as prejudice and misinformation.

Some have even gone so far as to speculate AI could end humanity itself.

The White House is taking all these concerns seriously and convening with all relevant authorities and executives to try to ensure that the worst-case scenarios do not come to pass.

Claude AI Achieves Rapid Comprehension of Entire Books in Seconds

Claude AI, the ChatGPT rival from Anthropic, can now comprehend a book containing about 75,000 words in a matter of seconds. This is a huge leap forward for chatbots as businesses seek technology that can process large amounts of information quickly.

Since the launch of ChatGPT, we have also seen companies such as Bloomberg and JP Morgan Chase look to leverage the power of AI to make better sense of the finance world. While this process has taken them at least a few months, Anthropic, with its Claude AI, can reduce the time taken to just a few seconds.

How Anthropic supercharged its AI

In computing terms, a token is a fragment of a word used to simplify data processing. The number of tokens that a large language model (LLM) can process at a given time is called its context window, which functions like short-term memory.

An average human can read 100,000 tokens in about five hours. However, that is only the time taken to read them; more time might be needed to remember and analyze the information.

OpenAI’s GPT-4 LLM has a context window of 4,096 tokens (~3,000 words) when used with ChatGPT, but this can increase to 32,768 tokens when using the GPT-4 API. Claude’s context window was about 9,000 tokens, but the company has now increased it to 100,000 tokens (~75,000 words).
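The figures above imply a rough rule of thumb of about 0.75 words per token (4,096 tokens ≈ 3,000 words; 100,000 tokens ≈ 75,000 words). A back-of-the-envelope conversion, using that assumed ratio only, looks like this:

```python
# Rough token <-> word conversion using the ~0.75 words-per-token ratio
# implied by the article's figures. Real tokenizers vary by model and text.

WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate word count for a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

def words_to_tokens(words: int) -> int:
    """Approximate token count needed to hold a given word count."""
    return round(words / WORDS_PER_TOKEN)
```

By this estimate, a 100,000-token window comfortably holds a 75,000-word book in a single prompt, which is what makes the Great Gatsby demonstration below possible.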

To demonstrate how this improves the AI’s performance, Anthropic loaded the entire text of The Great Gatsby (72,000 tokens) with one line modified from the original. The AI was tasked with spotting the difference, which it did in just 22 seconds, the company claimed in a press release.

This might not sound very impressive to those who have used word processors to find differences between two texts. Where AI trumps word processors is the ability to answer questions about the text and analyze it in depth.
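For comparison, the mechanical version of Anthropic's test is trivial with Python's standard library: `difflib` can spot a single changed line between two texts instantly, though unlike an LLM it cannot answer questions about what the change means.

```python
# Spotting a one-line modification between two texts with difflib.
# The snippets here are short stand-ins for the full novel.
import difflib

original = [
    "In my younger and more vulnerable years",
    "my father gave me some advice",
]
modified = [
    "In my younger and more vulnerable years",
    "my father gave me some directions",
]

# Keep only the changed lines ('-' removed, '+' added), skipping the
# '---'/'+++' file headers that unified_diff emits.
changes = [
    line
    for line in difflib.unified_diff(original, modified, lineterm="")
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
```

The diff tool finds *where* the texts differ; Claude's advantage is being able to discuss *why* the difference matters in context.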

Anthropic is targeting businesses that need to process large numbers of documents, letting them use its AI instead and ask Claude specific questions about the material. Like any chatbot, Claude can be prompted to look for specific information and return results, as a human assistant would.

Anthropic also used the AI to process the transcript of a six-hour podcast recording, summarizing it and answering questions about it. The company is confident the same can be applied to financial reports and legal documents, as well as to improving code or answering technical questions.

Claude AI with the improved context window is available via the Claude API, for which there is a waitlist.

Europe To Make Strict AI Rules Soon

The European Parliament has made a noteworthy stride in formulating a regulatory framework to govern the use of artificial intelligence (AI) within Europe. The draft AI Act has garnered favorable votes from key committees in the Parliament, delineating restrictions on AI deployment while still fostering innovation. This response comes in light of the rapid advancement of ChatGPT and similar generative AI systems, which have demonstrated the benefits and opportunities afforded by advanced technology, but have also raised concerns about the potential dangers stemming from the dissemination of fabricated content.

The inception of the AI Act dates back to 2021, with the objective of regulating any product or service that employs an AI system. Classifying AI into four tiers based on risk, the Act imposes more stringent rules and demands greater transparency and accuracy for higher-risk applications. The aim is to ensure responsible development and utilization of AI, steering clear of a society controlled by AI.

An integral facet of the AI Act is its commitment to striking a balance between safeguarding fundamental rights and providing legal certainty for businesses while fostering innovation in Europe. Policymakers acknowledge the dual potential of AI technology for positive and negative purposes, and they perceive the associated risks as too significant to ignore. Consequently, the AI Act aims to prevent the exploitation of AI to create a surveillance state or perpetuate discrimination against specific groups.

The AI Act will impose a ban on the use of remote facial recognition technology, with limited exceptions for countering and preventing specific terrorist threats. This significant measure arises from concerns surrounding the potential misuse of facial recognition technology in building a surveillance society. Additionally, the Act will prohibit the use of policing tools aimed at pre-determining crime occurrences and perpetrators, as lawmakers believe such tools are inherently discriminatory and violate human rights.

Another noteworthy inclusion in the AI Act is the classification of generative AI systems, including ChatGPT, as high-risk systems. Consequently, these systems will be subject to the same level of scrutiny and regulation as other high-risk applications like self-driving cars and medical devices. The decision to include generative AI within the purview of the AI Act reflects policymakers’ apprehensions regarding the potential misuse of this technology in generating harmful fabricated content.

The AI Act represents a significant stride forward in the regulation of AI within Europe. However, it is important to note that the parliamentary committees have reached an agreement that represents merely the initial step in a lengthy and bureaucratic process, which could span several years before the European Union’s 27 member states adopt it as law. Moreover, implementing the AI Act is expected to face significant challenges, particularly in terms of enforcing regulations and overseeing AI systems.

The Game-Changing Role of Generative AI in Revolutionizing Enterprise Search

In today’s increasingly complex and geographically dispersed organizations, encompassing remote teams and diverse knowledge systems, the challenge of tracking down crucial data across the entire enterprise knowledge ecosystem has become a formidable task. Consequently, employees are experiencing the negative consequences of this knowledge access challenge, leading to reduced productivity and waning engagement within the workforce.

During the recent VB Spotlight event titled “The Impact of Generative AI on Enterprise Search: A Game-Changer for Businesses,” Phu Nguyen, the head of digital workplace at Pure Storage, emphasized the detrimental effects of this issue. He highlighted that employees are feeling frustrated due to the inability to locate the information they need, ultimately resulting in diminished engagement levels and decreased productivity.

To shed light on potential solutions, the event brought together industry experts including Jean-Claude Monney, a digital workplace, technology, and knowledge management advisor, and Eddie Zhou, the founding engineer specializing in intelligence at Glean. The panelists discussed the emergence of a revolutionary advancement in workplace-specific search tools, which harness the power of generative AI. These innovative tools aim to provide employees with comprehensive access to the knowledge they require, along with its contextual relevance, regardless of their location within the organization.

By leveraging generative AI, organizations can overcome the limitations of traditional search methods and unlock a wealth of information that was previously challenging to navigate. This transformative technology enables employees to swiftly and efficiently access the precise knowledge they need, empowering them to make informed decisions and perform their tasks effectively.

Moreover, the contextual understanding offered by generative AI allows users to grasp the interconnectedness of information across different departments and teams. This comprehensive perspective fosters collaboration, facilitates cross-functional problem-solving, and encourages knowledge sharing within the organization.

The adoption of generative AI-powered search tools marks a significant leap forward in the quest to streamline knowledge access within enterprises. As organizations embrace this game-changing technology, they can alleviate the frustration experienced by employees, enhance productivity, and ultimately drive higher levels of engagement throughout the workforce.

Traditional enterprise search can’t reach all the knowledge in an organization, which is spread out in multiple systems. It can mine structured knowledge, such as the data found in Jira, Confluence, intranets and sales portals, but unstructured knowledge, the information communicated through IM, Teams, Slack, and email, has been uncharted territory, difficult to corral in any helpful contextual way, Nguyen adds.

“The paradigm of knowledge management has changed significantly,” he says. “How do you have a system that can look at both structured and unstructured data and provide you with the answers that you’re ultimately looking for? Not the information that you need, but the answer that you’re looking for.”

Solutions that integrate with multiple systems and utilize generative AI can address these challenges, and help employees find the information they need to perform their jobs effectively, no matter where that knowledge resides.

“Companies are now building searches specifically for the workplace, built for internal searches that work across your internal system,” Nguyen explains. “Most importantly, they’re built on a knowledge graph that returns a search that’s more relevant to your employees. This is all very exciting for us because we think of this as part of our employee information center strategy. Previously it was just an intranet and our support portal, but now we have this workplace search that can connect information across multiple systems inside our organization.”
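The workplace search Nguyen describes boils down to querying every connected system, structured and unstructured alike, and merging the results. The sketch below illustrates that shape only; the source names, documents, and matching logic are all invented for the example.

```python
# Illustrative federated workplace search across structured (wiki) and
# unstructured (chat) sources. Data and source names are hypothetical.

SOURCES = {
    "confluence": [
        {"title": "VPN setup guide", "text": "How to configure the VPN client"},
    ],
    "slack": [
        {"title": "#it-help thread", "text": "VPN fix: restart the client"},
    ],
}

def workplace_search(query: str) -> list[dict]:
    """Return matching documents from every connected system, tagged by source."""
    q = query.lower()
    hits = []
    for source, docs in SOURCES.items():
        for doc in docs:
            if q in doc["text"].lower() or q in doc["title"].lower():
                hits.append({**doc, "source": source})
    return hits
```

A production system would replace the substring match with semantic retrieval over a knowledge graph, which is what lets it return an answer rather than a list of documents.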

How organizations can leverage generative AI

There are three major ways companies can leverage generative AI, and they’re game changers, Monney says. First, he says, are the benefits that an NLP interface brings.

“Time to knowledge is a new business currency,” says Monney. “What we’ve seen with generative AI is this quantum leap in user experience. ChatGPT has democratized ways to talk to a system and get very succinct responses.”

At home, users have grown accustomed to the ease and convenience of natural language interfaces like Alexa and Siri; generative AI brings that user experience to the workplace, giving workers not just an enterprise search tool, but a digital knowledge assistant, he adds. It enables employees to find not just information but precise answers quickly, boosting productivity and efficiency, especially in complex decision-making scenarios. Generative AI also has the potential to go beyond answering individual questions and assist in more complex decision journeys, providing users with synthesized and relevant information without the need for explicit queries.

Generative AI can also automate repetitive tasks and streamline workflows — for example, chatbots powered by generative AI can handle customer service inquiries, make product recommendations, or simply assist with booking appointments. That frees time for more complex tasks and greatly increases productivity.

Lastly, these generative AI solutions can be precisely refined for industry-specific and case-specific use. Companies can add their own corpus of knowledge to the large language models that generative AI uses, to improve relevance and the time to knowledge.

Bringing generative AI into the workplace

“To bring this technology into the workplace is not an easy thing,” Zhou cautions. It requires a knowledge model, which is composed of three pillars. The first is company knowledge and context. An off-the-shelf model or system, without being properly connected to the right knowledge and the right data, will not be functional, correct, or relevant.

“You need to build generative AI into a system that has the company knowledge and context,” he explains. “That allows for this trusted knowledge model to form out of the combination of these things. Search is one such method that can deliver this company knowledge and context, in conjunction with generative AI. But it’s one of several.”
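The idea of search delivering company knowledge and context to a generative model can be sketched in a few lines. This is a minimal illustration of the retrieval step only; the keyword-overlap scoring, document set, and prompt format are assumptions for illustration, not any vendor's actual pipeline:

```python
# Minimal sketch: rank company documents by query-term overlap and
# prepend the best match as context for a generative model.
def retrieve_context(query, documents, top_k=1):
    """Return the top_k documents sharing the most terms with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Combine retrieved company context with the user's question."""
    context = "\n".join(retrieve_context(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents.
docs = [
    "Expense reports are filed through the finance portal by the 5th.",
    "The VPN client must be updated every quarter.",
]
prompt = build_prompt("How do I file an expense report?", docs)
```

In a production system the keyword overlap would be replaced by a knowledge graph or embedding-based retriever, but the shape is the same: grounding the model in company data before it answers.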

The second pillar of the trusted knowledge model is permissioning and data governance, or being aware, as a user interfaces with a product and with a system, of what information they should and should not have access to.

“We speak of knowledge in the company as if it’s free-flowing currency, but the reality is that different users and different employees in a company have access to different pieces of knowledge,” he says. “That’s objective and clear when it comes to documents. You might be part of a group alias which has access to a shared drive, but there are plenty of other things that a given person should not have access to, and in the generative setting it’s incredibly important to get this right.”

The third and final one is referenceability. As the product interface has evolved, users need to build a trust with the system, and be able to verify where the system is pulling information from.

“Without that kind of provenance, it’s hard to build trust, and it can lead to runaway factuality errors and hallucinations,” he says – especially in an enterprise system where each user is accountable for their decisions.

The emerging possibilities of generative AI

Generative AI means moving from questions into decisions, Zhou says, decreasing time to knowledge. Basic enterprise search might turn up a series of documents to read, leaving the user to dig out the information they need. With augmented answer-first enterprise search, the user doesn’t ask those questions individually; instead, they can express the underlying journey, the overall decisions that need to be made, and the LLM agent brings it all together.

“This generative technology, when we pair it with search, and not just single searches, it gives us the ability to say, ‘I’m going on a business trip to X. Tell me everything I need to know,’” he says. “An LLM agent can go and figure out all the information I might need and repeatedly issue different searches, collect that information, synthesize it for me and deliver it to me.”

For more on the ways that generative AI and large language models can transform how knowledge is accessed and used in enterprises, the types of use cases and more, don’t miss this VB Spotlight!

10 Examples of How AI Is Improving Education

Artificial intelligence (AI) is quickly becoming a common tool for use in a wide number of industries, including business, finance, and medicine. Due to the many different types of AI platforms and applications available, the possibilities for their use are endless. For example, AI can be used to detect anomalies in bank transactions to help spot fraudulent activities. AI platforms can also be used as diagnostic tools, to help pharmaceutical companies with drug discovery, and to aid doctors in spotting tumors that might otherwise be missed. And that is just the beginning.

However, the education industry still has some way to go before it has harnessed the full potential of AI. Ideas include using AI to make education more engaging and personalized, improve accessibility, complement individual learning styles, and enhance the learning experience for both the teacher and the student. In addition to improving the learning experience for students, AI could be used to help teachers save time and resources by automating tasks such as checking answer sheets and other administrative tasks.

In this article, we will take a look at ten examples of AI technology that have the potential to revolutionize the education industry and explore some organizations that are using the technology to improve performance in education. 

1. Personalized learning

One theory in pedagogy is that everyone has a different learning style. Some are more visual learners, some are more aural learners, while others are more kinesthetic learners, etc. While this theory has been hotly debated, it is generally agreed that people do tend to learn in different ways – whether that involves different work and study styles, learning at different paces, or finding some subjects and concepts easier than others. Given this, it makes sense to personalize the learning experience, doesn’t it? But, if a school or teacher has to personalize lesson plans for every student, it would be impossible – there is simply not enough time. Enter – personalized learning using AI. 

One of the strengths of AI is that it is capable of analyzing large amounts of data quickly and finding patterns, making it a perfect tool for developing personalized learning. AI can be used to devise individual lessons around a particular subject quickly. AI-based learning systems might also be able to give teachers detailed information about students’ learning styles, abilities, and progress and provide suggestions for how to customize their teaching methods to students’ individual needs. For example, suggesting more advanced work for some students and extra attention for others.

Additionally, AI could be used to predict results more accurately, thereby helping teachers understand whether their lesson planning will meet targets for learning.

AI also helps with planning, scheduling, and producing lessons for students, making the experience entirely unique and hugely rewarding. This could also free up time for teachers, who can then concentrate on high-value tasks, such as working with students.

For example, a number of universities have tested the use of chatbots for repetitive tasks that would normally be done by a professor or faculty member – such as providing answers to questions frequently asked by students. Both Staffordshire University in the U.K. and Georgia Tech have developed chatbots that offer 24/7 assistance to students.

[Image: Duolingo uses adaptive learning to enhance the user learning experience. Credit: Duolingo]

2. Adaptive learning

Adaptive learning, or adaptive teaching, is an educational method in which AI is used to customize resources and learning activities to cater to the unique needs of each learner. This is especially useful in online learning.

This is done via rigorous analysis of a student’s performance data, after which the pace and difficulty of the course material are adjusted by the AI algorithm in order to optimize the learning process.
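The core of this adjustment loop can be sketched as a simple rule: promote the learner after a streak of correct answers, demote after repeated misses. The streak lengths, level bounds, and thresholds below are illustrative assumptions, not taken from any particular product:

```python
# Minimal sketch of an adaptive-difficulty rule driven by recent
# performance data. Thresholds and level range are illustrative.
def next_difficulty(level, recent_results, streak=3):
    """recent_results: list of booleans (correct/incorrect), most recent last."""
    if len(recent_results) >= streak and all(recent_results[-streak:]):
        return min(level + 1, 10)   # promote after a streak of correct answers
    if len(recent_results) >= 2 and not any(recent_results[-2:]):
        return max(level - 1, 1)    # demote after two consecutive misses
    return level

next_difficulty(4, [True, True, True])   # promoted to 5
next_difficulty(4, [False, False])       # demoted to 3
```

Real adaptive-learning systems use far richer models of mastery, but the feedback loop — measure performance, adjust pace and difficulty — is the same.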

This method not only optimizes learning but can also save time and resources by removing unnecessary repetition and focusing on the concepts or areas that a student might be struggling with. The teacher can provide support wherever the student needs it, and the student can learn at a pace they are comfortable with.

Many companies are incorporating adaptive learning to improve the way content is delivered. One popular example is Duolingo, a language-learning app that provides listening, reading, and speaking exercises for learning around 40 different languages. The app uses AI to help ensure that lessons are paced and leveled for each student according to their performance.

3. Automated grading

Grading assignments and exams is one of the most time-consuming tasks in education. With the help of machine learning algorithms, AI tools can evaluate essays, multiple-choice tests, and programming assignments with great accuracy and efficiency, thereby saving teachers a lot of time.

A computer doing these tasks not only saves time but also ensures consistency in scoring, potentially eliminating bias (including unconscious bias) that teachers may have and reducing human error in the correction process. The AI tool can also provide personalized feedback to students and teachers. This can help students improve in problem areas and enables students to take ownership of their learning.

Although automated grading powered by AI has a lot of advantages, bias may exist, even in AI. This is because machine learning algorithms are trained on data, which itself may have underlying biases. Therefore, this is still a field requiring more research to make the technology bias-free.

For example, according to a 2021 article published in OxJournal, China has been using AI auto-grading platforms at increasing scale, with around 1 in 4 schools in the country testing a machine learning auto-grading platform that can also give suggestions on the work submitted.

[Image: ITS can help students learn at their own pace. Credit: English106/Wikimedia Commons]

4. Intelligent tutoring systems

Intelligent tutoring systems (ITS) are computer systems powered by machine learning algorithms that provide personalized and adaptive lesson plans based on every student’s learning needs and pace. Similar to previous AI tools, ITSs analyze student data to understand learning patterns which it then uses to provide customized suggestions, feedback, and exercises suiting the individual needs of each student. 

ITSs are helpful to both students and teachers, as they allow teachers to monitor students’ progress and modify their teaching approach to deliver their lessons effectively. ITSs can help students learn at their own pace while providing support when necessary and challenging them when they are ready to learn more advanced concepts.

A study by the U.S. Department of Education found that existing ITSs can improve student literacy by improving their reading comprehension and writing skills. However, implementation of the systems in a classroom remains a challenge. To overcome this, natural language processing techniques have been suggested for use in scoring student responses.

Despite the challenges faced by these systems, students have had some positive responses to the use of ITSs. Another study found that students find ITSs easy to use and learn, although not necessarily fun.

5. Smart content creation

Creating lesson plans is one of the greatest challenges for a teacher, as each student has unique requirements based on the way they learn and understand concepts. The term “smart content creation” describes the use of AI to automate and enhance the generation of educational content. The AI platforms can provide detailed insight by analyzing student data to create personalized and engaging educational material. 

This is then used to create customized environments depending on various learning outcomes. The students can then choose the lesson plan that aligns with their requirements. AI can help generate interactive quizzes, simulations, and experiments via chatbots and augmented or virtual reality, which can then be used in the customized environment to enhance the learning experience.

The biggest and most successful demonstration of this is Coursera. It uses AI to curate multiple educational and professional courses that can help the learner. Teachers can also suggest appropriate courses based on a student’s learning performance, pace, and individual requirements.

[Image: Learning analytics using AI makes sifting through large amounts of student data easy. Credit: Giulia Forsythe/Wikimedia Commons]

6. Learning analytics

Combing through large amounts of student data is a tedious task, but it can provide valuable insights into a student’s learning and performance. AI-powered analytics makes this challenging and time-consuming work faster and easier by automating the analysis of large volumes of student data.

Teachers can use the data to track student performance and engagement as well as to make timely interventions and provide additional support to students who require it. Similarly, students can also use it to track their performance and learning and use it to ask for additional help if they need it. The University of Michigan has a dashboard called My Learning Analytics that allows students to visualize and track their grade distribution, assignment planning, and resources. 

However, there are also potential issues with the implementation of learning analytics in the education sector. A study published in 2022 highlights ethical and privacy issues, data collection, and data analysis as potentially challenging implementation problems for learning analytics. While the latter two concerns can themselves be eased with the use of AI, there are still significant ethical concerns that will have to be dealt with.

7. Virtual assistants

Many administrative tasks, such as lesson planning and organizing schedules, can be automated thanks to the power of AI. Virtual assistants take on laborious, repetitive activities, freeing up teachers’ valuable time to focus on essential duties like giving lectures and interacting with students.

Additionally, virtual assistants can provide customized feedback to students, monitor their progress, and provide additional resources based on a student’s individual needs. Using AI-powered virtual assistants can help teachers streamline administrative work and focus on making the learning experience engaging for students.

A study in SpringerOpen even found a correlation between students who used virtual assistants, such as chatbots, and their academic performance. They found that students who interacted with chatbots outperformed those who interacted with the course teacher in terms of academic performance. The study was conducted on 68 undergraduate students in Ghana and made a positive case for the use of AI tools, such as virtual assistants, in the education sector.

[Image: NLP is a technique to make computer systems understand human language. Credit: Wikimedia Commons]

8. Natural language processing

Natural language processing (NLP) is a field of AI that deals with making computer systems that can understand and interpret human languages. NLP has many different applications, such as text generation, chatbots, and information extraction, among many others. One of the most popular uses of NLP is in large language models, such as ChatGPT, developed by OpenAI.

ChatGPT may be used by students to help with homework, prepare for an exam, or simply satisfy their curiosity while learning. Teachers can also use ChatGPT to prepare lesson plans and check assignments for grammar and information. As the popularity of the software has risen, more and more students are using this resource. And although it may seem like there are no downsides to this technology, many people think otherwise. 

Students should not see ChatGPT as the answer to all their homework questions, and similarly, teachers should not treat ChatGPT as the final authority on human knowledge. As mentioned in this study, it should be viewed more as an assistive technology that responds to societal values and needs. Other concerns also exist, such as the existence of bias, the knowledge not being current, plagiarism, and its use as an aid in cheating.

Other technologies that use NLP, like automated essay grading systems, have been covered earlier in the article. Future developments in the education sector should address these concerns before NLP technologies are deployed more widely.

9. Predictive modeling

Similar to learning analytics, AI-powered predictive modeling deals with analyzing large amounts of data, which is then used to predict various outcomes, such as student performance. This information is valuable to teachers, parents, institutions, governments, and students, as it can greatly help with the learning experience and with setting benchmarks. It can help teachers offer timely guidance to students based on the students’ predicted performance and on their previous test or exam results.

Data-driven analysis is an important tool to have in education as it can improve individual student performance and give them additional support when needed, overall enriching their learning experience. It is also of value to governments for use in planning educational goals. A study on community college students used predictive modeling to identify at-risk students based on several key variables. This helped them to drive interventions to help these students.

[Image: AR can help students get a hands-on experience. Credit: Kirill Ruchyov/Wikimedia Commons]

10. Augmented and virtual reality

Immersive technologies, such as augmented reality (AR) and virtual reality (VR), have become increasingly popular over the past few years. AR is an immersive technology that overlays computer-generated content onto real-world objects, thus enhancing a user’s perception of reality. On the other hand, VR is a simulated virtual environment that the user can experience as if it were real. These technologies are mostly used for gaming and the metaverse but have huge potential in the education sector.

Students can use immersive technologies to interact with the learning material to improve their understanding of complex concepts and overall enrich the learning experience. VR, in particular, has many promising applications, such as creating labs where students can conduct chemistry experiments or virtually dissect animals. AR can enable students to study stars and galaxies up close, letting them engage with objects they could never otherwise touch and providing more hands-on, experiential learning.

An article published by the Information Technology and Innovation Foundation (ITIF) explained that AR/VR technologies can reduce the learning curve for students. They also mention that AR/VR technologies can help teachers enhance STEM courses, medical simulations, arts and humanities materials, and technical education. AR/VR technologies are already being used in several institutions, such as Arizona State University (ASU), which has collaborated with Dreamscape Immersive to create Dreamscape Learn. ASU students even created a time travel experience using this technology.

Conclusion

And there you have it – 10 of the most promising examples of AI improving the education sector. While AI provides numerous advantages for both teachers and students, it’s crucial to keep in mind that it also has certain disadvantages.

One limitation of AI is that it cannot replace human interaction and empathy, which are essential in the teaching and learning process. Additionally, as the article already discussed, AI algorithms can perpetuate biases. And finally, there are always concerns about data privacy and security when it comes to AI. As a result, it is crucial to integrate AI into education, but doing so requires careful consideration of both its potential advantages and drawbacks. 

The use of AI in education holds a lot of potential and could even revolutionize the way future generations of students learn.

Google Introduces Image Verification Feature to Identify AI-Generated Images

During the Google I/O conference, Google announced a forthcoming feature that will enable users to determine if an image is AI-generated. This new functionality, set to launch this summer, leverages hidden information embedded within the image.

As part of its focus on AI, Google unveiled a range of products and features. In a blog post, Cory Dunton, Google’s product manager for search, emphasized the importance of having the complete picture when assessing the reliability of information or images.

The new tool, named “About this image,” provides users with additional details about when the image was initially indexed by Google, its original sources, and whether it has appeared on news or fact-checking websites. Accessible through various methods such as clicking on the three dots above an image in search results, using Google Lens, or swiping up in the Google app, this feature aims to empower users with more context.

As Google prepares to launch its own text-to-image generator, the company commits to including data that indicates if an image was created by AI. By adding markup to the original files, Google intends to provide viewers with the necessary context when encountering AI-generated images outside its platforms. Additionally, image publishers like Shutterstock and Midjourney will introduce similar labels in the upcoming months.

While AI image generators like Midjourney, alongside OpenAI’s DALL-E, have gained recognition, there have been concerns about the misuse of such technology. Midjourney faced scrutiny for creating fake images depicting Donald Trump’s arrest, highlighting the potential ethical implications associated with AI-generated content.

Google’s image verification feature aims to enhance transparency and enable users to make more informed judgments about the authenticity and reliability of images in an AI-driven landscape.