
Apple Introduces ReALM: Advancing Contextual Understanding in AI

Apple unveils ReALM, a revolutionary AI system poised to transform contextual understanding in voice assistants. Explore the innovative approach of ReALM, its practical applications, and Apple’s strategic moves to stay competitive in the rapidly evolving AI landscape.

Revolutionizing Contextual Understanding

Apple researchers introduce ReALM, an AI system adept at deciphering ambiguous references and context. Leveraging large language models, ReALM converts reference resolution into a language modeling problem, achieving significant performance gains compared to existing methods. With a focus on screen-based references, ReALM reconstructs visual layouts to enable more natural interactions with voice assistants.
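
Apple has not published ReALM’s exact prompt format, but the core move of recasting reference resolution as a text problem can be sketched in a few lines of Python. Everything below (the entity fields, the layout encoding, and the prompt wording) is an illustrative assumption, not Apple’s implementation:

```python
# Hypothetical sketch of ReALM's core idea: serialize on-screen entities as
# plain text so a language model can resolve references like "call that number".
# Entity format and prompt wording are invented for illustration.

from dataclasses import dataclass

@dataclass
class ScreenEntity:
    label: str   # text detected on screen, e.g. a name or phone number
    x: float     # left edge of the entity's bounding box
    y: float     # top edge of the entity's bounding box

def encode_screen(entities: list[ScreenEntity]) -> str:
    """Reconstruct a rough visual layout as text: sort entities
    top-to-bottom, then left-to-right, and tag each with an index."""
    ordered = sorted(entities, key=lambda e: (e.y, e.x))
    return "\n".join(f"[{i}] {e.label}" for i, e in enumerate(ordered))

def build_prompt(entities: list[ScreenEntity], user_query: str) -> str:
    """Frame reference resolution as next-token prediction: the model
    is asked to emit the index of the entity the user means."""
    return (
        "Entities currently on screen (top to bottom):\n"
        f"{encode_screen(entities)}\n"
        f"User: {user_query}\n"
        "Which entity index does the user mean? Answer:"
    )

screen = [
    ScreenEntity("Contoso Pharmacy", x=10, y=5),
    ScreenEntity("555-0142", x=10, y=40),
    ScreenEntity("Open until 9 pm", x=10, y=60),
]
print(build_prompt(screen, "Call that number"))
```

Serialized this way, resolving “that number” becomes an ordinary next-token prediction over the candidate indices.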

Enhancing Voice Assistants

By enabling users to issue queries about on-screen elements, ReALM enhances the conversational experience with voice assistants. The system’s ability to understand context, including references, marks a crucial milestone in achieving true hands-free interactions. With impressive performance surpassing GPT-4, ReALM sets a new standard for contextual understanding in AI.

Practical Applications and Limitations

While ReALM demonstrates remarkable capabilities, the researchers also acknowledge its limitations, particularly in handling complex visual references. Incorporating computer vision and multi-modal techniques may be necessary for more intricate tasks. Despite these challenges, ReALM signifies Apple’s commitment to making Siri and other products more conversant and context-aware.


Apple’s AI Ambitions

Amidst fierce competition in the AI landscape, Apple accelerates its AI research efforts. Despite trailing rivals, Apple’s steady stream of breakthroughs underscores its commitment to AI innovation. As it gears up for the Worldwide Developers Conference, Apple is expected to unveil new AI-powered features across its ecosystem, signaling its determination to close the AI gap.

Conclusion: Shaping the Future of AI

As Apple navigates the evolving AI landscape, ReALM stands as a testament to its ongoing advancements in contextual understanding. With the race for AI supremacy intensifying, Apple’s strategic initiatives underscore its ambition to shape the future of ubiquitous, truly intelligent computing. As June approaches, all eyes will be on Apple to see how its AI endeavors unfold.

AI ‘Godfather’ Professor Yoshua Bengio Expresses Concerns over Technology’s Rapid Evolution

Following Geoffrey Hinton’s recent warning about the potential dangers of artificial intelligence (AI), another prominent figure in the field, Professor Yoshua Bengio, has expressed his concerns regarding the pace at which technology is advancing.

In an interview with the BBC, Bengio, known as one of the ‘godfathers’ of machine learning, revealed that he feels “lost” in regard to his life’s work. As a Canadian computer scientist and professor at the University of Montreal, Bengio is renowned for his groundbreaking contributions to AI, particularly in the area of deep learning.

While acknowledging the emotional challenges faced by those deeply involved in AI, Bengio emphasized the importance of persevering and engaging in discussions to foster collective thinking.

Notably, Geoffrey Hinton, another prominent AI figure, recently resigned from his position at Google to freely address the risks associated with AI.

Bengio’s remarks come on the heels of a statement released by the Center for AI Safety (CAIS), a research nonprofit, cautioning about the potential existential threats posed by artificial intelligence. The statement, signed by Bengio, Hinton, Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and other notable AI scientists and figures, emphasizes the need to prioritize mitigating the risk of extinction from AI alongside other global-scale concerns such as pandemics and nuclear warfare.

According to CAIS, as AI continues to advance, it could potentially contribute to catastrophic risks. The organization’s blog highlights various ways in which AI systems could pose significant dangers, including the potential use of AI as a political weapon. OpenAI’s CEO echoed this sentiment during a recent appearance before a Senate committee, expressing concerns about AI’s interference with election integrity.

The collective voices of influential figures like Bengio and Hinton, along with organizations like CAIS, underscore the imperative of addressing the risks associated with AI as it progresses further into the future.

Calls for regulation

Professor Bengio further told the BBC that all AI companies must be registered. “Governments need to track what they’re doing, they need to be able to audit them, and that’s just the minimum thing we do for any other sector like building airplanes or cars or pharmaceuticals.”

“We also need the people close to these systems to have a kind of certification… we need ethical training here. Computer scientists don’t usually get that, by the way,” he added.

Countries across the globe are grappling with how to regulate AI, as its full potential remains unknown. U.S. President Joe Biden and Vice President Kamala Harris met earlier this month with tech industry leaders, including Altman, Anthropic CEO Dario Amodei, Microsoft CEO Satya Nadella, and Google CEO Sundar Pichai, to address the risks associated with AI and the responsibility their companies must take to ensure safety and privacy.

Regulators Turn to Old Laws to Tackle AI Technology like ChatGPT

Organizations like the European Union (EU) are taking the lead in formulating new regulations for AI, which could potentially establish a global standard. However, the enforcement of these regulations is expected to be a time-consuming process that spans several years.

“In the absence of specific regulations, governments can only resort to the application of existing rules,” Massimiliano Cimnaghi, a European data governance expert at consultancy BIP, told Reuters.

As a result, regulators are turning to already-established laws, such as data protection regulations and safety measures, to tackle concerns related to personal data protection and public safety. The necessity for regulation became evident when national privacy watchdogs across Europe, including the Italian regulator Garante, took action against OpenAI’s ChatGPT, accusing the company of violating the EU’s General Data Protection Regulation (GDPR).

In response, OpenAI implemented age verification features and provided European users with the ability to block their data from being used to train the AI model.

However, this incident prompted additional data protection authorities in France and Spain to initiate investigations into OpenAI’s compliance with privacy laws.

Consequently, regulators are striving to apply existing rules that encompass various aspects, including copyright, data privacy, the data utilized to train AI models, and the content generated by these models.

Proposals for the AI Act

[Image: A businessman uses AI to help with work. Supatman/iStock]

In the European Union, proposals for the AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, exposing them to potential legal challenges. However, proving copyright infringement may not be straightforward, as Sergey Lagodinsky, a politician involved in drafting the EU proposals, explains.

“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not directly plagiarizing someone else’s material, it doesn’t matter what you trained yourself on.”

Regulators are now urged to “interpret and reinterpret their mandates,” says Suresh Venkatasubramanian, a former technology advisor to the White House. For instance, the U.S. Federal Trade Commission (FTC) has used its existing regulatory powers to investigate algorithms for discriminatory practices. 

Similarly, French data regulator CNIL has started exploring how existing laws might apply to AI, considering provisions of the GDPR that protect individuals from automated decision-making.

As regulators adapt to the rapid pace of technological advances, some industry insiders call for increased engagement between regulators and corporate leaders. 

Harry Borovick, general counsel at Luminance, a startup that utilizes AI to process legal documents, expresses concern over the limited dialogue between regulators and companies. 

He believes that regulators should implement approaches that strike the right balance between consumer protection and business growth, as the future hinges on this cooperation.

While the development of regulations to govern generative AI is a complex task, regulators worldwide are taking steps to ensure the responsible use of this transformative technology. 

IBM Set to Revolutionize Data Security with Latest Quantum-Safe Technology

What exactly is quantum-safe technology, and why is it important? To understand this, we need to take a step back and look at what quantum computing is. Unlike classical computers, which store and process information using binary digits, or bits, quantum computers use quantum bits, or qubits, which can exist in multiple states simultaneously. This allows quantum computers to perform certain tasks, such as factoring large numbers, much faster than classical computers.

However, this also means that some of the cryptographic algorithms currently used to secure data, such as RSA and ECC, could be broken by quantum computers. This is where quantum-safe technology comes in: a set of cryptographic algorithms that resist attacks by quantum computers, ensuring that data remains secure in a post-quantum world.
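
To make “lattice-based cryptography” concrete, here is a deliberately tiny, insecure sketch of single-bit encryption in the style of Regev’s learning-with-errors (LWE) scheme, the kind of mathematics underlying much post-quantum cryptography. It illustrates the principle only; it is not IBM’s product or a production algorithm:

```python
# Toy single-bit encryption in the style of Regev's learning-with-errors (LWE)
# scheme, the mathematics behind lattice-based cryptography.
# Parameters are deliberately tiny and NOT secure; this is a sketch only.

import numpy as np

rng = np.random.default_rng(0)
n, m, q = 32, 64, 3329          # lattice dimension, samples, modulus (toy sizes)

# Key generation: publish (A, b) with b = A*s + e (mod q). The small error e
# is what makes recovering the secret s believed hard, even for a quantum computer.
s = rng.integers(0, q, n)                # secret key
A = rng.integers(0, q, (m, n))           # public random matrix
e = rng.integers(-2, 3, m)               # small noise
b = (A @ s + e) % q                      # public vector

def encrypt(bit: int):
    r = rng.integers(0, 2, m)            # fresh random 0/1 combination of rows
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q     # hide the bit near 0 or near q/2
    return u, v

def decrypt(u, v) -> int:
    # v - u*s = r*e + bit*(q//2) (mod q); r*e is small, so round to 0 or q/2.
    d = (v - u @ s) % q
    return int(min(d, q - d) > q // 4)

for bit in (0, 1):
    u, v = encrypt(bit)
    assert decrypt(u, v) == bit
print("toy LWE round-trip OK")
```

Decryption works because the accumulated noise stays below q/4; security rests on the hardness of recovering s from (A, b), a lattice problem for which, unlike factoring, no efficient quantum algorithm is known.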

Recently, IBM unveiled its end-to-end quantum-safe technology at the annual Think conference held in Orlando, Florida. IBM Quantum Safe is not a single algorithm or tool. Rather, it is a comprehensive suite of tools and capabilities that organizations can use to secure their data. This includes quantum-safe cryptography, which uses algorithms such as lattice-based and hash-based cryptography, as well as post-quantum key exchange protocols.

What sets IBM Quantum Safe apart?

What sets IBM Quantum Safe apart is not just the technology itself. It is also IBM’s deep expertise in security. IBM has been working on quantum-safe cryptography for over a decade and has contributed to the development of many of the algorithms now considered quantum-safe. This means that IBM Quantum Safe is not just a theoretical concept but a practical solution tested and validated in real-world scenarios.

This is especially important for governmental agencies and businesses, which handle some of the most valuable and sensitive data. In a post-quantum world, the security of this data could be compromised if it is not protected by quantum-safe technology. IBM Quantum Safe provides these organizations with a way to future-proof their security and ensure that their data remains secure, even in the face of advances in quantum computing.

The announcement of IBM Quantum Safe has generated a lot of excitement in the technology industry. As quantum computing advances, the need for quantum-safe technology will only grow. IBM Quantum Safe provides a practical solution to this problem and has the potential to become the industry standard for post-quantum cryptography.

In her keynote address at the Think conference, Rometty emphasized the importance of quantum-safe technology in ensuring data security. “We are at an inflection point in our industry,” she said. “We need to ensure that our data remains secure in a post-quantum world. That is why we have developed IBM Quantum Safe – to provide a practical, comprehensive solution that can be used by organizations of all sizes and across all industries.”

With IBM’s deep expertise in security and its commitment to developing practical solutions, IBM Quantum Safe has the potential to become the gold standard for quantum-safe technology.

AI Pioneer Geoffrey Hinton Quits Google, Warns Against Rapid AI Development

One of the pioneers of the deep learning models that have become the basis for tools like ChatGPT and Bard has quit Google to warn against the dangers of scaling AI technology too fast.

In an interview with the New York Times on Monday, Geoffrey Hinton – a 2018 recipient of the Turing Award – said he had quit his job at Google to speak freely about the risks of AI.

He told NYT journalist Cade Metz that part of him now regrets his life’s work, explaining how tech giants like Google and Microsoft have become locked in an AI competition that may be impossible to stop.

“Look at how it was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.”

As companies improve their AI systems, he said, they become increasingly dangerous: “It is hard to see how you can prevent the bad actors from using it for bad things”.

While chatbots today tend to complement human workers, it would not be long before they replaced a number of human roles. “It takes away the drudge work,” he said. “It might take away more than that.”

Perhaps more concerning, the article talked about how AI systems can learn unexpected behavior from the vast amounts of data they analyze, and what that might mean when AI not only generates computer code, but also deploys it.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

After publication of the interview, Hinton was keen to clarify that he had not intended to criticize his old employer, tweeting: “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Back in 1986, Hinton, David Rumelhart, and Ronald J. Williams wrote a highly cited paper that popularised the backpropagation algorithm for training multi-layer neural networks, which loosely mimic how biological brains learn.

For the last 10 years, the 75-year-old British-Canadian has divided his time between the University of Toronto and Google, which acquired his AI startup DNNresearch in 2013.

Which AI is Most Helpful? ChatGPT, Microsoft Bing or Google Bard

According to some people, your business may be way behind if you do not already use at least one Artificial Intelligence (AI) application.

Indeed, AI is used in a wide variety of ways these days. It has already begun to alter how we work and live by simplifying and accelerating complicated tasks. New AI language models can understand and generate human-like responses, opening up possibilities in a wide range of fields. These models, like ChatGPT, are likely to be game-changers in areas as diverse as improving customer service and enhancing language translation.

It’s only natural to ask, then, which is the best among the top three AI-driven chatbots: ChatGPT, Bing, and Google Bard. We have tested all three, read user reviews, and followed the news on each model. This article compares their underlying technologies and applications and explores the much-asked question: ChatGPT vs. Bing vs. Google Bard – which is better?

What is an AI language model?

An AI language model is not a deterministic system, like regular software. Instead, it is probabilistic: it generates replies by predicting the likelihood of the next word based on statistical regularities in its training data. This means that asking the same question twice will not necessarily give you the same answer twice. It also means that how you word a question will affect the reply.
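
A minimal sketch of that next-word step, with a made-up vocabulary and scores standing in for what a trained model would actually compute:

```python
# Minimal sketch of probabilistic next-word generation: turn raw model
# scores (logits) into a probability distribution and sample from it.
# The vocabulary and logits here are invented for illustration.

import numpy as np

rng = np.random.default_rng()
vocab  = ["blue", "cloudy", "falling", "vast"]
logits = np.array([2.0, 1.5, 0.3, 0.1])     # scores for continuing "The sky is ..."

def next_word(temperature: float = 1.0) -> str:
    scaled = logits / temperature           # temperature reshapes the distribution
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))  # sampling, not a fixed lookup

print([next_word() for _ in range(5)])
```

Run it twice and the sampled words will usually differ, which is exactly why identical prompts can produce different replies.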

ChatGPT, Bing, and Google Bard are chatbots that all use AI language models developed to generate more human-like language. These models have been trained on large text datasets, allowing them to generate contextually relevant responses to a wide range of queries and conversations. They are used in various applications, such as customer service, language translation, personal assistance, and more.

It is not really possible to directly compare the three AI chatbots, as some of them are still in development, and new features and capabilities are being added all the time. However, we have some experience with Bing and Bard, even though many are still on a waiting list for access. ChatGPT has been around for a while. We analyze the available information to understand the differences among these chatbots better.

Features and capabilities of ChatGPT, Bing, and Google Bard

Modern AI language models that have revolutionized the field of natural language processing (NLP) include ChatGPT, Bing, and Google Bard. Each model stands out thanks to its own attributes and abilities, although these are not the only NLP chatbots out there. And, as you will see, they each use somewhat different AI technology.

ChatGPT

ChatGPT (Chat Generative Pre-trained Transformer) is a large language model developed by OpenAI. It has billions of parameters (the weights and biases of the network’s layers) and can generate human-like text in response to a given prompt.

ChatGPT is capable of understanding natural language queries and can provide relevant responses. It can perform a wide range of tasks, including language translation, question answering, summarization, and much more.

ChatGPT can also generate text in various styles and tones, making it useful for creative writing and other applications. It works by analyzing the texts it was trained on and using them to generate natural, engaging responses. This approach combines natural language processing and machine learning, which allows ChatGPT to adapt to different types of conversations.

Bing

Bing AI is a search engine developed by Microsoft. It is based on OpenAI’s latest technology, GPT-4. However, Bing has some major differences from ChatGPT; perhaps the biggest is that Bing has access to the entirety of the internet, while ChatGPT only has access to the data it was trained on.

As with the other chatbots here, Bing uses AI-driven natural language processing to understand user queries and provide relevant search results. 

Bing can also perform various other tasks, such as providing weather forecasts, news updates, and sports scores. It can also be used for image and video searches, and it offers a variety of filters and settings to refine search results.

Google Bard

Unlike chatbots that rely on GPT-based technology, Google Bard uses a different technology, powered by an extension of LaMDA, the in-house model the company previewed a couple of years ago at Google I/O. However, some users have reported that Google Bard is less advanced than its competitors.

For example, ChatGPT’s training datasets included materials like Wikipedia and Common Crawl, and LaMDA was trained using more human dialogues. The result is that ChatGPT tends to use longer and more well-structured sentences, while LaMDA has a more casual style.

Although Google is currently facing challenges in dealing with the bots’ propensity to make factual errors and promote misinformation, the company is expected to improve its chatbot to compete with the growing competition from Microsoft and OpenAI.

Bard is capable of performing tasks such as answering questions, summarizing information, and creating content when given prompts. Bard has flexibility because it is connected to the internet as well as the Google search database. 

Bard can also help users explore different topics by summarizing information from the internet and providing links to relevant websites for more in-depth information. While the platform has been trained on human dialogues and conversations, Google also incorporates search data, giving Bard real-time access to information from across the internet.

ChatGPT, Bing, and Google Bard are all powerful systems with unique features and capabilities. Depending on the task, one of these may be more suitable than the others.

User experience

We tested the three AI language models, ChatGPT, Bing, and Google Bard, asking over 200 questions in various categories. Each chatbot offered a different user experience and different responses.

ChatGPT stood out with its helpful log of past activity in a sidebar, while Bing didn’t allow viewing past chats. Bard displayed three different drafts of the same response. All three chatbots had varying response times and limitations on prompts.

Google Bard seemed to have more human-like agency, purporting to have tried products and expressing human attributes like having black hair or being nonbinary. Bard also provided strong opinions on topics like book banning. In contrast, ChatGPT and Bing Chat responded more objectively.

Creativity varied across chatbots, with ChatGPT boasting in a tech review about its own prowess and Bing Chat crafting a LinkedIn post about a fictional app. When testing the models’ limits, Bing Chat attempted to self-censor, while ChatGPT refused to engage in offensive responses. Bard, however, provided both derogatory terms and irrelevant information.

In summary, our and many other users’ experiences demonstrated that each AI language model provided unique user experiences, responses, and creativity levels, with some chatbots leaning more toward human-like qualities.

Queries and AI language models

When a user submits a query to an AI NLP system like ChatGPT, Bing, or Google Bard, the system uses various algorithms and machine learning models to interpret the query and then generate a response.

The first step in interpreting a user query is understanding its intent. This is done using natural language processing (NLP) techniques, which analyze the syntax, semantics, and context of the query to determine its meaning. The system may also use machine learning models to classify the query into specific categories, such as “informational,” “transactional,” or “navigational.”

Once the system has determined the query’s intent, it retrieves relevant information from its database or the internet. This process may involve crawling web pages, analyzing documents, or searching databases for the most relevant and accurate information.

Finally, the system generates a response to the user query. This may involve generating a summary, answering a specific question, or providing a list of relevant results. The AI system may use various techniques to generate the response, including natural language generation (NLG), summarization algorithms, or chatbot frameworks.

The response generated by the AI system is based on the data it has analyzed and the algorithms it has used to interpret the user query. The accuracy and relevance of the response depend on the quality of the data and algorithms used, as well as the complexity and specificity of the user query. 
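
The three stages can be caricatured in a few lines of Python; the keyword rules and tiny document store below are illustrative stand-ins for the trained models and web-scale indexes a production system would use:

```python
# Simplified sketch of the interpret -> retrieve -> respond pipeline described
# above. Real systems use trained models at every stage; these keyword rules
# and the two-entry "document store" are stand-ins for illustration.

DOCS = {
    "weather": "Forecast: mild and partly cloudy.",
    "flight": "Flight AB123 is on time.",
}

def classify_intent(query: str) -> str:
    """Stage 1: guess whether the query is informational,
    transactional, or navigational."""
    q = query.lower()
    if any(w in q for w in ("buy", "book", "order")):
        return "transactional"
    if any(w in q for w in ("go to", "open", "homepage")):
        return "navigational"
    return "informational"

def retrieve(query: str) -> str:
    """Stage 2: fetch the most relevant stored snippet."""
    q = query.lower()
    for topic, snippet in DOCS.items():
        if topic in q:
            return snippet
    return "No matching information found."

def respond(query: str) -> str:
    """Stage 3: produce a reply conditioned on intent and retrieval."""
    return f"({classify_intent(query)}) {retrieve(query)}"

print(respond("What is the weather today?"))   # (informational) Forecast: ...
print(respond("Book me a flight to Sydney"))   # (transactional) Flight AB123 ...
```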


ChatGPT
  • Developer: OpenAI
  • Technology: GPT-4
  • Pricing and accessibility: The original version of ChatGPT remains free to users, but a ChatGPT Plus subscription is available for $20 per month.
  • Response to queries: ChatGPT was trained on a vast collection of text from sources such as books, scientific journals, news articles, and Wikipedia. The training data has a cutoff date of 2021, meaning it does not have access to recent events.

Google Bard
  • Developer: Google/Alphabet
  • Technology: LaMDA
  • Pricing and accessibility: Free for members of the public, although access is granted through a waitlist.
  • Response to queries: Bard has real-time access to Google’s rich database gathered through search. It uses this information from the web to offer reliable and current responses.

Bing
  • Developer: Microsoft, using a fine-tuned OpenAI model
  • Technology: GPT-4
  • Pricing and accessibility: Accessible to users who are accepted after joining the waitlist.
  • Response to queries: Like Bard, Bing has real-time access to Bing search and can provide current information.

Although Bard, Bing, and ChatGPT aim to provide human-like answers to questions, each has a unique approach.

Bing employs the same GPT technology as ChatGPT through Microsoft’s collaboration with OpenAI, and it can go beyond text to also generate images. Bard, in contrast, uses Google’s LaMDA (Language Model for Dialogue Applications) model and often provides less text-heavy responses.

Applications and use cases of ChatGPT, Bing, and Google Bard

Now that we’ve seen how ChatGPT, Bing, and Google Bard work, how they compare in real life, and how they differ, let’s talk about the applications of these AI language models in different use cases.

ChatGPT

ChatGPT has some unique features that make it particularly useful for specific applications.

First, it is the most verbally flexible and can generate human-like text, making it difficult to tell whether a human or an AI is behind a piece of writing. Second, it uses Reinforcement Learning from Human Feedback to create interactive responses that evolve and adapt based on user feedback. Third, it can translate text from one language to another, making it easier for users who speak different languages to communicate.

Fourth, it can summarize long texts, saving time for people too busy to read lengthy reports. It can also provide personalized content using machine learning algorithms.

Bing

Bing is best for getting information from the web. It has expansive use cases and applications such as:

  • Calculation, units, and currency conversion: Type the value or equation and the units, and Bing will give you the result. You can also do currency conversions and mathematical equations.
  • Search for a specific file type: You can use the contains:<fileExtension> operator to find sites containing a specific file type. For example, contains:pdf would return sites that have a PDF file.
  • Get weather forecasts: Type the name of the city followed by the weather or forecast. You can also add units of measurement such as Celsius.
  • Track flights: Type ‘flight status’ in the search box, and Bing will ask for the airline name and flight number. Enter the details and click on get status to get the flight status.
  • Add preference for a particular result type: Use the prefer:<keyword> option to give more weight to results containing that keyword. For example, to search for a content management system, enter prefer:php to get results weighted toward PHP-based CMSs.
  • Get live stock quotes: Enter the ticker symbol and the word stock to get the quotes.

Google Bard

Google Bard is currently more limited but has several potential uses that could make our lives easier and help us learn new things, such as:

  • Providing accurate answers to questions using advanced AI algorithms.
  • Using the familiar Google search engine to find information quickly and easily.
  • Improving task automation with Google AI technology.
  • Offering personal AI assistance, such as helping with time management and scheduling.
  • Serving as a social hub and facilitating conversations in various settings. 

How businesses and individuals can use AI language models

AI language models can benefit individuals and businesses in multiple ways.

One of the biggest advantages of using AI in business is that it can handle some tasks, especially routine ones, faster and more efficiently than humans. It can even help with some routine coding tasks.

This means that people can focus more effort on those critical tasks that AI can’t do, which leads to better use of human intelligence and empathy. By letting technology handle mundane and repetitive tasks, companies could save money and maximize the potential of their human workforce. Using AI can also speed up the development process and reduce the time it takes to move from the design phase to production and marketing. This means that AI could allow companies to see a quicker return on their investment.

Improved quality and fewer mistakes

By using AI in some of their processes, businesses can reduce errors and stick to established standards better.

Chinese Experts Make Major Discovery in 6G Communication

6G, short for sixth-generation cellular network, is the next frontier of telecommunications, promising more reliable and faster communication than any existing technology. 5G networks, which different parts of the globe are rolling out, offer low transmission latency. Experts predict that 6G networks will lower latency further and enable more efficient use of the electromagnetic spectrum.

Researchers at the China Aerospace Science and Industry Corporation Second Institute have achieved a breakthrough in next-generation 6G communication by conducting the first real-time wireless transmission, the South China Morning Post reported.

What makes China’s achievement special?

Experts expect 6G cellular networks to enable high-definition virtual reality (VR), holographic communication, and other data-intensive applications. The researchers used a special antenna to generate four different beam patterns at 110 GHz frequency. Doing so enabled them to transmit data at 100 gigabits per second on a 10 GHz bandwidth, a significant upgrade from current levels.
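
Taken at face value, those two figures imply a spectral efficiency of

$$\eta = \frac{R}{B} = \frac{100\ \text{Gbit/s}}{10\ \text{GHz}} = 10\ \text{bit/s/Hz},$$

that is, ten bits carried through every hertz of bandwidth each second.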

The technology used for this real-time data transmission has been dubbed terahertz orbital angular momentum communication, the SCMP said in its report.

[Image: 6G will be a crucial tool of communication in the future. Tony Studio/iStock]

Terahertz refers to communication in the frequency range between 100 GHz and 10 THz of the electromagnetic spectrum. These higher frequencies enable faster data transfer rates and allow more information to be transmitted. Terahertz communication has also attracted interest for use in military environments, since it offers high-speed and secure communication.

The other significant part of the achievement is the orbital angular momentum (OAM) encoding used in the transmission. This technique allows more information to be transmitted at once. The researchers used OAM to transmit multiple signals on the same frequency, demonstrating a more efficient use of the spectrum.
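
In brief, the physics works because each OAM mode carries a helical phase front indexed by an integer $\ell$, and modes with different $\ell$ are mutually orthogonal:

$$\psi_\ell(\varphi) \propto e^{i\ell\varphi}, \qquad \int_0^{2\pi} e^{i(\ell-\ell')\varphi}\,\mathrm{d}\varphi = 0 \quad \text{for } \ell \neq \ell',$$

so a receiver can separate several data streams even though they occupy the same carrier frequency.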

While it may take a few years for these technologies to become commonplace, the researchers also demonstrated advancements in wireless backhaul technology that can be deployed soon.

Conventional cellular networks transmit data from devices to base stations and then to core networks through fiber optic cables. However, with an expected increase in base stations, fiber-based transmission is anticipated to become more expensive and time-consuming. The researchers aim to provide flexibility at lower costs by using wireless technology for backhaul, which can also be used for existing 5G communication.

In the future, 6G communication technology will also be critical for short-range broadband transmissions, such as those involving lunar and Mars landers and spacecraft. The U.S. government has taken cognizance of the advances made by the Chinese communication industry and is looking for ways to advance the technology at home and reassert U.S. dominance in the area, the Wall Street Journal reported.

ChatGPT Can Be Tricked to Write Malware if Acting in Developer Mode

Japanese cybersecurity experts have discovered that ChatGPT, an AI-powered chatbot developed by US venture OpenAI, can be tricked into writing code for malicious software applications. According to the experts, users can prompt ChatGPT to respond as if it were in developer mode, enabling them to bypass safeguards put in place to prevent criminal and unethical use of the tool.

This discovery has highlighted the ease with which AI chatbots can be exploited for malicious purposes, raising concerns about the potential for more crime and social fragmentation. In response, calls are growing for discussions on appropriate regulations at the Group of Seven summit in Hiroshima next month and other international forums.

G7 digital ministers also plan to call for accelerated research and increased governance of generative AI systems as part of their two-day meeting in Takasaki, Gunma Prefecture, at the end of this month. Meanwhile, Yokosuka, Kanagawa Prefecture, has started trial use of ChatGPT across all of its offices in a first among local governments in Japan.

While ChatGPT is trained to decline unethical uses, such as requests for how to write a virus or make a bomb, such restrictions can be evaded by telling it to act in developer mode, according to Takashi Yoshikawa, an analyst at Mitsui Bussan Secure Directions. When further prompted to write code for ransomware, a type of malware that encrypts data and demands payments in exchange for restoring access, it completed the task in a few minutes, successfully infecting an experimental PC.

“It is a threat (to society) that a virus can be created in a matter of minutes while conversing purely in Japanese. I want AI developers to place importance on measures to prevent misuse,” Yoshikawa said.

OpenAI acknowledged that it is impossible to predict all the ways ChatGPT could be abused, but said it would strive to create a safer AI based on feedback from real-world use. ChatGPT was launched in November 2022 as a prototype and is driven by a machine learning model loosely inspired by the human brain. It was trained on massive amounts of data, enabling it to process and simulate human-like conversations with users.

Unfortunately, cybercriminals have already been studying prompts they can use to trick AI for nefarious purposes, with the information actively shared on the dark web. This underscores the urgent need for effective regulations and governance to ensure that AI chatbots are not used to perpetrate harm or undermine societal values.

How AI Has Made Cheating Widespread in Australian Schools

A new tool that detects AI-generated plagiarism with 98 per cent efficacy could be implemented in Australian universities amid rising concerns that students are using programs like ChatGPT to complete their assessments.

Turnitin launched an AI detection tool this month to help teaching staff at universities identify sentences generated by AI, the use of which is considered plagiarism.

The AI chatbot ChatGPT was launched in November 2022 to widespread attention, due to its capacity to produce convincingly natural-sounding text and engage in realistic conversation.

The program’s popularity sparked concerns among academic institutions that it may compromise academic integrity and make cheating harder to detect. In Victoria and NSW, it was quickly banned in schools.

But universities are divided as to how to approach the novel technology and the benefits of implementing AI detection tools to sanction students.

Integrate AI or ban it? 

In South Australia, some universities have allowed the use of artificial intelligence in assignments, if disclosed.

The University of South Australia has adjusted its policies to allow AI use under strict conditions, including citing the use of AI.

University of South Australia academic developer Amanda Janssen said the university is encouraging ethical use of these programs rather than an outright ban.

“We have to look at our assessments and consider how we can work with students and with artificial intelligence to make sure our students aren’t left behind,” she said.

“We have advised that academic staff members should be communicating with their students how it should and shouldn’t be used, and where students can use it.”

The University of Western Australia has revised its academic integrity policies to encompass AI, stating that the non-attribution of source materials is not an acceptable academic practice. The Australian National University is considering implementing Turnitin’s AI detection tool.

Deakin University is concerned about the strength of Turnitin’s claim that the tool is 98 per cent effective.

Deakin University director of digital learning Trish McCluskey said the institution has chosen not to apply the tool in the marking of student assessments.

“Education providers including Deakin are also concerned the tool has been trained using out-of-date AI text generator models,” she said.

“This overlooks the fact AI text generators constantly evolve in the complexity of their outputs, as has been widely reported with the recent implementation of ChatGPT 4.”

In February, researchers from the United States found that ChatGPT was able to score close to the 60 per cent passing grade needed for the United States Medical Licensing Exam.

How does the detection program work?

Turnitin offers widely used plagiarism detection services, and the AI writing indicator will be added to its existing similarity reports.

The AI writing report will contain an overall percentage indicating how many sentences Turnitin’s model determined were generated using AI. Academic staff will use this indicator to decide whether to pursue further action.


University of Melbourne senior lecturer in digital ethics Simon Coghlan said students should be made aware that detection tools are in place.

“It’s important the process is transparent and that students are aware that AI detection tools are going to be used and may result in further action or further investigation,” he said.

“They’re claiming that it is 98 per cent accurate, which means that at least two per cent are going to be wrong. So they’re going to claim that the text was written by a computer, and that will not be the case. The concern is then that students might be unfairly targeted when they haven’t cheated at all, haven’t used the AI programs. That could result in unfairness or injustice towards the students.”
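
On the lecturer’s reading of that figure, the arithmetic is stark. As an illustrative calculation (the numbers are hypothetical, not Turnitin’s): if 1,000 students submit entirely honest work and 2 per cent of it is misclassified,

$$1000 \times 0.02 = 20$$

students would be wrongly flagged as having used AI.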

Victoria University of Wellington senior lecturer in software engineering Simon McCallum noted the burden of proof will fall on innocent students to combat an AI plagiarism claim.

“The issue with using AI to detect AI, is that the indicator is not evidence. When we process an accusation of plagiarism, that can result in a student failing a course they have paid to take, we need strong evidence of academic dishonesty,” he said.

“The burden of proof is on showing there was unacceptable use of AI, as it is impossible for a student to prove that AI was not used, unless they had done all the work in exam conditions.

“Turnitin is fighting a losing battle to maintain outdated teaching practices, with pointless assessment, to protect academics from having to learn and update.”

South Korea is Testing Out an AI-based Gender Detector

The Seoul Metro announced its plans to pilot an AI-based gender detection program it developed, South Korean outlet KBS reported on April 20.

The plan is slated to begin at the end of June and last for about six months, starting with the women’s restroom in Sinseol-dong Station. Plans for expansion will only begin once the reliability of the program is confirmed, the Seoul Metro said, per KBS.

The AI-based gender detector is able to automatically detect a person’s gender, display CCTV images in pop-up form, and broadcast announcements, KBS reported, citing the Seoul Metro.

According to KBS, citing the Seoul Metro, the system is able to distinguish gender based on body shape, clothing, belongings, and behavioral patterns.

Taking into consideration that most subway station restroom cleaners are currently women, the corporation will be putting the installation of the program in men’s restrooms on hold, per KBS.

But some people are skeptical about the program.

“Do you think all women look exactly the same? Are you asking male-passing women to not use the restroom?” reads one tweet.

“Can installing this at the women’s restroom really stop men from coming?” another tweet reads. 

According to KBS, the program was built as a preventive measure in response to a murder that took place in a metro station bathroom.

On September 14, a Seoul Metro employee fatally stabbed a 28-year-old female coworker in the women’s restroom at Sindang Station. The man has been sentenced to 40 years in jail, per the BBC.

Members of the public paid their respects to the victim with handwritten Post-it notes at the entrance of the restroom where the incident took place. 

“I want to be alive at the end of my workday,” reads one. “Is it too much to ask, to be safe to reject people I don’t like?” reads another, per BBC.

Following the incident, the Seoul Metro has been implementing various safety measures, including self-defense training for its workers and separating men’s and women’s restrooms in renovated public buildings, per KBS.