
Departments in Australia Granted Autonomy to Explore AI Tools

According to a report by ABC News, the Australian federal government has left decisions about the use of AI tools such as ChatGPT to individual departments rather than formulating a unified policy for the public service. While the Department of Home Affairs has emphasized the need for coordination and monitoring in implementing AI tools, lawmakers have raised concerns about the absence of clear guidelines and safeguards.

Since ChatGPT’s launch last year, millions of users have experimented with it and similar tools, and private organizations have swiftly integrated AI into their products and services to boost productivity and cut costs. Government entities, however, have been slower to respond to the breakthrough and have refrained from imposing any moratorium on its use, despite calls from some segments of the public.

It has now come to light that Australian government departments have been independently deploying these AI tools without a comprehensive federal policy governing their use.

Which Australian Departments are Using AI Tools?

Home Affairs has confirmed that several of its divisions, including the Information Computer Technology Division, the Refugee Humanitarian & Settlement Division, the Data & Economic Analysis Centre, and the Cyber and Critical Technology Coordination Centre, have been using ChatGPT.

Use of the AI tool is “coordinated and monitored,” according to the ABC report. Parts of the department sought access to the tool for “experimentation and learning purposes” and were examining its “utility for innovation.” The department also said it was not aware of employees using the tool as part of their everyday jobs.

Former barrister turned politician David Shoebridge has, however, criticized the move, calling it “concerning” that the Refugee Humanitarian & Settlement Division was part of the experiment. Shoebridge warned that a leak of personal information in such a use case could cost lives.

Other agencies, such as the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC), have prohibited the use of these tools and advised staff not to enter work-related information into them on personal devices.

How AI Has Made Cheating Widespread in Australian Schools

A new tool that claims to detect AI-generated plagiarism with 98 per cent efficacy could be implemented in Australian universities amid rising concerns that students are using programs like ChatGPT to complete their assessments.

Turnitin launched an AI detection tool this month to help university teaching staff identify AI-generated sentences, which are treated as plagiarism.

The AI chatbot ChatGPT was launched in November 2022 to widespread attention, owing to its capacity to produce convincingly natural-sounding text and engage in realistic conversation.

The program’s popularity sparked concerns among academic institutions that it may compromise academic integrity and make cheating harder to detect. It was quickly banned in schools in Victoria and NSW.

But universities are divided over how to approach the new technology and whether AI detection tools should be used to sanction students.

Integrate AI or ban it? 

In South Australia, some universities have allowed the use of artificial intelligence in assignments, provided it is disclosed.

The University of South Australia has adjusted its policies to allow AI use under strict conditions, including a requirement that students cite their use of AI.

University of South Australia academic developer Amanda Janssen said the university is encouraging ethical use of these programs rather than an outright ban.

“We have to look at our assessments and consider how we can work with students and with artificial intelligence to make sure our students aren’t left behind,” she said.

“We have advised that academic staff members should be communicating with their students how it should and shouldn’t be used, and where students can use it.”

The University of Western Australia has revised its academic integrity policies to encompass AI, stating that the non-attribution of source materials is not acceptable academic practice. The Australian National University is considering implementing Turnitin’s AI detection tool.

Deakin University, however, has questioned the strength of Turnitin’s claim that the tool is 98 per cent effective.

Deakin University director of digital learning Trish McCluskey said the institution has chosen not to apply the tool in the marking of student assessments.

“Education providers including Deakin are also concerned the tool has been trained using out-of-date AI text generator models,” she said.

“This overlooks the fact AI text generators constantly evolve in the complexity of their outputs, as has been widely reported with the recent implementation of ChatGPT 4.”

In February, researchers from the United States found that ChatGPT was able to score close to the 60 per cent passing grade needed for the United States Medical Licensing Exam.

How does the detection program work?

Turnitin already offers widely used plagiarism detection services, and the new AI writing indicator will be added to its existing similarity reports.

The AI writing report will contain an overall percentage indicating how many of a submission’s sentences Turnitin’s model determined were generated using AI. Academic staff can then use this indicator to decide whether to pursue further action.
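
Turnitin has not published the internals of its model, so the following is only a rough sketch of how a sentence-level indicator of this kind could be aggregated into an overall percentage. Everything here is hypothetical: the sentence splitter is naive, and toy_ai_score is an arbitrary placeholder where a real detector would run a trained language model over each sentence.

```python
import re


def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter, for illustration only.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def toy_ai_score(sentence: str) -> float:
    # Hypothetical placeholder: a real detector would score each sentence
    # with a trained model. Here, longer sentences are arbitrarily treated
    # as "more machine-like" purely so the demo produces varied scores.
    return min(len(sentence) / 200.0, 1.0)


def ai_writing_percentage(text: str, threshold: float = 0.5) -> float:
    # Aggregate per-sentence scores into the kind of overall percentage
    # a similarity report might display.
    sentences = split_sentences(text)
    if not sentences:
        return 0.0
    flagged = sum(1 for s in sentences if toy_ai_score(s) >= threshold)
    return 100.0 * flagged / len(sentences)


if __name__ == "__main__":
    sample = (
        "This is a short sentence. This considerably longer sentence goes on "
        "at some length, which the toy scorer above arbitrarily treats as "
        "more machine-like than the short one that preceded it."
    )
    print(f"AI writing indicator: {ai_writing_percentage(sample):.0f}%")
```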

University of Melbourne senior lecturer in digital ethics Simon Coghlan said students should be made aware that detection tools are in place.

“It’s important the process is transparent and that students are aware that AI detection tools are going to be used and may result in further action or further investigation,” he said.

“They’re claiming that it is 98 per cent accurate, which means that at least two per cent are going to be wrong. So they’re going to claim that the text was written by a computer, and that will not be the case. The concern is then that students might be unfairly targeted when they haven’t cheated at all, haven’t used the AI programs. That could result in unfairness or injustice towards the students.”
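
To put a rough scale on that concern, consider a back-of-envelope calculation. The numbers below are assumptions for illustration only: the submission volume is invented, and the quoted “98 per cent” figure is read naively as a 2 per cent misclassification rate, which an accuracy claim does not strictly guarantee.

```python
# Back-of-envelope illustration of the false-flag concern.
# Both inputs are assumptions: the submission volume is invented, and
# "98 per cent accurate" is read naively as a 2 per cent error rate.
submissions = 50_000   # hypothetical yearly submissions at one university
error_rate = 0.02      # the two per cent implied by the quoted claim

misclassified = submissions * error_rate
print(f"Submissions potentially misclassified per year: {misclassified:,.0f}")
# Prints 1,000: even a small error rate can touch many honest students.
```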

Victoria University of Wellington senior lecturer in software engineering Simon McCallum noted that the burden of proof will fall on innocent students seeking to contest an AI plagiarism claim.

“The issue with using AI to detect AI is that the indicator is not evidence. When we process an accusation of plagiarism that can result in a student failing a course they have paid to take, we need strong evidence of academic dishonesty,” he said.

“The burden of proof is on showing there was unacceptable use of AI, as it is impossible for a student to prove that AI was not used, unless they had done all the work in exam conditions.

“Turnitin is fighting a losing battle to maintain outdated teaching practices, with pointless assessment, to protect academics from having to learn and update.”