Published Date: 10/10/2025
A major Australian university has been accused of misusing artificial intelligence (AI) to falsely label thousands of students as cheaters. The Australian Catholic University (ACU) reportedly used AI technology to flag about 6000 cases of alleged academic misconduct in 2024, with the majority related to AI use. However, many of these students have been cleared of any wrongdoing, leaving them with damaged academic records and lost job opportunities.
Madeleine, a nursing student at ACU, was one of those falsely accused. She was in the middle of her final-year placement and applying for graduate jobs when she received an email from the university’s academic misconduct board. The email accused her of using AI to cheat on an assignment. This accusation added significant stress to an already challenging period in her life.
“It was already a stressful enough time of my life,” Madeleine told the ABC. “And on top of that, I was getting emails from the academic misconduct board saying I needed to write out an explanation as to why I think this might have happened.”
The investigation lasted six months, during which her academic transcript was marked “results withheld.” Madeleine believes this incomplete document hindered her chances of securing a graduate position as a nurse. “It was really difficult to then get into the field as a nurse because most places require you to have a grad year,” she said.
ACU registered nearly 6000 cases of alleged academic misconduct across its nine campuses in 2024, according to internal documents seen by the ABC. About 90 percent of these referrals were related to AI use. Deputy Vice-Chancellor Tania Broadley acknowledged an increase in misconduct referrals but stated that the figures were “substantially overstated.” She added that approximately half of all confirmed breaches involved the unauthorized or undisclosed use of AI, and one-quarter of all referrals were dismissed following investigation.
Several ACU students who were issued AI-related infringements in the past year shared their experiences with the ABC. They described a process that was both stressful and unfair, with little time to respond and months of waiting for the university to clear their names. During this time, the onus was on them to prove their innocence, often by providing handwritten and typed notes and internet search histories. Broadley confirmed that this practice has been discontinued.
One paramedic student, who wished to remain anonymous, said the university’s approach felt invasive and unjust. “They don’t have a search warrant to request your search history, but when you’re facing the cost of having to repeat a unit, you just do what they want,” the student said. Despite the extensive evidence the student provided, the university’s case against them rested on a single AI-generated report that flagged 84 percent of their essay as AI-generated.
Students who tried to engage with ACU to complain about the academic misconduct process found their concerns either ignored or dismissed. In a letter from ACU’s complaints, conduct, and appeals team, one student was told that a 10-week investigation period was “well within the timeframe allowed.” The letter cited high volumes of referrals and insufficient staff, issues that students repeatedly brought up in face-to-face meetings with university staff.
Broadley acknowledged that investigations were not always timely and said significant improvements had been made in the past 12 months. “We regret the impact this had on some students,” she said.
For years, universities have relied on software like Turnitin to detect plagiarism. In 2023, Turnitin added an AI detector to its toolkit. However, the company cautioned that AI reports may not always be accurate and should not be used as the sole basis for adverse actions against students. Turnitin recommends further scrutiny and human judgment to determine whether misconduct has occurred. Despite this, email chains and documentation obtained by the ABC indicate that ACU often relied on the AI report alone.
Internal ACU documents show that the university was aware of issues with the AI detector tool for over a year before it was abandoned in March. While ACU no longer uses Turnitin, frustration and anxiety over accusations of AI-related misconduct still feature prominently on student social media pages.
At ACU, students are not the only ones suffering; staff are also struggling to keep up with the rapidly changing landscape of AI. “Staff are struggling to keep up,” said ACU academic and National Tertiary Education Union vice-president Leah Kaufmann, who says the university has placed a significant AI-related burden on staff despite their limited knowledge and few resources.
Broadley said that this year, ACU introduced new modules on ethical AI use for all staff and students. However, the low AI literacy among academics and the constantly changing policies have created confusion for everyone involved.
Across Australia, universities are grappling with AI-related misconduct. Danny Liu, a professor of educational technology at the University of Sydney, believes that banning AI is the wrong approach. “Academics are teachers, not police,” Liu said. “And yet the focus has always been control, restrict, detect. If a student knows how to use AI, that’s useless.”
Liu was part of the team that pioneered the University of Sydney’s “two-lane” system, allowing AI use in certain assessments. This approach aims to teach students how to use AI responsibly and ethically, rather than treating it as a forbidden tool.
As universities continue to navigate the challenges of AI in academic settings, the experiences of students like Madeleine highlight the need for a more balanced and fair approach to academic integrity.
Q: What is the main issue at ACU?
A: The main issue at ACU is the misuse of AI technology to accuse students of academic misconduct, leading to false accusations and significant stress for students.
Q: How many students were falsely accused?
A: The exact number is unclear. ACU registered nearly 6000 cases of alleged academic misconduct in 2024, about 90 percent of them related to AI use; roughly one-quarter of all referrals were dismissed following investigation, and many students were cleared of any wrongdoing.
Q: What is the university's response to these accusations?
A: The university acknowledges that the figures were “substantially overstated” and that there were issues with the AI detection tool. It has since introduced new modules on ethical AI use for all staff and students.
Q: What is the 'two-lane' system at the University of Sydney?
A: The 'two-lane' system at the University of Sydney allows AI use in certain assessments, aiming to teach students how to use AI responsibly and ethically.
Q: How can students protect themselves from false accusations?
A: Students can protect themselves by maintaining detailed records of their work, including handwritten notes and search histories, and by staying informed about their university's policies on AI use.