Published Date: 14/10/2025
Artificial Intelligence (AI) is an immensely powerful tool that has transformed how industries work, enabling rapid advances in sectors from healthcare to finance. Like any technology, however, AI is not flawless: its capacity for errors and bias raises serious ethical questions, and it makes headlines not just for its innovations but also for its failures and problematic impacts. Here is a list of recent incidents that highlight critical concerns around the trustworthiness, economic disruption, and systemic risks of AI.
Deloitte Australia's costly mistake | In a high-profile case exposing the risks of AI, consulting giant Deloitte was forced to issue a partial refund on a $440,000 report it produced for the Australian government's Department of Employment and Workplace Relations (DEWR). The report, prepared with the help of generative AI, included fabricated academic citations and false references. The errors were first flagged by University of Sydney academic Dr Christopher Rudge, who said the report contained ‘hallucinations’, instances where AI fills in gaps, misinterprets data, or guesses at answers.
‘Instead of just substituting one hallucinated fake reference for a new “real” reference, they’ve substituted the fake hallucinations and in the new version, there’s like five, six or seven or eight in their place,’ he was quoted as saying by The Guardian. In its updated version, Deloitte added a disclosure about the use of generative AI to the report's appendix, noting that a part of the work 'included the use of a generative artificial intelligence (AI) large language model (Azure OpenAI GPT-4o) based tool chain licensed by DEWR and hosted on DEWR’s Azure tenancy.'
IMF and Bank of England warn of an AI bubble | The International Monetary Fund and the Bank of England are the latest financial institutions to warn that soaring, AI-driven stock market valuations may be a bubble. IMF chief Kristalina Georgieva advised investors, ‘Buckle up: uncertainty is the new normal and it is here to stay.’ Back in 2024, Georgieva had warned that AI could disrupt nearly 40% of jobs worldwide, deepening economic inequality.
The Apple Card controversy | Soon after the Apple Card's launch, customers reported that men were receiving significantly higher credit limits than women. A series of viral tweets by tech entrepreneur David Heinemeier Hansson in 2019 drew public attention to the issue and triggered a regulatory investigation. Apple co-founder Steve Wozniak later tweeted that the same thing had happened to him and his wife. Hansson argued that the episode shows how algorithms, not just people, can discriminate: 'Apple Card is a sexist program. It does not matter what the intent of individual Apple reps are; it matters what THE ALGORITHM they've placed their complete faith in does. And what it does is discriminate.'
Cigna's automatic denials | A lawsuit against the US health insurer Cigna alleges that the company used an AI algorithm to process claims, denying hundreds of thousands of them without meaningful review. 'Relying on the PXDX system, Cigna's doctors instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills. The scope of this problem is massive. For example, over a period of two months in 2022, Cigna doctors denied over 300,000 requests for payments using this method, spending an average of just 1.2 seconds "reviewing" each request,' the suit alleged. The case deepens concerns about AI replacing human judgment in healthcare and beyond.
Air Canada chatbot's bad advice | After being sued over its chatbot's incorrect advice about a bereavement fare, Air Canada was ordered by a Canadian tribunal to pay the customer $812 in damages. The tribunal rejected Air Canada's argument that the chatbot was a separate legal entity, ruling instead that the airline was accountable for all information on its website. 'While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot,' Civil Resolution Tribunal (CRT) member Christopher Rivers said.
Q: What was the major issue with Deloitte's AI-generated report?
A: Deloitte's AI-generated report for the Australian government included fabricated academic citations and false references, leading to a partial refund and public scrutiny.
Q: What did the IMF and Bank of England warn about AI?
A: The IMF and Bank of England warned about the potential for an AI bubble in the stock market and the economic disruption AI could cause, potentially affecting nearly 40% of jobs worldwide.
Q: What was the controversy surrounding the Apple Card?
A: The Apple Card was criticized for allegedly providing higher credit limits to men compared to women, highlighting algorithmic discrimination.
Q: What was the lawsuit against Cigna about?
A: Cigna was sued for using AI algorithms to automatically deny over 300,000 insurance claims, spending an average of just 1.2 seconds reviewing each request.
Q: Why did Air Canada have to pay damages related to its chatbot?
A: Air Canada was ordered to pay a customer damages of $812 after its chatbot provided incorrect advice on a bereavement fare, and the airline was held accountable for the chatbot's actions.