Deloitte Faces Controversy Over AI-Generated Report
Fake Documents Included in Report References
Global consulting firm Deloitte is facing controversy after a report it prepared for the Australian government was found to cite fake court rulings and other fabricated documents generated by OpenAI's artificial intelligence (AI).
According to the Financial Times (FT) and the Guardian on October 7 (local time), the Australian Department of Employment and Workplace Relations announced the previous day that Deloitte had agreed to partially refund its consulting fee after errors were found in some citations and references in the report it submitted to the government.
In December last year, the Department of Employment and Workplace Relations commissioned Deloitte to produce a report evaluating flaws in the welfare system that disadvantage job seekers, at a cost of 439,000 Australian dollars (approximately 410 million won). However, after the report was released in July, academics and local media pointed out numerous errors. The report cited nonexistent, fabricated documents in its footnotes and references, and even quoted rulings falsely attributed to Australian courts.
In response, Deloitte recently submitted a revised version of the report, removing 14 of the 141 problematic references as well as falsified quotations in the main text. The revision also disclosed that, during the report's preparation, Deloitte had partially used a tool based on GPT-4o, OpenAI's large language model (LLM).
Deloitte stated in the revised version that it had corrected the errors, but emphasized, "This (report) update does not affect the substantive content, findings, or recommendations of the report." The Australian government explained that the content and recommendations of the report remain unchanged and that details such as the refund amount will be disclosed once the transaction is finalized.
The FT described the incident as a clear example of the risk of "hallucination," the phenomenon in which AI generates fictional information or content that does not actually exist. Christopher Rudge, a professor at the University of Sydney Law School who pointed out the errors, told local media, "The very foundation of the report is flawed; it was never fit to be published, and its methodology is unprofessional, so the recommendations cannot be trusted."
Deborah O'Neill, an Australian senator who previously oversaw parliamentary investigations into the integrity of consulting firms, told the Guardian, "Anyone seeking to contract with these firms must ensure they know exactly who is carrying out the work they are paying for, and whether their expertise and use of AI have been properly vetted," adding, "It might be better to just subscribe to ChatGPT instead of hiring a large consulting firm."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.