
When AI Gets It Wrong: What Deloitte’s Slip-Up Means for HR

  • jrezvani
  • Oct 9
  • 2 min read

What Happened

Deloitte Australia recently made headlines after using generative AI (GPT-4o via Azure OpenAI) to help draft a government report. The issue? The final document included fabricated citations and even a made-up court quote.


After the inaccuracies were discovered, Deloitte agreed to partially refund its $440,000 fee, and the department republished the corrected report.



Why HR Should Care

AI is no longer confined to tech or marketing. It now touches every corner of organizational life, including policy writing, research, client deliverables, and even internal board reports.


When AI tools are used without clear rules, human oversight and disclosure, the risks multiply:

  • Inaccurate outputs

  • Brand and reputational damage

  • Erosion of trust with funders, boards and the public


The Deloitte incident is a wake-up call that "human in the loop" isn't optional; it's essential.



Your Near-Term HR Playbook

Here's how HR can help the organization stay smart, safe, and transparent with AI-assisted work:


1) Set an AI Use Standard (org-wide)

  • Define approved tools and what data types are permitted with each.

  • Require disclosure for any AI-assisted client, board or regulator-facing content.

  • Prohibit AI from inventing references. Every citation needs a verifiable source link or archived copy.

  • Assign human accountability: a named reviewer must sign off before publication.


2) Build a Lightweight AI QA Checklist

  • Verify facts against primary sources.

  • Trace all quotations to the original documents.

  • Re-calculate figures manually or with a secondary tool.

  • Run a “red flag” scan for hallucinations, fake footnotes or out-of-scope claims.


3) Train Managers and Teams

  • Offer a 60-minute micro-session on when and how to use AI responsibly.

  • Use your own templates (policy memos, job analyses, board updates) for hands-on scenario practice.


4) Turn On Logging

  • Save prompts, versions and reviewer sign-offs alongside final deliverables.

  • This simple step creates an audit trail if questions arise later.
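For teams that want to operationalize this, the audit trail can be as simple as an append-only log file kept next to the deliverables. Below is a minimal sketch in Python; the function name, field names, and file layout are illustrative assumptions, not a prescribed standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_use(deliverable: str, prompt: str, model: str,
               reviewer: str, log_dir: str = "ai_audit_logs") -> Path:
    """Append one audit record for an AI-assisted deliverable.

    Illustrative fields only: adapt to your own policy. Returns the
    path of the JSON Lines log file the record was appended to.
    """
    record = {
        # UTC timestamp so records sort consistently across time zones
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deliverable": deliverable,    # e.g. "Q3_board_update_v2.docx"
        "prompt": prompt,              # the prompt text actually used
        "model": model,                # approved tool/model name
        "reviewer_signoff": reviewer,  # named human reviewer
    }
    path = Path(log_dir)
    path.mkdir(exist_ok=True)
    log_file = path / "audit_log.jsonl"
    # Append-only JSON Lines: one record per line, never overwritten
    with log_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return log_file
```

A one-line call at publication time (for example, `log_ai_use("board_update.docx", prompt_text, "gpt-4o", "J. Smith")`) is enough to answer "who used what, when, and who approved it" months later.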


5) Update Vendor and Freelancer Clauses

  • Require all third parties to disclose any AI use.

  • Include QA expectations and indemnities for fabricated content in their deliverables.





Sample Policy Language: AI-Assisted Work Standard

“Employees may use approved AI tools to support drafting, analysis and summarization for internal work only where appropriate. Any content prepared for clients, regulators, funders or the Board that uses AI assistance must be disclosed as such and must undergo human fact-checking and source verification before release. AI outputs must not include unverified claims, fabricated citations or quotes that cannot be traced to an original source. The accountable author and reviewer are responsible for accuracy and compliance with this policy.”


Suggested Disclosure Line for Reports

“This document includes AI-assisted drafting and analysis. All findings, references and quotations have been independently verified by the author.”



The Bottom Line

AI can accelerate work, but it can also amplify errors if left unchecked. HR's role is to define the guardrails that keep innovation credible and compliant. By setting clear standards, training teams, and enforcing human review, you're not just preventing a Deloitte-style headline; you're protecting your organization's integrity.

 
 