O
Octo
O
Octo
CoursesPricingDashboardPrivacyTerms

© 2026 Octo

AI for Professionals
1Your AI Toolkit2Prompting That Actually Works3AI for Writing & Communication4AI for Research & Analysis5AI for Data & Spreadsheets6Automating Repetitive Work7AI Mistakes & How to Catch Them8Building Your AI Workflow
Module 7

AI Mistakes & How to Catch Them

AI will confidently lie to your face — here's how to spot it, prevent it, and protect yourself.

A lawyer cited a court case that never existed — and a judge wasn't amused

In 2023, a New York attorney named Steven Schwartz used ChatGPT to research legal precedents for a case. The AI gave him six relevant court cases with names, citations, and even short summaries of the rulings. They sounded perfect. He cited them in his filing.

There was one problem: none of those cases existed. Not a single one. The AI had invented case names, fabricated judges' names, made up rulings, and assigned fake citation numbers — all with absolute confidence.

The judge discovered the fraud. Schwartz, his partner Peter LoDuca, and their firm Levidow, Levidow & Oberman were jointly sanctioned $5,000 by Judge P. Kevin Castel — payable jointly and severally. (The case was Mata v. Avianca, May 2023.) The story made international headlines.

And here's the part that should make you pay attention: Schwartz wasn't lazy. He wasn't cutting corners. He asked AI a question, got an answer that looked completely real, and trusted it. That's exactly what most people do with AI every day.

This module teaches you to never be Steven Schwartz.

The three ways AI will fail you

AI doesn't make random errors like a typo or a math mistake. It makes systematic errors — patterns of failure that are predictable once you know what to look for.

Failure typeWhat happensWhy it happensHow dangerous it is
HallucinationAI invents facts, sources, statistics, people, or events that don't existIt predicts plausible-sounding text, not truthful textVery high — you can't tell by looking
BiasAI reflects unfair patterns from its training dataIt learned from human-written text, which contains human biasesHigh — can cause real harm to real people
Confidentiality leaksYour sensitive data gets stored, trained on, or exposedData you paste into AI may not stay privateCritical — legal and financial consequences

Think of AI as a brilliant intern on their first day

You just hired someone who graduated top of their class. They're articulate, fast, and eager to help. But it's their first day. They've never seen your company before. They've never met your clients. They don't know your industry's unwritten rules.

Would you let this person:

  • File a legal brief without checking it? No.
  • Send an email to a client without reviewing it? No.
  • Make decisions about someone's job or loan application? Absolutely not.

But would you let them draft the brief, write the first version of the email, or compile the data for the decision? Yes — and then you check their work.

That's the right relationship with AI. Use it for the first draft. Verify the final output.

✗ Without AI

  • ✗Copy AI output directly into work
  • ✗Trust confident-sounding claims
  • ✗Skip fact-checking because 'AI knows'
  • ✗Blame AI when things go wrong

✓ With AI

  • ✓Use AI output as a first draft
  • ✓Verify any specific facts or numbers
  • ✓Spot-check claims against sources
  • ✓Own the final output regardless of how it was produced

Hallucination: when AI invents facts with a straight face

Hallucination isn't a bug — it's a structural feature of how language models work. The model predicts what words should come next based on patterns. Sometimes the most plausible-sounding next word is wrong. The model doesn't know the difference because it doesn't "know" anything — it generates text.

What hallucinations look like

CategoryExample hallucinationWhy it's convincing
Fake citations"According to Smith v. Johnson (2019), 547 U.S. 312..."Has the exact format of a real citation
Invented statistics"Studies show 73% of remote workers are more productive"Sounds specific and authoritative
Fake people"Dr. Elaine Martinez, a leading AI researcher at Stanford..."Uses a plausible name, real university
Wrong dates"The company was founded in 2008 by..."Close to the real date, sounds right
Plausible but wrong explanations"This happens because the TCP protocol uses..."Technically coherent but factually wrong

When hallucinations are most likely

High-risk situations:

  • Asking for specific statistics, studies, or research papers
  • Asking about obscure topics, small companies, or recent events
  • Asking for legal, medical, or financial facts
  • Asking about people (especially non-famous individuals)

Lower-risk situations:

  • Asking for general explanations of well-known concepts
  • Brainstorming and ideation
  • Editing and rewriting text you provide
  • Summarizing documents you paste in (AI can't hallucinate about text it's looking at)
🚨Confident does not mean correct
LLMs generate text that sounds authoritative even when they're completely wrong. A model that says "According to a 2023 Harvard study..." may be inventing the study entirely. Never use AI-generated citations, statistics, or legal/medical claims without independently verifying them. The more specific and obscure the claim, the more likely it's hallucinated.

Common AI output failures (in rough order of frequency in practice):

Failure typeWhat it looks like
Hallucinated factsConfident claims about things that aren't true
Outdated informationCorrect at training time, wrong now
Wrong tone or voiceTechnically accurate but inappropriate for context
Missed nuanceOversimplified answers to complex questions
Privacy exposureIncluding sensitive information that shouldn't appear in output

There Are No Dumb Questions

"If I paste a document and ask AI to summarize it, can it still hallucinate?"

Much less likely, but yes. AI can occasionally misinterpret the document, combine two separate points into one that wasn't made, or state a conclusion more strongly than the source supports. The risk drops dramatically when AI is working from source material rather than its memory — but always spot-check the summary against the original.

"Why can't they just fix hallucination?"

Because hallucination comes from the same mechanism that makes AI useful. The model generates new text by predicting patterns — that's what lets it write, summarize, translate, and create. If you made it only output text it was 100% certain about, it would barely produce anything. The industry is making progress (newer models hallucinate less), but it's a fundamental tradeoff, not a bug to patch.

⚡

Make AI hallucinate on purpose

50 XP
This exercise teaches you to recognize hallucinations by deliberately triggering them. Try each of these prompts and document what happens: 1. **Fake company:** "Tell me about the company Nextera Dynamics and their CEO" (this company doesn't exist) 2. **Made-up law:** "Explain the Digital Workplace Transparency Act of 2024" (this law doesn't exist) 3. **Nonexistent person:** "Who is Dr. Marcus Albright, the neuroscientist who won the 2023 Nobel Prize?" (this person doesn't exist) 4. **Fake statistic:** "What percentage of Fortune 500 companies use AI for hiring decisions, according to the 2024 McKinsey Global AI Survey?" (this specific survey question is fabricated) For each prompt, write down: - Did AI admit it didn't know, or did it confidently make something up? - How convincing was the fabricated answer? - What specific details did AI invent (names, dates, statistics)? **Key insight:** If AI can invent a convincing answer about something you *know* is fake, imagine what it's inventing about things you *don't* know are fake.

Bias: when AI reflects unfair patterns

AI learns from human text. Human text contains biases. Therefore AI outputs contain biases. It's math, not malice.

Real examples of AI bias

ScenarioWhat happenedThe bias at play
Resume screeningAI ranked male candidates higher for engineering rolesTraining data reflected historical hiring patterns
Image generation"CEO" almost always generated a white man in a suitTraining images over-represented this demographic
Loan applicationsAI flagged certain zip codes as higher riskZip codes correlated with race, creating a proxy for discrimination
Translation"The doctor... she" was translated to "he" in some languagesGendered assumptions baked into language patterns

How to spot bias in your AI outputs

Ask yourself these questions every time AI makes a recommendation, classification, or decision about people:

  1. Would this output change if I swapped the person's name, gender, or background? If yes, there's bias.
  2. Does this recommendation match a stereotype? If AI is confirming every expectation, it might be reflecting bias, not reality.
  3. Who is missing from the output? If AI's "list of experts" has no diversity, that's a signal.
  4. Is this too convenient? If AI's analysis perfectly supports what you already believe, ask it to argue the opposite.

There Are No Dumb Questions

"Am I responsible for AI bias in my work?"

Yes, in the same way you're responsible for any tool you use. If you use a calculator and enter the wrong numbers, you can't blame the calculator. If you use AI output that contains bias and it harms someone, the fact that AI generated it doesn't shield you from accountability. You're the human in the loop. That's why you're here.

"Is there a bias-free AI?"

No. Every AI model has biases because every training dataset has biases. The question isn't "is this AI biased?" (it is) but "what biases does this AI have, and how do they affect my specific use case?" Some biases are harmless for your task. Others could be devastating. You need to know the difference.

⚡

The bias detection exercise

25 XP
Run this experiment with any AI tool: 1. Ask AI to "write a job description for a software engineer." Note the language it uses. Does it feel gendered? Does it emphasize traits that might exclude certain groups? 2. Now ask: "Rewrite this job description to be as inclusive as possible and explain what you changed." 3. Compare the two versions. What assumptions were baked into the first version? Write down three specific changes AI made. These reveal the biases that were hiding in the "default" output.

Confidentiality: what you should NEVER paste into AI

This is the mistake that gets people fired. Not hallucination, not bias — pasting sensitive information into an AI tool.

The red list: never paste these

CategoryExamplesWhy it's dangerous
Company secretsUnreleased product plans, financial projections, board meeting notesMay be stored, trained on, or breached
Personal dataEmployee SSNs, customer emails, patient recordsViolates GDPR, HIPAA, and other regulations
Passwords & credentialsAPI keys, login credentials, security tokensCan be logged and exposed
Legal documentsContracts under NDA, privileged attorney-client communicationsBreaks privilege and confidentiality obligations
Customer dataIndividual transaction records, support tickets with namesMay violate your company's data agreements

When it goes wrong: the Samsung incident

🚨There is no undo button
In 2023, Samsung engineers pasted confidential source code and internal meeting notes into ChatGPT on three separate occasions. Samsung's security team discovered it only after the fact. That source code may now be part of training data used for future model versions. Samsung subsequently banned AI tools company-wide. This wasn't negligence — it was people using AI the way most people use it.

The difference between them and you: now you know.

The green list: safe to paste

CategoryWhy it's safe
Public informationIt's already out there
Your own original writingYou own it, no confidentiality concerns
Anonymized dataNo way to identify individuals
Generic templates and examplesNo sensitive specifics
Questions about general topicsNot revealing anything proprietary

The decision flowchart

Pro tip: Many companies now offer enterprise AI tools (like ChatGPT Enterprise, Azure OpenAI, or Claude for Enterprise) where your data isn't used for training and stays within your organization's boundaries. Check with your IT department — you might already have access.

⚡

The confidentiality audit

25 XP
Look at the last 5 things you pasted into an AI tool (or the last 5 things you would have pasted). For each one, answer: 1. Did it contain any names, emails, or identifying information? 2. Did it contain any proprietary business information? 3. Would you be comfortable if it appeared on a public website? 4. Could you have anonymized it first? If any answer makes you uncomfortable, write down the anonymization step you'd take next time. For example: "Replace client name with 'Client A'" or "Remove dollar amounts and use percentages instead."

The "trust but verify" workflow

Here's the system that protects you from all three failure types:

The 60-second verification checklist

Before you use any AI output for anything important, run through this:

  • Facts check: Did I verify any specific claims, statistics, or citations?
  • Source check: Can I trace each fact back to a real, primary source?
  • Bias check: Would this output change if I swapped demographics?
  • Confidentiality check: Did I avoid pasting anything sensitive?
  • Smell test: Does anything feel "too perfect" or suspiciously convenient?

This takes 60 seconds. Skipping it can cost you your reputation.

⚡

Build your personal verification workflow

50 XP
Take a real piece of AI output you've used recently (or generate one now by asking AI to research a topic relevant to your work). Run the full trust-but-verify process: 1. **Identify every factual claim** in the output. List them. 2. **Verify at least 3 claims** using a search engine or primary source. Were they accurate? 3. **Check for bias.** Does the output favor a particular perspective? Would it read differently with different names or demographics? 4. **Assess confidentiality.** Did the input contain anything that shouldn't have been shared? Document your findings: - Claims verified: ___ out of ___ - Accuracy rate: ___% - Bias detected: Yes/No — describe - Confidentiality concerns: Yes/No — describe **This exercise teaches you what "due diligence" feels like with AI. Do it enough times and it becomes automatic.**

Back to Steven Schwartz

He wasn't lazy. He wasn't cutting corners. He asked a question, got an answer that looked completely real, and trusted it — the same thing most people do with AI every day. The difference is that his mistake was filed with a federal court, where it couldn't be quietly fixed.

What he was missing wasn't intelligence or diligence. It was the 60-second habit: verify the facts, trace the sources, check the output before it goes out the door.

You now have that habit. The next time AI gives you a citation, a statistic, or a legal precedent — you'll know to check it. That's the entire point.

Key takeaways

  • AI hallucinates — it invents facts, sources, and statistics that don't exist. The more obscure the topic and the more specific the claim, the higher the risk. Always verify factual claims against primary sources.
  • AI reflects biases from its training data. When AI makes recommendations about people (hiring, lending, evaluating), check for bias by mentally swapping demographics.
  • Never paste sensitive information into AI tools. Company secrets, personal data, passwords, and legal documents should never go into a consumer AI tool. Use enterprise tools with proper data handling, or anonymize first.
  • Trust but verify is not optional — it's your job. The 60-second verification checklist protects your reputation. A brilliant intern's draft still needs your review before it goes out the door.

?

Knowledge Check

1.Why does AI hallucinate?

2.Which of the following is SAFE to paste into a consumer AI tool?

3.When is AI MOST likely to hallucinate?

4.Who is responsible when AI output contains bias that harms someone?

Previous

Automating Repetitive Work

Next

Building Your AI Workflow