How to Check AI's Work
The Verification Mindset
In Lesson 1.2, you saw all the ways AI can fail. Now for the good news: every single one of those failures is catchable. You just need a system.
Think of it this way: when a friend tells you something surprising — "I heard they're canceling summer break" — you don't just believe it. You check. You look it up. You ask someone else. That same instinct is exactly what you need when working with AI. The difference is that AI sounds so confident that it's easy to skip the checking step. That's the trap.
Professional builders — developers, writers, analysts, designers — who use AI successfully all share one habit: they verify before they trust. Not because they think AI is bad, but because they know it's a prediction engine, and predictions aren't always right.
Your goal isn't to distrust everything AI produces. That would make it useless. Your goal is to build a verification reflex — an automatic habit of checking the things that matter before you use them.
Three Strategies That Catch Almost Everything
Used together, three strategies catch the vast majority of AI errors:
Strategy 1: Cross-Reference
If AI tells you a fact, check it against a second source. Search for it. If it appears in multiple reliable places, it's likely solid. If you can't find it anywhere, treat it as suspicious. Best for catching: Hallucinations, fabricated citations, wrong dates and names.
Strategy 2: Logic-Test
Read the AI's output and ask: does this make sense? Do conclusions follow from evidence? Are there contradictions? Best for catching: Reasoning errors, contradictions, math mistakes.
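A logic-test can sometimes be made concrete with quick arithmetic. As a hypothetical sketch (the membership numbers below are invented for illustration), suppose AI summarizes data and also states a percentage change — you can recompute the claim yourself:

```python
# Hypothetical sketch: recompute a percentage-change claim
# instead of trusting the AI's stated figure.

def percent_change(old: float, new: float) -> float:
    """Actual percent change from old to new."""
    return (new - old) / old * 100

# Suppose AI wrote: "Membership grew from 200 to 260 students -- a 20% increase."
claimed = 20.0
actual = percent_change(200, 260)  # (260 - 200) / 200 * 100 = 30.0

if abs(actual - claimed) > 1:
    print(f"Logic-test failed: claimed {claimed:.0f}%, actual {actual:.0f}%")
else:
    print("Claim checks out")
```

The point isn't the code itself — it's the habit of redoing the math rather than accepting the stated figure.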
Strategy 3: Perspective-Check
Ask: whose perspective is this written from? What voices might be missing? Does this feel one-sided? Best for catching: Bias, one-sided information, stereotypes, missing perspectives.
The Verification Triangle
A triangle with three points — Cross-Reference, Logic-Test, Perspective-Check — surrounding the center: Trusted Output.
When to Trust, When to Verify, When to Reject
Not every AI output needs the same level of checking:
Trust (with a quick scan)
When AI is doing creative/generative work where you're the judge. Brainstorming ideas, drafting text you'll edit.
Verify carefully
When AI states facts, provides data, makes recommendations, or produces code. Anything others will see or you'll rely on.
Reject and redo
When something feels off — obvious errors, contradictions, unverifiable claims. Don't fix bad output; give AI better instructions and try again.
Building Your Personal Verification Habit
- Pause before you use — every time you're about to use AI output, take three seconds: "Does this need checking?"
- Flag your weak spots — notice where you tend to skip verification.
- Make it part of your workflow — every project checkpoint will include verification steps.
Key Concepts
- Verification is a professional skill, not distrust
- Cross-reference catches hallucinations
- Logic-test catches reasoning errors
- Perspective-check catches bias
- Match verification effort to stakes
Try It: Trust or Bust
Review AI-generated claims and decide: TRUST, VERIFY carefully, or BUST (reject and redo). You'll encounter 12 claims across science, history, math, recommendations, and code.
Check Your Understanding
1. AI claims the human heart beats 120,000 times per day. Which verification strategy should you use first?
Explanation: Cross-reference is the right tool for fact-checking. Search for "heart beats per day" and you'll quickly verify whether the number is accurate or hallucinated.
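A quick logic-test can back up the cross-reference here. As a sketch (the 70 beats-per-minute resting rate is a typical textbook figure, not part of this lesson), you can estimate the daily total yourself:

```python
# Sanity-check "the heart beats 120,000 times per day"
# using a typical resting heart rate of ~70 beats per minute (an assumption).
beats_per_minute = 70
beats_per_day = beats_per_minute * 60 * 24  # 70 * 1,440 minutes = 100,800

claimed = 120_000
print(f"Estimate: {beats_per_day:,} beats/day vs claim of {claimed:,}")
# The estimate (~100,000) falls noticeably below the claim,
# so the figure deserves a careful cross-reference before reuse.
```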
2. AI generates a pros/cons list for a new school policy, but heavily favors one viewpoint. Which strategy catches this?
Explanation: Perspective-check is designed specifically to catch bias and one-sided information. It asks "whose voice is missing?" and helps you identify when an argument is skewed.
3. You ask AI to brainstorm 10 creative names for a school club. How much verification is needed?
Explanation: Creative outputs where you're the final judge need minimal verification — just a quick scan. The goal is to help you brainstorm, and you'll naturally evaluate the ideas.
4. AI says "Solar panels convert heat from the sun into electricity." What's wrong with this, and which strategy catches it?
Explanation: The claim confuses heat with light — photovoltaic panels convert light from the sun into electricity, not heat. Logic-test catches this: you read the claim, ask "does this make physical sense?" and notice the distinction. Cross-reference would confirm the correct answer.
5. What's the most important verification habit to build?
Explanation: The foundational habit is the three-second pause. Not everything needs the same verification effort. By asking "does this need checking?" you match your effort to the stakes — that's professional-quality work.
Reflect & Write
Think about a project you might build. Which verification strategy would be most important for your work? What type of AI error would be especially damaging to your project if you missed it?
Project Checkpoint
No formal project action yet. But you now have three tools for every checkpoint: Cross-reference facts, Logic-test code and reasoning, Perspective-check content others will see.
Helpful Resource
Tool Verification Report (PDF) — A structured template for documenting your verification checks on AI-generated content. Use it whenever you need to fact-check AI output for a project.
Find this and all other resources on the Dashboard Resources page.
Level Up: Coming Next
Lesson 1.4 — Your Project Starts Here. You've learned how AI works, where it fails, and how to check it. Now it's time to choose what to build.
Continue to Lesson 1.4 →