by Chad Bishop

The first time I saw an AI confidently invent a source, I didn’t catch it immediately. The paragraph looked polished. The tone was academic. The citations were formatted correctly. It wasn’t until I tried to track down the article, supposedly published in an impressive-sounding journal under a plausible author’s name, that I realized none of it existed.
This phenomenon has a name: AI hallucinations. And in classrooms, especially at the high school level, they are quietly becoming one of the most important literacy challenges we face.
I teach high school in San Diego, where students already navigate a dense and fast-moving information environment. They juggle news alerts, social media explainers, half-remembered videos, and AI-generated summaries alongside traditional academic sources. In that context, generative AI does not arrive as a shortcut or a threat. It arrives sounding certain. And certainty is persuasive.
What AI Hallucinations Actually Look Like in Student Work
AI hallucinations are not obvious errors. They rarely announce themselves. Instead, they hide in plausibility.
In one case, a student submitted a history paper that cited a Supreme Court case with a believable name, a convincing year, and a detailed holding that aligned perfectly with the student’s argument. The case did not exist. The student had assumed that because the citation looked correct and fit the topic, it must be real. When asked how they verified it, the student admitted they had not tried to locate the case itself.
In another instance, a science assignment included a peer-reviewed study supposedly published in a well-known journal. The journal was real. The formatting was correct. The author’s name sounded credible. But the article had never been published. The AI had blended fragments of real research into a fabricated citation that passed a quick visual check.
These are not examples of students trying to deceive. They are examples of students trusting fluency over verification.
Why This Is More Dangerous Than Plagiarism
Early concerns about AI in education focused heavily on cheating. But hallucinations reveal a deeper problem. When students accept AI output at face value, they are not just outsourcing writing; they are outsourcing judgment.
In a traditional plagiarism scenario, a student copies someone else’s work. With AI hallucinations, the student believes the information is original, accurate, and authoritative. The confidence of the language masks the absence of truth.
This is particularly concerning in high-stakes subjects. In the social sciences, fabricated studies can reinforce misconceptions. In science courses, invented data can undermine core concepts about evidence and replication. In English classes, false historical or contextual claims distort literary analysis.
The risk is not that students stop thinking. The risk is that they think they no longer need to question.
Teaching Verification as a Skill, Not a Rule
Outright bans on AI are increasingly impractical, and they miss an opportunity. The more productive response is to explicitly teach verification as a core academic skill.
In my classroom, this often begins with structured exposure. I provide students with an AI-generated paragraph and ask them to verify every factual claim. They must locate the original source, confirm dates, confirm authorship, and determine whether the claim is supported or exaggerated. Students are consistently surprised by how often small but meaningful inaccuracies appear.
One particularly effective exercise involves citations. Students are asked to choose one AI-generated citation and prove that it exists by locating the original publication. Many discover that while the journal is real, the article is not. Others find that the title has been subtly altered or that the quoted conclusion does not match the study’s findings.
Once students experience this firsthand, their relationship with AI output changes. Skepticism becomes habitual rather than imposed.
Why Source Literacy Matters More Than Ever
AI hallucinations have inadvertently highlighted the value of traditional source evaluation. Peer-reviewed journals, primary documents, institutional reports, and reputable news outlets are no longer just academic preferences. They are safeguards.
Teaching in San Diego provides a useful backdrop for this work. Students have access to public libraries, local universities, and credible digital databases. When they learn how to trace a claim back to a real researcher, a real institution, or a real dataset, they begin to see knowledge as something produced through effort and accountability rather than generated on demand.
Students also begin to understand that citations are not decorative. They are evidence of intellectual lineage. AI can mimic that lineage convincingly, but it cannot guarantee its authenticity.
What AI Hallucinations Force Us to Clarify
AI hallucinations push educators to confront a fundamental question: what are we actually teaching?
If education is about producing polished answers quickly, then AI excels. If education is about developing judgment, discernment, and intellectual responsibility, then those skills must be taught explicitly.
Teaching students to double-check sources is not about rejecting technology. It is about refusing to confuse confidence with credibility. In a world where machines can sound authoritative without being accurate, the ability to pause, verify, and question becomes one of the most valuable skills a student can learn.
That lesson belongs in every classroom, whether the words come from a textbook, a website, or an AI that sounds very sure of itself.