It's 2am and you're searching for fever guidance for your three-month-old. The AI gives you the standard answer: contact your pediatrician. It's medically defensible. It's what every mainstream source says. It's also not what you came for. The data-driven pediatrician you follow has a more specific position: for healthy full-term infants above a certain age and weight, evidence-based acetaminophen dosing is safe, and the reflexive "call your doctor" directive produces unnecessary ER visits. The AI answered a question you didn't ask. You wanted your trusted expert's read. You got the internet average.
That gap - between what the AI said and what the expert actually says - is what most people would call "working correctly." The answer wasn't wrong, but by our definition, it was a hallucination.
What the Industry Calls a Hallucination
Ask a dozen AI companies what a hallucination is and you'll get the same answer. IBM defines it as "confident responses that don't seem justified by training data." Google Cloud calls it output that "sounds plausible but is incorrect." Wikipedia summarizes it as "content that is nonsensical or unfaithful to the provided source."
The definitions converge on one axis: factual accuracy - a fake court case, an invented statistic, a product that doesn't exist. The industry's working definition catches those failures but misses the biggest failure mode for experts: answers that are defensible in general but silently replace the expert's voice with the internet average.
A pediatrician who recommends evidence-based Tylenol dosing doesn't need AI to repeat the mainstream position back. A financial advisor with a specific allocation philosophy doesn't need AI to serve up the textbook take. Their value is their position.
Most of what audiences go to experts for is judgment: intuition, synthesis, heuristics, the benefit of having seen similar cases recently. None of it is falsifiable. The standard definition operates entirely on the falsifiable slice.
What We Call a Hallucination
We use a stricter definition.
A hallucination is any output drawn from outside the expert's knowledge corpus, regardless of whether the claim is objectively true. We don't check truth; we check alignment.
This shifts the question from "Is this factually correct?" to "Did this come from the expert's actual body of work?" A true statement sourced from the internet average is still a hallucination under this definition. It didn't come from the expert; it came from everywhere else.
When an AI system answers on behalf of an expert using knowledge they never published, it's speaking out of turn. The claim might be true, and it might even be something the expert would agree with, but the audience didn't ask the internet. They asked the expert. That distinction is the founding idea behind Dewey.
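Concretely, the check inverts. A truth check scores a claim against the world; an alignment check scores it against the expert's published passages and nothing else. Here's a toy sketch in Python - every name in it (expert_corpus, is_grounded, the 0.6 threshold) is illustrative, and plain token overlap stands in for the retrieval and entailment models a real system would use:

```python
# A toy alignment check. All names and numbers here are illustrative,
# not any production system's; token overlap is a crude stand-in for
# real retrieval and entailment models.

def tokens(text: str) -> set:
    """Lowercase word set from a piece of text."""
    return {w.strip(".,?!\"'").lower() for w in text.split()}

def support(claim: str, passage: str) -> float:
    """Fraction of the claim's words found in one published passage."""
    claim_words = tokens(claim)
    return len(claim_words & tokens(passage)) / max(len(claim_words), 1)

def is_grounded(claim: str, expert_corpus: list, threshold: float = 0.6) -> bool:
    """Grounded means supported by something the expert actually
    published. Note what is never checked: whether the claim is true."""
    return any(support(claim, p) >= threshold for p in expert_corpus)

expert_corpus = [
    "For healthy full-term infants over three months, weight-based "
    "acetaminophen dosing is a safe first step for low-grade fever.",
]

# A true, defensible, internet-average claim fails the check:
print(is_grounded("Contact your pediatrician or visit the ER.", expert_corpus))  # False
# An expert-specific claim passes:
print(is_grounded("Weight-based acetaminophen dosing is a safe first step.", expert_corpus))  # True
```

The important part is what's missing: nothing ever consults an outside source of truth. Swap in a different expert's corpus and the same true sentence flips between grounded and hallucinated.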
What This Looks Like in Practice
AI responses have altitude. A high-altitude answer is the internet average: safe, defensible, not personalized. A low-altitude answer is expert-specific: actionable, voiced, useful precisely because it reflects a particular point of view.
Most AI systems default to high altitude. They give you the WebMD answer, the textbook take, the disclaimer-wrapped version. That's fine for casual queries. For expert audiences, altitude is the problem.
The Pediatrician
A parent asks an AI system built on a specific pediatrician's content: "My three-month-old has a 100.8°F fever. What should I do?"
High altitude: "Contact your pediatrician or visit the emergency room. Fevers in young infants can be serious."
Low altitude: "For a healthy, full-term infant at three months with a low-grade fever and no other symptoms, acetaminophen at the evidence-based, weight-appropriate dose is a reasonable first step. Here's when to escalate."
The high-altitude answer isn't wrong; it's what every mainstream parenting resource says. The parent wanted this pediatrician's specific guidance. They got the internet average.
The Finance Writer
An independent finance writer has built a following around a specific retirement thesis: high-earning early retirees should run aggressive Roth conversions during their low-income years, against the conventional "wait and see" advice.
A subscriber asks: "I just retired at 58 with $3M in a traditional IRA. Should I start Roth conversions now?"
High altitude: "Roth conversions can make sense for some retirees. Consider your tax bracket, future income needs, and estate plans. Consult a financial advisor."
Low altitude: "At 58 with $3M in a traditional IRA and no other income, this is the best Roth-conversion window you will ever have. Convert aggressively up to the top of the 24% bracket for the next seven years. Here is how the math plays out against a do-nothing baseline."
The high-altitude answer is the internet average on Roth conversions; the low-altitude answer is the position she's spent years defending. Her audience paid for the second one.
The Investigative Reporter
An investigative reporter has spent two years covering financial fraud in a specific industry. She's published dozens of pieces with original sourcing and built an analytical framework her readers rely on.
A reader asks: "What's driving the recent enforcement actions in this sector?"
High altitude: "Regulatory agencies have increased scrutiny of this industry due to growing concerns about compliance failures."
Low altitude: "Three factors are driving enforcement right now: the pattern of self-reporting delays I documented in March, the whistleblower pipeline that accelerated after the settlement, and the new enforcement chief's stated priority shift. Here's how they connect."
One answer is the internet average. The other sounds like the reporter.
What This Means if Your Reputation Is on the Line
If your audience comes to you because of your specific position, the industry's hallucination bar is not high enough.
The standard definition protects against fabrication but not dilution. It doesn't catch the moment your AI takes your nuanced, experience-driven position and quietly swaps it for the safe, internet-average version.
Your audience chose you for a reason. Your AI should reflect that reason back - or say nothing at all. When it doesn't, even when it's technically correct, something has been lost.
