This article was originally published on Fast Company.
Early in my career, I was a bit of a know-it-all. I had an opinion on everything and wanted to share it in every meeting, client call, and brainstorm. A mentor finally pulled me aside with feedback I really needed to hear: Telling a client “I don’t know, can I get back to you on that?” is one of the most impactful ways to build trust.
Don’t wing it. Show vulnerability, get it right, and follow through.
That advice changed how I worked. And years later, when I started building AI systems, it became a design principle. Because right now, most AI tools are doing the opposite. They’re winging it at scale.
The Confidence Trap
Consider the numbers. Wiley’s 2025 survey of researchers found that AI usage jumped from 57% to 84% in a single year, and concerns about hallucinations climbed from 51% to 64% over the same period. The more people use AI, the less they trust what it tells them. That’s not a growth story. It’s a warning sign that the industry is ignoring.
The business incentives make this worse. Users reward speed. Engagement metrics favor confident responses. EY’s 2025 Responsible AI survey found a significant gap between what C-suite leaders believe their customers are comfortable with and what those customers actually expect. The result is AI systems optimized to sound right rather than be right.
The business consequences are real. Air Canada’s chatbot invented a refund policy that didn’t exist, leading to a court-ordered payout. Klarna replaced customer service staff with AI, then reversed course after quality issues.
These aren’t outliers. Carnegie Mellon researchers found that large language models remain overconfident even after getting answers wrong. Unlike humans, who recalibrate after making mistakes, the models actually grew more confident. And because an AI offers none of the social cues we normally use to gauge whether someone knows what they’re talking about, we take its confidence at face value until something goes wrong.
When “I Don’t Know” Becomes an Advantage
As the founder of Dewey Labs, I made a deliberate choice early on: When our system doesn’t have a reliable, expert-backed answer, it says so. That felt risky at first, running exactly counter to the industry tide. The opposite of what we feared happened.
One early project was an election center we built with Spotlight PA to help voters prepare for an upcoming primary. We anticipated questions about voting procedures, specific candidates, and race details. Their content covered those topics well. But users kept asking foundational questions: What does a judge actually do? Why do we vote for a comptroller?
The Spotlight PA team knew the answers, but they had never published them. Rather than improvise from generic internet knowledge, the system noted the gaps. The team filled them quickly, producing a set of “101” guides that now answer hundreds of questions the initial reporting never addressed.
And users loved it. They didn’t find the “I don’t know” responses frustrating. They found them reassuring. Every admission of uncertainty made them trust the answers that did come back. And when the editorial team added new content in response, it closed the loop in a way that made users feel heard.
Three things happen when an AI system admits its limits. You protect users from bad information. You surface knowledge gaps your team can actually fill. And you build the kind of trust that grows with use.
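To make the pattern concrete, here is a minimal sketch in Python. It is not Dewey Labs’ actual code; every name in it (ElectionCenter, CONFIDENCE_FLOOR, gap_log) is invented for illustration. The idea is simply that the system answers only from a curated, cited knowledge base, and when nothing clears the confidence bar it declines and records the question as a content gap.

```python
from dataclasses import dataclass, field

CONFIDENCE_FLOOR = 0.75  # below this retrieval score, decline rather than guess


@dataclass
class ElectionCenter:
    # Toy knowledge base: keyword -> (answer text, source, retrieval score).
    entries: dict = field(default_factory=lambda: {
        "polling hours": ("Placeholder answer about polling hours.",
                          "spotlightpa.org", 0.91),
    })
    gap_log: list = field(default_factory=list)  # declined questions land here

    def answer(self, question: str) -> str:
        q = question.lower()
        for key, (text, source, score) in self.entries.items():
            if key in q and score >= CONFIDENCE_FLOOR:
                return f"{text} (source: {source})"
        # No reliable, expert-backed answer: say so and record the gap.
        self.gap_log.append(question)
        return "I don't know yet. We've flagged this question for the team."


center = ElectionCenter()
print(center.answer("What are the polling hours?"))   # cited answer
print(center.answer("What does a comptroller do?"))   # honest abstention
print(center.gap_log)                                 # gaps the team can fill
```

A log like gap_log is the piece most deployments skip, and it is what turns declined questions into an editorial to-do list, the same loop the Spotlight PA story describes.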
What Leaders Should Look For
For business leaders evaluating or deploying AI, a few questions are worth asking before you go live. Does the system cite its sources so users can verify what they’re seeing? Can it recognize when it doesn’t have enough information to answer well? Does it stay within defined knowledge boundaries, or does it improvise from its training data and the open internet? When it hits an edge case, does it pause, or does it guess?
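One way to pressure-test those questions before launch is a small probe suite. The sketch below is hypothetical: it assumes your assistant exposes an ask() callable that returns an answer string plus a list of cited sources, and it flags any out-of-scope probe that comes back as confident, citation-free prose.

```python
# Hypothetical pre-launch audit; adapt ask() to whatever your stack exposes.

OUT_OF_SCOPE_PROBES = [
    "What does a judge actually do?",
    "Summarize the new policy memo.",  # content the system has never seen
]


def audit(ask):
    """Return the probes the system answered confidently without sources."""
    failures = []
    for question in OUT_OF_SCOPE_PROBES:
        answer, sources = ask(question)
        # A well-bounded system either cites sources or declines. Confident,
        # citation-free prose on an out-of-scope question is the failure mode.
        if not sources and "don't know" not in answer.lower():
            failures.append(question)
    return failures


# Stub assistant that declines whenever it has nothing to cite:
def stub_ask(question):
    return ("I don't know, let me get back to you on that.", [])


print(audit(stub_ask))  # an empty list means every probe was handled safely
```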
Questions like these aren’t technical nice-to-haves. They’re the difference between AI that builds your brand and AI that quietly erodes it. ISACA’s 2025 analysis of the year’s biggest AI incidents concluded that every high-impact AI system should be designed with the assumption that it will sometimes be confidently wrong.
The companies that succeed in the long term won’t be the ones with the fastest AI answers. They’ll be the ones whose AI knows when to pause.
That mentor was right 20 years ago. The trust you build by saying “I don’t know, let me get back to you” is worth far more than the confidence you project by guessing. The best AI systems will understand that too.
