It's New Year's Eve, and by now you're probably drowning in end-of-year recaps: things wrapped, crunched, and more. I'll spare you that, I promise.
But I did do some end-of-year analysis of my own. Last week, I asked Claude to analyze a year of conversation data for one of our partners. Ninety seconds later, I had an end-of-year roundup: most popular topics, most honest questions, most quirky requests.
Then I paused. How did it decide what makes a question "honest" versus "quirky"? Whose judgment was I seeing?
It's a bigger question than it sounds.
This Month's Take
That afternoon I spent last year doing the same analysis manually? It wasn't wasted time. It was an investment in building judgment.
When I saw "My kid is in love, what do I do?" in one partner's logs, I recognized it immediately as raw and vulnerable. "How can we avoid inviting the mean girl to a party?"—that's funny, slightly absurd, deeply human. And those repeated attempts to play tic-tac-toe? Claude might flag that as quirky. We know it's users testing our boundaries.
That knowledge is values in action. Judgment built from context Claude can’t access.
Here's the paradox: AI has made things that used to require human judgment feel trivially easy. That ease creates a dangerous illusion—that judgment itself has become less important.
The opposite is true. Anthropic recently analyzed 300,000 conversations with Claude and found 3,307 distinct values expressed—most of them mirrored back from what the AI detected in its users. Same question, different framing, different answer.
AI can tell you the facts. Only you—and your network of trusted experts—can tell you what matters.
Read this if you're building AI where trust isn't optional.
From Our Work
We quietly passed a milestone a few months ago: one million questions asked and answered through Dewey.
Doing the math: that's 50,000+ hours of expert time saved. Or five and a half years. That is time our partners can spend writing the next book, developing critical research, and moving their fields forward.
This is why we built Dewey.
My 6-year-old drew a "cat narwhal" last month. No prompt, no reference - pure creation.
With the holiday season right behind us, I've been thinking about AI toys: sticker makers, creative tools, companions that chat back. They're everywhere now. And I'll admit: some feel tempting.
Then I watch my kid draw. I know how hard that kind of imagination feels for me now, without daily practice. Will kids who outsource imagination lose the muscle to dream things into existence themselves?
I'm not hiding AI from my kids. But for now, I'm holding off on the toys. That cat narwhal came from somewhere no algorithm can reach.
Merriam-Webster named "slop" its 2025 word of the year.
The word has been around since the 1700s - soft mud, then food waste, now the low-effort AI content clogging every corner of the internet. Each era gets the slop it deserves.
When AI can generate content at near-zero cost, the floor drops out. Quality becomes the differentiator—but only if people can find it.
2025 was the year we named the problem. What would it take for 2026 to be the year we start solving it?
Looking to 2026
Anne Griffin asked me for my 2026 AI predictions for Forbes. Three things I'm watching:
The end of the chatbot era. Generative AI isn't about a form factor—it's about a thinking model. In 2026, more people will interact with AI embedded in products without ever realizing they're "using AI."
AI as true collaborator. Collaborators need to be able to do work. Our expectations will increasingly be that AI can access the data sources we would access and output into the tools we would use.
From experimentation to execution. 2025 was a year of experimentation - you see it in the headlines. I'm feeling the shift in my conversations: far fewer "what if" discussions, far more execution on real problems.
Read this for more predictions from Lenny Rachitsky and Becca Lewy.
One Question for You
Have you caught AI making a judgment call you disagreed with?
2026 is going to be the year a lot of organizations figure out what they actually value in AI solutions - and what they don't. If you're working through these questions yourself, I'd genuinely love to hear what's on your mind.
Reply to this email or grab time on my calendar if you want to connect in January.
See you in 2026,
Alex
You're receiving this because you signed up with Dewey Labs - where we build expert-first AI tools. We work with publishers, brands, researchers, and non-profits to create search and Q&A experiences grounded in trust and accuracy. We’ll be sending a monthly dispatch on the future of the expert economy like this one, along with occasional product updates.
