You're mid-draft in Claude, building an argument you've made before, and you can't remember where you made it. So you open a new tab, dig through your own archive, find the piece, copy the relevant paragraphs, paste them back into the conversation, and pick up where you left off. Or you skip the search and let the assistant answer from its training data, hoping it lands close enough to what you actually wrote. Either way, the workflow broke. Your years of published work sat behind a wall your best AI tool couldn't reach.
Your Archive Just Became a Tool
An open standard called Model Context Protocol (MCP) lets any AI assistant reach into your archive. Not a chatbot on your site. Your published body of work, available everywhere you work.
Claude, ChatGPT, Cursor, and hundreds of other AI tools now support MCP connections. If you've spent years building a body of work, that content is probably your most underused asset. It sits on your website, organized by date, discoverable only if someone knows exactly what to search for. Every day you use an AI assistant that can't access it, that expertise sits idle while the assistant improvises from training data instead. MCP turns it from a destination people visit into an input you call from wherever you already work. Dewey makes this possible for experts who don't want to build their own server.
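Under the hood, an MCP connection is just a small server that exposes tools the assistant can call over JSON-RPC 2.0. You don't need to build one yourself (that's the point of Dewey), but a minimal sketch makes the mechanics concrete. Everything here is illustrative: the `search_archive` tool name, the two-entry archive, and the naive keyword matcher are assumptions for the sketch, and a production server would use an MCP SDK and real retrieval rather than a hand-rolled handler.

```python
import json

# Hypothetical mini-archive: a real MCP server would index your
# actual published work, not an in-memory list of two pieces.
ARCHIVE = [
    {"title": "Why Schemas Age Badly", "url": "https://example.com/schemas",
     "text": "Schemas encode assumptions that drift as the domain changes."},
    {"title": "Migrations Without Downtime", "url": "https://example.com/migrations",
     "text": "Rolling migrations let you change schemas without taking writes offline."},
]

def search_archive(query: str) -> list[dict]:
    """Naive keyword match standing in for real retrieval (BM25, embeddings, ...)."""
    q = query.lower()
    return [{"title": p["title"], "url": p["url"]}
            for p in ARCHIVE
            if q in p["text"].lower() or q in p["title"].lower()]

def handle(request: dict) -> dict:
    """Handle one JSON-RPC 2.0 message of the shape MCP uses for tool calls."""
    if (request.get("method") == "tools/call"
            and request["params"]["name"] == "search_archive"):
        hits = search_archive(request["params"]["arguments"]["query"])
        # MCP tool results carry a list of typed content blocks.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": [{"type": "text",
                                        "text": json.dumps(hits)}]}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

# The assistant asks: "what did I already write about schemas?"
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "search_archive",
                      "arguments": {"query": "schemas"}}}
response = handle(request)
print(response["result"]["content"][0]["text"])
```

The assistant never sees this plumbing. It discovers the tool, calls it when your question warrants it, and cites the URLs that come back, which is exactly the workflow the scenes below walk through.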
What You Can Do Tomorrow
Planning a New Post
You open Claude and ask: "What have I already written about this topic?" Instead of a vague summary drawn from the internet, the assistant surfaces your own archive: which pieces are foundational, which are tangential, which took positions that need updating. The workflow that used to break is now immediately productive.
The difference is the source. Without your archive connected, the assistant answers from training data, a composite assembled from everything ever published on the internet about that topic. That composite may not reflect your actual position. It may flatten the nuance you spent three thousand words building in a piece from two years ago. It may confidently present the mainstream take on a topic where your published expertise runs against the grain. And you'd have no way of knowing, because the assistant presents both answers with the same confidence.
With MCP, the answer comes from your work. Ask which of your pieces are most connected to each other. Ask where a new piece would add something your archive doesn't cover yet. Ask whether the next article would overlap with something you've already written or extend it. The archive answers with your own published record, not internet consensus.
Prepping for a Podcast
You're appearing on a podcast tomorrow. The host wants to discuss a topic you've covered extensively, but from different angles across different pieces over the past two years. You ask the assistant: "What are my positions on this topic, and when did I last write about each?" You get your actual stances with citations to specific pieces, not a summary drawn from the internet's composite view of the subject.
Those citations are where this gets practical. You can click through and verify the assistant surfaced the right pieces. You can refresh your memory on the specific argument you made, the data you cited, the conclusion you reached, the caveat you were careful to include. Not a generic briefing on the topic, but your published record on it, with links back to the source.
Wrong prep from training data is worse than no prep at all. An assistant answering from the internet might give you a reasonable-sounding summary that subtly misrepresents your actual position, and you wouldn't know until you're live on air and the host asks a follow-up you can't reconcile with what the assistant told you. Right prep from your own archive means you walk in with your published record, not your memory of it.
Responding to a Hot Take
You see a hot take in your field. You ask: "Find everywhere I've addressed this." The assistant returns your full history on the topic, grounded in your published work, not the internet's consensus view. Three articles from different years, a newsletter where you refined the argument, a podcast transcript where you addressed the counterpoint directly. All yours, all cited, all traceable back to the original pieces.
Without it, the assistant would have summarized the internet's general take on the topic and served it as if it were yours. You might not notice the difference until you're mid-reply and realize the framing doesn't match anything you've ever argued.
The assistant is research staff, not a ghostwriter. It surfaces what you've written so you can respond with your actual track record rather than improvising from memory. You decide what to say next.
Finding the Gap
You ask about something your archive doesn't cover. The assistant says so: "I don't have anything in your archive about that."
That's not a failure. That's a content-strategy signal. The gap between what your audience needs and what your archive covers is now visible, and it tells you what to write next.
Before MCP, the assistant would have filled that gap with training data and presented it with the same confidence it uses for everything else. You would never know the difference between an answer drawn from your own work and one assembled from the internet. Now you can see exactly where your published expertise ends. That visibility turns an unknown into a decision. Write the piece, update an older position, or decide the topic is outside your lane entirely.
Updating a Stale Position
You ask: "When did I last write about this, and what did I say?" The archive surfaces the original piece with its date and its full argument. You can see where your thinking has evolved since you published, where the landscape has shifted underneath a position you took eighteen months ago, where your argument may need a revision or an update.
The archive doesn't just answer; it dates its answers. A position you committed to in early 2025 might still hold. It might need adjusting. Either way, you know, because the archive tells you when you last committed to it and exactly what you said.
Extension, Not Substitution
Every scene above follows the same pattern. The expert asks a question, the assistant answers from the expert's archive, and the expert evaluates the result and acts on it. The assistant never makes the decision. It retrieves, surfaces, and cites; the expert decides what to do with what it found.
Without your archive connected, the assistant substitutes for your expertise. It answers from training data, which diverges from experts roughly 26% of the time. Not stylistically different, but substantively different: different guidance, different conclusions, different tradeoffs. The training data produces an internet average, a statistical composite of everything ever written on the internet about a topic, weighted by volume, not by quality. That average is not your position.
With MCP, the assistant extends your expertise. It answers from your archive, grounded in what you've actually written. Same assistant, same interface, same workflow, but with fundamentally different input and fundamentally different results.
This is not about building a copy of yourself that answers on your behalf. The assistant doesn't become you, and it doesn't need to. It gains access to your published work, your actual positions, your archived arguments. MCP doesn't change the assistant. It changes what the assistant can reach. When the input shifts from training data to your archive, the result shifts from substitution to something that amplifies your expertise instead of replacing it.
Hospitality Beyond Your Site
Site search meets your audience where they are on your site, making your archive accessible to readers with questions. MCP meets you where you are in your daily tools, making that same archive accessible to you while you work. Your published expertise stops living behind a single URL and starts being available in every place you think, draft, prepare, and respond.
The Archive Stops Being a Destination
Every workflow that starts with "what did I already write about this?" stops being a dead end. The archive stops being a place people visit and becomes a tool you call from wherever you already work.
If you want to see what this looks like with your own archive, join the waitlist. Dewey builds MCP connections for expert archives, so your years of published work become available in every AI tool you already use. Visit askdewey.com to see how it works.
