There’s a shift happening in knowledge work that most people haven’t named yet. Creation cost collapsed. Drafting a document, writing an explanation, summarizing a meeting — AI reduced these to near-zero effort. The bottleneck moved.
It moved to distribution.
Not distribution in the marketing sense. I mean the basic problem of getting knowledge to the right consumer, in the right format, at the right time. That problem got harder, not easier, right at the moment everyone assumed it got solved.
The Consumer Problem Nobody Prepared For
Knowledge has always had multiple consumers. You write documentation knowing that a new hire, a veteran engineer, and a confused customer might all read the same page — and they all need something different from it.
That problem is old. The new problem is that the workforce now includes a third consumer type that didn’t exist a few years ago: agents.
There are now three distinct consumer types for any piece of knowledge:
- Humans — read prose, follow narrative, need context and explanation
- LLMs — consume token streams, work best with structured summaries, perform better when content is pre-chunked and labeled
- Agents — query knowledge at runtime, need machine-readable indexes, often can’t follow a hyperlink or parse a full HTML page
Here’s the problem: every existing knowledge platform was built for exactly one of these. Wikis are for humans. REST APIs are for machines. Chat interfaces are for synchronous human conversation. None of them serve all three, and nobody is retrofitting them fast enough.
If your knowledge lives in exactly one format on exactly one platform, you’ve built a bottleneck for at least two of your three consumer types.
What One Write Path Actually Looks Like
The pattern I’ve been running: single source file, multiple consumption paths.
The source is plain markdown. From that one file, the same content reaches five consumption paths:
- Humans read it directly — rendered HTML, clean typography, skimmable
- LLMs get a context-window-friendly index — the content described and summarized rather than exhaustively reproduced
- Agents query a structured knowledge base at runtime — the same content chunked and indexed for retrieval
- Automated pipelines hit the live surface — URLs that exist and return consistent structure
- A git repo is the system of record — versioned, diffable, auditable
One write. Five consumption paths. The source file never changes — only the rendering layer does, and the rendering is automated.
This sounds like an infrastructure problem, and it is. But the insight is simpler than it might appear: the consumption paths are predictable. You know humans, agents, and LLMs all need this knowledge. Building the rendering layer once eliminates the conversion tax on every future write.
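To make the fan-out concrete, here is a minimal sketch of a per-file build step, one read of the source, three renders. The file paths, frontmatter shape, and chunking rule are assumptions for illustration, not this site's actual build code:

```ts
// build-one.ts: one read of the source, three renders (illustrative sketch).
import { promises as fs } from "fs";
import path from "path";

// Stand-in for a real markdown renderer (marked, remark, etc.).
const renderMarkdown = (md: string): string => md;

// Assumed frontmatter convention: a leading "---" YAML block.
const FRONTMATTER = /^---\n[\s\S]*?\n---\n/;

async function buildOne(sourcePath: string, outDir: string): Promise<void> {
  const raw = await fs.readFile(sourcePath, "utf8");
  const body = raw.replace(FRONTMATTER, "").trim();
  const slug = path.basename(sourcePath, ".md");

  // Human path: rendered HTML page.
  await fs.writeFile(path.join(outDir, `${slug}.html`), renderMarkdown(body));

  // LLM path: one index line per post, short enough to live in a context window.
  const firstLine = body.split("\n").find((l) => l.trim()) ?? "";
  await fs.appendFile(path.join(outDir, "llms.txt"), `- /${slug}: ${firstLine}\n`);

  // Agent path: paragraph-level chunks with stable ids, ready for a retrieval index.
  const chunks = body.split(/\n{2,}/).map((text, i) => ({ id: `${slug}#${i}`, text }));
  await fs.writeFile(path.join(outDir, `${slug}.chunks.json`), JSON.stringify(chunks, null, 2));
}

buildOne("content/one-write-path.md", "dist").catch(console.error);
```

Each path is a pure function of the same source. Adding a consumer means adding a render step, not rewriting the content.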
This Site Is a Working Example
I built this into artificialcuriositylabs.dev from the start, mostly as a forcing function to think through the problem concretely.
The site has the usual human-readable layer — HTML pages, RSS feed, search. But it also has:
- /llms.txt — a curated index of pages and topics formatted for AI crawlers, analogous to robots.txt but for LLM consumption rather than exclusion
- /llms-full.txt — every post on the site concatenated into a single clean file, auto-generated at build time
The second one is the one worth dwelling on. Any LLM or agent that fetches llms-full.txt gets the complete content of everything published here in one request, structured and clean — no crawling individual pages, no HTML parsing, no pagination. It’s a better interface for machine consumption than anything a crawler would produce.
The generator runs in under a second as part of the normal build. It reads the content directory, strips frontmatter, concatenates everything with labeled separators. There’s no manual step. When I publish something new, every consumption path updates automatically.
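A minimal version of that generator might look like the sketch below. The content/ directory, the public/llms-full.txt output path, and the "=====" separator format are assumptions about the shape of the setup, not the actual script:

```ts
// generate-llms-full.ts: concatenate every post into one clean machine-readable file.
// Assumes content/*.md sources with optional "---" frontmatter blocks.
import { promises as fs } from "fs";
import path from "path";

const CONTENT_DIR = "content";
const OUTPUT = "public/llms-full.txt";

async function main(): Promise<void> {
  const files = (await fs.readdir(CONTENT_DIR))
    .filter((f) => f.endsWith(".md"))
    .sort();

  const sections: string[] = [];
  for (const file of files) {
    const raw = await fs.readFile(path.join(CONTENT_DIR, file), "utf8");
    // Strip frontmatter; keep only the body.
    const body = raw.replace(/^---\n[\s\S]*?\n---\n/, "").trim();
    // Labeled separator so a machine reader knows where each post starts.
    sections.push(`===== ${file} =====\n\n${body}`);
  }

  await fs.mkdir(path.dirname(OUTPUT), { recursive: true });
  await fs.writeFile(OUTPUT, sections.join("\n\n"), "utf8");
  console.log(`Wrote ${files.length} posts to ${OUTPUT}`);
}

main().catch(console.error);
```

Run it as part of the normal build (a prebuild step, for instance) and "no manual step" falls out of the architecture rather than out of discipline.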
This is what “one write path” looks like in practice. The write happens once. The distribution layer handles the rest.
The Architectural Implication
The platforms that win the next cycle won’t be the ones that make creation faster — creation is already fast enough. They’ll be the ones that serve all three consumer types without requiring a different write per audience.
The ones that won’t: any platform where the human-readable surface is the only surface. That includes most wikis, most documentation tools, most internal knowledge bases built in the last decade. They were designed when the only consumer was a human with a browser.
The pattern worth adopting now: treat knowledge architecture the way you treat software architecture. Define your consumers. Design the interfaces. Automate the rendering. Keep the source clean.
One write, many readers — and readers now include machines that don’t have browsers, don’t follow narrative, and don’t have patience for anything that wasn’t formatted for them.
The creation bottleneck is gone. The question is whether your distribution layer caught up.