Every AI session starts fresh. You open a new conversation and the agent has no memory of what you learned last week — the constraints you identified, the failure modes you hit, the sequencing that finally worked. You explain it again. Or you don’t, and the session goes sideways in the same place it always does.
This is not a hallucination problem. It’s not a token cost problem. It is an amnesia problem, and it compounds.
The Pattern Nobody Names
Here’s what happens when you work seriously with AI systems over time: you develop methodology. Not in a formal sense — nobody sits down and writes “The Official Workflow for Email Triage.” It happens through iteration. You try something. It breaks in a specific way. You adjust. You try again. Over weeks, you accumulate a model of how to approach a class of problem: which tools to invoke, in what sequence, what to validate before moving forward, when to stop and ask a human for input.
That methodology is valuable. It represents hours of trial and error that produced a repeatable result. And if it lives only in your head, it dies every time the context resets.
The insight I kept circling back to: methodology should be treated like infrastructure. Version it. Test it. Share it. When it lives in a text file checked into a repo, it persists. It can be inspected. It can be improved. It can be given to someone else.
What “Skills as Files” Actually Means
The agent framework I use has a primitive for this — a skill file. Each file encodes a complete workflow: which tools to use, in what sequence, what failure modes to watch for, when to surface a decision to the human. A skill isn’t a prompt. It’s closer to a documented procedure with executable steps.
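To make that concrete, here is a minimal sketch of the shape of one skill, modeled as a Python data structure. The schema (Step, Skill, escalate_when) and the tool identifiers are my invention for illustration, not the framework's actual file format; real skills are plain text files, but they carry the same four kinds of information: tools, sequence, validation, and escalation points.

```python
# Illustrative only: a data model for what a skill file encodes.
# Every field name and tool identifier here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str    # what the agent does at this point in the sequence
    tool: str      # which tool it invokes to do it
    check: str     # what must be validated before moving forward

@dataclass
class Skill:
    name: str
    steps: list[Step]    # ordered; the sequence is part of the methodology
    failure_modes: list[str] = field(default_factory=list)  # known ways this breaks
    escalate_when: list[str] = field(default_factory=list)  # conditions that go to a human

email_triage = Skill(
    name="email-triage",
    steps=[
        Step("classify each message into a priority tier", "inbox.list",
             "every message has exactly one tier"),
        Step("cross-reference senders against tomorrow's calendar", "calendar.lookup",
             "meeting attendees elevated to the top tier"),
        Step("draft replies for top-tier messages", "drafts.create",
             "each draft cites context from the customer file"),
    ],
    failure_modes=["the calendar is empty", "a sender has no customer file yet"],
    escalate_when=["a draft would commit to a date, a price, or a promise"],
)
```

Everything in that structure is readable, diffable, and shareable, which is what the rest of this argument depends on.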
I have built over eighty of them.
That number sounds like it might be overkill. It isn’t. Each one represents a workflow I do regularly enough that rebuilding context each session was a measurable tax. The skills range from five-step research patterns to multi-stage content pipelines to agentic triage loops that run on a schedule.
A concrete example: my email triage workflow doesn’t just “summarize email.” It classifies every message into three priority tiers using an eleven-step heuristic. It cross-references the calendar — if a message is from someone I’m meeting tomorrow, it gets elevated regardless of content. It extracts durable signals into customer files so observations persist across sessions. It drafts responses for the highest-priority items using context pulled from the knowledge graph. That methodology took weeks of trial and error to develop. Now it runs in ninety seconds, every morning, automatically, and it produces the same quality of result whether I’m fully focused or in back-to-back meetings.
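For readers who want the shape of that loop, here is a self-contained sketch. The three-rule classifier below is a toy stand-in for the real eleven-step heuristic, and every type and function in it is invented for illustration; only the control flow (classify, elevate on a calendar match, persist signals) mirrors the workflow described above.

```python
# A toy sketch of the triage loop's control flow. All names are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    body: str

def classify_priority(msg: Message) -> str:
    """Toy stand-in for the eleven-step heuristic."""
    text = (msg.subject + " " + msg.body).lower()
    if "urgent" in text or "outage" in text:
        return "high"
    if "unsubscribe" in text or "newsletter" in text:
        return "low"
    return "medium"

def extract_signals(msg: Message) -> list[str]:
    """Durable observations worth persisting across sessions."""
    return [line for line in msg.body.splitlines() if line.startswith("FYI:")]

def triage(inbox, tomorrow_contacts, customer_files):
    tiers = {"high": [], "medium": [], "low": []}
    for msg in inbox:
        tier = classify_priority(msg)
        if msg.sender in tomorrow_contacts:  # meeting tomorrow elevates it,
            tier = "high"                    # regardless of content
        tiers[tier].append(msg)
        customer_files[msg.sender].extend(extract_signals(msg))
    # Drafting replies for the "high" tier, using accumulated
    # customer-file context, would follow here.
    return tiers

inbox = [
    Message("dana@example.com", "Renewal question", "FYI: budget resets in May."),
    Message("noreply@example.com", "Weekly newsletter", "unsubscribe anytime"),
]
files = defaultdict(list)
tiers = triage(inbox, {"dana@example.com"}, files)
print([m.subject for m in tiers["high"]])  # ['Renewal question']
print(files["dana@example.com"])           # ['FYI: budget resets in May.']
```

Notice that a teammate adapting this would mostly touch classify_priority and leave the loop alone — which is exactly the move the next section describes.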
The ROI is not “I’m faster.” The ROI is that the methodology doesn’t degrade when I’m distracted, doesn’t have to be rebuilt after a break, and doesn’t die when the session ends.
The Compounding Problem
Individual methodology that stays in your head has no leverage. You get faster. Nobody else does.
When methodology is a file, the calculus changes. One person’s solved workflow becomes installable institutional knowledge. Someone on your team can take your triage skill, read it, adapt the heuristic for their context, and get ninety percent of the benefit without repeating the trial-and-error path you took to develop it. The team never makes your early mistakes. They start from your current state.
This is how individual craft becomes collective capability — but only when there’s a mechanism for sharing that’s as easy as building. Historically, that mechanism has been missing, which is why individually built workflows live in someone’s notes and never compound. A text file in a shared repo is a low-friction version of that mechanism.
There’s a second-order effect worth naming: when methodology is explicit and readable, it gets better faster. A skill file can be reviewed. Someone can read it and say “this step is redundant” or “you’re missing the case where the calendar is empty.” The same feedback loop that improves code improves documented methodology — but only once the methodology is written down somewhere it can be read.
The Shift Worth Making
The common frame for AI productivity is: AI completes tasks faster. That’s true but incomplete. The more durable frame: AI is a runtime for methodology you’ve already developed.
If your methodology lives in your head, every session is a cold start. You get the benefit of AI execution but not the benefit of accumulated learning. The agent is fast but you’re always starting over.
If your methodology lives in versioned files, each session starts from your current best state. The agent executes what you’ve already figured out. Your learning compounds instead of resetting.
The operational question worth sitting with: what have you figured out that would save your team weeks — if only they didn’t have to figure it out again from scratch?
That answer is a file waiting to be written.