You built a workflow that saves four hours a week. An agent that triages your inbox. A skill that drafts emails in your voice. A chain that pulls Slack, calendar, and CRM data into a morning briefing.
It works. It’s good. It lives on your laptop.
Nobody else benefits.
This is the institutional memory failure mode of the AI-native era: the knowledge that compounds is trapped inside individual contexts, undiscoverable by anyone who didn’t build it, unreachable by the systems that could amplify it.
The Traditional Knowledge Flow Is Broken
Here’s how institutional knowledge has worked for the past thirty years:
- Someone discovers a better way to do something
- They write a wiki page (maybe)
- Nobody reads it
- The knowledge dies when they change teams
The numbers are damning. Research consistently shows that 65–70% of company-produced content goes unused by the people it was built for. Not because the content is bad — because the distribution mechanism is wrong. You can’t search for something you don’t know exists. You can’t apply a methodology document by reading it once and hoping you remember the edge cases under pressure.
Wiki pages are write-once, read-never artifacts. They capture what someone knew at one point in time, in a format optimized for a consumer who may never arrive. They are not executable. They do not activate at the moment of need. They do not improve when someone downstream discovers a new failure mode.
This is not a literacy problem or a discipline problem. It’s an architecture problem. The format itself — passive documentation — cannot produce compounding knowledge.
The Skill Changes the Unit of Knowledge
A skill is an executable file that encodes methodology for an AI agent. It tells the agent when to activate, what inputs to gather, what tools to use in what order, what quality checks to run, and what mistakes to avoid.
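To make that concrete, here’s a minimal sketch of what a skill might encode, written as a TypeScript object. The field names and the example skill are illustrative, not any platform’s actual schema:

```typescript
// A minimal sketch of a skill as structured data. Field names are
// illustrative, not any platform's actual schema.
interface Skill {
  name: string;
  version: string;                  // semver, so consumers know when methodology changes
  triggers: string[];               // when the agent should activate this skill
  inputs: string[];                 // what to gather before acting
  steps: string[];                  // the methodology, in order
  qualityChecks: string[];          // checks to run before output ships
  antiPatterns: string[];           // known failure modes to avoid
  evalCases: { input: string; mustContain: string }[]; // regression tests for behavior
}

const inboxTriage: Skill = {
  name: "inbox-triage",
  version: "1.4.2",
  triggers: ["triage my inbox", "morning email review"],
  inputs: ["unread emails", "today's calendar", "VIP sender list"],
  steps: [
    "Group unread email by sender priority",
    "Flag anything blocking a meeting on today's calendar",
    "Draft replies for routine requests; queue the rest for review",
  ],
  qualityChecks: ["No auto-drafted reply goes to external legal or HR threads"],
  antiPatterns: ["Archiving unread threads from VIP senders"],
  evalCases: [{ input: "inbox with 3 VIP emails", mustContain: "VIP" }],
};
```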
The difference from a wiki page is not cosmetic. It’s structural:
| Property | Wiki page | Skill |
|---|---|---|
| Activation | Reader must find it, read it, internalize it | Agent loads it automatically when trigger matches |
| Execution | Human interprets and applies manually | Agent follows the encoded methodology directly |
| Evolution | Author updates (rarely) | Any user who encounters a new failure mode can push a fix |
| Distribution | Lives in one place, discoverable by search | Installable, adaptable, forkable |
| Testing | None — you hope it works | Eval cases verify correct behavior after changes |
| Persistence | Rots in place until someone notices | Active in every session that loads it |
When one person encodes a failure mode into a skill, everyone who installs that skill inherits the fix. The knowledge doesn’t wait to be read. It doesn’t require a training session. It activates at the moment of relevance and produces correct behavior automatically.
This is what Sanjay Srivastava described in Forbes as the difference between “a system that references the enterprise” and “a system that becomes competent inside it.” The model is brilliant. The tools are connected. But without encoded institutional memory — methodology that persists across sessions and people — “autonomy becomes performative: impressive in a pilot, fragile in the real world.”
The skill is the unit that makes institutional memory executable.
Why Individual Craft Doesn’t Compound
Here’s the pattern I’ve watched repeat across every builder I’ve talked to:
- Builder encounters a problem (triage is too slow, emails take too long, meeting prep is manual)
- Builder solves it with an AI workflow — trial and error, multiple sessions, iterative refinement
- Workflow works. Builder uses it daily. Saves hours per week.
- Workflow stays on builder’s machine. Nobody else knows it exists.
- Three months later, a teammate encounters the exact same problem and starts from zero.
The craft compounded for one person. For the organization, it might as well not exist.
This happens because the mechanisms for sharing workflows don’t live at the layer where the work happens. The work happens inside agent sessions. The sharing mechanisms are external: Slack messages, documentation sites, email threads. There’s no native path from “this works for me” to “this works for everyone.”
Flowtivity’s research captures this: “When one agent figures out how to deploy a Dockerised Next.js app to AWS, every other agent on your team can use that knowledge the next time a similar task comes up.” But only if the path from discovery to distribution is native. If it requires manual extraction, documentation, and republishing — it won’t happen. The activation energy is too high relative to the perceived payoff for a builder who already solved their own problem.
The Distribution Gap
Creation cost collapsed. We covered this. AI reduced the cost of drafting, summarizing, building, and prototyping to near-zero. What remains expensive is getting that work from one context to another.
The distribution gap has three layers:
Discovery: How does someone find a skill that solves their problem? Today it’s word of mouth, Slack threads, or stumbling across it in a shared folder. There’s no semantic search over skill stores, no way to ask “I need help triaging customer emails” and get back six skills ranked by quality and adoption.
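One sketch of what that discovery layer could look like, in TypeScript: embed the user’s request, rank skills by similarity to their descriptions, and weight by adoption. The `embed` function stands in for any embedding API, and the weighting is an assumption, not an existing store’s ranking:

```typescript
// Sketch of semantic discovery over a skill store: embed the user's need,
// then rank skills by description similarity, weighted by adoption.
// embed() is assumed to be any embedding API; the weighting is illustrative.
type StoredSkill = { name: string; embedding: number[]; installs: number };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function findSkills(
  query: string,
  store: StoredSkill[],
  embed: (text: string) => Promise<number[]>,
): Promise<{ name: string; score: number }[]> {
  const q = await embed(query); // e.g. "I need help triaging customer emails"
  return store
    .map(s => ({ name: s.name, score: cosine(q, s.embedding) * Math.log1p(s.installs) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 6); // six candidates, ranked
}
```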
Installation: Finding a skill is not the same as using it. A skill built for one person’s context (their Slack channels, their CRM fields, their voice settings) doesn’t work for another person without adaptation. Installation means parameterizing — swapping the personal context for the new user’s context while preserving the methodology.
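A sketch of what parameterized installation might look like, with a hypothetical template shape and slot names: the methodology ships with placeholders, and installing binds them to the new user’s context while leaving the steps untouched:

```typescript
// Sketch of parameterized installation: methodology ships with {{slot}}
// placeholders; installing binds them to the new user's context.
// The template shape and slot names are hypothetical.
type SkillTemplate = {
  steps: string[]; // methodology with {{slot}} placeholders
  slots: string[]; // context the installer must supply
};

function install(template: SkillTemplate, context: Record<string, string>): string[] {
  return template.steps.map(step =>
    step.replace(/\{\{(\w+)\}\}/g, (_match, slot) => {
      if (!(slot in context)) throw new Error(`missing context for slot: ${slot}`);
      return context[slot];
    }),
  );
}

// Same methodology, a different person's context:
const briefing: SkillTemplate = {
  steps: ["Pull overnight messages from {{slackChannel}}", "Cross-check against {{crmView}}"],
  slots: ["slackChannel", "crmView"],
};
const mine = install(briefing, { slackChannel: "#support-emea", crmView: "my-accounts" });
```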
Feedback: Knowledge compounds only when downstream users can push improvements back upstream. When someone installs a skill, encounters a new failure mode, and fixes it — that fix needs a path back to the shared version. Without bidirectional flow, skills fork and diverge instead of improving.
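One plausible shape for that upstream path, sketched with illustrative types: capture the fix as a structured patch against a specific version, paired with an eval case that proves it, so it can be reviewed and merged rather than living in a private fork:

```typescript
// Sketch of the upstream path: a downstream fix is captured as a structured
// patch against a specific version, with an eval case proving it. The shape
// is illustrative, not a real store's API.
type SkillPatch = {
  skill: string;
  baseVersion: string;                              // version the fix targets
  failureMode: string;                              // what went wrong downstream
  addedCheck: string;                               // the encoded fix
  evalCase: { input: string; mustContain: string }; // proves the fix holds
};

function proposeUpstream(patch: SkillPatch): void {
  // In a real store this would open a review thread on the shared version.
  console.log(
    `Fix proposed for ${patch.skill}@${patch.baseVersion}: ${patch.failureMode} -> ${patch.addedCheck}`,
  );
}

proposeUpstream({
  skill: "inbox-triage",
  baseVersion: "1.4.2",
  failureMode: "Auto-replied to a thread already owned by a teammate",
  addedCheck: "Skip threads where another team member replied in the last 24h",
  evalCase: { input: "thread with teammate reply 2h ago", mustContain: "skip" },
});
```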
The MIT/UCSB study on agent skills validated this empirically: flat skill libraries fail without structured discovery and adaptation mechanisms. Skills need taxonomy, not just a directory listing. They need parameterized installation, not copy-paste. They need quality signals — usage metrics, failure rates, adoption curves — so users can distinguish between a battle-tested methodology and someone’s afternoon experiment.
The Compounding Flywheel
When distribution works, the dynamics change completely:
Builder catches failure → encodes fix in skill → pushes to shared store
↓
Teammate discovers skill → installs → adapts
↓
Teammate catches new failure → pushes fix back
↓
Skill improves → every user inherits improvement
↓
Next teammate installs a better skill than existed
when the first builder shipped their version
Every iteration makes the skill more durable, more complete, more resistant to failure. And unlike documentation — which requires a human to read it, internalize it, and apply it correctly — the skill executes. The improvements produce correct behavior automatically.
Timewell’s 2026 analysis of organizational AI adoption identifies skill sharing as phase 3 of a five-phase maturity model: the inflection point where individual experiments become organizational capability. Phase 1 is infrastructure. Phase 2 is individual skill authoring. Phase 3 is the marketplace: “In-House Marketplaces, Top-Down x Bottom-Up Operations.” The organizations stuck in phase 2 have builders producing great individual workflows. The organizations in phase 3 have those workflows compounding across teams.
What “npm for Skills” Actually Requires
The analogy everyone reaches for is “npm for AI skills” — a package manager where you search, install, and update encoded methodology the way developers manage code dependencies.
Vercel articulates it: “AI skills turn ‘one-off prompts’ into a reusable, versionable library of agent behavior.” Glean launched Skills in March 2026 as “a new way to package and reuse enterprise expertise.” The market signal is clear — everyone recognizes the gap.
But npm worked because of four properties that skill stores don’t yet have:
- Semantic versioning: You know what changed and whether it’s safe to upgrade. Skills don’t yet have a standard for communicating breaking changes to methodology.
- Dependency resolution: Skills compose; a morning briefing chains slack-triage → email-triage → calendar-triage. If one skill updates its output format, the chain breaks. There’s no lockfile for skill dependencies.
- Scoped registries: Enterprise skills contain proprietary methodology. You need private stores with access control (“this skill is internal to our team”) alongside public ones. The npm distinction between public packages and private org-scoped packages maps directly.
- Quality signals at install time: npm gives you signals before you commit to a dependency: downloads, GitHub stars, test coverage. Skill stores need equivalents: adoption count, failure rate, last-updated date, eval pass rate (see the manifest sketch after this list).
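Put together, a manifest covering all four properties might look like the hypothetical sketch below. The scope, dependency pins, and signal numbers are all made up for illustration; no skill store ships this today, and the shape borrows directly from npm:

```typescript
// A hypothetical manifest combining all four properties: semver, pinned
// dependencies (a lockfile stand-in), registry scoping, and install-time
// quality signals. All names and numbers are illustrative.
const manifest = {
  name: "@acme/morning-briefing", // org-scoped: private to one team's registry
  version: "2.1.0",               // semver: a major bump signals breaking methodology changes
  dependencies: {
    "slack-triage": "1.8.3",      // pinned exactly, so an upstream output-format
    "email-triage": "3.0.1",      // change cannot silently break the chain
    "calendar-triage": "2.2.0",
  },
  signals: {                      // surfaced before install, like npm's stats
    installs: 412,
    evalPassRate: 0.97,
    openFailureReports: 3,
    lastUpdated: "2026-02-11",
  },
};
```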
The platforms building toward this — Anthropic, Glean, Salesforce, the emerging open-source ecosystem around SKILL.md — are all solving pieces. Nobody has the full lifecycle yet. The one that ships all four properties first owns the institutional memory layer.
The Argument Nobody Is Making
Here’s the implication that most skill-distribution writing misses:
The value of a skill store is not the sum of its individual skills. It’s the network effect of institutional methodology becoming discoverable and composable.
A single skill saves one person time. A store of 500 skills, semantically searchable, with quality signals and one-click installation, changes the default behavior of an entire organization. Instead of every person solving common problems from scratch, the default becomes: check the store first, install what exists, adapt if needed, build only what’s genuinely novel.
This is the same transition that happened with open-source software libraries. Before npm/pip/Maven, every team wrote their own HTTP client, their own date parser, their own CSV reader. After package managers, the default became: search, install, use. The only code that got written from scratch was genuinely novel code.
The same transition is happening now for knowledge work methodology. Before skill stores, every person builds their own triage workflow, their own email templates, their own meeting prep routine. After skill stores, the default becomes: search, install, adapt. The only methodology that gets built from scratch is genuinely novel methodology.
That transition is what “institutional memory” actually means in practice. Not a better wiki. Not a better search engine. An executable layer where one person’s solved problem becomes everyone’s starting point.
So What
Individual craft is necessary but not sufficient. Building a workflow that works for you is the first step — but if it stops there, the organization’s knowledge base hasn’t grown. It’s grown for you.
The distribution problem is not a nice-to-have. It’s the difference between a team of individual builders and an organization whose collective methodology compounds over time. The builders who figure out distribution — who make their patterns installable, adaptable, and improvable by others — are building something that outlasts their own context.
Everyone else is building for one.
The question worth sitting with: of the workflows you’ve built in the last six months, how many are discoverable by your teammates? How many could a new hire install and benefit from on day one? How many improve when someone else encounters a failure mode you never hit?
If the answer is zero — the craft is real, but it isn’t compounding.