I was building a workflow that generated and published content automatically. Ran it through review before shipping. The review caught something standard safety checks had missed: a personal folder name leaking into the output.
Names — caught. Emails — caught. File paths — caught. Folder names — not caught, because folder names sit in a gap that most safety checklists don’t acknowledge. They’re not obviously private the way a social security number is. But they’re not obviously safe either. A folder named after a client, a project codename, or a personal alias is identifying information. It just doesn’t look like the things that show up on PII checklists.
The fix wasn’t removing the folder name from that specific run. That would have been a one-time patch. The fix was encoding the failure mode into the workflow itself — “personal folder names are PII; scan for them.” Now every future run checks for that class of problem. It never has to be caught by a reviewer again, because it’s baked into the process.
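To make that concrete: here's a minimal sketch of what the encoded check might look like, assuming a Python pipeline. The function name, the workspace path, and the word-boundary matching are my illustration, not the workflow's actual implementation.

```python
import re
from pathlib import Path

# Hypothetical pre-publish check: treat the names of local project folders
# as PII and flag any output that contains one. WORKSPACE is an assumed
# location; the real list could come from wherever the workflow reads.
WORKSPACE = Path.home() / "projects"

def find_folder_name_leaks(text: str, workspace: Path = WORKSPACE) -> list[str]:
    """Return every workspace folder name that appears in the output text."""
    if not workspace.is_dir():
        return []
    leaks = []
    for entry in workspace.iterdir():
        # Word-boundary match so a short folder name doesn't false-positive
        # inside an unrelated word.
        if entry.is_dir() and re.search(
            rf"\b{re.escape(entry.name)}\b", text, re.IGNORECASE
        ):
            leaks.append(entry.name)
    return leaks

# Wired into the publish step, a non-empty result blocks the run:
#   leaks = find_folder_name_leaks(rendered_output)
#   if leaks:
#       raise RuntimeError(f"Refusing to publish; folder names leaked: {leaks}")
```

The specifics matter less than the shape: the check runs on every output, every time, without anyone having to remember it.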
That incident clarified something I hadn’t been able to articulate cleanly before: anti-patterns are more valuable than patterns.
The knowledge that a thing works is useful. The knowledge that a specific thing fails in your specific context is rarer, harder to earn, and almost never written down.
Think about why. When something works, you ship it and move on. The success case has a natural documentation path — it becomes your standard operating procedure, your template, your example. The failure case has a different arc: you feel bad about it, you fix it, and then you try not to think about it. Nobody writes the postmortem that says “here’s how we leaked a folder name into a public gallery.” They write “shipping process updated.” The mechanism disappears.
This asymmetry means your anti-patterns are almost always under-documented relative to their actual value. The best practices guide tells you what to do. Nothing tells you what specifically breaks in your environment, with your data, on your team’s output.
Three things I’ve come to believe about failure modes:
Patterns tell you what to do. Anti-patterns tell you what not to do — which is harder to discover. There are a thousand ways to build a publishing workflow correctly. There are far fewer ways it will fail given your specific inputs, your specific data sources, your specific team’s naming conventions. The failure space is more constrained and therefore more actionable. A pattern is a starting point. An anti-pattern is a map of the specific terrain you’re crossing.
Failure modes compound. Every one you encode is a mistake your entire team never makes again — not just you. When you put “personal folder names are PII” into a workflow, you’re not patching that run. You’re closing that category permanently. The next person who builds a similar workflow on top of yours inherits the fix without knowing there was ever a problem. That’s leverage. The cost was one review catch; the benefit is every downstream run, every future builder, every workflow that extends from this one. (There’s a sketch of what that inheritance looks like after these three points.)
The most useful institutional knowledge isn’t the process that shipped. It’s the review that prevented the bug. The shipped process is visible; it exists in the codebase or the runbook. The review that caught the edge case lives in someone’s head, or in a Slack thread from eight months ago that nobody can find. When that person leaves, or when you’re building version two, or when a new team member spins up a similar workflow — the catch is gone. Encoding failure modes into the artifact itself is the only way to make that knowledge durable.
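The compounding in that second point is easy to make concrete. A minimal sketch, again assuming Python, with every name hypothetical: each encoded failure mode becomes a registered check, and anything built on top of the workflow runs the full list.

```python
from typing import Callable

# Hypothetical shared registry: each encoded failure mode becomes a check
# function. Downstream workflows run the whole list and inherit every fix.
PUBLISH_CHECKS: list[Callable[[str], list[str]]] = []

def check(fn: Callable[[str], list[str]]) -> Callable[[str], list[str]]:
    """Register a pre-publish check; each one is a closed failure category."""
    PUBLISH_CHECKS.append(fn)
    return fn

# Placeholder identifiers; in a real pipeline this list would come from
# scanning the workspace, as in the earlier sketch.
QUASI_PII = ["client-folder-name", "project-codename"]

@check
def quasi_private_identifiers(text: str) -> list[str]:
    """The category encoded after the folder-name incident."""
    lowered = text.lower()
    return [term for term in QUASI_PII if term.lower() in lowered]

def run_checks(text: str) -> list[str]:
    """Run every registered check; an empty result means safe to publish."""
    return [f"{fn.__name__}: {hit}" for fn in PUBLISH_CHECKS for hit in fn(text)]
```

A new workflow that calls run_checks inherits the folder-name lesson without its author ever hearing the story.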
What changes if you treat failure modes as first-class knowledge?
A few things, practically. You start writing anti-patterns into the skill or workflow description, not just the postmortem. You stop treating a review catch as something to be embarrassed about and start treating it as the most valuable output of the review. You build a habit of asking, when something almost shipped wrong: what’s the class of problem here, and where does it live? Is this a one-off, or is this a gap in the checklist?
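One way to do that first item, sketched under the same assumptions (Python-defined workflows, hypothetical names): give the workflow definition an anti_patterns field, so the knowledge travels with the artifact.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowSpec:
    """Hypothetical workflow definition that ships its own failure modes."""
    name: str
    steps: list[str]
    # First-class knowledge, readable by reviewers and by pre-publish tooling:
    anti_patterns: list[str] = field(default_factory=list)

publish_gallery = WorkflowSpec(
    name="publish-gallery",
    steps=["render", "run_checks", "upload"],
    anti_patterns=[
        "Personal folder names are PII; scan output for workspace dir names.",
        "Client codenames are identifying; check output against the codename list.",
    ],
)
```

The data structure is beside the point. What matters is that the failure modes live in the artifact instead of in a thread nobody can find.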
The folder name wasn’t a one-off. It was a whole category of quasi-private identifiers that don’t look like PII until they’re in public output. Encoding that took two minutes. The value of it doesn’t expire.
There’s a broader frame here. Every workflow you build in a specific context — with specific data, specific conventions, specific constraints — accumulates a fingerprint. The things it fails at aren’t bugs to suppress. They’re the most precise description of the environment you’re operating in. The failure modes are the fingerprint.
If you’re building workflows that others will reuse, the most durable thing you can ship alongside them isn’t the documentation of how they work. It’s the documentation of how they’ve failed, and what you did about it.