
Four Things That Can't Be Delegated

For several weeks, I’ve been pushing delegation as far as it goes — handing off research, drafting, synthesis, scheduling, formatting, triage, and deployment to AI agents. The goal was practical: find the floor. Find what’s left when you systematically offload everything offloadable.

I found four things. Not things that shouldn’t be delegated. Things that can’t be.

This distinction matters. “Shouldn’t” is a preference. “Can’t” is structural. And these four are structural — the work either requires something an agent cannot possess, or the output is worthless without a human at the center of it.


The Four Things

Conviction. What’s worth doing? Which problem matters enough to pursue? Agents can research every option, score them against criteria, rank them by expected value. They cannot tell you which hill is worth dying on. That judgment requires a point of view that was formed through experience, loss, and the accumulated weight of caring about something. Conviction is not analysis. It’s a bet you’ve chosen to make with your finite time. Agents don’t have finite time. They don’t have anything at stake.

Relationships. Trust is built through shared context, vulnerability, and follow-through over time. An agent can draft the perfect email. It can read the history of a relationship and construct something plausible and warm. But it cannot replace the fact that you showed up when things were hard, that you remembered what mattered, that you were the one who made the call at the moment it cost something. Relationships are not content. They are a record of presence. You cannot outsource presence.

Curiosity. The most valuable question is the one nobody’s asking yet. Agents answer questions brilliantly — with speed, range, and synthesis that no person can match. But they answer. They don’t wonder. Curiosity is generative in a way that’s different from search or retrieval — it’s the act of noticing that something is strange, that a pattern doesn’t fit, that a gap exists that nobody has named. That noticing happens unprompted, sideways, while you’re doing something else. Agents don’t have the sideways. They work the queue.

High-stakes judgment. When the blast radius is large and the decision is irreversible — a hire, a strategic pivot, a public commitment, a hard conversation — the human must own it. Not because humans are smarter. Because accountability requires a person who understood the tradeoffs and chose anyway. An agent can surface every consideration and present a recommendation. It cannot be responsible for what happens next. Judgment without accountability is just suggestion. Someone has to sign.


What This Actually Means

Everything else — research, formatting, scheduling, triaging, drafting, synthesizing, deploying — is execution. Important execution, in many cases hard execution. But execution: the conversion of a decision into output. That’s what agents are extraordinarily good at.

Which means the practical question is not “will AI take my job?” The practical question is: what percentage of your actual working hours do you spend on conviction, relationships, curiosity, and high-stakes judgment — versus execution?

For most knowledge workers, the honest answer is a small percentage. Most days are dominated by execution overhead: the reformatting, the scheduling back-and-forth, the summarizing of the thing that was already summarized, the chasing of inputs that should have been automated months ago.

That overhead is now, largely, delegable. Which means the hours it occupied are freed — and the question becomes what you do with them.

This is where the leverage question bites. If your job has been mostly execution, you now have two options: find the conviction, relationship, curiosity, and judgment work that’s been crowded out — or fill the recovered hours with more execution because it’s familiar and quantifiable.

Most people will fill them. That’s not a prediction about weakness; it’s a prediction about organizational defaults. The work that gets rewarded visibly is usually execution. The work that creates long-run value — the right question asked, the relationship maintained, the judgment call made cleanly — is often invisible until it isn’t.


Reframing the Question

The question is not whether AI agents will replace knowledge workers.

The question is whether knowledge workers will learn to identify the work that’s irreducibly theirs — and then actually show up there, consistently, instead of staying busy with the work that’s comfortable and countable.

The agents are fast. They don’t get tired. They don’t have opinions about which meetings to take or which problems are worth the effort. They will execute whatever queue you give them.

What you put in the queue is still yours.



