Every conversation I have with people who are working natively with AI — builders, founders, executives, practitioners who’ve reorganized their entire workflow around agents — hits the same beat. I ask how much time they’re spending working. They pause. Then: “More. Definitely more.”
And then, almost always: “But it’s different. I’m actually enjoying it.”
Both things are true simultaneously, and understanding why tells you something important about what AI-native work actually is.
The pattern has a name: Jevons Paradox
William Stanley Jevons observed in 1865 that as steam engines became more efficient, coal consumption went up, not down. Making a resource cheaper doesn’t reduce how much of it you use — it expands what you do with it.
AI did this to cognitive work. 67% of workers who adopted AI tools in 2025 reported working more hours by year-end, not fewer. OpenAI’s enterprise survey found that heavy AI users — people saving 10+ hours a week — don’t use that time as slack. They expand into new tasks: coding, data analysis, agent design, work they couldn’t have taken on before.
The productivity gains are real. The hours reduction isn’t coming. Every efficiency you gain reveals more that’s worth doing. The frontier expands faster than the work compresses.
This is disorienting if you expected AI to give you your evenings back. It makes complete sense if you understand Jevons.
Why it’s exhausting and engaging at the same time
Working at agent speed means parallelizing. You’re not waiting for an answer — you’re running multiple threads simultaneously, reviewing outputs, making decisions, redirecting. You’re not doing one thing at a time; you’re operating across several contexts at once, each at high cognitive intensity.
This is genuinely fun in a way that sequential knowledge work often isn’t. There’s a flow state available here that’s hard to get from writing a report or sitting in a meeting. You’re building momentum, chasing the interesting branch, following the thread that the agent surfaced and you hadn’t thought to pursue.
But it’s also relentless. Research on AI cognitive load describes the pattern as “cognitive fragmentation” — app switches every 47 seconds, parallel threads competing for attention, verification work that requires holding your own expertise in tension with the agent’s output. The engagement and the exhaustion are produced by the same mechanism. You’re always on.
An eight-hour day at agent speed is a different thing than an eight-hour day of ordinary knowledge work. Not worse — different. More like being a director running multiple shoots than a writer with a blank page. The cognitive load of orchestration is its own kind of demanding.
The part the research misses: the meta-work detour
What I’ve observed — and what the productivity studies don’t capture — is a specific tax that appears only in people who have gone deep on AI-native work. I’m calling it the meta-work detour.
It goes like this: I’m doing a task that should take 15 minutes. During the task, I notice that a skill is missing — some pattern I keep invoking manually that should be encoded. Or a skill misfires. Or I realize that a 20-minute detour to fix the skill would save hours of re-doing the same correction across every future session.
So I take the detour. Because it’s right there, visible in the context. And because the cost of not taking it is paying the tax on every future occurrence.
What started as a 15-minute task ends up being a 45-minute one — and often longer, because the fix has to be wired into other skills that call it, tested, and distributed so future sessions inherit the correction.
This is the meta-work layer. Every task is potentially a task plus the infrastructure the task revealed was missing.
The difference from ordinary work is that you can see it. When you’re working at agent speed with live context, the gap between “what I’m doing right now” and “what I should have built to make this automatic” is visible and immediate. You’re not filing a ticket for future-you to address. The debt is right there.
The people who have gone deepest on this pattern don’t experience it as burden — they experience it as compounding. Every meta-work detour eliminates a whole class of future friction. The infrastructure gets better with every task. The skills accumulate. Eventually, tasks that used to cost 45 minutes just cost 15, because past-you already handled it.
But the ROI isn’t only personal. When a skill is fixed, tested, and distributed, everyone running the same workflow inherits the correction. The math here is worth doing explicitly.
A skill that saves 5 minutes per use, used 5 times a day:
| Scope | Daily saving | Annual saving |
|---|---|---|
| You alone | 25 min/day | ~104 hours/year |
| 1,000 builders using the same skill | ~417 hours/day | ~104,000 hours/year |
Investment: 20 minutes to build the skill. Return: 104,000 hours of annual savings across the user base.
That’s roughly a 312,000:1 return on time. Personal break-even happens at 4 uses — within the first hour of the skill being live. Everything after that is compounding.
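The arithmetic above can be sketched as a quick back-of-the-envelope check. This is illustrative only — the per-use saving, uses per day, build cost, and user count are the figures from the table, and the 250 working days per year is an assumption I’m adding:

```python
# Back-of-the-envelope for the skill ROI figures above.
# Assumption: 250 working days per year; other figures come from the table.
MINUTES_SAVED_PER_USE = 5
USES_PER_DAY = 5
WORKING_DAYS_PER_YEAR = 250
BUILD_COST_MINUTES = 20
USERS = 1_000

# Personal savings.
daily_saving_min = MINUTES_SAVED_PER_USE * USES_PER_DAY              # 25 min/day
annual_saving_hours = daily_saving_min * WORKING_DAYS_PER_YEAR / 60  # ~104 h/year

# Fleet-wide savings once the skill is distributed.
fleet_annual_hours = annual_saving_hours * USERS                     # ~104,000 h/year

# Return ratio: hours saved (in minutes) per minute invested.
# Exact value is 312,500:1; the prose rounds the annual hours first.
return_ratio = fleet_annual_hours * 60 / BUILD_COST_MINUTES

# Personal break-even: uses needed to recoup the build cost.
break_even_uses = BUILD_COST_MINUTES / MINUTES_SAVED_PER_USE         # 4 uses

print(f"personal: {daily_saving_min} min/day, ~{annual_saving_hours:.0f} h/yr")
print(f"fleet:    ~{fleet_annual_hours:,.0f} h/yr, ratio ~{return_ratio:,.0f}:1")
print(f"break-even after {break_even_uses:.0f} uses")
```

Change the constants to match your own skill and the break-even point moves accordingly; the structure of the result — tiny fixed cost, linear per-user savings — is what makes distribution the dominant term.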
The individual overhead funds collective intelligence. That changes the math significantly — you’re not just investing in your own future productivity, you’re investing in everyone downstream of the same infrastructure.
The compounding takes time to realize, and early on the meta-work tax consistently exceeds the task. You’re always doing two things: the work, and the infrastructure for the work.
What this means in practice
Working natively with AI is not adopting a productivity tool. It’s an operating mode — one that requires different expectations about what a working day looks like and what it produces.
You will work more hours, not fewer. That’s Jevons — the efficiency unlocks more scope, not more leisure.
You will be more tired at the end of a day, not less. That’s the cognitive load of parallelization and orchestration — real and different from the fatigue of ordinary focus work.
You will also find work more engaging than you did before. That’s the flow state available when you’re genuinely operating at the edge of what you can orchestrate.
And you will periodically need to take the meta-work detour — to stop doing the task and go build the infrastructure the task revealed. This is not a distraction. It’s the mechanism by which AI-native work compounds. The people who resist it, who stay on-task through every revealed gap, accumulate debt that eventually costs more than the detours would have.
The question I keep returning to: what does sustainable look like at this pace? The parallelization doesn’t go away. The meta-work layer doesn’t go away. The Jevons expansion of scope doesn’t go away.
What does, eventually, is the infrastructure gap — if you keep taking the detours.