The Other Side of 1-Pizza Teams
AI boosts productivity and burns people out. Managing that tradeoff is the actual job.
Last week I wrote about 1-pizza teams — the idea that AI is shrinking team sizes because individuals with good AI tools can accomplish what used to take small teams. Anthropic’s engineers report a 50% productivity boost. The Harvard/Wharton study showed individuals with AI matching traditional team output.
A few days later, HBR published “AI Doesn’t Reduce Work—It Intensifies It”. An 8-month field study found that AI tools don’t lighten workloads — they increase them. Workers using AI got more done, but they also experienced higher cognitive fatigue and longer hours.
Both studies are looking at the same thing. One measured the output; the other measured what it cost the people producing it.
Productivity and Intensity
The pitch for AI tools makes intuitive sense: let the model handle drafting, summarizing, debugging, and you focus on the harder problems. In practice, it’s not that clean.
Yes, you get more done. But “more done” means more outputs to review, more AI suggestions to evaluate, more parallel workstreams to manage. The cognitive load shifts from doing the work to orchestrating the work. And orchestrating is exhausting in its own way.
I’ve felt this myself. I’ll spin up multiple Kilo sessions, in Slack and in the CLI, each working on a different part of a problem, and suddenly I’m context-switching between four different conversations, holding all the threads in my head, making judgment calls every few minutes. I get more done, but I’m more exhausted at the end of the day than I used to be.
The Missing Piece in 1-Pizza Teams
When I wrote about teams shrinking, I focused on what organizations gain: fewer people doing more, coordination costs dropping, each engineer having outsized impact through AI leverage.
What I didn’t address is what those individuals experience. If one person with AI tools can do what a 4-person team used to do, that person is now carrying a 4-person team’s cognitive load. The math works on a spreadsheet, but the person absorbing that load doesn’t get to be four people.
At Kilo, we talk a lot about engineers owning numbers and shipping at “Kilo Speed.” And we mean it. But if that speed is grinding people down, we’re just measuring the wrong thing.
What Actually Helps
The HBR research names the risk clearly: companies chase AI productivity gains and their people burn out. A few things have actually worked for us:
Build in recovery. When engineers are orchestrating multiple AI workstreams, the breaks between tasks matter even more than they used to. The old model of “crank through your ticket queue” doesn’t account for decision fatigue from constantly reviewing AI output.
Measure intensity, not just output. Most teams track what gets shipped. Almost nobody tracks how much cognitive load shipping required. We’re experimenting with this at Kilo — asking engineers to self-report not just what they built, but how draining the week felt (a rough sketch of what that tracking could look like follows this list).
Don’t staff for the capability ceiling. If an engineer with AI tools can do 4x the work, that doesn’t mean they should every sprint. Plan for sustainable velocity, not theoretical maximum output.
Let AI reduce work, not just increase throughput. Some tasks should just... go away. If AI can handle 80% of code review comments, let it handle them. Don’t backfill that time with more work. Let people recover the capacity.
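To make the intensity tracking concrete, here’s a minimal sketch of what reporting output and drain side by side might look like. Everything in it is hypothetical — the WeeklyReport fields and the 1–5 drain score are illustrative placeholders, not an actual Kilo system:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical weekly self-report: what shipped, plus a 1-5 "how draining
# was this week" score. Field names are illustrative, not a real schema.
@dataclass
class WeeklyReport:
    engineer: str
    items_shipped: int
    drain_score: int  # 1 = sustainable, 5 = running on fumes

def team_summary(reports: list[WeeklyReport]) -> dict:
    """Track output and intensity together, not output alone."""
    return {
        "total_shipped": sum(r.items_shipped for r in reports),
        "avg_drain": mean(r.drain_score for r in reports),
        # Flag anyone trending toward burnout, not just low output.
        "at_risk": [r.engineer for r in reports if r.drain_score >= 4],
    }

week = [
    WeeklyReport("ana", items_shipped=9, drain_score=4),
    WeeklyReport("ben", items_shipped=5, drain_score=2),
]
print(team_summary(week))
# {'total_shipped': 14, 'avg_drain': 3.0, 'at_risk': ['ana']}
```

The point of the shape, not the specifics: the person shipping the most is also the one the output-only dashboard would never flag.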
The Balance
Nobody’s making up the productivity numbers. But nobody’s making up the fatigue either.
Shrinking teams is the easy part. Keeping those smaller teams healthy while they carry more weight — that’s the actual management challenge.
If the 1-pizza team burns out in six months, you didn’t gain anything. You just moved the cost somewhere harder to see.


At first glance, it seems the cognitive load could be pushed onto the AI itself. In other words, move me up one level in the development hierarchy, so I’m managing managers-of-programmers instead of managing the programmers myself. Instead of being concerned with the algorithms under development (and at this point, that might be a scary proposition), I’d be concerned more with the UI/UX. Of course, that would require capable AI agents that each focus on a domain: security, UI, UX, database back-end I/O, and so on. The conductor is not one of the musicians. The conductor doesn’t worry about how the violinist accomplishes the vibrato; the conductor listens for the harmonies produced across a whole variety of musicians and their instruments. Maybe something to explore and develop further.
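If that idea were built, the shape might look something like this. A minimal sketch, assuming hypothetical Conductor and SpecialistAgent interfaces; none of this is an existing framework, and the stub agents stand in for real domain-focused models:

```python
from typing import Protocol

class SpecialistAgent(Protocol):
    # Assumed interface: a domain-focused agent that reviews a change.
    role: str
    def review(self, change: str) -> str: ...

class StubAgent:
    """Stand-in for a capable, domain-focused AI agent."""
    def __init__(self, role: str):
        self.role = role

    def review(self, change: str) -> str:
        return f"[{self.role}] looks fine: {change}"

class Conductor:
    """Doesn't play an instrument; listens for how the parts fit together."""
    def __init__(self, agents: list[SpecialistAgent]):
        self.agents = agents

    def evaluate(self, change: str) -> list[str]:
        # The conductor never inspects the vibrato (implementation details);
        # it only collects each specialist's verdict and weighs the harmony.
        return [agent.review(change) for agent in self.agents]

orchestra = Conductor([
    StubAgent("security"), StubAgent("ui/ux"), StubAgent("database i/o"),
])
for verdict in orchestra.evaluate("add pagination to the orders endpoint"):
    print(verdict)
```

Whether this actually reduces the load, or just moves the review burden up a level, is exactly the open question from the rest of this post.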