Decision Density
Something is happening to people who build software with AI.
Consider a few blog posts from the last few weeks. Steve Yegge describes AI as an “energy vampire” that drains developers. Researchers at UC Berkeley published findings in the Harvard Business Review showing that AI intensifies work instead of reducing it. Workers voluntarily expanded their scope, blurred the boundaries of their workday, and ended up more exhausted than before, all without being asked to do more. At the same time, a viral post by Matt Shumer argues with pandemic-level urgency that this is the most important year of your career and you need to lean in harder, now.
The signals add up to fundamentally incompatible demands:
Skill up urgently. Or get left behind.
But the work is more intense than ever.
Also, you must fight the compulsion to overwork or else burn out.
The messages seem contradictory, but I wonder if they are all ways of describing the same elephant.
If we were going to give the elephant a name, we could call it decision density: the number of decisions that must be made per unit of time. My hypothesis is that people coding with AI are experiencing a rapid increase in decision density, and that we are watching its effects in real time.
Decision density describes a condition within an environment. In the literature, decision fatigue describes a consequence or end state, cognitive load describes the cumulative cost of tasks, and choice anxiety captures the subjective experience. You conceivably ratchet up all of these effects when you ratchet up the rate, or “density,” of decisions.
In coding, AI eliminates the “incubation effect” that came from manually pounding out code for hours. In the before times, you would make a decision and then spend substantial time (and often laborious effort) implementing it. Now you make an architectural decision and the implementation arrives in minutes. You are immediately confronted with the next consequential choice. And the next. And the next. The space between decisions evaporates. That space felt like mundane work, but it was also recovery time between decisions.
Decision density increases not because the decisions get harder, but because the space between them collapses.
Every decision, whether the outcome succeeds or fails, generates the next one. If your implementation works, the reward is a new set of choices. If it fails, you face a different set of choices. Success and failure are two different doors into the same room: both routes lead to more decisions. And the more you feel like bad decisions or suboptimal results can be corrected or reversed, the more you lubricate the acceleration. The cycle is vicious.
Conceivably, people like CEOs and heads of state operate at extreme decision density all day, every day. But they are sustained by institutional scaffolding: chiefs of staff, advisors, analysts. Presumably, having an army of extra brains at your disposal reduces your cognitive cost per decision. Yet historically, improvements in executive support infrastructure have not reduced executive workloads. They have simply enabled larger organizations, more initiatives, and more complexity. Helping CEOs make decisions has not meant CEOs making fewer of them.
The upper bound on decision density, given perfect support scaffolding, is simply time: the number of seconds it takes a human brain to evaluate a pre-optimized binary choice. This suggests that we are not going to solve the emerging decision-density problem in software development by building better AI “wrappers” to ease the burden of AI-assisted work. Paradoxically, that will only intensify the issue. The easier decisions are to make, the more of them emerge. The ceiling rises to meet whatever capacity you build. You only chop off hydra heads faster.
I think software developers feel like they are on a fast track to becoming burned-out mini-CEOs of their little codebase empires. Who (or what) imposes a ceiling on decision density remains an open question. The tools will not do it. The market will not do it. And the individual, caught inside a loop that feels productive (if not downright necessary) at every step, is probably the least likely to do it. (I don’t know about you, but I am typically helpless at resisting a loop that is self-accelerating, intrinsically rewarding, and structurally resistant to pausing.)
Naming the problem of “decision density” in software development will not slow it down. But I am cognizant that I am especially helpless if I cannot name or define the issue.
Yegge’s proposed solution is a three-to-four-hour workday. It is conceptually sound. If the feedback loop has no internal brake — if every resolved decision generates the next one, if better tooling only increases the rate, if the loop runs the same whether decisions succeed or fail — then the only effective intervention is a hard limit on time.
But who enforces it?
The Berkeley researchers acknowledged as much when they wrote that “asking employees to self-regulate isn’t a winning strategy.” Companies cannot voluntarily capture less value from their workers — this is not what companies are designed to do. The history of limits on work intensity is not a history of enlightened blog posts. It is a history of organized labour, legislation, and decades of political struggle. The eight-hour day was imposed from outside the system because the system had no internal mechanism to produce it.