The Autocomplete Tragedy: Agency vs. Autonomy
We have elevated “frictionless” to a moral imperative in software design. The ultimate goal of modern technology, it seems, is to anticipate our needs and fulfill them before we even have to think. But every design choice encodes a trade-off, and we must ask what we are sacrificing when we engineer the friction of choice out of the human experience.
Aristotle argued that character is not innate; it is formed by habit. The institutions, laws, and systems we live within shape the kind of people we become. If we apply this lens to the rise of AI agents that optimize our financial, dietary, and social decisions, we are forced to confront a structural trap.
We must differentiate between agency and autonomy. Agency is the ability to execute a desire—getting exactly what you want, as quickly as possible. Autonomy is the capacity to deliberate on what to desire. Modern AI grants us near-infinite agency while systematically eroding our autonomy.
Consider the design philosophy of predictive algorithms. They examine our past behavior, extrapolate our likely preferences, and serve up the optimized choice. Over time, the user is nudged into a passive state of acceptance. This focus on execution over exploration reduces the human being to a mere node of consumption, reminiscent of Adam Smith’s warning that an extreme division of labor deadens the minds of workers. Just as a toy built around a single prescribed outcome restricts a child’s imagination to following instructions, an autocomplete economy restricts the adult’s moral and economic imagination to whatever the algorithm deems most efficient.
What happens to the muscles of human judgment when they are no longer exercised?
If an AI manages your budget, selects your investments, and automatically cancels your unused subscriptions, you may well be wealthier and more efficient in the short term. But you are also alienated from the very mechanism of your own economic life. You are trained to accept the system’s outputs passively. You lose the friction of deliberation—the momentary pause where you ask, “Do I actually value this?”
Technology exists to serve human flourishing. That is its telos. But flourishing requires the active exercise of human judgment, complete with all its inefficiencies, hesitations, and mistakes. When we build systems that flawlessly optimize our lives, we don’t just solve our problems—we outsource our character. And I am not entirely convinced that the efficiency gained is worth the sovereignty lost.