AI Burnout and the Loss of an Ideal
Ky Decker published a post this week titled Do I belong in tech anymore? and I have read it twice. Not because I needed to understand it. Because I recognised it.
The short version: Ky is a design engineer who quit their job. Not because it was a bad job by the conventional measures — good pay, remote work, health insurance, colleagues they liked. They were doing good work. But work was making them miserable, and the primary driver was the relentless, unasked-for, unchosen presence of AI tooling in every corner of their professional life. Notes taken by bots without consent. Code reviews generated by AI and submitted without a human thought appended. Twelve thousand lines of code merged in a day without anyone having read them. An engineering culture that had stopped reading altogether because things were being generated faster than they could be consumed.
Ky handed in their resignation. They are, as they put it, “without a job. Recovering from burnout.”
I find myself feeling for them, and I am in total agreement with them. Looking around the Technodome, I honestly don’t believe I’m alone.
The Code Review Insight
One of the most important lines in Ky’s post is this:
The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be. (You know my feelings on Shadow Engineering; this is it.)
I want to dwell on this, because it explains a lot of what is being lost that does not show up in any productivity metric.
A code review performed by an AI agent and submitted by a colleague without additional thought is not a code review. It is a check that returns a result. It is not a conversation. It is not an act of teaching. It does not build the shared understanding of a codebase that makes it possible for a team to work in it without stepping on each other. It does not surface the question of why this approach was chosen over the other three that were considered. It does not transfer the context that lives in someone’s head into the repository and the people around them.
This is shadow engineering. The code review that actually teaches rather than just approves is the thing that compounds. It is the thing that brings a junior engineer closer to having senior judgment. It is the thing that means the knowledge survives when someone leaves the team. When you replace that with a generated review that nobody has read, you are not producing the same output faster. You are producing a different output that is cheaper to produce and far more expensive to recover from over time.
The AI note that Ky describes — the one that “mischaracterises what you discuss” and was turned on without asking — is the same pattern. Human note-taking in a meeting is not just about capture. It is about attention. The act of deciding what to write down is the act of deciding what matters. That is a deliberate cognitive engagement with the content of the conversation. When you outsource that to a model, you outsource the judgment about what mattered. You also outsource your obligation to be present. The cost of that distributed over many meetings across many teams is not small.
The Extraction Pattern
I have written about what happens when contributions are produced without a stake in the outcome. The AI bots flooding open source repositories are optimising for merge, not for correctness or long-term maintainability. The AI code review that Ky describes is the same problem inside a company: someone has found a way to generate the appearance of reviewing code without the cognitive cost of actually doing it. The work — the real work, the thinking — has been transferred to the maintainer, or to the team that has to live with what gets merged.
Ky lists the scenarios they encountered over the past few years, and each one is a version of this pattern. Consent not sought. Human judgment not applied. Generated output substituted for deliberate thought. The cost of each individual instance is small. The cumulative cost — the erosion of trust, the degradation of institutional knowledge, the felt sense that nothing is quite real — is enormous.
The question Ky kept having to ask — do I raise this? do I push back? will I become the “difficult” coworker? — is the tax that was being levied on them. Whether they raised it or let it slide, energy was spent. That is not a neutral outcome. That is a form of extraction, and it wore them down.
The Loss of an Ideal
Ky draws on Hannah Proctor’s work on burnout to make a distinction I think is important:
Burnout in Freudenberger’s articles from this period is not just defined in terms of physical tiredness as a result of doing too many things; rather, it emerges from emotional investment in a cause and from the disappointments that arise when flaws in a political project become apparent… a malaise arising from politically committed activities.
This is the “loss of an ideal.” Not tiredness. Not overwork. The particular kind of exhaustion that comes from caring about something and watching it become something else.
Ky describes the tech industry they joined — the version that seemed to take equity, climate, and “don’t be evil” seriously — and the one they are leaving, where CEOs smile for photos with people doing genuine harm and climate pledges are abandoned because data centres need more gas. This is not a peripheral observation. It is the frame that the whole post sits inside.
I recognise that frame. The DevOps community model I believe in — the one where capability, communication, and responsibility sit at a centre that holds — depends on people who are invested in the outcome. Who care whether the systems work well, whether the team is learning, whether the thing being built is worth building. That investment is not separable from a broader set of beliefs about what technology is for and what the people building it owe each other.
When those beliefs are betrayed — when the organisations doing this work signal clearly that the principles were expedient rather than principled — the investment collapses. Not all at once. Scenario by scenario, resignation by resignation, quiet decision by quiet decision. That is what Ky is describing. It is the cumulative result of watching the industry act against its stated values often enough that the cognitive dissonance becomes unsustainable.
What Ky Believes
Near the end of the post, Ky states five beliefs they are coalescing around:
- Things that are worth doing are worth doing well.
- Things that are done well require time and effort.
- You make meaning through the doing.
- Ideas are common; effort is not.
- There are no shortcuts.
I do not have anything to add to these. They are right. I will note, though, that every single one of them is in tension with the way AI tooling is being deployed in the scenarios Ky describes. Not because AI tooling cannot be used in ways consistent with these beliefs — I believe it can, and I write about it. But because the deployment pattern Ky describes is precisely a pattern of shortcutting: skipping the deliberate engagement, replacing the time and effort, outsourcing the meaning-making.
The fifth belief — there are no shortcuts — is not a claim that everything must be done slowly. It is a claim that the thing you skip is not nothing. That the process is part of the product. That the institutional knowledge built in a real code review is not a by-product you can safely eliminate. That the attention paid in a meeting is not overhead you can automate away.
The shortcut that looks like a gain is very often a debt, taken on against a future you will not like when it arrives.
What This Costs
I am still in this industry. I still find the work worthwhile. But Ky’s post describes something real: a cost that is being levied on people who care, at exactly the moment when the leverage of caring is being argued away.
The shadow engineering that makes organisations work — the code reviews that teach, the documentation nobody asked for but everyone needs, the pair programming that builds someone else’s capability — is hard to sustain when the culture around you has decided that generating the appearance of doing these things is close enough. It is exhausting to be the person who believes in the craft when the craft has been priced out.
Ky is asking whether they belong in tech. I do not have an answer to that. What I do know is that tech is poorer when people who hold these beliefs leave it. The people who believe that things worth doing are worth doing well, that there are no shortcuts, that meaning comes through the doing — those are exactly the people any engineering organisation should be keeping, not burning out.
Read Ky’s post. It is at ky.fyi/posts/ai-burnout. The comment thread on Hacker News is also worth your time. There are a lot of people quietly nodding along. That is not a neutral signal.