AI Code and the Broken Reward System
We are living through a strange inversion in software engineering. AI tools have made it trivially easy to produce code — any code — at incredible speed. What used to take an engineer hours of careful thought can now be generated in seconds with a well-crafted prompt. On the surface, this looks like pure productivity gain. In practice, it is creating a cultural problem.
The Effort Has Shifted
Here is what I see happening: an engineer uses an AI assistant to generate a large code change. They spend most of their time on the prompt, maybe a few iterations, and then submit the result for code review. The code compiles. It might even pass the tests. But it is slop — hard to read, poorly structured, full of subtle issues that only become apparent when a human with real experience looks at it carefully.
That human is the reviewer.
The reviewer now has to read and understand code they did not write, in a style they did not choose, often solving a problem in a way they would not have approached. They have to identify what is wrong, explain why it is wrong, suggest how to fix it, and then wait for the author to relay that feedback to their AI tool for another round. Rinse, repeat.
What used to be a collaborative process has turned into something resembling outsourcing — except the outsourcing goes in the wrong direction. The experienced engineer, the one who actually understands the system, is now doing the heavy cognitive lifting. But they are doing it in the role of reviewer, not author. The credit, the impact metrics, the launch — all of that goes to the person who typed the prompt.
The Reward Problem
In most tech companies, performance is measured by impact. Ship features, land code, deliver results. The person whose name is on the code change is the one who gets credit for it. This has always been a simplification, but it worked well enough when writing code required deep understanding of the problem and the system.
Now that equation is broken. If the real intellectual work — the understanding, the quality enforcement, the architectural judgment — is happening on the reviewer side, but we keep rewarding the author, we have built a system that incentivizes the wrong behavior. We are rewarding people for generating volume and penalizing the people who ensure quality.
This is not just a code review problem. The same dynamic plays out with dashboards, design documents, configuration changes — anything where AI makes production easy but evaluation hard. If all the author does is generate output with an AI and then rely on experienced colleagues to catch problems and suggest fixes, what exactly is the author contributing?
Push Responsibility Back to the Author
I think we need a cultural shift. The responsibility for producing good, reviewable, well-understood code needs to stay with the author. Using AI tools is fine — great, even. But if you cannot explain every line of your change, if you cannot articulate why your approach is correct, if you need the reviewer to essentially rewrite your logic through review comments, then you are not ready to submit that change.
The bar for code review should not drop just because producing code got easier. If anything, it should go up. Authors should be expected to demonstrate understanding, not just output.
And if it turns out that the experienced reviewer could have written the whole thing faster and better themselves — then maybe that is exactly what should happen. There is no productivity gain in having a junior engineer prompt-generate code that a senior engineer has to painstakingly fix through five rounds of review comments. That is just the senior engineer writing the code with extra steps.
Prompting an AI and clicking “submit” is not the same as engineering. The tools are powerful, but they do not absolve the author of the responsibility to understand and own their work. Our reward systems need to reflect where the real effort and judgment lie — or we will end up in a world where the people who actually keep the code working get no credit for it.