Vibe Coding - The Emperor's New Code
I grew up programming at a time when “lazy programmer” was a term of endearment. Lazy programmers produced the best software. They hated repetitive tasks, so they automated everything. They were annoyed by having to type a lot of code or spend a lot of time refactoring, so they spent time thinking about design that was clear and concise and allowed for extensibility and easier maintenance. They disliked explaining code to new programmers, or watching programmers make mistakes when extending software they didn’t understand, so they made code more explicit, beautiful, and understandable. Lazy programmers loved programming and loved solving problems, but hated doing superfluous stuff. This is the complete opposite of what I’m seeing being produced with AI.
In contrast, generative AI is ambitious. It’s full of energy. It doesn’t get tired. It’s patient. It doesn’t get annoyed. It’s indiscriminate and uncritical. It thrives on more. This is the exact opposite of what drives good design. Good code comes from someone annoyed enough by complexity to simplify it, impatient enough with tedious work to automate it, and lazy enough to eliminate superfluous effort. Annoyance, impatience, and laziness toward the superfluous make a killer creative mixture that drives good design and progress. AI has none of these properties. It feels nothing. It’s just as happy to write 10k lines of code to do something a human would find a way to do in a few hundred. It’s happy rewriting the same program 100 times, without truly understanding what’s happening under the hood. It has no design taste.
I see a reckoning coming. The world is producing a lot more software, but is it good software? Does it solve the problem well? Is it well designed? Is it maintainable and understandable? Can it be easily extended? Does it have taste?
GenAI tools are exciting. I use them daily. Our teams are building AI-enhanced capabilities that would have been impossible just a few years ago. There’s amazing energy in the air, and it feels like the mid-90s and the birth of the web again. The possibilities are broad and deep. But I’m worried about the blind spots and blind enthusiasm I’m seeing in the industry when it comes to outsourcing human reason, creativity, thinking, and taste. The very things we see as limitations, like our impatience with bad code, our annoyance at complexity, and our laziness toward the superfluous, aren’t bugs. They’re features.
P.S. This post was not written by AI, but the image was generated by Sora :-)


Loved getting your take on this. If your LLM never tires and never gets fed up with any task, how do you train it to produce the elegance that comes from human “laziness”? 🙂
This post really resonates with me because it highlights a fundamental tension between human creativity and the capabilities of generative AI. You’re absolutely right: what you’re describing as “lazy programming” isn’t laziness in the negative sense. It’s a kind of productive dissatisfaction. It’s that itch to simplify, to streamline, to make things better, not just for yourself but for the people who come after you. That’s where great design comes from: the struggle to eliminate friction and make progress.
AI, on the other hand, doesn’t have that struggle. It doesn’t feel the pain of complexity or the frustration of inefficiency. It doesn’t care whether the code it generates is elegant or maintainable. It just produces. And while that can be incredibly powerful, it also introduces a real risk: the loss of intentionality. When humans write code, there’s a purpose behind every line. There’s thought, context, and, as you said, taste. AI lacks that; it’s indiscriminate. It generates without judgment, and that’s where the cracks start to show.
The reckoning you’re talking about is real. We’re at a point where the sheer volume of software being produced is staggering, but volume doesn’t equal quality. The questions you’re asking (“Is it good software? Does it solve the problem well? Is it maintainable?”) are the right ones. They’re the questions that force us to slow down and think critically about what we’re building and why. And that’s where humans still have the edge. AI can assist, but it can’t replace the human ability to wrestle with trade-offs, to simplify complexity, and to design with empathy.
That said, I don’t think this is an either/or situation. AI isn’t going away, and it shouldn’t. The energy and possibilities it brings are undeniable. But we need to be intentional about how we use it. Instead of outsourcing our creativity and judgment, we should be using AI to amplify them. Let it handle the repetitive, tedious tasks so we can focus on the higher-order work—the thinking, the designing, the simplifying.
The key is to treat AI as a tool, not a crutch. It’s there to support us, not replace us. And that means we need to stay vigilant. We need to keep asking the hard questions, keep pushing for better design, and keep valuing the very human qualities—like impatience, annoyance, and laziness—that drive innovation. Because at the end of the day, it’s not just about producing more software. It’s about producing software that makes progress, that solves real problems, and that stands the test of time. That’s something AI can’t do on its own. It needs us for that.