I grew up programming at a time when “lazy programmer” was a term of endearment. Lazy programmers produced the best software. They hated repetitive tasks, so they automated everything. They were annoyed at typing a lot of code or spending a lot of time on refactoring, so they invested in designs that were clear and concise and allowed for extensibility and easier maintenance. They disliked explaining code to new programmers, or watching programmers make mistakes while extending software they didn’t understand, so they made their code more explicit, beautiful, and understandable. Lazy programmers loved programming and loved solving problems, but hated doing superfluous stuff. This is the complete opposite of what I see being produced with AI.
In contrast, generative AI is ambitious. It’s full of energy. It doesn’t get tired. It’s patient. It doesn’t get annoyed. It’s indiscriminate and uncritical. It thrives on more. This is the exact opposite of what drives innovation, progress, ingenuity, creativity, and good design. Innovation and progress require someone to be annoyed with the status quo and impatient enough to create a solution. That someone also needs to be lazy. Lazy not in the sense of avoiding hard work, but lazy in the sense of hating superfluous shit and optimizing for the stuff that actually makes an impact. Annoyance, impatience, and laziness toward the superfluous are a killer creative mixture. AI has none of these properties. It feels nothing. It’s just as happy to write 10k lines of code to do something a human would find a way to do in a few hundred. It’s happy to rewrite the same program 100 times without ever truly understanding what’s happening under the hood. It has no design taste.
I see a reckoning coming. The world is producing a lot more software, but is it good software? Does it solve the problem well? Is it well designed? Is it maintainable and understandable? Does it have taste? Will humans be able to understand it, or will we just blindly drive prompts until there is no understanding left?
GenAI tools are exciting. I use them daily. Our teams are building AI-enhanced capabilities that would have been impossible just a few years ago. There’s amazing energy in the air, and it feels like the mid-90s and the birth of the web again. The possibilities are broad and deep. But I’m worried about the blind spots and blind enthusiasm I’m seeing in the industry when it comes to outsourcing human reason, creativity, thinking, and taste. The very things we see as limitations, like our impatience with bad code, our annoyance at complexity, and our laziness toward the superfluous, aren’t bugs. They’re features.
P.S. This post was not written by AI, but the image was generated by Sora :-)