There's a particular kind of collective amnesia at work in how the software industry talks about AI-assisted development. The discourse treats this moment as unprecedented - as if the idea of machines generating code were a new and threatening concept that demands fresh scrutiny. It isn't. We've been here before, repeatedly, and our reaction then was almost uniformly enthusiastic. If you were there, you remember.
PeopleSoft. Paradox. (Ah, poor Paradox - we barely understood you.) Delphi, following Paradox with a healthy helping of Turbo Pascal 7. CASE tools. RAD environments. Fourth-generation languages. Fifth-generation languages. (Was there ever a sixth generation?) Model-driven architecture. Naked Objects. UML-to-code generators...
Each of these was explicitly marketed around the same core promise: let the machine handle the implementation so the human can focus on the problem. Business logic without programmers. Database applications without developers. Diagrams that compiled into running code. The whole history of developer tooling is, in large part, a history of trying to move humans up the abstraction stack and let machines handle what's below: that's how the first assemblers were created, then the first compilers, then the first virtual machines...
Response was ... mixed. Some of these technologies were rather famous flame-outs. Managers leaned into them because they promised speed and predictability; developers leaned into them because they reduced effort and imposed structure. Neither group was wrong, but the tools couldn’t compensate for poor specifications or incomplete understanding.
Adopting them was popular, often recommended, and often disappointing: they were aspirational, not reality. Most of us wanted to live in a world where the computer could infer what we meant - but the tools never really got close unless the problem was so simple that no tool was required.
Technology's Promise Hasn't Changed
What's actually different about LLMs isn't the direction of the promise - it's the degree to which the promise is now being kept. The prior generations of tools moved the needle some. They could produce the simplest parts of an application well - the bits a junior developer would write while learning a framework - without requiring the junior developer to actively read a tutorial.
They reduced boilerplate, accelerated scaffolding, and made certain classes of application accessible to people who couldn't write production code. But those people still couldn't write production code. We already mentioned Paradox - one of the hallmarks of PAL applications was that they felt like nothing more than relational structures displayed "in a form," because that's exactly what they were. PAL applications were fast to build, but they left out all the hard parts until Delphi came along. The interesting work - the architecture, the edge cases, the business logic that didn't fit the template - remained stubbornly beyond the reach of simple heuristics.
LLMs close that gap in ways the prior tools didn't. Not perfectly, not without verification, not without judgment - but meaningfully, across a much wider range of tasks than any previous generation of tooling managed, because they leverage the vast reserve of applications that humans have already described: if humans have told other humans how to create something well, an LLM can leverage that pattern and create something good itself.
That's what changed. The promises are the same. The landscape beneath them is different.
Responding to the Promise
If the tools can now largely fulfill the promise, the rational response is to adjust your expectations and practices to the actual landscape, not the imagined one. We still live in the real world, and we still reach for the ideal - but we account for both rather than preferring one over the other.
Concretely: with LLM tooling, the skill that matters isn't "typing code." It's specifying intent precisely enough that the machine can fulfill it, and verifying the output with enough understanding to catch what it got wrong. These are learnable skills. They're also the skills that make you a better thinker regardless of what tools you're using, because the discipline of writing a good specification is the discipline of actually understanding what you want.
If you're using a conventional compiler like GCC to compile C++, you don't see an error and tell yourself, "I have an error; let me see if I can fix it without looking at the message." You read the error message and infer what you did wrong from its content, rather than flinging more code at the compiler and praying. This is what humans do - yet it's a process we've decided to skip when an LLM is involved.
Why?
What we should be doing is validating the output - just as we would with a C++ compiler. We should demand tests - and why not? They're easy to write, especially with AI help. And we should use all of this information to refine our tests, our inputs, our outputs, and our specifications, to get as precise a design as we can. That loop compounds: each pass yields more precision and more understanding, especially if we demand test coverage.
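A minimal sketch of what that loop looks like in practice: treat the specification as a set of checks you can run mechanically, run them against the generated code, and read the failures the way you'd read a compiler error. The `slugify` function and its spec below are hypothetical stand-ins for LLM output, not anything from a real tool.

```python
def slugify(title: str) -> str:
    """Stand-in for an LLM-generated implementation under review."""
    # Lowercase, replace every non-alphanumeric character with a space,
    # then join the surviving words with hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# The specification, expressed as executable checks - the analogue of
# reading the compiler's error message instead of guessing.
spec = {
    "lowercases input": lambda: slugify("Hello World") == "hello-world",
    "collapses punctuation": lambda: slugify("C++: a tour!") == "c-a-tour",
    "handles empty input": lambda: slugify("") == "",
}

failures = [name for name, check in spec.items() if not check()]
print(failures)  # an empty list means the output met the spec
```

Each named failure goes back into the next prompt as a sharpened requirement; the spec dictionary grows as your understanding does.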
We've Been Here Before
The objection that LLM output is unreliable is fair (and observable!), but it's worth recognizing what it actually argues for. Unreliable output from underspecified input is an argument for better specifications and more careful verification - not for abandoning technology. This is the same lesson every generation of tooling taught, and every generation of developers had to learn it: the tool is not a replacement for understanding. It never was. Reaching for automation as a way to avoid thinking is how you get bad outcomes regardless of which decade's tools you're using.
The laziness failure mode is not new technology's fault. It predates all of this and will outlast all of it - and it will outlast us, too.
What AI Collapse Would Mean
There's a version of this conversation that treats a potential AI bubble - a contraction in capability or infrastructure - as vindication for the skeptics. It isn't. If that contraction happens, the failure state isn't a comfortable return to craft. It's a development ecosystem that has partially atrophied skills it no longer practiced, with demand for software that didn't contract alongside the tooling that was producing it.
That's not an argument against adoption. It's an argument for doing the transition seriously - building real specification skills, maintaining enough comprehension to verify output, not hollowing out your own understanding because the tool makes it temporarily unnecessary to exercise it. It's an argument for approaching the LLM vendors seriously, too: not as opponents or replacements, but with an eye to sustainability and legacy.
The Luddites, for the record, had a point. Industrialization did destroy real skills and real livelihoods. Their error wasn't in observing the loss - it was in assuming the destroyed skills were permanently necessary. What the industrial revolution took, it also made obsolete: we lost the ability to hand-spin thread at scale, and we lost the need to. The atrophy and the liberation arrived together. The liberation was generally considered worth it - how many readers know assembly for modern CPUs? (Why?)
The same logic applies here. Yes, a generation of developers working primarily through LLMs may atrophy certain implementation skills. That's a real loss - but only if the need for those skills at that scale survives the tools that are displacing them. (We actually do need people to know machine language.) If the tools genuinely deliver, the need contracts alongside the skill, and the loss is no loss at all. The genuinely dangerous scenario isn't atrophy - it's atrophy without delivery: skills eroded before the tools proved out, leaving a gap that neither humans nor machines can fill. That's the failure mode worth taking seriously, and it's an argument for doing this transition carefully, not for avoiding it.
(It's also an argument for keeping one's hand in: we don't need to lose those skills. It's just that the vast majority of practitioners aren't all that great at them anyway, and losing a generation of neophytes isn't as great a loss as losing the skill altogether. Keep your great coders: they're still worth their weight in gold.)
Who Farms On the Enterprise? And Why?
In Star Trek, the Enterprise crew doesn't hand-replicate their food to prove they understand molecular assembly. The replicator handling that isn't laziness - it's what frees them to do the things that actually require human judgment, the same way that a hammer and nails prevent us from having to understand how to build a complex join for wood.
If you're going "Wait, I didn't see Star Trek II: The Wrath of Khan," the crew of the Enterprise uses replicators to make food: they walk up to a machine, say "I would like lutefisk," and it creates it out of nothing for them. Then the rest of the crew goes "ewww, lutefisk" and leaves. The point is: the machine does the "hard part." The crew member makes the bad choices.
Human value in software has always been in ideation, with execution a distant second: in understanding the problem, in knowing what to build and why, in making the judgment calls that can't be automated because they require context the machine (or junior coder) doesn't have. The implementation was the tax we paid because machines couldn't handle it. For most of software history, you couldn't have the ideas without also doing the manual typing - without enduring the cost of all the shortcuts we took because there were too many holes to plug in the dike. We just slapped plywood over the leaks, hoped it was enough, and got Windows 11 as a result.
That constraint is lifting. (Well, maybe not for Copilot With Windows 365 Copilot Oh Geez Can You Just Use Copilot Please.) The question is whether the industry is willing to stop identifying with the constraint - to stop treating the tax as the skill - and start serving humanity and purpose rather than the compiler or linker.
We cheered every prior tool that promised to do this, perhaps because we suspected that the replacement left a desperate need for us to fill. But the reasonable response to a tool that can actually lighten our burdens isn't to sneer - it's to learn to write better specifications, write better concepts, and work with our tools rather than against them: to learn to wield the hammer well instead of waving it about and hoping.
Read your AI article. Pretty good.
A few comments, having spent a lot of my career in developer tools and lived through CASE tools and 4GLs too.
I’m not saying don’t use AI for coding - it’s here and it is inevitable that it will be used.
But we could use a lot less “gung ho” and a lot more people exploring the failure modes, or it’s going to take airplanes falling out of the sky and accidental missile launches and a few more Therac-25s for people to understand what it is and isn’t good for. We could avoid that, but it requires not jumping on the bandwagon with both feet.