In light of what I've seen, LLM use seems to provide the illusion of addressing both kinds of complexity in the short term while increasing essential complexity in the long term. It may or may not reduce some accidental complexity; the jury is still out, as far as I can tell. Judging from the recent leak of the Claude Code front-end source code, LLMs seem to introduce an enormous amount of new accidental complexity.
To the extent that LLMs fall into the category of what Brooks calls automatic programming, this quote he shares from David Parnas is important:
Automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.

The "higher-level language" LLMs trade in is English, which Dijkstra argued strongly makes for a poor programming language. But even if, contra Dijkstra, it turns out LLMs somehow change that equation, we are still stuck with Brooks's argument that higher-level programming languages largely address only accidental complexity. They do not get at the essence of what makes creating software systems difficult.
Taken together, all this suggests that LLM use in coding is a net negative, and if the above is to be believed, it will result in worse outcomes, not better, over the long haul. Definitely re-read Brooks and see if you agree.
#AI #GenAI #GenerativeAI #AgenticAI #LLMs #ClaudeCode #software #tech #dev #SoftwareEngineering #SoftwareDevelopment