Svante
There comes a time where you have to stop accepting things you can't change, and start changing things you can't accept.
Posts
Yes, »worse is better« morphed from /description/ to /prescription/. (There is a nice talk by Romeu Moura about this fallacy: https://www.youtube.com/watch?v=92Pq4-e0QyI)
In short: people erroneously move from »it's like this« to »it should be like this« or »it's inevitable like this«, and then enshrine it as a given fact, assumption or axiom instead of asking what can be done about it.
…
- LLMs generate straw-fire software: it blazes up at first, but never gets hot enough to start a real fire.
- This seems cheap in a very short-term view, and it might satisfy short-term “wants”, but it's not sustainable.
- We need to start fixing somewhere. Two holes in a bucket are not a dilemma, but two tasks.
You are stating a lot of assumptions:
- That the qualities that software exposes on the outside are largely independent of its inner workings.
- That LLMs make the creation process more efficient.
- That LLM-generated software is cheap and does what users “want.”
- That fixing one thing is not worthwhile while other things are not fixed.
But:
- Inner quality does matter a lot. JIRA, for example, draws many complaints precisely because it is poorly designed internally.
…
No, the job of actually understanding will not go away. But a lot of people will get hurt on the way to understanding that.
I believe that it is our job not to casually spread or tolerate such misconceptions, but to try to mitigate the damage.
Imagine a random PO sends you some code with the note »hey, I just made this, can you put it into prod?« Aren't you shuddering? This is just automated script kiddies, with even less cred.
You write »all of it is in service of making the human more efficient at the mechanical act of coding« — but that's not the point: it's to give the human a faster way to put their thoughts to canvas, thus reducing interruption of those thoughts, and to give them more time to /have/ thoughts.
Put that way, I don't see how LLMs help at all, and frankly, I actually do believe they don't.
A “risk” is something that might not happen.
This is not a risk, it is certain.
Better headline probably: “How the Digital Omnibus damages GDPR and ePrivacy rights”
When will I finally get a Linux phone that actually fits my pocket? Height max. 13.5 cm.
There are still parts that could be salvaged for a comparatively quick restart (compared to starting from scratch), but the people currently deciding these things are determined to make it harder every day.
It will be hilarious when they realize in 15 years that even Poland decarbonizes faster than we do.