The LLM topic has been all over my Mastodon feed for months. I find the consequences of LLM adoption depressing overall, with all the damage it is causing across several segments of our societies worldwide.
Until now, I have been ignoring LLMs, but their use is increasing among my company's customers, which means I can no longer ignore the topic entirely.
I mostly observe LLM use by people who don't write programs regularly, using these tools to fill gaps in their own skills or available time, with variable success.
The only LLM-related work item I have accepted so far is reviewing LLM-generated security bug reports: someone else runs various AI tools to scan open source projects, sends us the reports, and, with respect for our time (unlike some other people who just spam open source projects with such reports), pays me and another open source developer to take a look at them.
Most of these reports are garbage and get discarded. About 1 or 2 in 25 reports are on to something. We write the required fixes the good old-fashioned way.
Every now and then over more than a decade, I have reviewed reports from code scanners. The only thing that is new to me here is the entanglement of the code-scanning tool with all the harmful side effects and consequences of its existence.
I haven't yet received reports of significantly higher quality than what I saw before LLMs. A big problem is that the severity of the reported bugs is often blown out of proportion, which can lead to poor judgement or even panic when non-experts evaluate such reports without a sufficiently critical lens.
Reluctantly setting aside the larger issues surrounding LLMs, reviewing code-scanner output is as far as I will go along with this, and no further.
My company is now borrowing the EU's "Certified Organic" logo to deter potential clients who would require the use of LLMs. I hope this gets the point across without having to mention LLMs or "AI" explicitly, because I am very much sick of seeing them mentioned everywhere.