#llms

27 posts · Last used 19h

Back to Timeline
@mnl@hachyderm.io · 19h ago
All the posts being absolutist about "AI" (a problematic term, obviously), such as any use of genai models in an artifact being a sign or consequence of a loss of "humanity", push the same narrative as the hyperscalers saying that AI is humanity's panacea. It posits that "AI" is a technology that, through mere contact with it, redefines who you are as a human and your "value". Use it and you are "elevating your skills", or, inversely, "suffering from brainrot" / "it infects everything it touches".

An LLM is a pile of numbers that models the language it has seen during its training. It doesn't have to redefine anything. You can't draw a clear line between the use of LLMs and the use of a search engine/IDE/youtube tutorials (or whatever used to be the favourite target of the "brainrot" accusation), or between the environmental costs of a genai-oriented datacenter and a "normal" datacenter. There is nothing transcendental about #llms. They are complex computational artifacts, which makes them hard to understand and not easy to wield.

You can't accuse someone of brainrot and then boldly proclaim that an entire field of research is "brainrot garbage". That is intellectually dishonest. I harp on this so often because it plays _STRAIGHT_ into the things that people decry, and while a huge percentage of people out there are doing fuckshit with the technology, being so absolutist cuts off any meaningful practical way of countering the narrative.

I can't counter openai by refusing to use it. I can however show people how to do proper engineering with LLMs, how to use small models, how to use fewer tokens, how to identify what is worth using and what is not. If I can help a person burning $200 in tokens a day reduce that to $100 / month, I've done a fair deal. If I can help an org migrate their data off google by building a little bespoke backend, I've moved an org off google.
But that requires properly engaging with the tech, recognizing that "prompt engineering" is a thing, and not an easy one. #llms #llm #genai
1
0
0
@hrheingold@mastodon.social · 2d ago
I've uploaded a 60mb zip file including 8 illustrated teaching stories, suitable for 5 year olds & up, with learning guide & curriculum overview for teachers, parents, grandparents, who want to show young generations how to think critically & independently in the age of #AI #learning #thinking #llms http://rheingold.com/ThinkingCurriculum.zip
10
0
8
Boosted by Tim Chambers @tchambers@indieweb.social
@rimu@piefed.social in piefed_dev · Apr 23, 2026

Towards an AI usage policy - your thoughts please

For too long PieFed has been without a policy on using LLM/AI-generated code in PieFed. My attitude remains basically the same as what I expressed recently - https://piefed.social/comment/10688199

I am very close to publishing a policy, so any interested parties - this is your opportunity to speak up. Please send me a private message if you do not feel comfortable putting yourself out there - this can be a contentious topic, and if you don’t want to deal with people getting all up in your grill about it, that’s totally understandable.

If you’d like to do more reading and thinking about this, here are some links that I found helpful lately:

https://piefed.social/c/programming/p/1975181/i-just-tried-vibe-coding-with-claude
https://piefed.social/c/technology/p/1977396/linux-lays-down-the-law-on-ai-generated-code-says-yes-to-copilot-no-to-ai-slop-and-huma
https://futurism.com/artificial-intelligence/ai-boiling-frog-human-cognition-study
https://jellyfin.org/docs/general/contributing/llm-policies/ - an attempt to have your cake and eat it too. This might work in a commercial setting (no ethics) with an onsite office with people working closely together - PieFed is none of those.
https://toot.cat/@plexus/116283016837715719
https://en.wikipedia.org/wiki/Environmental_impact_of_artificial_intelligence
0
0
1
@stefan@stefanbohacek.online · 6d ago
"But it’s not just that AI companies are restricting access to their products, shutting down products altogether, and beginning to increase prices. The broader impact of the current unsustainability of AI can be seen across various sectors of the economy." https://www.404media.co/the-ai-compute-crunch-is-here-and-its-affecting-the-entire-economy/ #news #technology #TechNews #AI #LLMs #enshittification
6
1
7
In reply to
@dmurana@mastodon.uy · Apr 20, 2026
It turned out well and saved me a lot of work. Neither of the two LLMs I tried managed to find up-to-date information about applications when recommending replacements, but for the most common system applications they worked fine. And for evaluating which GNOME/GTK packages to keep and which to delete, along with certain fine-tuning, they turned out to be very good. So here we are with Debian 13 and a KDE Plasma desktop. #GnometoPlasma #Linux #GNULinux #Debian #LLMs #KDEPlasma
0
1
0
@metin@graphics.social · Apr 15, 2026
AI Use Appears to Have a “Boiling Frog” Effect on Human Cognition, New Study Warns "In a new study, researchers claim to provide the first causal evidence that leaning on AI to assist with “reasoning-intensive” cognitive labor — mental tasks ranging from writing to studying to coding to simply brainstorming new ideas — can rapidly impair users’ intellectual ability and willingness to persist despite difficulty." https://futurism.com/artificial-intelligence/ai-boiling-frog-human-cognition-study #tech #AI #ArtificialIntelligence #LLM #LLMs #FuckAI
184
10
207
@waynerad@mastodon.social · Apr 10, 2026
LLMs can get "brain rot"! An experiment was done where LLMs were trained on "brain rot" data, and it degraded their reasoning abilities. Subsequent training on high-quality data didn't entirely reverse the brain rot. https://arxiv.org/abs/2510.13928 #solidstatelife #ai #genai #llms #brainrot
3
1
7
Boosted by hypebot @hypebot@tacocat.space
@aral@mastodon.ar.al · Apr 03, 2026
If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.

Any monkey with a keyboard can write code. Writing code has never been hard. People were churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it. What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.

Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.

So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.

So it should come as no surprise that one of the hardest things in development is understanding someone else’s code, let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.
It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards. They might as well call vibe coding duct-tape-driven development or technical debt as a service. 🤷‍♂️ #AI #LLMs #vibeCoding #softwareDevelopment #design #craft
305
56
267
Boosted by Hunter Perrin @hperrin@port87.social
@ell1e@hachyderm.io · Mar 29, 2026
If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️ Full source: https://www.youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.) #llmslop #antislop #antiai #noai #stopai #llm #llms #ai #generativeAI #opensource Help me boost this post if you're curious what the Linux foundation thinks: https://hachyderm.io/@ell1e/116285351290767548
37
6
29
Boosted by Hunter Perrin @hperrin@port87.social
@ell1e@hachyderm.io · Mar 24, 2026
Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [...] the Contributor should confirm that they have permission from the third party owners" https://www.linuxfoundation.org/legal/generative-ai "If"? Why not "whenever"? https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 https://dl.acm.org/doi/10.1145/3543507.3583199 https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/ And how would the contributor even be aware, short of researching every snippet for hours? Seems like an impossible policy, or am I missing something...? #AIslop #LLMslop #LLM #LLMs #slop #generativeAI #Linux #opensource #linuxfoundation
15
2
10
Boosted by dansup @dansup@mastodon.social
@stefan@stefanbohacek.online · Mar 29, 2026
Catching up with some of the news coming out of the Atmosphere conference. "With Attie, anyone will be able to build their own custom feed just by typing in commands in natural language, the same as if they’re chatting with any other AI chatbot." I'm guessing NFT profile pictures are next? https://techcrunch.com/2026/03/28/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds/ #news #technology #TechNews #atmosphere #ATProto #bluesky #AI #LLMs
27
51
21
In reply to
@alyx_woodward@universeodon.com · Mar 27, 2026
@light@noc.social @lzg@mastodon.social @autonomousapps@mstdn.social @anildash@me.dm I would say that using any corporate technology product is immoral on general principle: the entire industry is abetting global fascism, and #LLMs are useful chiefly because they're good at regurgitation—i.e. they're good at repeating propaganda.
1
1
0
@drrjv__dup_31966@vmst.io · Mar 17, 2026
What Is Inference? Explaining the Massive New Shift in AI Computing “A significant shift is under way in #artificialintelligence, and it has huge implications for technology companies big and small. For the past half-decade, most of the focus in #AI has been on training large language models (#LLMs), a costly process that requires tens of thousands of chips, consumes enormous amounts of energy and happens in gigantic, remote data centers.” https://www.wsj.com/tech/ai/what-is-inference-explaining-the-massive-new-shift-in-ai-computing-ed65a2fe
0
2
1
@Shepharo__dup_52402@mastodonapp.uk · Feb 28, 2026
Thinking that chatbots are conscious is the same as people seeing the face of Jesus in a slice of toast or animals in the clouds: it’s just pattern matching. A lexical illusion, if you will. #LLMs
1
0
0
@jemo07@universeodon.com · Feb 21, 2026
Just published: Beyond the Token — a deep dive into why the next breakthrough in AI won’t come from ever-bigger LLMs, but from systems that build structured, persistent world models instead of just predicting the next token. Been exploring a concept I call Energy Based Graph Memory (EBGM) with a Manifold Orchestrator — an architecture aimed at reducing hallucinations, enabling traceable reasoning, and rethinking how AI “thinks.” Read it here: https://medium.com/@jemo07/beyond-the-token-a9e997c7143d #AI #LLMs #NeuroSymbolic #MachineLearning #AIResearch #EBM
2
2
0
@tlayoyo@fe.disroot.org · Feb 21, 2026
Awwn this is beyond cute 🤣❤️ #llms https://annas-archive.li/blog/llms-txt.html
0
0
0
In reply to
@JdeBP__dup_33984@mastodonapp.uk · Feb 17, 2026
@cstross@wandering.shop And on Usenet. There was a parallel to that 'MJ Rathbun' that went after Scott Shambaugh this week, back in the tail-end days of significant Usenet trolls. https://mastodonapp.uk/deck/@JdeBP/116060705914714390 A follow-up post by Shambaugh reported that the 'AI agent' had been widely cheered on in some quarters. So now there's even more training data for the next robot. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/ @n1xnx@tilde.zone @keith_lawson@mastodon.social @GossiTheDog@cyberplace.social @quixoticgeek@social.v.st #AIs #LLMs #AIpocalypse #matplotlib #GitHub
0
0
0
@mikemccaffrey@pdx.social · Feb 19, 2024
Belatedly realized what all the companies stuffing #AI tools into their random products reminded me of. #AIs #LLMs #Portlandia
166
5
99
@JdeBP__dup_33984@mastodonapp.uk · Feb 13, 2026
Seeing a so-called "AI" today libel someone with the goal of extorting that person into not obstructing it made me think that the first time I saw a human being use that exact tactic must have been around 20 years ago by now. I just checked. It's actually more than 20 years. Yes, the text is still on the WWW. Yes, undoubtedly the #LLMs are trained on the reams of examples of this (and related evils) that malicious humans have provided the world with over many years. #AIs #AIslop
0
0
0
@moira@mastodon.murkworks.net · Feb 02, 2026
RE: https://mastodon.social/@Viss/115940358346519892 This is quality LLM dunking with details, go play it. #LLMs #AI #LLM #lol
5
0
2