Elektrine

MangoCats

@MangoCats@feddit.it
lemmy 0.19.16
0 Followers
0 Following
Joined February 05, 2025

Posts

Thread context 2 posts in path
Parent @Bronzebeard@lemmy.zip
@MangoCats@feddit.it in technology · Feb 28, 2026
Are you sure?
Thread context 2 posts in path
Parent @Zos_Kia@jlai.lu
@MangoCats@feddit.it in technology · Feb 28, 2026
I feel that a lot of what is improving in the recent batch of model releases is the vetting of their training data - basically the opposite of model collapse. Nothing requires an LLM to train on the entire internet.
Thread context 4 posts in path
Root @JensSpahnpasta@feddit.org
I really would like to know if AAA games are bombing because they are overpriced microtransaction hell or if they are bombing because many people haven’t been able to buy their new gaming PC because o
Ancestor 2 @devolution@lemmy.world
The micro transactions and shittiness mainly.
Parent @warm@kbin.earth
That people keep buying into... so the cycle continues.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 17, 2025
That people keep buying into… so the cycle continues. More’s the shame. Our last console was a PS3; it was such a non-fun waste of time that we never bought into the 4 or 5. I used to buy a new PC title a year or so before then, really none new since StarCraft II.
Thread context 4 posts in path
Root @Devial@discuss.online
Which of the letters in CSAM stand for images then ?
Ancestor 2 @bobzer@lemmy.zip
Material.
Parent @Devial@discuss.online
Material can be anything. It can be images, videos theoretically even audio recordings. Images is a relevant and sensible distinction. And judging by the downvotes you’re collecting, the majority of p
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 12, 2025
Material can be anything. And, if you’re trying to authorize law enforcement to arrest and prosecute, you want the broadest definitions possible.
Thread context 4 posts in path
Root @cupcakezealot@piefed.blahaj.zone
so they got mad because he reported it to an agency that actually fights csam instead of them so they can sweep it under the rug?
Ancestor 2 @Devial@discuss.online
They didn’t get mad. Did you even read my comment ?
Parent @cupcakezealot@piefed.blahaj.zone
they obviously did if they banned him for it; and if they’re training on csam and refuse to do anything about it then yeah they have a connection to it.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 12, 2025
Google doesn’t ban for hate or feels, they ban by algorithm. The algorithms address legal responsibilities and concerns. Are the algorithms perfect? No. Are they good? Debatable. Is it possible to replace those algorithms with “thinking human beings” that do a better job? Also debatable, from a legal standpoint they’re probably much better off arguing from a position of algorithm vs human training.
Thread context 4 posts in path
Root @forkDestroyer@infosec.pub
I’m being a bit extra but… Your statement: The article headline is wildly misleading, bordering on being just a straight up lie. The article headline: A Developer Accidentally Found CSAM in AI Data. G
Ancestor 2 @Blubber28@lemmy.world
This is correct. However, many websites/newspapers/magazines/etc. love to get more clicks with sensational headlines that are technically true, but can be easily interpreted as something much more sin
Parent @obsoleteacct@lemmy.zip
It is a terrible headline. It can be debated whether it’s intentionally misleading, but if the debate is even possible then the writing is awful.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 12, 2025
if the debate is even possible then the writing is awful. Awfully well compensated in terms of advertising views as compared with “good” writing. Capitalism in the “free content market” at work.
Thread context 4 posts in path
Root @Devial@discuss.online
The article headline is wildly misleading, bordering on just a straight up lie. Google didn’t ban the developer for reporting the material, they didn’t even know he reported it, because he did so anon
Ancestor 2 @forkDestroyer@infosec.pub
I’m being a bit extra but… Your statement: The article headline is wildly misleading, bordering on being just a straight up lie. The article headline: A Developer Accidentally Found CSAM in AI Data. G
Parent @Blubber28@lemmy.world
This is correct. However, many websites/newspapers/magazines/etc. love to get more clicks with sensational headlines that are technically true, but can be easily interpreted as something much more sin
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 12, 2025
can be easily interpreted as something… This is pretty much the art of sensational journalism, popular song lyric writing and every other “writing for the masses” job out there. Factual / accurate journalism? More noble, but less compensated.
Thread context 3 posts in path
Root @themachinestops@lemmy.dbzer0.com
Parent @Devial@discuss.online
The article headline is wildly misleading, bordering on just a straight up lie. Google didn’t ban the developer for reporting the material, they didn’t even know he reported it, because he did so anon
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 12, 2025
Google’s only failure here was to not unban on his first or second appeal. My experience of Google and the unban process is: it doesn’t exist, never works, doesn’t even escalate to a human evaluator in a 3rd world sweatshop - the algorithm simply ignores appeals inscrutably.
Thread context 4 posts in path
Root @JcbAzPx@lemmy.world
My instructions are copyright by me First, how much that is true is debatable. Second, that doesn’t matter as far as the output. No one can legally own that.
Ancestor 2 @MangoCats@feddit.it
First, how much that is true is debatable. It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the wor
Parent @JcbAzPx@lemmy.world
You obviously didn’t even glance at the case law. No one can own what AI produces. It is inherently public domain.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 11, 2025
The statement that “No one can own what AI produces. It is inherently public domain” is partially true, but the situation is more nuanced, especially in the United States. Here is a breakdown of the key points:
  • Human authorship is required: In the U.S., copyright law fundamentally requires a human author. Works generated entirely by an AI, without sufficient creative input or control from a human, are not eligible for copyright protection and thus fall into the public domain.
  • “Sufficient” human input matters: If a human uses AI as an assistive tool but provides significant creative control, selection, arrangement, or modification to the final product, the human’s contributions may be copyrightable. The U.S. Copyright Office determines the “sufficiency” of human input on a case-by-case basis.
  • Prompts alone are generally insufficient: Merely providing a text prompt to an AI tool, even a detailed one, typically does not qualify as sufficient human authorship to copyright the output.
  • International variations: The U.S. stance is not universal. Some other jurisdictions, such as the UK and China, have legal frameworks that may allow for copyright in “computer-generated works” under certain conditions, such as designating the person who made the “necessary arrangements” as the author.
In summary, purely AI-generated content generally lacks copyright protection in the U.S. and is in the public domain. However, content where a human significantly shapes the creative expression may be copyrightable, though the AI-generated portions alone remain unprotectable.
To help you understand the practical application, I can explain the specific requirements for copyrighting a work that uses both human creativity and AI assistance. Would you like me to outline the specific criteria the U.S. Copyright Office uses to evaluate “sufficient” human authorship for a project you have in mind?
Use at your own risk, AI can make mistakes, but in this case it agrees 100% with my prior understanding.
Thread context 4 posts in path
Root @theneverfox@pawb.social
AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time
Ancestor 2 @MangoCats@feddit.it
It’s good at writing it, ideally 50-250 lines at a time I find Claude Sonnet 4.5 to be good up to 800 lines at a chunk. If you structure your project into 800ish line chunks with well defined interfac
Parent @theneverfox@pawb.social
Okay, but if it’s writing 800 lines at once, it’s making design choices. Which is all well and good for a one off, but it will make those choices, make them a different way each time, and it will name
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 11, 2025
“but it will make those choices, make them a different way each time” That’s a bit of the power of the process: variety. If the implementation isn’t ideal, it can produce another one. In theory, it can produce ten different designs for any given solution, then select the “best” one by whatever criteria you choose, if you’ve got the patience to spell it all out. The AI can’t remember how it did it, or how it does things. Neither can the vast majority of people after several years go by. That’s what the documentation is for.
“2000 lines is nothing.” Yep. It’s also a huge chunk of example to work from and build on. If your designs are highly granular (in a good way), most modules could fit under 2000 lines.
“My main project is well over a million lines” That should be a point of embarrassment, not pride. My sympathies if your business really is that complicated. You might ask an LLM to start chipping away at refactoring your code to collect similar functions together to reduce duplication.
“But we can and do it to meet the needs of the customer, with high stakes, because we wrote it. These days we use AI to do grunt work, we have junior devs who do smaller tweaks.” Sure. If you look at bigger businesses, they are always striving to get rid of “indispensable duos” like you two. They’d rather pay 6 run-of-the-mill, hire-more-any-day-of-the-week developers than two indispensables. And that’s why a large number of management types who don’t really know how it works in the trenches are falling all over themselves trying to be the first to fly a team that “does it all with AI, better than the next guys.” We’re a long way from that being realistic. AI is a tool: you can use it for grunt work, you can use it for top-level design, and everything in between. What you can’t do is give it 25 words or less of instruction and expect to get back anything of significant complexity.
“That 2000 line limit becomes 1 million lines of code when every four lines of the root module describes another module. If an AI is writing code a thousand lines at a time, no one knows how it works.” Far from it. Compared with code I get to review out of India, or Indiana, 2000 lines of AI code is just as readable as any 2000 lines I get out of my colleagues. Those colleagues also make the same annoying deviations from instructions that AI does; the biggest difference is that AI gets its wrong answer back to me within 5-10 minutes. Indiana? We’ve been correcting and recorrecting the same architectural implementation for the past 6 months. They had a full example in C++; they are going to “translate it to Rust” for us. I figured, since it took me about 6 weeks total to develop the system from scratch, with a full example like they have they should be well on their way in 2 weeks. Yeah, nowhere in 2 weeks, so I do a Rust translation for them in the next two weeks, show them. O.K., we see that, but we have been tasked to change this aspect of the interface to something undefined, so we’re going to do an implementation with that undefined interface… and so I refine my Rust implementation to a highly polished example ready for any undefined interface you throw at it within another 2 weeks, and Indiana continues to hack away at three projects simultaneously, getting nowhere equally fast on all 3. It has been 7 months now; I’m still reviewing Indiana’s code and reminding them, like I did the AI, of all the things I have told them six times over the past 7 months that they keep drifting off from.
Thread context 4 posts in path
Root @JcbAzPx@lemmy.world
AI doesn’t get IP protections.
Ancestor 2 @MangoCats@feddit.it
Nobody is asking it to (except freaks trying to get news coverage.) It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did it based on my instructions. My instructions
Parent @JcbAzPx@lemmy.world
My instructions are copyright by me First, how much that is true is debatable. Second, that doesn’t matter as far as the output. No one can legally own that.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 10, 2025
First, how much that is true is debatable. It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright. Second, that doesn’t matter as far as the output. No one can legally own that. Idealistic notions aside, this is no different than PIXAR owning the Renderman output that is Toy Story 1 through 4.
Thread context 4 posts in path
Root @JcbAzPx@lemmy.world
If you outsource you could at least sure them when things go wrong. Good luck doing that with AI. Plus you can own the code if a person does it.
Ancestor 2 @MangoCats@feddit.it
If you outsource you could at least sure them when things go wrong. Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue. Plus you can own the code if a person
Parent @JcbAzPx@lemmy.world
AI doesn’t get IP protections.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
Nobody is asking it to (except freaks trying to get news coverage.) It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did it based on my instructions. My instructions are copyright by me, the gcc interpretation of them is a derivative work covered by my rights in the source code. When a painter paints a canvas, they don’t record the “source code” but the final work is also still theirs, not the brush maker or the canvas maker or paint maker (though some pigments get a little squirrely about that…)
Thread context 4 posts in path
Root @PoliteDudeInTheMood@lemmy.ca
I don’t know how that happens, I regularly use Claude code and it’s constantly reminding me to push to git.
Ancestor 2 @MangoCats@feddit.it
As an experiment I asked Claude to manage my git commits, it wrote the messages, kept a log, archived excess documentation, and worked really well for about 2 weeks. Then, as the project got larger, t
Parent @PoliteDudeInTheMood@lemmy.ca
The longer the project the more stupid Claude gets. I’ve seen it both in chat, and in Claude code, and Claude explains the situation quite well: Increased cognitive load: Longer projects have more sta
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
Yeah, context management is one big key. The “compacting conversation” hack is a good one, you can continue conversations indefinitely, but after each compact it will throw away some context that you thought was valuable. The best explanation I have heard for the current limitations is that there is a “context sweet spot” for Opus 4.5 that’s somewhere short of 200,000 tokens. As your context window gets filled above 100,000 tokens, at some point you’re at “optimal understanding” of whatever is in there, then as you continue on toward 200,000 tokens the hallucinations start to increase. As a hack, they “compact the conversation” and throw out less useful tokens getting you back to the “essential core” of what you were discussing before, so you can continue to feed it new prompts and get new reactions with a lower hallucination rate, but with that lower hallucination rate also comes a lower comprehension of what you said before the compacting event(s). Some describe an aspect of this as the “lost in the middle” phenomenon since the compacting event tends to hang on to the very beginning and very end of the context window more aggressively than the middle, so more “middle of the window” content gets dropped during a compacting event.
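The compacting behavior described above can be caricatured in a few lines. This is a toy sketch only, not Anthropic's actual compaction algorithm: the token budget, the head/tail retention counts, and measuring "tokens" as string length are all illustrative assumptions. It just shows mechanically why middle-of-the-window content is the first to disappear:

```python
# Toy model of conversation compaction: keep the start and end of the
# context, drop messages from the middle until the budget is met.
# This is an illustration of the "lost in the middle" effect, not any
# real model's algorithm.

def compact(messages, budget, head=2, tail=2):
    """Drop middle messages until the total length fits the budget."""
    msgs = list(messages)
    # Pop from the middle outward, preserving the first and last turns.
    while sum(len(m) for m in msgs) > budget and len(msgs) > head + tail:
        msgs.pop(len(msgs) // 2)
    return msgs

history = [f"turn-{i}" for i in range(10)]
kept = compact(history, budget=42)
# Earliest and latest turns survive; middle turns are gone.
print(kept)
```

Running the sketch, `turn-0` through `turn-3` and the final turns survive while `turn-4` to `turn-6` are dropped, which is the shape of the phenomenon described: the compaction hangs on to the edges of the window at the expense of the middle.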
Thread context 4 posts in path
Root @Suffa@lemmy.wtf
AI is really great for small apps. I’ve saved so many hours over weekends that would otherwise be spent coding a small thing I need a few times whereas now I can get an AI to spit it out for me. But a
Ancestor 2 @victorz@lemmy.world
What kind of small things have you vibed out that you needed?
Parent @6nk06@sh.itjust.works
I’m curious about that too since you can “create” most small applications with a few lines of Bash, pipes, and all the available tools on Linux.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
Depends on how demanding you are about your application’s deployment and finish. Do you want that running on an embedded system with specific display hardware? Do you want that output styled a certain way? AI/LLMs are getting pretty good at taking those few lines of Bash, pipes, and other tools’ concepts, translating them into a Rust, or C++, or Python, or what-have-you app, and running them in very specific environments. I have been shocked at how quickly and well Claude Sonnet styled an interface for me, based on a cell-phone snapshot of a screen that I gave it with the prompt “style the interface like this.”
Thread context 4 posts in path
Root @mjr@infosec.pub
So the claim is it’s easier to Claudge a whole new app than to make a personal fork of one that works? Sounds unlikely.
Ancestor 2 @MangoCats@feddit.it
Depends entirely on the app.
Parent @mjr@infosec.pub
Yeah, that’s fair. In a minority of cases, with a certain app and needs to modify it to do your task, it may be true. Still rare.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
I don’t know how rare it is today. What I do know is that it’s less rare today than it was 3 months ago, and it was rarer still 3 months before that…
Thread context 4 posts in path
Root @JcbAzPx@lemmy.world
I think the point is that someone should understand the code. In this case, no one does.
Ancestor 2 @MangoCats@feddit.it
I think the point is that someone should understand the code. In this case, no one does. Big corporations have been pushing for outsourcing software development for decades, how is this any different?
Parent @JcbAzPx@lemmy.world
If you outsource you could at least sure them when things go wrong. Good luck doing that with AI. Plus you can own the code if a person does it.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
“If you outsource you could at least sue them when things go wrong.” Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue. “Plus you can own the code if a person does it.” I’m not aware of any ownership issues with code I have developed using Claude, or any other agents. It’s still mine, all the more so because I paid Claude to write it for me, at my direction.
Thread context 4 posts in path
Root @kahnclusions@lemmy.ca
Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it
Ancestor 2 @MangoCats@feddit.it
constantly fail to even compile because, for example, they mix usages of different SDK versions Try an agentic tool like Claude Code - it closes the loop by testing the compilation for you, and fixing
Parent @III@lemmy.world
The LLM comparison to a team of human developers is a great example. But like outsourcing your development, LLM is less a tool and more just delegation. And yes, you can dig in deep to understand all
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
the sell is that you can save time How do you know when salespeople (and lawyers) are lying? It’s only when their lips are moving. developers are being demanded to become fractional CTOs by using LLM because they are being measured by expected productivity increases that limit time for understanding. That’s the kind of thing that works out in the end. Like outsourcing to Asia, etc. It does work for some cases, it can bring sustainable improvements to the bottom line, but nowhere near as fast or easy or cheaply as the people selling it say.
Thread context 4 posts in path
Root @jj4211@lemmy.world
So if it can be vibe coded, it’s pretty much certainly already a “thing”, but with some awkwardness. Maybe what you need is a combination of two utilities, maybe the interface is very awkward for your
Ancestor 2 @utopiah@lemmy.world
If I understand correctly then this means mostly adapting the interface?
Parent @jj4211@lemmy.world
It’s certainly a use case that LLM has a decent shot at. Of course, having said that I gave it a spin with Gemini 3 and it just hallucinated a bunch of crap that doesn’t exist instead of properly iden
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
I tried using Gemini 3 for OpenSCAD, and it couldn’t slice a solid properly to save its life, I gave up on it after about 6 attempts to put a 3:12 slope shed roof on four walls. Same job in Opus 4.5 and I’ve got a very nicely styled 600 square foot floor plan with radiused 3D concrete printed walls, windows, doors, shed roof with 1’ overhang, and a python script that translates the .scad to a good looking .svg 2D floorplan. I’m sure Gemini 3 is good for other things, but Opus 4.5 makes it look infantile in 3D modeling.
Thread context 4 posts in path
Root @victorz@lemmy.world
What kind of small things have you vibed out that you needed?
Ancestor 2 @utopiah@lemmy.world
FWIW that’s a good question but IMHO the better question is : What kind of small things have you vibed out that you needed that didn’t actually exist or at least you couldn’t find after a 5min search
Parent @jj4211@lemmy.world
So if it can be vibe coded, it’s pretty much certainly already a “thing”, but with some awkwardness. Maybe what you need is a combination of two utilities, maybe the interface is very awkward for your
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
I’ll put it this way: LLMs have been getting pretty good at translation over the past 20 years. Sure, human translators still look down their noses at “automated translations” but, in the real world, an automated translation gets the job done well enough most of the time. LLMs are also pretty good at translating code, say from C++ to Rust. Not million line code bases, but the little concepts they can do pretty well. On a completely different tack, I’ve been pretty happy with LLM generated parsers. Like: I’ve got 1000 log files here, and I want to know how many times these lines appear. You’ve got grep for that. But, write me a utility that finds all occurrences of these lines, reads the time stamps, and then searches for any occurrences of these other lines within +/- 1 minute of the first ones… grep can’t really do that, but a 5 minute vibe coded parser can.
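A five-minute parser of the kind described above might look roughly like this. It is a hypothetical sketch: the timestamp format, the marker strings, and the sample log lines are illustrative assumptions, not anything from the thread; only the ±1 minute correlation window comes from the description.

```python
import re
from datetime import datetime, timedelta

# Hypothetical sketch: find every occurrence of one marker line in a log,
# then report any occurrence of a second marker within +/- 1 minute of it.
TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")

def parse(lines, needle):
    """Return the timestamps of lines containing `needle`."""
    hits = []
    for line in lines:
        m = TS.match(line)
        if m and needle in line:
            hits.append(datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"))
    return hits

def correlate(lines, first, second, window=timedelta(minutes=1)):
    """Map each `first` hit to the `second` hits within +/- window of it."""
    seconds = parse(lines, second)
    return {
        t.isoformat(): [u.isoformat() for u in seconds if abs(u - t) <= window]
        for t in parse(lines, first)
    }

log = [
    "2025-12-09 10:00:00 ERROR disk full",
    "2025-12-09 10:00:30 WARN retry queued",
    "2025-12-09 10:05:00 WARN retry queued",
]
pairs = correlate(log, "ERROR disk full", "retry queued")
```

With the sample lines above, only the 10:00:30 retry falls inside the one-minute window of the error; the 10:05:00 retry is excluded, which is exactly the kind of time-windowed join that a plain grep pipeline cannot express directly.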
Thread context 4 posts in path
Root @utopiah@lemmy.world
Open an issue to explain why it’s not enough for you? If you can make a PR for it that actually implements the things you need? My point to say everything is already out there and perfectly fits your
Ancestor 2 @lepinkainen@lemmy.world
These are the principles I follow: indieweb.org/make_what_you_need indieweb.org/use_what_you_make I don’t have time to argue with FOSS creators to get my stuff in their projects, nor do I have the ene
Parent @mjr@infosec.pub
So the claim is it’s easier to Claudge a whole new app than to make a personal fork of one that works? Sounds unlikely.
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
Depends entirely on the app.
Thread context 4 posts in path
Root @lepinkainen@lemmy.world
What if I can find it but it’s either shit or bloated for my needs?
Ancestor 2 @utopiah@lemmy.world
Open an issue to explain why it’s not enough for you? If you can make a PR for it that actually implements the things you need? My point to say everything is already out there and perfectly fits your
Parent @lepinkainen@lemmy.world
These are the principles I follow: indieweb.org/make_what_you_need indieweb.org/use_what_you_make I don’t have time to argue with FOSS creators to get my stuff in their projects, nor do I have the ene
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
I don’t have time to argue with FOSS creators to get my stuff in their projects So much this. Over the years I have found various issues in FOSS and “done the right thing” submitting patches formatted just so into their own peculiar tracking systems according to all their own peculiar style and traditions, only to have the patches rejected for all kinds of arbitrary reasons - to which I say: “fine, I don’t really want our commercial competitors to have this anyway, I was just trying to be a good citizen in the community. I’ve done my part, you just go on publishing buggy junk - that’s fine.”
Thread context 4 posts in path
Root @utopiah@lemmy.world
FWIW that’s a good question but IMHO the better question is : What kind of small things have you vibed out that you needed that didn’t actually exist or at least you couldn’t find after a 5min search
Ancestor 2 @lepinkainen@lemmy.world
What if I can find it but it’s either shit or bloated for my needs?
Parent @utopiah@lemmy.world
Open an issue to explain why it’s not enough for you? If you can make a PR for it that actually implements the things you need? My point to say everything is already out there and perfectly fits your
Boosted by Technology @technology@lemmy.world
@MangoCats@feddit.it in technology · Dec 09, 2025
There have been some articles published positing that AI coding tools spell the end of FOSS, because everybody is just going to build things independently and no longer needs to share with each other to get things done. I think those articles are short-sighted, and missing the real phenomenon: the FOSS community needs each other now more than ever in order to tame the LLMs into being able to write stories more interesting than “See Spot run,” and the equivalent in software projects.
Thread context (4 posts in path)
Root @utopiah@lemmy.world: FWIW that’s a good question but IMHO the better question is: What kind of small things have you vibed out that you needed that didn’t actually exist or at least you couldn’t find after a 5min search…
Ancestor @victorz@lemmy.world: Since you put such emphasis on “better”: I’d still like to have an answer to the one I posed. Yours would be a reasonable follow-up question if we noticed that their vibed projects are utilities alrea…
Parent @utopiah@lemmy.world: Sure, you’re right, I just worry (maybe needlessly) about people re-inventing the wheel because it’s “easier” than searching without properly understanding the cost of the entire process.
@MangoCats@feddit.it in technology · Dec 09, 2025
> people re-inventing the wheel because it’s “easier” than searching, without properly understanding the cost of the entire process.

A good LLM will do a web search first and copy its answer from there…
Thread context (4 posts in path)
Root @Suffa@lemmy.wtf: AI is really great for small apps. I’ve saved so many hours over weekends that would otherwise be spent coding a small thing I need a few times whereas now I can get an AI to spit it out for me. But a…
Ancestor @victorz@lemmy.world: What kind of small things have you vibed out that you needed?
Parent @utopiah@lemmy.world: FWIW that’s a good question but IMHO the better question is: What kind of small things have you vibed out that you needed that didn’t actually exist or at least you couldn’t find after a 5min search…
@MangoCats@feddit.it in technology · Dec 09, 2025
> making something quick that kind of works is nice… but why even do so in the first place if it’s already out there, maybe maintained but at least tested?

In a sense, this is what LLMs are doing for you: regurgitating stuff that’s already out there. But they are “bright” enough to remix the various bits into custom solutions. There might already be an NWS API access example, and a Waveshare display example, and so on, but there’s no specific example that codes up a local weather display for the time period and parameters you want to see (say, temperature and precipitation every 15 minutes for the next 12 hours at a specific location) on the particular display you have. Oh, and would you rather build that in C++ instead of Python? LLMs are actually pretty good at remixing little stuff like that into things you’re not going to find as ready-made examples to your spec.
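A minimal sketch of that kind of remix, assuming the public api.weather.gov endpoints (the function names here are made up for illustration, and error handling is omitted):

```python
import json
import urllib.request

NWS_POINTS = "https://api.weather.gov/points/{lat:.4f},{lon:.4f}"

def fetch_json(url):
    """Fetch and decode a JSON document (NWS asks for a User-Agent header)."""
    req = urllib.request.Request(url, headers={"User-Agent": "weather-display-demo"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def next_hours(forecast, hours=12):
    """Reduce an NWS hourly-forecast payload to (time, temp, precip%) tuples."""
    periods = forecast["properties"]["periods"][:hours]
    return [(p["startTime"],
             p["temperature"],
             p.get("probabilityOfPrecipitation", {}).get("value"))
            for p in periods]

def local_forecast(lat, lon, hours=12):
    """Look up the forecast grid for a point, then pull its hourly forecast."""
    point = fetch_json(NWS_POINTS.format(lat=lat, lon=lon))
    forecast = fetch_json(point["properties"]["forecastHourly"])
    return next_hours(forecast, hours)
```

The `next_hours` reducer is display-agnostic: only `local_forecast` touches the network, so whatever loop drives the particular panel just consumes the tuples.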
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @Suffa@lemmy.wtf: AI is really great for small apps. I’ve saved so many hours over weekends that would otherwise be spent coding a small thing I need a few times whereas now I can get an AI to spit it out for me. But a…
Parent @victorz@lemmy.world: What kind of small things have you vibed out that you needed?
@MangoCats@feddit.it in technology · Dec 09, 2025
I have a little display on the back of a Raspberry Pi Zero W - the LLM recoded that display software to refresh 5x faster, and it updated the content source from Meteomatics (who just discontinued their free API) to the National Weather Service.
Thread context (4 posts in path)
Root @Agent641@lemmy.world: I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me. Let’s just call it even.
Ancestor @ICastFist@programming.dev: At least you can blame yourself for your own shitty code, which hopefully will never attempt to “accidentally” erase the entire project
Parent @PoliteDudeInTheMood@lemmy.ca: I don’t know how that happens, I regularly use Claude code and it’s constantly reminding me to push to git.
@MangoCats@feddit.it in technology · Dec 09, 2025
As an experiment I asked Claude to manage my git commits: it wrote the messages, kept a log, archived excess documentation, and worked really well for about 2 weeks. Then, as the project got larger, the commit process took longer and longer to execute. I finally pulled the plug when the automated commit process - which had performed flawlessly for dozens of commits and archives - irretrievably lost a batch of work: it messed up the archive step, deleted the work without archiving it first, and didn’t commit it either. AI/LLM workflows are non-deterministic. This means: they make mistakes. If you want something reliable, scalable, repeatable, have the AI write you code that does it deterministically - as a tool, not as a workflow.
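A sketch of the “deterministic tool” idea (the function and paths are hypothetical, not Claude’s actual workflow): archive first, verify the copy, and only then delete, so the failure described above cannot happen silently:

```python
import shutil
from pathlib import Path

def safe_archive(src: Path, archive_dir: Path) -> Path:
    """Copy src into archive_dir, verify the copy, then remove the original.

    Deletion happens only after the archived copy is confirmed byte-for-byte,
    so a botched copy can never lose work the way an ad-hoc agent step might.
    """
    archive_dir.mkdir(parents=True, exist_ok=True)
    dest = archive_dir / src.name
    shutil.copy2(src, dest)
    if dest.read_bytes() != src.read_bytes():
        dest.unlink()                       # discard the bad copy
        raise IOError(f"archive verification failed for {src}")
    src.unlink()                            # safe: verified copy exists
    return dest
```

The same ordering (write, verify, then destroy) is the invariant the non-deterministic workflow violated; encoded as a tool, it holds on every run.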
Thread context (3 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Parent @Agent641@lemmy.world: I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me. Let’s just call it even.
@MangoCats@feddit.it in technology · Dec 09, 2025
> I also cannot understand and debug code written by me.

So much this. I look back at stuff I wrote 10 years ago, shake my head, and console myself that “we were on a really aggressive schedule.” At least in my mind I can do better; in practice the stuff has got to ship eventually, and what ships is almost never what I would call perfect, or even ideal.
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @lepinkainen@lemmy.world: Same thing would happen if they were a non-coder project manager or designer for a team of actual human progress. Stuff done, shipped and working. “But I can’t understand the code 😭”, yes. You were th…
Parent @JcbAzPx@lemmy.world: I think the point is that someone should understand the code. In this case, no one does.
@MangoCats@feddit.it in technology · Dec 09, 2025
> I think the point is that someone should understand the code. In this case, no one does.

Big corporations have been pushing to outsource software development for decades; how is this any different? Can you always recall your outsourced development team for another round of maintenance? An LLM may actually be more reliable and accessible in the future.
Thread context (4 posts in path)
Root @BarneyPiccolo@lemmy.today: I don’t know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I’d tr…
Ancestor @Evotech@lemmy.world: You are in a way correct. If you keep sending the context of the “conversation” it will reinforce its previous implementation. But once you start a new conversation “meaning you fint give any previous…
Parent @BarneyPiccolo@lemmy.today: Maybe the solution is to keep sending the code through various AI requests, until it either gets polished up, or gains sentience, and destroys the world. 50-50 chance. This stuff ALWAYS ends up destro…
@MangoCats@feddit.it in technology · Dec 09, 2025
> This stuff ALWAYS ends up destroying the world on TV.

TV is also full of infinite free energy sources. In the real world, warp drive may be possible - you just need to annihilate the mass of Jupiter with an equivalent mass of antimatter to get the energy necessary to create a warp bubble that moves a small ship from the orbit of Pluto to a location a few light years away. On TV they do it every week.
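As a back-of-envelope check on the scale involved (round published figures for Jupiter’s mass and the speed of light; the warp-bubble requirement itself is the comment’s hypothetical):

```python
# Energy from annihilating Jupiter with an equal mass of antimatter:
# E = (m_matter + m_antimatter) * c^2 = 2 * M * c^2
C = 2.998e8           # speed of light, m/s
M_JUPITER = 1.898e27  # mass of Jupiter, kg

energy_joules = 2 * M_JUPITER * C**2   # ~3.4e44 J

# For scale: the Sun radiates about 3.8e26 W, so this is roughly
# the Sun's total output over tens of billions of years.
years_of_sunlight = energy_joules / (3.8e26 * 3.156e7)
```

About 3.4 × 10⁴⁴ joules - on the order of the Sun’s entire output for longer than the current age of the universe, which is rather more than a weekly-episode budget.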
Thread context (4 posts in path)
Root @BarneyPiccolo@lemmy.today: I don’t know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I’d tr…
Ancestor @Evotech@lemmy.world: You are in a way correct. If you keep sending the context of the “conversation” it will reinforce its previous implementation. But once you start a new conversation “meaning you fint give any previous…
Parent @TheBlackLounge@lemmy.zip: Doesn’t work. Any semi complex problem with multiple constraints and your team of AIs keeps running circles. Very frustrating if you know it can be done. But what if you’re a “fractional CTO” and you…
@MangoCats@feddit.it in technology · Dec 09, 2025
> your team of AIs keeps running circles

Depending on your team of human developers (and managers), they will do the same thing. Granted, most LLMs have a rather extreme sycophancy problem, but humans often do too. We haven’t yet gotten to AIs who will tell you that what you ask is impossible - though if it’s a problem like under- or over-constrained geometry or equations, the better ones will tell you. For difficult programming tasks I have definitely had the AIs bark up all the wrong trees trying to fix something, until I gave them specific direction on where to look for a fix (very much like my experiences with some human developers over the years).

I had a specific task that I was developing in one model; it was a hard problem, but I was making progress and could see the solution was near. Then I switched to a different model, which came back and told me “this is impossible, you’re doing it wrong, you must give up this approach” - right up until I showed it the results I had achieved to date with the other model. Then that same model which told me it was impossible helped me finish the job completely and correctly. A lot like people.
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @Evotech@lemmy.world: Just ask the ai to make the change?
Parent @BarneyPiccolo@lemmy.today: I don’t know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I’d tr…
@MangoCats@feddit.it in technology · Dec 09, 2025
> AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work.

There’s an LLM concept/parameter called “temperature” that determines, basically, how random the answer is. As deployed, LLMs like Claude Sonnet or Opus run at a temperature that won’t give the same answer every time. When you combine this with feedback loops that point out failures (like compilers that tell the LLM when its code doesn’t compile), the LLM can (and does) do the old Beckett: try, fail, try again, fail again, fail better next time - and usually it reaches a solution that passes all the tests it is aware of. The problem is that with a context window limit of 200,000 tokens, it’s not going to be aware of all the relevant tests in more complex cases.
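Mechanically, temperature rescales the model’s scores before sampling; a toy sketch with made-up logits (not any vendor’s actual implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from logits after temperature scaling.

    Temperature near 0 approaches greedy argmax (same answer every time);
    higher temperatures flatten the distribution (more varied answers).
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)                          # subtract max for stability
    weights = [math.exp(x - peak) for x in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

This is why rerunning the same prompt can yield a different (sometimes better) answer, and why a try/fail/retry loop has new material to work with on each pass.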
Thread context (4 posts in path)
Root @theneverfox@pawb.social: AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time
Ancestor @lepinkainen@lemmy.world: I’ve made full-ass changes on existing codebases with Claude. It’s a skill you can learn, pretty close to how you’d work with actual humans
Parent @TheBlackLounge@lemmy.zip: What full ass changes have you made that can’t be done better with a refactoring tool? I believe Claude will accept the task. I’ve been fixing edge cases in a vibe colleague’s full-ass change all mont…
@MangoCats@feddit.it in technology · Dec 09, 2025
True that LLMs will accept almost any task, whether they should or not. True that their solutions aren’t 100% perfect every time. Whether it’s faster to use them or not depends, I think, a lot on what’s being done, and on what alternative set of developers you’re comparing them with. What I have seen across the past year is that the number of cases where LLM-based coding tools are faster than traditional developers has been increasing, rather dramatically. I called them near useless this time last year.
Thread context (4 posts in path)
Root @Evotech@lemmy.world: Just ask the ai to make the change?
Ancestor @theneverfox@pawb.social: AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time
Parent @lepinkainen@lemmy.world: I’ve made full-ass changes on existing codebases with Claude. It’s a skill you can learn, pretty close to how you’d work with actual humans
@MangoCats@feddit.it in technology · Dec 09, 2025
> pretty close to how you’d work with actual humans

That has been my experience as well. It’s like working with humans who have extremely fast splinter skills: things they can rip through in 10 minutes that might take you days, weeks even. But then it also takes them 5-10 minutes to do some things that you might accomplish in 20 seconds. And, like people, they’re not 100% reliable or accurate, so you need to use all those same processes we have developed to help people catch their mistakes.
Thread context (4 posts in path)
Root @theneverfox@pawb.social: AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time
Ancestor @Evotech@lemmy.world: I’m just not following the mindset of “get ai to code your whole program” and then have real people maintain it? Sounds counter productive. I think you need to make your code for an AI to maintain. Use…
Parent @theneverfox@pawb.social: I don’t think we should be having the AI write the program in the first place. I think we’re barreling towards a place where remotely complicated software becomes a lost technology. I don’t mind if AI…
@MangoCats@feddit.it in technology · Dec 09, 2025
> I think we’re barreling towards a place where remotely complicated software becomes a lost technology

I think complicated software has been an art more than a science; for the past 30 years we have been developing formal processes to make it more of a procedural pursuit, but the art is still very much in there. If AI-authored software is going to reach any level of valuable complexity, it’s going to get there with the best of our current formal processes, plus some more that are being (rapidly) developed specifically for LLM-based tools.

> But eventually you will hit a limit. You’ll need to do something…

And how do we surpass those limits? Generally: research. For the past 20+ years, where do we do most of that research? On the internet. And where were the LLMs trained, and what are they relatively good at doing quickly? Internet research.

> At the end of the day, coding is a skill. If no one is building the required experience to work with complex systems

So is semiconductor design, and the application of transistors to implement logic gates, etc. We still have people who can do that - not very many, but enough. Not many people work in assembly language anymore, either…
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @Evotech@lemmy.world: Just ask the ai to make the change?
Parent @theneverfox@pawb.social: AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time
@MangoCats@feddit.it in technology · Dec 09, 2025
> It’s good at writing it, ideally 50-250 lines at a time

I find Claude Sonnet 4.5 to be good up to about 800 lines at a chunk. If you structure your project into 800-ish line chunks with well-defined interfaces, you can get 8 to 10 chunks working cooperatively pretty easily. Beyond about 2,000 lines in a chunk, if it’s not well defined, yeah - the hallucinations start to become seriously problematic. The new Opus 4.5 may have a higher complexity limit; I haven’t worked with it enough to characterize that, though I do find Opus 4.5 much slower than Sonnet 4.5 on similar problems.
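A throwaway sketch of how one might police that chunk-size budget (the 800-line threshold is just the rule of thumb above, nothing official):

```python
from pathlib import Path

SOFT_LIMIT = 800   # chunk size an LLM reportedly handles comfortably

def audit_chunks(root, pattern="*.py"):
    """Return {path: line_count} for source files exceeding SOFT_LIMIT."""
    oversized = {}
    for path in Path(root).rglob(pattern):
        count = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        if count > SOFT_LIMIT:
            oversized[str(path)] = count
    return oversized
```

Run over a project tree, it flags the modules worth splitting behind a well-defined interface before handing them to the model.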
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @dejected_warp_core@lemmy.world: To quote your quote: I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and re…
Parent @fuck_u_spez_in_particular@lemmy.world: The problem though (with AI compared to humans): The human team learns, i.e. at some point they probably know what the mistake was and avoids doing it again. AI instead of humans: well maybe the next…
@MangoCats@feddit.it in technology · Dec 09, 2025
Humans likely get slower with a larger code-base, but they (usually) don’t arrive at a point where they can’t progress any further. Notable exceptions like: peimpact.com/the-denver-international-airport-aut…
Thread context (3 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Parent @minorkeys@lemmy.world: It looks like a rigid design philosophy that must completely rebuild for any change. If the speed of production becomes fast enough, and the cost low enough, iterating the entire program for every cha…
@MangoCats@feddit.it in technology · Dec 09, 2025
I frequently feel that urge to rebuild from the ground (specifications) up, to remove the “old bad code” from the context window and get back to the “pure” specification as the source of truth. That only works up to a certain level of complexity. When it works, it can be a very fast way to “fix” a batch of issues; but when the problem/solution is big enough, the new implementation will have new issues that may take longer to identify than just grinding through the existing ones. A devil-whose-face-you-know kind of choice.
Thread context (4 posts in path)
Root @phed@lemmy.ml: I do a lot with AI but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no, it doesn’t remember things from 3 messages ago when it should. You ha…
Ancestor @echodot@feddit.uk: There’s no point telling it not to do x because as soon as you mention it x it goes into its context window. It has no filter, it’s like if you had no choice in your actions, and just had to do every…
Parent @kahnclusions@lemmy.ca: I’ve noticed this too, it’s hilarious(ly bad). Especially with image generation. “Draw a picture of an elf.” Generates images of elves that all have one weird earring. “Draw a picture of an elf withou…
@MangoCats@feddit.it in technology · Dec 09, 2025
I find this kind of performance varies from one model to the next. I have definitely experienced the bad-image-getting-worse phenomenon - especially with MS Copilot - but different models will perform differently.
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @phed@lemmy.ml: I do a lot with AI but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no, it doesn’t remember things from 3 messages ago when it should. You ha…
Parent @echodot@feddit.uk: There’s no point telling it not to do x because as soon as you mention it x it goes into its context window. It has no filter, it’s like if you had no choice in your actions, and just had to do every…
@MangoCats@feddit.it in technology · Dec 09, 2025
> There’s no point telling it not to do x, because as soon as you mention x it goes into its context window.

Reminds me of the Sonny Bono high-speed downhill skiing problem: don’t fixate on that tree - if you fixate on the tree, you’re going to hit the tree; fixate on the open space to the side of it. LLMs do “understand” words like “not” and “don’t,” but they also seem to work better with positive examples than negative ones.
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @phed@lemmy.ml: I do a lot with AI but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no, it doesn’t remember things from 3 messages ago when it should. You ha…
Parent @kahnclusions@lemmy.ca: Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it…
@MangoCats@feddit.it in technology · Dec 09, 2025
> constantly fail to even compile because, for example, they mix usages of different SDK versions

Try an agentic tool like Claude Code - it closes the loop by testing the compilation for you and fixing its mistakes (like human programmers do) before bothering you for another prompt. I was where you are 6 months ago; the tools have improved dramatically since then.

From TFS:

> I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

That sounds like a “fractional CTO problem” to me. (IMO a fractional CTO is a guy who convinces several small companies that he’s a brilliant tech genius who will help them make their important tech decisions, without actually paying full-time attention to any of them. Actual tech experience: optional.) If you have lost confidence in your ability to modify your own creation, that’s not a tools problem - you are the tool; that’s a you problem. It doesn’t matter if you’re using an LLM coding tool, a team of human developers, or a pack of monkeys to code your applications: if you don’t document, test, and formally develop an “understanding” of your product that not only you but all stakeholders can grasp to the extent they need to, you’re just letting the development run wild - lacking formal software development process maturity. LLMs can do that faster than a pack of monkeys, or a bunch of kids you hired off Craigslist, but it’s the exact same problem no matter how you slice it.
Thread context (4 posts in path)
Root @AutistoMephisto@lemmy.world (post not shown)
Ancestor @dsilverz@calckey.world: @AutistoMephisto@lemmy.world @technology@lemmy.world I used to deal with programming since I was 9 y.o., with my professional career in DevOps starting several years later, in 2013. I dealt with lots…
Parent @Munkisquisher@lemmy.nz: When the cost to generate new code has become so cheap, and the cost of devs maintaining code they didn’t write gets higher. There’s a huge shift happening to just throw out the code and regenerate it…
@MangoCats@feddit.it in technology · Dec 07, 2025
> where the massive decline in code quality catches up with big projects.

That’s going to depend, as always, on how the projects are managed. LLMs don’t “get it right” on the first pass - ever, in my experience, at least for anything of non-trivial complexity. But their power is that they’re right more than half of the time, AND they can be told when they are wrong (whether by a compiler, a syntax nanny tool, or a human tester), AND they can then try again, and again, as long as necessary to reach a final state of “right” as defined by their operators. The trick, as always, is getting the managers to let the developers keep polishing the AI’s (or human developer’s) output until it’s actually good enough to ship.

The question is: which will take longer, and which will require more developer “head count” during that time, to get it right - or at least good enough for business? The answers depend on the particular scenario. In some places, for some applications, current state-of-the-art AI can deliver the “good enough” product we have always shipped, with lower developer head count and/or shorter delivery cycles. In other organizations, with other product types, it will certainly take longer / more budget. However, the needle is off 0: there are some places where it really does help, a lot. The other thing I have seen over the past 12 months: it’s improving rapidly. Will that needle ever pass 90% of all software development benefitting from LLM agents? I doubt it. I see it passing 50% in the near future - but not quite yet.
Thread context (4 posts in path)
Root @kionay@lemmy.world: if someone comes up with an alternative way to use a bunch of that infrastructure to make money, I bet they could get a lot of business when the AI bubble pops and suddenly these datacenters are despe…
Ancestor @MangoCats@feddit.it: After .com popped, all the money ran to install fiber data infrastructure - a lot of installs put in more capacity than they projected using for 100 years (glass fibers are cheap, digging trenches for…
Parent @partial_accumen@lemmy.world: The promise of “fiber to the home” is still mostly unrealized, but those trunk lines are out there with oodles of “dark fiber” ready to carry data… someday. Counterintuitively, I’m seeing “fiber to th…
@MangoCats@feddit.it in technology · Dec 07, 2025
Yeah, it’s not “nowhere” - but it’s really far from “everywhere” considering we’ve been rolling it out for 25 years now. I think you’re right: glass is cheaper than copper these days, and if they’ve got to repair/replace the copper it’s probably cheaper to just run the glass. They put a line down the main road 1/4 mile from our home last year (suburban area in a 1M pop city), and lots of people who live on that main road have gotten fiber to the home service, but they’re not interested in running the extra 1500 feet to reach us yet. I’d guess in our city of 1M, maybe 200,000 have potential fiber to the home service if they want it, the rest of us are stuck with re-heated cable TV co-ax for our broadband.
Thread context (4 posts in path)
Root @kionay@lemmy.world: if someone comes up with an alternative way to use a bunch of that infrastructure to make money, I bet they could get a lot of business when the AI bubble pops and suddenly these datacenters are despe…
Ancestor @MangoCats@feddit.it: After .com popped, all the money ran to install fiber data infrastructure - a lot of installs put in more capacity than they projected using for 100 years (glass fibers are cheap, digging trenches for…
Parent @echodot@feddit.uk: The promise of “fiber to the home” is still mostly unrealized. Really? The US is really unsophisticated in certain key areas that you wouldn’t expect.
@MangoCats@feddit.it in technology · Dec 07, 2025
They are starting to roll it out in fits and starts in the major metro areas at least, but yeah, 20 years late and nowhere near as universally as promised when our service providers took all those government grants and then didn’t deliver, IMO.
Thread context (3 posts in path)
Root: post 39647207 on lemmy.world (post not shown)
Parent @kionay@lemmy.world: if someone comes up with an alternative way to use a bunch of that infrastructure to make money, I bet they could get a lot of business when the AI bubble pops and suddenly these datacenters are despe…
@MangoCats@feddit.it in technology · Dec 03, 2025
After .com popped, all the money ran to install fiber data infrastructure - a lot of installs put in more capacity than they projected using for 100 years (glass fibers are cheap, digging trenches for them is expensive). The promise of “fiber to the home” is still mostly unrealized, but those trunk lines are out there with oodles of “dark fiber” ready to carry data… someday.
Thread context (3 posts in path)
Root: post 39647207 on lemmy.world (post not shown)
Parent @filister@lemmy.world: I believe most of the companies are doing it to inflate their share prices.
@MangoCats@feddit.it in technology · Dec 03, 2025
It’s not even about money or financials that add up on balance sheets. It’s about market share, political power. When you’re Too Big To Fail, balance sheets cease to matter.