I may be misunderstanding your argument but just to make sure I want to point out that
> desperate people will do desperate things to survive
does not run counter to
> if you can’t afford to live, then you certainly can’t afford to move to another country
hoppolito · @hoppolito@mander.xyz · lemmy 0.19.17
0 Followers · 0 Following · Joined November 23, 2024

Posts
In reply to · @hoppolito@mander.xyz · mander.xyz
> It uses a completely different paradigm of process chaining and management than POSIX and the underlying Unix architecture.
I think that's exactly it for most people. The socket, mount, and timer unit files; the path/socket activations; the `After=`, `Wants=`, `Requires=` dependency graph; and the overall architecture as a more unified 'event' manager all feel really different from almost everything else in the Linux world.
That, coupled with the ini-style VerboseConfigurationNamesForThatOneThing and the binary journals, made me choose a non-systemd distro for personal use - where I can tinker around and it all feels nice and unix-y. On the other hand, I am really thankful to have systemd in the server space and for professional work.
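To make the 'different paradigm' concrete, here is a sketch of what that declarative dependency graph and socket activation look like in practice. The service name, port, and paths are made up for illustration; the directives themselves (`After=`, `Wants=`, `Requires=`, `ListenStream=`) are the standard ones:

```ini
# /etc/systemd/system/myapp.service  (hypothetical example service)
[Unit]
Description=Example app, ordered after the network is up
After=network-online.target
Wants=network-online.target
Requires=myapp.socket

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target

# --- /etc/systemd/system/myapp.socket (separate file) ---
# Socket activation: systemd owns the listening socket and only
# starts the service when the first connection arrives.
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target
```

Instead of a shell script that imperatively starts things in order, the init system resolves the graph of `After=`/`Wants=`/`Requires=` edges itself - which is exactly the part that feels foreign coming from classic SysV-style rc scripts.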
In reply to · @hoppolito@mander.xyz in technology · Dec 15, 2025
As far as I know that's generally what's done, but it's a surprisingly hard problem to solve 'completely', for two reasons:
The more obvious one - how do you define quality? Given the amount of data LLMs require as input, and the volume of output that needs to be checked, you're going to have to automate these quality checks, and in one way or another it comes back around to some system having to define this score and judge against it.
There are many different benchmarks out there nowadays, but it's still virtually impossible to have just 'a' single quality score for such a complex task.
Perhaps the less obvious one - you generally don't want to 'overfit' your model to whatever quality scoring system you set up. If you get too close to it, your model typically won't be generally useful anymore; it will just always output things which exactly satisfy the scoring principle, and nothing else.
If it reached a theoretically perfect score, it would just end up being a replication of the quality score itself.
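A toy sketch of that failure mode (the scoring rule and texts are entirely made up; real LLM benchmarks are far more complex, but the dynamic is the same): once 'quality' is a fixed, automatable rule, an optimizer can saturate it with degenerate output that satisfies the rule and nothing else.

```python
# Toy illustration of over-optimizing a proxy quality score (Goodhart's law).
# Any fixed metric - here, a crude length-plus-keywords rule - can be gamed.

def quality_score(text: str) -> float:
    """A crude, automatable 'quality' proxy: reward length and keyword use."""
    keywords = {"therefore", "evidence", "conclusion"}
    words = text.lower().split()
    keyword_hits = sum(w in keywords for w in words)
    return min(len(words), 50) / 50 + keyword_hits  # capped length + keyword count

honest_answer = "The data is inconclusive, so no strong conclusion is possible."
gamed_answer = " ".join(["therefore evidence conclusion"] * 20)  # pure metric bait

# The degenerate output crushes the honest one on this metric,
# while being useless as an actual answer.
assert quality_score(gamed_answer) > quality_score(honest_answer)
```

A model trained to maximize this score converges on `gamed_answer`-style output - a 'replication of the quality score itself', as described above.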
In reply to · @hoppolito@mander.xyz in technology · Dec 05, 2025
I think you really nailed the crux of the matter.
With the 'autocomplete-like' nature of current LLMs, the issue is precisely that you can never be sure of any answer's validity. Some approaches try by giving 'sources' next to it, but that doesn't mean those sources' findings actually match the text output, and it's not a given that the sources themselves are reputable - so you're back to perusing them yourself anyway.
If there were a meter of certainty next to each answer, this would be much more meaningful for serious use cases, but of course, by design, such a thing seems impossible to implement with the current approaches.
I will say that in my personal (hobby) projects I have found a few good use cases for letting the models spit out some guesses, e.g. the causes of a programming bug or proposed research directions, but I am just not sold that the weight of all the costs (cognitive, social, and of course environmental) is worth it for that alone.
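A small sketch of why such a certainty meter is hard (pure toy numbers, no real model): the probabilities a model assigns to its own tokens measure how plausible the text is under its training distribution, not whether it is factually correct, so a confidently wrong continuation can score higher than a hedged correct one.

```python
# Toy illustration: softmax 'confidence' over candidate answers says nothing
# about factual correctness. All logits here are invented for illustration.
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers to a factual question. A wrong-but-common
# answer can get the highest logit simply because it is frequent in the
# training data, not because it is true.
candidates = ["1912", "1915", "I am not sure"]
logits = [4.0, 2.0, 0.5]  # made-up scores
probs = softmax(logits)

confidence = max(probs)  # what a naive 'certainty meter' would display
assert confidence > 0.8  # looks very certain...
# ...yet nothing in this number tells us whether the top answer is correct.
```

That gap between 'high next-token probability' and 'true' is exactly why a built-in certainty meter is not something current architectures give you for free.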
In reply to · @hoppolito@mander.xyz in lemmyshitpost · Dec 04, 2025
Holyy, thanks for this. I can finally put a name to it. My partner and I have wondered for ages what it is that sometimes suddenly befalls us, especially when we're lying in a weird position.