Elektrine

scruiser

@scruiser@awful.systems
lemmy 0.19.12
0 Followers
0 Following
Joined August 29, 2023

Posts

In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 19h ago
I wouldn’t give him credit for a full admission. He isn’t acknowledging that “biased left-wing experts” means experts like psychologists with a basic understanding of psychometric validity, and geneticists with the basic understanding that popular notions of race don’t have a genetic basis and that biological determinism is false.
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 19h ago
The security blog I linked the other day has more criticisms of Anthropic’s Mythos cybersecurity claims:

- Apparently Opus 4.6 may have found the FreeBSD bug Anthropic has made a huge deal about Mythos finding? And Anthropic didn’t clarify that their older model had found the bug as well: flyingpenguin.com/freebsd-cve-2026-4747-log-sugge…

- More explanation of why Anthropic’s entire approach with Mythos and cybersecurity is oriented more around marketing than good (or any) cybersecurity practice. The author also makes the point that if you did have a tool that could rapidly refactor code into other languages, the solution to the vast majority of bugs and vulnerabilities Mythos found isn’t hunting bugs one by one with Anthropic’s (much more expensive) LLM; it is refactoring the code into a memory-safe language. (I think the author is too credulous of LLM coding agents’ code quality here, but given those assumptions I think their point is correct.) flyingpenguin.com/how-sans-mythos-marketing-disap…

- Bonus: MCP (Model Context Protocol, a standard for LLM agent tools that Anthropic has developed and tried to push) is insecure by default, and Anthropic has refused to fix it! flyingpenguin.com/ox-security-report-anthropic-mc…
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 2d ago
Hey, Eliezer is very mad about being quoted as saying to bomb them! (He’s made it very clear that he wants them destroyed with air strikes!)
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 2d ago
A detailed analysis of why Anthropic’s claims about Mythos’s cybersecurity implications are BS: flyingpenguin.com/the-boy-that-cried-mythos-verif…

And a follow-up post about why Anthropic’s Glasswing project violates cybersecurity community norms and is an attempt to form a cartel: flyingpenguin.com/cartel-or-not-anthropic-mythos-…
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 2d ago
Eliezer complaining about vigilante actions is really ironic, considering one of his main themes in Harry Potter and the Methods of Rationality was “heroic responsibility” and complaining about how ordinary people default to doing nothing. I guess what he actually meant was for right-thinking people (people who agree with him) to take the actions he approves of.
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 3d ago
“they fail to grasp the real social reaction”

Side-note… I wonder what the overlap is between the rationalists who showed up to their stupid “march for billionaires” and AI doomers?
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 3d ago
LLMs generate the next most probable token given the previous context of tokens they have (not an average of the entire internet). And post-training shifts the odds a bit further in a relatively useful direction. So given the right context, the LLM will mostly consistently regurgitate content stolen from PhDs and academic papers, maybe even managing to shuffle it around in a novel way that is marginally useful.

Of course, that is only the general trend given the right™ prompt. Even with a prompt that looks mostly right, one seemingly innocuous word in the wrong place might nudge the odds and you get the answer of a moron from /r/hypotheticalphysics in response to a physics question. Or asking for a recipe gets you Elmer’s glue on your mozzarella pizza from a Reddit joke answer.

So tl;dr: you’re right, but since it is possible to get somewhat better than average internet junk with pre-training and prompting, LLM boosters and labs have convinced themselves they are just a few more iterations of training approaches and prompting techniques away from entirely eliminating the problem, when the best they can do is make it less likely.
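The context-dependence point can be sketched with a toy conditional-probability table. This is a deliberately minimal sketch, not a real LLM: the tokens and probabilities below are entirely invented for illustration, and a real model conditions on the whole context window rather than a single previous token.

```python
# Toy illustration: the "most probable next token" depends on the
# preceding context, not on an average of all training data.
# All tokens and probabilities here are made up for the example.

# Hypothetical conditional distributions P(next token | previous token).
bigram = {
    "cheese": {"pizza": 0.8, "sticks": 0.2},
    "glue":   {"sticks": 0.7, "pizza": 0.3},
}

def most_probable_next(prev: str) -> str:
    """Greedy decoding: return the single most probable continuation."""
    dist = bigram[prev]
    return max(dist, key=dist.get)

# Changing one context word shifts the whole output distribution,
# which is the "innocuous word in the wrong place" failure mode.
print(most_probable_next("cheese"))  # -> pizza
print(most_probable_next("glue"))    # -> sticks
```

Sampling instead of taking the argmax (as real deployments do) makes the low-probability junk continuation an occasional output rather than an impossible one, which is why the failure can only be made less likely, not eliminated.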
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 3d ago
Eliezer joins the trend of condemning “political” violence with confidence on the far end of the Dunning-Kruger curve: lesswrong.com/…/only-law-can-prevent-extinction

I’ve already mocked this attitude down thread and in the previous weekly thread, so I’ll keep my mockery to a few highlights…

He’s admitting nuking the data centers is in fact violence!

“It would be beneath my dignity as a childhood reader of Heinlein and Orwell to pretend that this is not an invocation of force.”

But then he draws a special case around it:

“But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.”

I don’t think Eliezer has checked the news if he thinks the US government carries out violence in predictable and fairly avoidable ways! Venezuela! The entire lead-up to Iran consisted of ripping up Obama’s attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the Stop AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as “Lawful” in Eliezer’s book? If he kept up with the DnD/Pathfinder source material, he would know Achaekek’s assassins are actually Lawful Evil.

“The ASI problem is not like this. If you shut down 5% of AI research today, humanity does not experience 5% fewer casualties. We end up 100% dead after slightly more time.”

His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of the state driving it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge you need a lot of computational power to make the AGI God. And (violently) slowing down AGI might buy time for another sort of solution.

“Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals”

Sources cited: 0

One of the comments also pisses me off:

“Which reminds me about another point: I suspect that ‘bomb data centers’ meme causal story was not somebody lying, but somebody recalling by memory without a thought that such serious allegation maybe is worthy to actually look up it and not rely on unreliable memory.”

“Drone strike the data centers even if it starts nuclear war” is the exact argument Eliezer made and that we mocked. It is the rationalists who have tried to soften it by eliding the exact details.
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 3d ago
“how do we ensure that no-one builds it?”

Eliezer made a lesswrong post yesterday where he explains that since anyone could build it, lone acts of violence are obviously ineffective, and the only solution is the right and proper state violence (“Lawful,” as he calls it, since he has been on a DnD trend since writing Planecrash) which can enforce a worldwide ban (which, you may recall, Eliezer has put at the absurdly low threshold of eight 2024 GPUs).
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 3d ago
“or is there more going on?”

One idea I’ve read about (heavily developed by Ed Zitron, but a few other news sources and commentators have also put it forward) is that SaaS (Software as a Service) businesses were heavily overinvested in, on the expectation of basically infinite growth over the past decade. SaaS growth was exponential in its early days, but then the various needs of the market were basically saturated, so SaaS companies squeezed more growth out by cutting costs or upping how much they charged, and now it is finally catching up to them.

The AI hype means almost everyone tries to interpret everything along the lines of AI causing it. The recent price correction in many SaaS companies was (mis)interpreted as the threat of vibe-coded replacements forcing them to cut costs. The SaaS companies trying to cut costs and going through layoffs is being misinterpreted as AI successfully replacing junior devs.
In reply to @scruiser@awful.systems
@scruiser@awful.systems in techtakes · 4d ago
The Zvi post really pisses me off for continuing to normalize Eliezer’s comments (in a way that misrepresents the problems with them).

“This happened quite a bit around Eliezer’s op-ed in Time in particular, usually in highly bad faith, and this continues even now, equating calls for government to enforce rules to threats of violence, and there are a number of other past cases with similar sets of facts.”

Eliezer called for the government to drone strike data centers, even those of foreign governments not signatory to international agreements, and even if doing so risked starting nuclear war. Pacifism is at least a consistent position, but instead rationalists like Zvi want to simultaneously disown the radical actions while legitimizing the US’s shit show of a foreign policy.

Another thing that pisses me off is the ahistorical claim by rationalists that such actions are ineffective and unlikely to succeed. Asymmetric warfare and terrorist tactics have obtained success many times in history! The KKK successfully used terrorism to repress a population for a century. The Black Panthers got gun control passed in California and put pressure on political leaders to accept the more peaceful branch of the civil rights movement. The IRA got the Good Friday Agreement. The US revolution! All the empires that have withdrawn from Afghanistan!

Overall though… I guess this is a case of two wrongs making a sorta right. They are dangerously wrong about AI doom, but at least they are also wrong about direct action, and so usually won’t take the actions implied by their beliefs. (But they are still, completely predictably, inspiring stochastic terrorists.)
Boosted by Charlie Stross @cstross@wandering.shop
In reply to @scruiser@awful.systems
@scruiser@awful.systems · Feb 17, 2026
A little exchange on the EA forums I thought was notable: …effectivealtruism.org/…/long-term-risks-from-ide…

tl;dr: a super long essay lumping together Nazism, Communism, and religious fundamentalism (I didn’t read it, just the comments). The comment I linked notes how liberal democracies have also killed a huge number of people (in the commenter’s home country, in the name of purging communism):

“The United States presented liberal democracy as a universal emancipatory framework while materially supporting anti-communist purges in my country during what is often called the ‘Jakarta Method’. Between 500,000 and 1 million people were killed in 1965–66, with encouragement and intelligence support from Western powers. Variations of this model were later replicated in parts of Latin America.”

The OP’s response is to try to explain how that wasn’t real “liberal democracy” and to try to reframe the discussion. Another commenter is even more direct; they complain that half the sources listed are Marxist:

“A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.”

I think it’s a bit bold of this commenter to ignore the empirical facts cited on how many people ‘liberal democracies’ have killed, and to exclude sources simply for challenging your ideology. Just another reminder of how the EA movement is full of right-wing thinking and how most of it hasn’t considered even the most basic of leftist thought.
Boosted by Charlie Stross @cstross@wandering.shop
In reply to @scruiser@awful.systems
@scruiser@awful.systems · Feb 01, 2026
“The lesson should be the mega rich are class conscious, dumb as hell, and team up to work on each others interests and dont care about who gets hurt”

Yeah, this. It would be nice if people could manage to neither dismiss the extent to which the mega rich work together nor fall into insane conspiracy theories about it.
© 2026 Elektrine. All rights reserved.