In reply to
Mathaetaes
@mathaetaes@infosec.exchange
infosec.exchange
·
Apr 11, 2026
@glyph I’m not an AI doomer and I pretty much agree with your whole thread, but something to consider: when I think of AI taking over humanity, I see The Matrix more than Terminator. A hypothetical superintelligent AI wouldn’t necessarily need complete self-replication ability… it just needs to influence humans enough to have them do the parts it can’t.
If you think about it, we’ve been building tech designed specifically to manipulate human behavior since the advent of social media… and it’s effective. If humanity were any good at protecting itself or organizing for the greater good, the perverse reward systems of capitalism would have been reined in long before US oligarchs like Musk and Zuck could amass their power.
In a hypothetical world where machines rule, the more likely scenario is a majority of humans self-oppressing because they’ve been manipulated into it by adjustments to the algorithms that feed them the information they use to construct their sense of reality. We become part of the system.
The only real difference between that world and today is that today it’s a handful of billionaires controlling the algorithms. Replace Zuck and a few others with a sufficiently capable AI, and is it really so unbelievable that society would just keep cranking out more machines despite a slow degradation in quality of life?
Anyway - great thread and a fun topic to kick around. Thanks for posting it.