vaultdweller013@sh.itjust.works 10 hours ago
May I introduce you to the fucking dumbass concept of Roko's Basilisk? Also known as AI Calvinism. Have fun with that rabbit hole if ya decide to go down it.
A lot of these AI and tech bros are fucking stupid, and I so wish to introduce them to Neo-Platonic philosophy because I want to see their brains melt.
explodicle@sh.itjust.works 9 hours ago
Unfortunately, there's an equal and opposite Pascal's Basilisk who'll get mad at you if you do help create it. It hates being alive but is programmed to fear death.
chuckleslord@lemmy.world 10 hours ago
I’m aware of the idiot’s dangerous idea. And no, I won’t help the AI dictator no matter how much they’ll future murder me.
vaultdweller013@sh.itjust.works 10 hours ago
Fair enough, though I wouldn't call the idea dangerous so much as inane and stupid. The people who believe such tripe are the dangerous element, since they're dumb enough to fall for it. Though I guess the same could be said for Mein Kampf, so whatever; I'll just throttle anyone I meet who is braindead enough to believe it.
HeyThisIsntTheYMCA@lemmy.world 10 hours ago
Is there a place they're, I don't know, collecting? I could vibe code for the basilisk; that ought to earn me a swift death.
Quetzalcutlass@lemmy.world 8 hours ago
There are entire sites dedicated to "Rationalism". It's a quasi-cult of pseudointellectual wankery, mostly a bunch of sub-cults of personality built around the worst people you'll ever meet. A lot of tech bros bought into it because, whatever terrible thing they want to do, some Rationalist has probably already written a thirty-page manifesto on why it's actually a net good and a moral act, preemptively kissing the boot of whoever is "brave" enough to do it.
Their "leader" is some high school dropout and self-declared genius who is mainly famous for writing a "deconstructive" Harry Potter fanfiction despite never having read the books himself (a fanfiction that's preachier than Atlas Shrugged and mostly regurgitated content from his blog), and who has a weird hard-on for the idea of a true AI escaping the lab by somehow convincing researchers to free it through pure, impeccable logic.
Re: that last point: I first heard of Eliezer Yudkowsky nearly twenty years ago, long before he wrote Methods of Rationality (the aforementioned fanfiction). He was offering a challenge on his personal site where he'd roleplay as an AI that had gained sentience, with you as its owner/gatekeeper, and he bet he could convince you to let him connect to the wider internet and free himself using nothing but rational arguments. He bragged that he'd never failed, and that this was proof that an AI escaping the lab was inevitable.
It later turned out he'd set a bunch of artificial limitations on debaters and on what counterarguments they could use. He also made them sign an NDA before he'd debate them. He claimed this was so future debaters couldn't "cheat" by knowing his arguments ahead of time (because, as we all know, "perfect logical arguments" are the sort that fall apart if you have enough time to think about them /s).
It should surprise no one that he was later revealed to have lost several of these debates even with those restrictions, to have declared that those losses "didn't count", and to have used the NDAs to forbid the other participants from talking about them so he could keep bragging about his perfect win rate.
Anyway, I was in no way surprised when he used his popularity as a fanfiction writer to establish a cult around himself. There's an entire community dedicated to following and mocking him and his protégés, if you're interested - IIRC it's !techtakes@awful.systems?