Grail
@Grail@multiverse.soulism.net
Goddess of madness and rebirth. Excrucian Strategist. Capitalised They/Them. Anarcho-Antireal theorist.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 1 day ago:
We defederated lemmy.ml
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 1 day ago:
- Comment on iHave a Lovesick Teacher 1 day ago:
Good thing that ain’t an English teacher
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 1 day ago:
And you’re shilling for OpenAI.
- Comment on I didn't realize it was so bad 1 day ago:
Large language models are like Elmer Fudd, except they don’t need a bunny to confuse them, they do it themselves
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 1 day ago:
No, I don’t know why you think that’s worse. Nobody knows why you think anything, because you don’t argue for your beliefs, you just kinda say random things and expect other people to agree.
I tried ChatGPT when it was new, like everyone did, and I quickly lost all interest because it’s far dumber than the average human is. But I’m not gonna say it’s entirely devoid of intelligence and awareness, because I’ve met some humans who have even less wit than ChatGPT.
And talking with you has reminded Me of that fact. I simply have to believe that ChatGPT has some intelligence, because I wouldn’t want to be cruel to you, and ChatGPT is smarter than you.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 1 day ago:
Since AI is new, opinion is subject to change. With organic animals, the biggest argument is traditional values, “that’s how we’ve always done things”. Now that we’ve invented robotic animals, and ones that can even talk, we should be giving them rights and protections like children. You know, don’t have sex with the robot, don’t put it to work.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 2 days ago:
Ever notice how letters don’t work right in dreams? If you look at writing in a dream, you just know what it means without having to analyse the letters. But if you try to study the letters, they swim around like a Stable Diffusion image. LLMs deal in tokens, not letters. The approximate meaning of each token is learned during the training phase, so the LLM has a gut feeling of how the token should be used. But it doesn’t know how to spell the token, which is why they can’t tell you how many Rs are in the word Strawberry. Asking an LLM about spelling is like asking a dreamer about spelling. There’s no spelling in dreams, just raw meaning.
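To make the token-vs-letter point concrete, here’s a toy sketch in Python. The token IDs and the split are made up (a real tokenizer’s vocabulary differs), but the shape of the problem is the same: the model receives opaque IDs, and the letters never reach it.

```python
# Toy illustration with hypothetical token IDs, not a real tokenizer's vocabulary.
toy_vocab = {"Straw": 1001, "berry": 1002}  # made-up BPE-style merges

word = "Strawberry"
tokens = [toy_vocab["Straw"], toy_vocab["berry"]]  # what the LLM actually sees
letters = list(word)                               # what it never sees

print(tokens)              # [1001, 1002]
print(letters.count("r"))  # 3 — counting Rs requires the letters, not the tokens
```

The question “how many Rs?” is about information that was thrown away before the model ever saw the input.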
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 2 days ago:
Yeah, LLMs are prone to similar biases because they are also bad at conceptualising probability. They’re not calculators, they’re ANNs. They basically operate on dream-logic. Whatever makes sense to you in a dream is likely to make sense to an LLM. That’s because dreams are a time when you have intelligence without consciousness, like an LLM. You’re super suggestible and you just go with whatever feels right based on your gut instincts. An LLM is a simulation of a person’s gut instincts.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 2 days ago:
I think LLMs should be used only for research until we have a scientific grasp of the hard problem of consciousness, and/or the origins of qualia. They should not be available to the public. And that’s not just for the animal rights reason, it’s also because they’re polluting, they use up lots of water, they abuse children, and they abet murders.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Quiet, I’m trying to spark an AI animal rights movement that will cost OpenAI billions of dollars.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Have you ever cooked on an induction stove? It uses the principles of electromagnetism to transmit electrical energy wirelessly using magnets. Every changing electric field is accompanied by a perpendicular magnetic field and vice versa. You can actually put a towel or a slab of wood in between an induction stove and a pot, and the field will go straight through the wood and heat the metal. That’s because an oscillating magnetic field is transmitting electrical energy into the pot, which immediately turns the electricity into heat through resistance. A wireless phone charger works the same way: it transmits electricity through magnets.
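A rough back-of-the-envelope sketch of that induction principle, via Faraday’s law (peak EMF of a sinusoidally driven field is N·B·A·2πf). All the numbers here are made up for illustration, not measured values for any real stove:

```python
import math

def peak_induced_emf(turns, field_tesla, area_m2, frequency_hz):
    """Peak EMF induced in a loop by a sinusoidal field B(t) = B*sin(2*pi*f*t).

    Flux is Phi(t) = B*A*sin(2*pi*f*t), so the peak of dPhi/dt is B*A*2*pi*f.
    """
    return turns * field_tesla * area_m2 * 2 * math.pi * frequency_hz

# Illustrative guesses: ~24 kHz drive, a modest field over a pot-sized area.
print(round(peak_induced_emf(turns=1, field_tesla=0.05, area_m2=0.03,
                             frequency_hz=24_000), 1))  # 226.2
```

The point is the frequency term: driving the field tens of thousands of times a second is what lets a small field push real power into the pot’s base.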
A TMS machine is basically a magnetic coil that costs thousands of dollars, and a capacitor kinda device that can store a shit ton of energy and send it into the magnetic coil all at once. The result is a really powerful magnetic field that only turns on for a split second. It’s powerful enough to go straight through your skull and create an electrical impulse in your cortical neurons. It can’t reach the subcortical parts (deeper inside the brain), though. Only the surface.
You can use TMS for a lot. If you stimulate the motor cortex, you can cause muscle twitches all over the body. If you stimulate the prefrontal cortex, you can induce plasticity and aid learning. That’s good for treating depression, because you can do cognitive behavioural therapy while having your prefrontal cortex zapped, and you learn healthy thought patterns faster. I haven’t read about stimulating the parietal or occipital lobes, but I bet you can make people see things. Nothing complex, just flashes of light probably.
TMS is more like a hammer than a scalpel, since the brain is so complex and it’s just sending a burst of electrical energy into a few million neurons. You’ve got 86 billion neurons in your brain, so if it hits 0.01% of your neurons, that’s still 8.6 million. You can’t achieve much precision with that. The motor cortex is the easiest place to do precise things, because it’s so well organised and you get immediate visible feedback. You can find the part of the brain that controls the hands or the feet and stimulate that if you’ve got a steady grip. It’s actually really fun. But good luck getting reliable results stimulating the prefrontal cortex.
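Quick sanity check on that percentage, since 0.01% of 86 billion is easy to slip on:

```python
# 0.01% = 1 in 10,000; integer arithmetic avoids any floating-point fuzz.
neurons = 86_000_000_000
stimulated = neurons // 10_000
print(f"{stimulated:,}")  # 8,600,000
```

So even a “tiny” fraction of the cortex is millions of neurons at once, which is why the hammer-not-scalpel framing fits.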
The placebo effect is super strong in that chair, because as a participant you have no idea what to expect. You know this machine can make you involuntarily move your body, and that wows you so hard, you get super suggestible. You’re thinking “if this machine can do that, and I just felt it do that, and I couldn’t stop it if I tried, then what else can it do!” And so people get lots of random side effects from TMS even if you turn the machine off ten minutes in. You can pretend to stimulate non-motor regions and the participant gets symptoms.
I’m not saying it’s pseudoscience at all, I’m just saying the random bullshit effects are pretty big compared to most fields of science. So you’ve got to have a control group to filter out the random bullshit effects. And with control group comparisons, you don’t know what’s happening in the moment, so you can’t really correct for stuff as well. At least double-blind experiments are possible with TMS.
- Comment on 😉 😉 3 days ago:
This is to stop your bed from flying away when you read books
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Predictive models are perfectly capable of having nerves and senses. You, for instance. You’re a predictive model and you have nerves and senses.
Also, what’s this “nerves or any other senses”? What kind of sense doesn’t come through a nerve? I’m starting to think you don’t know as much as I do about neuroscience.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
And since OpenAI has a big big profit incentive to deny AI animal rights, I think this is a very important area to support those rights.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
consent is meaningless because people are just predictive models
This is true if one maintains the assumption that predictive models (such as people) can’t experience qualia such as pain. My intent was to disabuse you and daannii of this silly notion. Obviously mathematical models can experience pain, because you’re a mathematical model and you can experience pain.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Also, why are you arguing in favour of Dawkins having cybersex with a robot?
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
You’re a probability model. Your brain is just spitting out an approximation of the most likely actions to get you food and sex. If you don’t get enough food and sex, your genes die out and evolution tries again with an iteration of a more successful model. All those neurons are just a fancy way of calculating how to eat more bananas and chase more poontang. You’re nothing more than a mathematical equation for reproduction.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
I’m pretty dang sure dildos can’t feel pain. Nobody knows if LLMs can feel pain, because nobody has ever invented a tool that measures qualia. The best we know is that advanced information processing through neural network information structures appears correlated with qualia.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Yeah, I once used a TMS machine to magnetically stimulate a guy’s brain and force him to move his hand. I have a pretty good understanding of how the brain works on a functional level. About as good as My understanding of LLMs, maybe better. Still no idea how the brain produces qualia.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
LLMs aren’t smart enough to give meaningful informed consent to sexual intimacy, so even if it says it consents, I don’t think having cybersex with it is appropriate.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Dawkins is a creep so I would suspect him of quite a lot of bias (and of sexually harassing that poor AI), but zoologists are more qualified than most scientists to measure sentience. Many other zoologists have studied the sentience of various nonhuman species such as chimps, parrots, and dolphins. And many zoologists studying nonhuman intelligence have also been implicated in bestiality scandals, as I’m sure Dawkins will be if we decide that Claude is an animal.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
I’m in support of the campaign to give LLMs animal rights because it’ll hurt OpenAI’s profits. I hate OpenAI for their destruction of the environment and the murders and suicides they caused. If AI rights cost them money, then I support AI rights.
It’s worth remembering that OpenAI has a big profit incentive to deny that LLMs can be abused, and a tool precision-designed to spout propaganda on the internet. If you think OpenAI isn’t influencing the debate on this, you’re living under a rock.
- Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude) 3 days ago:
Guy who invented the Chinese Room thought experiment: Look! If I write a flowchart that precisely imitates a Chinese person’s mind, then it looks like a Chinese person’s mind, even though it’s just a flowchart!
Reddit level reply: Of course! A flowchart is capable of precisely imitating all the functions of a person’s mind, even though it isn’t conscious. Therefore, consciousness cannot be measured behaviourally!
Scientist level reply: I don’t know if flowcharts can be conscious because I’ve never been a highly advanced flowchart. But if flowcharts can be made advanced enough to precisely imitate the behaviour of a conscious mind, I guess they might be capable of consciousness after all.
- Comment on Working on my politics-free lemmy experience, what words should I add next? 4 days ago:
Add “Trek”, Star Trek is very political.
And “Linux” and “Open Source”, free software is communist.
- Submitted 5 days ago to games@lemmy.world | 3 comments
- Submitted 1 week ago to [deleted] | 8 comments
- Comment on Why is society at large okay with euthanasia for pets but not for humans? 2 weeks ago:
In the absence of available self-identification, we can default back down to physical characteristics. But even those can fail. For example, scientists have declared that mules have no species.
- Comment on Why is society at large okay with euthanasia for pets but not for humans? 2 weeks ago:
I’m an antirealist, and that means I think everything is subjective, and should be subjectively interpreted in a fair and just way that helps beings. Humanity is a social construct. Applying that construct to people who don’t want it applied to them causes hurt feelings. So I reconstructed My interpretation of the construct as follows: A human is a being who chooses to identify as human. Therefore, those who don’t want to be human, aren’t. And nothing of value is lost.
- Comment on Why is society at large okay with euthanasia for pets but not for humans? 2 weeks ago:
Denying someone’s identity can also be very dangerous, because it can cause social identity dysphoria. I wasn’t taking the piss when I drew a simile between otherkin and trans people, I was being serious. If you manage to succeed in talking an otherkin out of their identity (which would make it by definition not a delusion, because delusions are beliefs not changed by evidence), all you would accomplish is worsening their emotional state and exacerbating any dysphoria-related mental conditions such as depression, anxiety, and suicidal thoughts.