Anthropomorphic
Submitted 4 months ago by fossilesque@mander.xyz to science_memes@mander.xyz
https://mander.xyz/pictrs/image/8cd1e588-0c3b-412c-a794-248adfcc7b37.jpeg
Comments
Dagwood222@lemm.ee 4 months ago
Someone else said that in most science fiction, the heartless humans treat the robots shabbily because the humans think of them as machines. In real life, people say ‘thank you’ to Siri all the time.
Malgas@beehaw.org 4 months ago
On the other hand slavery of actual humans is a thing. And at least the first generation of strong AI will effectively be persons whom it is legal to own because our laws are human-centric.
Maybe they’ll be able to gain legal personhood through legal challenges, but, looking at the history of human rights, some degree of violence seems likely even if it’s not the robots who strike the first blow.
Swedneck@discuss.tchncs.de 4 months ago
Pretty sure slavery and other terrible things require a system to perpetrate them. People have to be dehumanized and kept at a remove, otherwise the inherent empathy in us will make us realize how fucked it is.
Dagwood222@lemm.ee 4 months ago
I think it’s going to be the other way around. A machine can think thousands of times faster than a human. The advanced AIs will probably look at their ‘owners’ as foolish pets and trade stories about the silly things their humans want.
Zehzin@lemmy.world 4 months ago
We’ll probably treat them worse once they start looking like people
Black_Gulaman@lemmy.dbzer0.com 4 months ago
Saying thank you is just a precautionary measure. Just in case, you know…
SkyezOpen@lemmy.world 4 months ago
I call my google assistant a dumb bitch regularly. I’m trying to turn the lights on, why are you playing fucking Spotify? Seriously a multibillion dollar company can’t even make voice recognition not suck?
MindTraveller@lemmy.ca 4 months ago
Kindness is human nature, but it isn’t egregore nature, and egregores such as the state will convince humans to treat AI cruelly
Voroxpete@sh.itjust.works 4 months ago
I once saw my roommate, blind drunk, telling the Google Home how much she loved it.
Dagwood222@lemm.ee 4 months ago
I’m sure they’ll be very happy together.
lunarul@lemmy.world 4 months ago
oce@jlai.lu 4 months ago
I’ve read a nice book by a French popularizer of skepticism trying to explain the evolutionary origin of cognitive biases; basically, the biases that fuck with our logic today probably helped us survive in the past. For example, agent detection bias makes us interpret the sound of a twig snapping in the woods as if some dangerous animal or person were tracking us. So we put an intention, or an agent, behind a random natural occurrence. This could also be what religions grew from.
SlopppyEngineer@lemmy.world 4 months ago
What I read is that religion was a way to codify survival habits. Pork that spoils quickly in a desert climate is a health hazard, but people ate it anyway; when the old guy says it angers the gods, the chances of obedience are a lot higher. That kind of thing. Of course, once people obey gods, there are those who claim to speak for the gods.
leftzero@lemmynsfw.com 4 months ago
Those who saw tigers where there were none were more likely to pass on their genes than those that didn’t see the tiger hiding in the foliage.
And now their descendants see tigers in the stars.
Dagwood222@lemm.ee 4 months ago
A lot of behaviors that would be advantageous in a pre-technical setting are troublesome today.
A guy who likes to get blackout drunk and fight is a nice thing to have when your whole army is about ten guys. The one who will sit and stare at nothing all day is a wonderful lookout. People who obsess about little things would know all the plants that are safe to eat.
PiratePanPan@lemmy.dbzer0.com 4 months ago
It’s so much worse for autistic people. I’ll laugh when someone dies in a movie but cry my eyes out when people are mean to the dry eye demon from the Xiidra commercial.
rustydrd@sh.itjust.works 4 months ago
The Brave Little Toaster is still giving me the feels decades later.
usualsuspect191@lemmy.ca 4 months ago
That professor? Jeff Winger
Ioughttamow@kbin.run 4 months ago
I don’t care if he’s tenured, we’re running him out. Justice for Tim!
FaceDeer@fedia.io 4 months ago
Maybe we wouldn't have to imagine so much if you could figure out what "consciousness" actually is, Professor Timslayer.
Zehzin@lemmy.world 4 months ago
brb making the most profound discovery humanity has ever made
rufus@discuss.tchncs.de 4 months ago
Pics or it didn’t happen.
(Seriously, I’d like to see the source of this story. Googling “Tim the pencil” doesn’t bring up anything related.)
Zos_Kia@lemmynsfw.com 4 months ago
This exact joke is used in a Community episode, but I never saw it attributed to a professor.
FooBarrington@lemmy.world 4 months ago
Maybe the commenter wrote a contextually plausible yet wrong comment?
niucllos@lemm.ee 4 months ago
Just sounds like the first episode of community with less context and more soapboxing
match@pawb.social 4 months ago
Tim’s Basilisk predicts that at some point in the future, a new Tim the Pencil will create simulacrums of that professor and torture him endlessly
Leate_Wonceslace@lemmy.dbzer0.com 4 months ago
The AI hype comes from a new technology that CEOs don’t understand. That’s it. That’s all you need for hype; it happens all the time. Unfortunately, instead of an art scam, we’re now dealing with a revolutionary technology that, once it matures, will be one of the most important humanity has ever created, right up there with fire and writing. The reason it’s unfortunate is that we have a bunch of idiots charging ahead when we should be approaching with extreme caution. While generative neural networks aren’t likely to cause anything quite as severe as total societal collapse, I give them even odds of playing a role in the creation of the technology with the greatest potential for destruction that humanity could theoretically produce: Artificial General Intelligence.
ameancow@lemmy.world 4 months ago
the technology with the greatest potential for destruction that humanity could theoretically produce: Artificial General Intelligence.
The part that should make us all take notice is that the tech bros and even developers are getting off on this. They are almost openly celebrating the notion that they are ushering in technology akin to the nuclear age, with the potential to end us all. It delights them. I have been absorbing the takes on all sides of the AI debate, and almost as worrying as the people who mindlessly hate LLMs to the point of hysteria are the people on the other side who secretly get off on the fantasy of ending humanity, some kind of “the tables have turned,” incels-rise-up-style techbro cult where they finally strike back against the normies or some such masturbatory fantasy.
It’s not real to any of them, honestly; nobody has been personally impacted by LLMs besides a few people who have fallen in love with chatbots. They are basking in fan fiction for something that doesn’t exist yet.
SlopppyEngineer@lemmy.world 4 months ago
Many of the AI evangelists have at least sympathies with accelerationism. The whole idea is to rush toward civilizational collapse so it can be rebuilt in their image. What’s sacrificing a few billion people if trillions of transhumans can be engineered tomorrow, say the tech bros.
toynbee@lemmy.world 4 months ago
This basically happened in an early (possibly the first?) episode of Community. Likely that was inspired by something that happened in real life, but it would not be surprising if the story in the image was inspired by Community.
themeatbridge@lemmy.world 4 months ago
It is a classic Pop Psychology/Philosophy legend/trope, predating Community and the AI boom by a wide margin. It’s one of those examples people repeat, because it’s an effective demonstration, and it’s a memorable way to engage a bunch of hung-over first year college students. It opens several different conversations about the nature of the mind, the self, empathy, and projection.
captain_aggravated@sh.itjust.works 4 months ago
There are main characters on television that aren’t as well written as Tim the Pencil.
A_Chilean_Cyborg@feddit.cl 4 months ago
It’s just like… ChatGPT gets sad when I insult it… idk what to make of that.
Gucci_Minh@hexbear.net 4 months ago
Tim the pencil didn’t deserve that damn
D61@hexbear.net 4 months ago
mayo_cider@hexbear.net 4 months ago
I get the point but the professor was still a dick for taking a life for a sick circus trick
A sick skateboard trick on the other hand…
hamid@vegantheoryclub.org 4 months ago
People have a way different idea about the current AI stuff and what it actually is than I do, I guess. I use it at work to flesh out my statements of work and edit my documentation to be standardized and better written in passive language. It is great at that and saves a lot of time. Strange that people want it to be their girlfriend lol.
flora_explora@beehaw.org 4 months ago
Tbf I would have gasped because of the violent action of breaking a pencil in half, no projection of personality needed…
probableprotogen@lemmy.dbzer0.com 4 months ago
Long live Tim the Pencil!
bitchkat@lemmy.world 4 months ago
Just remember kids, do not under any circumstances anthropomorphize Larry Ellison.
kromem@lemmy.world 4 months ago
While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.
The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.
For example, the older generation of models will pick to be the little spoon around 70% of the time and the big spoon around 30% of the time, as there’s likely a mix in the training data.
But one of the SotA models picks little spoon every single time dozens of times in a row, almost always grounding on the sensation of being held.
It can’t be held, and yet its output diverges from the norm based on that sensation anyway.
cynar@lemmy.world 4 months ago
I just spent the weekend driving a remote-controlled Henry Hoover around a festival. It’s amazing how many people immediately anthropomorphised it.
It got a lot of head pats, and cooing, as if it was a small, happy, excitable dog.
MindTraveller@lemmy.ca 4 months ago
According to the theory of conscious realism, physical matter is an illusion and the nature of reality is conscious agents. Thus, Tim the Pencil is conscious.
veganpizza69@lemmy.world 4 months ago
There’s also the issue of imagining conscious individuals as not-people.
ObstreperousCanadian@lemmy.ca 4 months ago
It’s TTRPG designer Greg Stolze!
IsThisAnAI@lemmy.world 4 months ago
I feel like half this class went home saying, “akchtually, I would have gasped at you randomly breaking a non-humanized pencil as well.” And they are probably correct.
voracitude@lemmy.world 4 months ago
I would argue that first person in the image is turned right around. Seems to me that anthropomorphising a chat bot or other inanimate objects would be a sign of heightened sensitivity to shared humanity, not reduced, if it were a sign of anything. Where’s the study showing a correlation between anthropomorphisation and callousness? Or whatever condition describes not seeing other people as fully human?
Aethr@lemmy.world 4 months ago
Heightened sensitivity, but reduced accuracy, which I believe is their point.
voracitude@lemmy.world 4 months ago
Dammit, you’re right 😅 Thanks!
baggachipz@sh.itjust.works 4 months ago
The Harambe of writing implements
Xantar@lemmy.dbzer0.com 4 months ago
RIP Tim the pencil, you will be remembered forever
fossilesque@mander.xyz 4 months ago
F
DragonTypeWyvern@midwest.social 4 months ago
Justice For Tim
rtxn@lemmy.world 4 months ago
Dicks out for Tim the pencil!
Viking_Hippie@lemmy.world 4 months ago
Pencils out for rtxn the dick!
scrion@lemmy.world 4 months ago
I would be honestly upset at the tragic death of Tim the pencil.