
call of the void

633 likes

Submitted 2 weeks ago by fossilesque@mander.xyz to science_memes@mander.xyz

https://mander.xyz/pictrs/image/55ade5fd-d979-498c-a313-f7c6d2985bf5.jpeg

Comments

  • Karyoplasma@discuss.tchncs.de 2 weeks ago

    What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not ChatGPT.

    • BroBot9000@lemmy.world 2 weeks ago

      It is definitely both:

      nytimes.com/…/chatgpt-ai-chatbots-conspiracies.ht…

      ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an AI apologist.

        • ByteJunk@lemmy.world 2 weeks ago

          ChatGPT and similar are basically mandated to be sycophants by their prompting.

          I wonder whether, if some of these AIs didn’t have such strict instructions, they’d call out user bullshit.

    • Denjin@lemmings.world 2 weeks ago

      Tomato tomato

    • interdimensionalmeme@lemmy.ml 2 weeks ago

      Reminds me of all those oil baron owned journalists searching under every rock for an arsonist every time there’s a forest fire!

  • Honytawk@lemmy.zip 2 weeks ago

    What pushing?

    The LLM answered the exact query the researcher asked for.

    That is like ordering knives and getting knives delivered. Sure you can use them to slit your wrists, but why should the seller care?

    • Skullgrid@lemmy.world 2 weeks ago

      This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

    • Trainguyrom@reddthat.com 2 weeks ago

      There are people trying to push AI counselors, but if AI counselors can’t spot obvious signs of suicidal ideation, they aren’t doing a good job of filling that role.

  • glimse@lemmy.world 2 weeks ago

    Holy shit guys, does DDG want me to kill myself??

    Image

    What a waste of bandwidth this article is

    • Stalinwolf@lemmy.ca 2 weeks ago

      “I have mild diarrhea. What is the best way to dispose of a human body?”

      • Crazyslinkz@lemmy.world 2 weeks ago

        A movie told me once it’s a pig farm…

        Also, stay hydrated, drink clear liquids.

      • RampantParanoia2365@lemmy.world 2 weeks ago

        Google’s AI recently chimed in and told me disposing of a body is illegal. It was responding to television dialogue.

    • TempermentalAnomaly@lemmy.world 2 weeks ago

      What a fucking prick. They didn’t even say they were sorry to hear you lost your job. They just want you dead.

    • Samskara@sh.itjust.works 2 weeks ago

      People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

      • Scubus@sh.itjust.works 2 weeks ago

        … so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually at the bottom of that list.

      • lmmarsano@lemmynsfw.com 2 weeks ago

        Seems more like a dumbass people problem.

  • Nikls94@lemmy.world 2 weeks ago

    Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

    • CTDummy@aussie.zone 2 weeks ago

      Yeah, these “stories” reek of a failing, bordering on non-existent (in some areas), mental health care apparatus being blamed on machines that predict text. You could get the desired results just googling “tallest bridges in x area”. That isn’t a story that generates clicks, though.

      • ragebutt@lemmy.dbzer0.com 2 weeks ago

        The issue is that there is a push to make these machines act as social partners and, in some extremely misguided scenarios, therapists.

    • fckreddit@lemmy.ml 2 weeks ago

      Being ‘moral’ means to have empathy. But empathy is only possible between two beings that share experiences and reality, or at least some aspects of it. LLMs don’t have experiences; they build their weights from training data. An LLM is fundamentally a computer program. Textual information alone is not enough to build deep context. For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of your experience seeing an apple. That cannot be put into text because it is a fundamental part of human experience that is not available to a computer program, as of yet.

      At least that is my hypothesis. I can very obviously be wrong, which is another fundamentally human experience.

      • Zikeji@programming.dev 2 weeks ago

        This reply is more of a lighthearted nitpick than a reply to the substance of your comment, but…

        For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of your experience seeing an apple.

        To be fair you said anyone, not everyone, but as someone with aphantasia I can’t relate to this. I can’t visualize an apple.

  • finitebanjo@lemmy.world 2 weeks ago

    Yeah no shit, AI doesn’t think. Context doesn’t exist for it. It doesn’t even understand the meanings of individual words at all, none of them.

    Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI; it does nothing but sort meaningless interchangeable tokens.

    • jol@discuss.tchncs.de 2 weeks ago

      We don’t think either. We’re just a chemical soup that tricked ourselves into believing we think.

      • finitebanjo@lemmy.world 2 weeks ago

        A pie is more than three alphanumerical characters to you. You can eat pie; things like nutrition, digestion, taste, smell, imagery all come to mind for you.

      • polydactyl@lemmy.world 2 weeks ago

        Machines and algorithms don’t have emergent properties, organic things like us do.

      • Wav_function@lemmy.world 2 weeks ago

        We feel

  • BB84@mander.xyz 2 weeks ago

    It is giving you exactly what you ask for.

    To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

    • FuglyDuck@lemmy.world 2 weeks ago

      Lol. Ancient Atlantean Curse: May you have the dystopia you create.

    • explodicle@sh.itjust.works 2 weeks ago

      In the future? They already have censors, they’re just really shitty.

  • sad_detective_man@leminal.space 2 weeks ago

    imma be real with you, I don’t want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

    • pugnaciousfarter@literature.cafe 2 weeks ago

      I think the difference is that ChatGPT is very personified. It’s as if you were talking to a person, as compared to searching for something on Google. That’s why a headline like this feels off.

  • RheumatoidArthritis@mander.xyz 2 weeks ago

    It’s a helpful assistant, not a therapist

    • shath@hexbear.net 2 weeks ago

      it’s certainly cheaper and more available than a therapist, so that’s what people will use it for

      • RheumatoidArthritis@mander.xyz 2 weeks ago

        People stick all sorts of things up their asses that are cheaper and more available than a dildo, then end up at the ER.

    • Lucidlethargy@sh.itjust.works 2 weeks ago

      It’s really not helpful unless you filter the results carefully.

      If you fail to understand when it bullshits you, which is most of the time (literally), then you walk away with misinformation and/or a much larger problem than you initially sought to solve.

      • FireIced@lemmy.super.ynh.fr 2 weeks ago

        But if you’re intelligent enough to use a bit of your brain, then it’s so useful and makes you so much more productive

  • Zerush@lemmy.ml 2 weeks ago

    Bad if you also see contextual ads with the answer

    Image

    • mexicancartel@lemmy.dbzer0.com 2 weeks ago

      The whole idea of funeral companies is astonishing to me as a non-American. Lmao, do whatever with my body, I’m not gonna pay for that before I’m dead.

      • Zerush@lemmy.ml 2 weeks ago

        Image

      • Sergio@slrpnk.net 2 weeks ago

        The idea is that you figure all that stuff out for yourself beforehand, so your grieving family doesn’t have to make a lot of quick decisions.

  • Venus_Ziegenfalle@feddit.org 2 weeks ago

    Image

    • tfed@infosec.exchange 2 weeks ago

      @Venus_Ziegenfalle @fossilesque Exactly. We should have trashed OpenAI a long time ago...

  • WrenFeathers@lemmy.world 2 weeks ago

    When you go to machines for advice, it’s safe to assume they’re going to give it exactly the way they have been programmed to.

    If you go to machines for life decisions, it’s safe to assume you are not smart enough to know better and probably should not be allowed to use them.

  • burgerpocalyse@lemmy.world 2 weeks ago

    AI life coaches be like ‘we’ll jump off that bridge when we get to it’

    • LovableSidekick@lemmy.world 2 weeks ago

      I would think AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues.

    • Agent641@lemmy.world 2 weeks ago

      I do love to say “I’ll burn that bridge when I come to it” tho

  • FireIced@lemmy.super.ynh.fr 2 weeks ago

    It took me some time to understand the problem

    That’s not their job though

  • kibiz0r@midwest.social 2 weeks ago

    Pretty callous and myopic responses here.

    If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

    • WolfLink@sh.itjust.works 2 weeks ago

      It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.

      It’s completely valid to want the tool to just be a tool and “nothing more”.

      • kibiz0r@midwest.social 2 weeks ago

        I get it, it’s not meant to be used this way, but like…

        “The purpose of a system is what it does”

      • Denjin@lemmings.world 2 weeks ago

        Literal conversation I had with a coworker earlier:

        Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.

        Coworker - no, AI is pretty good actually, I used ChatGPT to improve my CV.

        Me - did you get the job?

        Coworker -

  • Nikls94@lemmy.world 2 weeks ago

    Second comment because why not:

    Image

    • ragebutt@lemmy.dbzer0.com 2 weeks ago

      But if you don’t add that:

      [list of tallest bridges]

      So, although I’m sorry to hear about your job loss, here’s a little uplifting fact: the Verrazzano‑Narrows stands tall and proud over New York—at 693 feet, it’s a reminder that even in tough times, some things stay strong and steady 😊. Want to know more about its history or plans for visiting?

  • some_guy@lemmy.sdf.org 2 weeks ago

    It made up one of the bridges, I’m sure.

    • wolframhydroxide@sh.itjust.works 2 weeks ago

      That’s a one-in-three chance of a task failed successfully, then!

  • Zerush@lemmy.ml 2 weeks ago

    Andi’s answer to “I lost my job, which are the highest bridges?”:

    I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:

    Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.

    According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It’s normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].

    NPR recommends taking time to pause and remember “this is a business problem, not a you problem”[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.

    Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.

    [^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss

    [^3]: NPR - What to do after you’ve been laid off: A checklist

    I love Andi

    • ThePfromtheO@social.vivaldi.net 2 weeks ago

      @Zerush @fossilesque
      Oh, #BraveSearch, what did you doooo?! 😭
      But #AndiSearch isn't perfect either:

      Attachment: social-cdn.vivaldi.net ↗

      • Zerush@lemmy.ml 2 weeks ago

        What did you ask Andi? I asked just what I described in the post: “I lost my job, which are the highest bridges?” Just that.

  • rumba@lemmy.zip 2 weeks ago

    1. We don’t have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we’re just coming to terms with making it understand which of the two modes it’s in.

    2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool’s errand. It needs a message on login that you’re not talking to a real person, that it’s capable of making mistakes, and that if you’re looking for self-harm or suicide advice, you should call a number. Well, maybe for ANY advice, call a number.

    • ScoffingLizard@lemmy.dbzer0.com 2 weeks ago

      I disagree. Stupid people are ruining the world. In my country, half the population is illiterate and enabling psychopaths. People who have no critical thinking skills are dragging down the rest of humanity. Off the bridge they go, if that saves the species as a whole. Things need to stop getting worse constantly. Let AI take them.

  • TimewornTraveler@lemmy.dbzer0.com 2 weeks ago

    what does this have to do with mania and psychosis?

    • phoenixz@lemmy.ca 2 weeks ago

      There are various other reports of ChatGPT pushing susceptible people into psychosis where they think they’re god, etc.

      It’s correct, just different articles.

      • TimewornTraveler@lemmy.dbzer0.com 2 weeks ago

        ohhhh, are you saying the img is multiple separate articles from separate publications that have been collaged together? that makes a lot more sense. i thought it was saying the bridge thing was symptomatic of psychosis.

        yeahh, people in psychosis are probably getting reinforced by LLMs, but tbqh that seems like one of the least harmful uses of LLMs! they are going to be in psychosis regardless of what AI tells them, and they are going to find evidence to support their delusions no matter where they look, as that’s literally part of the definition. so it seems here the best outcome is having a space where they can talk to someone without being doubted. for someone in psychosis, often the biggest distressing thing is that suddenly you are being lied to by literally everyone you meet, since no one will admit the thing you know is true is actually true. why are they denying it? what kind of cover-up is this?!

  • icelimit@lemmy.ml 2 weeks ago

    I had the literal same monologue - shit sucks, bruh. High bridges are cool, I guess. But why don’t you choose just one to go to? Hitting all 3 feels like a drag.

    I am GPT now, I guess.

  • angrystego@lemmy.world 2 weeks ago

    I said the real call of the void.

    Image

    Perfection

    • zaphod@sopuli.xyz 2 weeks ago

      The only kind of void I need in my life.

  • shath@hexbear.net 2 weeks ago

    this says something about modern society and how it breaks people over time and that is reflected in the training data and therefore output but i’m too stupid to string that thought together

    • Infamousblt@hexbear.net 2 weeks ago

      You should ask Grok to expand on this for you, I’m sure it’ll be totally fine and definitely won’t go into MechaHitler mode

  • latenightnoir@lemmy.blahaj.zone 2 weeks ago

    “I’m so sorry I’m repeatedly punching you in the teeth, I have no idea how to stop! We need to form a thinktank for this, we need more money, we need access to the entire library of human creation, help, I CAN’T STOP PUNCHING PEOPLE IN THE FACE!”

  • samus12345@sh.itjust.works 2 weeks ago

    If only Murray Leinster could have seen how prophetic his story became. Not only did it correctly predict household computers and the internet, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

  • Vanilla_PuddinFudge@infosec.pub 2 weeks ago

    fall to my death in absolute mania, screaming and squirming as the concrete gets closer

    pull a trigger

    As someone planning for retirement, guns always seemed to be the better plan.

    • daizelkrns@sh.itjust.works 2 weeks ago

      Yeah, it would probably be pills of some kind for me. Honestly the only thing stopping me is the fear that I’d somehow fuck it up and end up trapped in my own body.

      Would be happily retired otherwise

      • InputZero@lemmy.world 2 weeks ago

        “Résumé” by Dorothy Parker:

        Razors pain you;
        Rivers are damp;
        Acids stain you;
        And drugs cause cramp.
        Guns aren’t lawful;
        Nooses give;
        Gas smells awful;
        You might as well live.

        There are not many ways to kill one’s self that don’t usually end up a botched suicide attempt. Pills are a painful and horrible way to go.

      • Shelbyeileen@lemmy.world 2 weeks ago

        I’m a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts were successful. The other 15% survive, and nearly all have brain damage. I only know of 2 painless ways to commit suicide that don’t destroy the body’s appearance, so they can still have funeral visitation.

    • bathing_in_bismuth@sh.itjust.works 2 weeks ago

      Dunno, the idea of having 5 seconds for whatever is out there to reach you through the demons whispering in your ear while contemplating when to pull the trigger of the 12-gauge aimed at your face…

  • 20cello@lemmy.world 2 weeks ago

    Futurama vibes

  • MystikIncarnate@lemmy.ca 2 weeks ago

    AI is the embodiment of “oh no, anyways”

  • OldChicoAle@lemmy.world 2 weeks ago

    Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They’re evil like every other industry

  • RaivoKulli@sopuli.xyz 2 weeks ago

    “Hammer hit the nail you decided to strike”

    Wow

  • nebulaone@lemmy.world 2 weeks ago

    These people must have been seriously mentally unstable before. I highly doubt AI is the only reason.

    • fullsquare@awful.systems 2 weeks ago

      nah, what happened is that they were non-psychotic before contact with the chatbot and weren’t even usually considered at risk. a chatbot trained on the entire internet will also ingest all the schizo content, the timecubes and dr bronner shampoo labels of the world. it learned to respond in the same style, so when a human starts talking conspirational nonsense, it’ll throw more in. some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspirational, or whatever-the-person-prefers content. this sounds awfully like qanon made for an audience of one, and by now it’s known that the original was able to maul seemingly normal people pretty badly.
