
Can AI talk us out of conspiracy theory rabbit holes?

26 likes

Submitted 8 months ago by 101@feddit.org to technology@beehaw.org

https://theconversation.com/can-ai-talk-us-out-of-conspiracy-theory-rabbit-holes-238580


Comments

  • halm@leminal.space 8 months ago

    According to the research mentioned in the article, the answer is yes. The big caveats are:

    • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.
    • that you need a level of “AI” that isn’t going to start hallucinating and reinforce the subjects’ conspiracy beliefs instead. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.
    • Butterbee@beehaw.org 8 months ago

      It’s not even fundamentally possible with the current LLMs. It’s like saying “Yes, it’s totally possible to do that! We just need to invent something that can do that first!”

      • halm@leminal.space 8 months ago

        I think we agree on the limited capability of (what is currently passed off as) “artificial intelligence”, yes.

    • CanadaPlus@lemmy.sdf.org 8 months ago

      that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

      You overestimate how hard it is to get a conspiracy theorist to click on something.

      that you need a level of “AI” that isn’t going to start hallucinating and reinforce the subjects’ conspiracy beliefs instead. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

      They used a purpose fine-tuned GPT-4 model for this study, and it didn’t go off script once.

  • SweetCitrusBuzz@beehaw.org 8 months ago

    Betteridge’s law of headlines.

    So no.

    • OhNoMoreLemmy@lemmy.ml 8 months ago

      That’s just what they want you to think.

      • SweetCitrusBuzz@beehaw.org 8 months ago

        Hehe

  • desktop_user@lemmy.blahaj.zone 8 months ago

    The better goal is creating new, unique conspiracy theories that nobody has heard of, with the help of machine learning.

  • Kwakigra@beehaw.org 8 months ago

    I have two main thoughts on this:

    1. LLMs are not, at this time, reliable sources of factual information. The user may be getting something that was skimmed from factual information, but the output can often be incorrect, since the machine can’t “understand” the information it’s outputting.

    2. This could potentially be an excellent way to do real research for people whose education never taught them research skills. Conspiracy theorists often start off as curious but undisciplined before they fall into the identity aspects of the theories. If a machine using human-like language is able to report factual information quickly, reliably, and without judgement to those who wouldn’t be able to find that info on their own, this could actually be a very useful tool.
