Comment on Reddit lost it

NotANumber@lemmy.dbzer0.com ⁨2⁩ ⁨days⁩ ago

I don’t trust OpenAI and try to avoid using them. That being said, they have always been one of the more careful labs regarding safety and alignment.

I also don’t need you or OpenAI to tell me that hallucinations are inevitable. Here, have a read of this:

Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models,” 2025-02-13, arxiv.org/abs/2401.11817

Regarding resource usage: this is why open-weights models, like those from the Chinese labs or Mistral in Europe, are better. They are much more efficient and frankly more innovative than whatever OpenAI is doing.

Ultimately, though, you can’t just blame LLMs for people committing suicide. That’s a lazy excuse to avoid addressing real problems, like how society treats neurodivergent people. These are the same problems that lead to radicalization, including incels and neo-Nazis, and they were all happening before LLM chatbots took off.
