LLMs are absolutely the worst thing you can talk to about mental health issues. There was a post here recently that linked a screenshot of an LLM telling an addict to have some meth as a treat for being a week sober. It’s a glorified autocomplete, nothing more.
Iunnrais@lemm.ee 3 days ago
Please be careful. While the thrust of your statement is correct (it's not a substitute for a real professional, it can give dangerously bad advice on occasion, and there's no way to tell when it messes up except personal knowledge and expertise built through hard study and real research), the meme that LLMs are glorified autocomplete is factually incorrect. Don't be like the D.A.R.E. program and try to scare people away from things with bad facts and lies.
It is disingenuous to say that because the training system that produces the LLM uses “next word prediction” as its success metric, the LLM itself is therefore nothing but autocomplete. Here's an example of a next word predictor: a fully fledged intelligent human being who is asked to predict the next word of a sentence. I'm not saying that an LLM is that, or equivalent, or even close, just that being a next word predictor doesn't rule it out, and claiming or implying that it does is simply wrong.
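For concreteness, here's a minimal sketch (my own toy illustration, not anything from the linked post) of what “next word prediction” means as an objective: a bigram counter that predicts the most likely continuation. An LLM is trained against the same kind of signal, just with a deep network over vastly more text, which is the point: the objective alone doesn't tell you how simple or sophisticated the resulting predictor is.

```python
# Toy "next word predictor": count which word follows which in a tiny
# corpus, then predict the most frequent continuation. This is the
# lookup-table version of the training signal; an LLM optimizes the
# same kind of objective with a deep network over enormous corpora.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word after that".split()

# Bigram counts: how often does each word follow the previous one?
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> "next"
print(predict_next("next"))  # -> "word"
```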
True, LLM output is not guaranteed to be correct, and in areas where correctness really matters and you lack the expertise to check it, you really should not use an LLM. But let's not lie to make it sound dumber than it is. It's plenty dumb enough already.
null_dot@lemmy.dbzer0.com 2 days ago
This statement is really just saying that an LLM cannot reason about its assertions.