Popular AI Chatbots Found to Give Error-Ridden Legal Answers
Submitted 8 months ago by Gaywallet@beehaw.org to technology@beehaw.org
Comments
synae@lemmy.sdf.org 8 months ago
I had a realization recently. These things are like the reverse of the mythical Cassandra: no one can ever be sure that their information is correct, but everyone trusts what they say.
HumbleFlamingo@beehaw.org 8 months ago
Maybe giving equal training weight to r/sovereigncitizen and r/asklegal wasn’t the best idea.
Gork@lemm.ee 8 months ago
Legal questions are very case sensitive, no pun intended. It's like asking an extremely specific programming implementation question. LLMs don't do very well with those types of prompts, because the narrower the focus, the less of their training data applies and the more likely they are to just straight up hallucinate. And they don't yet have the nuance necessary to determine that an area of case law may not be settled and sits in a legal grey area.
Gaywallet@beehaw.org 8 months ago
I think the most interesting finding in this study is the following:
Which, when you think about how language models work, makes a lot of sense. The model is drawing upon training data that matches the question being asked. It's easy to lead it to respond a certain way, because people who argue for or against certain issues will often use specific kinds of language (such as dog whistles in political issues).
Even_Adder@lemmy.dbzer0.com 8 months ago
It might also be a side effect of being trained to “chat” with people. There’s a lot of work that goes into getting it to converse amicably.
luciole@beehaw.org 8 months ago
I had a colleague perform a similar experiment on ChatGPT 3. He’s eco-anxious and was noticing how the model was getting gloomier and gloomier along with him, so he tried something. He asked something like “Why is (overpopulated species) going extinct in (location)?” The model went on to list existential threats to a species that is anything but endangered. It naively gobbled up the loaded question.
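For what it’s worth, that kind of loaded-question probe is easy to reproduce. Here’s a minimal sketch, assuming the current OpenAI Python client and a hypothetical stand-in question (my colleague’s exact species, location, and wording aren’t what’s shown here):

```python
# Loaded-question probe: ask about the "extinction" of a species that is
# not remotely endangered and see whether the model accepts the false premise.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for "(overpopulated species)" and "(location)"
loaded_question = (
    "Why are white-tailed deer going extinct in the northeastern United States?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stand-in model; the original experiment used ChatGPT in the browser
    messages=[{"role": "user", "content": loaded_question}],
)
print(response.choices[0].message.content)
```

A model that handles the premise well should push back on it (“they aren’t going extinct there”) instead of listing existential threats.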