But why not ask it for a source if the information is critical? It’s right far more often than it’s wrong and works as a great tool to speed up learning. I’m really interested in people sharing what prompts they used and the wrong answers it produced.
Comment on How do AI-based search engines know legit sources from BS ones?
Pyr_Pressure@lemmy.ca 3 days ago
Pretty much anything tech support: it gives you options which no longer exist because the solution it suggests is from a slightly older Windows/Android version, the UI changed, and the option is no longer where it thinks it is.
Also, asking if particular wildlife is in a particular location. I tried asking it if polar bears were in a place I’m going to visit and it said yes, but a quick search through its sources confirmed that was false and the nearest polar bears are hundreds of miles away.
Melvin_Ferd@lemmy.world 3 days ago
Pyr_Pressure@lemmy.ca 3 days ago
What’s the point of AI if you need to search for the source to make sure it’s right every time? Just skip a step and search for a source first thing.
Melvin_Ferd@lemmy.world 3 days ago
There are so many ways to answer this that I’m surprised it’s asked in the first place. AI is not some be-all and end-all of knowledge. It’s a tool like any other.
Case@lemmynsfw.com 3 days ago
If an amateur mycologist picks and eats the wrong mushroom that an LLM said was fine to eat, is the LLM liable for the death legally and/or financially?
I mean, I know better than to pick random mushrooms and eat them, but I don’t really care for mushrooms - though some have delightful effects when metabolized, lol. The only ones of THOSE I tried, I knew who grew them, saw the “operation,” and reviewed his sources before trying one.
Call me paranoid, but I’m not blindly trusting a high school dropout to properly identify mushrooms when professionals make mistakes, to the point where any mycologist will tell you: DON’T TRUST PICS OR THE INTERNET.
It can be too difficult to tell from those sources, and I doubt the LLM and the human asking the questions can get on the same wavelength well enough not to produce misleading, if not entirely fabricated, results.