projectmoon
@projectmoon@lemm.ee
- Comment on How does AI-based search engines know legit sources from BS ones ? 3 days ago:
A lot of the answers here are short or quippy, so here’s a more detailed take. LLMs don’t “know” how good a source is. They are word-association machines, and they are very good at that. When you use something like Perplexity, an external API feeds text from the search results into the LLM, which then summarizes that text in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check the factual accuracy of sources, e.g. by comparing the generated text against authoritative information, but how much of that Perplexity et al. actually employ, I have no idea.
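To make the flow concrete, here’s a minimal sketch of the “search results fed into the LLM” step described above. The function and data are hypothetical stand-ins (this is not Perplexity’s actual API); it just shows how retrieved snippets get assembled into a grounded prompt before summarization:

```python
# Hypothetical sketch of retrieval-augmented prompting.
# build_prompt and the snippet format are assumptions for illustration,
# not any real search engine's internals.

def build_prompt(query: str, snippets: list[dict]) -> str:
    """Assemble search-result snippets into a prompt that asks the
    LLM to answer only from the provided sources."""
    sources = "\n".join(
        f"[{i + 1}] {s['url']}\n{s['text']}"
        for i, s in enumerate(snippets)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}\nAnswer:"
    )

# Example snippets as a search API might return them:
snippets = [
    {"url": "https://example.org/a",
     "text": "Water boils at 100 C at sea level."},
    {"url": "https://example.org/b",
     "text": "Boiling point drops with altitude."},
]
prompt = build_prompt("At what temperature does water boil?", snippets)
```

The LLM never evaluates the sources itself here; whatever the search layer returns is what it summarizes, which is why the retrieval quality, not the model, largely determines how “legit” the answer is.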
- Comment on Patrol Unit 13 3 weeks ago:
I think you have the wrong full generation parameters here.
- Comment on Duolingo will replace contract workers with AI 5 weeks ago:
The problem is that while LLMs can translate, it’s still machine translation and isn’t always accurate. And it won’t stop at translation: they’ll apply “AI” to everything that looks like it might vaguely fit, and it’ll stifle productivity.
- Comment on English, old 1 year ago:
It’s the opening of the Canterbury Tales.
- Comment on Why do startrek.website pictures/avatars not show up? 1 year ago:
Yes, it’s federated. I’ll try this if I notice it again.
- Submitted 1 year ago to meta@lemm.ee | 2 comments