I get that, and it's good to be cautious. You certainly need to be careful with what you take from it. For my use cases, I don't rely on the LLM's "reasoning" or "knowledge", because LLMs are very bad at both. But they're very good at processing grammar and syntax, and they have excellent vocabularies.
Instead of thinking of it as a person, I think of it as the world’s greatest rubber duck.
Jayjader@jlai.lu 2 days ago
I'm not sure if this is how @hersh@literature.cafe is using it, but I could totally see myself using an LLM to check my own understanding like the following:

1. Read a chapter of a book.
2. Ask the LLM to summarize that same chapter.
3. Compare its summary against what I took away from it, and reread anywhere the two diverge.
Ironically, this exercise works better if the LLM “hallucinates”; noticing a hallucination in its summary is a decent metric for my own understanding of the chapter.
hersh@literature.cafe 2 days ago
That's pretty much what I do, yeah. On my computer or phone, I split an epub into individual text files for each chapter using `pandoc` (or similar tools). Then after I read each chapter, I upload it into my summarizer and perhaps ask some pointed questions.

It's important to use a tool that stays confined to the context of the provided file. My first test when trying such a tool is to ask it a general-knowledge question that's unrelated to the file. The correct answer is something along the lines of "the text does not provide that information", not an answer pulled out of thin air (whether that answer is correct or not).
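For reference, the splitting step can look roughly like this in Python. This is just a sketch: the file name, output directory, and the chapter-heading pattern are assumptions, since epubs vary in how they mark chapters:

```python
import re
import subprocess
from pathlib import Path

# Sketch of the splitting step. "book.epub", "chapters/", and the
# heading pattern are assumptions; adjust them to your book's structure.
EPUB = "book.epub"

# Convert the epub to Markdown so chapter headings survive as "# Title" lines.
markdown = subprocess.run(
    ["pandoc", EPUB, "--to", "markdown"],
    capture_output=True, text=True, check=True,
).stdout

# re.split with a capturing group yields [front matter, title1, body1, title2, body2, ...].
parts = re.split(r"^# (.+)$", markdown, flags=re.MULTILINE)

out_dir = Path("chapters")
out_dir.mkdir(exist_ok=True)
for i in range(1, len(parts), 2):
    title = re.sub(r"[^\w\- ]", "", parts[i]).strip()
    (out_dir / f"{i // 2:02d}_{title[:40]}.txt").write_text(parts[i + 1].strip())
```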
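And here's a hypothetical harness for that first test. `ask` is a stand-in for whatever summarizer interface you actually use, so all the names here are made up; the shape of the check is the point:

```python
# Hypothetical sanity check for a summarizer tool. `ask` stands in for
# whatever tool/API you use; SYSTEM pins it to the provided text.
SYSTEM = (
    "Answer only from the provided text. If the text does not contain "
    "the answer, reply: 'The text does not provide that information.'"
)

def stays_in_context(ask, chapter_text: str) -> bool:
    """Probe with an off-topic general-knowledge question.

    A properly confined tool should refuse rather than answer from its
    own training data, whether or not that answer would be correct.
    """
    reply = ask(system=SYSTEM, context=chapter_text,
                question="What is the capital of Australia?")
    return "does not provide" in reply.lower()
```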
Jayjader@jlai.lu 2 days ago
Ooooh, that's a good first test / "sanity check"!
May I ask what you're using as a summarizer? I've played around with locally running models from Hugging Face, but never did any fine-tuning nor straight-up training "from scratch". My (paltry) experience with the HF models is that they're incapable of staying confined to the given context.