hersh
@hersh@literature.cafe
- Comment on Chinese ebook reader Boox ditches GPT for state-censored China LLM pushing propaganda 1 week ago:
That’s pretty much what I do, yeah. On my computer or phone, I split an epub into individual text files for each chapter using pandoc (or similar tools). Then after I read each chapter, I upload it into my summarizer, and perhaps ask some pointed questions.
It’s important to use a tool that stays confined to the context of the provided file. My first test when trying such a tool is to ask it a general-knowledge question that’s not related to the file. The correct answer is something along the lines of “the text does not provide that information”, not an answer pulled out of thin air (whether or not it happens to be correct).
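Concretely, the splitting step looks something like this. It’s a rough sketch rather than my exact script: it assumes pandoc is on your PATH, treats every (X)HTML file inside the archive as a chapter (an epub is just a zip file), and the names book.epub and chapters/ are illustrative.

```python
# Rough sketch: split an epub into per-chapter plain-text files.
# Assumes pandoc is installed; "book.epub" and "chapters/" are
# placeholder names. An epub is a zip archive whose chapters are
# usually stored as individual (X)HTML documents.
import subprocess
import zipfile
from pathlib import Path

epub = Path("book.epub")
outdir = Path("chapters")
outdir.mkdir(exist_ok=True)

with zipfile.ZipFile(epub) as zf:
    # Real reading order lives in the epub's OPF spine; sorting by
    # file name is close enough for a sketch like this.
    pages = sorted(n for n in zf.namelist() if n.endswith((".xhtml", ".html")))
    for i, name in enumerate(pages, start=1):
        out = outdir / f"chapter_{i:03d}.txt"
        # Let pandoc handle the HTML-to-plain-text conversion.
        subprocess.run(
            ["pandoc", "-f", "html", "-t", "plain", "-o", str(out)],
            input=zf.read(name),
            check=True,
        )
```

From there I just feed each chapter_*.txt file to the summarizer one at a time.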
- Comment on Chinese ebook reader Boox ditches GPT for state-censored China LLM pushing propaganda 1 week ago:
I get that, and it’s good to be cautious. You certainly need to be careful with what you take from it. For my use cases, I don’t rely on “reasoning” or “knowledge” in the LLM, because LLMs are very bad at both. But they’re very good at processing grammar and syntax, and they have excellent vocabularies.
Instead of thinking of it as a person, I think of it as the world’s greatest rubber duck.
- Comment on Chinese ebook reader Boox ditches GPT for state-censored China LLM pushing propaganda 1 week ago:
It’s as open as most Android brands. I don’t use any of Boox’s services or apps. I installed F-Droid and use open-source apps from there. I use Librera as my ebook reader, with Syncthing to sync my book library between my desktop, ereader, and phone. It’s possible to set up the Play Store but I don’t bother, personally.
It’s not a 100% smooth experience, but I’m very happy with the F-Droid compatibility. I absolutely refuse to get locked into a walled garden.
- Comment on Chinese ebook reader Boox ditches GPT for state-censored China LLM pushing propaganda 1 week ago:
I’ve done this to give myself something akin to Cliff’s Notes, to review each chapter after I read it. I find it extremely useful, particularly for more difficult reads. Reading philosophy texts that were written a hundred years ago and haphazardly translated 75 years ago can be a challenge.
That said, I have not tried to build this directly into my ereader and I haven’t used Boox’s specific service. But the concept has clear and tested value.
I would be interested to see how it summarizes historical texts about these topics. I don’t need facts (much less opinions) baked into the LLM. Facts should come from the user-provided source material alone. Anything else would severely hamper its usefulness.
- Comment on Kagi is announcing an AI Assistant. 3 months ago:
I posted some of my experience with Kagi’s LLM features a few months ago here: literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion features are fantastic, because they don’t hallucinate. The search integration is as good as anyone else’s, but still nothing to write home about.
The Kagi assistant isn’t new, by the way; I’ve been using it for almost a year now. It’s now out of beta and has an improved UI, but the core functionality seems mostly the same.
As far as actual search goes, I don’t find it especially useful. It’s better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That’s not useless, but it’s certainly not a game-changer. I typically want to check its references anyway, so it doesn’t really save me time in practice.
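To be concrete about what I mean, the pattern is roughly the following. This is a toy sketch, not Kagi’s actual code (I have no visibility into their internals); the results list and the ask_llm function are made-up placeholders for whatever search backend and model API are in play.

```python
# Toy sketch of the "search results -> LLM summary" pattern.
# NOT Kagi's implementation; results and ask_llm are placeholders.
results = [
    {"title": "Example result A", "url": "https://example.com/a",
     "snippet": "First few sentences of the page..."},
    {"title": "Example result B", "url": "https://example.com/b",
     "snippet": "First few sentences of the page..."},
]

def build_prompt(query: str, results: list[dict]) -> str:
    # Number each source so the model can cite them.
    sources = "\n\n".join(
        f"[{i}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    return (
        "Answer the question using ONLY the sources below, citing them "
        "by number. If they do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model API")

# summary = ask_llm(build_prompt("your query here", results))
```

That prompt is also why it doesn’t save time in practice: the summary is only as trustworthy as those few snippets, so I end up opening [1] and [2] myself anyway.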
Kagi’s search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google’s decay in recent years. I subscribed to the “Ultimate” Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you’re interested in playing around with the latest premium models, I still think Kagi’s Ultimate plan is a good deal.
That said, I’ve been disappointed with the development of LLMs this year across the board, and I’m not convinced any of them are worth the money at this point. This isn’t so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don’t quite understand why; I guess they are optimizing for benchmarks that simply don’t align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now it always seems to give me wrong or incomplete answers.
My biggest piece of advice when dealing with any LLM-based tools, including Kagi’s, is: don’t use it for anything you’re not able to validate and correct on your own. It’s just a time-saver, not a substitute for your own skills and knowledge.