Comment on The Media's Pivot to AI Is Not Real and Not Going to Work
p03locke@lemmy.dbzer0.com 16 hours ago
It’s a shame, because ComfyUI can be so much more than just image generation. And just because there’s a lot of string processing for LLMs doesn’t mean it isn’t important to capture in an I/O interface, especially when it comes to preserving chat history. Save data, load data, ask new questions, etc.
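A minimal sketch of that save/load loop, assuming a simple JSON-on-disk format (the filename and message schema here are illustrative, not any particular tool’s convention):

```python
import json
from pathlib import Path

HISTORY = Path("chat_history.json")  # illustrative path, not a real tool's format

def load_history() -> list[dict]:
    """Load prior messages so a new question continues the old conversation."""
    if HISTORY.exists():
        return json.loads(HISTORY.read_text())
    return []

def save_history(messages: list[dict]) -> None:
    """Persist the full message list after each exchange."""
    HISTORY.write_text(json.dumps(messages, indent=2))

messages = load_history()
messages.append({"role": "user", "content": "Ask a new question here"})
# ...send messages to the model, append the reply...
save_history(messages)
```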
ChatGPT is pretty damn powerful, I’ll admit. But all of its components need to be runnable locally, especially since something like a Mixture of Experts model could be split down to its base models and loaded/unloaded as necessary.
brucethemoose@lemmy.world 15 hours ago
It doesn’t work that way. All MoE experts are ‘interleaved’ and you need all of them loaded at once, for every token. Some API servers can hot-swap whole models, but it’s not fast, and it’s rarely done since LLMs are pretty ‘generalized’ and API servers tend to serve requests in parallel.
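A rough sketch of why, assuming a standard top-k routed MoE layer (names and shapes are illustrative, not any specific model’s code): the router can send any token to any expert, and the choice changes token by token, so every expert’s weights must stay resident.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Toy top-k routed MoE layer (illustrative only)."""
    def __init__(self, dim: int = 64, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        # Every expert must be in memory: the router may pick any of them
        # for any token, and the pick changes from token to token.
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # per-token expert choice
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.shape[0]):                      # naive loop for clarity
            for k in range(self.top_k):
                out[t] += weights[t, k] * self.experts[int(idx[t, k])](x[t])
        return out

layer = MoELayer()
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # each token may have hit different experts
```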
The closest to what you’re thinking of is LoRAX (which basically hot-swaps LoRAs efficiently). But it needs an extremely specialized runtime derived from its associated paper, and people tend not to use it since that runtime doesn’t support quantization and some other features: github.com/predibase/lorax
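For context on why adapter swapping can be cheap where whole-model swapping isn’t: a LoRA is just a small low-rank delta on top of frozen base weights, so switching adapters moves megabytes rather than the whole model. A back-of-envelope sketch of the general LoRA idea (sizes illustrative; this is not LoRAX’s implementation):

```python
import numpy as np

d, r = 4096, 16                    # hidden size and LoRA rank (illustrative)
W = np.random.randn(d, d)          # frozen base weight matrix
A = np.random.randn(r, d) * 0.01   # LoRA down-projection
B = np.zeros((d, r))               # LoRA up-projection

# Effective weight with the adapter applied: W' = W + B @ A
W_adapted = W + B @ A

base_params = W.size               # ~16.8M params for one matrix
lora_params = A.size + B.size      # ~131K params: what actually gets swapped
print(f"adapter is {lora_params / base_params:.2%} of the base matrix")
```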
There is a good case for pure data processing, yeah… But it has little integration with LLMs themselves, especially since the API servers generally handle tokenization and prompt formatting.
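A minimal sketch of what “the server handles prompt formatting” means in practice, assuming a local OpenAI-compatible endpoint (the URL and model name are placeholders; e.g. a llama.cpp or vLLM server): the client only sends structured messages, and the server applies the model’s chat template and tokenizer before inference.

```python
import json
import urllib.request

# Placeholder endpoint/model; any OpenAI-compatible local server works the same way.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize MoE routing in one sentence."},
    ],
}
# The client never builds the raw prompt string: the server turns these
# messages into the model's chat template and tokenizes them itself.
req = urllib.request.Request(
    URL, data=json.dumps(payload).encode(), headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```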
They already are! Local LLM tooling and engines are great and super powerful compared to ChatGPT (which offers no caching, no raw completion, primitive sampling, hidden thinking, and so on).