Comment on The Media's Pivot to AI Is Not Real and Not Going to Work
p03locke@lemmy.dbzer0.com 1 day ago
Have you used any good ComfyUI workflows specifically for chat LLMs?
brucethemoose@lemmy.world 1 day ago
Not specifically. Ultimately, ComfyUI would build prompts/API calls, which I tend to do in Python scripts.
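For reference, that kind of script is usually only a few lines against an OpenAI-compatible endpoint. A minimal sketch, assuming a local server (llama.cpp, SGLang, vLLM, etc.) is already running; the URL and model name here are placeholders, not anything specific:

```python
import requests

# Assumed: a local OpenAI-compatible server; URL and model name are placeholders.
API_URL = "http://localhost:8000/v1/chat/completions"

def ask(prompt: str, system: str = "You are a helpful assistant.") -> str:
    """Build the prompt as a message list and send one chat completion request."""
    resp = requests.post(API_URL, json={
        "model": "local-model",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize why ComfyUI is popular for diffusion workflows."))
```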
I tend to use Mikupad or Open Web UI for more general testing.
There are some neat tools with ‘lower level’ integration into LLM engines, like SGLang (which leverages caching and constrained decoding), that can do things one can’t do over standard APIs: docs.sglang.ai/frontend/frontend.html
p03locke@lemmy.dbzer0.com 7 hours ago
ComfyUI is just a bunch of Python code tied into I/O nodes. I’m surprised there isn’t a good set of nodes for SGLang yet.
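For what it’s worth, wiring one up doesn’t look hard. A rough sketch of what such a node might look like, following ComfyUI’s usual INPUT_TYPES/NODE_CLASS_MAPPINGS convention; the class name is made up, and it assumes SGLang’s OpenAI-compatible server on its default port:

```python
import requests

class SGLangChatNode:
    """Hypothetical ComfyUI node that sends a prompt to a local SGLang server."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "prompt": ("STRING", {"multiline": True}),
            "endpoint": ("STRING", {"default": "http://localhost:30000/v1/chat/completions"}),
        }}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"
    CATEGORY = "llm"

    def generate(self, prompt, endpoint):
        resp = requests.post(endpoint, json={
            "model": "default",
            "messages": [{"role": "user", "content": prompt}],
        })
        resp.raise_for_status()
        # ComfyUI nodes return a tuple matching RETURN_TYPES.
        return (resp.json()["choices"][0]["message"]["content"],)

# ComfyUI discovers nodes through this mapping in a custom_nodes package.
NODE_CLASS_MAPPINGS = {"SGLangChatNode": SGLangChatNode}
```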
brucethemoose@lemmy.world 6 hours ago
SGLang is partly a scripting language for prompt building that leverages its caching and logprobs output to do things like filling in fields or branching between choices, so that kind of work is probably best done in SGLang itself. It also requires pretty beefy hardware for a given model size (as opposed to backends like exllama or llama.cpp, which focus more on tight quantization and unbatched performance), so I suppose there’s not a lot of interest from local tinkerers?
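For the curious, the fill-in/branching style looks roughly like this in SGLang’s frontend language (a sketch based on the docs linked above; the endpoint and the example fields are assumptions):

```python
import sglang as sgl

@sgl.function
def character_sheet(s, name):
    # Constrained decoding: the model must pick one of the listed choices.
    s += "Character: " + name + "\n"
    s += "Class: " + sgl.gen("cls", choices=["warrior", "mage", "rogue"]) + "\n"
    # Fill in a free-form field; the shared prefix is served from cache.
    s += "Backstory: " + sgl.gen("backstory", max_tokens=64)

# Assumed: an SGLang server already running on this port.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = character_sheet.run(name="Miku")
print(state["cls"], "/", state["backstory"])
```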
It would be cool, I guess, but ComfyUI does feel more geared toward diffusion. Image/video generation is more multi-model and benefits from dynamically loading/unloading/swapping all sorts of little submodels, LoRAs, and masks, applying them and piping them into each other.
Running LLMs is more monolithic: you have one big model, maybe a text-embeddings model as part of the same server, and everything else is just string processing to build the prompts, which one does linearly in Python or whatever. Stuff like CFG and LoRAs does exist for LLMs, but isn’t used much.
p03locke@lemmy.dbzer0.com 3 hours ago
It’s a shame, because ComfyUI can be so much more than just image generation. And just because LLM work is mostly string processing doesn’t mean it isn’t worth capturing in an I/O interface, especially when it comes to preserving chat history: save data, load data, ask new questions, etc.
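Even without ComfyUI, that save/load part is pretty light. A toy sketch of persisting a message list between sessions; the file name is arbitrary, and sending the messages to a backend is left as a stub:

```python
import json
from pathlib import Path

HISTORY_FILE = Path("chat_history.json")  # arbitrary location

def load_history() -> list[dict]:
    """Load prior messages, or start fresh if nothing is saved yet."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return [{"role": "system", "content": "You are a helpful assistant."}]

def save_history(messages: list[dict]) -> None:
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))

messages = load_history()
messages.append({"role": "user", "content": "Pick up where we left off."})
# ...send `messages` to the backend, append the assistant reply...
save_history(messages)
```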
ChatGPT is pretty damn powerful, I’ll admit. But all of its components need local equivalents, especially since something like a Mixture of Experts model could be split down to its base models and loaded/unloaded as necessary.