Comment on Stable Diffusion 3 Medium Fine-tuning Tutorial — Stability AI

Even_Adder@lemmy.dbzer0.com 2 months ago
I don’t think so. They’re going to have to do a lot better than a tutorial to win people back.

clb92@feddit.dk 2 months ago
People have been training great Flux LoRAs for a while now, haven’t they?

Even_Adder@lemmy.dbzer0.com 2 months ago
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.
clb92@feddit.dk 2 months ago
Oh well. In practice I’ll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊
erenkoylu@lemmy.ml 2 months ago
Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous.
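For anyone wondering why LoRA is gentler on the base model: the pretrained weights stay frozen and only small low-rank adapter matrices are trained, so the original knowledge can't be overwritten and the adapter can always be dropped. A minimal NumPy sketch (all dimensions and names are illustrative, not from any real training code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained weight matrix of one layer; in LoRA it is frozen.
d, k, r = 8, 8, 2              # layer dims and LoRA rank (made up for illustration)
W = rng.normal(size=(d, k))
W_frozen = W.copy()

# Trainable low-rank adapters. B starts at zero, so the adapted
# layer initially computes exactly the same thing as the base layer.
A = rng.normal(size=(r, k)) * 0.01
B = np.zeros((d, r))

def adapted_forward(x):
    # Effective weight is W + B @ A; gradients only flow to A and B.
    return (W + B @ A) @ x

# A "training step" only updates the adapter matrices...
B += rng.normal(size=B.shape) * 0.1

# ...so the pretrained weights are untouched: no catastrophic
# forgetting of W itself, and removing the adapter restores the
# original model exactly.
assert np.allclose(W, W_frozen)
```

Full fine-tuning, by contrast, updates `W` directly, which is where the forgetting risk comes from.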
istanbullu@lemmy.ml 2 months ago
Kohya now supports Flux fine-tuning. I have seen nice examples on Civitai.
Even_Adder@lemmy.dbzer0.com 2 months ago
Those might just be LoRA merged models, not full fine-tuning. From what I heard, fine-tuning doesn’t work because the models are distilled. You’d have to find a way to undistill them to train them.