Image generation models are generally more than capable of doing that; they’re just not trained to do it.
That is, with just a bit of hand-holding, showing SDXL appropriately tagged images gets you quite sensible results. Under normal circumstances it simply never gets to associate any input tokens with the text in the pixels, because people rarely, if ever, describe verbatim what’s written in an image. “Hooters” is an exception; it’s hard to find a model on Civitai that can’t spell it.
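Roughly what that hand-holding looks like, as a minimal sketch (my own illustration, not the commenter’s actual setup): render words onto images and write captions that quote the rendered text verbatim, producing the kind of “appropriately tagged” image/caption pairs you could then feed to an SDXL LoRA or fine-tuning script. The word list, caption template, and `metadata.jsonl` layout are assumptions on my part (the JSONL layout just happens to match what Hugging Face’s imagefolder loader expects).

```python
# Sketch: build a tiny "spelling" dataset where each caption states,
# verbatim, what is written in the image -- exactly the association
# the base model never gets to see during normal training.
import json
import random
from pathlib import Path

from PIL import Image, ImageDraw, ImageFont

WORDS = ["OPEN", "SALE", "EXIT", "HOOTERS", "COFFEE"]  # hypothetical word list
OUT = Path("spelling_dataset")
OUT.mkdir(exist_ok=True)

rows = []
for i in range(100):
    word = random.choice(WORDS)

    # Render the word on a plain background.
    img = Image.new("RGB", (512, 512), "white")
    draw = ImageDraw.Draw(img)
    draw.text((60, 230), word, fill="black", font=ImageFont.load_default())

    fname = f"{i:04d}.png"
    img.save(OUT / fname)

    # Caption quotes the text in the pixels word for word.
    rows.append({"file_name": fname,
                 "text": f'a sign with the text "{word}" written on it'})

with (OUT / "metadata.jsonl").open("w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```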
zalgotext@sh.itjust.works 7 months ago
Aggravationstation@feddit.uk 7 months ago
Blovw it hard!