I wonder what that indicates about its data set and the general use of image gen
I think you know.
On a more serious note, it’s interesting to put in pure nonsense as the prompt (just strings of syllables with no meaning) and see what it comes up with. It likes misshapen heads, which makes sense because it’s trained on a lot of human features, but for some reason it also likes houses, fish, and hot air balloons quite a lot. In my opinion, the images are a lot more interesting than much of what it produces when you give it actual words.
Even_Adder@lemmy.dbzer0.com 2 days ago
The model is probably just a bit overfit. More importantly, the image description is not the generation parameters. I wrote it myself. There weren’t any generation parameters for this image.
benignintervention@lemmy.world 2 days ago
Gotcha, I misinterpreted.