Comment on *Doesn't look like anything to me.*
brucethemoose@lemmy.world 1 week ago
It’s a plausible trap. Depending on the architecture, the image decoder (the part that “sees”) is bolted onto the main model as a fairly discrete component, and the image generator can be a totally different model. So internally, if the chat model never ingests the “response” image, it may have no clue the two images are the same.
Of course, we have no idea, because OpenAI is super closed :/
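We don’t know what OpenAI actually does, but here’s a minimal sketch of the kind of split described above (all class and function names are hypothetical, purely to illustrate the data flow): the chat model only ever sees text plus embeddings from the vision encoder, while the image generator is a separate stage whose output goes straight back to the user and is never encoded back into the chat model’s context.

```python
# Hypothetical pipeline sketch: the chat model "sees" only what the vision
# encoder feeds it; the generated image never re-enters that path.

from dataclasses import dataclass


@dataclass
class GeneratedImage:
    prompt: str  # the text prompt the generator was given


class VisionEncoder:
    """Stands in for the encoder bolted onto the chat model (the part that 'sees')."""

    def encode(self, image_bytes: bytes) -> list[float]:
        # A real encoder would produce learned embeddings; a dummy vector is enough here.
        return [float(b) for b in image_bytes[:4]]


class ChatModel:
    """Stands in for the main language model."""

    def respond(self, text: str, image_embeddings: list[list[float]]) -> str:
        # The model's context is just text plus whatever embeddings it was handed.
        return f"[prompt for image generator based on: {text!r}]"


class ImageGenerator:
    """Stands in for a totally separate image-generation model."""

    def generate(self, prompt: str) -> GeneratedImage:
        return GeneratedImage(prompt=prompt)


def handle_turn(user_text: str, user_image: bytes) -> GeneratedImage:
    encoder, chat, generator = VisionEncoder(), ChatModel(), ImageGenerator()

    # The uploaded image goes through the encoder into the chat model...
    embeddings = [encoder.encode(user_image)]
    gen_prompt = chat.respond(user_text, embeddings)

    # ...but the generated image is returned straight to the user. It is never
    # encoded and fed back into the chat model's context, so the model has no
    # internal basis for noticing that the two images match.
    return generator.generate(gen_prompt)


if __name__ == "__main__":
    result = handle_turn("draw this again", b"\x89PNG....")
    print(result.prompt)
```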
Scrollone@feddit.it 1 week ago
“Open” AI. Yeah, open my ass.