Comment on "How does generative AI create convincing lighting in images?"

lets_get_off_lemmy@reddthat.com 1 week ago

I’m an AI researcher, and yes, that’s basically right. There is no special “lighting mechanism” portion of the network designed in before training. After seeing enough images with correct lighting (whether in text-to-image transformer models or GANs), the model learns what correct lighting should look like. It’s all about the distribution of the training data. A simple example is this-person-does-not-exist.com: all of the training images are high-resolution, close-up, well-lit headshots. If all the training data instead had unrealistic lighting, you would get unrealistic lighting out. If it’s something like 50/50, you’ll get everything along the spectrum between good and bad lighting at the output.
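To make that concrete, here is a minimal toy sketch in PyTorch of a DCGAN-style generator and discriminator (my own illustration, not the actual StyleGAN code behind that site). The thing to notice is that both networks are nothing but generic convolution stacks; there is no lighting-specific component anywhere, so any lighting realism in the samples has to come from matching the training data distribution.

```python
# Minimal DCGAN-style sketch (toy example, assumed architecture -- not the
# real StyleGAN behind this-person-does-not-exist.com). No "lighting module"
# exists anywhere; lighting realism can only emerge from the training data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, channels=3, base=64):
        super().__init__()
        # Generic upsampling stack: latent vector -> 64x64 RGB image.
        # Every layer is an ordinary transposed convolution; nothing here
        # "knows about" light sources, shadows, or shading.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0), nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, channels=3, base=64):
        super().__init__()
        # Generic downsampling stack: image -> single "real vs. fake" score.
        # If the training set is well-lit headshots, implausible lighting is
        # simply one more cue this network can use to call a sample fake.
        self.net = nn.Sequential(
            nn.Conv2d(channels, base, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base * 4, base * 8, 4, 2, 1), nn.BatchNorm2d(base * 8), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base * 8, 1, 4, 1, 0),
        )

    def forward(self, x):
        return self.net(x).view(-1)

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    z = torch.randn(4, 100, 1, 1)      # random latent codes
    fake = G(z)                        # 4 generated 3x64x64 images
    print(fake.shape, D(fake).shape)   # torch.Size([4, 3, 64, 64]) torch.Size([4])
```

During adversarial training on well-lit headshots, badly lit outputs are just one more thing the discriminator flags as fake, which pushes the generator toward the lighting statistics of the dataset.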

That’s not to say that the overall training scheme, especially for something like GPT-4, doesn’t include secondary training stages for more complex tasks. But lighting in images is a simple thing to get right given enough training images.

As an aside, I called that website a simple example, but I remember when it came out less than six years ago and it was revolutionary, so it’s crazy how fast the space has moved forward in such a short time.
