Current DLSS intent: we can only render this at, like, 720p at a decent frame rate, so let's do that and use AI anti-aliasing tricks so that when we present it at 4K, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.
DLSS5 intent: using our pile-of-stolen-artwork neural net, which we can now run at 60fps+, let's "reimagine" the entire look of the game as we present it on screen, even if it was already running at 4K just fine.
Nibodhika@lemmy.world 2 weeks ago
Because a pixelated circle being upscaled is still a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. That's especially problematic if the circle was just a crosshair or some other random circle-like thing the AI decided was meant to be a pie.
Yes, both things are the same underlying idea, but that's like saying that because you were okay with a tiny spider in your house killing mosquitoes, you should be okay with a colony of bats, since they're also animals that eat mosquitoes. Same category, but the scale and the amount of intrusion are completely different.
grue@lemmy.world 2 weeks ago
If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. It’s the same algorithm in both cases.
Nibodhika@lemmy.world 2 weeks ago
Yes, that's precisely my point. The difference is in what the algorithm is trained to do: traditional DLSS uses the image rendered at resolution X as the target output and that same image scaled down to X/2 as the input (for example), so it's trained to upscale images. This new thing uses who knows what as either input or output, and clearly produces something that is not an upscaled version of the frame.
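To make that training-pair idea concrete, here's a minimal sketch of how such pairs could be built. This is an illustration of the general super-resolution training setup the comment describes, not NVIDIA's actual DLSS pipeline; the `downscale` helper and the toy 4x4 frame are made up for the example, and the downscaling here is simple average pooling.

```python
import numpy as np

def downscale(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average-pool an (H, W, C) frame by `factor` to fake a low-res render."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Hypothetical 4x4 RGB frame standing in for the native-resolution render.
hi_res = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)

# The training pair: low-res input, native-res target.
# An upscaler trained on many such pairs learns to reconstruct the target
# from the input, rather than to invent new content.
training_pair = (downscale(hi_res), hi_res)
print(training_pair[0].shape, training_pair[1].shape)  # (2, 2, 3) (4, 4, 3)
```

The key point the sketch captures: the output side of every pair is the same image as the input side, just at full resolution, so the network's only job is reconstruction. A generative "reimagining" model has no such constraint tying its output to the input frame.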