It just kinda makes no sense to me. How can you improve the framerate by predicting how the next frame should be rendered while reducing the overhead and not increasing it more than what it already takes to render the scene normally? Like even the simplistic concept of it sounds like pure magic. And yet… It’s real.
The ELI5 would be: frame generation skips most of the graphical calculations for geometry, lighting, and so on, and instead bases the generated frame on the pixel data from the real frames around it. The full calculations still have to be done for every real frame.
graynk@discuss.tchncs.de 10 hours ago
It does so at the cost of latency. It does not actually predict the next frame; it renders two full frames and then interpolates one frame between them. So it looks smoother, but your input also takes that much longer to be displayed on the screen.
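To make the "interpolates one frame between them" part concrete, here is a hypothetical sketch of the most naive possible version: blending the pixel values of two real frames. Real frame generation is far more sophisticated, but the latency consequence is the same, since the later real frame must already exist before the in-between frame can be computed and shown.

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two frames pixel-by-pixel; t=0.5 gives the midpoint frame.

    Frames are 2D lists of brightness values. Note that frame_b (the
    *later* real frame) is a required input: that is where the added
    latency comes from.
    """
    return [
        [a * (1 - t) + b * t for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

frame_a = [[0, 0], [100, 100]]    # real frame N
frame_b = [[100, 100], [0, 0]]    # real frame N+1

mid = interpolate_frame(frame_a, frame_b)
print(mid)  # [[50.0, 50.0], [50.0, 50.0]]
```

Pure blending like this produces ghosting on anything that moves, which is one reason actual implementations use motion vectors instead.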
fulg@lemmy.world 9 hours ago
That is not strictly true; the added latency is closer to half a frame time. The input is not just the frame image but also the motion vectors (which direction each pixel moved) for the current frame. Frame gen also knows a lot about the image, like which bits have transparent pixels (which move in multiple directions at once), and when the game is done with the frame but still has to wait for the GPU (time that can be used for more work with little impact).
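A hypothetical sketch of how motion vectors help: instead of blending two frames, you can move each pixel of one real frame half-way along its motion vector to synthesize the in-between image. This is a toy, hole-filling and disocclusion are glossed over, and real frame gen combines this with data from both frames, but it shows why the motion vectors are part of the input.

```python
def warp_half_step(frame, motion_vectors):
    """Move each pixel half-way along its motion vector to approximate
    the frame that sits between this real frame and the next one.

    frame: 2D list of brightness values.
    motion_vectors: 2D list of (dx, dy) per-pixel motion, in pixels
    per real frame, as the game engine would report them.
    """
    h, w = len(frame), len(frame[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion_vectors[y][x]
            nx, ny = x + dx // 2, y + dy // 2  # half the full-frame motion
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    # Holes (pixels nothing moved into) need in-painting in a real
    # implementation; fill them with 0 here to keep the sketch simple.
    return [[p if p is not None else 0 for p in row] for row in out]

# A 1x4 strip scrolling right by 2 pixels per real frame:
frame = [[9, 5, 0, 0]]
mv = [[(2, 0)] * 4]
print(warp_half_step(frame, mv))  # [[0, 9, 5, 0]] — shifted right by 1
```

Because the warp only needs the current frame plus its motion vectors, the synthesized frame can be displayed sooner than a pure two-frame interpolation would allow.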
Frame gen is much more sophisticated than the old "motion smoothing" of televisions, the so-called "soap opera" mode, which increased latency much more and had no knowledge of how the source image was built, so it had to reconstruct all the motion from the pixels alone.
Stuff like DLSS5 is supposed to use the same inputs (source images and motion vectors); now that is magic to me.
graynk@discuss.tchncs.de 9 hours ago
Yeah, I simplified it to keep it at ELI5 level, but you’re right
dustyData@lemmy.world 8 hours ago
Isn’t that the thing NVIDIA was found to be lying about?