I’ve seen a lot of people lately saying that upscaling (FSR, DLSS, etc.) is a bad thing, including some calling it ‘fake frames’, which is probably because they’re confusing it with frame generation.
What upscaling does is take an input (a frame rendered at 1080p, for example) and reconstruct a higher-resolution version of it (bringing that 1080p frame up to 1440p). This does make things a little fuzzy, but it also frees up resources so stuff like improved lighting can be rendered, which is what makes games like Cyberpunk playable at a decent framerate without a $5,000 GPU.
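To put some numbers on why rendering at 1080p and upscaling to 1440p frees up resources, here’s a quick back-of-the-envelope sketch (pixel counts only; real upscalers also use motion vectors and temporal data, so this is just the rough idea):

```python
# Rendering internally at 1080p and upscaling to 1440p means the GPU
# shades far fewer pixels per frame than rendering native 1440p.
# Illustrative arithmetic only.

def pixels(width, height):
    return width * height

native_1440p = pixels(2560, 1440)    # 3,686,400 pixels
internal_1080p = pixels(1920, 1080)  # 2,073,600 pixels

savings = 1 - internal_1080p / native_1440p
print(f"Pixels shaded per frame: {internal_1080p:,} vs {native_1440p:,}")
print(f"Shading work saved: {savings:.0%}")  # ~44% fewer pixels shaded
```

That saved ~44% of shading work is the budget the game can spend on better lighting or a higher framerate instead.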
Frame generation is different. It takes an input as well (the same 1080p frame, for example), but it doesn’t improve that frame. It generates new ones based on it, sometimes several. These actually are ‘fake frames’, and this is what the people calling upscaling ‘fake frames’ were really thinking of.
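A toy sketch of the “fake frame” idea: interpolation-based frame gen synthesizes an in-between frame from two real rendered frames. (Real implementations like DLSS FG use motion vectors and optical flow, not a plain blend; this is just to show that the generated frame was never rendered by the game.)

```python
# Toy interpolation: the generated frame is synthesized from two real
# frames, not rendered by the game engine.

def blend_frames(frame_a, frame_b, t=0.5):
    """Linearly interpolate two frames (flat lists of pixel values)."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

real_frame_1 = [0, 100, 200]   # pixels of one rendered frame
real_frame_2 = [50, 150, 250]  # pixels of the next rendered frame

fake_frame = blend_frames(real_frame_1, real_frame_2)
print(fake_frame)  # [25.0, 125.0, 225.0] -- displayed between the real two
```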
I won’t lie, upscaling is definitely a crutch, and the goal should be to render that cool stuff at native resolution. However, the hardware that can do that is too expensive to be worth buying unless you have money to throw away, which most people don’t. It’s up to you whether a little fuzziness in the graphics is worth it, but the fact is it gives you the leeway to choose between higher framerate and prettier lighting. Without it, most people are stuck setting their graphics to ‘no’, because they can’t afford the kind of processing power it takes to make things look good at native resolution.
Part of why I’m making this post is that I wanted to see what other people think of this take, and more importantly get feedback so I can improve it later. I’m currently running a laptop with a 1650, and I’ve had it for years. I’m used to balancing frames and quality and making compromises, and upscaling tends to be one of the compromises that’s worth making.
I’m fine with the concept of upscaling tech. DLSS 4 with the transformer model looks excellent, and FSR 4 is looking pretty damn decent as well. The earlier attempts weren’t as good. Ideally it would act more like DLAA, but 4K is about 8.3 million pixels, which is a lot to render. And if 8K is ever going to be a thing, it makes even more sense there.
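Where that 8.3 million figure comes from, and why 8K makes upscaling even more attractive, is simple arithmetic: pixel counts grow quadratically with resolution.

```python
# Pixel counts per frame at common resolutions.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} million pixels")
# 4K is ~8.3 million pixels; 8K is ~33.2 million, i.e. 4x the work of 4K.
```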
I think too many people focus on the now and can’t imagine what things will be in the future as they progress.
Now frame generation, that one I feel less optimistic about, especially when I see people using it at 60fps or less. It should really only be used at 80fps or higher, where the added lag is less of a problem. But one day inferred (extrapolated) frames, where the tech only looks at prior frames and doesn’t wait for the next one, might make it a better experience.
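A rough sense of why the lag matters more at low base framerates: interpolation has to hold back the newest real frame until the in-between frame has been shown, which adds roughly one real-frame interval of latency. (Simplified model; real pipelines add other costs on top.)

```python
# Approximate extra latency from holding back one real frame, as a
# function of the base (pre-frame-gen) framerate.

def added_latency_ms(base_fps):
    """One real-frame interval, in milliseconds."""
    return 1000 / base_fps

for fps in (30, 60, 80, 120):
    print(f"{fps} fps base -> ~{added_latency_ms(fps):.1f} ms extra latency")
# 30 fps base adds ~33 ms; at 80+ fps the penalty shrinks to ~12.5 ms or less.
```

This is why enabling frame gen from a 30–60fps base feels noticeably worse than enabling it from 80fps or higher.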
Lastly, it’s Nvidia and AMD’s marketing departments’ fault that these got conflated. DLFG and FFG are what the frame gen tools should have been called, rather than shoehorning them under the super sampling (DLSS) and super resolution (FSR) branding.