|
Post by Sarge on Aug 27, 2020 21:32:56 GMT -5
Actually, it is different from traditional super-sampling. The idea is to use AI to "learn" about gaming scenes and how they're typically constructed, and to use said information to help reconstruct what the image would look like at a higher resolution. Think of it as an interpolation based on prior knowledge from various scenes, objects, and textures. It's using really old techniques (neural networks) to get there - we just have a ton more computing power to throw at stuff now.
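If it helps, here's the rough shape of that idea in code. This is a toy PyTorch sketch with made-up layer sizes, not NVIDIA's actual network: a small CNN whose "prior knowledge" would live in its trained weights, mapping a 1080p frame to a 4K-sized one.

```python
# Toy "learned upscaler": the prior knowledge described above would live in
# the trained weights. (Hypothetical architecture; randomly initialized here.)
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # PixelShuffle rearranges channels into a (scale x scale) block
            # of output pixels, doing the actual resolution increase.
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frame):
        return self.net(frame)

model = ToyUpscaler(scale=2)                # weights would come from training
frame_1080p = torch.rand(1, 3, 1080, 1920)  # stand-in for a rendered frame
frame_4k = model(frame_1080p)
print(frame_4k.shape)                       # torch.Size([1, 3, 2160, 3840])
```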
Now, to be clear, it isn't a full replacement for true 4K, but it does a surprisingly good job of getting close to that image quality.
|
|
|
Post by Ex on Aug 27, 2020 21:48:28 GMT -5
Interesting. I'll have to read up on this technology later, when it's not late at night and I'm not super tired. I still suspect the end result uses old visual tricks to fool the eye, working in tandem with the AI stuff that drives the render. This'll take some deep-dive reading to fully understand and report back on.
|
|
|
Post by Sarge on Aug 27, 2020 21:56:58 GMT -5
I'm not claiming I fully understand it all myself; I just have a high-level overview of what they're doing. It makes sense to me, especially since it often looks like the implementation is per-game.
|
|
|
Post by EasyHard on Aug 28, 2020 0:03:44 GMT -5
DLSS, and AI-based upscaling in general, is different, more complicated, and less logical than normal techniques when viewed from a mathematical or signal-processing perspective. (Being more complicated doesn't mean it is better, mind you.) It is based on having a large dataset of high-resolution images, so the AI can learn from this reference library what downscaled images look like compared to their originals. When applied to a new image, it tries to upscale it based on the patterns it's seen before. If one were to take the trained neural network and write an explicit function for how it upscales a local region of pixels, it would be a messy ad-hoc equation. It might upscale a local region differently if the colors were rotated (R->G->B), or if there are small edge-like features, or based on a million other dumb variables. That is, it isn't required to treat situations in a uniform way like a "logical" mathematical interpolation.
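To make that concrete, here's a toy sketch of the train-on-downscaled-pairs idea in PyTorch, with a random stand-in dataset, so purely illustrative and nothing like NVIDIA's actual pipeline. Note the contrast with the fixed bilinear formula used to build the pairs: the trained network's behavior depends on whatever messy patterns ended up in its weights.

```python
# Toy version of "learn to upscale from (downscaled, original) pairs".
# Illustrative only; not NVIDIA's pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny upscaling network: conv layers plus PixelShuffle, which folds
# channels into a 2x2 block of output pixels (a standard super-res trick).
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * 2 * 2, kernel_size=3, padding=1),
    nn.PixelShuffle(2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

hires_images = torch.rand(8, 3, 256, 256)  # stand-in for a real dataset
for hires in hires_images.split(1):
    # Build the training pair: the downscaled image is the input, the
    # original high-res image is the target the network must reconstruct.
    lores = F.interpolate(hires, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    loss = F.l1_loss(model(lores), hires)  # penalize per-pixel mismatch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```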
|
|
|
Post by anayo on Aug 28, 2020 3:49:58 GMT -5
"while a neural network adds detail to create the impression that the graphics are 4K"
I'm sorry man, but this sounds like sales pitch jargon to me. You can't "beautify" 1080p into legitimate 4K. The base image detail in a 1920×1080 image is simply not equivalent to the base image detail in a 3840×2160 image. So if anything, this algorithm is replicating and interpolating the lower-resolution subpixels in a manner that fools the human eye into perceiving more detail than is actually present. This is probably done using "downsampling", or as it's technically known, Ordered Grid Super-Sampling Anti-Aliasing (OGSSAA), a technique that's been around for at least twenty years. Using the downsampling technique in tandem with increased anisotropic filtering could produce an image that appears sharper than its native resolution. I'd wager these old techniques are being used, rather than some sort of spooky intelligent AI that procedurally generates magical 4K. (A rough sketch of that downsampling idea follows the links below.)
Fan-Made Doom Mod Upscales Lo-Res Textures to High Res Using Neural Networks - Kotaku, December 2018
Mod Creates "HD" Final Fantasy VII Using AI Neural Networks - Kotaku, January 2019
Neural Network Convincingly Interpolates 24 FPS Video to 60 FPS - Boing Boing, February 2020
Digital Foundry Reviews RTX Features in Wolfenstein Youngblood for PC, Namely Raytracing, DLSS - YouTube, January 2020
Digital Foundry Upscales 540p Gameplay to 1080p in Control for PC Using DLSS 2.0 - YouTube, January 2020
Linus Tech Tips Reviews 1080p to 4K AI Supersampling for Video Streaming on Latest Model of Nvidia Shield TV - YouTube, May 2020
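For what it's worth, the ordered-grid downsampling I'm describing is simple enough to sketch in a few lines of numpy (toy example with a random stand-in frame; real renderers do this on the GPU with fancier filters):

```python
# Minimal sketch of ordered-grid supersampling: render at 2x the target
# resolution, then average each aligned 2x2 block down to one output pixel.
import numpy as np

def ogssaa_downsample(frame, factor=2):
    """Box-filter a (H, W, 3) frame rendered at `factor`x resolution."""
    h, w, c = frame.shape
    blocks = frame.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))   # average the ordered sample grid

internal = np.random.rand(2160, 3840, 3)   # "rendered" internally at 4K
output = ogssaa_downsample(internal)       # displayed at 1080p, smoothed
print(output.shape)                        # (1080, 1920, 3)
```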
|
|
|
Post by anayo on Aug 28, 2020 4:03:09 GMT -5
While digging for those links, I realized I can't take credit for predicting that the 2021 Nintendo Switch will use AI supersampling, because Digital Foundry already made a video predicting that back in February. Maybe that video planted the idea in my head and I forgot about it, erroneously thinking its prediction was mine.
|
|
|
Post by Ex on Aug 28, 2020 9:52:00 GMT -5
"Being more complicated doesn't mean it is better, mind you"
Good point. The way you said that makes me think this pretend-4K optimization is done on a per-game basis, pre-baked, rather than by an adaptive general algorithm working in real time on a chipset. I do find this concept very interesting. I will do a lot more research on it soon, to understand exactly what is actually going on here.
|
|
|
Post by Ex on Aug 28, 2020 12:47:33 GMT -5
"I will do a lot more research on it soon, to understand exactly what is actually going on here."
Okay guys, I read a bunch, and now I understand how this stuff works.

First off, I was exactly right in my initial assumption that NVIDIA is using downsampling to achieve their DLSS effect. The fact that DLSS stands for Deep Learning Super Sampling should tell anybody that. (Supersampling is just another name for downsampling when we're talking about video rendering.) Traditionally, supersampling is achieved by rendering the internal image at a higher resolution than the one being displayed, then downsampling that internal image to the desired output resolution, using the extra samples for subpixel-accurate color. PC gamers have used this process for years, usually labeled SSAA (or driver-level downsampling like DSR) in graphics option menus. Temporal anti-aliasing (TAA), a related technique that accumulates samples across frames instead of rendering them all at once, is traditionally computed on the fly, without heuristics to drive its signal processing. Because it ran without heuristics, the output render could show artifacting or occasional lighting irregularities. It wasn't perfect, but the processing method has existed for over twenty years, as was mentioned earlier.
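Side note: the temporal accumulation at the heart of TAA is easy to caricature in code. This numpy toy is hand-wavy (real TAA also jitters the camera each frame, reprojects the history buffer using motion vectors, and clamps it against the current frame's neighborhood), but it shows the basic blend that both smooths edges over time and causes the artifacts mentioned above:

```python
# Bare-bones sketch of TAA-style temporal accumulation (illustrative only).
import numpy as np

def taa_accumulate(history, current, alpha=0.1):
    # Exponential moving average: mostly keep the accumulated history and
    # blend in a little of the newly rendered frame. Edges get effectively
    # supersampled over time; fast motion smears the history into ghosting.
    return (1.0 - alpha) * history + alpha * current

history = np.zeros((1080, 1920, 3))
for _ in range(16):                          # simulate 16 frames
    current = np.random.rand(1080, 1920, 3)  # stand-in for a jittered render
    history = taa_accumulate(history, current)
```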
NVIDIA originally explained that DLSS 1.0 worked per target game: they generated "perfect frames" using traditional supersampling, then trained their machine learning ("neural network") algorithm on those resulting images. In a second step, the trained model was applied in real time, filtering the aliased input frame toward the "perfect frame" result it had learned. This created an efficient heuristic filtering process.

The heuristics aspect of DLSS is where NVIDIA's technology shines. Because of NVIDIA's machine learning implementation (the Tensor Cores plus a convolutional autoencoder algorithm) working in tandem with reference heuristics data, a superior TAA method is possible via DLSS. DLSS can take a 1080p render and upscale it to 4K without significant aliasing, getting a result that looks close to a native 4K render. However, DLSS 2.0 has a tendency to over-sharpen the 4K image, producing a subtle ringing around hard edges, especially text. That's a smoking gun of aggressive anisotropic filtering, an image processing method that was also guessed at earlier in the thread.

The original DLSS required training the AI network for each new game. DLSS 2.0 trains on non-game-specific content, delivering a generalized network that works across games. This means faster game integrations and, ultimately, more DLSS-compatible games. However, by not using game-specific heuristic data, the DLSS result won't be as well realized as it would be with a per-game model. So I assume NVIDIA will make the option available to developers, to use either a specifically trained heuristic model or the universal one, all depending on how much money a developer wants to spend.

So to recap: I assumed NVIDIA was using downsampling and anisotropic filtering to make a 1080p image look nice when upscaled to 4K resolution. I was correct in that assumption. What I didn't realize previously is that NVIDIA is using predefined heuristic models (built from machine learning) in tandem with a proprietary algorithm (a convolutional autoencoder) executed in real time (by the Tensor Cores) to guide the downsampling/anisotropic subpixel precision for optimal results. This is an interesting mix of old graphical techniques driven by new AI precision.
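Since the "convolutional autoencoder" term keeps coming up, here's what that shape looks like as a toy PyTorch sketch. To be clear, this is hypothetical: NVIDIA hasn't published DLSS's exact architecture, and the real network also ingests motion vectors and previous frames. This just shows the squeeze-then-expand structure, sized here for a 540p-to-1080p upscale like the Control demo linked above:

```python
# Toy convolutional autoencoder shaped for 2x upscaling (hypothetical;
# not DLSS's actual architecture).
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    # Encoder: shrink spatially, grow channels into a compact representation.
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Decoder: expand back out past the input size for a 2x upscale overall.
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),
)

x = torch.rand(1, 3, 540, 960)       # e.g. a 540p input frame
print(autoencoder(x).shape)          # torch.Size([1, 3, 1080, 1920])
```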
Although to be honest, I'd still rather play 4K games that are internally rendered at native 4K, rather than 1080p games cleverly upscaled to 4K. Though I fully understand why Nintendo would want this DLSS tech for their next Switch iteration.
|
|
|
Post by Sarge on Aug 28, 2020 13:40:21 GMT -5
I'm a bit weird in that I don't understand this overbearing need for 4K anyway. I'd much rather have 1080p 60 FPS instead. And 1080p is quite sharp as it is, unless you're just playing on a massive TV set and really close to it.
I get really annoyed at times by the constant talk about a "Pro" model Switch. I honestly think at this point, they're better off waiting for a new iteration, probably one based on the newer Shield technology instead of the older chipset. And my guess is that they can make it backwards compatible pretty easily, so hopefully they do that as well.
|
|
|
Post by Ex on Aug 28, 2020 13:44:03 GMT -5
"I'm a bit weird in that I don't understand this overbearing need for 4K anyway."
"Bigger is always better." Yeah, I don't care either. I mean, yeah, I own a 4K TV, but I'm just as happy playing 256×192 DS games. If you're in this hobby for the graphics, you'll get burnt out soon enough.
|
|