by David Mitchelson | 07-02-19
Features – DLSS


Let's be clear: RTX does not mean ‘Real-time Raytracing’. This misunderstanding has been bubbling under the surface since Jensen Huang's original reveal of the new GPU architecture at Gamescom, and it needs to be dispelled. RTX is in fact an umbrella term for a raft of new rendering techniques – some announced, some still in the pipeline – which rely on NVIDIA RTX-class hardware to execute. Think of it as a bit like GameWorks, only the hardware it can run on is even more restrictive.

To NVIDIA's credit, making RTX part of the graphics card branding puts the end-user in an excellent position to know whether a particular feature will run on their hardware. At a time when we're often critical of hardware manufacturers for opaque terminology and naming schemes that can confuse consumers, it's a breath of fresh air to see NVIDIA take this step.

NVIDIA’s Raytracing and Deep Learning Super-Sampling (DLSS) are two of the techniques that sit under the RTX umbrella. These leverage RT Cores and Tensor Cores respectively, hardware components currently unique to the Turing architecture.


In today’s games the rendered frame usually isn’t the final image you see on screen. The frame also undergoes post-processing to remove artefacts generated by the rendering process, the best understood of which is the jagged edges caused by aliasing. These post-processing techniques, including anti-aliasing, are computationally expensive, but still cheaper than rendering at a higher resolution than needed and averaging the pixels down.

But how about instead leveraging the enormous power of Neural Networks to sharpen the image, perhaps even side-stepping many of the known weaknesses of traditional anti-aliasing techniques? Deep Learning Super-Sampling is just such a technique.

As it turns out, image processing has been one of the early success stories of neural network development, a field NVIDIA has led with continued improvements to its hardware architecture and developer tools. Deep Learning Super-Sampling, or DLSS for short, begins before players even load up the game. The game developer supplies NVIDIA with a beta build of the game – it doesn’t need to be 100% bug-free, but it should render frames representative of the final in-game experience. NVIDIA uses this build to generate thousands of reference images, rendered at the ‘gold standard’ image quality of 64x supersampling.

"64x supersampling means that instead of shading each pixel once, we shade at 64 different offsets within the pixel, and then combine the outputs, producing a resulting image with ideal detail and anti-aliasing quality."
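The sampling described in the quote above can be sketched in miniature. The helper below shades a single pixel at many jittered sub-pixel offsets and averages the results; `shade` is a hypothetical stand-in for a renderer's shading function, and real renderers would use a fixed sample pattern rather than random jitter:

```python
import numpy as np

def supersample_pixel(shade, px, py, n=64, seed=0):
    """Shade a pixel at n jittered sub-pixel offsets and average the results.

    `shade` is a hypothetical function mapping (x, y) scene coordinates to a
    colour value; n=64 mirrors the 64x supersampling described in the text.
    """
    rng = np.random.default_rng(seed)
    offsets = rng.random((n, 2))          # n offsets within the pixel, in [0, 1)
    samples = [shade(px + dx, py + dy) for dx, dy in offsets]
    return np.mean(samples, axis=0)

# Example: a hard edge at x = 0.5 averages to a fractional coverage value
# instead of snapping to pure 0 or 1, which is what smooths jagged edges.
edge = lambda x, y: 1.0 if x < 0.5 else 0.0
result = supersample_pixel(edge, 0.0, 0.0)
print(result)
```

The averaging is why a pixel straddling an edge ends up with an intermediate colour rather than a hard stair-step.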

These reference frames are used alongside raw captured images rendered at the same time to train a DLSS neural network which, when presented with a raw captured image, can generate an output that matches the 64x supersampled reference. The process of training the neural network is iterative, and utilises a technique known as back-propagation to adjust weights in the network with each iteration.
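The iterative training loop described above can be sketched in miniature. The toy below fits a single linear layer mapping "raw" patches toward "reference" patches by gradient descent; the real DLSS network is a deep convolutional model trained on a supercomputer, so every name and shape here (patch size, learning rate, loss) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 256, 16                        # training patches and patch dimensionality
raw = rng.standard_normal((n, d))     # stand-in for raw (aliased) captures
true_W = rng.standard_normal((d, d))
ref = raw @ true_W                    # stand-in for 64x supersampled references

W = np.zeros((d, d))                  # network weights to be learned
lr = 0.1
for step in range(500):
    pred = raw @ W                    # forward pass: network's current output
    grad = raw.T @ (pred - ref) / n   # back-propagated gradient of the MSE loss
    W -= lr * grad                    # adjust weights, one iteration at a time
loss = np.mean((raw @ W - ref) ** 2)
print(f"final MSE against reference: {loss:.6f}")
```

Each pass nudges the weights so the network's output drifts closer to the supersampled reference, which is the essence of the back-propagation loop described above.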

The final DLSS network is one that has learned to produce images closely approximating the 64x supersampled reference, which is good. However, it also sidesteps the problems of traditional anti-aliasing, such as transparency artefacts and blurring, which is even better.

So that’s the hard part done, and it has only required the power of a supercomputer to do it. If you think supersampling is demanding, it’s got nothing on training a neural network. The powerful aspect of DLSS, however, is that this trained network can be shared and run on inferencing hardware, i.e. a GPU. No further training is needed, and inferencing is much quicker.

So the DLSS network, trained on scenes from one game or a narrow selection of games, is supplied to GPUs via the driver download system. When running the game, the GPU takes each rendered frame, runs it through the DLSS network with that single image as input, and outputs a finished image of far higher quality than the rendered frame. Better still, on NVIDIA RTX hardware inferencing is exceptionally fast because it leverages the Tensor cores of the Turing GPU.
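The distinction between training and inference is worth making concrete. Once the weights are fixed, running a frame through the network is just a forward pass: no gradients, no weight updates. The shapes and names below are illustrative assumptions, not NVIDIA's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((16, 16)) * 0.1         # stand-in for shipped, pre-trained weights
frame_patches = rng.standard_normal((1024, 16)) # raw rendered frame, split into patches

# Inference: a single forward pass per frame. On Turing, matrix work like
# this is what the Tensor cores accelerate.
output = frame_patches @ W
print(output.shape)
```

This is why per-frame cost is tractable at playable frame rates: the expensive iterative optimisation happened once, offline.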

Unlike some post-processing techniques, DLSS requires a few tweaks to the game prior to implementation, and of course the DLSS neural network needs to have been generated by NVIDIA. At present there are two ways to see it in action – the Epic Infiltrator demo and a Final Fantasy XV DLSS demo. So far it’s been announced that developers from a multitude of studios are working to integrate it into 25 games:

- Ark: Survival Evolved from Studio Wildcard
- Atomic Heart from Mundfish
- Dauntless from Phoenix Labs
- Fractured Lands from Unbroken Studios
- Final Fantasy XV from Square Enix
- Hitman 2 from IO Interactive / Warner Bros.
- Islands of Nyne from Define Human Studios
- Justice from NetEase
- JX3 from Kingsoft
- MechWarrior 5: Mercenaries from Piranha Games
- PlayerUnknown’s Battlegrounds from PUBG Corp.
- Remnant: From the Ashes from Arc Games
- Serious Sam 4: Planet Badass from Croteam / Devolver Digital
- Shadow of the Tomb Raider from Square Enix/Eidos-Montréal/Crystal Dynamics/Nixxes
- The Forge Arena from Freezing Raccoon Studios
- We Happy Few from Compulsion Games / Gearbox
- Darksiders 3 from Gunfire Games / THQ Nordic
- Deliver Us The Moon: Fortuna from KeokeN Interactive
- Fear the Wolves from Vostok Games / Focus Home Interactive
- Hellblade: Senua's Sacrifice from Ninja Theory
- KINETIK from Hero Machine Studios
- Outpost Zero from Symmetric Games / tinyBuild Games
- Overkill's The Walking Dead from Overkill Software / Starbreeze Studios
- SCUM from Gamepires / Devolver Digital
- Stormdivers from Housemarque
