NVIDIA's RTX technologies - a suite of features exclusive to GeForce RTX cards that includes both real-time ray tracing and DLSS - have yet to appear in any shipping videogame. However, with hardware now in the hands of gamers and some titles set to roll out the technology in post-launch patches, attention returns to the potential image fidelity upsides, as well as the attendant performance cost.
Unlike DLSS, where NVIDIA have been exceptionally keen to detail the performance differential between conventional and new processing techniques on old and new hardware, few claims have been made about the real-time ray tracing aspect of RTX and the playability of games using it. First impressions at Gamescom of Shadow of the Tomb Raider running at 1080p with RTX enabled were, to put it mildly, not good; frame rates dropped precipitously, diminishing enjoyment of the improved visuals. It's early doors, though, and there's still plenty of time for optimisation.
At GTC Europe this week game developer Remedy stuck their head above the parapet and outlined the performance cost of implementing RTX ray tracing within their Northlight engine. The figures they relayed, reported at Golem.de, were certainly eye-opening.
Something of a given is improved image quality. Remedy individually implemented ray-traced contact and sun shadows, reflections, and a technique known as indirect diffuse lighting in a test scene, observing more realistic lighting dynamics and reflections that remained accurate irrespective of camera location (a core weakness of traditional rasterisation). Unfortunately, the costs associated with that improved image quality were acute.
Rendered individually and then aggregated, the techniques added a total of 9.2ms of rendering time per frame. Reflections were the most expensive at 4.2ms; contact and sun shadows with two rays per pixel and noise rejection cost 2.3ms, and indirect diffuse lighting a further 2.5ms. When you consider that to operate at a consistent 60fps a frame must take no more than 16.7ms to render (33.3ms at 30fps), an additional 9.2ms per frame is huge.
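To put those figures in context, a quick bit of frame-budget arithmetic shows how much of each target frame time the ray-traced effects alone would consume. The per-technique numbers below are those reported in the article; note they sum to 9.0ms against the reported 9.2ms total, presumably down to rounding in the source.

```python
# Frame-budget arithmetic using Remedy's reported per-technique costs.
costs_ms = {
    "reflections": 4.2,                 # ray-traced reflections
    "contact_and_sun_shadows": 2.3,     # two rays per pixel, with noise rejection
    "indirect_diffuse_lighting": 2.5,   # indirect diffuse lighting
}
total_added = sum(costs_ms.values())    # 9.0 ms (article reports 9.2 ms total)

budget_60fps = 1000 / 60    # ~16.7 ms per frame at 60 fps
budget_30fps = 1000 / 30    # ~33.3 ms per frame at 30 fps

print(f"Added ray tracing cost: {total_added:.1f} ms")
print(f"Share of 60 fps budget: {total_added / budget_60fps:.0%}")  # 54%
print(f"Share of 30 fps budget: {total_added / budget_30fps:.0%}")  # 27%
```

In other words, at these costs the ray-traced effects would eat over half of a 60fps frame budget before a single pixel of conventional rendering is done.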
The fact that this test ran at 1080p on an RTX 2080 Ti must be another cause for concern. NVIDIA rate the RTX 2080 Ti at 76 RTX-OPS, while the just-released RTX 2070 sits at ~45 RTX-OPS; will the 2070 even be able to handle the feature in games? How scalable will the technology turn out to be at release? Is Turing doomed?
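As a purely illustrative back-of-the-envelope estimate - assuming, naively, that ray tracing cost scales linearly with the RTX-OPS figure, which NVIDIA have not confirmed - the same workload on an RTX 2070 might look like this:

```python
# Naive linear scaling by RTX-OPS: an assumption for illustration only,
# not a measured result or an NVIDIA-endorsed methodology.
cost_2080ti_ms = 9.2      # reported added cost on the RTX 2080 Ti
rtx_ops_2080ti = 76
rtx_ops_2070 = 45

est_2070_ms = cost_2080ti_ms * rtx_ops_2080ti / rtx_ops_2070
print(f"Estimated added cost on RTX 2070: {est_2070_ms:.1f} ms")  # ~15.5 ms
```

Under that crude assumption the effects alone would consume nearly the entire 16.7ms budget of a 60fps frame - which is exactly why real-world optimisation matters so much.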
The answer, of course, is that such speculation is premature until RTX support in games is a reality. What these figures do underscore is that you shouldn't be buying GeForce RTX hardware right now on the assumption that upcoming features will be life-changing (or at least game-changing), and that both game developers and NVIDIA themselves are probably hard at work optimising the techniques before they appear in the games you love. If conventional rendering performance is all you're interested in, you now have plenty of information on which to base your purchase of next-gen hardware.
SOURCE: Golem.de (Google Translate)