Super Sampling Anti-Aliasing (SSAA) isn’t a new concept, but its adoption in games and graphics hardware over the years has been patchy due to how processor-intensive the process is. It relies on sampling the scene at a resolution much higher than your native screen resolution and then blending pixels together to output a 'normal' resolution frame, reducing ‘jaggies’. It’s often colloquially referred to as the ‘king of AA techniques’, but has given way to MSAA, MLAA and NVIDIA’s own FXAA, each of which is far less computationally expensive even if the quality isn’t as good.
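The blending step is, at its simplest, a box filter: each output pixel is the average of a block of super-sampled pixels. A minimal NumPy sketch of that resolve (illustrative only; real hardware uses more sophisticated filters than a plain average):

```python
import numpy as np

def ssaa_downsample(frame: np.ndarray, factor: int) -> np.ndarray:
    """Average each factor x factor block of super-sampled pixels
    down to one output pixel (a simple box filter)."""
    h, w, c = frame.shape
    assert h % factor == 0 and w % factor == 0
    # Group pixels into factor x factor blocks, then mean over each block.
    blocks = frame.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# 4x SSAA: render at 2x width and 2x height, then blend 4 samples per pixel.
hi_res = np.random.rand(2160, 3840, 3)   # "4K" render target
native = ssaa_downsample(hi_res, 2)      # 1080p output
print(native.shape)                      # → (1080, 1920, 3)
```

The cost is obvious from the shapes: a 2x-per-axis supersample means shading four times as many pixels as the native frame.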
Progress in graphics technology and horsepower has meant that a game which would have brought a GPU to its knees five years ago can be comfortably played on mid-range hardware these days, whilst top-tier hardware pushes frame rates well into the hundreds. That’s a lot of horsepower going to waste when it could, if the game supported it, be used to improve image quality significantly.
Relatively recently enthusiasts have used a homebrew technique to increase the resolution at which a game engine renders, then apply post-processing to reduce the output back down to the monitor’s native resolution. Known as downsampling, it's essentially a form of SSAA. Unfortunately it’s very hit-and-miss, often relying on third-party applications and prone to throwing up OS-level errors; and of course it’s very computationally expensive. However the rewards – beautiful images from older game engines – make the effort worth it.
Today GPUs such as the GTX 780Ti and Titan are being produced which are capable of, and designed for, 4K resolutions even in modern titles. However most gamers still use 1080p monitors on a day-to-day basis, making a $500+ GPU obscene overkill for all but the most taxing of titles. Because of this NVIDIA are going to begin supporting downsampling at the driver level through a method they’re calling Dynamic Super-Resolution (DSR).
Composite scene, DSR sample on right
Using DSR (which can be enabled automatically through GeForce Experience or the driver application) the game is rendered at a higher resolution and then downscaled to the monitor’s native resolution. Unlike previous methods, however, it’s fully integrated into the driver, meaning not only near-ubiquitous game support but also a far more stable process than the bespoke solutions previously used.
Whilst the potential image quality benefits are significant, it’s also worth noting that DSR would be an excellent means of assessing whether your setup is capable of driving a higher resolution monitor in games. The performance impact of the downsampling process itself is apparently only 1-2% (i.e. less than a frame per second), so the framerates you get running DSR at 4K on your 1080p monitor should be extremely close to those you would get at 4K natively. We’re not sure if NVIDIA intended this, but we’ll certainly take it.
DSR is part of NVIDIA’s launch driver for the GTX 980 and 970, but will also be rolled out to Kepler in the near future. The current implementation is still a little rough around the edges – NVIDIA haven’t supplied a software means of easily capturing the downsampled output frame, as all in-engine screenshots will be at the supersample resolution – but it shows a great deal of promise.
Optimised anti-aliasing techniques are one of the ways GPU manufacturers have sought to differentiate themselves beyond pure frame rates and GFLOP statistics. Recognising that MSAA was insufficient for the needs of shader-based rendering and SSAA was too expensive, both AMD and NVIDIA have generated their own methods to combat aliasing, with greater or lesser success. NVIDIA’s most well-known iteration is Fast Approximate Anti-Aliasing (FXAA), initially requiring in-engine implementation but later enabled at the driver level. More recently Temporal Anti-Aliasing debuted, cleaning up an image based on frame-by-frame changes to pixels around edges. Each has its pros and cons; a good way to get into a fight on enthusiast boards is to claim that one method is flat-out better than the others.
Today Multi-Frame Anti-Aliasing (MFAA, not to be mistaken for AMD’s Morphological Filtering AA) is being announced as a Maxwell-exclusive feature. As the name suggests MFAA is a post-processing anti-aliasing technique which operates by analysing a frame within the context of the previous frame, checking aliasing and rendering colours based on a proprietary algorithm in hardware.
The foundation of MFAA is multi-pixel programmable sampling, a means by which the matrix of sampling positions can be randomised to reduce data processing artefacts caused by repeating the same sample position from frame-to-frame.
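NVIDIA haven’t published the algorithm itself, but the core idea – alternating sample patterns between frames so that two cheap 2-sample frames combine into something closer to a 4-sample result – can be sketched with a toy model. The rotated-grid offsets and the `coverage` stand-in below are our illustration, not NVIDIA's implementation:

```python
# Two 2-sample patterns, alternated per frame; together they cover the
# four positions a 4x rotated-grid MSAA pattern uses (offsets in a pixel).
PATTERN_A = [(0.375, 0.125), (0.875, 0.375)]
PATTERN_B = [(0.125, 0.625), (0.625, 0.875)]

def coverage(edge_x: float, offsets) -> float:
    """Fraction of sample points left of a vertical edge at edge_x
    within one pixel - a stand-in for rasterised coverage."""
    return sum(1.0 for (sx, _sy) in offsets if sx < edge_x) / len(offsets)

def mfaa_like(edge_x: float) -> float:
    """Blend two frames rendered with alternating 2-sample patterns."""
    frame_prev = coverage(edge_x, PATTERN_A)
    frame_curr = coverage(edge_x, PATTERN_B)
    return 0.5 * (frame_prev + frame_curr)

# A single 2-sample frame can only output coverage of 0, 0.5 or 1;
# blending the two alternating frames recovers quarter-step granularity.
print(mfaa_like(0.2))  # → 0.25
```

This is also why the frame-to-frame coherence mentioned below matters: the blend only approximates a 4-sample result when the edge hasn’t moved much between the two frames being combined.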
MFAA seeks to be as good as MSAA, but without the performance impact, and with image characteristics substantially different from FXAA’s. The latter has garnered a reputation for slightly blurring images, reducing overall image quality, and so an effective alternative is probably due at this point.
Sadly MFAA is not quite ready for prime-time and will be issued as part of a driver update at a later date. The obvious question-mark is just how high the image quality will be, and whether the technique can operate well when the differences between frames are relatively large. However NVIDIA have clarified that it will work in tandem with alternate-frame rendering, an SLI mode available when using two or more matched NVIDIA GPUs.