Last month, Nvidia unveiled the latest version of DLSS, drawing criticism from gamers and developers alike. The backlash stems from the fact that the tool appears to change the art direction and style of the games Nvidia used to demonstrate the technology, including Resident Evil Requiem and Starfield.
Since the announcement of DLSS 5, Nvidia CEO Jensen Huang has responded several times to the public outcry, saying that gamers are completely wrong about their stance on the tool. In his latest comments, Huang unsurprisingly defended DLSS 5 again, saying it's just another way for developers to improve visual fidelity. He went a step further by saying that he considers all AI-generated content to be “beautiful.”
However, developers like Quin Henshaw, an expert on the Unity development engine and a Unity instructor at the Vancouver Film School, see the latest iteration of DLSS as a step in the wrong direction. From his perspective, DLSS 5 will only have a negative impact on game development and art direction as a whole.
What is DLSS?
Nvidia's Deep Learning Super Sampling (DLSS) is an artificial intelligence-powered graphics technology designed to increase visual fidelity and improve performance. The game renders at a lower resolution, and DLSS uses AI to upscale the image to a higher one. It's similar to Sony's PSSR, which also uses AI to upscale the image.
DLSS uses a type of AI called a deep neural network (hence the "deep learning" part of its name) to enhance images. It was trained on supercomputers to recognize high-quality images and learn how to reconstruct the detail missing from lower-resolution ones, thus upscaling the image.
It's not a tool developers use to make games; rather, it runs alongside a game to improve its image quality and performance. Developers can tweak it to varying degrees, but DLSS must be implemented and enabled by developers to work.
DLSS improves performance by reducing the amount of work the computer's GPU has to do. If a game is rendered at 4K, the GPU is processing millions of pixels per frame. With DLSS, a game can be rendered at 1440p or even 1080p and then upscaled to 4K with AI.
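The savings described above are simple arithmetic. A short sketch (illustrative numbers only, not tied to any particular DLSS implementation) shows how many pixels the GPU shades per frame at each resolution, and why rendering at 1080p and upscaling to 4K roughly quarters the shading workload:

```python
# Illustrative arithmetic: pixels the GPU must shade per frame at common
# resolutions, and the reduction that rendering low and upscaling gives.
RESOLUTIONS = {
    "4K (2160p)": (3840, 2160),
    "1440p": (2560, 1440),
    "1080p": (1920, 1080),
}

def pixel_count(width, height):
    """Total pixels the GPU shades for one frame at this resolution."""
    return width * height

native_4k = pixel_count(*RESOLUTIONS["4K (2160p)"])  # 8,294,400 pixels
render_1080p = pixel_count(*RESOLUTIONS["1080p"])    # 2,073,600 pixels

# Rendering at 1080p and upscaling to 4K means the GPU shades only a
# quarter of the pixels; the upscaler reconstructs the rest.
reduction = native_4k / render_1080p
print(f"4K: {native_4k:,} px | 1080p: {render_1080p:,} px | {reduction:.0f}x fewer shaded")
```

This is why upscaling can buy a large performance headroom: the cost of the AI reconstruction pass is far smaller than natively shading four times as many pixels.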
Previous versions of DLSS were not controversial. In fact, image upscaling is becoming more common and is generally viewed by developers as a positive technological advance in game fidelity. "All the upscaling technology is pretty slick, and [Nvidia is] at the forefront of this; almost every software developer now has their own version," said Henshaw.
For example, AMD offers FidelityFX Super Resolution (FSR) as an upscaling tool. Unlike DLSS, however, it doesn't require dedicated AI hardware such as Nvidia's Tensor Cores.
“I've been pretty on board with a lot of the technology that Nvidia has created over the last few years,” Henshaw said. In the past, he used previous versions of DLSS to create technical demos. Generative AI was first introduced to DLSS in 2022 with DLSS 3. It marks the point where Nvidia moved from AI-assisted rendering to AI-generated frames.
DLSS 3 used generative AI to create entirely new frames: the GPU generates a transition frame between frames rendered by the game engine to improve smoothness and performance. DLSS 3.5 took this a step further by generating lighting with Ray Reconstruction. DLSS 4 refined the technology with improvements to frame generation and deeper integration with ray tracing.
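The idea of inserting a generated frame between two engine-rendered ones can be sketched with a deliberately naive toy. DLSS's frame generation uses a neural network plus motion vectors, not a simple blend; this hypothetical example only illustrates the concept of a transition frame:

```python
# Toy illustration of frame interpolation: produce an in-between frame by
# blending two engine-rendered frames. Real DLSS frame generation uses an
# AI model and motion vectors, not a linear blend -- this is concept only.

def blend_frames(frame_a, frame_b, t=0.5):
    """Linearly interpolate two frames (flat lists of pixel intensities)."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# Two tiny 4-"pixel" frames rendered by the game engine:
frame_1 = [0, 64, 128, 255]
frame_2 = [32, 64, 160, 191]

# The generated transition frame displayed between them:
middle = blend_frames(frame_1, frame_2)
print(middle)  # [16.0, 64.0, 144.0, 223.0]
```

A plain blend like this produces ghosting on moving objects, which is precisely why real frame generation needs motion data and a learned model rather than per-pixel averaging.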
But DLSS 5 does more than use generative AI to upscale an image so a game runs smoother or looks better. It more aggressively alters the resulting image, potentially changing how the scene itself looks. This is what developers and players take issue with.
“It's not something I can see any serious development studio, whether at the indie level or at the triple-A level, really getting into unless the publishers push hard to keep costs down,” Henshaw said.
In a video by YouTuber Daniel Owen, Nvidia's Jacob Freeman provided a few more details on how DLSS 5 works. It doesn't change anything at the game-engine level; it essentially takes a 2D image and runs it through generative AI to enhance it. Artists can control settings such as gradients and filter intensity, but they don't appear to have control over the final output. Freeman also stated that this is an early preview of the technology. DLSS 5 is expected to release sometime this fall, so it stands to reason that the full range of developer controls will be made clear around then.
Developers claim that DLSS 5 takes away their autonomy
DLSS 5 is more aggressive than previous iterations of the tool because of its generative AI. It has been shown to alter lighting, materials, and fine details, going beyond enhancing the image to potentially changing how those elements appear. It draws on its training data to make the image look the way the model thinks it should.
That's why Grace's face looks so different from the original: that's how it "should" look according to DLSS 5. Players and developers dismissed the result as "AI slop," comparing it to a filter you might see on Instagram. "I feel like anyone who's even remotely familiar with the internet and with gamers would have known it would get torn apart immediately. So it was a bit of a shock to me," Henshaw said.
That reaction prompted Nvidia CEO Jensen Huang's first public response to the criticism. "Well, for one thing, they're completely wrong," he told Tom's Hardware. Huang later softened his comments in an interview with Lex Fridman, saying that he understood the criticism but still considered all generative AI to be "beautiful."
Huang wasn't the only executive to tout the technology's potential. "When Nvidia showed us DLSS 5 and we ran it on Starfield, it was amazing how it came to life. We played it. We can't wait for you all to play it," said Todd Howard, executive producer and game director at Bethesda, during the first DLSS 5 reveal. Starfield was also featured in a recent DLSS 5 tech demo that showed 12 minutes of the technology in action.
This goes back to something that Mat Piscatella, video game industry analyst and executive director of games at Circana, pointed out. It may seem obvious, but as he put it, "CEOs will generally be bullish on a technology that can save them a lot of money." That sentiment is exactly what has artists like Henshaw worried about the potential ramifications of Nvidia's latest technology.
“The difference with DLSS 5 is that it pushes into artistic quality and takes control away from developers,” said Henshaw. “I see this as a big, big deal. If studios start implementing it, it's probably going to cut a huge number of artists.”
Henshaw worries that executives will see the tool as an opportunity to cut work and costs. Even if the final product doesn't do well with consumers, the cost saved on paying artists could offset the loss in the eyes of management. He also said that consumers in general are already pretty sick of how much AI-generated content is out there and are likely to respond with their wallets.
In a recent interview, Huang positioned DLSS 5 as another tool that developers can use to improve the visual fidelity of their games. However, Henshaw argues that this will only move the artistic needle in a more homogenous direction, a common criticism of generative AI.
Generative AI in video games is generally frowned upon by the community. Recent controversies include Crimson Desert developer Pearl Abyss using generative AI assets in its finished game. The developer apologized on Twitter, but it's just the latest case of player backlash. Earlier controversies involved Baldur's Gate 3 developer Larian Studios and Clair Obscur: Expedition 33 developer Sandfall Interactive.
The issue with developer adoption of DLSS 5 underscores the growing concern in game development as a whole. As AI tools are increasingly integrated into game development, developers don't have much say when it comes to their use. While most developers don't seem to mind AI being used as a tool in workflows like programming or early concept stages, they generally oppose generative AI tools used in the creative side of things like voice acting, art, quest design, and asset generation.
"With the industry the way it is right now, [developers have] very little say. Maybe in smaller studios, but in larger studios there will be zero input. There is very little enforcement and power on the developer side when it comes to the big publishers and studios," said Henshaw.
Ultimately, DLSS 5 highlights the growing divide in how AI is viewed across the industry. While companies like Nvidia and potentially Bethesda see this as a way to move visual fidelity forward and streamline development, many artists and developers worry about what could be lost in the process. Whether DLSS 5 becomes widespread or faces resistance will likely depend on how much control managers are willing to give up and how players respond to AI taking a bigger part in shaping how games actually look.