
What do we think Nvidia's next tech innovation will bring?

daninthemix

Member
This is as relevant to console gamers as it is to PC gamers, because Nvidia invariably creates the tech innovations that eventually find their way into both. For example:
  • VRR / G-sync (2013)
  • Ray tracing (2018)
  • AI upscaling / DLSS (2018)
  • Frame Generation (2022)
It is Nvidia that seems to single-handedly bring to market the technologies that then steer the industry and become the exciting specs that people look forward to and argue about in upcoming console and PC hardware.

So that begs the question: what will be their next innovation? The answer may well be one of the largest bullet-points on the next generation of consoles.

I presume it will be something that leverages AI, but I'm not sure what.
 

kiphalfton

Member
It's fun to get excited about Nvidia tech innovations, but it seems like not that many devs make use of them.

Case in point: has anybody actually done anything with RTX Remix?
 

Amey

Member
Real-time AI style transfer: for adding realism, making things more cartoony, or just remastering old games, without needing to tweak any geometry or textures.
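To picture how that could work: real-time style transfer is typically just a feed-forward network run over every finished frame before it is presented, so the underlying geometry and textures never change. Below is a minimal Python sketch of that per-frame loop; the capture function and the single-conv "network" are hypothetical stand-ins (a real implementation would hook the swapchain and run a Tensor-core-optimised model via TensorRT, not PyTorch on the CPU).

```python
import numpy as np
import torch

# Stand-in for a real pre-trained feed-forward style network (which would be loaded
# from disk and run via TensorRT); a single conv layer keeps the sketch self-contained.
style_net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()

def capture_frame() -> np.ndarray:
    """Hypothetical stand-in for grabbing the rendered frame (H x W x RGB, uint8)."""
    return np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)

@torch.no_grad()
def stylize(frame: np.ndarray) -> np.ndarray:
    # HWC uint8 -> NCHW float in [0, 1], the layout most style-transfer nets expect.
    x = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0).float() / 255.0
    y = style_net(x)                        # one forward pass per frame
    y = (y.clamp(0.0, 1.0) * 255.0).byte()  # back to displayable 8-bit
    return y.squeeze(0).permute(1, 2, 0).numpy()

# Per-frame loop: render -> restyle -> present. Geometry and textures stay untouched;
# only the final image changes, which is what makes this attractive for remastering.
for _ in range(3):
    styled = stylize(capture_frame())
    print("styled frame:", styled.shape, styled.dtype)
```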
 

Hudo

Member
Just as an asinine FYI: Nvidia didn't invent any of these things; they just made adaptations to their hardware and software stack to allow these things to be computed more efficiently, or to be computed at all. Which, I know, is also important, but they do it to keep people locked into their ecosystem; they don't implement these things out of the goodness of their hearts. That being said, where Nvidia are actually pushing the envelope with scientific research is in the area of mesh generation and their work on simplifying (i.e. de-bloating) the rasterization pipeline.

Ray tracing has existed since the 60s, the idea behind what is now known as G-Sync (and FreeSync) was already tested out by 3dfx, and AI upscaling has been looked into since 2015 and most likely even before that. Frame generation also existed (at least in theory) before that.
 
Nvidia are actually pushing the envelope with scientific research is in the area of mesh generation and their work on simplifying (i.e. de-bloating) the rasterization pipeline.

Both AMD and Nvidia were working on fully programmable geometry pipelines - AMD got to it first with the NGG Fast Path feature on their Vega series cards, which introduced Primitive Shaders.

Nvidia later came out with Mesh Shaders, which offered the same feature set but also reworked the pipeline with the aim of simplification (although more developer work is required for this).

The problem with the Vega-series Primitive Shaders was a lack of developer and driver support, which meant the feature was never fully realised in games until Sony decided to adopt it for the PS5; since then, several first-party titles have leveraged it.

Mesh Shaders became the standard across PC and Xbox thanks to Microsoft adopting them as a DX12 Ultimate standard feature, and I think the first title we saw making use of them was Alan Wake 2.
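To make the "more developer work" part concrete: with mesh shaders the application is expected to split its geometry into small "meshlets" and can then cull whole meshlets before any per-vertex work happens, replacing the old fixed vertex/primitive stages. The real shaders are written in HLSL/GLSL, but the core idea fits in a short Python sketch; all the data below is made up, and it only mimics on the CPU what a task/mesh shader pair would do on the GPU.

```python
import math
import random

MESHLET_SIZE = 64  # triangles per meshlet, a typical upper bound

def build_meshlets(triangles):
    """Split a flat triangle list into meshlets, each with a bounding sphere."""
    meshlets = []
    for start in range(0, len(triangles), MESHLET_SIZE):
        tris = triangles[start:start + MESHLET_SIZE]
        verts = [v for tri in tris for v in tri]
        center = tuple(sum(v[a] for v in verts) / len(verts) for a in range(3))
        radius = max(math.dist(v, center) for v in verts)
        meshlets.append({"tris": tris, "center": center, "radius": radius})
    return meshlets

def cull_meshlets(meshlets, planes):
    """Keep meshlets whose bounding sphere touches the inside of every plane.

    A plane is (nx, ny, nz, d) with the normal pointing into the visible region,
    so a point p is inside when dot(n, p) + d >= 0.
    """
    visible = []
    for m in meshlets:
        if all(sum(n * c for n, c in zip(p[:3], m["center"])) + p[3] >= -m["radius"]
               for p in planes):
            visible.append(m)  # on the GPU, only these meshlets would get mesh-shader work
    return visible

# Made-up mesh: 150 small "objects" scattered in a 100 x 100 x 100 box,
# each exactly one meshlet's worth (64) of tiny triangles, so meshlets stay tight.
random.seed(0)
triangles = []
for _ in range(150):
    cx, cy, cz = (random.uniform(0.0, 100.0) for _ in range(3))
    for _ in range(MESHLET_SIZE):
        triangles.append(tuple((cx + random.uniform(-1, 1),
                                cy + random.uniform(-1, 1),
                                cz + random.uniform(-1, 1)) for _ in range(3)))

# Made-up "camera": a single plane keeping only geometry with x >= 50.
frustum = [(1.0, 0.0, 0.0, -50.0)]

meshlets = build_meshlets(triangles)
print(f"{len(cull_meshlets(meshlets, frustum))} of {len(meshlets)} meshlets survive culling")
```

The point is that the meshlet layout and culling strategy are now entirely up to the developer, which is exactly why adopting it takes more work than the old fixed stages.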
 

Damigos

Member
After seeing the Digital Foundry Avatar video running on a super high-end PC, I think there is not a lot left regarding graphics tech.
The real innovation will happen when existing games are playable in VR on the fly.
Only then will VR transition from niche to mainstream, and I really believe this is the next big thing. Not sure if Nvidia will do it, but someone will
 

Ballthyrm

Member
Just as an asinine FYI: Nvidia didn't invent any of these things; they just made adaptations to their hardware and software stack to allow these things to be computed more efficiently, or to be computed at all. Which, I know, is also important, but they do it to keep people locked into their ecosystem; they don't implement these things out of the goodness of their hearts. That being said, where Nvidia are actually pushing the envelope with scientific research is in the area of mesh generation and their work on simplifying (i.e. de-bloating) the rasterization pipeline.

To see what's coming, you just have to look at where the biggest evolutions in CGI have been.
It usually takes a while for new technologies from the likes of Weta and ILM to reach real-time rendering.

What I want to see is better crowd generation and higher NPC counts.
Right now only a few games manage to put a lot of people on screen. Making that accessible to indie games would be great.
 

Gaiff

SBI’s Resident Gaslighter
It's fun to get excited about Nvidia tech innovations, but it seems like not that many devs make use of them.

Case in point: has anybody actually done anything with RTX Remix?
RTX Remix is for modders, not developers. The Portal mod also uses it.
Just as an asinine FYI: Nvidia didn't invent any of these things; they just made adaptations to their hardware and software stack to allow these things to be computed more efficiently, or to be computed at all. Which, I know, is also important, but they do it to keep people locked into their ecosystem; they don't implement these things out of the goodness of their hearts. That being said, where Nvidia are actually pushing the envelope with scientific research is in the area of mesh generation and their work on simplifying (i.e. de-bloating) the rasterization pipeline.

Ray tracing has existed since the 60s, the idea behind what is now known as G-Sync (and FreeSync) was already tested out by 3dfx, and AI upscaling has been looked into since 2015 and most likely even before that. Frame generation also existed (at least in theory) before that.
Yeah, "create" is a a bit inaccurate. They didn't "invent" most of those things since for one, a lot of the time, these are actually joint efforts between multiple IHV, and for two, NVIDIA's innovation was bringing them to consumer grade applications. That's like those saying Ford invented the automobile. He did not. He innovated the assembly line.
After seeing the Digital Foundry Avatar video running on a super high-end PC, I think there is not a lot left regarding graphics tech.
I'd argue there's actually a lot. For how good this game looks, it exposes the glaring limitations of current tech. We need what is effectively infinite draw distances, improved fluid dynamics and motions (the water in the game looks off to me), something akin to Nanite that isn't tied to UE5, and much, much more. The game still has some rough edges that really show the clash between new and old tech.
 

Caio

Member
  1. Performance Leap: Continued advancements in architecture and manufacturing processes, leading to significant performance gains, possibly doubling or more in computational power compared to current models.
  2. Ray Tracing and AI Rendering: Further refinement and optimization of real-time ray tracing and AI-powered rendering, making them more efficient and capable of handling even more complex scenes and effects.
  3. Quantum Leap in AI Integration: Deeper integration of AI cores within GPUs, enabling more sophisticated AI-based features like enhanced upscaling, real-time object recognition, and adaptive scene optimization.
  4. Memory and Bandwidth Expansion: Increased memory capacities and faster memory interfaces to accommodate larger textures and data-intensive applications like high-resolution gaming, virtual reality, and AI training.
  5. Energy Efficiency: Continued focus on power efficiency and thermal management to deliver higher performance without significantly increasing power consumption or heat output.
  6. New Rendering Techniques: Introduction of novel rendering techniques or hybrid rendering approaches that combine rasterization, ray tracing, and AI to achieve unprecedented levels of realism and performance.
  7. Specialized Compute Units: Tailored hardware for specific applications, such as dedicated units for machine learning, computational simulations, or professional content creation, optimizing performance for these specialized tasks.
 
Maybe raytraced sound.

Giving each material a sound value (how the material reflects sound) plus information about the size/shape of the room.
This is the dream right here. Sound simulation was doing so well in the late 90s and early 00s, then it started regressing year after year until around 2012, and it has been on life support ever since. We have games with less complicated sound processing today than we had 20 years ago.
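As a rough illustration of the idea quoted above, here is a Python toy that gives each wall of a shoebox room a made-up "sound value" (the fraction of energy it reflects), bounces random rays from a source, and records the delay and energy of whatever reaches the listener. The numbers and geometry are invented purely for the sketch; a real system would trace against actual level geometry on the GPU and convolve the result with the audio signal.

```python
import math
import random

SPEED_OF_SOUND = 343.0  # m/s

# Hypothetical per-material "sound values": fraction of energy reflected per bounce.
MATERIALS = {"concrete": 0.97, "carpet": 0.40, "glass": 0.90, "curtain": 0.25}

# Shoebox room, 6 x 4 x 3 metres; each wall is assigned a material and a plane position.
ROOM = {"x0": ("concrete", 0.0), "x1": ("glass", 6.0),
        "y0": ("carpet", 0.0),   "y1": ("concrete", 4.0),
        "z0": ("concrete", 0.0), "z1": ("curtain", 3.0)}

def trace_ray(src, listener, listener_radius=0.5, max_bounces=20):
    """Bounce one random ray from the source; return (delay_s, energy) if it reaches the listener."""
    d = [random.gauss(0.0, 1.0) for _ in range(3)]          # random direction
    n = math.sqrt(sum(c * c for c in d)) or 1.0
    d = [c / n for c in d]
    pos, energy, travelled = list(src), 1.0, 0.0

    for _ in range(max_bounces):
        # Nearest wall along the current ray.
        best_t, best_axis, best_wall = float("inf"), None, None
        for axis, keys in enumerate((("x0", "x1"), ("y0", "y1"), ("z0", "z1"))):
            for key in keys:
                _, plane = ROOM[key]
                if d[axis] != 0.0:
                    t = (plane - pos[axis]) / d[axis]
                    if 1e-6 < t < best_t:
                        best_t, best_axis, best_wall = t, axis, key

        # Does the ray pass within listener_radius of the listener before that wall?
        to_l = [listener[i] - pos[i] for i in range(3)]
        t_closest = sum(to_l[i] * d[i] for i in range(3))
        if 0.0 < t_closest < best_t:
            closest = [pos[i] + d[i] * t_closest for i in range(3)]
            if sum((closest[i] - listener[i]) ** 2 for i in range(3)) < listener_radius ** 2:
                total = travelled + t_closest
                # Crude distance falloff, just for the sketch.
                return total / SPEED_OF_SOUND, energy / (1.0 + total * total)

        # Reflect off the wall and lose energy according to its material's "sound value".
        material, _ = ROOM[best_wall]
        pos = [pos[i] + d[i] * best_t for i in range(3)]
        d[best_axis] = -d[best_axis]
        travelled += best_t
        energy *= MATERIALS[material]
    return None

# Each (delay, energy) pair that comes back is one echo; together they approximate
# the room's impulse response, which the audio engine would convolve with the sound.
random.seed(1)
echoes = [e for e in (trace_ray((1.0, 1.0, 1.5), (5.0, 3.0, 1.5)) for _ in range(20000)) if e]
print(f"{len(echoes)} rays reached the listener; earliest echo after {min(e[0] for e in echoes) * 1000:.1f} ms")
```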
 

Hugare

Member
Ray tracing has existed since the 60s, the idea behind what is now known as G-Sync (and FreeSync) was already tested out by 3dfx, and AI upscaling has been looked into since 2015 and most likely even before that. Frame generation also existed (at least in theory) before that.
There's a difference between "testing out", "looking into", "existing in theory" and offering an actual working, stable solution for customers at a reasonable price.

No one else did it first with any of these techs. So it wasn't as simple as you're implying here.
 

KungFucius

King Snowflake
I think they are just going to go all in with AI processing. However you can use AI in games, Nvidia is going to open that up with a $2k GPU.
 

Hudo

Member
actual working, stable solution for customers at a reasonable price.
Nvidia is literally: Choose one of the three, though.

Edit: And I also said that there's some value to bringing stuff to the consumer market, which Nvidia have done. But they haven't invented it, which is what this thread suggests. Or at least it puts Nvidia forward as this "highly inventive" tech firm, which they are only in very few and very specific areas, none of them listed here.
They deserve some praise, yes. But also a whole lot of shit.
 

Hugare

Member
Huh? What does that have to do with anything?
"Nvidia is literally: Choose one of the three, though"

Their tech works, is better than the competitors' and is sold at a "reasonable" price (not cheap, but most people can buy it anyway, hence Nvidia being the market leader)

I was just pointing out that your post was dumb
 

Hudo

Member
"Nvidia is literally: Choose one of the three, though"

Their tech works, is better than the competitors' and is sold at a "reasonable" price (not cheap, but most people can buy it anyway, hence Nvidia being the market leader)

I was just pointing out that your post was dumb
Again, so their "choose one of the three" is the least bad with Nvidia. Cool. I still don't understand what the poor market situation has to do with Nvidia being celebrated as inventing all of the mentioned technologies when they haven't.
 

Bernoulli

M2 slut
The meatriding for Nvidia is crazy; they didn't invent any of the things you listed.

History

Vector displays had a variable refresh rate on their cathode-ray tube (CRT), depending on the number of vectors on the screen, since more vectors took more time to draw on their screen.[7]

Since the 2010s decade, raster displays gained several industry standards for variable refresh rates. Historically, there was only a limited selection of fixed refresh rates for common display modes.



Using a computer for ray tracing to generate shaded pictures was first accomplished by Arthur Appel in 1968.[8] Appel used ray tracing for primary visibility (determining the closest surface to the camera at each image point), and traced secondary rays to the light source from each point being shaded to determine whether the point was in shadow or not.

Later, in 1971, Goldstein and Nagel of MAGI (Mathematical Applications Group, Inc.)[9] published "3-D Visual Simulation", wherein ray tracing is used to make shaded pictures of solids by simulating the photographic process in reverse. They cast a ray through each pixel in the screen into the scene to identify the visible surface. The first surface intersected by the ray was the visible one. This non-recursive ray tracing-based rendering algorithm is today called "ray casting". At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film “made using the University of Maryland’s display hardware outfitted with a 16mm camera. The film showed the helicopter and a simple ground level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed.” A CDC 6600 computer was used. MAGI produced an animation video called MAGI/SynthaVision Sampler in 1974.[10]
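For anyone curious, the non-recursive ray-casting algorithm described above is small enough to sketch in plain Python: one primary ray per pixel to find the closest surface, then a secondary ray toward the light to decide whether that point is in shadow. The scene here (a single sphere and a point light rendered as ASCII art) is made up for the example; Appel's and MAGI's versions obviously targeted very different hardware.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a (normalized) ray against a sphere, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

# Made-up scene: one sphere, one point light, pinhole camera at the origin looking down -z.
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
LIGHT = (2.0, 2.0, 0.0)
WIDTH, HEIGHT = 60, 30

rows = []
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Primary ray through this pixel: find the closest visible surface.
        px = (i + 0.5) / WIDTH * 2.0 - 1.0
        py = 1.0 - (j + 0.5) / HEIGHT * 2.0
        d = normalize((px, py, -1.0))
        t = ray_sphere((0.0, 0.0, 0.0), d, SPHERE_C, SPHERE_R)
        if t is None:
            row += " "  # ray missed everything: background
            continue
        hit = tuple(t * d[k] for k in range(3))
        normal = normalize(tuple(hit[k] - SPHERE_C[k] for k in range(3)))
        to_light = normalize(tuple(LIGHT[k] - hit[k] for k in range(3)))
        # Secondary ray toward the light source: is this point in shadow?
        shadowed = ray_sphere(hit, to_light, SPHERE_C, SPHERE_R) is not None
        brightness = 0.0 if shadowed else max(0.0, sum(normal[k] * to_light[k] for k in range(3)))
        row += " .:-=+*#%@"[min(9, int(brightness * 9))]
    rows.append(row)
print("\n".join(rows))
```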
 

T4keD0wN

Member
Their focus lies in enterprise and AI now. Probably some big leaps in AI that will make many jobs redundant: self-driving industrial vehicles, automating many software tools and processes, generating plans for buildings, and much more.

I bet they will make some innovations in software, which will include game development tools, generating assets and stuff like that. RTX Remix doesn't count.
As for their consumer GPUs: I don't think there are that many innovations to be had now that path tracing is doable; probably just the classic power + efficiency gains, with better HW acceleration for AI workloads and maybe compression.
GPUse subscription fees.
Already here with GeForce NOW, or do you mean something much more nefarious, like paying Nvidia 10 bucks for x amount of watts used?
 

Hugare

Member
Again, so their "choose one of the three" is the least bad with Nvidia. Cool. I still don't understand what the poor market situation has to do with Nvidia being celebrated as inventing all of the mentioned technologies when they haven't.
Man, I'm trying to make you see the light, but it's hard

"There's a difference between "testing out", "looking into", "existing in theory" and offering an actual working, stable sollution for customers at a reasonable price."

Most technologies that we use today were "discovered" decades ago. The thing is, most of them weren't "invented" by the companies that offer them today, but those companies spent millions/billions in R&D to make them cheap enough, stable enough and consumer-friendly enough to be usable by the masses.

These companies deserve as much credit as the people who "invented" those technologies. 'Cause there's a pretty stark difference between theorizing about something and actually making a product with it.

Without G-Sync we wouldn't have FreeSync, without DLSS we wouldn't have FSR, and so on. Or maybe we would have had to wait many years, maybe decades, until competitors figured out how to make these techs work.
 
This is as relevant to console gamers as it is to PC gamers, because Nvidia invariably creates the tech innovations that eventually find their way into both. For example:
  • VRR / G-sync (2013)
  • Ray tracing (2018)
  • AI upscaling / DLSS (2018)
  • Frame Generation (2022)
It is Nvidia that seems to single-handedly invent the technologies that then steer the industry and become the exciting specs that people look forward to and argue about in upcoming console and PC hardware.

So that begs the question: what will be their next innovation? The answer may well be one of the largest bullet-points on the next generation of consoles.

I presume it will be something that leverages AI, but I'm not sure what.
And

Ray Reconstruction (2023)
 
Per-asset AI remastering should be considered. It could involve using Tensor cores and AI, alongside high-fidelity capture data from real-life images and Nvidia's deep learning NNs, to rebuild and reconstruct low-res assets in-game into much higher-fidelity ones.
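If it helps make the idea above more tangible, here is a hedged sketch of what an offline per-asset pass might look like: walk the texture set, push each low-res texture through a super-resolution network, and write the higher-fidelity version back out. The upscaler below is a placeholder module and the asset names are invented; the real thing would be an ESRGAN-class network trained on high-fidelity capture data and run on Tensor cores.

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder "remaster" network: a 4x upsample plus a conv, standing in for an
# ESRGAN-class super-resolution model trained on high-fidelity capture data.
upscaler = nn.Sequential(
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(3, 3, kernel_size=3, padding=1),
).eval()

@torch.no_grad()
def remaster_texture(texture: np.ndarray) -> np.ndarray:
    """Upscale one low-res RGB texture (H x W x 3, uint8) to 4x the resolution."""
    x = torch.from_numpy(texture).permute(2, 0, 1).unsqueeze(0).float() / 255.0
    y = upscaler(x).clamp(0.0, 1.0)
    return (y.squeeze(0).permute(1, 2, 0) * 255.0).byte().numpy()

# Fake asset set: three 256x256 textures standing in for a game's texture directory.
low_res_assets = {f"prop_{i}.dds": np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
                  for i in range(3)}

for name, tex in low_res_assets.items():
    hi = remaster_texture(tex)
    print(f"{name}: {tex.shape[0]}x{tex.shape[1]} -> {hi.shape[0]}x{hi.shape[1]}")
```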
 

poppabk

Cheeks Spread for Digital Only Future
After seeing the Digital Foundry Avatar video running on a super high-end PC, I think there is not a lot left regarding graphics tech.
The real innovation will happen when existing games are playable in VR on the fly.
Only then will VR transition from niche to mainstream, and I really believe this is the next big thing. Not sure if Nvidia will do it, but someone will
See, that is something people forget - Nvidia are by far the market leader, so the market goes where they push and withers away where they neglect it. Like removing the USB-C alt mode port from their cards, in one step killing the simple wired, uncompressed 4K option for PC VR. PSVR2 uses it, which pretty much killed the chances of PSVR2 headsets being PC compatible. But no one cares, because without Nvidia support it is a de facto failed technology.
 

Celcius

°Temp. member
I’d like to see them get back to working on hair, fluids, fire, and smoke. And a big one - object clipping with character models.
 