
Graphical Fidelity I Expect This Gen

For you guys who were talking a few days ago about what can and can't be done in 15 years on a PS6, you've just got to remember the difference between what can technically be done in a demo versus a real game.

That ten-year-old Unreal Engine 3 demo, The Samaritan, was showcased assuming a PC or Xbox 360/PS3 could handle something like that. Yeah, right. That canned demo still looks better than just about every game released now, even though we're up to UE5 on systems 10x more powerful.
That demo has been surpassed 100x over.
In fact, the assets from that demo have been released. It's nothing but low-quality assets whose quality is obfuscated by the lack of light in the demo.

You can make an ugly thing look good if you put it in the dark.
The character in the demo has also been released.

From the skin shader, to the eye shader, to the hair shader, to the material shaders, to actual PBR, to object density, to the number of particles you can have on screen and the fact that the particles are programmable (Niagara), etc.

The actual demo, when brought into the light, looks like sin. It's literally just a street with low-poly buildings and objects.
The assets from the Samaritan demo were used to make the VR Showdown demo, which is available for download.
Batman: Arkham Knight single-handedly surpassed it when it came out, and it wasn't a single street; it was an entire open world.

 
The skin shaders are still not as good as they can be.
Never said they were. There's still a way to go, which is why I believe we will get there next gen, but I'm not talking about perfect realism.
And each of these characters you're seeing isn't how they'll look in a gameplay environment, where you can tell the difference without going into photo mode.
Yes, and the problem lies in a completely different department called lighting. Right now we need to place additional lights around characters during cutscenes to make things look good, the same way movies do it. Just as films don't rely solely on the sun, games can't rely on just one world directional light. The problem is that those additional cinematic dynamic lights are expensive, which is why we need the PS6 and the next Xbox.

They will provide the compute necessary to have those lights in gameplay.

lightingbasics_blog.png


Also, faces that look this real should have environments just as good, and yet they do not.
Yes, and we will solve that this gen with Nanite-like solutions, mesh shaders, and Omniverse.

As I said, if you look at the gameplay of these games out now, they are only slightly better than last gen but running on significantly more powerful hardware.
I'm guessing you are referring to current-gen games (PS5, XSX).
It's almost like you skipped or didn't get what I posted.
The games look that way because tools haven't been developed to leverage the new power.
Not sure how that's hard to understand.

When Unreal Engine 4 released, this was the state of open-world games from 2014 through 2019, and the compute power did not change. So can you answer me one question? What changed?

2014


Still 2014, as Deep Silver tried to make Dead Island 2 with an early UE4 build, and this is how it looked.


Then 2015 with the Kite demo


Then 2019 with Days Gone


So my only question to you is: what changed between 2014 and 2019? Same power... yet night-and-day visuals. What gives?
 
Last edited:

Lethal01

Member
We already have Nanite and mesh shaders. What more do you want?

Photorealism != Perfect Realism.

It doesn't have to be perfect to be photorealistic.

Those are technologies, not results. The results we are getting are far from photorealistic for a long list of reasons, not limited to geometric density.

To be photorealistic, it has to at least look real at a glance. Nothing I've seen is close to making me mistake it for being real, and there are a ton of things that need to be done to reach that level.
 
For you guys who were talking a few days ago about what can and can't be done in 15 years on a PS6, you've just got to remember the difference between what can technically be done in a demo versus a real game.

That ten-year-old Unreal Engine 3 demo, The Samaritan, was showcased assuming a PC or Xbox 360/PS3 could handle something like that. Yeah, right. That canned demo still looks better than just about every game released now, even though we're up to UE5 on systems 10x more powerful.
LOL no it doesn’t…

 

VFXVeteran

Banned
Never said they were. There's still a way to go, which is why I believe we will get there next gen, but I'm not talking about perfect realism.

Yes, and the problem lies in a completely different department called lighting. Right now we need to place additional lights around characters during cutscenes to make things look good, the same way movies do it. Just as films don't rely solely on the sun, games can't rely on just one world directional light. The problem is that those additional cinematic dynamic lights are expensive, which is why we need the PS6 and the next Xbox.
That is completely false. Light rigs per character are a thing of the past since path tracing came out. Proper lighting means doing Monte Carlo sampling of all the lights within an environment. We are many, many years away from doing that in real time.
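To make "Monte Carlo sampling of all lights" concrete, here's a toy one-sample estimator. Every name in it is made up for illustration (it's not any engine's API), and the visibility and BRDF terms are omitted:

```cpp
// Toy one-sample Monte Carlo estimator for direct lighting: pick one of the
// scene's N lights at random and weight its contribution by N (i.e., divide
// by the pdf 1/N). Unbiased, but noisy -- hence denoisers and lots of compute.
#include <cstddef>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 pos; float intensity; };

float estimate_direct(const Vec3& p, const std::vector<Light>& lights,
                      std::mt19937& rng) {
    if (lights.empty()) return 0.0f;
    std::uniform_int_distribution<std::size_t> pick(0, lights.size() - 1);
    const Light& l = lights[pick(rng)];
    float dx = l.pos.x - p.x, dy = l.pos.y - p.y, dz = l.pos.z - p.z;
    float dist2 = dx * dx + dy * dy + dz * dz;      // inverse-square falloff
    float one_sample = l.intensity / dist2;         // this light's contribution
    return one_sample * static_cast<float>(lights.size()); // divide by pdf
}
```

Averaged over many samples per pixel this converges to the sum over every light in the scene, which is exactly why doing it in real time is so expensive.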

Yes, and we will solve that this gen with Nanite-like solutions, mesh shaders, and Omniverse.

No, we will not. Nanite isn't enough to make a good lighting solution. I don't care about the geometry if the lighting is the biggest detractor from getting good results in real-time gameplay. And the next generation, like this one and previous gens, won't have hardware powerful enough to solve that problem at any reasonable quality and resolution.

I'm guessing you are referring to current-gen games (PS5, XSX).
It's almost like you skipped or didn't get what I posted.
The games look that way because tools haven't been developed to leverage the new power.
Not sure how that's hard to understand.
Game tools don't need much work this generation. It's the same old PBR with dynamic lights, where only one light casts a shadow into a shadow map; sprite cards for FX, because true volumetric FX solutions are still too compute-intensive; and hair shaders still using crude approximations due to low bandwidth.
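To sketch what that "same old" loop looks like, here's a minimal version where only the first light pays for a shadow-map test. The types and the shadow lookup are illustrative stubs, not any real engine's API:

```cpp
// Sketch of the "same old" raster shading loop: N dynamic lights accumulated
// with a Lambert term, but only light 0 gets a shadow-map test.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct DynLight { Vec3 dir; float intensity; };

float dot3(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-in for a depth comparison against the light's shadow map.
float sample_shadow_map(const Vec3& /*world_pos*/) { return 1.0f; }

float shade(const Vec3& normal, const Vec3& world_pos,
            const std::vector<DynLight>& lights) {
    float out = 0.0f;
    for (std::size_t i = 0; i < lights.size(); ++i) {
        float ndotl = std::max(0.0f, dot3(normal, lights[i].dir)); // Lambert
        // Only the primary light is shadowed; the rest leak through geometry.
        float shadow = (i == 0) ? sample_shadow_map(world_pos) : 1.0f;
        out += lights[i].intensity * ndotl * shadow;
    }
    return out;
}
```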

Let's just stop this argument with a realization for all of you console gamers: pay attention to Nvidia/AMD in the PC space. They are the ones that will push hardware forward in the future. The consoles are so far behind the highest-end Nvidia cards that they have a LOT of catching up to do. I wouldn't be surprised if the PS6 isn't as powerful as a 3090 is today. And the 3090 struggles just as much as the consoles, just at a higher resolution, and requires DLSS to get around the ray-tracing bandwidth issue.
 
Last edited:

UnNamed

Banned
Samaritan wasn't even a UE4 demo; it was the last revision of UE3, sometimes called UE3.9, which had effects like area lights, tessellation, SSSSS, deferred lighting, etc. That put it on par with every engine of that era, with features already seen in CryEngine some years before.
 
That demo has been surpassed 100x over.
In fact, the assets from that demo have been released. It's nothing but low-quality assets whose quality is obfuscated by the lack of light in the demo.

You can make an ugly thing look good if you put it in the dark.
The character in the demo has also been released.

From the skin shader, to the eye shader, to the hair shader, to the material shaders, to actual PBR, to object density, to the number of particles you can have on screen and the fact that the particles are programmable (Niagara), etc.

The actual demo, when brought into the light, looks like sin. It's literally just a street with low-poly buildings and objects.
The assets from the Samaritan demo were used to make the VR Showdown demo, which is available for download.
Batman: Arkham Knight single-handedly surpassed it when it came out, and it wasn't a single street; it was an entire open world.


Bruh, in Arkham Knight I never saw any self-shadowing, mesh changes, reflections in the outdoor environments, or APEX cloth physics. Hell, that Samaritan character model is still, to me, a notch above current games.
 
That is completely false. Light rigs per character are a thing of the past since path tracing came out. Proper lighting means doing Monte Carlo sampling of all the lights within an environment. We are many, many years away from doing that in real time.
I was referring to filming, not CGI. Hence I posted the picture.
No, we will not. Nanite isn't enough to make a good lighting solution. I don't care about the geometry if the lighting is the biggest detractor from getting good results in real-time gameplay. And the next generation, like this one and previous gens, won't have hardware powerful enough to solve that problem at any reasonable quality and resolution.
I'm sure you and others (like myself) believed there was no way we would have any real form of real-time ray tracing in 2020, not on PC and definitely not on console. Yet here we are. Why? Because of the machine-learning breakthrough in denoisers. So you should never rule anything out. Next gen we will have real NN accelerators that will run all sorts of VFX simulations. I predict that VFX will move to being calculated on these accelerators.
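For the curious, a toy stand-in for what a denoiser buys you: a 1-sample-per-pixel ray-traced image is unusably noisy, but filtering across neighbors recovers a stable result. Real-time denoisers (including the learned ones mentioned above) are vastly more sophisticated; this 3x3 box filter only shows the shape of the idea:

```cpp
// Toy spatial denoiser: average each pixel with its valid 3x3 neighborhood.
#include <vector>

std::vector<float> box_denoise(const std::vector<float>& img, int w, int h) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                        sum += img[ny * w + nx];
                        ++n;
                    }
                }
            out[y * w + x] = sum / n;   // mean of the valid neighborhood
        }
    return out;
}
```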
Game tools don't need much work this generation. It's the same old PBR with dynamic lights, where only one light casts a shadow into a shadow map; sprite cards for FX, because true volumetric FX solutions are still too compute-intensive; and hair shaders still using crude approximations due to low bandwidth.
You say that, yet one of the tools being developed for this gen is Nanite, and it's not even out in production yet. This is called revising history.
Lastly, Nanite's goal is to eventually apply its fundamental thinking to everything.

P8FlLL0.png

Let's just stop this argument with a realization for all of you console gamers: pay attention to Nvidia/AMD in the PC space. They are the ones that will push hardware forward in the future. The consoles are so far behind the highest-end Nvidia cards that they have a LOT of catching up to do. I wouldn't be surprised if the PS6 isn't as powerful as a 3090 is today. And the 3090 struggles just as much as the consoles, just at a higher resolution, and requires DLSS to get around the ray-tracing bandwidth issue.
That's funny, because weren't you the one who said current gen would just be last gen with PC ultra settings?
How did that work out? Do any of the three Unreal Engine demos look like PS4/XBO games with PC ultra settings?

Come on, at least admit when you are wrong rather than trying to revise history. I'm not some typical Sony fanboy.
I'm looking at this from the view of a CS engineer watching the advancements in deep learning.
You can run GTA V from the PS3/360 on a 1,000 TFLOP GPU and it will still look the same.
Why can't you acknowledge that?
 
Last edited:

VFXVeteran

Banned
I was referring to filming, not CGI. Hence I posted the picture.
How can you compare film on green screens to CG?

I'm sure you and others (like myself) believed there was no way we would have any real form of real-time ray tracing in 2020, not on PC and definitely not on console. Yet here we are. Why? Because of the machine-learning breakthrough in denoisers. So you should never rule anything out. Next gen we will have real NN accelerators that will run all sorts of VFX simulations. I predict that VFX will move to being calculated on these accelerators.
No, I was told RT would be "right around the corner" back in 2002 when I interviewed at AMD. They were wrong. I was right.

You say that, yet one of the tools being developed for this gen is Nanite, and it's not even out in production yet. This is called revising history.
Lastly, Nanite's goal is to eventually apply its fundamental thinking to everything.
Nanite is one aspect of getting high-quality geometry and textures. It's the first of its kind in basically forever since video games came out. To me, that's an extremely long time just to get that, and then you have hardware that can't even render it at a good performance level.
That's funny, because weren't you the one who said current gen would just be last gen with PC ultra settings?
How did that work out? Do any of the three Unreal Engine demos look like PS4/XBO games with PC ultra settings?
So far, no game has come out this generation that makes my claim false. Nothing. Demos don't count when there isn't a game associated with them. If you don't have RT lighting, Nanite won't be good enough to make my claim false either. There are just way too many areas of the graphics pipeline that need improvement to make it a "generational leap" from last gen.
 
Last edited:
Nanite is one aspect of getting high-quality geometry and textures. It's the first of its kind in basically forever since video games came out. To me, that's an extremely long time just to get that, and then you have hardware that can't even render it at a good performance level.
From my understanding, even the PS5 can run Nanite at 60 fps at 1440p. It is Lumen that taxes the hardware.

Once Nanite is adapted to work with deformable geometry like human characters, we will essentially have infinite geometric detail for all objects in the scene. The only things remaining will be physics and lighting.
 
So far, no game has come out this generation that makes my claim false. Nothing. Demos don't count when there isn't a game associated with them. If you don't have RT lighting, Nanite won't be good enough to make my claim false either. There are just way too many areas of the graphics pipeline that need improvement to make it a "generational leap" from last gen.

Matrix Awakens and a few games say no…
 
Bruh, in Arkham Knight I never saw any self-shadowing, mesh changes, reflections in the outdoor environments, or APEX cloth physics. Hell, that Samaritan character model is still, to me, a notch above current games.
LOL wut? Above current games? Are you trolling? Almost every PS4-era game has better character models, not even counting PS5-era ones like Ratchet or Aloy… lol.
 
LOL wut? Above current games? Are you trolling? Almost every PS4-era game has better character models, not even counting PS5-era ones like Ratchet or Aloy… lol.
Those games are still rendering clothing that looks and moves like cardboard cutouts, man. Even in the cutscenes of those newer games, every character looks straight-up muscleless in the jaw, with animated lips; shit looks crazy.
 

VFXVeteran

Banned
From my understanding, even the PS5 can run Nanite at 60 fps at 1440p. It is Lumen that taxes the hardware.

Once Nanite is adapted to work with deformable geometry like human characters, we will essentially have infinite geometric detail for all objects in the scene. The only things remaining will be physics and lighting.
You will not have infinite geometric detail that actually gets "rendered". Anything rendered has to be shaded, and that's where hardware limitations come into play.
 

VFXVeteran

Banned
Matrix Awakens and a few games say no…
A few games? What games? And Matrix Awakens is running at a crappy 1080p, and it is still missing several things in its rendering that would bring it a leap forward. FX, hair, transparency cards, shadow-casting local light sources, cloth, etc. are all still milestones to be achieved.
 

VFXVeteran

Banned
Those games are still rendering clothing that looks and moves like cardboard cutouts, man. Even in the cutscenes of those newer games, every character looks straight-up muscleless in the jaw, with animated lips; shit looks crazy.
You will find a common theme on these boards with the Sony gamers. They will always "think" that their exclusive games are leaps forward in graphics technology without any basis in fact. This happens every single generation. When this generation is over, you'll see them first start with their "future predictions" thread claiming huge hardware developments that surpass even mainstream high end PC graphics cards with talk of CG-looking renderings like the movies. Overly ambitious and wishful thinking. It would be so much easier if they allowed reality to set in and talk about continuing to improve upon things that are still shortcomings in games.
 
How can you compare film on green screens to CG?
Because a lot of filming technique shows up in game lighting: three-point lighting, bounce cards, etc.
Nanite is one aspect of getting high-quality geometry and textures. It's the first of its kind in basically forever since video games came out. To me, that's an extremely long time just to get that
You just said there's no need for new tools, that "Game tools don't need much work this generation." So which is it? Do you realize that when new compute power is made available, it leads to new ways of doing things and makes old ideas feasible that weren't possible before? You do realize this change doesn't happen instantly?

For example, take a look at the deep learning breakthrough that happened in 2012. Deep learning to this day still mostly uses the same algorithms that existed back in the '60s. The difference is that we didn't have the compute power required to make those algorithms actually work back then. In 2012, thanks to the advances in GPUs driven by the gaming industry, researchers tried those ancient algorithms and BOOM, it worked, and overnight hundreds of new industries were created or affected by that one event. Tools were created, entire IDEs to train, label, and process datasets for NNs.

Now every device or service you use has been affected by that one moment in 2012.
That's to say: tools don't exist until new (or old) ideas become possible, which then leads to tools being created to explore those ideas. This change doesn't happen instantly.

and then you have hardware that can't even render it at a good performance level.
So which is it: games will look the same, or games have to be native 4K/60 fps to not look the same?
What does 1080p have to do with photorealism? Are you saying games have to be a mythical native 4K to be considered "better-looking"?

With all the reconstruction tech we have (FSR, DLSS, TSR, etc.), trying to render anything natively higher than 1440p is literally a waste of compute power.
So far, no game has come out this generation that makes my claim false. Nothing.
Your statement has already been proven false, laughably so, just by looking at the prior gen and how its games looked.

Just to name a few generational transitions:

RDR 1 > RDR 2
TLOU 1 > TLOU 2
Beyond: Two Souls > Detroit: Become Human
Alan Wake > Control / Quantum Break

Last gen's games looked a gen ahead of their predecessors, and you have given zero arguments against this.
Will there be games that waste compute power trying to hit native 4K and 60 fps and hence don't look a gen ahead? Yes.
Demos don't count when there isn't a game associated with them. If you don't have RT lighting, Nanite won't be good enough to make my claim false either. There are just way too many areas of the graphics pipeline that need improvement to make it a "generational leap" from last gen.
Demos shown last gen have already been surpassed. Not only that, but The Matrix Awakens isn't even your typical demo. It actually runs on consoles and it's playable. Lastly, the devs expressly stated that it had a lot of headroom left for game logic and missions. This is a clear indication that you will never admit that you are wrong, no matter what. You will make up some BS, because this is more about your vendetta against Sony fans than anything to do with graphics.
 
Last edited:
You will not have infinite geometric detail that actually gets "rendered". Anything rendered has to be shaded, and that's where hardware limitations come into play.
Nanite transforms essentially infinite detail into a finite form that can be rendered even by a PS5, and it does so without loss of quality.
"Nanite virtualized micropolygon geometry frees artists to create as much geometric detail as the eye can see," explains Epic. "Nanite virtualized geometry means that film-quality source art comprising hundreds of millions or billions of polygons can be imported directly into Unreal Engine—anything from ZBrush sculpts to photogrammetry scans to CAD data—and it just works.

"Nanite geometry is streamed and scaled in real time so there are no more polygon count budgets, polygon memory budgets, or draw count budgets; there is no need to bake details to normal maps or manually author LODs; and there is no loss in quality."

https://www.vg247.com/playstation-5-unreal-5-demo
 

VFXVeteran

Banned
So which is it: games will look the same, or games have to be native 4K/60 fps to not look the same?
What does 1080p have to do with photorealism? Are you saying games have to be a mythical native 4K to be considered "better-looking"?

With all the reconstruction tech we have (FSR, DLSS, TSR, etc.), trying to render anything natively higher than 1440p is literally a waste of compute power.
And I'm done with this conversation. You sound like the old Microsoft that swore memory didn't need to increase beyond 640K addressable.

Rendering quality is directly determined by the resolution of the framebuffer. Since we are always dealing with discrete pixels, we will never have enough samples to approximate a good enough rendering solution. The mere fact that native 4K looks better than any upsampled image is testament to that. You can be satisfied with an aliased triangle at 1440p if you want, but technology will move forward to make that approximation as close to analog as possible.
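A toy version of that sampling argument, purely illustrative: estimate how much of one pixel an edge covers. A single centered sample gives a binary 0/1 answer (aliasing), while more samples converge toward the true analytic coverage of 0.55:

```cpp
// Toy pixel-coverage experiment: how much of one pixel does an edge cover?
#include <cstdio>
#include <initializer_list>

// The "edge": everything below the line y = 0.3x + 0.4 in pixel-local [0,1)^2.
bool inside(float x, float y) { return y < 0.3f * x + 0.4f; }

float coverage(int n) {                      // n x n stratified samples
    int hit = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            if (inside((i + 0.5f) / n, (j + 0.5f) / n)) ++hit;
    return static_cast<float>(hit) / (n * n);
}

int main() {
    for (int n : {1, 2, 4, 8, 16})
        std::printf("%2d x %-2d samples -> coverage %.4f\n", n, n, coverage(n));
    return 0;
}
```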

When the consoles can fully support native 4K with very little performance degradation, you, like all the rest of the warriors, will switch your tune and declare 4K the new standard rendering resolution.

Take care.
 
Last edited:

VFXVeteran

Banned
Nanite transforms essentially infinite detail into a finite form that can be rendered even by a PS5, and it does so without loss of quality.
That's simply NOT true. We all had the Nanite demo, and the loss of quality is directly determined by the rendering resolution. The higher your resolution, the sharper and more tessellated the geometry. Why don't you go into UE5 yourself and actually "test" that theory on your own PC, then come in here and declare that there is no loss of quality.
 
Last edited:
That's simply NOT true. We all had the Nanite demo, and the loss of quality is directly determined by the rendering resolution. The higher your resolution, the sharper and more tessellated the geometry. Why don't you go into UE5 yourself and actually "test" that theory on your own PC, then come in here and declare that there is no loss of quality.
That's a quote from Epic. Sure, the detail increases with resolution. But for a given resolution it can handle essentially unlimited source geometry by turning it into a finite set of very small polygons to render.
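To make that concrete, here's a conceptual sketch of how unlimited source geometry becomes a finite, resolution-bound set. This is illustrative only, with made-up names, not Epic's actual implementation: per cluster, pick the coarsest precomputed LOD whose geometric error, projected to screen space, stays under about one pixel.

```cpp
// Conceptual sketch of virtualized-geometry LOD selection.
#include <cmath>
#include <cstddef>
#include <vector>

struct ClusterLOD { float geometric_error; int triangle_count; }; // coarse -> fine

// World-space error at distance `dist`, projected into pixels for a camera
// with vertical FOV `fov_y` (radians) and a framebuffer `screen_h` pixels tall.
float projected_error_px(float world_error, float dist, float fov_y, int screen_h) {
    return world_error * screen_h / (2.0f * dist * std::tan(fov_y * 0.5f));
}

int select_lod(const std::vector<ClusterLOD>& lods, float dist,
               float fov_y, int screen_h) {
    for (std::size_t i = 0; i < lods.size(); ++i)
        if (projected_error_px(lods[i].geometric_error, dist, fov_y, screen_h) < 1.0f)
            return static_cast<int>(i);        // coarsest level with sub-pixel error
    return static_cast<int>(lods.size()) - 1;  // fall back to the finest LOD
}
```

Note that the screen height appears in the projection, which is exactly why delivered detail scales with resolution, as VFXVeteran says. And assuming the often-cited target of roughly one rendered triangle per pixel, a 1440p frame is 2560 × 1440 ≈ 3.7 million pixels, so on the order of ~3.7M drawn triangles per frame whether the source mesh has ten million or ten billion polygons.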
 
Last edited:
You will find a common theme on these boards with the Sony gamers. They will always "think" that their exclusive games are leaps forward in graphics technology without any basis in fact. This happens every single generation. When this generation is over, you'll see them first start with their "future predictions" thread claiming huge hardware developments that surpass even mainstream high end PC graphics cards with talk of CG-looking renderings like the movies. Overly ambitious and wishful thinking. It would be so much easier if they allowed reality to set in and talk about continuing to improve upon things that are still shortcomings in games.
You seem to be biased against only Sony… anybody can see the Samaritan demo has been surpassed…
 

VFXVeteran

Banned
You seem to be biased against only Sony… anybody can see the Samaritan demo has been surpassed…
I'm biased against anyone who thinks the games and demos that have come out are the end-all, be-all of graphics, and that $500 console hardware can render CG-like visuals. That just happens to be the Sony crowd thinking that. Everyone else knows we have a long, long way to go and that you can only get so much out of a $500 console.
 
You mean this 37k triangle character model?
And? If you're implying that the Samaritan model uses fewer polygons than these games, that's even worse, man; it's disappointing.

My video games are not doing this. Why aren't retail games moving like this in motion with even more polygons?







You claimed that Arkham Knight surpassed the Samaritan demo, but I disagree. The Arkham Knight I've played on PC (maxed settings) looked like ass compared to this:

http://www.blogcdn.com/www.joystiq.com/media/2011/03/dx11dynamictessellationlogotext.jpg


How many polygons is the Infiltrator demo using, from characters to environment assets? If its poly-count budget is lower than current games too, then where are the video games that look like the Infiltrator demo?

Unreal Engine 4 Infiltrator Demo Released | Geeks3D
 

SlimySnake

Flashless at the Golden Globes
And? If you're implying that the Samaritan model uses fewer polygons than these games, that's even worse, man; it's disappointing.

My video games are not doing this. Why aren't retail games moving like this in motion with even more polygons?







You claimed that Arkham Knight surpassed the Samaritan demo, but I disagree. The Arkham Knight I've played on PC (maxed settings) looked like ass compared to this:

How many polygons is the Infiltrator demo using, from characters to environment assets? If its poly-count budget is lower than current games too, then where are the video games that look like the Infiltrator demo?


Most games nowadays look that good in cutscenes, which is basically what the Samaritan demo is.

the-last-of-us2-ellie-and-dina.gif


7a68b7262d0aa1983543e713c1cee4b1.gif


2fc9e2a8c3787da0e92f7acea52e13927c61255e.gifv

a7637e03c9930926ecb0ee784ab2573a.gif

ChubbyNauticalFattaileddunnart-size_restricted.gif
 
Most games nowadays look that good in cutscenes, which is basically what the Samaritan demo is.

the-last-of-us2-ellie-and-dina.gif


7a68b7262d0aa1983543e713c1cee4b1.gif


2fc9e2a8c3787da0e92f7acea52e13927c61255e.gifv

a7637e03c9930926ecb0ee784ab2573a.gif

ChubbyNauticalFattaileddunnart-size_restricted.gif
Look at the clothing/gear on these characters; I don't see any physics. The Samaritan's trench coat reacts to his actions from the shoulders down; you can tell there's weight. Their mouths are also animating, but where are the jaw muscles in their faces, like on the Samaritan character model?
 
Last edited:

Arioco

Member
Look at the clothing/gear on these characters; I don't see any physics. The Samaritan's trench coat reacts to his actions from the shoulders down; you can tell there's weight. Their mouths are also animating, but where are the jaw muscles in their faces, like on the Samaritan character model?


I hope you're just joking. Just the head of an FFVII Remake character has more triangles and detail than the whole main character in Samaritan. Cloud has 135,000 triangles; his hair alone is almost 50,000 triangles; his head is 60,000 vs. 37,000 for the entire Samaritan character.

Please, stop it, it's not even close. And this is maths; it's not debatable.
 
I hope you're just joking. Just the head of an FFVII Remake character has more triangles and detail than the whole main character in Samaritan. Cloud has 135,000 triangles; his hair alone is almost 50,000 triangles; his head is 60,000 vs. 37,000 for the entire Samaritan character.

Please, stop it, it's not even close. And this is maths; it's not debatable.
Beccie Abey (@the_beckenator) / Twitter
 

Tqaulity

Member
Let's just stop this argument with a realization for all of you console gamers: pay attention to Nvidia/AMD in the PC space. They are the ones that will push hardware forward in the future. The consoles are so far behind the highest-end Nvidia cards that they have a LOT of catching up to do. I wouldn't be surprised if the PS6 isn't as powerful as a 3090 is today. And the 3090 struggles just as much as the consoles, just at a higher resolution, and requires DLSS to get around the ray-tracing bandwidth issue.
OK, you lost me here, bro, but I think I'm starting to see why. First, to say that you wouldn't be surprised if the PS6 isn't as powerful as a 3090 is just... flagrant. Let me explain. I'm sure you know the history of console generations increasing performance by an order of magnitude. Traditionally, consoles would see a ~10x increase in power over a 5-7 year period. Obviously that has slowed down, and most recently we're closer to a 5-7x increase in computational power over a 7-8 year period.

Now, to say that the 3090 would be more powerful than a PS6 is to say that a console releasing in roughly 2027-2028 (if previous trends continue) couldn't exceed a flagship GPU from 2020, which is kind of absurd when you think about it. First of all, every console generation has had graphical capabilities well beyond the flagship cards available at the time of the previous generation console's release (even the lowly 7850 in the PS4 would smoke a 7900 GTX from the PS3 era). Second, you are obviously only thinking about raw hardware specs and not considering the architectural advancements and new feature sets likely to come online in the next five years or so. And third, this is where I point out that there is a clear distinction between theoretical specs and actual real-world performance. If you compare a 3090 to a 6600 XT (the closest PC GPU to the PS5 currently) in terms of specs, you'll see a 3.5x TFLOP advantage, a ~5x increase in shader units, 2x the memory bandwidth, and dedicated RT and tensor cores on the 3090. Yet in terms of real-world performance, the 3090 does not even perform 2x better than a lowly 6600 XT on average in most games (in raster perf) (Link1 | Link2).

Console generations tend to set their targets based on actual game performance, so even the rumored PS5 Pro, with a 2x raster and 2.5x RT perf improvement, would rival or exceed a 3090. A PS6 in ~5 years or so would absolutely blow the 3090 away, even if its transistor count, chip size, and shader count don't suggest that. Size and power-efficiency improvements, along with new feature sets and techniques, will allow graphical evolution well beyond a 3090 today.
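As a rough sanity check on that trend, using commonly cited FP32 figures (raw TFLOPs ignore architectural differences, so treat this as ballpark only): PS4 to PS5 went from 1.84 TF to 10.3 TF, about 5.6x over seven years. Applying the same 5-7x to the PS5 gives 10.3 × 5 ≈ 52 TF to 10.3 × 7 ≈ 72 TF for a PS6, comfortably above the 3090's ~36 TF.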
 

VFXVeteran

Banned
OK, you lost me here, bro, but I think I'm starting to see why. First, to say that you wouldn't be surprised if the PS6 isn't as powerful as a 3090 is just... flagrant. Let me explain. I'm sure you know the history of console generations increasing performance by an order of magnitude. Traditionally, consoles would see a ~10x increase in power over a 5-7 year period. Obviously that has slowed down, and most recently we're closer to a 5-7x increase in computational power over a 7-8 year period.

Now, to say that the 3090 would be more powerful than a PS6 is to say that a console releasing in roughly 2027-2028 (if previous trends continue) couldn't exceed a flagship GPU from 2020, which is kind of absurd when you think about it. First of all, every console generation has had graphical capabilities well beyond the flagship cards available at the time of the previous generation console's release (even the lowly 7850 in the PS4 would smoke a 7900 GTX from the PS3 era). Second, you are obviously only thinking about raw hardware specs and not considering the architectural advancements and new feature sets likely to come online in the next five years or so. And third, this is where I point out that there is a clear distinction between theoretical specs and actual real-world performance. If you compare a 3090 to a 6600 XT (the closest PC GPU to the PS5 currently) in terms of specs, you'll see a 3.5x TFLOP advantage, a ~5x increase in shader units, 2x the memory bandwidth, and dedicated RT and tensor cores on the 3090. Yet in terms of real-world performance, the 3090 does not even perform 2x better than a lowly 6600 XT on average in most games (in raster perf) (Link1 | Link2).

Console generations tend to set their targets based on actual game performance, so even the rumored PS5 Pro, with a 2x raster and 2.5x RT perf improvement, would rival or exceed a 3090. A PS6 in ~5 years or so would absolutely blow the 3090 away, even if its transistor count, chip size, and shader count don't suggest that. Size and power-efficiency improvements, along with new feature sets and techniques, will allow graphical evolution well beyond a 3090 today.

1. Raster performance isn't a concern these days. RT is.
2. Consoles don't get architected at the very last minute before a release. A 2027 release would mean at least a 4-5 year development cycle on silicon, which puts speccing out chip designs right around now or early 2023, NOT deciding in 2026 to make a new console.
3. The 3090 was a significant jump in architecture from the 2x00-series boards, so its delta is significantly better than the 1x00-to-2x00 jump.
4. Power limitations are always a problem for consoles, where the price point has to be under $500 MSRP. There is no technology in the AMD space TODAY that can make a chip faster than the 3090 in RT performance, with Tensor-like capabilities, increased VRAM, and a better CPU, all for under $500.

Be realistic about hardware expectations, like most of you guys WEREN'T this generation, and you won't be shocked by the low performance numbers. I told people two years ago that the PS5 was only equal to a 2080 in game performance and people scoffed at it. And here we are today: limited-bandwidth consoles that cannot even sustain native 4K/60 rendering, let alone full RT features, with NO hardware AI for reconstruction. This is the very reason mid-gen refreshes started last gen. The hardware is continuously behind on power compared to what games are trying to implement.
 
Last edited:

PUNKem733

Member
Ok you lost me here bro but I think I'm starting to see why. First, to say that you wouldn't be surprised if the PS6 isn't as powerful as a 3090 is just....flagrant. Let me explain. I'm sure you know the whole console generation increasing performance by an order of magnitude. Traditionally, consoles would see a ~10x increase in power in a 5-7 year period. Obviously that has slowed down and most recently we're closer to 5-7x increase in computational power in a 7-8 year period.

Now to say that the 3090 would be more powerful than a PS6, is to say that a console releasing in roughly 2027-2028 (if previous trends continue) couldn't exceed a flagship GPU from 2020 which is kind of absurd when you think about it. First of all, every console generation has had graphically capabilities well beyond the flagship cards available at the time of the previous generation console's release (even a lowly 7850 in PS4 would smoke a 7900 GTX from the PS3 era). Second, you are obviously only thinking about raw hardware specs and not considering architectural advancements and new feature sets that are likely to come online in the next 5 years or so. And third, this is where I point out there is a clear distinction between theoretical specs and actual real world performance. If you look at a 3090 vs a 6600XT (closest PC GPU to PS5 currently) in terms of specs, you'll see a 3.5x TFLOP advantage, a ~5x increase in shader units, 2x mem bandwidth, and dedicated RT hardware in the form of tensor cores on the 3090. Yet, you do realize that in terms of real world performance, the 3090 does not even perform 2x better than a lowly 6600XT on average in most games (in raster perf) (Link1 | Link2).

Console generations tend to base target on actual game performance so even the rumored PS5 pro with 2x raster and 2.5x RT perf improvement would rival or exceed a 3090. A PS6 in ~5 years or so would absolutely blow the 3090 away even if it's transistor count, chip size, and shader counts don't suggest that. Size and power efficiency improvements along with new feature sets and techniques will allow graphical evolution well beyond a 3090 today.
The PS6 will have an 80-100 TF GPU. Thinking it will be below a 3090 is HILARIOUS!!!
 
There is no technology in the AMD space TODAY that can make a chip faster than the 3090 in RT performance, with Tensor-like capabilities, increased VRAM, and a better CPU, all for under $500.
You seem to forget that the temporary slowdown in GPU improvement was due to delays in extreme ultraviolet (EUV) lithography. Now, after a long wait, EUV is available, and it will allow a timely transition to 3 nm, 1.5 nm, and perhaps even under 1 nm.

It is not AMD that will underlie the main gains, but TSMC. Unless something comes up, the PS6 is likely to be on something like 1.5 nm or 1 nm. And each node improvement, from 8 nm (3090) to 5 nm to 3 nm to 1.5 nm, comes with improved density and energy efficiency.
In mid-2020 TSMC claimed its (N5) 5 nm process offered 1.8x the density of its 7 nm N7 process, with a 15% speed improvement or 30% lower power consumption; an improved sub-version (N5P or N4) was claimed to improve on N5 with +5% speed or -10% power. (wiki)

For example, TSMC has stated that its 3 nm FinFET chips will reduce power consumption by 25 to 30 percent at the same speed, increase speed by 10 to 15 percent at the same amount of power, and increase transistor density by about 33 percent compared to its previous 5 nm FinFET chips. (wiki)

And 5 nm is already more energy-efficient than 7 nm.
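Compounding the figures quoted above (and taking TSMC's marketing numbers at face value, which is optimistic): N7 to N5 is 1.8x density, and N5 to N3 adds about 1.33x, so N7 to N3 works out to roughly 1.8 × 1.33 ≈ 2.4x the transistor density. On power, -30% followed by another -25 to -30% at the same speed compounds to about 0.70 × 0.72 ≈ 0.50, i.e. roughly half the power.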
 
Last edited:
I'm biased against anyone who thinks the games and demos that have come out are the end-all, be-all of graphics, and that $500 console hardware can render CG-like visuals. That just happens to be the Sony crowd thinking that. Everyone else knows we have a long, long way to go and that you can only get so much out of a $500 console.
It already has with The Matrix Awakens demo…
 

sendit

Member
Let's just stop this argument with a realization for all of you console gamers: pay attention to Nvidia/AMD in the PC space. They are the ones that will push hardware forward in the future. The consoles are so far behind the highest-end Nvidia cards that they have a LOT of catching up to do. I wouldn't be surprised if the PS6 isn't as powerful as a 3090 is today. And the 3090 struggles just as much as the consoles, just at a higher resolution, and requires DLSS to get around the ray-tracing bandwidth issue.

Um....lol.
 

VFXVeteran

Banned
You seem to forget that the temporary slowdown in GPU improvement was due to delays in extreme ultraviolet (EUV) lithography. Now, after a long wait, EUV is available, and it will allow a timely transition to 3 nm, 1.5 nm, and perhaps even under 1 nm.

It is not AMD that will underlie the main gains, but TSMC. Unless something comes up, the PS6 is likely to be on something like 1.5 nm or 1 nm. And each node improvement, from 8 nm (3090) to 5 nm to 3 nm to 1.5 nm, comes with improved density and energy efficiency.




And 5 nm is already more energy-efficient than 7 nm.
My argument still stands. Regardless of the nm size, they would have to develop it TODAY, as in right now, in order to put something out for the PS6. And the PS6 isn't the next generation of the latest tech anyway; the AMD GPUs for the PC are. So we will know what a future console will look like when the next GPU from AMD comes out, and it will more than likely be more powerful than the PS6. The PS6 will probably fall somewhere at the low end of that new generation of cards, which will probably be less powerful than a 3090 (regardless of efficiency advances).
 

sendit

Member
My argument still stands. Regardless of the nm size, they would have to develop it TODAY, as in right now, in order to put something out for the PS6. And the PS6 isn't the next generation of the latest tech anyway; the AMD GPUs for the PC are. So we will know what a future console will look like when the next GPU from AMD comes out, and it will more than likely be more powerful than the PS6. The PS6 will probably fall somewhere at the low end of that new generation of cards, which will probably be less powerful than a 3090 (regardless of efficiency advances).

Right......

This was the highest-end gaming card you could buy at the time of the PS4's release: the GTX 780 Ti Founders Edition. Are you claiming the PS5 is less capable?

This was the highest-end gaming card you could buy at the time of the PS3's release: the GeForce 7950 GX2. Are you claiming the PS4 is less capable?
 
Last edited:

VFXVeteran

Banned
Gordon Ramsay Facepalm GIF by Masterchef


yes, that 1080p, super aliased demo... really CGI like...
This is why I waste my time talking to these guys. They make these claims EVERY single generation. It's laughable. And then they predict supercomputer-like hardware for the next generation, ignoring any advances in the PC space. It's as if the consoles are always the lead platform for new advances, despite the consoles adopting old tech developed years ago. They somehow believe that the next console hardware will be better than the latest hardware that comes from Nvidia/AMD. Instead, they should be paying attention to what the PC leads with and scaling DOWN from that.
 

VFXVeteran

Banned
Right......

This was the highest end gaming card you could buy at the time of PS4's release: GTX 780 Ti Founders Edition. Are you claiming the PS5 is less capable?

This was the highest end gaming card you could but at the time of PS3's release: GeForce 7950 GX2. Are you complaining the PS4 is less capable?
I'm claiming that innovation isn't linear and constant, the way you guys try to fit consoles into that box. The 3090 is significantly better than the 2080 Ti in every way. That denoted a different "rate of change" in the GPU sector. You simply CANNOT add 1 + 1 = 2 every single generation. Life doesn't work that predictably.
 

sendit

Member
I'm claiming that innovation isn't linear and constant, the way you guys try to fit consoles into that box. The 3090 is significantly better than the 2080 Ti in every way. That denoted a different "rate of change" in the GPU sector. You simply CANNOT add 1 + 1 = 2 every single generation. Life doesn't work that predictably.

Then why are you predicting the future performance/capabilities of the PS6?

And no, it isn't. Aside from missing dedicated hardware for AI-accelerated functions, the PS5 and XSX support pretty much all of the latest technologies that surround the RDNA 2 architecture.
 
Last edited: