
Graphical Fidelity I Expect This Gen

ChiefDada

Gold Member
I know all the reasons. I know the whys behind the extended cross-gen period. I understand everything you're saying. I just don't care. I paid $500 for this thing TWO years ago on the promise of next-gen games, and I have not gotten my money's worth. Plain and simple. I don't care about COVID. I don't care about how long it takes. I don't care about crunch or tools or engines, because that's not my job. I am the consumer, and I am tired of being jerked around.

If GG, SSM and PD had started next-gen dev in 2017 and 2018, they would've had access to devkits or GPUs and CPUs very similar to what we have out there right now. I am not buying this "it's taking more time" argument, because they had more time and they blew it working on last-gen games, mocapping dozens of hours of side content no one gives a shit about instead of spending resources on making visuals look this good, or on systems that use the fucking CPU and I/O Cerny built for them.

UE4 had some very impressive demos out in 2019. This was possible back then. I really don't care about the whys. They are excuses for lazy, unambitious developers who were supposed to be on our side. We have spent decades championing artists like Cory and Neil, and what do they do as soon as they become heads of the studio? They fold and turn into suits who prioritize profits and ease over ambition and pushing boundaries. What's the difference between these guys and Phil, who went and straight up told everyone cross-gen is here to stay in 2020, some TWO years after he announced next-gen consoles in 2018? WTF was he doing in those two years? Why was Forza Horizon 5 greenlit as a last-gen game AFTER the E3 2018 conference? Why was Halo downgraded in 2019, when they could've simply scrapped the last-gen versions to achieve their true vision of a vast wildlife system?

It's not that I don't understand. I just don't care. And I am not going to sit here and defend or excuse practices that have led to ZERO next-gen games releasing in the third year (or second full year) of this generation. Especially after seeing what's possible. Not that I didn't know this was possible. I have been getting laughed at for 4 years for saying photorealism was possible this gen, way before the Matrix or these Unity demos came out.

Not only was this an extremely hilarious read, but also very true. We've spent the money for the consoles off the promise of delivering next-gen experiences, and we've only had a handful of such experiences, which have been too few and far between. I don't mind cross-gen and understand the unique circumstances for why it needs to exist right now; however, Sony should have instituted separate development environments for PS4 and PS5 games. Imo, you especially have this flexibility for narrative-centric games. For example, Horizon's story is the same on PS5 and PS4; great, there should be no divergence there, but have the next-gen rendering pipeline applied to the PS5 version. As great as Horizon was, the PS4 and PS5 versions shouldn't have been so close in comparison. It made me sick to my stomach to hear directly from Guerrilla that certain programming methods that would have yielded better performance were abandoned for HFW because "they didn't want to leave out PS4". Give us the experiences we paid for.

At the very least, maybe provide PS5 owners with side story/DLC-like content that simply can't be experienced on PS4. It will still be available and ready to be experienced by PS4 owners once they are able to upgrade. There was another argument from some developer stating that parallel dev environments would create different experiences, as if that were a bad thing. Well, duh! That's exactly what I want! Even if I hadn't been able to obtain a PS5 by now, I wouldn't be upset with such policies one bit; it would be totally understandable and would make me that much more determined to obtain one as soon as possible. Instead, you actually have some consumers who are less inclined to transition because the cross-gen games aren't much different from each other. It's like low vs. ultra PC settings, which has always been boring to me. Higher resolution and framerates don't excite me. Applications of the geometry engine, I/O tech, etc. as Cerny described them to us are what made me purchase the console.
 
I agree: even if the game looks worlds apart on PS4 and PS5, that's fine…just have everything exactly the same, with wayyyyyyyy better quality on PS5…The thing is, they are delivering an experience, and if one game looks CGI quality and the other PS4 quality, the experience is too different and the artist's intent is gone.
 

alloush

Member


nTVVa35.png
 

Sosokrates

Report me if I continue to console war
Seems like I’ve been right about UE5….hmmmm👍
UE5 is still a long way behind modern CG in many aspects. While it may do well at static environments, particle effects, fluid simulations, character models, lighting, etc. will be far superior in modern CGI to gameplay UE5 visuals.
 
If studios continue to pursue close-to-native 4K resolutions and 60fps performance modes for their games for the rest of the gen, you can say bye-bye to full-length AAA games with Matrix Awakens-type visual fidelity on consoles. You'll have the whole generation filled up with games that won't go much past the best-looking next-gen titles that we know of in terms of IQ. There's a reason why the Matrix demo runs at 1440p 30fps on both the PS5 and the Series X.
 

Raven77

Member


I think we will get games which are more populated with life forms and objects on screen. There are probably better examples, but this fan-made RDR2 craziness showed an impressive amount of NPCs, objects and weather effects all going on at once.


Can someone explain to me WTF I just watched...
 
If studios continue to pursue close-to-native 4K resolutions and 60fps performance modes for their games for the rest of the gen, you can say bye-bye to full-length AAA games with Matrix Awakens-type visual fidelity on consoles. You'll have the whole generation filled up with games that won't go much past the best-looking next-gen titles that we know of in terms of IQ. There's a reason why the Matrix demo runs at 1440p 30fps on both the PS5 and the Series X.
I think eventually they'll fold and do 1440p to get those CGI-like visuals. The hard part will be announcing no performance mode, though. The whiny, childish internet will inevitably jump down the throat of whatever developer wants to trade 60fps for movie visuals.
 

Heimdall_Xtreme

Jim Ryan Fanclub's #1 Member
Hopefully technology will allow us to combine 2 different graphics engines.

A Pokémon or Sonic-style game, similar to the movies that combined realism with animation.

Similar to Roger Rabbit.

5NP3WGBXSZG6TP4KLU2BBZYJYE.jpg

Japan Studio gave us something close with their graphics engine from The Last Guardian.
 

ChiefDada

Gold Member
Where are you getting this info from? We don't have any metrics on how a mid/high-end PC using DirectStorage performs in comparison to the PS5. PCs can utilise all video RAM and a majority of system RAM, so I don't know what you mean when you say PCs have inactive RAM. Video cards have nearly as much available RAM as the PS5, plus system RAM, so I don't think RAM will be an issue for PCs.

This is what I am referring to:



A PS5 rendering pipeline that fully leverages I/O wouldn't have so much texture and other heavy asset data parked in memory like PCs currently need to do. When 8GB+ can be swapped in less than a second, so much more RAM could be available to store BVH data. Take it one step further: perhaps scene geometry can be prioritized in GPU cache to the point where the GPU doesn't need to call back to RAM to fetch nearly as often. People always focus on comparing RAM bandwidth, but there is no comparison between GPU cache bandwidth and RAM bandwidth. To me, this is proof that a PS5 could theoretically perform RT much better than even a 3070 under the right circumstances and with an optimized rendering pipeline, despite the compute disadvantage.

You mentioned using system memory, but that would impact performance on PC. We will see how far DirectStorage will allow PC to close the data management gap, but as of now, the console (specifically the PS5) has the advantage in streaming and data management.
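
To put rough numbers on the streaming argument (every figure below is an illustrative assumption, not a measurement): the data you must keep parked in RAM is roughly whatever you can't re-fetch inside the player's reaction window, so the faster the pipe, the smaller the resident set. A minimal sketch:

Code:
// Back-of-envelope for the streaming argument above. Every figure is an
// illustrative assumption, not a measured number.
#include <cstdio>

int main() {
    const double window_s  = 0.5;  // assumed worst case: player snaps the camera around
    const double hdd_gbps  = 0.1;  // ~100 MB/s spinning disk (last gen's baseline)
    const double nvme_gbps = 5.5;  // PS5-class raw NVMe read rate

    // How much data each source can re-fetch inside the window. If the pipe
    // can't refill in time, that data has to sit in RAM "just in case".
    std::printf("HDD refill per window:  %.2f GB -> must park nearly everything in RAM\n",
                hdd_gbps * window_s);
    std::printf("NVMe refill per window: %.2f GB -> far less needs to stay resident\n",
                nvme_gbps * window_s);
    return 0;
}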
 

Sosokrates

Report me if I continue to console war
This is what I am referring to:



A PS5 rendering pipeline that fully leverages I/O wouldn't have so much texture and other heavy asset data parked in memory like PCs currently need to do. When 8GB+ can be swapped in less than a second, so much more RAM could be available to store BVH data. Take it one step further: perhaps scene geometry can be prioritized in GPU cache to the point where the GPU doesn't need to call back to RAM to fetch nearly as often. People always focus on comparing RAM bandwidth, but there is no comparison between GPU cache bandwidth and RAM bandwidth. To me, this is proof that a PS5 could theoretically perform RT much better than even a 3070 under the right circumstances and with an optimized rendering pipeline, despite the compute disadvantage.

You mentioned using system memory, but that would impact performance on PC. We will see how far DirectStorage will allow PC to close the data management gap, but as of now, the console (specifically the PS5) has the advantage in streaming and data management.


It's shown in the DF breakdown that Spiderman on PC is only utilising a fraction of an NVMe drive's capabilities; that's why it has such a heavy CPU requirement, because the CPU is doing the decompression.

What you're referring to is what DirectStorage and nvCOMP do on PC, where the GPU is used to do decompression.
 

ChiefDada

Gold Member
It's shown in the DF breakdown that Spiderman on PC is only utilising a fraction of an NVMe drive's capabilities; that's why it has such a heavy CPU requirement, because the CPU is doing the decompression.

What you're referring to is what DirectStorage and nvCOMP do on PC, where the GPU is used to do decompression.

That's my point. There isn't a solution for PC yet; it's still a work in progress. As of now, DirectStorage is only useful for load times, not in-game asset streaming.
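
For reference, this is roughly what a DirectStorage file-to-GPU-buffer request looks like today, loosely following Microsoft's public DirectStorage 1.x samples. Treat it as a sketch: error handling is omitted, field names and details are approximate, and this load path is exactly the load-time case described above; sustained in-game streaming and GPU decompression are the parts still maturing.

Code:
// Rough sketch of a DirectStorage 1.x read (NVMe -> GPU buffer), loosely
// following Microsoft's public samples (dstorage.h). Error handling omitted.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void LoadBufferViaDirectStorage(ID3D12Device* device, ID3D12Resource* destBuffer,
                                const wchar_t* path, uint32_t byteCount)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // One request: read straight from the file into a GPU buffer, skipping the
    // classic ReadFile -> system RAM -> upload-heap copy round trip.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = byteCount;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = byteCount;
    request.UncompressedSize            = byteCount;  // no compression in this sketch

    queue->EnqueueRequest(&request);
    queue->Submit();  // completion is signaled via an ID3D12Fence (omitted here)
}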
 

Sosokrates

Report me if I continue to console war
That's my point. There isn't a solution for PC yet; it's still a work in progress. As of now, DirectStorage is only useful for load times, not in-game asset streaming.
If that's the case, I wonder what is preventing data from being streamed directly from the SSD. I mean, the only things that are different between the PS5/Series consoles and a PC are the additional decompression chips and cache scrubbers.
I really do facepalm when people think there is some magical thing these consoles have. The reason for the decompression and I/O assistance chips is to save costs and gain efficiency: unlike on a PC, GPU and CPU resources can't be used for these things, because if they were, there would be less compute for everything else.
 
Hopefully technology will allow us to combine 2 different graphics engines.

A Pokémon or Sonic-style game, similar to the movies that combined realism with animation.

Similar to Roger Rabbit.

5NP3WGBXSZG6TP4KLU2BBZYJYE.jpg

Japan Studio gave us something close with their graphics engine from The Last Guardian.
You don’t have to combine two engines to do that, it’s just a matter of applying different shaders to objects.
So yes this can already be done easily.

The reason why nobody does it is because it’s butt ugly.
 

Turk1993

GAFs #1 source for car graphic comparisons
You don’t have to combine two engines to do that, it’s just a matter of applying different shaders to objects.
So yes this can already be done easily.

The reason why nobody does it is because it’s butt ugly.
Hopefully technology will allow us to combine 2 different graphics engines.

A Pokémon or Sonic-style game, similar to the movies that combined realism with animation.

Similar to Roger Rabbit.

5NP3WGBXSZG6TP4KLU2BBZYJYE.jpg

Japan Studio gave us something close with their graphics engine from The Last Guardian.
You can do that easily; even Spiderman had some amazing costumes with cartoon-style graphics.
spider-clan-armor-spiderman-ps4-turf-wars.jpg
 

ChiefDada

Gold Member
If that's the case, I wonder what is preventing data from being streamed directly from the SSD. I mean, the only things that are different between the PS5/Series consoles and a PC are the additional decompression chips and cache scrubbers.

That's like saying the only thing different with Nvidia cards is DLSS; it's the differences that allow them to solve issues better than their rivals.

I really do facepalm when people think there is some magical thing these consoles have. The reason for the decompression and I/O assistance chips is to save costs and gain efficiency: unlike on a PC, GPU and CPU resources can't be used for these things, because if they were, there would be less compute for everything else.

It's not magical, just consoles taking advantage of a closed environment. On-the-fly decompression is more than cost savings. When you have 100-200GB games (which I believe will be common as the generation goes on), being able to stream multiple GBs of data in and out of memory with minimal latency will be a necessity.
 

Sosokrates

Report me if I continue to console war
That's like saying the only thing different with Nvidia cards is DLSS; it's the differences that allow them to solve issues better than their rivals.
No, DLSS is proprietary Nvidia tech; what we are talking about is not. It turns out GPUs and CPUs are also very good at decompression. Using low-cost, special-purpose chips to do decompression is a cheaper way of doing it, but not better than using the GPU or CPU. It makes sense for consoles because they get a higher-performing console at a reasonable price point, but that's not an issue on PC. PCs will always be better, because tech advances very quickly.

It's not magical, just consoles taking advantage of a closed environment. On-the-fly decompression is more than cost savings. When you have 100-200GB games (which I believe will be common as the generation goes on), being able to stream multiple GBs of data in and out of memory with minimal latency will be a necessity.

All this will be possible on PC. What specifically is stopping a modern PC from swapping out data from the RAM as fast as the SSD can feed it?

A PC with a 6700 XT, a Ryzen 5600 and a 7GB/s NVMe drive will be able to give very similar performance in current-gen games all throughout the gen.
 

ChiefDada

Gold Member
No, DLSS is proprietary Nvidia tech; what we are talking about is not. It turns out GPUs and CPUs are also very good at decompression. Using low-cost, special-purpose chips to do decompression is a cheaper way of doing it, but not better than using the GPU or CPU. It makes sense for consoles because they get a higher-performing console at a reasonable price point, but that's not an issue on PC. PCs will always be better, because tech advances very quickly.

How is using GPU/CPU compute for decompression better than fixed-function hardware? You say PC tech advances quickly, yet here we are with no PC that can match console end-to-end data throughput. How can you say PC will always be better when PCs are literally behind on this aspect today?

All this will be possible on PC. What specifically is stopping a modern PC from swapping out data from the RAM as fast as the SSD can feed it?

I would assume abstraction layers, having to account for different hardware and infinite variations of PC builds. The more custom an engine designed to flex the PS5's I/O, the worse the abstraction bottleneck, I presume.



A PC with a 6700 XT, a Ryzen 5600 and a 7GB/s NVMe drive will be able to give very similar performance in current-gen games all throughout the gen.

I mean, we're already seeing Spiderman Remastered, a last-gen game, giving PC CPUs a good workout with minuscule 25-200MB/s reads from SSD storage, so no.
 

Sosokrates

Report me if I continue to console war
yet here we are with no PC that can match console end-to-end data throughput.
Really? What's your source?
I would assume
OK, so you don't know. I'll wait until we get confirmation on this.
I mean, we're already seeing Spiderman Remastered, a last-gen game, giving PC CPUs a good workout with minuscule 25-200MB/s reads from SSD storage, so no.

As Nixxes said to Alex, they are not using DirectStorage or anything like that; if you look at the UE5 demos on PC, they have a more stable data stream from the SSD. We don't know what the limits are for data streaming from the SSD on PC.
 

ChiefDada

Gold Member
Really? What's your source?

Are you serious? I thought this argument was officially dead when the PCMR Messiah (LTT) himself admitted that console I/O is much more performant than on PC?

OK, so you don't know. I'll wait until we get confirmation on this.

You asked for a specific reason, and my guess is abstraction. It is a fact that abstraction overhead and a lack of dedicated hardware are key bottlenecks preventing PC I/O from performing on the same level as consoles.

As Nixxes said to Alex, they are not using DirectStorage or anything like that; if you look at the UE5 demos on PC, they have a more stable data stream from the SSD.

Of course they're not using DirectStorage, because no API solution presently exists that allows in-game streaming directly to the GPU. That is why PC CPUs are getting taxed heavily with decompression. The UE5 demo on PC follows conventional data management, i.e. cache most assets in RAM. That simply won't work with premier next-gen games.

We don't know what the limits are for data streaming from the SSD on PC.

Lol, yes we do. That's why DirectStorage and RTX IO are being worked on: to address this significant bottleneck.
 

Sosokrates

Report me if I continue to console war
Are you serious? I thought this argument was officially dead when the PCMR Messiah (LTT) himself admitted that console I/O is much more performant than on PC?



You asked for a specific reason, and my guess is abstraction. It is a fact that abstraction overhead and a lack of dedicated hardware are key bottlenecks preventing PC I/O from performing on the same level as consoles.



Of course they're not using DirectStorage, because no API solution presently exists that allows in-game streaming directly to the GPU. That is why PC CPUs are getting taxed heavily with decompression. The UE5 demo on PC follows conventional data management, i.e. cache most assets in RAM. That simply won't work with premier next-gen games.



Lol, yes we do. That's why DirectStorage and RTX IO are being worked on: to address this significant bottleneck.

So how will current-gen games work on PC then, if PCs are so inferior?
 

ChiefDada

Gold Member
So how will current-gen games work on PC then, if PCs are so inferior?

In the short term they will use brute force, as they've done in the past. The issue for PC then is that the on-paper compute advantage will not translate 1:1 into real-world performance, as we are seeing with Spiderman Remastered. If I'm not mistaken, Ratchet & Clank: Rift Apart was also in the GeForce leak; it will be interesting to see the PC performance comparison, since it utilizes the PS5's I/O much more.
 

Sosokrates

Report me if I continue to console war
In the short term they will use brute force, as they've done in the past. The issue for PC then is that the on-paper compute advantage will not translate 1:1 into real-world performance, as we are seeing with Spiderman Remastered. If I'm not mistaken, Ratchet & Clank: Rift Apart was also in the GeForce leak; it will be interesting to see the PC performance comparison, since it utilizes the PS5's I/O much more.
Which is what I have been trying to say...
That the specialist hardware in the PS5/XS can be done with traditional GPUs and CPUs.

Also, Spiderman could be optimised better for PC, so it's not evidence that the console SSD + I/O solutions are better than a 20% more powerful PC.
 

winjer

Gold Member
This is what I am referring to:



A PS5 rendering pipeline that fully leverages I/O wouldn't have so much texture and other heavy asset data parked in memory like PCs currently need to do. When 8GB+ can be swapped in less than a second, so much more RAM could be available to store BVH data. Take it one step further: perhaps scene geometry can be prioritized in GPU cache to the point where the GPU doesn't need to call back to RAM to fetch nearly as often. People always focus on comparing RAM bandwidth, but there is no comparison between GPU cache bandwidth and RAM bandwidth. To me, this is proof that a PS5 could theoretically perform RT much better than even a 3070 under the right circumstances and with an optimized rendering pipeline, despite the compute disadvantage.

You mentioned using system memory, but that would impact performance on PC. We will see how far DirectStorage will allow PC to close the data management gap, but as of now, the console (specifically the PS5) has the advantage in streaming and data management.



Although I agree that 8GB is too small for a GPU of the 3070 class, I disagree with your assessment that an SSD can do better than RAM.
Streaming data from RAM is many times faster than streaming from an SSD. Might I remind you that even an old kit of DDR4-3200 in dual channel can do 51.2 GB/s, or that there are kits of DDR5 close to pushing 100 GB/s.
Not to mention that latency on an SSD is several orders of magnitude worse than on DRAM.
If RAM can't stream data quickly enough to VRAM, you can be sure that an SSD can't do it either.
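
Those figures are just the standard peak-bandwidth arithmetic: transfer rate × 8 bytes per 64-bit channel × channels. A quick sketch:

Code:
// Peak DRAM bandwidth = transfer rate (MT/s) * 8 bytes per 64-bit channel * channels.
#include <cstdio>

double peak_gbps(double mtps, int channels) {
    return mtps * 8.0 * channels / 1000.0;
}

int main() {
    std::printf("DDR4-3200, dual channel: %.1f GB/s\n", peak_gbps(3200, 2)); // 51.2
    std::printf("DDR5-6400, dual channel: %.1f GB/s\n", peak_gbps(6400, 2)); // 102.4
    // Versus a PCIe 4.0 x4 NVMe link: ~7.9 GB/s raw ceiling, with top drives
    // sustaining ~7 GB/s -- roughly 7x below even old dual-channel DDR4.
    return 0;
}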

No, the PS5 can't do RT better than an RTX 3070. The BVH structure is not so big that it requires many GB of data in the GPU's VRAM.
In fact, it should sit in the caches, for low latency, as it's accessed frequently.
This is one of the uses of the Infinity Cache on RDNA2, as described by AMD. But guess what: even with it, RDNA2 is much slower than Ampere at ray tracing.
 

ChiefDada

Gold Member
Although I agree that 8GB is too small for a GPU of the 3070 class, I disagree with your assessment that an SSD can do better than RAM.
Streaming data from RAM is many times faster than streaming from an SSD. Might I remind you that even an old kit of DDR4-3200 in dual channel can do 51.2 GB/s, or that there are kits of DDR5 close to pushing 100 GB/s.
Not to mention that latency on an SSD is several orders of magnitude worse than on DRAM.
If RAM can't stream data quickly enough to VRAM, you can be sure that an SSD can't do it either.

I have never argued and would never argue this. My point is that the console I/O is apparently so fast that even textures behind the player don't need to be in RAM. This isn't just Cerny hyperbole anymore; it's become reality, as Insomniac has confirmed this is what is actually happening in Ratchet and Clank. To my knowledge, PCs are not able to do this yet.

No, the PS5 can't do RT better than an RTX 3070. The BVH structure is not so big that it requires many GB of data in the GPU's VRAM.
In fact, it should sit in the caches, for low latency, as it's accessed frequently.

That is the point I'm making. There are two main factors in RT performance: memory access and compute. Consoles will never win the compute battle, but with the new I/O, I can easily see them outclassing PCs in terms of memory performance, at least in the short term. Historically, memory access has been poor due to poor cache hit rates for scene geometry. This applies to all GPUs; that is why memory bandwidth has been so critical for RT. But even then, bandwidth contention across the memory bus has also hindered RT performance.

With the new console I/O, my question is whether there is a cascade effect: how much RAM is suddenly available as a result of not needing to reserve distant data? Can data that was previously stored in GPU cache, and that perhaps doesn't require cache bandwidth speeds, now be placed in main memory? Does the cache coherency hardware outclass the Infinity Cache's effective bandwidth-multiplying effect? Again, it's not a question of whether the SSD is faster than RAM, or RAM faster than GPU cache; the question is whether or not a given tier of memory is fast enough for the type of data it's storing.

This is one of the uses of the Infinity Cache on RDNA2, as described by AMD. But guess what: even with it, RDNA2 is much slower than Ampere at ray tracing.

To my knowledge the IC helps a lot, but its size is not enough because of continued data contention in the cache.
 

ChiefDada

Gold Member
Which is what I have been trying to say...
That the specialist hardware in the PS5/XS can be done with traditional GPUs and CPUs.

Yes, but at a performance penalty.

Also, Spiderman could be optimised better for PC, so it's not evidence that the console SSD + I/O solutions are better than a 20% more powerful PC.

By definition they are better at that task, because they have fixed-function hardware that doesn't take compute away from the CPU and/or GPU. Remember, we are talking about PC folks here; they would spend hundreds of dollars extra for a couple of additional frames from a marginally better GPU. You can be sure they would welcome any hardware that frees up their CPU/GPU to perform more compute.
 

winjer

Gold Member
I have never argued and would never argue this. My point is that the console I/O is apparently so fast that even textures behind the player don't need to be in RAM. This isn't just Cerny hyperbole anymore; it's become reality, as Insomniac has confirmed this is what is actually happening in Ratchet and Clank. To my knowledge, PCs are not able to do this yet.

The SSD in the PS5 is very fast, mostly because of the file system, but it is no replacement for RAM or VRAM.
But it does make streaming much more efficient, requiring less data to be cached in VRAM.
The PC can't stream data from an SSD as fast, but it really doesn't need to. Any modern PC has at least 16GB of RAM, and 32GB is becoming the standard for new PCs.

That is the point I'm making. There are two main factors in RT performance: memory access and compute. Consoles will never win the compute battle, but with the new I/O, I can easily see them outclassing PCs in terms of memory performance, at least in the short term. Historically, memory access has been poor due to poor cache hit rates for scene geometry. This applies to all GPUs; that is why memory bandwidth has been so critical for RT. But even then, bandwidth contention across the memory bus has also hindered RT performance.

Memory bandwidth is important for RT, but latency is also very important. If a GPU can keep its BVH structure in a GPU cache, that makes things much faster. That's one of the reasons why Ada Lovelace increases its L2 cache by a huge amount.
But the issue with the RTX 3070 is not memory bandwidth, it's the amount of VRAM. Might I remind you that the PS5 and the RTX 3070 have the same memory bandwidth: 448 GB/s.
But the RTX 3070 has all of that for itself. The GPU in the PS5 has to share it with the CPU, so in reality it probably has less than 400 GB/s.

With the new console I/O, my question is whether there is a cascade effect: how much RAM is suddenly available as a result of not needing to reserve distant data? Can data that was previously stored in GPU cache, and that perhaps doesn't require cache bandwidth speeds, now be placed in main memory? Does the cache coherency hardware outclass the Infinity Cache's effective bandwidth-multiplying effect? Again, it's not a question of whether the SSD is faster than RAM, or RAM faster than GPU cache; the question is whether or not a given tier of memory is fast enough for the type of data it's storing.

Nvidia's RT cores not only accelerate ray casting, but also BVH traversal. This means there is no need to have the CPU doing these calculations and sending data back and forth between the CPU and GPU.
Also, remember that current Nvidia and AMD GPUs are tile-based renderers. This means they process small chunks of data that are kept in the GPU caches.

But even for the rest of the data, it's much faster to fetch from RAM than from an SSD.

To my knowledge the IC helps a lot, but its size is not enough because of continued data contention in the cache.

Might I remind you that there is a real issue with memory bandwidth contention on consoles, because they use a unified memory pool.
This is one of the reasons why 16x anisotropic filtering is so rare on consoles, while it has been effectively free on all PC GPUs for over a decade.
 

Sosokrates

Report me if I continue to console war
Yes, but at a performance penalty.



By definition they are better at that task, because they have fixed-function hardware that doesn't take compute away from the CPU and/or GPU. Remember, we are talking about PC folks here; they would spend hundreds of dollars extra for a couple of additional frames from a marginally better GPU. You can be sure they would welcome any hardware that frees up their CPU/GPU to perform more compute.
Since the start of this exchange I have said that a more powerful PC is needed to reach PS5/XSX performance on average.
I agree that a 6600 XT + Ryzen 3700 + 3.5GB/s NVMe would not; it would get close, though.
 

ChiefDada

Gold Member
The SSD in the PS5 is very fast, mostly because of the file system, but it is no replacement for RAM or VRAM.
But it does make streaming much more efficient, requiring less data to be cached in VRAM.
The PC can't stream data from an SSD as fast, but it really doesn't need to. Any modern PC has at least 16GB of RAM, and 32GB is becoming the standard for new PCs.

I think PC requirements for future next gen games will prove otherwise, but agree to disagree for now.

Memory bandwidth is important for RT, but latency is also very important. If a GPU can keep its BVH structure in a GPU cache, that makes things much faster. That's one of the reasons why Ada Lovelace increases its L2 cache by a huge amount.
But the issue with the RTX 3070 is not memory bandwidth, it's the amount of VRAM. Might I remind you that the PS5 and the RTX 3070 have the same memory bandwidth: 448 GB/s.
But the RTX 3070 has all of that for itself. The GPU in the PS5 has to share it with the CPU, so in reality it probably has less than 400 GB/s.

I believe the issue of latency was the point I was stressing in the subsequent quote, so yeah, I agree. Memory bandwidth has been so important for RT because cache hit rates are so bad. Nvidia is following in AMD's footsteps by increasing cache size, and that's great.
I don't agree with your math for throughput at all; why would the throughput rate be affected? You can argue about the portion of memory available to the GPU, but even so, my position is that the PS5 has more available GPU memory than a 3070 with 8GB.

With the remainder of your post I agree in general; as I said repeatedly, consoles will always lose the pure compute battle.
 
UE5 is still a long way behind modern CG in many aspects. While it may do well at static environments, particle effects, fluid simulations, character models, lighting, etc. will be far superior in modern CGI to gameplay UE5 visuals.
Yes, I know…but it will allow games to actually look “CGI”. The sheer number of triangles plus Lumen is mostly what makes CGI so high-fidelity…next we need particles, simulations, hair and skeleton deformation (Spiderman Miles Morales and the Unity Lion demo..) to become normal to complete the CGI look…the Unity Lion demo has shown that CGI-quality visuals in games will show up late this generation and definitely next gen…
 

Sosokrates

Report me if I continue to console war
Yes, I know…but it will allow games to actually look “CGI”. The sheer number of triangles plus Lumen is mostly what makes CGI so high-fidelity…next we need particles, simulations, hair and skeleton deformation (Spiderman Miles Morales and the Unity Lion demo..) to become normal to complete the CGI look…the Unity Lion demo has shown that CGI-quality visuals in games will show up late this generation and definitely next gen…

Yes, static environments will look CGI-esque; I don't know enough about lighting to really say.
Modern CGI uses full path tracing with as many bounces as real life; I don't know if this is the case with Lumen.
I know skin shaders still have a way to go. Look at the skin in the UE5 Matrix demo as you walk about the city: it does not look that much different from the skin on last-gen character models. Even modern CGI isn't great at skin shaders in direct sunlight.

6Mobk9O.jpg
Q8GxbwW.jpg


We seem to be in agreement that UE5 has not really shown fluid simulations and particle effects; apart from people playing about with the engine on YouTube, there's nothing in a game or demo running on current-gen hardware. I hope we see something, because it's an area where last gen needed improvement.
 
No, DLSS is proprietary Nvidia tech; what we are talking about is not. It turns out GPUs and CPUs are also very good at decompression. Using low-cost, special-purpose chips to do decompression is a cheaper way of doing it, but not better than using the GPU or CPU. It makes sense for consoles because they get a higher-performing console at a reasonable price point, but that's not an issue on PC. PCs will always be better, because tech advances very quickly.
The decompressor on the PS5 is, I think, equivalent to around 8 Zen cores, IIRC. Many PC gamers only have 8 cores, so how are they going to run constant, on-demand streaming decompression and still have the 8 cores free for the game?

Maybe GPU decompression is viable, but I'm not sure whether GPU decompression requires any CPU input, uses general GPU resources, or has actual fixed-function decompression hardware.
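
As a sanity check on that core count, you can run the arithmetic with assumed figures (the per-core decompression rate below is a guess for illustration; real numbers depend on the codec and CPU):

Code:
// Sanity check: cores needed to decompress a PS5-class stream in software.
// per_core_gbps is an assumed figure for illustration, not a benchmark.
#include <cmath>
#include <cstdio>

int main() {
    const double raw_gbps      = 5.5;  // raw NVMe input rate (PS5 spec)
    const double ratio         = 1.6;  // assumed average compression ratio
    const double per_core_gbps = 1.0;  // assumed decompressed output per core

    double out_gbps = raw_gbps * ratio;  // ~8.8 GB/s of decompressed data
    int cores = (int)std::ceil(out_gbps / per_core_gbps);

    std::printf("~%.1f GB/s out -> ~%d cores at %.1f GB/s each\n",
                out_gbps, cores, per_core_gbps);
    // ~9 cores with these guesses: the same ballpark as Cerny's "equivalent
    // of about nine Zen 2 cores" claim for the PS5's hardware Kraken unit.
    return 0;
}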
 
Yes, I know…but it will allow games to actually look “CGI”. The pure number of triangles and lumen is mostly what makes CGI so high fidelity…next we need particles, simulations, hair, skeleton deformation (Spiderman Miles Morales and Unity Lion demo..) to become normal to complete the CGI look…that Unity lion demo has shown that CGI quality in games visuals will show up late generation and definitely next gen…
Tim Sweeney said 40 TFLOPS are needed for photorealistic dynamic scenes…I predict next-gen consoles will be approximately 50 TFLOPS or more…based on the Unity Lion demo done at a small scale…I'd say those visuals will be the norm next gen…

CGI




Real Time 10 to 12 TFLOPS





Real Time 10 TFLOPS

 

Sosokrates

Report me if I continue to console war
The decompressor on the PS5 is, I think, equivalent to around 8 Zen cores, IIRC. Many PC gamers only have 8 cores, so how are they going to run constant, on-demand streaming decompression and still have the 8 cores free for the game?

Maybe GPU decompression is viable, but I'm not sure whether GPU decompression requires any CPU input, uses general GPU resources, or has actual fixed-function decompression hardware.

Yeah, it occurred to me that Sony and Microsoft compare their decompression chips to Zen 2 cores; they do this because it sounds better than saying it's the equivalent of 1/10th of the GPU.


In the pics below it seems to take the burden off the CPU:
https://www.nvidia.com/en-gb/geforce/news/rtx-io-gpu-accelerated-storage-technology/

j2T8v6m.jpg
ps1RWhh.jpg
RjB8A9f.jpg
 

amigastar

Member
Like I said before, I'm waiting for games, not graphics. That may be because I'm a little disappointed in this gen's graphics so far.
 

winjer

Gold Member
I think PC requirements for future next gen games will prove otherwise, but agree to disagree for now.

I think that future games will require more VRAM and RAM.
And this is the main reason the RTX 3070 is a bad card for the future.

I don't agree with your math for throughput at all; why would the throughput rate be affected? You can argue about the portion of memory available to the GPU, but even so, my position is that the PS5 has more available GPU memory than a 3070 with 8GB.

Those are two different things: memory amount and memory bandwidth.
And in the case of memory bandwidth, the RTX has a decent amount more than the PS5's GPU does.
Also consider that Nvidia has much better memory compression algorithms on Ampere than AMD has on RDNA2.
 

winjer

Gold Member
The game looks hella good honestly, and this is a pre-alpha build. I wonder if they will ever downgrade it. This is built on UE5, I am assuming!

That is running on UE5, but with RT and DLSS. Chances are it will look this good on PC.
Consoles are another matter; RT effects will have to be cut down. But at least they can use TSR to upscale the image.
 

alloush

Member
That is running on UE5, but with RT and DLSS. Chances are it will look this good on PC.
Consoles are another matter; RT effects will have to be cut down. But at least they can use TSR to upscale the image.
I saw somewhere that they showed footage a while back looking this good running on a 2060; if that's the case, I assume consoles will be fine? Either way, it looks really good.
 