
[IGNxGamer] Matrix Awakens, Hellblade and the Power of Unreal Engine 5 - Performance Preview


ReBurn

Gold Member
When the demo came out I tested it on PS5 and Series X, and the performance difference between the two was immediately noticeable. I even posted about it in this forum.
A few days later, in the DF video, the difference was only 1 fps, and I found that very weird.
I played it on both PS5 and XSX, too. I thought performance was really bad on both. The graphics are really nice but both versions have glitchy pop-in issues and performance in free roam is so bad in both it's like deciding which turd sandwich stinks less. But it is just a tech demo and not a retail game so there's no reason to argue about it.
 

kingfey

Banned
Pretty sure Stalker 2 isn't using Lumen or Nanite.
"The massive leap in processing, graphics, and I/O capabilities from next-gen baseline hardware has allowed GSC to really push the boat out there in terms of STALKER 2's visuals. Hardware ray-tracing, extremely high polycount models, and Unreal 5's Nanite and Lumen rendering systems together make for a truly next-gen experience. "





This screenshot highlights great use of Nanite for detailed rendering of the ruins (Image source: GSC Game World)
Lumen global illumination adds depth to the bucolic environment pictured here (Image source: GSC Game World)

Lunatic_Gamer

Here is another UE5 game screenshot. Maybe you can use this one too.
 

adamsapple

Or is it just one of Phil's balls in my throat?


If it's any consolation, Aaron is the same guy who said "You realize you're gonna see everything in 1080p, right?" .. he's kinda out there with his commentary, lol.

Phil's tweet on the other hand, is a great example of a PR tweet done right.

"The massive leap in processing, graphics, and I/O capabilities from next-gen baseline hardware has allowed GSC to really push the boat out there in terms of STALKER 2's visuals. Hardware ray-tracing, extremely high polycount models, and Unreal 5's Nanite and Lumen rendering systems together make for a truly next-gen experience. "



Cool! I'm already pretty excited to see how the game performs on console.
 
Unless most of the Series version development was off-loaded to The Coalition. We can never say anything about these kinds of things for sure, and neither Epic nor The Coalition is ever gonna fully talk about it.

Well, they certainly had the dev kits before the system launched. It's not like Microsoft sent them after they released the system. As for The Coalition, I'm pretty sure they had them for quite a while. Just saying that using dev kits as a reason will wear out the longer this gen goes.
 

adamsapple

Or is it just one of Phil's balls in my throat?
Well, they certainly had the dev kits before the system launched. It's not like Microsoft sent them after they released the system. As for The Coalition, I'm pretty sure they had them for quite a while. Just saying that using dev kits as a reason will wear out the longer this gen goes.

Of course, the dev kits reason won't hold any merit beyond the first-gen launch games. But we can't say it wouldn't have had an impact on games in development between 2019~2020, i.e. many of the games coming out now-ish or soon-ish.
 

Pedro Motta

Member
Yes, I agree with you on this, but giblet did indeed describe the actual underpinnings of what's going on at the data structure level. I wouldn't have come down hard claiming the other party doesn't have any knowledge of the subject, though. That's a little trollish imo. I get NXGamer's context, and I also get the point about being more detail-oriented on the actual implementation details of what Nanite is (which, to be fair, is probably overkill for most people anyway).

I liken Nanite to MIP-mapping in a way. While the data for all the levels is in memory at once, you could say each level is a "proxy" of the lowest level mip.
Exactly, NXGamer is not doing videos for the GDC crowd, but for home tech enthusiasts that like to understand a bit more. I'm not saying giblet was wrong, but being picky with these kinds of videos is just too much.
 

Loxus

Member
Same RDNA 2.0 architecture, but not the same overall architecture. The PS5's I/O could be helping the PS5 GPU punch above its tflops here.

Also, the RDNA 2.0 architecture relies on really high clocks, up to 2.7 GHz, to hit its performance targets. The PS5 is at 2.23 GHz while the XSX is even lower at 1.8 GHz. It's possible that the lower clocks are holding back the 52 CU XSX GPU.

Lastly, the XSX uses an RDNA 2.0 chip that adds more CUs to a 2 Shader Array setup, which is probably causing some kind of bottleneck where the CUs aren't being effectively utilized. The 13 tflops AMD 6700xt does not use 52 CUs. It tops out at 40 CUs and pushes the clocks up to 2.4 GHz to hit its tflops target. It seems even AMD knew that was the best way to get performance out of that particular CU configuration. This is also something Cerny hinted at in his Road to PS5 presentation, something we initially dismissed as damage control.
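Just to put rough numbers on that (back-of-the-napkin only, using the usual 64 shaders x 2 FLOPs per CU per clock; the clocks below are the commonly quoted ones, so treat the output as approximate):

Code:
# Paper FP32 throughput: TFLOPS = CUs * 64 shaders * 2 FLOPs * clock (GHz) / 1000
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

for name, cus, clk in [("PS5", 36, 2.23), ("XSX", 52, 1.825), ("6700xt", 40, 2.581)]:
    print(f"{name}: {cus} CUs @ {clk} GHz -> {tflops(cus, clk):.2f} tflops")
# PS5 ~10.28, XSX ~12.15, 6700xt ~13.21: a narrower GPU at higher clocks
# lands in the same ballpark as the wider, slower-clocked XSX part.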

Another potential difference between the XSX and the RDNA 2.0 PC cards is that it lacks the Infinity Cache that's part of the PC GPU dies. The 6700xt is a 337 mm² GPU, compared to the entire XSX APU, which includes the CPU and I/O and still comes in around 360 mm². So how can a 40 CU GPU be almost as big as the entire 52 CU XSX APU? The Infinity Cache must be taking up a lot of that space, and its absence could be why the XSX might not be performing up to its tflops potential.
Even RDNA 3 doesn't exceed 5 WGP per Shader Array.
x7uxIZJ.png
 

SlimySnake

Flashless at the Golden Globes
That mainly only applies to launch day releases.

The Series consoles have been out for over a year. Epic has had enough time to get used to the hardware and even has Nanite using Mesh Shaders.
QKVlTKd.png
The engine optimization/experience argument makes no sense. This is a third-party multiplatform engine that devs will use to develop on PC. From there, they will create builds for PS5 and XSX. The whole point of using UE5 and other third-party engines is to do game dev once. Once the builds are created they can then go in and optimize/downgrade settings here and there, but that's about it. The engine-level optimizations for each console simply won't be done on multiplat releases. No one is going to code around the higher clocks of the PS5 or the higher SSD bandwidth for multiplatform games. And for that same reason, no one [third party] is going to bother optimizing for the XSX's advantages either.

You won't ever see PC gamers say that a game or demo runs better on one GPU over the other because it's more optimized for that GPU. At least when it comes to the same AMD family of cards. Epic engineers aren't going to go in and specifically start adding optimizations for the 6700xt and the 6600xt and so on. They make one engine for all GPUs.

The PS5 is performing better because it's simply better at rendering Nanite and Lumen. Maybe other engines like Unity will show an XSX advantage. But to dismiss this performance advantage as an optimization thing is bizarre. Imagine if a Unity game comes out and the XSX performs better, should we dismiss the XSX advantage as an optimization issue?
 

SlimySnake

Flashless at the Golden Globes
"The massive leap in processing, graphics, and I/O capabilities from next-gen baseline hardware has allowed GSC to really push the boat out there in terms of STALKER 2's visuals. Hardware ray-tracing, extremely high polycount models, and Unreal 5's Nanite and Lumen rendering systems together make for a truly next-gen experience. "





This screenshot highlights great use of Nanite for detailed rendering of the ruins (Image source: GSC Game World)
Lumen global illumination adds depth to the bucolic environment pictured here (Image source: GSC Game World)

Lunatic_Gamer

Here is another UE5 game screenshot. Maybe you can use this one too.
I stand corrected. Now I am excited for this game. lol
I'm not big on conspiracies or seeing DF as purposely doing something like that. But the differences found by Kingthrash (Yes ik he's a warrior, doesn't negate his findings) and now by NX as well point to DF either outright avoiding the differences or being incompetent in finding them. Not a good look.
You can go back to that thread and see me defend them. I don't think there have been many instances of this, but this one seems oddly egregious. They had a 50-minute video on this thing and barely covered the performance metrics.
 
Of course, the dev kits reason won't hold any merit beyond the first-gen launch games. But we can't say it wouldn't have had an impact on games in development between 2019~2020, i.e. many of the games coming out now-ish or soon-ish.

I still have my doubts that we are going to see any big differences.
 

adamsapple

Or is it just one of Phil's balls in my throat?
I still have my doubts that we are going to see any big differences.
No, probably nothing noteworthy. I think the games this gen will be very close in 99% of cases.

The SX to PS5 TF difference is smaller than the One X to Pro one.

First parties will be the highlights like always, while third-party games will be nigh indistinguishable in most cases.
 

Dream-Knife

Banned
Same RDNA 2.0 architecture, but not the same overall architecture. The PS5's I/O could be helping the PS5 GPU punch above its tflops here.

Also, the RDNA 2.0 architecture relies on really high clocks, up to 2.7 GHz, to hit its performance targets. The PS5 is at 2.23 GHz while the XSX is even lower at 1.8 GHz. It's possible that the lower clocks are holding back the 52 CU XSX GPU.

Lastly, the XSX uses an RDNA 2.0 chip that adds more CUs to a 2 Shader Array setup, which is probably causing some kind of bottleneck where the CUs aren't being effectively utilized. The 13 tflops AMD 6700xt does not use 52 CUs. It tops out at 40 CUs and pushes the clocks up to 2.4 GHz to hit its tflops target. It seems even AMD knew that was the best way to get performance out of that particular CU configuration. This is also something Cerny hinted at in his Road to PS5 presentation, something we initially dismissed as damage control.

Another potential difference between the XSX and the RDNA 2.0 PC cards is that it lacks the Infinity Cache that's part of the PC GPU dies. The 6700xt is a 337 mm² GPU, compared to the entire XSX APU, which includes the CPU and I/O and still comes in around 360 mm². So how can a 40 CU GPU be almost as big as the entire 52 CU XSX APU? The Infinity Cache must be taking up a lot of that space, and its absence could be why the XSX might not be performing up to its tflops potential.
Counter-point: AMD's high-end cards all have higher CU counts and lower clocks.
6700xt: 40 CU at 2581 MHz
6800: 60 CU at 2105 MHz
6800xt: 72 CU at 2250 MHz
6900xt: 80 CU at 2250 MHz


Pushing fewer CUs to a higher frequency would be a more economical choice, wouldn't it?
 

Tripolygon

Banned
There is no appreciable difference in Epic's devs' level of experience based on when they received the two consoles' dev kits. What Tim Sweeney said was that Sony was early in engaging them about next-gen consoles, which is what both Sony and Microsoft do. They talk to middleware developers and game studios even before any hardware is decided upon. Because the relationship between Sony and Epic was earlier and stronger, they decided to showcase UE5 on PS5. It had nothing to do with them not having devkits or lacking experience. The second demo was showcased on both PS5 and XSX and even released on PC for the public.

Want more proof that they have had experience with both consoles for a similar amount of time? They added support for both PS5 and XSX to UE4 at the same time, with the 4.25 update in May 2020.
 
No, probably nothing noteworthy. I think the games this gen will be very close in 99% of cases.

The SX to PS5 TF difference is smaller than the One X to Pro one.

First parties will be the highlights like always, while third-party games will be nigh indistinguishable in most cases.

Well, I think the differences go beyond just the TF, which is why the systems are so close.

As for Pro vs. X, I don't see that happening this gen even if developers take advantage of each system's features.
 

SlimySnake

Flashless at the Golden Globes
Counter-point: AMD's high-end cards all have higher CU counts and lower clocks.
6700xt: 40 CU at 2581 MHz
6800: 60 CU at 2105 MHz
6800xt: 72 CU at 2250 MHz
6900xt: 80 CU at 2250 MHz


Pushing fewer CUs to a higher frequency would be a more economical choice, wouldn't it?
Yeah, but those are the advertised clocks. RDNA 1 and 2.0 cards go well beyond their advertised game clocks in game.
mJ2S5sC.jpg


This is the highest clock I could find on the 6800xt and 6900xt, but the other games in this comparison video show how both cards are consistently in the 2.4-2.5 GHz range. Consistently higher than the PS5 clocks, and way higher than the XSX clocks.



I would love to see DF cap the clocks to 1.8 GHz and see just how proportionately the performance drops. Is it 1:1 with tflops? Or is the GPU only increasing the clock speeds because that's what it needs to hit the higher framerates?
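Paper math for that experiment (purely hypothetical: it assumes throughput scales 1:1 with clock, which real games won't match exactly):

Code:
# Hypothetical clock cap on a 40 CU RDNA 2 card: paper tflops at the observed
# ~2.539 GHz vs. capped at an XSX-like 1.8 GHz. Assumes linear scaling with clock.
CUS = 40
FLOPS_PER_CU_PER_CLOCK = 64 * 2  # 64 shaders, 2 FLOPs (FMA) each

def tflops(clock_ghz):
    return CUS * FLOPS_PER_CU_PER_CLOCK * clock_ghz / 1000

drop = 1 - tflops(1.8) / tflops(2.539)
print(f"{tflops(2.539):.2f} -> {tflops(1.8):.2f} tflops ({drop:.0%} paper drop)")
# ~13.00 -> ~9.22 tflops, a ~29% drop on paper. If measured framerates fall by
# noticeably less than that, clocks alone aren't the whole story.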

Then there is the Infinity Cache. IIRC, the 6800xt literally has a 6 billion transistor Infinity Cache taking up precious space on the 25 billion transistor die. That's a cost increase of 30% on each die. That tells me that there is no way they are getting this performance without the Infinity Cache, or they would've skimped on such an expense.

And yes, fewer CUs and higher clocks would've been more economical. That's probably why Cerny went with that design, because it seems Sony was still targeting a $399 price point. However, you still have to cool that thing. Look at the wattage on the 6700xt: 170W on its own at 2.539 GHz, so 12.99 tflops. MS went with more CUs and lower clocks because they had to budget for the CPU wattage, the SSD, and the motherboard. That's a lot of heat being produced. MS's vapor chamber cooling solution is already very expensive, way more than Sony's traditional but bulky cooling solution. If they had gone for a 40 CU 2.4 GHz GPU, they would've saved some space/cost on the die, but it would've required far more elaborate and expensive cooling. Adding more CUs was probably cheaper in their scenario.
 
Yes, SSD performance was a point of contention for UE5; performance on a tech demo is not that important, or maybe it was a low priority.
I don't know why there has to be some sinister motive behind the decision; what does DF have to gain by doing something like that?

I personally would have liked an fps comparison from DF, so I could compare it with my own findings.

But I see no reason to think there's some conspiracy.
If there are sinister things about DF, it started with the addition of Alex to the team.. the PC master race terminology and his ignorance when it comes to console advantages..
The guy needs to go.. I had hopes for Corona, but since it's merely a glorified flu, the chances are low..
edit: Stalker 2 is not a hard-locked Xbox exclusive, right? It's only a timed exclusive, isn't it?
 

SlimySnake

Flashless at the Golden Globes
I still have my doubts that we are going to see any big differences.
I had my doubts. I upset a lot of my fellow blue rats when I refused to declare a winner after the first few cross gen comparisons or even some of the latter ones this year, but I really wanted to see if the PS5 would hold its own when next gen only games designed around next gen tech arrive and well, the PS5 seems to be on par with the xsx at the very least. The fact that some games have the PS5 going up to 230 Watts of power consumption shows that there is really no need to worry about any kind of GPU clock downgrade either. There is more than enough power available for both the CPU and GPU.

I just wish the Next gen spec thread was still open so we can go and revisit some of these topics because Cerny has proven everyone wrong here.
 
I had my doubts. I upset a lot of my fellow blue rats when I refused to declare a winner after the first few cross gen comparisons or even some of the latter ones this year, but I really wanted to see if the PS5 would hold its own when next gen only games designed around next gen tech arrive and well, the PS5 seems to be on par with the xsx at the very least. The fact that some games have the PS5 going up to 230 Watts of power consumption shows that there is really no need to worry about any kind of GPU clock downgrade either. There is more than enough power available for both the CPU and GPU.

I just wish the Next gen spec thread was still open so we can go and revisit some of these topics because Cerny has proven everyone wrong here.

I honestly thought the PS5 was doomed due to the GitHub leaks. But if these comparisons prove anything, it's that PS5 owners don't have anything to worry about.
 

Azelover

Titanic was called the Ship of Dreams, and it was. It really was.
It looks incredible. Actually, it looks better than I need it to be. I wish they would focus more on other things.

I just finished another playthrough of Chrono Trigger, and it really hit me. This game is old, but it looks exactly like it should be. It would almost be bad if it looked any better, or different.
 

adamsapple

Or is it just one of Phil's balls in my throat?
I should really download that matrix demo soon..

You should, it's mindblowing in terms of what we can do now with real time rendering on console hardware.

It looks incredible. Actually, it looks better than I need it to be. I wish they would focus more on other things.

I just finished another playthrough of Chrono Trigger, and it really hit me. This game is old, but it looks exactly like it should be. It would almost be bad if it looked any better, or different.


Sprite based games age very well, but if you go to early 3D games on the PS1 (or even the PS2 with games like Summoner 1/2), they .. don't really hold up today even with emulation.
 

SlimySnake

Flashless at the Golden Globes
I honestly thought the PS5 was doomed due to the GitHub leaks. But if these comparisons prove anything, it's that PS5 owners don't have anything to worry about.
lol forget github. Cerny's Road to PS5 show sounded like a big damage control PR piece. I thought for sure the PS5 was underpowered. Now I understand he was just trying to explain why he made the choices he made. He definitely wasn't spinning or being dishonest. Higher clocks vs more CUs is pretty much the same performance wise.
 
lol forget github. Cerny's Road to PS5 show sounded like a big damage control PR piece. I thought for sure the PS5 was underpowered. Now I understand he was just trying to explain why he made the choices he made. He definitely was spinning or being dishonest. Higher clocks vs more CUs is pretty much the same performance wise.

Don't you mean wasn't spinning or being dishonest?

Speaking about people calling him dishonest there was a lot of that back in those days. Like the PS5 having RT hardware for example.
 

SlimySnake

Flashless at the Golden Globes
Don't you mean wasn't spinning or being dishonest?

Speaking about people calling him dishonest there was a lot of that back in those days. Like the PS5 having RT hardware for example.
lol yes. Wasn't.
Corrected.

Speaking about people calling him dishonest there was a lot of that back in those days. Like the PS5 having RT hardware for example.

Yep, Alex led the charge on that. What was shocking was that even after Cerny confirmed in the second Wired article that the ray tracing was hardware accelerated, Alex went on and on about how it would be just shadows and ambient occlusion only because it's cheaper, and reflections and ray-traced GI are too expensive for the PS5. Like WTF dude, just stfu and take the L. He continued to repeat that nonsense up until the Road to PS5 and even after Ratchet was revealed, where IIRC he said that the reflections weren't ray traced. So much FUD was spread around the PS5 Wired articles and Road to PS5 that it was hard not to get influenced by some of this nonsense when respectable outlets like DF were the ones spreading it.

Timestamped:

 
lol yes. Wasn't.
Corrected.



Yep, Alex led the charge on that. What was shocking was that even after Cerny confirmed in the second Wired article that the ray tracing was hardware accelerated, Alex went on and on about how it would be just shadows and ambient occlusion only because it's cheaper, and reflections and ray-traced GI are too expensive for the PS5. Like WTF dude, just stfu and take the L. He continued to repeat that nonsense up until the Road to PS5 and even after Ratchet was revealed, where IIRC he said that the reflections weren't ray traced. So much FUD was spread around the PS5 Wired articles and Road to PS5 that it was hard not to get influenced by some of this nonsense when respectable outlets like DF were the ones spreading it.

Timestamped:



I mean Miles Morales and Ratchet have some very excellent RT reflections in them. Alex was definitely wrong about that one.

Now, I know having more CUs will make RT better on the system, but it certainly has enough of them for RT. Unlike the XSS, where they have to drop RT in some games, otherwise they suffer from pretty big performance drops, like in RE Village.
 
I mean Miles Morales and Ratchet have some very excellent RT reflections in them. Alex was definitely wrong about that one.

Now, I know having more CUs will make RT better on the system, but it certainly has enough of them for RT. Unlike the XSS, where they have to drop RT in some games, otherwise they suffer from pretty big performance drops, like in RE Village.
This is not entirely accurate.

Parts of the ray tracing pipeline can be accelerated by clock speed as well. For example, with higher clocks the latency from when a ray is generated to when it completes is lower. Higher clocks also mean more bounces for a single ray within the same time budget.
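A toy illustration of that point (the cycle count is made up, it's just to show the clock scaling, not real profiling data):

Code:
# If a ray's traversal + hit shading costs a fixed number of GPU cycles,
# the wall-clock latency of that ray scales inversely with clock speed.
CYCLES_PER_RAY = 20_000  # hypothetical cost, purely illustrative

for name, ghz in [("PS5 @ 2.23 GHz", 2.23), ("XSX @ 1.825 GHz", 1.825)]:
    latency_us = CYCLES_PER_RAY / (ghz * 1e9) * 1e6
    print(f"{name}: {latency_us:.2f} us serial latency per ray")
# The wider GPU can keep more rays in flight at once, but any chain of
# dependent work (e.g. bounce N needs the result of bounce N-1) finishes
# sooner at the higher clock.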
 

SlimySnake

Flashless at the Golden Globes
I mean Miles Morales and Ratchet have some very excellent RT reflections in them. Alex was definitely wrong about that one.

Now, I know having more CUs will make RT better on the system, but it certainly has enough of them for RT. Unlike the XSS, where they have to drop RT in some games, otherwise they suffer from pretty big performance drops, like in RE Village.
RT actually scales with both tflops and CUs, so any disadvantage the XSX might have in traditional rasterization due to bottlenecked CUs will manifest itself in ray-traced games like the Matrix demo. We have seen this in other games, where Control can have an 18% performance advantage in some scenes and only a 1 fps advantage in others.

Alex tested the corridor of doom like he always does for ray-traced benchmarks and found a lousy 1 fps difference in the most RT-intensive part of the game. If RT scaled only with CUs, then the XSX would've had 18% better performance in every single scenario. He went on to test 21 other parts of the game and found an average of 16% better performance, but with not much going on. The XSX was dropping frames whenever there was action. Then we see screens like below where the performance is nearly identical in highly intensive ray-traced corridors. Something is holding back the XSX when shit hits the fan, and I suspect the same is happening in the Matrix demo when they start to fly around or drive really fast. Maybe it's the GPU or the memory setup, or maybe it's Cerny's magic sperm. Whatever the reason, this shouldn't really happen when there is an 18% tflops difference and a 44% difference in CUs between two GPUs.

t2IiaQ0.jpg
tWfGXxR.jpg
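For reference, here's where those two percentages come from, using just the paper specs (PS5: 36 CUs, ~10.28 tflops; XSX: 52 CUs, ~12.15 tflops):

Code:
# CU advantage vs. paper tflops advantage for XSX over PS5.
ps5_cus, ps5_tf = 36, 10.28
xsx_cus, xsx_tf = 52, 12.15

print(f"CU advantage:     {xsx_cus / ps5_cus - 1:.0%}")  # ~44%
print(f"tflops advantage: {xsx_tf / ps5_tf - 1:.0%}")    # ~18%
# If RT performance tracked raw compute, you'd expect something close to the
# tflops gap everywhere, not a 1 fps delta in the heaviest corridors.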
 
RT actually scales with both tflops and CUs, so any disadvantage the XSX might have in traditional rasterization due to bottlenecked CUs will manifest itself in ray-traced games like the Matrix demo. We have seen this in other games, where Control can have an 18% performance advantage in some scenes and only a 1 fps advantage in others.

Alex tested the corridor of doom like he always does for ray-traced benchmarks and found a lousy 1 fps difference in the most RT-intensive part of the game. If RT scaled only with CUs, then the XSX would've had 18% better performance in every single scenario. He went on to test 21 other parts of the game and found an average of 16% better performance, but with not much going on. The XSX was dropping frames whenever there was action. Then we see screens like below where the performance is nearly identical in highly intensive ray-traced corridors. Something is holding back the XSX when shit hits the fan, and I suspect the same is happening in the Matrix demo when they start to fly around or drive really fast. Maybe it's the GPU or the memory setup, or maybe it's Cerny's magic sperm. Whatever the reason, this shouldn't really happen when there is an 18% tflops difference and a 44% difference in CUs between two GPUs.

t2IiaQ0.jpg
tWfGXxR.jpg

Hey, I'm fine with Cerny's sperm impregnating my PS5 and creating baby Cerny PS5 Slim hybrids.

But yes, there's definitely more going on than just the TFs.
 

supernova8

Banned
There is no tessellation when using Nanite; they literally swap out the triangles to render with more detailed meshes, not divide existing ones. If you want to use technical word salad, use the right technical word salad. Nanite is all about data structures and maintaining mesh integrity when traversing said data structures. You might as well eschew the words "Rasterisation" or "Pixels" and use Proxy in that case. The data structure is virtual, therefore it doesn't need to store all of its data in RAM, but the structure contains "All of the data", even if it is streaming parts of it; therefore it is by definition "Not a Proxy", as you sample the data, as you would a texture wrapping a traditional mesh, and get the correct pixel colour. Nanite is an optimisation around selecting the correct data to sample, swapping things out behind the scenes, but there is no "Simpler Version", just a hint that you have the colour you need, and don't go further. THE ENTIRE POINT IS THAT ALL OF THE DATA IS THERE, IT'S JUST VIRTUALIZED!

alright mate calm down
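To be fair though, for anyone who wants the gist of what he's describing without the caps lock: here's a very rough conceptual sketch of cluster-based virtualized geometry. This is not Epic's actual implementation, just the general idea of picking which detail level of the same source data to draw based on a screen-space error budget.

Code:
# Conceptual sketch: a cluster hierarchy where every detail level of the mesh
# exists in the source data, and the renderer picks clusters whose geometric
# error is small enough for the current view. Streaming would fetch finer
# clusters on demand; nothing is a hand-made "proxy" mesh.
class Cluster:
    def __init__(self, error, triangles, children=None):
        self.error = error              # error if we stop refining here
        self.triangles = triangles      # triangle count at this node
        self.children = children or []  # finer-detail clusters

def select_clusters(node, error_budget, selected):
    """Refine only where the coarse cluster's error would be visible."""
    if node.error <= error_budget or not node.children:
        selected.append(node)
        return
    for child in node.children:
        select_clusters(child, error_budget, selected)

root = Cluster(error=4.0, triangles=128, children=[
    Cluster(error=0.5, triangles=2048),
    Cluster(error=0.5, triangles=2048),
])

for budget in (8.0, 0.25):  # far away vs. up close
    picked = []
    select_clusters(root, budget, picked)
    print(f"error budget {budget}: {sum(c.triangles for c in picked)} triangles drawn")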
 

Dream-Knife

Banned
Yeah, but those are the advertised clocks. RDNA 1 and 2.0 cards go well beyond their advertised game clocks in game.
mJ2S5sC.jpg


This is the highest clock I could find on the 6800xt and 6900xt, but the other games in this comparison video show how both cards are consistently in the 2.4-2.5 GHz range. Consistently higher than the PS5 clocks, and way higher than the XSX clocks.



I would love to see DF cap the clocks to 1.8 GHz and see just how proportionately the performance drops. Is it 1:1 with tflops? Or is the GPU only increasing the clock speeds because that's what it needs to hit the higher framerates?

Then there is the Infinity Cache. IIRC, the 6800xt literally has a 6 billion transistor Infinity Cache taking up precious space on the 25 billion transistor die. That's a cost increase of 30% on each die. That tells me that there is no way they are getting this performance without the Infinity Cache, or they would've skimped on such an expense.

And yes, fewer CUs and higher clocks would've been more economical. That's probably why Cerny went with that design, because it seems Sony was still targeting a $399 price point. However, you still have to cool that thing. Look at the wattage on the 6700xt: 170W on its own at 2.539 GHz, so 12.99 tflops. MS went with more CUs and lower clocks because they had to budget for the CPU wattage, the SSD, and the motherboard. That's a lot of heat being produced. MS's vapor chamber cooling solution is already very expensive, way more than Sony's traditional but bulky cooling solution. If they had gone for a 40 CU 2.4 GHz GPU, they would've saved some space/cost on the die, but it would've required far more elaborate and expensive cooling. Adding more CUs was probably cheaper in their scenario.

You can overclock anything. Regardless of how much you can OC a card, that base CU count and clock is what it is sold as.

My old 6800 came at 2214 from the factory (Powercolor). Highest I ever overclocked it was 2350, and the performance gains weren't as much as you would expect.

Yes it seems memory speed is very important for performance. The higher end cards have a larger bus.
 

Darsxx82

Member
There is no appreciable difference in Epic's devs' level of experience based on when they received the two consoles' dev kits. What Tim Sweeney said was that Sony was early in engaging them about next-gen consoles, which is what both Sony and Microsoft do. They talk to middleware developers and game studios even before any hardware is decided upon. Because the relationship between Sony and Epic was earlier and stronger, they decided to showcase UE5 on PS5. It had nothing to do with them not having devkits or lacking experience. The second demo was showcased on both PS5 and XSX and even released on PC for the public.

Want more proof that they have had experience with both consoles for a similar amount of time? They added support for both PS5 and XSX to UE4 at the same time, with the 4.25 update in May 2020.
No.

1. We are not talking about Epic's experience in general. We're talking about the experience gained by the small team that has been in charge of the different demonstrations. Their console experience is mostly limited to the PS5, for which they made a specific demo that took even longer to finish than the Matrix one.

2. The second demo has nothing to do with the Matrix one, which is public, reaches users' hands, and requires decent optimization. On XSX you only saw seconds of footage, and for that you only need to capture a few seconds with a stable framerate to say that it "works". You don't need experience to do that.

3. That TC had to fully take care of the port and optimization of the XSeries version says it all. As far as is known, the PS5 version did not require a Sony studio; it was the small team at Epic (I repeat, the same one behind the first demo specifically optimized for PS5) who handled the development. That clearly indicates knowledge of the hardware, and I would say it shows which was the base version.

4. UE5 is still an engine in development and testing on consoles. No matter how similar the consoles are in architecture, they use different tools, and greater experience and testing time on one of them is what can make the difference between consoles so closely matched in power.

For me, the indications show that the participation of The Coalition was essential for the XSeries version to exist. Without them, there would likely only have been the demo on PS5. My bet is that this was precisely what MS tried to avoid: the marketing blow of a PS5-exclusive demo.

MS is lucky to have TC for everything related to UE5. That they have created optimizations (from which PS5 has also benefited) that Epic itself didn't know about says a lot about what experience with a piece of hardware and its specifics means.
 

SlimySnake

Flashless at the Golden Globes
You can overclock anything. Regardless of how much you can OC a card, that base CU count and clock is what it is sold as.

My old 6800 came at 2214 from the factory (Powercolor). Highest I ever overclocked it was 2350, and the performance gains weren't as much as you would expect.

Yes it seems memory speed is very important for performance. The higher end cards have a larger bus.
Not sure if those cards are overclocked, but like I said, the GPUs always go above the advertised specs, even the non-overclocked ones.

And yes, the performance gains start to level out as you go up, but the point I am trying to make is that the 1.8 GHz clock speed of the XSX is way lower than what the higher-CU cards run at. To me, RDNA 2.0's biggest leap over RDNA 1 cards wasn't the IPC gains (there were none), it was the perf/watt gains that allowed them to increase CU counts AND increase clock speeds to insane levels. The XSX got the CU count upgrade but not the higher clocks, and that could be one of the bottlenecks.
 
The only issue I have with this is the lack of support for animated meshes in Nanite. Basically it means that as soon as you try and do something with heavy vegetation, you're looking at a potentially very large performance hit if you want any motion in the scene.

It's going to be interesting to see if we are going to end up with the "look" of the gen being dominated by rocky and arid landscapes...

Imho, I think it's ideal. Most modern games bust the majority of their polygon budgets on characters and stationary assets that will be seen up-close by the camera. These are already very highly detailed in PS4/XB1 games, so even a meagre bump in geometric complexity for these dynamic assets will be a big win.

On the other hand, stationary backdrops tend to be insufferably low poly and low detailed in comparison. Technology like Nanite is what will make the biggest difference here. And anything else that's intended to be dynamic doesn't have to have multi-million poly source assets on disc.

So yeah, devs will still probably have to author content for their games, e.g. creating LOD meshes for those dynamic objects in-game, but that's gonna be far far less work if all the stationary stuff can be modelled in Maya and then simply imported directly into the engine.
 

SlimySnake

Flashless at the Golden Globes
It's going to be interesting to see if we are going to end up with the "look" of the gen being dominated by rocky and arid landscapes...
That's very interesting. The lack of jump in CPU and a massive 16x increase in RAM forced devs to set every fucking game in the wild. Even Rockstar skipped this gen and just made RDR2.

If Nanite's limitations force us to go back to cities and rocky landscapes then great. It's about time we leave dense forests behind.
 

adamsapple

Or is it just one of Phil's balls in my throat?
RT actually scales with both tflops and CUs, so any disadvantage the XSX might have in traditional rasterization due to bottlenecked CUs will manifest itself in ray-traced games like the Matrix demo. We have seen this in other games, where Control can have an 18% performance advantage in some scenes and only a 1 fps advantage in others.

Alex tested the corridor of doom like he always does for ray-traced benchmarks and found a lousy 1 fps difference in the most RT-intensive part of the game. If RT scaled only with CUs, then the XSX would've had 18% better performance in every single scenario. He went on to test 21 other parts of the game and found an average of 16% better performance, but with not much going on. The XSX was dropping frames whenever there was action. Then we see screens like below where the performance is nearly identical in highly intensive ray-traced corridors. Something is holding back the XSX when shit hits the fan, and I suspect the same is happening in the Matrix demo when they start to fly around or drive really fast. Maybe it's the GPU or the memory setup, or maybe it's Cerny's magic sperm. Whatever the reason, this shouldn't really happen when there is an 18% tflops difference and a 44% difference in CUs between two GPUs.


The average difference between the SX and PS5 in DF's unlocked FPS photo mode comparison came out to be 16 to 18% if I'm not mistaken, which is roughly what the TF difference between the two consoles also happens to be, right? Isn't that actually in line with what the expectations were?

For some scenarios where the difference is just 1 FPS, there are also scenarios where the differences are pretty large.

e.g. this 15 FPS difference:


aukgccg.png



The Corridor of Doom is probably problematic for multiple reasons; even beefy PCs struggle there, going by DF.
 