
Digital Foundry on the XSX teraflops advantage: It's kinda all blowing up in the face of Xbox Series X

S0ULZB0URNE

Member
You guys have completely lost your grasp on efficiency. You're making things up to try and dispute my comments, which are factually correct. It's science. So your comments fail.

Efficiency is energy out for energy in.

You have two athletes on treadmills. One requires more fuel (food) pumped in to reach the same goal in the same time. To hit that performance, they sit in a larger box with the treadmill, which takes up more space and needs considerably more energy in to get the same energy out.

The other is in a smaller box and needs less food and energy in to reach the same performance level. On the right days, under the right conditions, this athlete can even perform up to 20 percent faster on that same energy budget.

Which is more efficient? Energy efficient? Power efficient?

It's a fact and I'm not going to argue about a completely different console launching 3 years later.

I get it, no one here can give Microsoft their props, but the Series X, when all things are taken into account, is a brilliantly designed console imo.
Love the features (plug-in SSD, AI HDR, BC, great audio options and major accessory support) and even the look, but brilliant design (at least on the tech side) I can't give it, with two different pools of RAM.
 

rnlval

Member
Digital Foundry admits teraflops don't matter anymore, after three years of this being pushed by them.
This fits with the rumors of the new Pro consoles coming: don't rely on teraflops.

1:05:38



Looking back, Mark Cerny was clowned for saying teraflops don't matter.


TFLOPS matter if there are sufficient load-store units, i.e. TMUs and ROPs.

XSX has the advantage in RT and TMU-biased workloads.

XSX's GPU design has 56 CUs, with 52 CUs active to cover yield issues.

The incoming RX 7800 XT (60 CU, Navi 32) has 128 ROPs (1).
Reference
1. https://www.techpowerup.com/312043/...x-7800-xt-pictured-confirmed-based-on-navi-32

Modern GPGPUs still follow the basic RISC load-store processor design. TMUs and ROPs are load-store units with different fixed-function hardware features.
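For reference, here is how those per-clock unit counts turn into theoretical peak numbers. This is just the standard back-of-the-envelope math using the commonly quoted XSX figures (52 CUs, 4 TMUs per CU, 64 ROPs, 1.825 GHz); real games never hit these peaks.

```python
# Theoretical peak throughput from unit counts and clock (GHz).
# Figures are the commonly quoted Xbox Series X specs; real games
# never reach these peaks.

def tflops(cus, clock_ghz, lanes_per_cu=64, ops_per_lane=2):
    """FP32 TFLOPS: CUs x 64 lanes x 2 ops (FMA) x clock."""
    return cus * lanes_per_cu * ops_per_lane * clock_ghz / 1000

def texel_rate(tmus, clock_ghz):
    """Gtexels/s: one texel per TMU per clock."""
    return tmus * clock_ghz

def pixel_rate(rops, clock_ghz):
    """Gpixels/s: one pixel per ROP per clock."""
    return rops * clock_ghz

cus, tmus, rops, clock = 52, 52 * 4, 64, 1.825
print(f"{tflops(cus, clock):.2f} TF, "
      f"{texel_rate(tmus, clock):.0f} GT/s, "
      f"{pixel_rate(rops, clock):.1f} GP/s")
# -> roughly 12.15 TF, 380 GT/s, 116.8 GP/s
```

The compute number can grow with CU count while the texel and pixel rates stay flat if TMUs, ROPs and clock don't scale with it, which is the imbalance being described.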
 
Last edited:

buenoblue

Member
I think it does matter, and not to be a contrarian, but I do think it could be the API. Something doesn't add up about the Series X. The CPU is even faster. The RAM is faster. It could be due to the RAM being split, but I'm not sure. The PS5 isn't doing anything exotic either, other than having a higher GPU clock, and I have never seen this replicated with PC cards. If Sony really did have some magic custom hardware, they would say so.
It's fucking Windows slowing that fucker down.
 

PaintTinJr

Member
I thought the Xbox 360's 250 GFLOPS GPU was very well balanced by its Xenon processor and 512 MB of unified VRAM, especially for a $299 console. It was the PS3 that was served a dud of a GPU with bottlenecks everywhere, plus Kutaragi's ridiculous decision to split the VRAM.

The X1 was held back by Don's ridiculous push for Kinect and TV, but its 1.2 TFLOPS GPU wasn't exactly held back by the ESRAM and Jaguar CPUs. It was just a dated old system, not what I would call unbalanced.

I had no idea the OG Xbox was lacking in fill rate, given it was running games the PS2 simply couldn't run. Doom 3 and Half-Life 2 were never ported to the PS2, and by the end of the gen it was even running some games at 720p.

I think this is the first time the Xbox team has created an unbalanced console. Even the X1X got a big RAM increase to 12 GB (where the PS4 Pro stayed at 8 GB), and they made sure to give the bandwidth a massive increase as well, resulting in several games running at native 4K when the PS4 Pro had to settle for 4K checkerboard or 1440p. The XSX is just a unique mess because of their insistence on hitting 12 TFLOPS at any cost.
IMO the 360 is as close as Xbox ever got to a balanced console, but the context of RRoD - caused by the GPU specs - undermines that.

It took two years of revisions to fix RRoD, and the launch specs weren't targeting 720p but 1024x768, based on the lack of HDMI for lossless video, audio or stereoscopic 3D, and on the EDRAM amount fitting a double-buffered 1024x768. That meant superior second-half-of-gen titles were sub-HD on 360, showing imbalance, and the absence of a HDD and of HD-DVD in the base model led to other imbalances, IMO. But the GPU probably looked pretty balanced, because its TF and fill rate were based around a popular PC GPU, the ATI 9700 Pro/9800.
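As a rough sanity check on that eDRAM point, here's the arithmetic, assuming the 360's 10 MB of eDRAM and one RGBA8 colour buffer plus one 32-bit depth/stencil buffer per pixel (the exact buffer layout meant by "double buffered" is a bit ambiguous, so treat this as illustrative):

```python
# Rough framebuffer footprint vs the 360's 10 MB eDRAM (assumed layout:
# one RGBA8 colour buffer plus one 32-bit depth/stencil buffer).
EDRAM_MB = 10
BYTES_PER_PIXEL = 4 + 4

def framebuffer_mb(width, height, msaa=1):
    return width * height * BYTES_PER_PIXEL * msaa / (1024 ** 2)

for w, h, msaa in [(1024, 768, 1), (1280, 720, 1), (1280, 720, 2)]:
    mb = framebuffer_mb(w, h, msaa)
    verdict = "fits" if mb <= EDRAM_MB else "needs tiling"
    print(f"{w}x{h} at {msaa}x MSAA: {mb:.1f} MB -> {verdict}")
# 1024x768 (~6 MB) and plain 720p fit; 720p with MSAA spills past 10 MB.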

By comparison, in the second half of the gen the PS3 could demonstrate that the GPU/SPUs' TF and fill rate, the CPU, the storage and the RAM were all in balance, yet it was PlayStation's least balanced console IMO. But it wasn't their intended design, which was supposed to be two Cell BEs plus a PS2-style GS and unified XDR memory; problems beyond their control forced a redesign with Nvidia. The 360 was a new gen on a Pro-console timescale, combined with the most inopportune timing: the gulf of change in graphics over that six-year period was the biggest we've seen.

As for the FPS games the PS2 didn't get, the two-year-older console was only short of memory and a HDD, and Carmack even discussed, in the Nvidia paper, how Doom 3's iconic Carmack's Reverse shadow technique aligned well with the PS2's graphics capabilities. It was a feature he couldn't do on the OG Xbox version because it lacked fill rate and a z-buffer, although the technique was a reinvention of an older Creative Labs patent. The cost of a HDD also wasn't viable at the time for a base console's box price, because of the added shipping weight of a cheap 3.5-inch HDD; it had been subsidised heavily by Xbox for the OG Xbox and was removed from the 360 Arcade the next gen. So the absence of such games from PS2 is pretty logical, and not indicative of unbalanced design for the time frame the PS2 launched in, IMO.

I also think the Xbox One had imbalance issues of its own. The lack of ACEs and Rapid Packed Math meant that by the end of the generation the PS4, using FP16 with async compute, had twice the capability, with Cyberpunk probably being the biggest example of that.

The ESRAM size was another poor choice, meaning 900p was the only way to extract the full fill rate. Compared with that, the PS4's unified GDDR5 let the CPU and GPU dereference RAM buffers instantly, which meant decompressing high-quality textures for games like Arkham Knight, MGSV, TLOU2, Death Stranding and GoT, to name just a few, while still staying at full HD locked to 30 or 60. That showcased the PS4's balance and the X1's imbalance by its deficiencies, IMO.
 
Last edited:
I don't believe the PS5 is particularly special, no. The primary bottleneck for both systems is the CPU design.

The belief that the PS5 is performing better than its specs is just a case of many not understanding how similar these two GPUs are to each other. They try to compare to PC parts with a similar TF difference, but in the PC world all aspects of the GPU typically move upward with each improvement. On paper both of these console GPUs have distinct statistical advantages, the fact they perform similarly is the expected result for them.

The reality is that MS created the smaller more efficient system and hit their performance target. Sony did likely create the cheaper to produce system which from a business perspective is a big win for them.
Great story. It just omits Sony's genius with its biggest feature, the SSD I/O, which is highly optimized and makes streaming data so much more efficient while removing the bottlenecks that make even a fast SSD perform way slower.
 
It’s always been a strategy problem with Microsoft.

The hardware features that make Series consoles special simply aren’t being utilized. If every game was designed to make use of SFS, VRS2, DirectML, Mesh Shaders etc. the power disparity would be obvious.

The SDK itself is intended to help developers release on both Xbox and PC. There is no incentive whatsoever to integrate the most advanced Series-only features. That's on Microsoft and Phil Spencer's crusade of developer appeasement. Even 1st party devs don't use these features.

Microsoft should have just built a simpler box designed to brute force everything. The Series X hardware is almost wasted as everything special about it, that gives it an identity is being ignored.
 

PaintTinJr

Member
I could 100% build a system featuring a slower card that consistently beats another system featuring a faster card. You are correct in your assessment, but the Series X is seemingly incompetently designed. That split memory pool is just bizarre and reminiscent of the GTX 970.
IMO the design and split pool all make sense when you visualise racks of XSX in xCloud, where only the GPU is being used and the CPU cloud sessions are done on a high-core-count server with lots of RAM. The memory split just means they only waste the slower portion when two XSS sessions are running, or they can multiplex multiple XBLA sessions along with an XSS session, although I imagine the xCloud XSX machines have 24 GB of unified RAM.
 
The quality of Nanite in Fortnite (UE5) looks better in the PS5 version.
[PS5 vs Series X Fortnite Nanite comparison screenshots]
Jurassic Park Ian Malcom GIF

Never trust what DF says...
 

damiank

Member
The One X was much more than a mid-gen refresh; Microsoft was ashamed of the original Xbox One and changed everything but the CPU (because there wasn't an alternative to Jaguar in 2017).
Actually, Zen was out there when the One X released. TBH Sony could have postponed the PS4 Pro and released it later, in early 2017, with 2x 4c/4t 1.6 GHz Zen able to boost one or two cores to 3 GHz or something, GDDR5X memory and the GPU boosted to 1 GHz. It would have been a perfect 1080p60 machine for $499.
 

solidus12

Member
Imagine if devs stated they were having trouble developing games on PS5. How would that make you feel as PS5 players?

Sony really dodged a bullet here, despite the pressure from the media early in the gen, when they were criticizing PlayStation for not having a Series S equivalent.
 

HeWhoWalks

Gold Member
Imagine if devs stated they were having trouble developing games on PS5. How would that make you feel as PS5 players?

Sony really dodged a bullet here, despite the pressure from the media early in the gen, when they were criticizing PlayStation for not having a Series S equivalent.
Sony also rarely caves to media pressure, so I think it's less bullet dodging and more a better reading of the market (something they're good at).

To answer your earlier question — PS3 players already had that experience. Question from me — why does that matter?
 

DaGwaphics

Member
Imagine if devs stated they were having trouble developing games on PS5. How would that make you feel as PS5 players?

Sony really dodged a bullet here, despite the pressure from the media early in the gen, when they were criticizing PlayStation for not having a Series S equivalent.

Statistically, with all of the third-party games released across current-gen PS and Xbox, and just that one game being delayed on Xbox because of the XSS, the numbers are quite good in MS's favor there. Must be like 99.9999% no problem. :messenger_tears_of_joy:

The issue is more a desperate fanboy problem than anything else.
 

JackMcGunns

Member
Series S has better CPU and storage but weaker GPU, less RAM and less memory bandwidth.


It does NOT have a weaker GPU than the X1X; the RDNA 2 GPU in Series S outperforms it. It also has adequate bandwidth and memory for the 1080p-to-1440p games it was designed for. The X1X was pushing for 4K, so 12 GB is less adequate for 4K than 10 GB is for 1080p/1440p. The hypocrisy is spilling over into other aspects now. Are we talking efficient, adequate numbers vs raw performance or not??
 

Clear

CliffyB's Cock Holster
It’s always been a strategy problem with Microsoft.

The hardware features that make Series consoles special simply aren’t being utilized. If every game was designed to make use of SFS, VRS2, DirectML, Mesh Shaders etc. the power disparity would be obvious.

The SDK itself is intended to help developers release on both Xbox and PC. There is no incentive whatsoever to integrate the most advanced Series-only features. That's on Microsoft and Phil Spencer's crusade of developer appeasement. Even 1st party devs don't use these features.

Microsoft should have just built a simpler box designed to brute force everything. The Series X hardware is almost wasted as everything special about it, that gives it an identity is being ignored.

So, who's using all these amazing technical "features" anywhere?

Seems to me that if they were as effective as advertised there'd be broad adoption across PC at the very least. Hell, look at the limited utilization of DirectStorage, a tech that seems like the easiest of easy wins, and yet a port of a PS5 game made by a Sony studio is getting attention for its implementation in mid 2023.
 
So, who's using all these amazing technical "features" anywhere?

Seems to me that if they were as effective as advertised there'd be broad adoption across PC at the very least. Hell, look at the limited utilization of DirectStorage, a tech that seems like the easiest of easy wins, and yet a port of a PS5 game made by a Sony studio is getting attention for its implementation in mid 2023.

I laughed at his mention of VRS2... the same feature that saw almost ubiquitous utilization at the beginning of the gen in cross-gen games, until it didn't because the results were largely shit and inferior to pure software-based custom solutions that can vary shading rate with variable granularity (source: see the COD dev presentation).
 

sinnergy

Member
I laughed at his mention of VRS2... the same feature that saw almost ubiquitous utilization at the beginning of the gen in cross-gen games, until it didn't because the results were largely shit and inferior to pure software-based custom solutions that can vary shading rate with variable granularity (source: see the COD dev presentation).
It was mostly implemented like shit... The Coalition showed how it's done.
 

Bojji

Member
It does NOT have a weaker GPU than the X1X; the RDNA 2 GPU in Series S outperforms it. It also has adequate bandwidth and memory for the 1080p-to-1440p games it was designed for. The X1X was pushing for 4K, so 12 GB is less adequate for 4K than 10 GB is for 1080p/1440p. The hypocrisy is spilling over into other aspects now. Are we talking efficient, adequate numbers vs raw performance or not??

RDNA2 doesn't have a big enough IPC jump to compensate for the TF difference, plus without enough RAM bandwidth (and ROPs, TMUs etc.) the GPU performs worse overall. Memory bandwidth helps in ALL situations; it's more needed at higher resolutions, but even at 1080p games can easily max out the slower memory of GPUs (with alpha effects, for example). Xbox Series S is just stupid; it should be at least on par with the One X in terms of memory and GPU, and that would be much better for everyone.
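To put rough paper numbers on that comparison (commonly quoted specs, theoretical peaks only; none of this captures the RDNA2-vs-Polaris IPC question or the Series S split memory pool):

```python
# Paper specs only; this ignores architecture/IPC, caches and the
# Series S split memory pool, which is exactly the point being argued.
specs = {
    "Series S (RDNA2)": dict(cus=20, clock=1.565, rops=32, bw_gbs=224),
    "One X (Polaris)":  dict(cus=40, clock=1.172, rops=32, bw_gbs=326),
}

for name, s in specs.items():
    tf = s["cus"] * 64 * 2 * s["clock"] / 1000   # FP32 TFLOPS
    fill = s["rops"] * s["clock"]                # Gpixels/s
    print(f"{name}: {tf:.1f} TF, {fill:.1f} GP/s fill, {s['bw_gbs']} GB/s")
# Series S: ~4.0 TF, ~50 GP/s, 224 GB/s (fast pool)
# One X:    ~6.0 TF, ~37.5 GP/s, 326 GB/s
```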
 

Lysandros

Member
The belief that the PS5 is performing better than its specs is just a case of many not understanding how similar these two GPUs are to each other. They try to compare to PC parts with a similar TF difference, but in the PC world all aspects of the GPU typically move upward with each improvement. On paper both of these console GPUs have distinct statistical advantages, the fact they perform similarly is the expected result for them.
This, 100%. I've been saying it since the very beginning. For some very peculiar reason, this seems to be a very hard notion for the majority to grasp, though.
 

Panajev2001a

GAF's Pleasant Genius
It’s always been a strategy problem with Microsoft.

The hardware features that make Series consoles special simply aren’t being utilized. If every game was designed to make use of SFS, VRS2, DirectML, Mesh Shaders etc. the power disparity would be obvious.

The SDK itself is intended to help developers release on both Xbox and PC. There is no incentive whatsoever to integrate the most advanced Series-only features. That's on Microsoft and Phil Spencer's crusade of developer appeasement. Even 1st party devs don't use these features.

Microsoft should have just built a simpler box designed to brute force everything. The Series X hardware is almost wasted as everything special about it, that gives it an identity is being ignored.
Yes and no: some of their devs did take advantage of the higher degree of customisation for RT compared to DXR on PC, and DirectStorage seems to be getting far more use on Xbox than on PC so far. Xbox Series X is also late-2020 hardware designed for a $499 price point (the design was probably finalised about a year before that, more or less).

Still, it is possible that taking advantage of all the unit's custom bells and whistles comes at a price when it is not what is leading your development (with PS4 Pro vs Xbox One X, the former was severely held back by devs not designing games purely around its unique features).
 
Great story. It just omits Sony's genius with its biggest feature, the SSD I/O, which is highly optimized and makes streaming data so much more efficient while removing the bottlenecks that make even a fast SSD perform way slower.
Ooohhh Riky, only you would lol that post… and you did 😁. I really feel for you, now that MS is making bad decision after bad decision.

Big Hero GIF


It’ll all be okay, hang in there pal…
 
It does NOT have a weaker GPU than the X1X; the RDNA 2 GPU in Series S outperforms it. It also has adequate bandwidth and memory for the 1080p-to-1440p games it was designed for. The X1X was pushing for 4K, so 12 GB is less adequate for 4K than 10 GB is for 1080p/1440p. The hypocrisy is spilling over into other aspects now. Are we talking efficient, adequate numbers vs raw performance or not??
It has never been proven; it's just conjecture from PR promises.

The rare benchmarks comparing flops of RDNA vs Polaris all showed clearly that 4 TF RDNA < 6 TF Polaris.
 
So, who's using all these amazing technical "features" anywhere?

Seems to me that if they were as effective as advertised there'd be broad adoption across PC at the very least. Hell, look at the limited utilization of DirectStorage, a tech that seems like the easiest of easy wins, and yet a port of a PS5 game made by a Sony studio is getting attention for its implementation in mid 2023.

Who is using these amazing 'features'? That's exactly my point. They aren't being utilised. Your implication that they aren't 'effective' is ill-considered. Do you understand what is possible with DirectML? SFS itself is amazing tech that would completely eliminate current RAM bottlenecks.

As always, it's about support. It's about development resources. It's about sales. It's because releases via the SDK typically have to support PC. It's much easier to be less tailored to a particular device than to use features that haven't yet been adopted by the mass market. Isn't that obvious?
 

PaintTinJr

Member
I don't believe the PS5 is particularly special, no. The primary bottleneck for both systems is the CPU design.

The belief that the PS5 is performing better than its specs is just a case of many not understanding how similar these two GPUs are to each other. They try to compare to PC parts with a similar TF difference, but in the PC world all aspects of the GPU typically move upward with each improvement. On paper both of these console GPUs have distinct statistical advantages, the fact they perform similarly is the expected result for them.

The reality is that MS created the smaller more efficient system and hit their performance target. Sony did likely create the cheaper to produce system which from a business perspective is a big win for them.
I would agree with you if the AlphaPoint demo by the Coalition had measured up to the first UE5 showing.

It has only just occurred to me - coming back to Nanite kit-bashing - that the inability of the XSX to handle real-time kit-bashing with Nanite at a good speed is a bit of a red flag for liberal Megascan use on XSX - without kit-bashing offline to pre-calculate a result - and maybe that explains why so little of what the Coalition showed used Megascans, and why it used 100x less geometry than the PS5 demo.

Cerny mentioned the custom geometry engine in the Road to PS5 without much detail, and in all the acronym tweet exchanges at launch about "RDNA 1.5", one of the pro-PS5 tweets - I think it was from Cerny himself - mentioned culling geometry before it even enters the vertex pipeline.

I now believe he was referring to the real-time kit-bashing capability of the PS5 used in the UE5 demo. In essence, kit-bashing is like using a Boolean modifier in Blender: two models have a precalculated BVH, and the kit-bashing combines the two BVHs and returns the required set-theory answer, such as union, intersection or difference, as a resultant BVH and the geometry it represents.

If what I'm leaning towards is the case, then that is a huge advantage: use Megascans liberally on PS5 and improve in-game development iteration, reduce size on disk - from not pre-calculating kit-bashing results - not to mention the in-game flexibility to move and intersect many Megascans on screen and get perfect results in real time. And all without wasting draw calls, FLOPS and memory bandwidth transforming all the eventually-culled mesh parts, only for those vertices to get discarded later in the vertex/geometry pipelines.
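For what it's worth, the set-theory part of kit-bashing is easy to illustrate in isolation. A real mesh/BVH boolean is far more involved, but a minimal sketch using signed distance functions - purely illustrative, and nothing to do with how Nanite or the PS5 geometry engine actually implement anything - shows the three operations:

```python
# Set-theory combination of two solids via signed distance functions:
# negative = inside, positive = outside. Purely illustrative.
import math

def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def union(a, b):        return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersection(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def difference(a, b):   return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

a = sphere(0.0, 0.0, 0.0, 1.0)
b = sphere(0.5, 0.0, 0.0, 1.0)

p = (0.25, 0.0, 0.0)   # a point inside both source shapes
for name, f in [("union", union(a, b)),
                ("intersection", intersection(a, b)),
                ("difference a-b", difference(a, b))]:
    print(name, "inside" if f(*p) < 0 else "outside")
# union: inside, intersection: inside, difference a-b: outside
```

Doing the equivalent over two full BVH-organised triangle sets, per frame, is the expensive part being speculated about here.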
 
I would agree with you if the AlphaPoint demo by the Coalition had measured up to the first UE5 showing.

It has only just occurred to me - coming back to Nanite kit-bashing - that the inability of the XSX to handle real-time kit-bashing with Nanite at a good speed is a bit of a red flag for liberal Megascan use on XSX - without kit-bashing offline to pre-calculate a result - and maybe that explains why so little of what the Coalition showed used Megascans, and why it used 100x less geometry than the PS5 demo.

Cerny mentioned the custom geometry engine in the Road to PS5 without much detail, and in all the acronym tweet exchanges at launch about "RDNA 1.5", one of the pro-PS5 tweets - I think it was from Cerny himself - mentioned culling geometry before it even enters the vertex pipeline.

I now believe he was referring to the real-time kit-bashing capability of the PS5 used in the UE5 demo. In essence, kit-bashing is like using a Boolean modifier in Blender: two models have a precalculated BVH, and the kit-bashing combines the two BVHs and returns the required set-theory answer, such as union, intersection or difference, as a resultant BVH and the geometry it represents.

If what I'm leaning towards is the case, then that is a huge advantage: use Megascans liberally on PS5 and improve in-game development iteration, reduce size on disk - from not pre-calculating kit-bashing results - not to mention the in-game flexibility to move and intersect many Megascans on screen and get perfect results in real time. And all without wasting draw calls, FLOPS and memory bandwidth transforming all the eventually-culled mesh parts, only for those vertices to get discarded later in the vertex/geometry pipelines.

This is simply a software problem, not a hardware one.

It is reasonable to suggest, given the close ties between Epic and Sony, that there has been a considerable effort to integrate and optimise UE5 on the PlayStation side. It is also worth noting that there has yet to be a single UE5 game released at the fidelity level shown in the PS5 tech demo. Meaning the point you made is moot, as there is nothing comparable released on either system with which to test your theories. Tech demos are just that, tech demos.

In regard to the Coalition tech demo, you are ignoring the time constraints possibly placed upon the studio. The missing features you point to may simply be due to a lack of time or experience with the engine. It's unreasonable to suggest anything until we see more.
 

sachos

Member
I like Oliver's answer, a pretty balanced take. It's not 100% useless, you just have to be careful when quoting it. The PS5 has roughly 15% fewer TF than the XSX, but its clock speed is higher, meaning a lot of other metrics are better on PS5; the memory arrangement and bandwidth play a role too.
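A quick back-of-the-envelope with the published figures (PS5: 36 CUs at up to 2.23 GHz, 64 ROPs; XSX: 52 CUs at 1.825 GHz, 64 ROPs; theoretical peaks only) shows why the headline TF gap doesn't carry over to every metric:

```python
# Clock-derived theoretical peaks for both consoles (public figures only).
def peaks(cus, clock_ghz, rops, tmus_per_cu=4):
    return {
        "TFLOPS (FP32)": cus * 64 * 2 * clock_ghz / 1000,
        "GTexels/s":     cus * tmus_per_cu * clock_ghz,
        "GPixels/s":     rops * clock_ghz,
    }

ps5 = peaks(36, 2.23, 64)
xsx = peaks(52, 1.825, 64)

for metric in ps5:
    print(f"{metric}: PS5 {ps5[metric]:.1f} vs XSX {xsx[metric]:.1f}")
# Compute and texture rate favour XSX by ~18%, but pixel fill (and anything
# else tied purely to clock, like rasteriser/command-processor rates)
# favours PS5 by ~22%.
```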
 

shamoomoo

Member
I would agree with you if the AlphaPoint demo by the Coalition had measured up to the first UE5 showing.

It has only just occurred to me - coming back to Nanite kit-bashing - that the inability of the XSX to handle real-time kit-bashing with Nanite at a good speed is a bit of a red flag for liberal Megascan use on XSX - without kit-bashing offline to pre-calculate a result - and maybe that explains why so little of what the Coalition showed used Megascans, and why it used 100x less geometry than the PS5 demo.

Cerny mentioned the custom geometry engine in the Road to PS5 without much detail, and in all the acronym tweet exchanges at launch about "RDNA 1.5", one of the pro-PS5 tweets - I think it was from Cerny himself - mentioned culling geometry before it even enters the vertex pipeline.

I now believe he was referring to the real-time kit-bashing capability of the PS5 used in the UE5 demo. In essence, kit-bashing is like using a Boolean modifier in Blender: two models have a precalculated BVH, and the kit-bashing combines the two BVHs and returns the required set-theory answer, such as union, intersection or difference, as a resultant BVH and the geometry it represents.

If what I'm leaning towards is the case, then that is a huge advantage: use Megascans liberally on PS5 and improve in-game development iteration, reduce size on disk - from not pre-calculating kit-bashing results - not to mention the in-game flexibility to move and intersect many Megascans on screen and get perfect results in real time. And all without wasting draw calls, FLOPS and memory bandwidth transforming all the eventually-culled mesh parts, only for those vertices to get discarded later in the vertex/geometry pipelines.
Maybe you are on to something. Do people remember that tweet by a Sony graphic engineer?

I might be reading more into the tweet than what's really there because of his English, but it still seems puzzling.



Having inevitably ended up in the midst of a fierce controversy, the engineer clarified his statements to make people understand exactly how things stand. His new messages, which are also private and were unfortunately shared on social media, are very interesting:

"RDNA 2 is a commercial theme to simplify the market, otherwise GPUs with completely random features would come out and it would be difficult for the average user to choose," wrote Leonardi.

"For example, support for ray tracing is not present in any AMD GPU currently on the market. (...) The PlayStation 5 GPU is unique, it cannot be classified as RDNA 1, 2, 3 or 4."

"It is based on RDNA 2, but it has more features and, it seems to me, one less. That message turned out badly, I was tired and I shouldn't have written the things I wrote", continued the engineer, complaining that he received insults for his statements.

We know the cache scrubbers are unique to the PS5 and that it lacks Infinity Cache, so what are these "more" features in comparison to RDNA2?
 
Last edited:
Maybe you are on to something. Do people remember that tweet by a Sony graphic engineer?

I might be reading more into the tweet than what's really there because of his English, but it still seems puzzling.



Having inevitably ended up in the midst of a fierce controversy, the engineer clarified his statements to make people understand exactly how things stand. His new messages, which are also private and were unfortunately shared on social media, are very interesting:

"RDNA 2 is a commercial theme to simplify the market, otherwise GPUs with completely random features would come out and it would be difficult for the average user to choose," wrote Leonardi.

"For example, support for ray tracing is not present in any AMD GPU currently on the market. (...) The PlayStation 5 GPU is unique, it cannot be classified as RDNA 1, 2, 3 or 4."

"It is based on RDNA 2, but it has more features and, it seems to me, one less. That message turned out badly, I was tired and I shouldn't have written the things I wrote", continued the engineer, complaining that he received insults for his statements.

We know the cache scrubbers are unique to the PS5 and that it lacks Infinity Cache, so what are these "more" features in comparison to RDNA2?

Fuck the pigeon guy. What he did to that engineer was disgusting.
 

Lysandros

Member
I like Oliver's answer, a pretty balanced take. It's not 100% useless, you just have to be careful when quoting it. The PS5 has roughly 15% fewer TF than the XSX, but its clock speed is higher, meaning a lot of other metrics are better on PS5; the memory arrangement and bandwidth play a role too.
Did Oliver really say this in the video?
 

Lysandros

Member
Maybe you are on to something. Do people remember that tweet by a Sony graphic engineer?

I might be reading more into the tweet than what's really there because of his English, but it still seems puzzling.



Having inevitably ended up in the midst of a fierce controversy, the engineer clarified his statements to make people understand exactly how things stand. His new messages, which are also private and were unfortunately shared on social media, are very interesting:

"RDNA 2 is a commercial theme to simplify the market, otherwise GPUs with completely random features would come out and it would be difficult for the average user to choose," wrote Leonardi.

"For example, support for ray tracing is not present in any AMD GPU currently on the market. (...) The PlayStation 5 GPU is unique, it cannot be classified as RDNA 1, 2, 3 or 4."

"It is based on RDNA 2, but it has more features and, it seems to me, one less. That message turned out badly, I was tired and I shouldn't have written the things I wrote", continued the engineer, complaining that he received insults for his statements.

We know the cache scrubbers are unique to the PS5 and that it lacks Infinity Cache, so what are these "more" features in comparison to RDNA2?
Well, the hardware ID buffer could qualify (inherited from the PS4 Pro), and also the Tempest engine.
 

onQ123

Member
I remember the TFLOPS war on GAF pre-launch of Series X and PS5. Whoof, some sick amount of FUD that was spread about PS5 and an equal amount of chest pumping by the green tea party. Not a good look now.
 
Well said. Xbox wanted to have a bigger TF number, regardless of what needed to be sacrificed in order to achieve it.

DF lost my respect years ago with all the damage control they did for the Xbox One. Now it seems that they're doing preemptive damage control before the arrival of a PS5 Pro.

DF has always been best at patting these companies on the back. I can't even watch their videos any more for this reason. They just want to stay in the good graces of MS and Sony. The only person there who has any backbone, believe it or not, is Alex when it comes to PC, but he's annoying af with his not-so-covert Master Race stuff.

People always want to stand up for John, because he makes some great videos, but he has the least backbone out of anyone. No real sharp criticism from him ever. Always looking "at the positive side" of these disappointing cross-gen big exclusives instead of laying into these pubs the way he should.

If a big-name channel like DF actually had the balls to call out these devs, they might have done some real good in getting these companies to stop releasing such disappointing cross-gen graphics. Richard doesn't ever talk harshly either when it's needed.

PS - NX Gamer is the same exact way as DF. I just read a tweet from him where he thinks Insomniac will "match or exceed" Spider-Man 2's reveal trailer, except... we've seen gameplay now and it's not looking anywhere near as good, and it can't possibly improve that much in 2 months. He works for IGN now too, so that says it all. They're like extra mouthpieces for these companies and have been for a while.
 

CGNoire

Member
DF has always been best at patting these companies on the back. I can't even watch their videos any more for this reason. They just want to stay in the good graces of MS and Sony. The only person there who has any backbone, believe it or not, is Alex when it comes to PC, but he's annoying af with his not-so-covert Master Race stuff.

People always want to stand up for John, because he makes some great videos, but he has the least backbone out of anyone. No real sharp criticism from him ever. Always looking "at the positive side" of these disappointing cross-gen big exclusives instead of laying into these pubs the way he should.

If a big-name channel like DF actually had the balls to call out these devs, they might have done some real good in getting these companies to stop releasing such disappointing cross-gen graphics. Richard doesn't ever talk harshly either when it's needed.

PS - NX Gamer is the same exact way as DF. I just read a tweet from him where he thinks Insomniac will "match or exceed" Spider-Man 2's reveal trailer, except... we've seen gameplay now and it's not looking anywhere near as good, and it can't possibly improve that much in 2 months. He works for IGN now too, so that says it all. They're like extra mouthpieces for these companies and have been for a while.
I swear money corrupts everyone.
 

sinnergy

Member
Not really.

Even the Gears implementation wasn't perfect and had artifacts. It was only that there wasn't a PS version available to compare it to.
It was well done.. (even at the right distance). There are numerous implementations that apply VRS to everything in view without implementing it correctly, thus blurring the image. As with everything involving compression, you get artifacts.. look at all the current upscaling tech, for example. And most of these artifacts are only visible at 400x zoom.. just enjoy the game, not the static pics.
 
Last edited:

PaintTinJr

Member
This is simply a software problem, not a hardware one.

It is reasonable to suggest, given the close ties between Epic and Sony, that there has been a considerable effort to integrate and optimise UE5 on the PlayStation side. It is also worth noting that there has yet to be a single UE5 game released at the fidelity level shown in the PS5 tech demo. Meaning the point you made is moot, as there is nothing comparable released on either system with which to test your theories. Tech demos are just that, tech demos.

In regard to the Coalition tech demo, you are ignoring the time constraints possibly placed upon the studio. The missing features you point to may simply be due to a lack of time or experience with the engine. It's unreasonable to suggest anything until we see more.
If that is the case, it is very much a hardware feature, the custom geometry engine, providing the solution. The gulf in geometry used was at least 100x.

Do you honestly think they would have used that demo to inform other devs about UE5 on XSX - as is - if, given more time, they could have got 1 billion polys in the demo with a superior 50:50 split between modelled geometry and Megascans, rather than just 10%?
 
If that is the case, it is very much a hardware feature, the custom geometry engine, providing the solution. The gulf in geometry used was at least 100x.

Do you honestly think they would have used that demo to inform other devs about UE5 on XSX - as is - if, given more time, they could have got 1 billion polys in the demo with a superior 50:50 split between modelled geometry and Megascans, rather than just 10%?
Epic themselves built the initial PS5 demo.

You are inferring too much. On PC we have seen poly counts similar to the PS5 tech demo running on hardware older than what's in the PS5 and Series consoles.
 

PaintTinJr

Member
Epic themselves built the initial PS5 demo.

You are inferring too much. On PC we have seen poly counts similar to the PS5 tech demo running on hardware older than what's in the PS5 and Series consoles.
How? The purpose of Megascans is not to remodel them but to slam them together, typically reusing a small number, to create unique-looking results from the natural scan complexity, with mind-boggling polygon counts in the result - or to use them as individual models as-is, in isolation.

The Coalition effectively conceded that using Megascans dynamically in the former way is very limited on XSX, and multi-platform UE5 games do look limited, like their demo. And, as I tried to discuss, the launch-era argument about geometry culling - pre-culling via the custom geometry engine - would both back up the claim and shed light on why the technical tweets of the time were so opaque.
 
Last edited:

Lysandros

Member
I guess, if you consider the Tempest engine unique. Supposedly the TE is just TrueAudio Next repurposed for the PS5.
Based on Cerny's Road to PS5 presentation, this is custom-designed hardware. It's a SIMD/SPU-like processor inspired by the Cell B.E.'s sound capabilities, based on a CU stripped of its caches and running only two wavefronts, with DMA. Do PC RDNA2 cards and the XSX have such a component specifically?
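For a sense of scale, one CU-sized SIMD block at the PS5's GPU clock works out to roughly the following (assuming the standard 64 FP32 lanes and 2 ops per lane per clock; whether Tempest actually runs at that clock isn't public, so treat it as an order-of-magnitude figure):

```python
# Rough peak of one CU-sized SIMD block at an assumed ~2.23 GHz clock.
lanes, ops_per_lane, clock_ghz = 64, 2, 2.23
gflops = lanes * ops_per_lane * clock_ghz
print(f"~{gflops:.0f} GFLOPS FP32 peak for a single CU-sized unit")
# -> ~285 GFLOPS
```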
 
Last edited: