Doesn't Microsoft usually do something to try making sure 3P games are running adequately on their systems (relative to competition) if they have marketing rights to them? And wouldn't they want to do that for a game in their service Day 1, that is multiplat and could help attract subscribers to their service?
If this isn't something Microsoft does, then they might want to change their stance, because having the worst-performing version of a multiplat that's in Game Pass Day 1 just reinforces negative perceptions of the service: that it's only good for lower-quality games, or that it drags down the quality of games across platforms, and that if you want the better-performing version of the game, you'd better be ready to pay upfront for it.
MS kind of defeat the appeal of Game Pass, in that sense.
Lol Deathloop came out on Xbox a whole year after it hit PS5. Also it's a 1P game and the team had over a year's worth of time to optimize the Xbox versions.
Microsoft had actual incentives to make sure the Xbox version ran better. Doesn't seem like they felt those incentives were here for Atomic Heart.
I didn't outright say the game running worse on Series X was a problem with the hardware. I even said that of the two possibilities, I'm more inclined to believe it was Microsoft not providing any technical assistance to the developer.
What I want to know is why they didn't provide any technical assistance, knowing they'd secured the game for Day 1 in Game Pass. They nullified a lot of the impact of that through the performance issues, you can't deny this. Do Microsoft only provide that type of support for games that are otherwise exclusive or timed exclusives? Or for 1P games coming later to Xbox, so they don't create the bad optics of their own 1P games running worse on Xbox compared to PlayStation?
All I'm saying is, maybe for 3P games they get Day 1 in Game Pass, they should actually provide technical support for the devs to make sure the Xbox version is at least on par with the PlayStation version at launch, especially if it's a smaller dev who may be strapped for resources and could use the extra ones on the Xbox version (if they're busy on the PlayStation and/or PS & PC versions beforehand).
How many are current-gen only? Even Plague Tale: Requiem got a port to the Switch, so that wouldn't count. If you have some examples, you could share them.
FH5 has terrible LOD pop-in issues probably due to how its engine handles asset streaming, and those problems exist on all versions. Also it doesn't have in-game RT. And Forza Motorsport's physics engine is a lot more involved than Forza Horizon 5's.
XSX is more powerful
on paper; in practice it really comes down to how much a given game is going to benefit from the extra GPU bandwidth and compute-driven tasks. Otherwise I'd say PS5 beats it in most areas that are more critical to gaming (pixel fillrate, GPU cache bandwidth, geometry culling & rasterization, I/O, (assumed) lower OS overhead, more refined APIs, audio). Also the GPU cache scrubbers help reduce the amount of trips it needs to make out to system RAM, and it has dedicated hardware for enforcing cache coherency.
The cache scrubbers nullify some of the Series X's raw bandwidth advantage, and the cache coherency being offloaded from the CPU in PS5 also helps there, reducing Series X's (very slight) CPU advantage, and bandwidth advantage even more. For all intents and purposes the two systems are about equal in "power".
Wasn't that just Richard speculating, not something confirmed? Anyway, the issue with Series X's memory is that if a dev needs more than 10 GB for graphics data, they either have to carve out some of the other 3.5 GB to hold the rest (and they'd need two reserves: one for the extra data and one to act as a scratchpad, if they don't want to evict data from the 10 GB altogether and re-fetch it from storage), or access the data from storage directly.
In the case of a texture miss, the Series X (and Series S) GPU has a mip-blending feature that's supposed to fill in for the missing texture until it's fetched and placed in memory, but that's obviously a fallback, not something devs are meant to rely on. And if the game needs GPU data beyond what's in the 10 GB, even if it's sitting in the other 3.5 GB, accessing it drops the GPU's effective system memory bandwidth.
We don't know by how much; it would probably vary from game to game. But you wouldn't see the GPU utilizing its full theoretical bandwidth of 560 GB/s if it has to fetch data from the other 3.5 GB and/or storage that isn't in the 10 GB. I'm not even 100% sure the Series X CAN access the other 3.5 GB, but given MS calls the 10 GB "GPU optimized", I'm assuming that the GPU can. It's just up to the developer and how they want to manage access (keeping in mind the drawbacks).
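To put rough numbers on it, here's a toy model (my own simplification, nothing Microsoft has published) of how interleaving GPU traffic between the 560 GB/s pool and the 336 GB/s pool drags down effective bandwidth. The split fractions are purely illustrative:

```python
# Toy model: effective GPU bandwidth on Series X when traffic is split
# between the two memory pools. Pool speeds are the published figures;
# the traffic-split fractions below are my own illustrative assumption.
FAST_POOL_GBPS = 560.0   # 10 GB "GPU optimized" pool
SLOW_POOL_GBPS = 336.0   # remaining 6 GB pool (3.5 GB game-usable)

def effective_bandwidth(frac_slow: float) -> float:
    """Time-weighted (harmonic) average: for each GB moved,
    `frac_slow` of it comes from the slower pool."""
    time_per_gb = (1 - frac_slow) / FAST_POOL_GBPS + frac_slow / SLOW_POOL_GBPS
    return 1.0 / time_per_gb

for f in (0.0, 0.1, 0.25, 0.5):
    print(f"{int(f * 100):>3}% slow-pool traffic -> {effective_bandwidth(f):.0f} GB/s")
# -> 560, 525, 480, and 420 GB/s respectively
```

So even a modest amount of traffic landing in the slower pool pulls the GPU noticeably below its 560 GB/s peak, which is the drawback devs have to manage.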
Are you sure that's due to the bandwidth? Or the type of GDDR Series S uses? Don't forget the role of the GPU in all of this.
The GDDR type doesn't matter as much if, in the case of Series S, it has a factually lower bandwidth than the One X. The only thing it could possibly help with is latency, but newer GDDR chips tend to have slightly higher latency than their previous generations: as the chip makers keep pushing bandwidth up each generation, they trade away some latency to do so.
In other words, Series S probably has GDDR6 with slightly more latency than the GDDR5 the One X used. Same goes for Series X and PS5, but the reason it hurts Series S is that it has both less total bandwidth (224 GB/s vs 326 GB/s) AND less memory capacity: roughly 8 GB available on Series S (maybe closer to 6-7 GB for the GPU after taking out what the CPU & audio use) vs 9 GB on One X (closer to 7-8 GB after CPU & audio use, though MS freed up some additional memory for devs a little later).
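Quick back-of-the-envelope on those two deficits, using the published bandwidth specs; the game-available capacities are approximate since OS reservations shifted over time:

```python
# Series S vs One X memory comparison from the figures quoted above.
# Bandwidths are published specs; capacities available to games are approximate.
series_s_bw, one_x_bw = 224.0, 326.0   # GB/s
series_s_mem, one_x_mem = 8.0, 9.0     # GB available to games (approx.)

bw_deficit = 1 - series_s_bw / one_x_bw    # ~31% less bandwidth
mem_deficit = 1 - series_s_mem / one_x_mem  # ~11% less capacity
print(f"Series S bandwidth deficit vs One X: {bw_deficit:.0%}")
print(f"Series S capacity deficit vs One X:  {mem_deficit:.0%}")
```

That's a double-digit percentage hit on both axes versus a last-gen machine, which is why Series S ports targeting One X-class assets can struggle.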
I mean in practice this SHOULD be the case and manifest in the games, and I'm sure these same advantages exist with Series X.
But none of that seems to be helping Atomic Heart perform up to expectations on Series X, and I'd like to know why. Either MS didn't provide technical assistance (for whatever reason), or, as aries_71 said, the devs just didn't give a shit about QA'ing the Xbox versions (all the more reason why Microsoft should have stepped in).
I doubt it's down to architecture issues, but if people don't want to entertain MS not providing support, or even entertain the devs pissing away QA on the Series X & S versions... that doesn't leave a lot of other possibilities.