
Let's Design The Mid-Gen Refreshes, Part 1: OBTAINING SOME CRITICAL PS5 DATA

VFXVeteran

Banned
Very good thread and enjoyable read. I agree with most of your assumptions, though my suspicion is that that whole conversation about “continuous boost” is smoke & mirrors, not altogether different from some of the claims made about the Cell back in 2005.

The reason why we got a mid-gen refresh last time had to do with the fact, I believe, that both the X360 and the PS3 stuck around for almost eight years (2005-2013), for reasons not worth getting into in this particular thread, so the following generation came out underpowered (1.3/1.8 TFLOPs) just as display technology began to shift. Not sure if we’ll see the same movement from the major players this time around.

Agree. I'm actually very doubtful there will be one. Many assume it just because of last gen without looking at the reasons why. Before that, there was no mid-gen refresh.
 

Elog

Member
While it can't be denied Sony have developed a really capable SSD I/O solution, I think some people make the mistake of believing it sits head-and-shoulders above what other companies are providing in either their own next-gen systems or next-gen PC GPUs. That is not actually the case. There will be some small differences and edge cases where certain solutions provide better results here and there, but nothing where any one solution routinely outperforms the others in the area of data I/O. All of the various approaches are immensely capable and backed by MANY years of R&D and, in some cases, actual integration into real-world products within the data markets. Here's a post from dobwal on B3D that gives a glimpse of how much real-world R&D these companies have invested into solving data I/O bottlenecks:

There are MANY extremely valid approaches to solving data I/O bottlenecks, and Sony's is just one of them. Companies like Microsoft and Nvidia have invested tons into technologies that address many of these same things, and it's not a stretch to assume they have integrated a lot of that into their next-gen gaming offerings. So I don't know how true the notion is that Sony's I/O solution is so far ahead of everything else that no one else understands it yet, though I do agree that (just like on the Series systems, at least) it'll mainly be the 1st-party titles that best show off the capabilities of the system data I/O solutions.

(the following assumes that the reported numbers are correct regarding the PS5's I/O capabilities)

PS5's I/O capabilities massively outperform other available solutions. But not because there is any secret sauce. It is because there is a dedicated hardware path from the SSD to the VRAM and GPU caches that can operate without CPU involvement. That is it.

I am certain similar solutions are being discussed in the PC space, but because of the number of companies involved - which also need to agree on standards - it will take time.

The example you bring up is about compression. That is only one piece of the I/O puzzle, and right now it is not even the key piece that slows the PC down - the key piece of the problem is that the CPU still controls the PCI bus. Any bit that ends up in the hardware domain of your GPU needs to pass through your CPU under the umbrella of some sort of API - that adds latency and throughput limitations. The former is the larger problem of the two.

Software-based (CPU-driven) I/O will of course continue to evolve, but without a dedicated CPU-independent hardware path there are clear limitations to what can be achieved.
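A toy model of that latency-vs-throughput point (every number here is made up purely for illustration; none of them are real hardware figures):

```python
# Toy latency model comparing a CPU-mediated read (SSD -> system RAM -> VRAM,
# with an API call per hop) against a hypothetical direct SSD -> VRAM path.
# All figures are illustrative assumptions, not real specs.

def staged_path_ms(size_gb, ssd_gbps=7.0, pcie_gbps=16.0, api_overhead_ms=0.5, hops=2):
    """CPU-driven path: the data crosses the bus twice and pays API overhead per hop."""
    transfer_ms = (size_gb / ssd_gbps + size_gb / pcie_gbps) * 1000
    return transfer_ms + hops * api_overhead_ms

def direct_path_ms(size_gb, ssd_gbps=7.0):
    """Dedicated hardware path: one transfer, no CPU/API involvement."""
    return (size_gb / ssd_gbps) * 1000

for size_gb in (0.001, 0.01, 0.1):  # small streaming requests, in GB
    print(f"{size_gb * 1000:>4.0f} MB  staged: {staged_path_ms(size_gb):6.2f} ms  "
          f"direct: {direct_path_ms(size_gb):6.2f} ms")
```

For the tiny, frequent reads games issue while streaming, the fixed per-request API/CPU overhead dominates, which is the latency point; for big sequential reads it's mostly a throughput question.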
 
(the following assumes that the reported numbers are correct regarding the PS5's I/O capabilities)

PS5's I/O capabilities massively outperform other available solutions. But not because there is any secret sauce. It is because there is a dedicated hardware path from the SSD to the VRAM and GPU caches that can operate without CPU involvement. That is it.

I am certain similar solutions are being discussed in the PC space but because of the number of companies involved - that also need to agree regarding standards - it will take time.

The example you bring up is about compression. That is only one piece of the I/O puzzle, and right now it is not even the key piece that slows the PC down - the key piece of the problem is that the CPU still controls the PCI bus. Any bit that ends up in the hardware domain of your GPU needs to pass through your CPU under the umbrella of some sort of API - that adds latency and throughput limitations. The former is the larger problem of the two.

Software-based (CPU-driven) I/O will of course continue to evolve, but without a dedicated CPU-independent hardware path there are clear limitations to what can be achieved.

Elog, we've already discussed this multiple times and you've been shown where you're wrong multiple times as well, so I'm honestly not interested in doing the same dance again. There's plenty of evidence that putting one solution on a pedestal far above the others is fundamentally false, but you keep telling yourself otherwise.

As well, the point of this thread was never about discussing I/O solutions; it was just something I sought to elucidate from RaySoft's post. If you were not so determined (wrongly, imo) to value one particular solution head-and-shoulders above the others in this department for dubious reasons, you could see that almost every concern you bring up regarding I/O bottlenecks already has solutions, in some form, in practice in other parts of the computing/data markets. These aren't amateur companies just because their name isn't Sony.

Agree. I'm actually very doubtful there will be one. Many assume it just because of last gen without looking at the reasons why. Before that, there was no mid-gen refresh.

Still personally think there'll be some kind of mid-gen refresh, but it won't be focused on raw power. Doesn't need to be IMHO. Great Hair mentioned something about the 3090 seemingly struggling to do 4K60 with Ultra settings and RT on some titles. Dunno how true that is, I haven't watched any 3090 benchmark tests.

But even if that's the case, I don't think you need more power to do that stuff consistently. If that were the case we'd be using 100 TF Fermi cards today. Or, as another example, it's like having a top-of-the-line computer from 2008 struggling to play a 4K60 YouTube video while some cheap modern tablet has no problem with it.

Performance gains going forward, IMHO, are going to come from a lot of things aside from raw power increases or node shrinks.

They've always done this for years against high-end Nvidia hardware. They somehow think that the "next" iteration is going to be that monster that they've all been wishing for, instead of looking realistically at the fact that 1) AMD is not at the forefront of tech like Nvidia, 2) costs to own a console are rising to the brink of unaffordability, 3) tech takes LOTS of time, and 4) consoles will never really "lead" in any advances in tech (i.e. it will most likely have already been iterated upon through some other means). The sooner we can all come to grips with how this stuff pans out by taking our last two generations (PS4/PS5) into account and seeing the outcome there, the better off we'll be at having a reasonable conversation about future hardware. As it is now, it's not even worth entertaining the conversation, as the wishlist is way out in left field like the majority of the Speculation thread.

Yeah, when you look at the entirety of the gaming market at any given time period, consoles were never at the front of the technological pack. If it wasn't the PC beating them to the punch, it was a microcomputer (Amiga, for example). If it wasn't a microcomputer, it was any number of highly advanced arcade machines of the time.

Even when we look at stuff like the SSD I/O for next-gen, this stuff has been done for a very long time in data center markets, and through technological advancements like data processing units (DPUs), which in principle do a lot of the same things the SSD I/O in the next-gen consoles will be doing, but applied to data management over the network.

PS5 Pro is already a 72CU chiplet design in the making, probably 5nm or 3nm. XSX? Doesn't seem clear to me as it's not following RDNA2's roadmap, but anything can be fabricated/customized with the right amount of money.

I definitely think chiplets will be involved in a PS5 Pro, but I don't think it'll be 72 CUs ;)

5nm seems like a lock for any mid-gen refreshes.

Very good thread and enjoyable read. I agree with most of your assumptions, though my suspicion is that that whole conversation about “continuous boost” is smoke & mirrors, not altogether different from some of the claims made about the Cell back in 2005.

The reason why we got a mid-gen refresh last time had to do with the fact, I believe, that both the X360 and the PS3 stuck around for almost eight years (2005-2013), for reasons not worth getting into in this particular thread, so the following generation came out underpowered (1.3/1.8 TFLOPs) just as display technology began to shift. Not sure if we’ll see the same movement from the major players this time around.

Personally, I think there's at least some substance to the "continuous boost" claims, though I'm honestly wondering why it would be worth it to keep the clocks at peak or near-peak the vast majority of the time, knowing that games generally won't need that much except for occasional heavy bursts of computation. Sounds like a waste of electricity for the times when max clocks aren't needed.

I agree that the lack of any major shift in display tech going mainstream limits the appeal of mid-gen refreshes on that note. No one should be counting on 8K reaching mass-market adoption by 2023 or 2024. Maybe curved displays? That's still niche even on mobile devices and it'd be insurmountably harder for large televisions to adopt, plus I'm not sure how that would even serve to fuel mid-gen refreshes.

The mid-gen refreshes, IMO, will probably be more like the kind of production-cost-reduction, power-reduction revisions we saw in older gens (Genesis Model 2, PSOne, PS2 Slim etc.), with some good performance gains but nothing massive. Efficiency of design will go up in a good number of areas, and maybe MSRP reductions will be possible by this point.

So I'm really itching to make time for the next part 'cuz I've got some neat ideas for mid-gen refreshes (even if none of it ends up happening xD).

No spec yet, but here are two links for possible early info.




To add my own input, DDR3 hit consumers in '07, with GDDR5 hitting consumers in '08. DDR4 came in 2014 with GDDR6 following in 2018. With LPDDR5 already in phones and Intel targeting it for next year, 2023-2024 seems like a very possible target for GDDR7, but for that to hold true we would need to see an official specification soon.

Thanks dude, much appreciated! Also, that timeline for GDDR7 sounds pretty plausible; I actually had some specs for mid-gen refreshes written up but didn't consider GDDR7, I might have to make some changes to account for that now xD
 
20-22TF won't be a big deal 3-4 years from now, and wouldn't it be cheaper for Sony to make 2x the same die that will be used in the PS5 Slim?

I dunno; personally I feel there are people internally at both MS and Sony who are maybe a bit surprised at how reserved some of the "mind-blown" impressions of this upcoming generation's visuals have been (not mine; I didn't have the mid-gen refreshes :) ). Diminishing returns are real, and super-powerful mid-gen refreshes would only hasten the shrinking of whatever jumps come afterwards.

Also if you look at the actual sales for the current mid-gen refreshes, they didn't really do any huge numbers. The vast majority were still purchasing the baseline consoles, and I'd say the mid-gen refreshes had more influence on which GPU PC gamers chose than the other way around. TBH MS and Sony were never in danger of "losing" any significant portion of their userbase to PC gaming, though the mid-gen refreshes did justify themselves in other ways besides raw power (4K-ready (kinda), HDR, VR etc.).

Honestly having a really hard time seeing why mid-gen refreshes this time around should prioritize more power/doubling raw power. There are other areas the budget could probably be better spent on. But with architectural and node process improvements, you'd "essentially" get something around 15-20 TF of RDNA2-equivalent performance on whatever hypothetical RDNA 4 / RDNA 5 GPU PS5 Pro and Series X-2 use (assuming IPC gains from node shrinks, packaging methods and architectural refinements keep arriving at a steady, notable clip gen-to-gen).
 
Hmm...so I'm gonna make a few changes to the OP. It's possible PS5 is using 7nm EUV after all, after seeing Rogame's die estimate (286 mm^2) on Twitter. The previous estimate was 320 mm^2, or just about. Also saw Kirby Louise coming to a similar conclusion.

So yeah, I think I might be inclined to suggest it's on EUV...this kinda does mean I have to adjust what I had in mind for the mid-gen refresh for it, though. Particularly in terms of a specific node gain metric related to either performance or power consumption (I'm thinking both next-gen midgen refreshes will be 5nm EUVL).

7nm+ has either a 10% power consumption reduction or a 15% performance increase, but you can't have both. I think, going by the Oberon timeline, Sony may've gone with the performance increase over the power consumption reduction. Otherwise PS5 could have GPU clocks closer to the 2.5 GHz we are seeing for one of AMD's upcoming cards.

I'll just TL;DR some new numbers below:

>PS5 GPU Power Consumption: 160 watts (could be lower to maybe 155 watts)

>PS5 GPU Die Area: ~ 105 mm^2 (could be lower, maybe 95 mm^2?)

>PS5 APU Die Area: ~ 286 mm^2 (originally went with 320 mm^2; N7+ offers upwards of a 20% density improvement, tho if 286 mm^2 is accurate then PS5's is closer to 11% - see the quick check after this list)

>PS5 System Power Consumption: 225 watts

>PS5 PSU: 350 watts
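Quick sanity check on that ~11% figure, assuming both die-size estimates are accurate (this is just the arithmetic, nothing more):

```python
# Implied density gain if the same design shrank from the old estimate to the new one.
old_estimate_mm2 = 320   # earlier PS5 APU die-size estimate
new_estimate_mm2 = 286   # Rogame's die-shot estimate

density_gain = old_estimate_mm2 / new_estimate_mm2 - 1
print(f"Implied density improvement: {density_gain:.1%}")   # ~11.9%, vs the ~20% N7+ headline
```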
 

Marlenus

Member
Ignored the OP. Don't have time to read an essay.

So, Holiday 2024 for refresh consoles. 3nm should be in volume production by then. That is about 3.5x denser than 7nm, and the current consoles are around 42-45M xtors per mm². That means we are looking at around 150-160M xtors per mm². Costs will likely go up so I expect smaller dies than Series X, but PS5 sized could be okay.

That means we have 46B transistors to play with.

The next thing to ask is what will be available. Zen 4 is slated for 2022, and Zen 5 is not on roadmaps yet but should be available sometime in 2023 if AMD keep their current cadence.

On the GPU side, RDNA 3 is likely 2022 as well, with RDNA 4 probably following in 2023.

So I would expect an 8c16t Zen 4 CPU, probably 4GHz+, and that will probably be 2B or so transistors like Zen 2 is. That leaves around 42B transistors for the GPU, which with linear scaling from RDNA would be 160 CUs and 256 ROPs. 256 ROPs is waaaaay overkill even for 8K, so expect that to get cut down to 128 ROPs, leaving room for a few more CUs, some of which can get cut for yield reasons. I expect around 3x the performance of the OG console.
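Making that arithmetic explicit (all inputs are the assumptions above plus Navi 10 as a rough yardstick; none of this is confirmed):

```python
# Back-of-envelope transistor budget for a hypothetical 3nm refresh console.
density_7nm = 44e6        # ~42-45M transistors/mm^2 on the current consoles (midpoint)
scaling = 3.5             # assumed 7nm -> 3nm density improvement
die_area_mm2 = 300        # roughly PS5-sized die

density_3nm = density_7nm * scaling                  # ~154M transistors/mm^2
budget = density_3nm * die_area_mm2                  # ~46B transistors
cpu_budget = 2e9                                     # assumed 8c16t Zen CPU block
gpu_budget = budget - cpu_budget                     # ~44B left for the GPU

# Linear scaling from Navi 10 (~40 CUs in ~10.3B transistors) as a rough reference
cus = 40 * gpu_budget / 10.3e9
print(f"{density_3nm / 1e6:.0f}M/mm^2, {budget / 1e9:.1f}B transistors, ~{cus:.0f} CUs")
```

That lands in the same ballpark as the ~160 CU figure; the exact number depends on how much of the budget you hand to non-CU logic (ROPs, cache, I/O), which is where the trimming above comes from.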

RAM I have no clue about; probably something like GDDR7 with around 3x the bandwidth of the current consoles. I expect a RAM increase too, to handle the larger textures, so maybe 24GB on a 384-bit bus with faster RAM.

SSD bandwidth will likely increase so you can transfer 8K textures in the same time as 4K textures on the new consoles. Capacity will probably increase as well, to say 4TB.

TLDR.
2024 release
3nm node.
4GHz+ 8c16t Zen 5.
RDNA 4 GPU about 3x faster than OG consoles.
24GB GDDR6X (7, whatever) RAM with 3x the bandwidth of the new consoles. 1.5TB/s.
4TB SSDs on PCIe 5 with double the throughput, maybe a larger increase for Series XL since Series X is already half that of PS5.
Should be doable in a similar TDP envelope to OG consoles.
 
Marlenus Nice insight. I like a lot of those numbers and specs, and I think we're on the same page regarding CPU/GPU gen timing. I personally think the mid-gen refreshes might launch in 2023 vs. 2024, or maybe one is 2023 and the other 2024.

You're a lot more bullish on specs in general for mid-gen refreshes than I am tho xD. I'm still thinking about what market factors drove the previous mid-gen refreshes, the fact those factors probably won't be at play next time around, and also MS and Sony probably wanting to reserve substantial jumps for the true 10th-gen systems later in the decade. There are probably other aspects they will focus on versus raw power increases or even huge (say 2x) bandwidth increases IMHO.

Reception to the tech of PS5 and Series X is much more favorable this time than it was for PS4 and XBO relative to the general level of the gaming market at the time, if you include PC. So I personally think there's less incentive for them to go ham with the mid-gen refreshes, and they'll probably use more of their budgets to push some specialized features and design or packaging technologies that can serve as a strong basis for the next-next gen consoles.

Again tho, those are some really interesting specs you have laid out.
 

Marlenus

Member
Nice insight. I like a lot of those numbers and specs, and I think we're on the same page regarding CPU/GPU gen timing. I personally think the mid-gen refreshes might launch in 2023 vs. 2024.

I can give some insight into why I think the above.

2024 for 3 reasons. The 1st is that the new consoles are relatively more powerful vs PC hardware than PS4 and Xbox One were, so the need for a refresh as quickly is diminished. 2nd is that I think the only way to sell them is as 8K machines, because I do not think '4K but prettier' will cut it, and it will take a few years for 8K to even be a high-end option. 3rd is that I think 3nm will be a requirement to hit the performance targets while keeping the die small enough to make cost per die reasonable with increasing wafer costs.

The reason I think 3x more GPU power is that in PC benchmarks, going from 1080p to 4K while hitting the same framerate with identical settings takes about 3x more GPU grunt. 4K to 8K being roughly 3x again seems about right.
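The raw pixel counts behind that rule of thumb (GPU cost tends to grow a bit slower than pixel count, which is why benchmarks land nearer 3x than 4x):

```python
# Pixel-count ratios per resolution step.
resolutions = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["4K"] / pixels["1080p"])   # 4.0x the pixels going 1080p -> 4K
print(pixels["8K"] / pixels["4K"])      # 4.0x again going 4K -> 8K
# Geometry, CPU-side and other resolution-independent work doesn't grow with
# pixel count, so measured cost per step usually lands closer to ~3x.
```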

8c16t because the current consoles are that config and I don't think MS or Sony want a mid-gen refresh with a substantially different CPU, for compatibility reasons. I would not be surprised if Sony stuck with Zen 2 even, for that reason, although MS might move up because their 1st-party games are already built for PCs as well, so they are going to be using higher-level APIs to support the wide range of hardware available.

More, faster RAM will be needed to process at the higher resolution. If AMD come up with alternatives to wider buses that are purely managed at the driver and hardware levels, then that may be an alternative to significantly faster RAM, but more RAM is a given.

4TB SSD I just plucked out of thin air. It will depend on NAND cost decreases and game sizes, so 2TB would not be a surprise. The transfer speed will increase. The PS5's raw speed does not saturate PCIe 4 and I can easily see 10GB/s raw speeds with PCIe 5 to handle the larger textures required for 8K. I also expect some enhancements to the compression/decompression hardware.
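Rough numbers for the raw-speed-plus-decompression point (the compression ratio is an illustrative assumption; real ratios vary a lot by content):

```python
# Effective streaming throughput = raw SSD speed x average compression ratio.
def effective_gbps(raw_gbps, compression_ratio):
    return raw_gbps * compression_ratio

print(effective_gbps(5.5, 1.6))    # ~8.8 GB/s, around PS5's typically quoted compressed figure
print(effective_gbps(10.0, 1.6))   # ~16 GB/s for the hypothetical PCIe 5 refresh drive above
```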
 
Marlenus I mean, I agree with a lot of your general thinking, but I guess my take is that the mid-gen refreshes in particular maybe won't push as hard into 8K gaming. Personally I feel that's what the next-next gen systems will be able to provide.

3nm is going to be very expensive even in 2024, and also less mature than either 5nm or 5nm EUVL. Regardless, with successive node shrinks, not all of them give you both a power reduction and a performance gain (i.e. you have to choose one or the other; maybe you can mix a percentage of both, but from TSMC's wording that doesn't sound to be the case), so that complicates things WRT node shrinks going forward.

I honestly don't even know if native resolution will be something strongly pushed for going forward. If techniques like DLSS are already good enough to provide near-native quality upscaled from a much lower internal resolution, then it's only a matter of time before the other problems with the approach (mainly, the fact it must be handled on a game-by-game basis) can be addressed at the hardware level and automated through the system rendering pipeline. I think that's when DLSS (or equivalents) will have their breakout moment, and from there they can be further refined.

In fact, the concept of DLSS-like techniques can probably be expanded to other aspects of the rendering pipeline, such as training models to transform lower-polygon models and meshes into higher-density ones (possibly using drawings of the characters as the reference point; you can already essentially turn drawings into 3D models in programs like Blender), or system-level LOD "generation" management (the devs don't create the LODs; the system does, depending on the distance and vantage point of objects relative to the player and on whether generating the LOD will help maintain a certain performance target, i.e. 60 FPS), etc. With the complexity of game scope increasing, more powerful hardware only means it takes longer to develop games that leverage the power, which means fewer AAA releases. It also means fewer AA and indie games can easily tap into that hardware, because the scale of the workloads isn't being reduced at the same clip as the gains in hardware performance.
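A minimal sketch of what that system-managed LOD idea could look like, purely to illustrate the concept; every name and threshold here is made up:

```python
# Hypothetical runtime LOD selection: the system (not the developer) picks a
# detail level per object from its distance and the current frame-time budget.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    distance: float      # metres from the camera
    base_triangles: int  # full-detail mesh size

def pick_lod(obj: SceneObject, frame_ms: float, target_ms: float = 16.7) -> int:
    """Return a LOD level (0 = full detail), tightened when over the 60 FPS budget."""
    lod = 0
    for threshold in (10.0, 30.0, 80.0, 200.0):   # distance bands, in metres
        if obj.distance > threshold:
            lod += 1
    if frame_ms > target_ms:                       # over budget: drop one more level
        lod += 1
    return min(lod, 4)

scene = [SceneObject("hero", 5.0, 2_000_000), SceneObject("building", 120.0, 5_000_000)]
for obj in scene:
    print(obj.name, "-> LOD", pick_lod(obj, frame_ms=18.2))
```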

That's why I personally think the mid-gen refreshes won't aim for pure power increases; they'll probably serve as a proving ground for the types of specialized approaches I mentioned above, which can then be perfected with the next-next gen consoles. Now, I think native 4K 60 FPS will be something the mid-gen refreshes aim for; I'm just not sure they'll achieve it simply by multiplying the raw processing power (tho there is going to be some type of raw performance increase, for sure).
 

Marlenus

Member
Marlenus I mean, I agree with a lot of your general thinking, but I guess my take is that the mid-gen refreshes in particular maybe won't push as hard into 8K gaming. Personally I feel that's what the next-next gen systems will be able to provide.

I am not that convinced that there will be mid-gen refreshes, to be honest. I don't see 8K picking up enough steam in the next few years to make it viable, and I don't think '4K but prettier' will sell well enough to make a refresh worthwhile. These new consoles are powerful, so that does diminish the need.

I guess if ML-based upscaling becomes far more common you could have a less drastic gain, with a bit more emphasis on running the upscaling algorithm to hit '8K'.

If they can find a business case for it then great but I think they might as well go all in like One X and not dip their toes in like PS4 Pro. If they do go all in I think it needs a spec similar to what I propose to make it worthwhile.
 
I am not that convinced that there will be mid-gen refreshes, to be honest. I don't see 8K picking up enough steam in the next few years to make it viable, and I don't think '4K but prettier' will sell well enough to make a refresh worthwhile. These new consoles are powerful, so that does diminish the need.

I guess if ML-based upscaling becomes far more common you could have a less drastic gain, with a bit more emphasis on running the upscaling algorithm to hit '8K'.

If they can find a business case for it then great but I think they might as well go all in like One X and not dip their toes in like PS4 Pro. If they do go all in I think it needs a spec similar to what I propose to make it worthwhile.

You're right, there's a very real possibility we don't get mid-gen refreshes. The main business case I can see for them is power consumption reduction and size reduction, and putting some early versions of specialized approaches (GPU chiplets, more advanced ML and AI tech, possibly testbeds for some scratchpad of persistent memory to aid in system performance etc.) in the mass market. Things that can be analyzed and then perfected with the next-next gen systems.

So the marketing angle would basically be something like "Smaller! Sleeker! Eats Less Energy!! Cheaper!!!", because you can't really market that other stuff as easily. And while they may be capable of genuine native 4K (the mid-gen refreshes...maybe), that's not enough of a marketing angle, either.

They'll basically be a lot like what the PS2 Slim, PS One, Genesis Mark 2 etc. were for their time, just earlier in the console gen to align with when the mid-gen refreshes would hit.
 