Could a custom Navi 31 (80 CUs with an identical configuration to Navi 21; possibly RDNA3) be the GPU of a hypothetical PS5 Pro coming out in Nov 2022? Just wondering if this would be feasible in terms of cost and heat dissipation two years from now.
Maybe. But I'm kinda thinking both Sony and MS aren't going to do "Pro"-style mid-gen refreshes like the PS4 Pro and One X. For one, they may want a bigger technological impact for PS6/Series X-2, and mid-gen Pro refreshes temper that.
Secondly, these GPUs are probably on 7nm EUV, and I'm guessing the consoles are on enhanced 7nm DUV (or at least PS5 is). Plus, those 2.2 GHz and 2.5 GHz figures are Boost Clock settings, and power doesn't scale linearly with frequency, as we already know for PS5 from Cerny (a 2% frequency reduction yields a 10% power reduction, i.e. a 5:1 ratio). I'd shiver to think how much power these Navi GPUs will consume at their Boost clocks, even with the power-consumption reduction (which I'd peg at 30%).
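To put a rough number on that scaling point, here's a quick back-of-envelope sketch. It fits a simple power-law model P ∝ f^n to Cerny's single data point (2% less frequency → 10% less power) and then extrapolates what a clock bump would cost; this is a crude rule of thumb, not a real voltage/frequency model:

```python
import math

# Cerny's PS5 data point: a 2% frequency reduction
# yields roughly a 10% power reduction.
freq_ratio = 0.98
power_ratio = 0.90

# Fit a power-law model P ~ f^n to that single data point.
n = math.log(power_ratio) / math.log(freq_ratio)
print(f"implied exponent n ~ {n:.2f}")  # ~5.2, nowhere near linear (n = 1)

# Crudely extrapolate what pushing clocks 10% higher would cost in power.
uplift = 1.10 ** n
print(f"+10% frequency -> ~{(uplift - 1) * 100:.0f}% more power")
```

So under this toy model, even a modest clock increase blows up the power budget, which is why those 2.2-2.5 GHz Boost Clocks look so expensive in a console TDP envelope.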
So basically, it might be unrealistic to expect a PS5 Pro to double the base model's TF even in 2023 or 2024, unless AMD makes some insane advances in 3D die packaging (an area where they seem to be behind Intel). Even then, it'd have to be on 5nm EUV, with absolutely aggressive performance gains via state-of-the-art 3D stacking methods like PoP (package-on-package), etc., to stay within a decent console TDP budget (somewhere around 200 watts for the whole package).
I honestly don't see that happening by 2023 or 2024 while staying affordable; Sony and MS will have to think smarter for the mid-gen refreshes this time around. They can't count on basic Moore's Law and node shrinks alone.
EDIT: Oh, you said two years from now? 2022? No way, forget it. Not possible anywhere near that soon.
10GB is "enough" for gaming in 2020.
10GB will be a noticeable limitation in some/many cross-gen games in 2021.
10GB of VRAM will be a major limitation for the 3080 in 2022, while the rest of the GPU is still extremely capable.
That's my prediction.
TBF, if a lot of games didn't resort to using VRAM as a cache (or reserving it as a future cache), that'd free up a lot of that 10 GB for actually pertinent graphics data, rather than storing assets that won't be needed for many seconds as cache.
That's one of the really good things about PS5, Series X and Series S: they're looking to eliminate the need for data caching in VRAM by resolving bottlenecks in the I/O subsystems and structures. And once this becomes more widespread on PC (thanks to things like RTX IO and DirectStorage), you'll start seeing games outside the immediate console ecosystems take advantage of it as well.