
Let's Design The 10th-Gen Consoles, Part 1: SONY PLAYSTATION 6, PART 1

When do YOU think PS6 will release?

  • 2025

    Votes: 6 6.4%
  • 2026

    Votes: 20 21.3%
  • 2027

    Votes: 47 50.0%
  • 2028 (dear lord no!)

    Votes: 18 19.1%
  • There won't be a PS6; this is the last gen and Sony's going out of business anyway

    Votes: 3 3.2%

  • Total voters
    94
  • Poll closed.
xbox-vs-ps5.jpg


(If you're interested, please give Part 2 a read, although a lot of that is actually outdated so it's more for the curious if anything.)

After a long hiatus, it feels like time to continue this series of thinking out what the 10th-gen systems might bring.

Before that though, I'll briefly share some thoughts on mid-gen refreshes, which I posted on B3D Sunday. I did actually post some other mid-gen refresh stuff here (at least for PS5 Pro) a while ago, but in light of new information and discussions I've revised pretty much all of that. This isn't so much a hard detailing of possible mid-gen specs as it is a look at what products such refreshes could comprise, expanding the term to include peripherals. I've also taken into consideration the business strategies and trajectories Sony and Microsoft seem to be heading towards:

[SONY]

>PS5 Slim: 5nm, ~140 watt system TDP (30% savings on 5nm, better PPW GDDR6 chips, possibly a smaller array of 3x 4-channel NAND modules at 384 GB capacity each, chip-packaging changes and chiplet setup, etc.). RDNA 4-based (16-month intervals between RDNA gens would mean Jan. 2022 for RDNA 3, July 2023 for RDNA 4), 1 TB SSD storage, same SSD I/O throughput (with possibly slightly better compression due to API maturity and algorithms), same amount of GDDR-based memory and bandwidth (so, sticking with GDDR6), $299 (Digital only). November 2023 release.

>PS5 Enhanced: 5nm, ~150 watt system TDP (factoring in disc drive), RDNA 4-based, 6x 384 GB NAND modules (~2 TB SSD), same GDDR6 memory capacity but faster chips (16 Gbps vs. 14 Gbps) for 512 GB/s bandwidth, improved SSD I/O bandwidth (~8 GB/s raw, up to 34 GB/s at the maximum 4.25:1 compression ratio; see the quick check after this list), slightly better GPU performance (up to 11.81 TF due to 5nm; this would probably increase total system TDP to about 155 watts), Zen 2-based CPU, disc drive, $399. November 2023 release.

>PS5G (Fold): 5nm, ~25 watt - 35 watt system TDP, RDNA 4-based (18 CU chiplet block), 8 GB GDDR6 (8x 1 GB 14 Gbps chips downclocked to 10 Gbps, 3D-stacked PoP (Package-On-Package), 320 GB/s bandwidth), 256 GB SSD storage (2x 2-channel 128 GB NAND modules), 916.6 MB/s SSD I/O bandwidth (compressed bandwidth up to 3.895 GB/s), Zen 2-based CPU, 7" OLED screen, streaming-oriented for PS5 and PS4 Pro titles (native play of PS4 games), $299 (Digital only). November 2023 release.

>PSVR2: Wireless connectivity with PS5 systems, backwards-compatible with PS4 (may require a wired connection), on-board processing hardware for task offloading from the base PS5, Zen 2-based CPU, 4 GB GDDR6 as 4x 1 GB modules in a 3D-stacked PoP setup (14 Gbps chips downclocked to 10 Gbps, 160 GB/s bandwidth), 128 GB onboard SSD storage (1x 2-channel 128 GB NAND module, 458.3 MB/s raw bandwidth, up to 1.9479 GB/s compressed bandwidth), AMOLED displays, $399, November 2022 release.
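For anyone checking the I/O math in the list above: compressed throughput is just raw bandwidth times the assumed compression ceiling. A quick sanity check (all SKU names and numbers are my own speculation, nothing official):

Code:
def compressed_throughput(raw_gb_s: float, ratio: float) -> float:
    """Effective read speed if data hits the assumed compression ceiling."""
    return raw_gb_s * ratio

print(compressed_throughput(8.0, 4.25))     # PS5 Enhanced: 34.0 GB/s
print(compressed_throughput(0.9166, 4.25))  # PS5G: ~3.90 GB/s (the ~3.895 quoted above)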

[MICROSOFT]

>SERIES S Lite: 5nm, RDNA 3-based (possibly with some RDNA 4 features mixed in), possibly some CDNA 2-based features mixed in, 10 GB GDDR6, 280 GB/s bandwidth (224 GB/s for GPU, 56 GB/s for CPU/audio), 1 TB SSD, same raw SSD I/O bandwidth (2.4 GB/s) but increased compression (3.5:1 ratio, up to 8.4 GB/s at maximum compression), $199 (Digital only), November 2022 release

>SERIES X-2: 5nm EUV, RDNA 4-based, some CDNA 2-based features mixed in, 20 GB GDDR6 (10x 2 GB chips), 16 Gbps modules (640 GB/s bandwidth), improved SSD I/O bandwidth (~8 GB/s raw, 3.5:1 compression, up to 28 GB/s at maximum compression), lower system TDP (~160 watts - 170 watts), 2 TB SSD storage, Zen 2-based CPU, disc drive, improved GPU performance (~14 TF), $449. November 2023 release.

>SERIES.AIR (Xcloud streaming box, think Apple TV-esque): 5nm, RDNA 3-based, 8 GB GDDR6 (4x 2 GB chips), 14 Gbps modules downclocked to 10 Gbps (160 GB/s bandwidth; see the bandwidth check after this list), 256 GB SSD, same SSD I/O as base Series S and Series X (2.4 GB/s) but improved compression bandwidth (up to 8.4 GB/s at maximum compression), $99 (Digital Only), November 2021 release

>SERIES.VIEW (Wireless display module screen that can be added to the Series S Lite and Series.Air (and to a lesser extent Series X-2) for a makeshift portable device, or used as an AR extension of VR): Zen 2-based CPU (4-core variant, lower clocks), 2 GB GDDR6 as 2x 1 GB modules (14 Gbps chips downclocked to 8 Gbps, 64 GB/s bandwidth), 8" OLED display, USB-C port (the included Male/Male USB-C double-point module can be used to wire the Series.View to the Series S Lite), $199, Spring/early Summer 2022 release. Also compatible with PC.

>SERIES.VIRTUA (VR helmet developed in tandem with Samsung, for Series system devices as well as PC): Based on the Samsung HMD Odyssey+ headset but with some pared-down specs for more mid-range performance capabilities. $399, Spring/Summer 2022 release.
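The memory bandwidth figures in these lists all fall out of the same formula: chip count x the 32-bit GDDR6 interface per chip x per-pin rate. A quick check (the SKUs are, again, my own speculation):

Code:
def gddr6_bandwidth(chips: int, gbps_per_pin: float) -> float:
    """Aggregate GB/s for GDDR6 devices, each on a 32-bit interface."""
    return chips * 32 * gbps_per_pin / 8

print(gddr6_bandwidth(10, 16))  # Series X-2: 640.0 GB/s
print(gddr6_bandwidth(4, 10))   # Series.Air (downclocked): 160.0 GB/s
print(gddr6_bandwidth(2, 8))    # Series.View: 64.0 GB/s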

So that's what I'm thinking Sony and Microsoft will do in terms of mid-gen refreshes and major peripheral upgrades, up to early 2024. From that point on it's really up in the air; it's probably easiest to see the two of them doing bundles for various mixes of these refreshes and peripherals. For example, Sony could do a package bundle in late 2021 and early 2022 with the base PS5 and PSVR to drive out remaining stock of the first-generation PSVR and the original PS5 models, making way for the PSVR refresh in 2022 (PSVR2) and the PS5 Slim & Enhanced refreshes in 2023.

Meanwhile, I think Microsoft will try SKU bundles like Series.Air & Series.View around late 2023 and into 2024, or even later SKU bundles like Series X-2 & Series.Virtua in late 2024 into early 2025. I think that's what Sony & Microsoft will do going into the tail-end of 9th gen and leading into 10th-gen...

----------

That basically sums up my mid-gen refresh speculation; from here on I'll focus on 10th-gen hardware, and like I said above, I'm starting with the PlayStation 6 and breaking it down into parts. The first part will be almost completely about the GPU, and I try to give some explanation for certain decisions below. I've settled on these guesses after rewriting possible specifications over a dozen times, changing MANY things along the way.

These are, after all, just my own guesses/speculation, but I tried to stay as realistic as possible with regard to market realities, trends, and technological developments (plus likely business strategies). So let's just jump right in there...

ps6.jpg


playstation-6-ps5-pro.jpg


Dunno, these are just some designs I was able to find. Anyone got links to some better PS6 render concepts?

[PLAYSTATION 6]

>YEAR: 2026 or 2027.​
>2026 likely, but 2027 more likely. Would say 45/55 split between the two.​
>Gives PS5 hardware and software more time to "bake" an ecosystem market without contending with PS6 messaging/marketing​
>Allows for cheaper securing of wafer production and memory (volatile, NAND) vs. an earlier launch
>Gives 1P studios more time to polish games intended for launch of PS6​
>Sony wants to shorten 1P dev times not to bring out hardware faster (returning console gen length to 5 years), but to release more 1P titles in a given (by modern notion) standard console cycle (6-7 years). That lets them drive more profits in a 6-7 year period, which helps offset R&D/production costs of 10th-gen hardware, provided those costs stay roughly similar to what they were for 9th-gen (PS5), or increase by only 25% - 30% at most.

>NODE: N3P​
>Only way for them to get the performance they need at a reasonable power budget​
>Will complement contemporary RDNA architecture designs/advancements very well
>Can have wafer costs managed through scaled offsetting of budget in other areas (die size, memory, etc.)​

[GPU]

>ARCHITECTURE: RDNA 7-based​
>Assuming 15-month intervals between RDNA refreshes, RDNA 7 would be completed by February 2027. RDNA 8 would be completed and released by May 2028. A PS6 in either 2026 or 2027 could be predominantly RDNA 7-based, with some bits maybe from RDNA 8 (or influencing RDNA 8) if the release of PS6 is 2027 rather than 2026.​
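That cadence is just date arithmetic from RDNA 2's November 2020 release; projecting my assumed 15-month interval forward:

Code:
# Projecting the assumed 15-month RDNA cadence forward from RDNA 2 (Nov 2020).
year, month = 2020, 11
for gen in range(3, 9):
    month += 15
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    print(f"RDNA {gen}: {year}-{month:02d}")
# RDNA 3: 2022-02 ... RDNA 7: 2027-02, RDNA 8: 2028-05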

>SHADER ARRAYS: 2​
>SHADER ENGINES (PER SA): 2​
>CUs: 40​
>72 CUs would double PS5, but it would also at least double the silicon budget, AND it would be on 3nm EUVL(+), which is more expensive than 7nm in its own right. The only way to offset that would be to either gimp some other area (storage, memory, CPU, etc.) or go with 5nm EUVL, which curbs some of the performance capability due to having less room in the power consumption budget.
>CUs will only get bigger with more silicon packed into them. PS5 CUs are 62% larger than PS4 CUs for example, despite being on a smaller node, aka more features are built into the individual CUs relatively speaking (such as RT cores). Any features that scale better with integration in the CU will be able to bump up the CU size compared to PS5, even if the overall CU count remains the same or only slightly larger.​
>PS6 CUs could be between 50% - 60% larger than PS5 CUs​
>A chiplet design can allow for more active CUs without the need to disable some out of yield concerns
>Would allow for similar GPU programming approaches in line with PS5​
>Theoretically easier to saturate with work​

>SHADER CORES (PER CU): 128​
>SHADER CORES (TOTAL): 5,120​
>Going with a smaller GPU (40 CUs) would require something else to be increased in order to provide suitable performance gains. Doubling the amount of Shader Cores per CU is one of the ways to do this, though 128 could be closer to a default for later RDNA designs by this point.​

>ROPs: 128 (4x 32-unit RBs)​
>Doubling of ROPs on the GPU in order to complement the increase in per-CU shader cores

>TMUs (per CU): 8​
>Assuming a 16:1 ratio between SCs and TMUs per CU is maintained, doubling the SCs from 64 to 128 would also 2x the TMUs from 4 to 8​

>TMUs (TOTAL): 320​
>MAXIMUM WORKLOAD THREADS: 40,960 (32 SIMD32 waves * 32 threads * 40 CUs)​
>MAXIMUM GPU CLOCK: 3362.236 MHz​
>PRIMITIVES (TRIANGLES) PER CLOCK (IN/OUT): Up to 8 PPC IN, up to 6 PPC OUT (current RDNA supports up to 4 PPC OUT)​
>PRIMITIVES (TRIANGLES) PER SECOND (IN/OUT): Up to 26.8978 billion PPS IN, up to 20.17335 billion PPS OUT​
>GIGAPIXELS PER SECOND: 430.366208 G/pixels per second​
>INSTRUCTIONS PER CLOCK: 2 IPC​
>INSTRUCTIONS PER SECOND: 6.724472 billion IPS​
>RAY INTERSECTIONS PER SECOND: 1075.915 G/rays per second (1.075915 T/rays per second) (3362.236 MHz * 40 CUs * 8 TMUs)
* RT intersection calculations might be off; I figured RT calculations leverage the TMUs in each CU but wasn't sure if that's 100% the case. (The sketch after this section recomputes these figures.)

>THEORETICAL FLOATING POINT OPERATIONS PER SECOND: 34.4 TF (40 CUs * 128 SCs * 2 IPC * 3362.236 MHz)​
>CACHES:
>L0$: 256 KB (per CU), 10.24 MB (total)​
>L1$: 1 MB (per Dual CU), 20 MB (total)​
>L2$: 24 MB​
>L3$: 192 MB (Infinity Cache)​
*SRAM bit density is ~0.027 µm² per bit on 7nm, meaning 128 MB would be ~166 mm^2 on 7nm/7nm DUV. An 87% area reduction on 3nm EUV would shrink this to about 22 mm^2. A 1.5x SRAM cell density gain on the node could bring the budget to 192 MB.

>TOTAL: 246.24 MB​
>TDP: 160 watts​
>Die Area: ~100 mm^2 - 120 mm^2 (factoring in larger CUs, additional integrated silicon, larger caches, revamped frontends and backends, etc.)​
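Since derived numbers like these are easy to fumble, here's a small script recomputing the headline figures from the assumed configuration above (every input is my guess, not a leak):

Code:
CUS, SC_PER_CU, IPC, CLOCK_GHZ = 40, 128, 2, 3.362236
ROPS, TMU_PER_CU = 128, 8

tflops    = CUS * SC_PER_CU * IPC * CLOCK_GHZ / 1000   # 34.43 TF
gpix_s    = ROPS * CLOCK_GHZ                           # 430.37 Gpixels/s
prims_in  = 8 * CLOCK_GHZ                              # 26.90 Gprims/s in
prims_out = 6 * CLOCK_GHZ                              # 20.17 Gprims/s out
grays_s   = CUS * TMU_PER_CU * CLOCK_GHZ               # 1075.92 Grays/s (~1.08 T)
threads   = 32 * 32 * CUS                              # 40,960 threads in flight
cache_mb  = 0.256 * CUS + 1.0 * (CUS // 2) + 24 + 192  # 246.24 MB total cache
print(tflops, gpix_s, prims_in, prims_out, grays_s, threads, cache_mb)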

[STATE MODES]
>There is an opportunity with future AMD hardware to figure out a way for relatively wide GPUs to dynamically scale saturation workloads down to a smaller cluster of CUs while proportionately increasing the clock frequency of those active components, while the inactive components/CUs idle at a dramatically lower clock (sub-100 MHz) until they are needed for more work.
>This assumes that AMD can continue to scale GPU clock frequencies higher (4 GHz - 5 GHz) with future RDNA designs, provided they can make that work with silicon designs on smaller process nodes. Since any given cluster of the GPU would need to be able to clock this high, the entire GPU design must be able to clock in this range, potentially across the entire chip, for this to be feasible.
>Power delivery designs may also have to be reworked; chiplet approach will help a lot here.​
>This approach would be more suitable for products that need to squeeze out and scale performance for various workloads, support variable frequency (this is, essentially, variable frequency within portions of the GPU itself), and have to stay within a fixed power budget...such as a games console. It might therefore be less necessary (though potentially beneficial) for PC GPUs, as it gives a different means of scaling clocks with workloads while allowing more granular control of the GPU's power consumption.
>AMD's implementation would be based on Shader Array counts, so the loads would be adjusted per Shader Array. On chiplet-based designs, each chiplet would theoretically be its own Shader Array, so this is essentially a way of scaling power delivery between the multiple chiplets dynamically.​
>This could be used in tandem with the already-established power-budget sharing between CPU and GPU seen in designs like PS5; here it would let the GPU keep this feature engaged for games that have lighter volume workloads but intense iteration workloads that could stress a given peak frequency. However, that use should be minimal; its fuller use would be the traditional one, for full GPU volume workloads.
>Another benefit of State Mode is that when targeting power delivery to a smaller cluster of the GPU hardware and increasing the clock, clock-bound processes (pixel fillrate, instructions per second, primitives per second) see large gains, generally inverse to the decrease in active CU count. However, some other things such as L0$ and L1$ capacity will shrink, even if actual bandwidths scale better than linearly relative to the total active silicon. (The sketch below compares the two modes.)
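As a toy model of that trade-off (clocks and unit counts are my guesses, matching the implementation list below):

Code:
def metrics(cus, rops, clock_ghz, sc_per_cu=128, tmu_per_cu=8, ipc=2):
    return {
        "TF":       cus * sc_per_cu * ipc * clock_ghz / 1000,
        "Gpix/s":   rops * clock_ghz,          # clock-bound: scales with frequency
        "Grays/s":  cus * tmu_per_cu * clock_ghz,
        "Gprims/s": 8 * clock_ghz,             # clock-bound front-end
    }

full  = metrics(40, 128, 3.362236)  # every CU active at the base max clock
state = metrics(20, 64,  4.113449)  # half the CUs, higher (non-linear) clock
for k in full:
    print(f"{k:9s} full={full[k]:8.2f}  state={state[k]:8.2f}")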
[PS6 - STATE MODE IMPLEMENTATION]
>SHADER ARRAYS: 1​
>SHADER ENGINES (PER SA): 2​
>CUs: 20​
>SHADER CORES (PER CU): 128​
>SHADER CORES (TOTAL): 2,560​
>ROPs: 128​
>Future RDNA chiplet designs will probably keep the back-end in its own block. However, for design reasons ROP allocation would likely scale evenly per chiplet cluster, so each chiplet (or SE, if each SE is essentially a chiplet) would have its own assigned group of ROPs. This equals 2x 64 ROPs for PS6.
>TMUs (PER CU): 8​
>TMUs (TOTAL): 160​
>MAXIMUM WORKLOAD THREADS: 20,480​
>MAXIMUM GPU CLOCK: 4113.449 MHz (shaved off some clock from earlier calcs to account for non-linear clock scaling with power scaling)​
>PRIMITIVES (TRIANGLES) PER CLOCK (IN/OUT): Up to 8 PPC (IN), up to 6 PPC (OUT)​
>PRIMITIVES PER SECOND (IN/OUT): Up to 32.9 billion PPS (IN), up to 24.675 billion PPS (OUT)​
>GIGAPIXELS PER SECOND: Up to 263.26 G/pixels per second (4113.449 MHz * 64 ROPs)​
>INSTRUCTIONS PER CLOCK: 2
>INSTRUCTIONS PER SECOND: 8.226898 billion IPS
>RAY INTERSECTIONS PER SECOND: 658.151 G/rays per second (4113.449 MHz * 20 CUs * 8 TMUs)​
>THEORETICAL FLOATING POINT OPERATIONS PER SECOND: 21.06 TF
>CACHES:
>L0$: 256 KB (per CU), 5.12 MB (total)​
>L1$: 1 MB (per Dual CU), 10 MB (total)​
>L2$: 24 MB​
**Unified cache shared with both chiplets​
>L3$: 192 MB​
**Unified cache shared with both chiplets​
>TOTAL: 231.12 MB

----------------​
(for some dumb reason I can't outdent this section. Oh well)​
That should be everything for a hypothetical PS6 GPU; small things like codec support, display output support etc. wouldn't really be that hard or crazy to take a crack at, and I'm not particularly interested in that. However, I AM interested in getting to the CPU, audio, memory, storage etc. and also to see what some of you have in terms of ideas for a PS6 GPU design, hypothetically speaking.​
Sound off below if you'd like and no, it's never too early to start thinking about next-gen. You think Mark Cerny and Jason Ronald aren't already brainstorming what the next round of hardware could bring? I bet you they are ;)...​
 

OrtizTwelve

Member
History and logic dictate that we will see a “PS6” sometime in 2025 / 2026.

As for XBOX, they have made it clear they're somewhat done with generations, and I would expect hardware upgrades every 3 or so years, akin to PC, with games always forward and backward compatible. It's just “XBOX”.
 
by the way did we forget about bits?

The NES was 8-bit, Genesis and SNES were 16-bit, Saturn and PlayStation 1 were 32-bit, Nintendo 64 was 64-bit, Dreamcast was... wait, how many bits are the following consoles:

Sega Dreamcast
Nintendo GameCube, PlayStation 2, original Xbox
Xbox 360, PlayStation 3
PlayStation 4, PlayStation 4 Pro, Xbox One, Xbox One X
PlayStation 5, Xbox Series X and S
 
Whoever convinces Apple to make their next SoC wins the generation. ARM is just so much more efficient now.

I've heard a little bit about their discrete GPU plans, they sound pretty good. But it's Apple; they'll price it way out of bounds and accessories will cost an arm and two legs. I don't think they'll make much noise unless they fix those parts of their corporate brand and style.

But klee said Titi floppies won't matter next next gen ...


zlOQfuC.jpg

I don't know everything Klee's said in that regard but TFs will still have some role as indication of some type of computational performance next gen, it'll never be 100% irrelevant.

However there's a reason I didn't focus all that much on TFs in this speculation and why it's kinda buried in-between a lot of frankly more important data points. It's a potentially nice number tho for people who care only about that.
 

Radical_3d

Member
I've heard a little bit about their discrete GPU plans, they sound pretty good. But it's Apple; they'll price it way out of bounds and accessories will cost an arm and two legs. I don't think they'll make much noise unless they fix those parts of their corporate brand and style.
The integrated GPU already gives the previous generation of discrete graphics mounted on Macs a run for their money. The rumour out there is that they'll be launching discrete GPUs with the successor of the M1X (next year's chip) two years from now. But it's the evolution of the power in their SoCs that should end up in consoles. Consoles no longer have a discrete GPU anyways. If Qualcomm can catch up a little it may be a worthy alternative. But right now x86 is ridiculous. It has been wiped out in every measure possible by a large margin.
 
The integrated GPU already gives the previous generation of discrete graphics mounted on Macs a run for their money. The rumour out there is that they'll be launching discrete GPUs with the successor of the M1X (next year's chip) two years from now. But it's the evolution of the power in their SoCs that should end up in consoles. Consoles no longer have a discrete GPU anyways. If Qualcomm can catch up a little it may be a worthy alternative. But right now x86 is ridiculous. It has been wiped out in every measure possible by a large margin.

So I guess you're figuring that wherever their SoC performance lands by the time 10th-gen designs are spec'd out and finalized is roughly what those systems should aim for?

That's one possible way to look at it, though I dunno how much Apple will tune their graphics designs for gaming performance, so there wouldn't be a 1:1 way of translating the performance of Apple's SoCs in that timeframe to console-oriented APUs.

In general performance though, I think we could see multiples of performance gains for 10th gen over 9th gen even if the raw numbers only end up a 3x increase or so, due to architectural design advancements. Say RDNA 2 has 25% IPC gains over RDNA 1 due to architecture refinements (with some of that also dependent on node shrinks); a hypothetical RDNA 7 GPU design serving as the basis of 10th-gen systems could have up to a 2.25x (or 125%) multiplier over an RDNA 2 design if gen-to-gen gains stay consistent at 25%.

So a 30 TF RDNA 7 design could perhaps have a performance capability equivalent to a 67.5 TF RDNA 2 design, but with all (or most) of RDNA 7's modern features and capabilities/refinements.
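One caveat on my own math there: the 2.25x treats the 25% per-gen gains as additive. If each generation instead compounded on the last, five steps from RDNA 2 to RDNA 7 would land closer to 3x:

Code:
gens = 5                         # RDNA 2 -> RDNA 7
linear     = 1 + 0.25 * gens     # 2.25x, the multiplier I used above
compounded = 1.25 ** gens        # ~3.05x if each gen builds on the last
print(linear, compounded)            # 2.25 3.0517578125
print(30 * linear, 30 * compounded)  # a 30 TF RDNA 7 in "RDNA 2 TF": 67.5 vs ~91.6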
 

Radical_3d

Member
So I guess you're figuring that wherever their SoC performance lands by the time 10th-gen designs are spec'd out and finalized is roughly what those systems should aim for? That's one possible way to look at it, though I dunno how much Apple will tune their graphics designs for gaming performance, so there wouldn't be a 1:1 way of translating the performance of Apple's SoCs in that timeframe to console-oriented APUs.
I certainly expect Apple to optimise their designs based on the workload of their systems, but don't forget iOS devices are basically the best-selling consoles out there. And they've been smashing the competition constantly. An ARM-based console design should have less space dedicated to the CPU, since consoles don't need to beat an i9, and a bigger integrated GPU than PC SoCs (like AMD consoles have compared to AMD PC SoCs). The 2018 iPad Pro was an XOne and the M1 is a slightly faster PS4. With sufficient silicon budget and the progression Apple makes on a yearly basis, a next generation of consoles powered by this would be way superior to what other suppliers could offer IMO.

But none of the three console manufacturers is going to convince Apple to cannibalise their iOS offering and their Apple Arcade service. I guess we should settle for Qualcomm or nVidia.
 

AJUMP23

Member
Right now I really care more about what Nintendo does next. They are the unknown that will reveal sooner than the others.
 
>SERIES X-2: 5nm EUV, RDNA 4-based, some CDNA 2-based features mixed in, 20 GB GDDR6 (10x 2 GB chips), 16 Gbps modules (640 GB/s bandwidth), improved SSD I/O bandwidth (~8 GB/s raw, 3.5:1 compression, up to 28 GB/s at maximum compression), lower system TDP (~160 watts - 170 watts), 2 TB SSD storage, Zen 2-based CPU, disc drive, improved GPU performance (~14 TF), $449. November 2023 release.
I want this.
 
Bring back the toploader design.

What's the point when they probably won't even have disc drives?

By the way, given that machine learning gives Cyberpunk a boost from 15 FPS to 50 FPS, I think it's safe to say that this is the future of graphics optimisation.

Absolutely; after 4K I honestly don't think native resolution will matter much anymore, as image upscaling tech and algorithms should do very well, and by 10th-gen "only" 8K sets will probably be standard and saturate the market.

Thinking I should add another section to the OP to cover some GPU features, tho things like machine learning and image upscaling silicon should be expected. One thing not mentioned too much in this kind of stuff (but which should be) is data models for AI programming, stuff built off of GPT and the like. If sophisticated-enough AI data models can be built and provided to automate large parts of the programming and asset-creation processes, that should help save A TON on both development times and production costs.

However, it does bring up a potential ethical challenge, because it could lead to a big reduction of the human workforce in the gaming space in favor of AI data models, with the only people still around being super-specialists who can train the models on specific routines, set up/configure them, and curate the results with human oversight. I think laws would have to be put in place to ensure enough actual people are still present so that 90% of positions don't get replaced by literal (AI programming) bots.

It's also not quite something the actual consoles would need to worry about incorporating; this would fall more into the SDK/devkit/API realm of things.

Right now I really care more about what Nintendo does next. They are the unknown that will reveal sooner than the others.

Nintendo will be interesting. And that's mainly because of Nvidia. Switch 2 will definitely feature DLSS 2.0 or 3.0, but the real question is will DLSS finally reach a point by then where it's a standardized feature devs don't have to essentially reprogram their software around?

If it eventually becomes a feature that can be enabled and automates itself through use of the silicon as the game needs it (similar to other relatively standard GPU features old and new, at least to the level of ease I assume things like VRS are), then that's when DLSS becomes a true gamechanger. Right now the potential is there but the amount of game-by-game work needed to adopt it kind of holds it back.

Though, this is only me making some assumptions, i.e. that it's a bit more laborious to program for compared to, say, RT and the like. If anyone knows the workload required for implementing DLSS 1.0 or 2.0 compared to, say, VRS or RT, I'd love to know. At the very least I know it requires extensively training the model, so there's a large labor of time (and redundancy) involved.
 

kyliethicc

Member
PS6 target goals will probably be something like:

2026-27 launch, $500 price

Assuming it's still AMD x86 (or will it be ARM based?)

TSMC 5 or 3 nm die (3D stacked or chiplets?)
Customized version of latest AMD CPU/GPU architectures

8 core 16 thread CPU @ ~ 5 GHz
~ 54-72 CU GPU @ ~ 3.5 GHz
+ as much cache as possible to afford/fit on die

32 GB RAM (HBM?) @ ~ 1 TB/s bw
~ 2 TB SSD capacity, 10+ GB/s raw read, PCIe 5x4

PS6 games will be digital only
No internal disc drive, optional UHDBD drive accessory

~ 450 W PSU, ~ 250 W TDP

PS4, PSVR, PS5, PSVR2 back compat
revised DualSense 2 controller
 
Call me crazy but I don't think we're gonna see 10th gen dedicated consoles. By 2027 streaming is gonna be ubiquitous. Unlimited 5G will be almost everywhere, and if you really do live in bumfuck nowhere you can get super-fast, low-latency internet from Starlink or one of its competitors.
 
[quotes kyliethicc's PS6 target specs from above]

Nice specs there all in all. I think they can actually go a bit further in some of those though, like more cores/threads on the CPU (though if they went with a wider GPU as you suggest as an idea then they'd have to cut around somewhere and that could be CPU cores/threads), and I think they'll still stick with some type of GDDR-based memory. SSD will probably also be larger.

Digital-only might actually be something they do. Personally I wouldn't like it; content creators can constantly change the original content as it's being hosted on servers, even censoring stuff or removing features from previous versions in new updates, and you'd have no hard copy to fall back on (even if that would mean not being able to connect online to play that older version).

Seeing as how games from only 10-15 years ago are now getting re-released and censored for debatable reasons, I'd hate to have a future where physical copies of games were no longer provided.

Call me crazy but I don't think we're gonna see 10th gen dedicated consoles. By 2027 streaming is gonna be ubiquitous. Unlimited 5G will be almost everywhere, and if you really do live in bumfuck nowhere you can get super-fast, low-latency internet from Starlink or one of its competitors.

I don't have any doubts fast internet will become ubiquitous by then, but if things like VR and AR take off, they'll still require dedicated hardware in the home to run at least some part of the game locally. Personally I hope consoles never go away; that is, systems that can run a game locally rather than needing an internet connection to stream a feed of it running off some remote server.

I can see a future where cloud gaming services provide an option for subscribers at a certain tier to download some packaged version of the game to run on a local "gaming box" that's basically a PC-like console system with the hardware specs to run the games natively, which a subscriber would receive as part of a higher-tier subscription plan (think of it as the equivalent of a premium DVR cable box).

However, the software package has some means of knowing how long it's allowed to stay on the box, and at some point before that designated period concludes it requires the player to log back in to the cloud network to renew its status; this is the cloud service's way of being able to delete the downloaded copy off the subscriber's unit once their time has run out on the rental period (the "rental" of the downloaded game would be included in the higher-tier subscription plan).

That's one way it could play out. Microsoft seem to be setting themselves up to a point where they could leverage such a model, given what All-Access combined with xCloud and Game Pass brings, and could potentially bring in the future (new hardware upgrades covered by All-Access). I actually wonder if they have been planning or discussing similar things, because it would seem a natural fit for a 10th-gen system from them.
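To make that check-in idea concrete, here's a minimal sketch of the lease logic such a box might run; every name and the 30-day window are hypothetical:

Code:
import time

LEASE_SECONDS = 30 * 24 * 3600  # hypothetical 30-day offline lease

def can_launch(last_checkin: float) -> bool:
    """Playable only while the last cloud check-in is fresh; past that, the
    client forces a re-auth so the service can renew or revoke the copy."""
    return time.time() - last_checkin < LEASE_SECONDS

print(can_launch(time.time() - 10 * 24 * 3600))  # checked in 10 days ago -> True
print(can_launch(time.time() - 45 * 24 * 3600))  # 45 days ago -> False, must re-auth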
 
I don’t expect a PS6 until 2028. Diminishing returns in visuals and hardware will only cause generations to get longer and longer. Sony also doesn’t have a Series S to take into consideration so they could make the PS5 into a 1080p machine and release a PS5 Pro that runs at 4k.
 
psvr2 is the next PlayStation console, if a mobile chip streaming games can be considered a console

btw, welcome to the next level, flatties
 

Bryank75

Banned
SSDs and I/O that are faster and cooler.... massive amounts of RAM and cache scrubbers similar to PS5 but brought to a whole new level.

Target a price-point of 500 yet again.

Innovative silent and highly efficient cooling developed in partnership with Dyson, combined with an advanced vapor chamber, reducing overall size.

DLSS or a similar more advanced alternative.

The optimal CPU / GPU combination that fits the budget and hopefully can offer performance on par with the highest end PC at the time or beyond it in the dream scenario.

A mix and match subscription.... where you can have PS+ (with PSNow rolled in), Spotify, Funimation, Crunchyroll and two to three additional subscriptions all for 20-25 dollars per month.

Materials and finish have become increasingly important and I think the bar will be raised again next gen, still with majority plastics but with some flourishes, like a Gorilla Glass top surface with the PlayStation logo underneath the glass giving an optical effect that looks premium. LEDs at positions on the console that highlight curves and give it a futuristic look, etc.
 
I don’t expect a PS6 until 2028. Diminishing returns in visuals and hardware will only cause generations to get longer and longer. Sony also doesn’t have a Series S to take into consideration so they could make the PS5 into a 1080p machine and release a PS5 Pro that runs at 4k.

2028? Ooof, that's a long ways off, but hey, it could happen. I don't agree with the argument of diminishing returns, though. I USED to, but when you think about it, we still don't have games, by and large, with the visual fidelity of CG films or mixed-CG films from the mid-2000s. I blame that mainly on things like limited animation systems, but processing power capability (as well as rates of primitive generation) is also a factor IMO.

Once in-game graphics regularly match the fidelity of high-budget CG films from the late 2000s/early 2010s, that's when I think we'd start hitting a real ceiling of diminishing returns in terms of graphics. For actual physical feedback/immersion? We have a LONG way to go.
 

kyliethicc

Member
PS4 Pro actually made the PS5 feel like LESS of a next gen leap cause it supports 4K TVs etc.

Will Sony do a PS5 Pro? I think it just depends on when/how they want to sell 8K TVs. The TV industry will move from 4K to 8K because resolution has been their main selling point since the HD TV boom around 2007 or so.

So will Sony just launch a PS6 in 2026-27 and make it their "8K console" or will they just rush out a PS5 Pro in 2023-24 with some 8K upscaling tech?

I wonder if they simply won't do a 5 Pro so that the jump to the PS6 feels bigger. But they might chase the extra money a Pro model can make them. No pro refresh would also shorten the life of the console (I assume) so it could impact when we get a PS6.

PS5 is already very powerful, very large, and $500. Sony might not be able to make a more powerful Pro model console in 3-4 years for the same $500. I suspect instead they will just continue towards making the PS5 Slim model and introduce a version of the PS5 with 1.65 TB of SSD storage. And they'll push people towards digital.

2023 PS5 Slim
$500 for 1.65 TB (disc model)
$400 for 1.65 TB (digital edition)
$300 for 825 GB (digital edition)
 

truth411

Member
[quotes the OP's full mid-gen refresh and PS6 GPU breakdown]
What's up with this massive wall of text? Lol. Anywho, PS5 Slim will be on TSMC 3nm, not 5. Guessing Holiday 2023.
 

Magog.

Banned
I don't see 8k taking off until it's literally the same price as 4k and possibly the only option on the shelf. With modern anti-aliasing techniques I see zero benefit to having an 8K TV.
 

kyliethicc

Member
I don't see 8k taking off until it's literally the same price as 4k and possibly the only option on the shelf. With modern anti-aliasing techniques I see zero benefit to having an 8K TV.
Picture an aerial view of a city, tiny people really far away walking on the street.

At 4K res, only so many pixels can be used to display each person. 8K is 4x, so every little detail can be displayed using 4x the pixels, greatly enhancing clarity of fine detail.

It's mostly gonna be useful for really big TVs - 65, 75, 85 inch sizes.
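For reference, the pixel math behind that 4x:

Code:
uhd_4k = 3840 * 2160    #  8,294,400 px
uhd_8k = 7680 * 4320    # 33,177,600 px
print(uhd_8k / uhd_4k)  # 4.0 -- a figure drawn with 40x20 px at 4K gets 80x40 at 8K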
 

Magog.

Banned
Picture an aerial view of a city, tiny people really far away walking on the street.

At 4K res, only so many pixels can be used to display each person. 8K is 4x, so every little detail can be displayed using 4x the pixels, greatly enhancing clarity of fine detail.

It's mostly gonna be useful for really big TVs - 65, 75, 85 inch sizes.

I have a 65" 4k oled and I sit like 8 feet away while playing games. I have better than 20/20 eyesight and I still can't resolve the pixels. I don't give a shit about 8k so people with worse eyesight which is the vast majority of humanity probably don't either. Give me super computer quality AI that can respond to me in natural language with NPCs that think and "feel" independent of a script. That would be a next gen leap I could get behind. Even a pale immigration of that would be a much better upgrade than 8k.
 
Tasty infographic!

I can actually rock with all of this, altho they might use GDDR7 instead of HBM if GDDR7 has a sizable boost in bandwidth over G6 and double the capacity per module. They can technically stack it or do it as package-on-package to save on PCB real estate, tho if HBMNext/HBM3 prices are cheap enough at big economies of scale by 10th-gen then it's an easy choice to pick that over GDDR7 simply for the boost in channels, lower latency, and lower power consumption alone.

I have a 65" 4k oled and I sit like 8 feet away while playing games. I have better than 20/20 eyesight and I still can't resolve the pixels. I don't give a shit about 8k so people with worse eyesight which is the vast majority of humanity probably don't either. Give me super computer quality AI that can respond to me in natural language with NPCs that think and "feel" independent of a script. That would be a next gen leap I could get behind. Even a pale immigration of that would be a much better upgrade than 8k.
Speaking of AI, I think we're gonna see a big boost in AI-assisted programming and asset-creation processes for 10th-gen, with companies developing training models through machine learning and technologies like GPT-3, 4, and 5. I just hope that doesn't lead to massive displacement of regular people in the field; you'd still need specialists to train and curate the generated code/content, but the "grunt" side of things could theoretically be 100% replaced with AI systems, and I think that could have big negative impacts on employment if laws aren't in place to maintain certain minimums of actual people in those positions.
 

Honey Bunny

Member
Rehire Ken Kutaragi and shoot for the stars.

Explicitly deny anyone in California access to your hardware design process or software validation.

Playstation 6.
 

kyliethicc

Member
Still not thinking big enough. This is just a beefed-up current console.

Y'all gotta start thinking outside the box
That's all consoles will be going forward. Better each gen.

But I think a digital-only game console generation would be a big shift for most. They will either get rid of discs for PS6, or just keep the same discs as PS5.

The only big shifts in tech are the unknowable ones, like future AMD architectures. The PS6 might have MCM chiplets for its SoC, that'd be cool. Maybe some HBM instead of GDDR.

What do you think?

Craziest I can see the PS6 being is like this

3nm MCM SoC
8+8 ARM CPUs, 32 threads total, 2 chiplets @ ~ 5 GHz
36+36 CUs, 72 total active, 2 chiplets @ ~ 3.5 GHz
64 GB HBM @ ~ 2 TB/s bandwidth
2 TB PCIe Gen6x4 SSD @ ~ 25 GB/s raw read
+ dedicated I/O chiplet, and as much SRAM as possible

All of that 3D stacked onto 1 big interposer. But I can't see that being just $500.
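For what it's worth, the HBM figure is plausible on paper, assuming HBM3-class stacks with 1024-bit interfaces (pin rates being the open variable):

Code:
def hbm_stack_bw(gbps_per_pin: float, bus_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_bits * gbps_per_pin / 8

print(hbm_stack_bw(6.4))  # 819.2 GB/s per stack -> ~2 TB/s needs 3 stacks
print(hbm_stack_bw(8.0))  # 1024.0 GB/s per stack -> 2 stacks hit 2 TB/s even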
 

Magog.

Banned
I don’t expect a PS6 until 2028. Diminishing returns in visuals and hardware will only cause generations to get longer and longer. Sony also doesn’t have a Series S to take into consideration so they could make the PS5 into a 1080p machine and release a PS5 Pro that runs at 4k.

I'm sure existing PS5 owners would love that. The resolution wars are over. 4k is enough for any game with the great anti-aliasing we have now. Future hardware advances will be about asset quality, ray tracing and lighting improvements, and hopefully AI improvements.
 

Imtjnotu

Member
[quotes kyliethicc's post above]
This gen we had an evolution with how Sony designed the PS5. I'm thinking more in line with that.

More AI and machine learning. More SSD throughput. More custom chiplets with Infinity Fabric-style links between them.
 
Here are my thoughts on next gen or mid-gen refresh:

Process: 3nm or 3nm+

CPU: Must be 12 cores (2 threads each) minimum. Each core should be able to 'assist' the GPU in graphics processing when the GPU falls behind, similar to the CELL processor. No more 8-core standard. Frequency should be in the 4-5 GHz range. Perhaps add a separate ARM-based CPU for OS functions. I hate it when one of the CPU's cores is disabled for I/O and OS functions.

GPU: I am sure it will have better implementations of ray tracing, variable rate shading, the Geometry Engine, etc., but the Xbox Series X Pro needs to get past the 2 GHz threshold and be a minimum of 2.5 GHz. PlayStation needs to get past 36 compute units and jack it up to the 80-100 range.

RAM: I am thinking GDDR7 will be ready by then. I hope there will be no more bloated OS footprints hogging up RAM. Windows Core or Windows 10X along with advancements to DirectX 12 should reduce the OS footprint. You don't need high bandwidth for OS functions, so might as well put it in cheap DDR4 RAM (DDR5 would be better). The current gen's RAM size is almost equal to an HD-DVD disc, so I am thinking the mid-gen refresh is going to be close to a standard Blu-ray disc, between 20-25 GB of RAM, with bandwidth close to 1 terabyte/sec.

Storage Medium: I really don't think it's feasible to use optical disc drives anymore. Even if they managed to release an 8K Super Ultra Violet disc, the cost would be too high. The big guns need to come together to create a better storage-medium standard.

SSD: ReRAM, bandwidth 20 GB/sec to 60 GB/sec, and if possible close to 2 TB minimum.

Multiple USB 4.0 ports.

Wi-Fi 7, or Wi-Fi 6E minimum.

Latest version of Bluetooth, and HDMI out (2.2 or 3.0)?
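Rough math on what "close to 1 terabyte/sec" would take from GDDR (the per-pin rates here are guesses, since GDDR7 isn't spec'd yet):

Code:
def gddr_bandwidth(bus_bits: int, gbps_per_pin: float) -> float:
    """Aggregate GB/s for a given bus width and per-pin rate."""
    return bus_bits * gbps_per_pin / 8

print(gddr_bandwidth(320, 24))  # 960 GB/s on a Series X-style 320-bit bus
print(gddr_bandwidth(256, 32))  # 1024 GB/s on a PS5-style 256-bit bus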
 

kyliethicc

Member
[quotes the next-gen/mid-gen wishlist post above]
Cerny said that developers said they do not want more than 8 cores. That's why he went with 8. Dev feedback.

And a faster clocked 8 core beats a slower clocked 12 core in gaming. Don't expect any console to have more than 8 cores 16 threads for the next decade. They'll just keep clocking it faster, reducing the latency, adding cache, etc.

Plus, they can only afford to use so much die area for the CPUs. Adding cores wastes space.

And the PS4 and PS5 already have an ARM processor on the PCB for background tasks.
 
Cerny said that developers said they do not want more than 8 cores. That's why he went with 8. Dev feedback.

And a faster clocked 8 core beats a slower clocked 12 core in gaming. Don't expect any console to have more than 8 cores 16 threads for the next decade. They'll just keep clocking it faster, reducing the latency, adding cache, etc.

Plus, they can only afford to use so much die area for the CPUs. Adding cores wastes space.

And the PS4 and PS5 already have an ARM processor on the PCB for background tasks.

Why not both .gif?

12 cores plus close to 5 GHz. The CPUs need to go beyond the 4 GHz threshold.
 

Silver Wattle

Gold Member
Cerny said that developers said they do not want more than 8 cores. That's why he went with 8. Dev feedback.

And a faster clocked 8 core beats a slower clocked 12 core in gaming. Don't expect any console to have more than 8 cores 16 threads for the next decade. They'll just keep clocking it faster, reducing the latency, adding cache, etc.

Plus, they can only afford to use so much die area for the CPUs. Adding cores wastes space.

And the PS4 and PS5 already have an ARM processor on the PCB for background tasks.
That was hyperbole; more than likely devs were told more CPU cores would eat into the GPU budget, so they said 8 was fine.
Why not both .gif?

12 cores plus close to 5 GHz. The CPUs need to go beyond the 4 GHz threshold.
5 GHz is not happening in a console; 4 GHz+ really eats into power efficiency.
 