
Next Gen Consoles Will Be Absolutely Insane.

What are you gonna do when the "next gen" systems don't come, all there is are streaming platforms, and all developers make games for said platforms?

Honestly, some of you lot have shit for brains. You can't see the guy with a gun in the woods, but he's there and he's coming.
That seems like a cheap shot you're taking there. People who don't think streaming will have taken over the industry by the time the next gen is due are shit for brains?
 

DeepEnigma

Gold Member
You'll all be streaming by 2028, and you'll enjoy it.
[GIF: "No Way" reaction]
 

Grechy34

Member
With the recent announcement of the RTX 4090 cards, I am absolutely blown away at how fast and how big the leaps in technology are when it comes to GPUs and tech in general. I know there are two threads on the 4090 already, but I wanted to make one that specifically covers next-gen systems, and by next gen I am talking about the PS6 and the next Xbox.

IMPORTANT THINGS TO NOTE - I know tFlops is not really a perfect metric to explain how powerful a leap in GPU power is, but it's the metric that's easiest to digest and gives you some sort of ballpark. There are many other factors that should also be taken into account when you are talking about overall GPU performance increases.

Also, there are other factors to consider when judging how powerful next-gen hardware will be, not just the GPU. The CPU and RAM/memory/SSD will be other major factors, but this is strictly a GPU thread, so we are only talking about GPU increases here.


First and foremost - let's lay everything out on the table here to see what we are looking at from the last two generations of consoles.

The PS4 and Xbox One launched in 2013

The PS4 was sitting at a measly 1.84 tFlops
The Xbox One was sitting at an even lower 1.31 tFlops

The PS4 Pro later launched in 2016 and was sitting at 4.2 tFlops
The Xbox One X launched a year later in 2017 with 6.0 tFlops

The PS5 and Xbox Series X launched in 2020.

The PS5 is sitting at 10.28 tFlops (10.3 if we round up, so let's just stick with 10.3)
The Xbox Series X is at 12.15 tFlops (roughly 2 tFlops higher than the PS5) and is a bit higher in tFlops than the RTX 2080 Super, which sits at 11.15 tFlops. A 2080 Ti, however, is 13.45 tFlops, which makes it superior to both the PS5 and the Xbox Series X when talking about the GPU specifically.
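For reference, those headline figures are just a paper calculation: shader ALUs × 2 FP32 ops per clock (an FMA counts as two) × clock speed in GHz. A minimal sketch, using the publicly listed shader counts and clocks:

```c
#include <stdio.h>

/* Paper FP32 throughput: shader ALUs x 2 ops per clock (FMA) x clock in GHz,
   divided by 1000 to get TFLOPS. Shader counts and clocks are the public specs. */
static double tflops(int shader_alus, double clock_ghz) {
    return shader_alus * 2.0 * clock_ghz / 1000.0;
}

int main(void) {
    printf("PS5 (36 CUs):      %.2f TFLOPS\n", tflops(2304, 2.23));   /* ~10.28 */
    printf("Series X (52 CUs): %.2f TFLOPS\n", tflops(3328, 1.825));  /* ~12.15 */
    printf("RTX 3090 Ti:       %.2f TFLOPS\n", tflops(10752, 1.86));  /* ~40.0  */
    return 0;
}
```

That's also why tFlops is only a ballpark: the formula says nothing about architecture, memory bandwidth, or how much of that theoretical throughput real games actually extract.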

Now let's look at the 3000 series and the 4000 series, since we have the official numbers now.

The 3090 Ti is sitting at 40 tFlops, which is roughly 4x the PS5 and over 3x the Series X, and it launched earlier this year in March 2022.

The 3090 is sitting at 36 tFlops, roughly 3.5x the PS5, and it launched on September 24th, 2020, about two months before the launch of the PS5 and Xbox Series X. The consoles were already locked in to GPUs roughly equivalent to the 2000 series, so one generation behind.

The 3080 is sitting at 29.8 tFlops (34.1 for the 3080 Ti), which again is a massive increase compared to the PS5 and Xbox Series X.

Another difference with these GPUs is the amount and type of RAM, but let's be honest, RAM hasn't really been a problem for console systems anymore. The last time we had major RAM/memory issues was the PS3 and 360 era; we are well past that, and I highly doubt we will ever have that problem again.

The PS4 and Xbox One's biggest issue was the CPU: the super outdated, crappy Jaguar x86 cores.

Even with how weak and pathetic the PS4 and Xbox One look these days, the wizards over at Naughty Dog and Sony Santa Monica still have the best-looking games to date. The Last of Us Part II and God of War Ragnarök are very good examples of that.

[screenshots: The Last of Us Part II and God of War Ragnarök]


We have seen absolutely nothing of what the PS5 and Series X can truly do yet; that will have to wait until we get those true current-gen-only goodies.

Anyways - back to the GPUs

We now have the specs for the 4090, and it's releasing next month on October 12th:

The 4090 has 82.6 tFlops, which is more than double the 3090 Ti's 40 tFlops.

Fast forward to 2024 and we will be getting the 5000 series, which will probably once again make the 4090 look like a little baby; the trend seems to be roughly doubling the performance of the previous GPU generation. According to Nvidia, the 4090 is 2-4 times more powerful than the 3090 Ti depending on the application/game/software.

I suspect the 5000 series will finally break the 100 tFlops barrier; I mean, that's all but guaranteed at this point. Don't forget there will also be a 4090 Ti, and honestly that will probably already be very close to 100 tFlops.
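Just to put that "roughly doubles every generation" trend into numbers, here is a naive extrapolation (pure arithmetic on the assumption above, not a prediction):

```c
#include <stdio.h>

int main(void) {
    /* Naive extrapolation of the "roughly 2x per generation" trend described
       above. Not a forecast, just the arithmetic behind the claim. */
    double flagship_tflops = 82.6;   /* RTX 4090, 2022 */
    int year = 2022;
    for (int gen = 0; gen < 3; gen++) {
        year += 2;
        flagship_tflops *= 2.0;      /* assume a clean doubling each generation */
        printf("%d flagship: ~%.0f TFLOPS\n", year, flagship_tflops);
    }
    return 0;
}
```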

IMPORTANT THINGS TO NOTE - In 2026 it will be the 6000 series. When are the new consoles coming? I have no clue, but I am expecting them around 2026-2028, with 2026 at the earliest and 2028 at the latest. If the new consoles come out in 2026 then they will most likely be utilizing GPUs roughly equal in power to the 5000 series; if they release in 2028, then the 6000 series. But then again, there might be some sort of crazy curveball - I am just going off how the last 10 years went. The PS6 already entered R&D/concepting in 2021. The PS5 started development in 2015, which is about a year and a half after the launch of the PS4.

What are you all expecting for the power of the GPUs out of next-gen systems?

Hopefully. This generation has been by far the worst generation of consoles I have experienced. Not that the actual consoles are bad. But there's been no stock for years, games being brought down by the need to port to PS4/Xbox and a lack of creativity.
 


Hopefully. This generation has been by far the worst generation of consoles I have experienced. Not that the actual consoles are bad. But there's been no stock for years, games being brought down by the need to port to PS4/Xbox and a lack of creativity.
Don't think you can blame the generation itself for that. The world was going through an insane pandemic right before launch and that fucked not just the video game industry but everything else in existence. Just very unfortunate timing is what it is.
 

PeteBull

Member
What are you gonna do when the "next gen" systems don't come, all there is are streaming platforms, and all developers make games for said platforms?

Honestly, some of you lot have shit for brains. You can't see the guy with a gun in the woods, but he's there and he's coming.
So far all streaming platforms have been unprecedented disasters, each and every one, not just Stadia; players/customers simply ignore them (even the much-praised GeForce Now from Nvidia was ignored during the crazy crypto boom, when most midrange GPUs were priced at $1k+), so let's not worry about this shit for now.
Btw, the next one to die will be Amazon's Luna ;D
 
I don’t think we’ll be seeing next gen for quite a long time. Chip shortages over the last couple years and the economy being shot to shit over the last few years will likely have an impact going forward.
 
What are you gonna do when the "next gen" systems don't come, all there is are streaming platforms, and all developers make games for said platforms?

Honestly, some of you lot have shit for brains. You can't see the guy with a gun in the woods, but he's there and he's coming.
What are you talking about? Streaming platforms are fucking awful. Even on a 1Gbps connection xbox cloud play has noticeable latency and artifacting everywhere. I also believe that both PlayStation and Xbox have said that more consoles will be coming in the future.
 

yamaci17

Member
RTX 3000 tflops are fake.

In reality, the 3070 is not a "20 tflops"-tier GPU. It is merely around 12 tflops of RDNA2 (6700 XT, etc.).
The 3090 Ti is not a 40 tflops monster. It is around 24 tflops effectively compared to RDNA2 cards (the 6950 XT is 24 tflops, and per TechPowerUp the 3090 Ti is on AVERAGE 9% faster than the 6950 XT, so it has nowhere near "40 tflops" of performance; in the best-case scenario you might get 26-27 tflops-like perf out of it).

The 4090 is only 60% faster than the 3090 Ti in terms of pure CUDA power, and in gaming it will most likely be 45-50%. But anyway, that means it will be around 37-38 tflops effectively. Yet sure as hell Nvidia will market it as 100 tflops.

I don't care about or give credit to fake tflops calculations that assume a game is running entirely on FP32. It is not possible and never will be.

Just look at RDNA2: the 6700 XT is near the consoles (12 tflops). A 7700 XT should be around 16-20 tflops, and an 8700 XT will most likely be around 24-30 tflops. So in the best-case scenario next-gen consoles could have 30 tflops; in the worst case they would continue with 24 tflops, which is what you usually expect from mid-gen refreshes (2x-2.5x GPU power over the base console, similar CPU and RAM configurations).

You can't have massively inflated GPU power without adding more RAM, and adding more RAM to Series X and PS5 refreshes would only make things worse for developers (see how much they complained about the Series S initially; not that they can't do it, it just means more work).

TLDR: fake tflops mean nothing
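For what it's worth, the arithmetic behind those "effective tflops" figures works out as follows; this is a rough sketch reusing the post's own numbers, and the ~45% gaming uplift for the 4090 is the post's assumption, not a benchmark result:

```c
#include <stdio.h>

int main(void) {
    /* Rough sketch of the post's own reasoning, using its numbers. */
    double rdna2_tflops = 24.0;   /* RX 6950 XT paper FP32 spec */
    double delta_3090ti = 1.09;   /* 3090 Ti ~9% faster on average (per the post) */
    double delta_4090   = 1.45;   /* assumed ~45% gaming uplift over the 3090 Ti */

    double eff_3090ti = rdna2_tflops * delta_3090ti;   /* ~26 "RDNA2-equivalent" tflops */
    double eff_4090   = eff_3090ti * delta_4090;       /* ~38 "RDNA2-equivalent" tflops */

    printf("3090 Ti effective: ~%.0f TFLOPS\n", eff_3090ti);
    printf("4090 effective:    ~%.0f TFLOPS\n", eff_4090);
    return 0;
}
```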
 
I got a stupid noob question:

You know how there were two versions of Windows 7, 32-bit and 64-bit, and now Windows 10/11 is purely 64-bit?
Remember how Android ARM CPUs were 32-bit before, and now all Snapdragon CPUs are finally 64-bit?
Apple made sure their ARM CPUs were 64-bit from early on.

Question is, are current Zen CPUs and Intel Alder Lake/Raptor Lake CPUs 64-bit? When are we gonna move to 128-bit CPUs? Or 256-bit? Or 312-bit? Or 512-bit?
 

DeepEnigma

Gold Member
Question is, are current Zen CPUs and Intel Alder Lake/Raptor Lake CPUs 64-bit? When are we gonna move to 128-bit CPUs? Or 256-bit? Or 312-bit? Or 512-bit?
Yes.

And probably not. Here is a good explanation on it,


Also this,
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge[1] processor shipping in Q1 2011 and later on by AMD with the Bulldozer[2] processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

AVX2 expands most integer commands to 256 bits and introduces FMA. AVX-512 expands AVX to 512-bit support utilizing a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing processor scheduled to ship in 2015.
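To make the bit-width point concrete: addresses and pointers stay 64-bit on these CPUs, but the vector data paths are already 256-bit (AVX/AVX2) or 512-bit (AVX-512). A small sketch using standard AVX intrinsics (compile with -mavx):

```c
#include <immintrin.h>   /* AVX intrinsics */
#include <stdio.h>

int main(void) {
    /* Addresses/pointers are 64-bit on Zen and Alder Lake/Raptor Lake... */
    printf("pointer width: %zu bits\n", sizeof(void *) * 8);

    /* ...but the *data* paths are already wider: one AVX instruction
       adds eight 32-bit floats at once inside a 256-bit register. */
    __m256 a = _mm256_set1_ps(1.5f);
    __m256 b = _mm256_set1_ps(2.5f);
    __m256 sum = _mm256_add_ps(a, b);

    float out[8];
    _mm256_storeu_ps(out, sum);
    printf("lane 0 of the 256-bit result: %.1f\n", out[0]);
    return 0;
}
```

So the move to 128/256/512-bit already happened for data registers; it's the address width that has no practical reason to grow past 64-bit.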
 

SumJester

Member
Frankly, for me it's reaching the realm of diminishing returns regarding graphical improvements.
Not that eager to buy new ways to pay for games I don't really own, nor to waste 70 bucks to beta test.
 

JackMcGunns

Member
The issue is not the systems; the issue is the never-satisfied gamer. Last gen the No. 1 complaint was the lack of a good CPU in consoles, which forced most if not all games to run at 30fps. We all dreamed of having 60fps, and most real gamers put that at the top of the priority list for next-gen games... and guess what? Not only did we get a massive CPU boost, with practically every game supporting 60fps to the point that it's weird if one doesn't, but we also got a massive I/O boost with SSD drives. But waah waaah waaaaah, never good enough.

I'm enjoying the current gen very much and the next one will be that much more of an improvement over this one.
 

Grechy34

Member
Don't think you can blame the generation itself for that. The world was going through an insane pandemic right before launch and that fucked not just the video game industry but everything else in existence. Just very unfortunate timing is what it is.

Yeah, that's fair enough. Not necessarily blaming any particular party, just assessing the current generation for what it currently is.
 

JimRyanGOAT

Member
The issue is not the systems; the issue is the never-satisfied gamer. Last gen the No. 1 complaint was the lack of a good CPU in consoles, which forced most if not all games to run at 30fps. We all dreamed of having 60fps, and most real gamers put that at the top of the priority list for next-gen games... and guess what? Not only did we get a massive CPU boost, with practically every game supporting 60fps to the point that it's weird if one doesn't, but we also got a massive I/O boost with SSD drives. But waah waaah waaaaah, never good enough.

I'm enjoying the current gen very much and the next one will be that much more of an improvement over this one.

What good are all the improved specs if developers aren't going to take advantage of them?

Cross gen means the game will be built on last gen engines first then just scaled up for next gen
 

//DEVIL//

Member
You will be playing the Horizon Zero Dawn remake of the remake and The Last of Us remake of the remake.

And you will pay $90 per game because of Sony, and idiots keep supporting them.
 

Corndog

Banned
RTX 3000 tflops are fake.

In reality, the 3070 is not a "20 tflops"-tier GPU. It is merely around 12 tflops of RDNA2 (6700 XT, etc.).
The 3090 Ti is not a 40 tflops monster. It is around 24 tflops effectively compared to RDNA2 cards (the 6950 XT is 24 tflops, and per TechPowerUp the 3090 Ti is on AVERAGE 9% faster than the 6950 XT, so it has nowhere near "40 tflops" of performance; in the best-case scenario you might get 26-27 tflops-like perf out of it).

The 4090 is only 60% faster than the 3090 Ti in terms of pure CUDA power, and in gaming it will most likely be 45-50%. But anyway, that means it will be around 37-38 tflops effectively. Yet sure as hell Nvidia will market it as 100 tflops.

I don't care about or give credit to fake tflops calculations that assume a game is running entirely on FP32. It is not possible and never will be.

Just look at RDNA2: the 6700 XT is near the consoles (12 tflops). A 7700 XT should be around 16-20 tflops, and an 8700 XT will most likely be around 24-30 tflops. So in the best-case scenario next-gen consoles could have 30 tflops; in the worst case they would continue with 24 tflops, which is what you usually expect from mid-gen refreshes (2x-2.5x GPU power over the base console, similar CPU and RAM configurations).

You can't have massively inflated GPU power without adding more RAM, and adding more RAM to Series X and PS5 refreshes would only make things worse for developers (see how much they complained about the Series S initially; not that they can't do it, it just means more work).

TLDR: fake tflops mean nothing
I have a 3070. I don’t see current consoles coming close to what it is capable of. But that’s just my opinion. Time will tell.
 

yamaci17

Member
I have a 3070. I don’t see current consoles coming close to what it is capable of. But that’s just my opinion. Time will tell.
The 3070 has the advantage of being really fast in ray tracing (around 30-60% faster depending on the title: if a game has complex ray tracing it is going to be near 50-60%, and if a game has more simplistic ray tracing it will be around 30%, which is roughly how much faster the 3070 is than the PS5 in rasterization).

And we know that eventually the PS5/XSX will get special raster hacks/optimizations that extract more performance from their hardware. But that's another topic (see: the GTX 970 being 2-2.5x faster than the PS4 in 2013-2016 games, but once next gen started it eventually was only 1.5-1.7x faster than the PS4; this has to do with both console-side optimizations and the lack of GPU-side optimizations for the older arch).

Ray tracing... is cool, when you can use it. Up until now there have been no troubles. DX12 dictates that 10-15% of the available VRAM budget must be left to background applications, which gives us around 6.5 GB of usable VRAM. That was a fine budget for adding ray tracing to PS4-oriented games, but next-gen textures and games will demand an undisputed 10 GB memory buffer in the future.

In those cases you will run into situations where the 3070 is unable to run ray tracing alongside next-gen textures. Sometimes even sacrificing heavily on textures won't yield proper results. So that advantage will go to waste at some point. And if you can't utilize ray tracing in such cases, the 3070 just becomes a GPU that is barely 25-35% faster than the consoles, and even that advantage can diminish quickly if Ampere is left behind by NV/devs or the consoles suddenly get more performance out of their hardware (it has happened before, it will happen again).

So yeah, sorry, the 3070 is not a good example. It actually made PC gaming look really bad when it ran into all kinds of VRAM-related troubles in Spider-Man Remastered. You had to sacrifice and compromise on textures to run ray tracing at 4K/upscaled (which is what the PS5 is capable of, thanks to having enough memory buffer to fit both the ray tracing calculations and the high-quality assets/4K LODs all together). I myself have a 3070. This GPU deserved a 10-12 GB VRAM buffer; 8 GB is too tiny for what it is capable of.

For the case of complex ray tracing: an 8 GB buffer is already not enough for Cyberpunk, even at 1440p with high textures, in certain locations of the map. This is a freaking lastgen-based game with lastgen textures. How do you think the 3070 will be able to upgrade textures and keep complex ray tracing at the same time?

If that's the case, what is the point of having that big ray tracing performance advantage at all? It was only relevant in lastgen-based ray tracing games. I had the pleasure of playing and experiencing them, but the card will shit the bed once actual nextgen games start to hit.

The PS5 is meant for the future. If we're going to compare the 3070 and the PS5, then we have to see what the future holds for both pieces of hardware. I just think the 3070 won't be able to capitalize on its ray tracing advantage in the future.

This is GTX 770 vs PS4 all over again. Go see how people thought of the 2 GB 770 versus the PS4. Whenever you said something about its tiny VRAM buffer, people were like "more vram wont make ps4 faster lolol". In the end that decrepit GPU cannot even match the PS4 in true nextgen (for lastgen) games like God of War, RDR2 and many more. Now ask the same people whether they would prefer a 770 or a PS4 and they would ridicule the 770 for how decrepit a GPU it is. It is what it is. That GPU too had enormous power over the PS4, but thanks to its tiny VRAM buffer it cannot even shine. The 4 GB 770 fares better, but it was a rare GPU. And Nvidia did not make the same mistake this time (no 16 GB 3070 in sight); they're literally skimping on VRAM on a supposed "4080"-tier card by giving it 12 GB (in reality it is a 4070).

I'm not going to say the 3070 will fall into the exact same situation (2 GB of VRAM against a 4 GB budget versus 6.5 GB of VRAM against a 10 GB budget; in this case the 3070 won't be decrepit like the 770, but it won't be able to enjoy its premiums aside from DLSS).
 

Corndog

Banned
The 3070 has the advantage of being really fast in ray tracing (around 30-60% faster depending on the title: if a game has complex ray tracing it is going to be near 50-60%, and if a game has more simplistic ray tracing it will be around 30%, which is roughly how much faster the 3070 is than the PS5 in rasterization).

And we know that eventually the PS5/XSX will get special raster hacks/optimizations that extract more performance from their hardware. But that's another topic (see: the GTX 970 being 2-2.5x faster than the PS4 in 2013-2016 games, but once next gen started it eventually was only 1.5-1.7x faster than the PS4; this has to do with both console-side optimizations and the lack of GPU-side optimizations for the older arch).

Ray tracing... is cool, when you can use it. Up until now there have been no troubles. DX12 dictates that 10-15% of the available VRAM budget must be left to background applications, which gives us around 6.5 GB of usable VRAM. That was a fine budget for adding ray tracing to PS4-oriented games, but next-gen textures and games will demand an undisputed 10 GB memory buffer in the future.

In those cases you will run into situations where the 3070 is unable to run ray tracing alongside next-gen textures. Sometimes even sacrificing heavily on textures won't yield proper results. So that advantage will go to waste at some point. And if you can't utilize ray tracing in such cases, the 3070 just becomes a GPU that is barely 25-35% faster than the consoles, and even that advantage can diminish quickly if Ampere is left behind by NV/devs or the consoles suddenly get more performance out of their hardware (it has happened before, it will happen again).

So yeah, sorry, the 3070 is not a good example. It actually made PC gaming look really bad when it ran into all kinds of VRAM-related troubles in Spider-Man Remastered. You had to sacrifice and compromise on textures to run ray tracing at 4K/upscaled (which is what the PS5 is capable of, thanks to having enough memory buffer to fit both the ray tracing calculations and the high-quality assets/4K LODs all together). I myself have a 3070. This GPU deserved a 10-12 GB VRAM buffer; 8 GB is too tiny for what it is capable of.

For the case of complex ray tracing: an 8 GB buffer is already not enough for Cyberpunk, even at 1440p with high textures, in certain locations of the map. This is a freaking lastgen-based game with lastgen textures. How do you think the 3070 will be able to upgrade textures and keep complex ray tracing at the same time?

If that's the case, what is the point of having that big ray tracing performance advantage at all? It was only relevant in lastgen-based ray tracing games. I had the pleasure of playing and experiencing them, but the card will shit the bed once actual nextgen games start to hit.

The PS5 is meant for the future. If we're going to compare the 3070 and the PS5, then we have to see what the future holds for both pieces of hardware. I just think the 3070 won't be able to capitalize on its ray tracing advantage in the future.

This is GTX 770 vs PS4 all over again. Go see how people thought of the 2 GB 770 versus the PS4. Whenever you said something about its tiny VRAM buffer, people were like "more vram wont make ps4 faster lolol". In the end that decrepit GPU cannot even match the PS4 in true nextgen (for lastgen) games like God of War, RDR2 and many more. Now ask the same people whether they would prefer a 770 or a PS4 and they would ridicule the 770 for how decrepit a GPU it is. It is what it is. That GPU too had enormous power over the PS4, but thanks to its tiny VRAM buffer it cannot even shine. The 4 GB 770 fares better, but it was a rare GPU. And Nvidia did not make the same mistake this time (no 16 GB 3070 in sight); they're literally skimping on VRAM on a supposed "4080"-tier card by giving it 12 GB (in reality it is a 4070).

I'm not going to say the 3070 will fall into the exact same situation (2 GB of VRAM against a 4 GB budget versus 6.5 GB of VRAM against a 10 GB budget; in this case the 3070 won't be decrepit like the 770, but it won't be able to enjoy its premiums aside from DLSS).
Sorry, I'm not going to read that giant post. Sum it up in a few sentences.
 

Kenneth Haight

Gold Member
You'll all be streaming by 2028, and you'll enjoy it.
Bookmarked, see you in 2028 to see how things have gone. One of the largest tech companies on the planet has just mothballed its disaster of a streaming service.

You need to offer consumers both choices; Xbox have a way better approach in providing hardware plus the ability to stream. Google were too aggressive. With the garbage internet that still exists even in 1st-world countries, I would say it's 10 years before streaming goes mainstream.

For me, I will always pick local hardware, as I'm old and stuck in my ways and I know how crap latency and jitter etc. can be. It's not something I want to be involved in until someone can demonstrate that there is no noticeable difference between performance locally and in the cloud.
 

PeteBull

Member
For me, I will always pick local hardware, as I'm old and stuck in my ways and I know how crap latency and jitter etc. can be. It's not something I want to be involved in until someone can demonstrate that there is no noticeable difference between performance locally and in the cloud.
Indeed, even the best-case scenario is far worse than local hardware and easily noticeable even by casual gamers (unless you play exclusively turn-based games, then I guess it would be OK-ish ;p). But what matters is the worst-case scenario, when your internet isn't perfect every millisecond of every hour of every day; then it is an unplayable experience, not 60 vs 120-178 ms ping but a pure disaster.
 

PeteBull

Member
Not really.

[chart: Nvidia's GeForce Now RTX 3080 latency comparison]
You're quoting GeForce Now, and yet you know it's still so shitty that people preferred not to buy a GPU, or to buy a super overpriced GPU in the middle of the crypto boom, rather than go for that supposedly great streaming experience ;D
Edit: I want to see that total 56 ms ping in a real scenario from GFN, lol, not some specially tested one-in-a-million possibility, because in such tests even Stadia looked good, while in a real test it was like this: https://d21rhj7n383afu.cloudfront.n...713498-xfd55s_t_1574098449979_640_360_600.mp4

Found the actual vid from DF

If we compare apples to apples, so native PC vs GFN, the results are in big favour of the native PC: at 60fps it's 49 ms vs 86 ms on GFN, and at 120fps native it's 31 ms vs 72 ms on GFN. Huge difference, and again keep in mind that's the best-case scenario where you have super fast internet without any losses (which realistically you will never have).
;D
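Putting those DF-style figures next to the frame time shows what the extra milliseconds mean in practice (the numbers below are the ones quoted above, not new measurements):

```c
#include <stdio.h>

int main(void) {
    /* Button-to-photon latency figures quoted in the post above (in ms). */
    double native_60  = 49.0, gfn_60  = 86.0;   /* 60 fps  */
    double native_120 = 31.0, gfn_120 = 72.0;   /* 120 fps */

    /* Express the streaming penalty in whole frames at each refresh rate. */
    printf("60 fps:  +%.0f ms, about %.1f extra frames at 16.7 ms/frame\n",
           gfn_60 - native_60, (gfn_60 - native_60) / (1000.0 / 60.0));
    printf("120 fps: +%.0f ms, about %.1f extra frames at 8.3 ms/frame\n",
           gfn_120 - native_120, (gfn_120 - native_120) / (1000.0 / 120.0));
    return 0;
}
```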
 

rodrigolfp

Haptic Gamepads 4 Life
You're quoting GeForce Now, and yet you know it's still so shitty that people preferred not to buy a GPU, or to buy a super overpriced GPU in the middle of the crypto boom, rather than go for that supposedly great streaming experience ;D
Edit: I want to see that total 56 ms ping in a real scenario from GFN, lol, not some specially tested one-in-a-million possibility, because in such tests even Stadia looked good, while in a real test it was like this: https://d21rhj7n383afu.cloudfront.n...713498-xfd55s_t_1574098449979_640_360_600.mp4

Found the actual vid from DF

If we compare apples to apples, so native PC vs GFN, the results are in big favour of the native PC: at 60fps it's 49 ms vs 86 ms on GFN, and at 120fps native it's 31 ms vs 72 ms on GFN. Huge difference, and again keep in mind that's the best-case scenario where you have super fast internet without any losses (which realistically you will never have).
;D

For local PC hardware, yes, PC will still be supreme. But the thread is about next gen, so consoles, and local console hardware can lose to streaming on latency.
 
They need to 'brute force' the next gen consoles more by (I am approaching this from a conceptual level):

-incorporating Machine Learning Cores +proper implementation of Ray Tracing Cores in the GPU

More Controllers or processing Units to offset tasks away from the CPU and additional processing power to assist the GPU:
-Dedicated PPU: Physics Processing Unit to help assist in realistic simulation-type games, plus additional assistance in Ray Tracing performance
-Dedicated NPU: Neural Processing Unit (think Apple M1 and M2)
-Dedicated I/O Core-Controller Unit: Completely flushes and saturates the GPU and CPU without being idle and queued, removes all redundant tasks from CPU
-Customized PCIe 5 SSD: 1+ TB of base storage reaching read/write speeds of 10 GB/s uncompressed / 20 GB/s compressed, which is on par with the RAM bandwidth of the PS3/Xbox 360. The SSD can act as additional secondary RAM
-Multiple USB 4.0 Version 2.0 ports

CPU to hit 5 Ghz
GPU to hit 3 Ghz

Zen 4 and RDNA 3 are all about efficiency. Zen 5 is rumored to be built from the ground up. Don't know too much about RDNA 4. Ideally next-gen consoles would be Zen 5/RDNA 4 based.

Cost: $Your left testicle+1 of your functioning kidneys
 

PUNKem733

Member
They need to 'brute force' the next gen consoles more by (I am approaching this from a conceptual level):

-incorporating Machine Learning Cores +proper implementation of Ray Tracing Cores in the GPU

More Controllers or processing Units to offset tasks away from the CPU and additional processing power to assist the GPU:
-Dedicated PPU: Physics Processing Unit to help assist in realistic simulation-type games, plus additional assistance in Ray Tracing performance
-Dedicated NPU: Neural Processing Unit (think Apple M1 and M2)
-Dedicated I/O Core-Controller Unit: Completely flushes and saturates the GPU and CPU without being idle and queued, removes all redundant tasks from CPU
-Customized PCIe 5 SSD: 1+ TB of base storage reaching read/write speeds of 10 GB/s uncompressed / 20 GB/s compressed, which is on par with the RAM bandwidth of the PS3/Xbox 360. The SSD can act as additional secondary RAM
-Multiple USB 4.0 Version 2.0 ports

CPU to hit 5 Ghz
GPU to hit 3 Ghz

Zen 4 and RDNA 3 are all about efficiency. Zen 5 is rumored to be built from the ground up. Don't know too much about RDNA 4. Ideally next-gen consoles would be Zen 5/RDNA 4 based.

Cost: $Your left testicle+1 of your functioning kidneys

That sounds great, but aren't Zen 5 and RDNA 4 coming at the end of next year/beginning of 2024? These consoles are coming 4-5 years after that, so I would think something closer to Zen 6/RDNA 5 would be used, since those should be coming in 2025.
 
That sounds great, but aren't Zen 5 and RDNA 4 coming at the end of next year/beginning of 2024? These consoles are coming 4-5 years after that, so I would think something closer to Zen 6/RDNA 5 would be used, since those should be coming in 2025.

Playstation 4 came out in Late 2013
Playstation 4 Pro came out late 2016
3 year difference


Xbox One came out Late 2013
Xbox One X came out Late 2017
4 year difference

Only 3-4 year difference. I think this time it will be 4-5 years for several reasons:

-Shortages for PlayStation 5 and Xbox Series S|X
-Increase in cost for actual hardware/silicon production for any node shrinkage beyond 7nm
-Hardly any games utilizing Zen2/RDNA2/SSD, and still being cross gen
-Economic hardship due to inflation, and COVID 19 impact

I personally feel the rumored future hardware revisions of the PS5 and X Series are a great idea to reduce production cost, reduce the purchase price of the console, increase revenue and recover from losses. The extra time will also give developers room to refine their development tools and utilize newer game engines like Unreal Engine 5 that tap into the power of Zen 2/RDNA 2/SSD.

It would be even better for the newer consoles to have Zen 6/RDNA 5, but that is too optimistic. Zen 5/RDNA 4 is a reasonable middle ground between that and the more pessimistic Zen 3/RDNA 3 expectations.
 
What I would really like is for Sony to go full on with a 100% RT based architecture, no support at all for the old raster pipeline.

This would kill backwards compatibility and make porting a chore, so it will probably not happen.
 
I am okay with ps4 and xbox one graphics. Slightly better is okay too. I guess.

My main priority is cheaper gaming. Gaming is getting too expensive. Game development is taking too long and getting too expensive.

We need a really long console generation with cheap games and consoles.

Remember 99 dollar consoles and 5 dollar games in the bins?
 

Klosshufvud

Member
I am okay with ps4 and xbox one graphics. Slightly better is okay too. I guess.

My main priority is cheaper gaming. Gaming is getting too expensive. Game development is taking too long and getting too expensive.

We need a really long console generation with cheap games and consoles.

Remember 99 dollar consoles and 5 dollar games in the bins?
I can appreciate the Series S for this. It's a very simple yet decently specced console. Aside from lacking a disc reader, it's a very old-school console. It's quick, small and quiet. It's cheap and has a decent performance baseline. It runs games like Cyberpunk at 60 fps.

The cheap games are lacking though. Sadly it seems the digital market is becoming either F2P phenomena or $70 premium games. I wish more games were released day 1 in the $20-30 category that weren't just indies. The profit margins for publishers selling digital are so large regardless. Game Pass is alright, but I dislike not knowing whether or not games will still be there a month or five months from now.
 

Haggard

Banned
I am okay with ps4 and xbox one graphics. Slightly better is okay too. I guess.

My main priority is cheaper gaming. Gaming is getting too expensive. Game development is taking too long and getting too expensive.

We need a really long console generation with cheap games and consoles.

Remember 99 dollar consoles and 5 dollar games in the bins?
Gaming has never been as cheap as it is now, the bargain bin still exists, and hardware iterations are absolutely necessary if you don't want total stagnation.
 

Sosokrates

Report me if I continue to console war
I can appreciate the Series S for this. It's a very simple yet decently specced console. Aside from lacking a disc reader, it's a very old-school console. It's quick, small and quiet. It's cheap and has a decent performance baseline. It runs games like Cyberpunk at 60 fps.

The cheap games are lacking though. Sadly it seems the digital market is becoming either F2P phenomena or $70 premium games. I wish more games were released day 1 in the $20-30 category that weren't just indies. The profit margins for publishers selling digital are so large regardless. Game Pass is alright, but I dislike not knowing whether or not games will still be there a month or five months from now.
Has there been any game on the service for only a month?
They seem to be on for many months.
 

deriks

4-Time GIF/Meme God
Yeah, but we won't really be able to tell aside from the performance itself

I mean, PS4 and One games still look great; the problem was more a thing here and there, and the draw distance. Kingdom Hearts 3 has Toy Story stuff that is very close to the most recent movie... Things can be prettier on PS5 and Series, but taking too long just to have prettier hair isn't the best choice
 

Klosshufvud

Member
Has there been any game on the service for only a month?
They seem to be on for many months.
They are on for months, but just the idea that there is a rotation and the library isn't "permanent" annoys me. Digital ownership is absolutely preferable to digital access through a subscription.
 
I am okay with ps4 and xbox one graphics. Slightly better is okay too. I guess.

My main priority is cheaper gaming. Gaming is getting too expensive. Game development is taking too long and getting too expensive.

We need a really long console generation with cheap games and consoles.

Remember 99 dollar consoles and 5 dollar games in the bins?

What about gamepass and xcloud?
 
They need to 'brute force' the next gen consoles more by (I am approaching this from a conceptual level):

-incorporating Machine Learning Cores +proper implementation of Ray Tracing Cores in the GPU

This is a given, but those two cores would be the same type, i.e. Tensor cores are used for both ML and RT, though only for part of the process. You still need the general GPU structure for part of the process WRT RT.

More Controllers or processing Units to offset tasks away from the CPU and additional processing power to assist the GPU:
-Dedicated PPU: Physics Processing Unit to help assist in realistic simulation-type games, plus additional assistance in Ray Tracing performance

This actually existed back in the day, like with PhysX. GPGPU is generally good enough to handle physics calculations these days, although GPUs have longer latencies than what PPUs had, and physics calculations benefited from those lower latencies.

-Dedicated NPU: Neural Processing Unit (think Apple M1 and M2)

Technically speaking if the 10th-gen consoles have a Tensor Core-like equivalent they would already have a complex of NPUs; Tensor cores are classified as a type of NPU.

-Dedicated I/O Core-Controller Unit: Completely flushes and saturates the GPU and CPU without being idle and queued, removes all redundant tasks from CPU

The PS5 more or less already has this, an entire memory sub-system complex for handling flow of data to/from the storage and RAM memory pools. The CPU has very little involvement in the process outside of initiating the commands (I guess conceptually it'd be similar to how the CPU typically generates the command lists for the GPU).

It's basically like a DPU (Data Processing Unit), just stripped down to deal with flash and nonvolatile memory types.

-Customized PCIe 5 SSD: 1+ TB of base storage reaching read/write speeds of 10 GB/s uncompressed / 20 GB/s compressed, which is on par with the RAM bandwidth of the PS3/Xbox 360. The SSD can act as additional secondary RAM

Not in the way you're probably thinking. It's still NAND, probably TLC or QLC at that, so it is still page-addressed and block-level addressable memory. It doesn't allow for the true random access of RAM (though there are schemes that can kind of mimic it), has worse latency than RAM, and lacks byte-addressability (very important for data granularity; RAM is byte-addressable, NOR flash is even bit-addressable).

However, what fast SSDs allow is for more of the actual RAM to be freely available at any given point, since less of it needs to be used as a longer-state pre-cache for data needed in an upcoming scene. Basically, more of the physical memory can be used for only the most pertinent data. Mark Cerny talks about this in Road to PS5, and things like the PS5's SSD I/O complex, and SFS on the Series consoles, are meant to help enable it.

Although with better tech in future systems, it can be implemented even better.
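A rough illustration of that point, with made-up but plausible round numbers (the working-set size and throughputs below are assumptions, not measured console behaviour):

```c
#include <stdio.h>

int main(void) {
    /* The slower the storage, the earlier a game must start loading,
       so the more RAM gets tied up as a look-ahead cache. */
    double working_set_gb = 10.0;   /* assets the next area might need   */
    double hdd_gbps = 0.1;          /* ~100 MB/s lastgen hard drive      */
    double ssd_gbps = 5.5;          /* PS5-class raw SSD throughput      */

    printf("HDD: %.0f s of lead time -> pre-cache most of it in RAM\n",
           working_set_gb / hdd_gbps);
    printf("SSD: %.1f s of lead time -> keep far less of it resident in RAM\n",
           working_set_gb / ssd_gbps);
    return 0;
}
```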

-Multiple USB 4.0 Version 2.0 ports

Well, it'll depend on the system's needs. As something to facilitate VR/AR (either tethered or with a dongle to enable wireless communication), I'm hoping 10th-gen systems try going for a big integrated VR/AR push, including local multiplayer VR. Or at least one of the systems, anyway :/

CPU to hit 5 Ghz
GPU to hit 3 Ghz

These kinds of metrics really depend on what's needed for the design. Personally I think addressing data locality and ways to minimize the impact of data accesses across the memory bus are more important, and these can be addressed with a combination of PIM/PNM designs, better memory (HBM > GDDR), chiplets (maybe an implementation of the UCIe standard?), CPU metadata, clusters of smaller CPU (possibly ARM) cores closer to memory in some way, etc.

Zen 4 and RDNA 3 are all about efficiency. Zen 5 is rumored to be built from the ground up. Don't know too much about RDNA 4. Ideally next-gen consoles would be Zen 5/RDNA 4 based.

By the time these systems launch we'll probably be at Zen 6/RDNA 5 or maybe products with completely different nomenclature/brand names. Whatever is planned, I hope it involves a transition to ARM cores, involves HBM (or even HBM-PIM), PIM/PNM-favoring architectures, and realistically sufficient GPUs.

Like ByWatterson was saying, the bottleneck these days isn't system power but the amount of time, budget (cost), and manpower needed to make a modern high-quality AAA game. Over the past four or so generations, it's gone from teams of fewer than 50, budgets of maybe $2-5 million on average, and 2 years (at most), to teams of upwards of 500 (plus contractors), budgets of $100+ million (even up to $150 million or more in some cases), and between 5-6 years on average, with some games taking up to 7 years.

IMO that's unsustainable; I know people were freaking out about the fact Hellblade II is using AI for some of the VA work but if y'all want high-budget AA & AAA games to be made in shorter spans of time, AI-powered programming models and tools for VA, graphics (textures, animations, meshes etc.), programming/code etc. are going to have to become more of an industry standard. And yes that will create some sore spots with possibly certain lower-end positions getting phased out, but the industry can self-regulate that with a code for developers & publishers to go by stipulating you always need some percentage of actual people in all the different position levels.

Plus, this could create new positions for people who can specifically curate and fine-tune the results, add a human touch, etc. and you're always going to need certain positions that simply cannot be replaced by AI programming models, since those are the high-level creative, director, producer etc. positions and whatnot. AI programming would just be used to handle a lot of the labor-intensive grunt work at bulk scale (and technically speaking, things like procedural generation already implement a lot of this conceptually, and hasn't caused too much industry displacement to my knowledge).
 

sankt-Antonio

:^)--?-<
Oh I do think that dev time will be reduced at a certain point.

AI-generated content will make this very fast.
Extrapolate what AI networks can already do for concept art in still images. Only a matter of time until an AI can create 3D models including rigs and fitting animation. Whole worlds even.

It’s going to be grandiose since budgets will get lowered, risk will sink, new concepts and ideas will hit the market at a fast rate.
 
Oh I do think that dev time will be reduced at a certain point.

AI-generated content will make this very fast.
Extrapolate what AI networks can already do for concept art in still images. Only a matter of time until an AI can create 3D models including rigs and fitting animation. Whole worlds even.

It’s going to be grandiose since budgets will get lowered, risk will sink, new concepts and ideas will hit the market at a fast rate.

Very true. But that content's still going to need actual people to manage it and curate it, and while theoretically entire non-AI labor for various positions could probably be replaced with this kind of "AI-automation", I think for both practicality and ethical reasons some stipulations should be in place to ensure at least some portion of all positions are still staffed by, well, actual human beings 😂.

But yeah, the sheer benefit it can bring in terms of reining in budgets and time, while still getting very impressive, AAA-level results production-wise, is going to be such a boon not just for AAA devs but even cascading down to AA and indie devs. I hope it starts manifesting as a legit reality at some point before 10th-gen consoles hit.
 

Sosokrates

Report me if I continue to console war
I think the biggest advantage consoles will have is the ability to make a custom motherboard and add custom chips and features, like the PS5 and X Series have with their decompression chips.

It will be interesting to see if they have any new innovations next gen, or if they will just advance the SSD bandwidth and decompression/I/O power.

No one was really expecting NVMe SSDs, decompression chips and such an emphasis on the SSD and I/O stack.
 

PUNKem733

Member
Playstation 4 came out in Late 2013
Playstation 4 Pro came out late 2016
3 year difference


Xbox One came out Late 2013
Xbox One X came out Late 2017
4 year difference

Only 3-4 year difference. I think this time it will be 4-5 years for several reasons:

-Shortages for PlayStation 5 and Xbox Series S|X
-Increase in cost for actual hardware/silicon production for any node shrinkage beyond 7nm
-Hardly any games utilizing Zen2/RDNA2/SSD, and still being cross gen
-Economic hardship due to inflation, and COVID 19 impact

I personally feel the rumored future hardware revisions of the PS5 and X Series are a great idea to reduce production cost, reduce the purchase price of the console, increase revenue and recover from losses. The extra time will also give developers room to refine their development tools and utilize newer game engines like Unreal Engine 5 that tap into the power of Zen 2/RDNA 2/SSD.

It would be even better for the newer consoles to have Zen 6/RDNA 5, but that is too optimistic. Zen 5/RDNA 4 is a reasonable middle ground between that and the more pessimistic Zen 3/RDNA 3 expectations.
I just think it's gonna be a cut-down Zen 6/RDNA 5, as these will be several years old by the time next gen is ready, especially when the current gen got RDNA 2, which was new.
 