
PS5 Pro devkits arrive at third-party studios, Sony expects Pro specs to leak

Stooky

Member
About 30k is enough, and we could still work with less. More important are good blend shapes (geometry plus normal maps).
No lol. FX characters in film use more; they have more detail modeled in, and it has to hold up on a big screen when the camera gets close to it, which is why it looks better than games. There are details that are modeled in that depth maps can't capture; you only get them from having the extra polys. In games we use workarounds, driving faces with joints derived from expression head scans instead of blend shapes, because of memory limitations. Most of the poly count on a character in a game goes to the face, and you can tell which devs don't do that. So no, 30k is a minimum, not enough if you're trying to hit film quality.
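To put rough numbers on that memory argument, here's a back-of-the-envelope sketch. Every figure in it (vertex counts, shape counts, byte sizes) is an illustrative assumption, not data from any shipped game:

Code:
def blendshape_memory_mb(vertex_count, shape_count, bytes_per_delta=12):
    # Each blend shape stores a 3-float position delta per vertex (12 bytes).
    return vertex_count * shape_count * bytes_per_delta / (1024 ** 2)

def joint_rig_memory_mb(vertex_count, joint_count,
                        influences_per_vertex=4, bytes_per_influence=8,
                        bytes_per_joint=64):
    # Skinning stores a few joint index/weight pairs per vertex plus one
    # transform per joint, which is why joint-driven faces are far cheaper.
    weights = vertex_count * influences_per_vertex * bytes_per_influence
    joints = joint_count * bytes_per_joint
    return (weights + joints) / (1024 ** 2)

# A ~30k-vertex head with a 200-shape FACS-style library vs ~150 facial joints.
print(f"blend shape library: {blendshape_memory_mb(30_000, 200):.1f} MB")  # ~68.7 MB
print(f"joint-driven rig:    {joint_rig_memory_mb(30_000, 150):.2f} MB")   # ~0.92 MB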
 

leizzra

Member
No lol. FX characters in film use more; they have more detail modeled in, and it has to hold up on a big screen when the camera gets close to it, which is why it looks better than games.
First of all, I was referring to games, not movies. Secondly, I can't agree with that. Many movies now have bad CGI that you can clearly see despite the high polygon counts. Moreover, it usually looks worse in motion, not when you can sit and study all those details.

I would argue that lighting, color grading/post-processing, and shaders are more important than how many polygons you have. If you agree that CGI in movies has improved over the years, it wasn't because of polygon counts (those haven't really changed, only how the models are made, thanks to sculpting tools like ZBrush) but because of better shaders (physically based materials, proper subsurface scattering that simulates the layers of skin), cloth and muscle simulation, and lighting.

You can have a lower poly count and the model can still look realistic because of the material setup, lighting with good shadows, and proper color grading. You can have a high poly count model with the rest being bad/basic and it won't look realistic. I also think a good example is cutscenes vs gameplay in Naughty Dog games (The Last of Us Part II or Uncharted 4), which shows how big a difference is made by changing the scene setup, not the models (sure, there can be some tweaks in shaders, but it's not like it was in the PS3 era).

In games we use workarounds, driving faces with joints derived from expression head scans instead of blend shapes, because of memory limitations

I'm not a technical animator, but I worked with FACS-based blend shapes on our last game. The rig uses geometry from blend shapes as a correction for a specific expression, then it is supported by texture blending (normal maps and sometimes color) for the details. I believe this has been standard for years now (I remember being at Sony's GDC talk in 2014 about it for TLoU1 and The Order), though the implementation may vary a bit (Santa Monica made their own stuff, but the basics are the same).
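Roughly, the kind of setup being described looks something like the sketch below: skinned vertices get corrective blend-shape deltas for whichever expressions are active, and the same activation weights drive the wrinkle/normal-map blend in the shader. Everything here (names, array shapes, weights) is an illustrative assumption, not any studio's actual rig:

Code:
import numpy as np

def apply_corrective_shapes(skinned_verts, shape_deltas, activations):
    # skinned_verts: (V, 3) output of joint skinning.
    # shape_deltas: dict expression_name -> (V, 3) per-vertex correction.
    # activations: dict expression_name -> weight in [0, 1].
    corrected = skinned_verts.copy()
    for name, weight in activations.items():
        if weight > 0.0 and name in shape_deltas:
            corrected += weight * shape_deltas[name]
    return corrected

def wrinkle_blend_weights(activations):
    # The same activations also drive blending of per-expression normal
    # (and sometimes color) maps for the fine detail.
    return {name: float(np.clip(w, 0.0, 1.0)) for name, w in activations.items()}

# Dummy usage
rng = np.random.default_rng(0)
verts = np.zeros((30_000, 3), dtype=np.float32)
deltas = {"brow_raise": rng.normal(0.0, 0.001, (30_000, 3)).astype(np.float32)}
acts = {"brow_raise": 0.6}
print(apply_corrective_shapes(verts, deltas, acts).shape, wrinkle_blend_weights(acts))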

Most of the poly count on a character in a game goes to the face

I wouldn't say most, not these days, but a lot for sure.

When you look at some character models, the head is in the 30-40k triangle range (God of War, Uncharted, The Order). Even MetaHuman is there with its mesh (and that one also includes the shoulders). I think that's optimal for games. Can you have more for better results? Probably. Will it be visible? Maybe, but I wouldn't bet my money on that, not with good use of the systems we have.
 
For the past decade, detailed specs of upcoming consoles from the big 3 have been leaked in one way or another at least one year in advance of official release. PS4, Xbox One, Switch, PS4 Pro, Xbox Scorpio, PS5, Xbox Series X/S, etc. The fact that no such leak has happened for this supposed PS5 Pro when it is rumored to release this year either means Sony has somehow figured out how to keep these leaks from happening (very doubtful), PS5 Pro is still in concept/planning phases and there are no specs locked down yet (possible if the console is planned for 2025 or later), or PS5 Pro never existed past boardroom discussions at Sony (doubtful).
I think it’s possible the pro gets delayed to 2025
 
PS5 specs weren't leaked by devs either. There was a GitHub leak with outdated info, but no devs reached out to DF or other outlets. That 10.2 TFLOPs reveal was kept a secret until the very end.

Hell, it took DF 2-3 years just to find out the RAM and CPU allocations for the OS.

A leak isn't happening.
I wish we got a cpu leak if nothing else
 
Yep. My favorite image to drive this point home is this one:

[image: the same statue model rendered at increasing polygon counts]


We're pushing more pixels, polygons, effects, and geometry than ever before by a wide margin, but the more we have, the less we notice.

It's a psychological principle (I don't know the name in English) that is called the minimum threshold in my native language. For instance, you'll immediately notice the difference between 1 and 2 pounds. You won't notice the difference between 30 and 33 pounds.
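What's being described sounds like Weber's law (the "just-noticeable difference"): a change only registers when it's a big enough fraction of what you started with. A minimal sketch of that rule, with the threshold fraction picked purely for illustration:

Code:
def is_noticeable(base, new, weber_fraction=0.2):
    # Weber's law: a change is noticed only if the relative difference
    # exceeds some constant fraction of the base stimulus. The 0.2 here
    # is an illustrative value, not a measured constant for weight.
    return abs(new - base) / base >= weber_fraction

print(is_noticeable(1, 2))    # 1 lb -> 2 lb: 100% change -> True
print(is_noticeable(30, 33))  # 30 lb -> 33 lb: 10% change -> False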
This picture is misleading with the statues. If the model stays far away or small, then yes, it doesn't make a big difference. But if it is big, or right in front of you, or you can zoom in, it's a big difference. The best example is Horizon, the game from Guerrilla Games. While the first game had huge polygon numbers for the main characters, the second game had exponentially more, and you can see the difference, especially in the ears. Characters with fewer polygons have corners or spikes in regions where the polygon budget wasn't considered important enough.
 

Gaiff

SBI’s Resident Gaslighter
Nah, if anything they need it this year more than ever, in order to inject new life into the PS5 generation and prevent a dramatic decline in sales vs the previous fiscal year by getting double dippers or hardcore newcomers to the ecosystem to purchase the Pro.
The Pro wouldn’t change that. People don’t care about higher frame rates. They want games and nothing will make them forget this.

Xbox tried for years but it never worked.
 

Perrott

Member
The Pro wouldn’t change that. People don’t care about higher frame rates. They want games and nothing will make them forget this.

Xbox tried for years but it never worked.
I'm not talking about "people" here, but about all of us here in this thread. An enthusiast like HeisenbergFX4 doesn't need a new The Last of Us as an excuse for buying the latest and best hardware. Hardcore folks like him will be there no matter what, especially during the lead-up to the release of Grand Theft Auto VI in 2025.

We're talking about a million or two users, mostly double dippers, who are going to buy into this thing over the latter half of the next fiscal year. For Sony, that's the difference between a slight 1-2M decline in console sales YoY and a larger drop in the 3-4M range.
 

HeisenbergFX4

Gold Member
I'm not talking about "people" here, but about all of us here in this thread. An enthusiast like HeisenbergFX4 doesn't need a new The Last of Us as an excuse for buying the latest and best hardware. Hardcore folks like him will be there no matter what, especially during the lead-up to the release of Grand Theft Auto VI in 2025.

We're talking about a million or two users, mostly double dippers, who are going to buy into this thing over the latter half of the next fiscal year. For Sony, that's the difference between a slight 1-2M decline in console sales YoY and a larger drop in the 3-4M range.
Steve Bannon Bingo GIF
 

Gaiff

SBI’s Resident Gaslighter
Funny, people really cared a lot when the PS4 was running games at 1080p and the XBO was at 720p.
Most of the time it was 1080p on PS4 and 900p on Xbox One. Whatever the case, people didn't care either, only nerds did. If X1 was 720p, then PS4 was 900p. Very few examples of the Xbox being 720p and the PS4 being 1080p.
 

ManaByte

Gold Member
Very few examples of the Xbox being 720p and the PS4 being 1080p.

Call of Duty Ghosts, a fucking LAUNCH TITLE, and the biggest game on both consoles at the time was 1080p/720p.

Battlefield 4 as well. Lots of major games were in the first year.

XB1 games didn't start hitting 900p until long after launch.
 

Gaiff

SBI’s Resident Gaslighter
Call of Duty Ghosts, a fucking LAUNCH TITLE, and the biggest game on both consoles at the time was 1080p/720p.

Battlefield 4 as well. Lots of major games were in the first year.

XB1 games didn't start hitting 900p until long after launch.
BF4 was 900p on PS4 though. Don't remember COD but the most common resolution spread was 900p vs 1080p or 720p vs 1080p. This makes sense with their specs too.
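For what it's worth, that "makes sense with their specs" point roughly checks out if you take the commonly cited launch GPU figures (~1.84 TFLOPS for PS4, ~1.31 TFLOPS for Xbox One) and assume pixel count scales linearly with GPU throughput; the linear scaling is the hand-wavy assumption here:

Code:
# Rough sanity check of the spec-vs-resolution argument. The TFLOPS figures
# are the commonly cited launch GPU numbers; assuming pixels scale linearly
# with GPU throughput is a simplification.
ps4_tflops, xb1_tflops = 1.84, 1.31

def pixel_ratio(res_a, res_b):
    return (res_a[0] * res_a[1]) / (res_b[0] * res_b[1])

print(f"GPU throughput ratio (XB1/PS4): {xb1_tflops / ps4_tflops:.2f}")                  # ~0.71
print(f"900p vs 1080p pixel ratio:      {pixel_ratio((1600, 900), (1920, 1080)):.2f}")   # ~0.69
print(f"720p vs 1080p pixel ratio:      {pixel_ratio((1280, 720), (1920, 1080)):.2f}")   # ~0.44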
 

ManaByte

Gold Member
BF4 was 900p on PS4 though. Don't remember COD but the most common resolution spread was 900p vs 1080p or 720p vs 1080p. This makes sense with their specs too.

Again for the first year it was mostly 720p on XB1. A rare game like Black Flag hit 900p, but it wasn't until the first E3 after launch that most stuff was hitting 900p.
 
I remember a few PS4 games, like Black Flag, got a post-launch patch that upped the resolution, amongst other things.

Can't recall how widespread that was back in those days. (I waited on the sidelines till the white PS4 was released.)
 
I'm really interested to see just how much of a game-changer AI will be in helping this console (and future ones) punch above their weight.
 
BF4 was 900p on PS4 though. Don't remember COD but the most common resolution spread was 900p vs 1080p or 720p vs 1080p. This makes sense with their specs too.
And BF4 was 720p on XB1. MGS was 720p on XB1 (due to ESRAM woes) but it was eventually patched to 900p (vs a constant 1080p on PS4).
 

Mokus

Member
Most of the time it was 1080p on PS4 and 900p on Xbox One. Whatever the case, people didn't care either, only nerds did. If X1 was 720p, then PS4 was 900p. Very few examples of the Xbox being 720p and the PS4 being 1080p.
Because of Kinect, about 10% of the Xbox One's resources were always reserved. Only after full Kinect support was dropped did games start hitting 900p (or even 1080p) more often.
 

onQ123

Member
They won’t want that IF Nextbox is pushing for an earlier release.
That only matters if Xbox can show itself to be more than a pre-built PC next gen.


I feel like they have the perfect chance to create their own identity with an early start, and that's what they will have to do to even gain traction with the next Xbox. If they try to sell the next Xbox as just a more powerful Xbox, it will be battling with PC & Steam accounts.
 

saintjules

Member
I'm not sure if the Pro is real or not.
On one hand nothing has leaked, but on the other hand the Switch 2 dev kits are 100% real and nothing has leaked about those either.

So far Tom Henderson has been spot on with his info, and most of the news/rumors out there are based off his initial rumor. It would be a huge miss for him if this doesn't exist.

I know my guy over at MS says that Xbox is anticipating the Pro for this year.
 

PaintTinJr

Member
True, Moore's Law plays a big role, but techniques such as AI upscaling and this patent should help overcome it.
At least for next-gen.


[image: figure from the Sony multi-GPU patent]


Which actually reminds me of this AMD patent.
[image: figure from the AMD patent]

I saw this great info you posted the other day and was getting Cell BE vibes, particularly ring-bus EIB (Element Interconnect Bus) vibes, from the Cerny multi-GPU patent, and wanted to comment.

I thought the snippet of wording in the patent was very interesting with regard to the vague "screen region" description.

They could have used something more specific, such as "viewport region" if you split the display into just 4 viewport sections like a 4-screen video wall, or "cascade region" if you split the frustum into increasing slices like cascaded shadow mapping does, say slicing the stub-nosed view cone up like a loaf of bread.

This got me thinking that the reason for the "screen region" description might be related to the main reason for needing more processing: ray tracing, where the activity density is vague and fits wherever it would be appropriate to place light probes in a game scene. Essentially, you would be denoising the floating-point calculations in the RT rendering problem by placing a GPU at a probe location, meaning the calculations are most accurate at the sources of lighting interaction, with the ability to absorb or eliminate redundant work at a probe without impacting or bottlenecking the other GPUs or adding latency to the frame overall. That would probably allow more aggressive elimination testing per probe, and then presumably tracing further towards the scene's light sources, improving visual quality and resulting in finer-grained use of BVHs at each GPU, as subsets of the scene's total BVH, further reducing data transfers.

The reason I was getting EIB/Cell BE vibes about the GPU interconnects is that I believe it's the only way you could reliably make a solution that scales: either as a stacked set of chips with ring buses, or as a lidded multi-chip module mounted on a board, like the Wii U chip, but with the chips connected by ring buses.

Obviously, the problem with ring buses is that they trade away the higher bandwidth, peak throughput and lower latency of a less complex crossbar bus in exchange for scalability and deterministic throughput. So that would still lean towards a crossbar, but given the locality of the memory modules to each GPU, each appearing owned by its GPU as split memory, a ring bus could easily abstract that separation to make them appear unified. And by using multiple rings, probably 4, with at least one starting at each GPU and each acting as a GPU-to-GPU timeslice offset behind the others, the latency of signalling another GPU once all GPUs were kicked off would be about 1/4 that of a single-bus solution.

I said a while back that PlayStation's standout specialism is producing cheap, reliable, cutting-edge solutions with higher-performance interconnects, and this, whether stacked or lidded, would need exactly that. Moore's Law is certainly making monolithic chip gains much harder, but I suspect northbridge-class chip-to-chip connections like the EIB will now either need to be a stacked chip, which still sounds like a cooling issue for 4 GPUs, or the chip-to-chip interconnects will have to operate in the southbridge domain while providing a path to close the gap and give northbridge-level performance.

My speculation might be all wishful thinking, and Sony might not even be looking to use that patent for the PS6, but from my thinking here, it feels to me like it could tick all the boxes they need for a meaningful performance leap without TDP, cooling, Moore's Law limitations, or cost issues being prohibitive for a £450 PS6.
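Purely to illustrate that "screen region" reading, and not anything taken from the patent itself, here's a minimal sketch of carving a frame into per-GPU regions, with each GPU tracing only against its own subset of the scene BVH. The 4-GPU count, the strip-shaped regions, and all names are assumptions:

Code:
from dataclasses import dataclass

@dataclass
class ScreenRegion:
    gpu_id: int
    x0: int
    y0: int
    x1: int
    y1: int

def split_into_regions(width, height, gpu_count=4):
    # Simplest possible split: vertical strips, one per GPU. A probe-centred
    # split, as speculated above, would produce irregular regions instead.
    strip = width // gpu_count
    regions = []
    for i in range(gpu_count):
        x1 = width if i == gpu_count - 1 else (i + 1) * strip
        regions.append(ScreenRegion(i, i * strip, 0, x1, height))
    return regions

def trace_region(region, bvh_subset):
    # Stand-in for the per-GPU ray tracing work over its own BVH subset.
    pixels = (region.x1 - region.x0) * (region.y1 - region.y0)
    return {"gpu": region.gpu_id, "pixels": pixels, "bvh_nodes": len(bvh_subset)}

regions = split_into_regions(3840, 2160)
print([trace_region(r, bvh_subset=range(1_000)) for r in regions])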
 
I saw this great info you posted the other day and was getting Cell BE vibes, particularly ring-bus EIB (Element Interconnect Bus) vibes, from the Cerny multi-GPU patent, and wanted to comment.

I thought the snippet of wording in the patent was very interesting with regard to the vague "screen region" description.

They could have used something more specific, such as "viewport region" if you split the display into just 4 viewport sections like a 4-screen video wall, or "cascade region" if you split the frustum into increasing slices like cascaded shadow mapping does, say slicing the stub-nosed view cone up like a loaf of bread.

This got me thinking that the reason for the "screen region" description might be related to the main reason for needing more processing: ray tracing, where the activity density is vague and fits wherever it would be appropriate to place light probes in a game scene. Essentially, you would be denoising the floating-point calculations in the RT rendering problem by placing a GPU at a probe location, meaning the calculations are most accurate at the sources of lighting interaction, with the ability to absorb or eliminate redundant work at a probe without impacting or bottlenecking the other GPUs or adding latency to the frame overall. That would probably allow more aggressive elimination testing per probe, and then presumably tracing further towards the scene's light sources, improving visual quality and resulting in finer-grained use of BVHs at each GPU, as subsets of the scene's total BVH, further reducing data transfers.

The reason I was getting EIB/Cell BE vibes about the GPU interconnects is that I believe it's the only way you could reliably make a solution that scales: either as a stacked set of chips with ring buses, or as a lidded multi-chip module mounted on a board, like the Wii U chip, but with the chips connected by ring buses.

Obviously, the problem with ring buses is that they trade away the higher bandwidth, peak throughput and lower latency of a less complex crossbar bus in exchange for scalability and deterministic throughput. So that would still lean towards a crossbar, but given the locality of the memory modules to each GPU, each appearing owned by its GPU as split memory, a ring bus could easily abstract that separation to make them appear unified. And by using multiple rings, probably 4, with at least one starting at each GPU and each acting as a GPU-to-GPU timeslice offset behind the others, the latency of signalling another GPU once all GPUs were kicked off would be about 1/4 that of a single-bus solution.

I said a while back that PlayStation's standout specialism is producing cheap, reliable, cutting-edge solutions with higher-performance interconnects, and this, whether stacked or lidded, would need exactly that. Moore's Law is certainly making monolithic chip gains much harder, but I suspect northbridge-class chip-to-chip connections like the EIB will now either need to be a stacked chip, which still sounds like a cooling issue for 4 GPUs, or the chip-to-chip interconnects will have to operate in the southbridge domain while providing a path to close the gap and give northbridge-level performance.

My speculation might be all wishful thinking, and Sony might not even be looking to use that patent for the PS6, but from my thinking here, it feels to me like it could tick all the boxes they need for a meaningful performance leap without TDP, cooling, Moore's Law limitations, or cost issues being prohibitive for a £450 PS6.

If they end up going with this, we will likely see leaks earlier than we did for the PS5. This seems like it would require major engine rewrites, so the (pre-)alpha kits will likely have to be shipped much earlier than the PS5's were.
 

SF Kosmo

Al Jazeera Special Reporter
Yep. My favorite image to drive this point home is this one:

[image: the same statue model rendered at increasing polygon counts]


We're pushing more pixels, polygons, effects, and geometry than ever before by a wide margin, but the more we have, the less we notice.

It's a psychological principle (I don't know the name in English) that is called the minimum threshold in my native language. For instance, you'll immediately notice the difference between 1 and 2 pounds. You won't notice the difference between 30 and 33 pounds.
Yeah, we're reaching this point when it comes to asset fidelity; in fact I would argue that development costs are the bigger limitation here than the technology.

We're not quite there with our ability to present those assets, though. Pop-in and LOD issues are still more or less ubiquitous in open-world games. Technologies like Nanite are trying to change that, but they're not quite performant enough on mainstream consoles to see wide adoption yet.

And then there are major paradigm shifts like ray-tracing and AI that could totally shift the way games look, but require a large leap in hardware capability to really get there.

I'm not worried about us hitting gaming's last gen just yet, but I do think we'll see longer gaps between consoles.
 

sankt-Antonio

:^)--?-<
Yeah, we're reaching this point when it comes to asset fidelity; in fact I would argue that development costs are the bigger limitation here than the technology.

We're not quite there with our ability to present those assets, though. Pop-in and LOD issues are still more or less ubiquitous in open-world games. Technologies like Nanite are trying to change that, but they're not quite performant enough on mainstream consoles to see wide adoption yet.

And then there are major paradigm shifts like ray-tracing and AI that could totally shift the way games look, but require a large leap in hardware capability to really get there.

I'm not worried about us hitting gaming's last gen just yet, but I do think we'll see longer gaps between consoles.
Yeah. Development costs will get lower as the baseline hardware gets more powerful, since you can scrap lengthy and costly optimization work like LOD models or budgeting triangles here and there.

Once 100% path tracing and AI-autocompleted scans are available on consoles, games like GTA are going to be made within a year. Less if AI-generated video gets used as a game's foundation.
 
Yeah. Development costs will get lower as the baseline hardware gets more powerful, since you can scrap lengthy and costly optimization work like LOD models or budgeting triangles here and there.

Once 100% path tracing and AI-autocompleted scans are available on consoles, games like GTA are going to be made within a year. Less if AI-generated video gets used as a game's foundation.
So ps7 basically
 

leizzra

Member
Yeah. Development costs will get lower as the baseline hardware gets more powerful, since you can scrap lengthy and costly optimization work like LOD models or budgeting triangles here and there.
With more powerful hardware the cost of development goes up, not down. LODs aren't costing much time; usually they are generated automatically (LOD1 can be made by hand, but it's a rather quick task). Rendering large quantities of triangles has been less of a problem since last gen. It's what you do with them afterwards that is the problem, like vertex shaders, because they'll operate on all those triangles.
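As a tiny illustration of the "LODs are mostly automatic" point, most pipelines just decimate each level to some fraction of the previous one; the ratio and level count below are arbitrary assumptions, not any engine's defaults:

Code:
def lod_triangle_budgets(base_triangles, levels=4, ratio=0.5):
    # Each LOD is auto-generated by decimating the previous level to a fixed
    # fraction of its triangle count; artists rarely touch anything past LOD1.
    budgets = [base_triangles]
    for _ in range(levels - 1):
        budgets.append(max(1, int(budgets[-1] * ratio)))
    return budgets

# e.g. a 40k-triangle hero head from the earlier discussion
print(lod_triangle_budgets(40_000))  # [40000, 20000, 10000, 5000]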

games like GTA are going to be made within a year.

[reaction GIF]


No, because creating games is not limited only by the time needed for asset creation. What about environment art? Lighting? Animation? Motion capture? Level design? Gameplay design? We can go through all the departments. Their work isn't done in a year.
 

sankt-Antonio

:^)--?-<
With more powerful hardware the cost of development goes up, not down. LODs aren't costing much time; usually they are generated automatically (LOD1 can be made by hand, but it's a rather quick task). Rendering large quantities of triangles has been less of a problem since last gen. It's what you do with them afterwards that is the problem, like vertex shaders, because they'll operate on all those triangles.



[reaction GIF]


No, because creating games is not limited only by the time needed for asset creation. What about environment art? Lighting? Animation? Motion capture? Level design? Gameplay design? We can go through all the departments. Their work isn't done in a year.
You drive around LA, scan the whole city, have AI build 3D assets out of the scan; it places lights where they are IRL, and you place the sun on the map. Done. Gameplay and mocap done within a year.
Why do you think NVIDIA is a trillion-dollar company now, about to overtake Apple? Everybody is banking on their AI implementations being able to reduce costs in a lot of areas exponentially.
 

leizzra

Member
You drive around LA, scan the whole city, have AI build 3D assets out of the scan; it places lights where they are IRL, and you place the sun on the map. Done. Gameplay and mocap done within a year.
Why do you think NVIDIA is a trillion-dollar company now, about to overtake Apple? Everybody is banking on their AI implementations being able to reduce costs in a lot of areas exponentially.
You clearly don't know anything about making games, not to mention games like GTA. It's not that simple, and AI won't cut the process down to one year. The best outcome would be if it could shorten the development length by one year.
 

sankt-Antonio

:^)--?-<
You clearly don't know anything about making games, not to mention games like GTA. It's not that simple, and AI won't cut the process down to one year. The best outcome would be if it could shorten the development length by one year.
This can be done in minutes: one scene that would take Hollywood months to plan, cast, and shoot, plus a lot of post-FX work. A movie has around 200 scenes.

Once this is done for game assets, texturing, animation rigging, etc., it will take minutes to create a single level. Once hardware is powerful enough that everything is physics-based and no time-consuming hand trickery is needed, it will take a year to make a GTA-like game.
 

leizzra

Member
In the future, maybe, but games and movies are still different things. Sora is about something other than what you are referring to. If games were meant to be streamed movies with interaction, it would fit better (then again, it's not generating this in real time). Even then there are problems with game logic and more.

As for the implementation you are referring to: first of all you need software that will do that, then you need hardware for it. Photogrammetry of a single asset is time-consuming in a physical way, and AI won't go out and do that for you. Not to mention that for best results you are bound by weather conditions (an overcast sky works best so the software can remove the lighting from the textures). There are many steps in the process that will cause problems. In the end they can probably be overcome, but it won't be that easy. And it'll be that way with every part of making games.

And there is still game design, mission structure, and the technical side of design (the tricks you invent along the way for a better gameplay feel, like the way aiming speed changes in Gears of War). And QA? It would probably need to be tested even more.

Not to mention things like the ethical use of AI for most of the work and the quality of the results it produces. I think this type of AI use is in the far future at best. Maybe I'm wrong, but I'm looking at it from the point of view of someone who makes graphics for games.
 
Probably. Sony and MS just have to wait for it to happen; then, and only then, is a Netflix-like gaming service a viable business strategy. Until then they need to stay in the console business to keep people attached to their game libraries.
What I expect with the PS6, which they have laid a bit of the groundwork for this gen, is a somewhat modular console. Because there is already a detachable disc drive, maybe there will be a version that ups the RAM and improves the cooling further so the CPU and GPU can run at higher clocks. The PS7 may be more than halfway modular.
 

PaintTinJr

Member
If they end up going with this, we will likely see leaks earlier than we did for the PS5. This seems like it would require major engine rewrites, so the (pre-)alpha kits will likely have to be shipped much earlier than the PS5's were.
Possibly, but after the ICE team had to write SPURS for the Cell BE to lower the technical barrier on PS3, this time they would wrap all that complexity in an API, IMO, so it wouldn't negatively impact time to first triangle and wouldn't recreate the porting difficulties that later PS2/PS3 software had.
 
Yeah, just like Tesla cars were going to be self-driving every single year since 2016.
Tesla cars had insufficient computation onboard. In any case, they still drive quite decently. But for self-driving to be legal they need 99.9999% superhuman driving skill, and they'll achieve that soon.

What you have to familiarize yourself with is the knee of the curve.
 