
PS3 Cell made Sony's first party what they are today.

oldergamer

Member
I disagree completely. Looking at the hardware now, it was complete trash even back then. And scaling back games for the 360 is hilarious considering most PS3 games suffered from horrible framerates, even late in the generation when it was the lead platform.
PS3 had the more powerful CPU but the weaker GPU compared to the Xbox 360. It was also much more difficult to get the performance out of the CPU on PS3, so in general cross-platform games looked or performed worse on PS3.
 

Romulus

Member
"Just a reminder of CELL powa. Best devs on the planet struggled with that thing."

[Images: The Thing (1982) theatrical poster and a still from The Thing]

Kurt Russell programming "The Last of Us"

I think it was a lot of the problem for sure, but the GPU was anemic and so was the RAM.
 
It wasn't just Sony studios. By the end of the gen, almost everyone had figured out the Cell processor. Several studios were using MLAA on the PS3. Rockstar's GTA5 port ran pretty much on par with the 360 version, despite GTA4 having run way better on 360 than on PS3.

I do agree that an upgraded Cell might have been better than the Jaguar CPUs, but the cost would've been higher, and who knows what would have happened to a $500 PS4.

The PS5 I/O is a fantastic and unique design. It's a shame no one is using it, because this thing has the potential to be a game changer. But unlike the Cell, no one seems to really want to extract the most out of it. Not even Sony's own first-party studios.
Give them a few years; I bet mid-gen exclusives start to.
 
If you updated the CELL and kept its core design, you'd still be stuck with hard-to-develop-for and expensive hardware. And if you got rid of in-order execution and shortened the pipeline stages to make it more efficient, you'd have an oven in your hands. Switching to x86 would also have been impossible, and doing an x86 CPU with the Cell ideology would basically be the equivalent of doing a Pentium 4 with compute units.

I disagree completely. Again, Cell was built within the constraints of the time: 90nm, 234 million transistors and an area of about 221 square millimeters. Now we have 20,000 million transistors to play with. You don't have to fit an in-order core, you don't need just one PPE-type element. You don't need the EIB, you can swap it out for a crossbar. You can beef up the SPEs.

Then again, a lot has changed, but the point I'm trying to make is that heterogeneous computing focused on computational density with tightly integrated, high-bandwidth interconnects is a win in a closed box. The PS5 and Xbox aren't that impressive compared to what's possible if you give a group of engineers like STI five years and millions of dollars with a transistor budget like that to make a game box. But those days are unfortunately over. Now they are basically off-the-shelf mid-range PCs, as Elon Musk just alluded to. It's all commodity. Fucking Apple is more exciting.

Again, as I said, I posted a link earlier in this thread to a Peter Hofstee presentation where he shows Cell attaining good speed on OpenCL code versus hand-tuned Cell code; things today would be different.

And the in-order guTS derived PPE was an IBM thing. You could have done something else in Cell.
 
Yea, the ML path is one major potential diversion - kind of hard to even imagine where that can end up, but there are some really freaky possibilities.
Personally I wish more focus were put on leveraging ML for interactivity/simulation (since these are the areas where we've made so little progress in the last 30 years), but ah well.
The other, less sci-fi fork on the horizon is if things like Nanite take off - it could spell the end of fixed-function rasterization blocks.

The ML graphics stuff is amazing to me. I'm in awe and really hope we see some major advancement in this area. And great point about simulation!

Funny thing about Talisman - I did play around with the concepts it proposed in the PS2 environment. If the GS had had a proper bi-directional memory interface, and double the VRAM, PS2 could have done a really compelling pass at that type of rendering; the rest of the hw was well suited for it. The OG Xbox setup wasn't too bad either - but that was hampered by the RAM interface and the lack of real low-level hw access.

Very neat! Thanks for sharing, Faf.
 
I think it was a lot of the problem for sure, but the GPU was anemic and so was the RAM.
Deferred rendering games benefited a lot, and plenty of games worked really well and got a lot of recognition from the press back then. If they were too ambitious to run better, that's not a problem of the machine, as it performed really well for the time. It appears you hate the machine for the sake of it.
 

Fafalada

Fafracer forever
And the in-order guTS derived PPE was an IBM thing. You could have done something else in Cell.
Indeed - IIRC some patents were still explicitly referencing MIPS general-purpose cores, so the PPC decision must not have been there right from the start. The RS patent also had modified SPEs + MIPS cores explicitly outlined in it. And I might be misremembering this 20 years later, but didn't Toshiba even build some custom Cells that used a different core instead of PPC?

The ML graphics stuff is amazing to me. I'm in awe and really hope we see some major advancement in this area. And great point about simulation!
Yea - there's a paper that's a few years old now that really inspires imagination


Sure, there are certain limitations to the approach - but the irony is that 'subspace simulation' is already a decades-old established practice for any complex sim (fluids, cloth etc. - even for non-realtime), where you deal with cross-subspace interactions as special cases and solve only within each subspace most of the time to stay computationally efficient.
And the promise of increasing the interactivity of worlds by 1-2 orders of magnitude, even if it's not perfectly accurate - it's baffling everyone hasn't been jumping on this already. Especially since the state of the art in AAA gaming for an entire gen has been the same almost entirely static worlds - and the most hyped 'next-gen' thing in UE5 is literally just more of the same.
 

Romulus

Member
Deferred rendering games benefited a lot, and plenty of games worked really well and got a lot of recognition from the press back then. If they were too ambitious to run better, that's not a problem of the machine, as it performed really well for the time. It appears you hate the machine for the sake of it.

I showed AAA exclusives running at god-awful framerates, no need to make it about hating for the sake of it. It is a problem of the machine when so many top-tier devs have problems across the board. And I did hate its design, nothing more.
 
I showed AAA exclusives running at god-awful framerates, no need to make it about hating for the sake of it. It is a problem of the machine when so many top-tier devs have problems across the board. And I did hate its design, nothing more.
Every system has top-tier games with framerate problems - you can check all the way back to the NES days. Even the game in your avatar, "Battle Engine Aquila", has similar framerate dips on the consoles it released on (PS2 and Xbox).
 

Romulus

Member
Every system has top-tier games with framerate problems - you can check all the way back to the NES days. Even the game in your avatar, "Battle Engine Aquila", has similar framerate dips on the consoles it released on (PS2 and Xbox).

Battle Engine Aquila had a small and basically unknown dev team, is from the 6th generation, and is not exclusive, so that's really far outside my point. Do you have any framerate counters for the Xbox version?
I"m not sure if you actually combed through my examples though. The PS3 generation brought the super studios era into play that could focus on one console. Sony had a slew of pedigree developers and almost in every category, and they have issues with framerate that were just god awful in most cases. During that time, the 360 had fewer exclusive studios, and even less "top tier" devs. You might be able to argue Bungie and Turn 10 were good, not great. Those two studios produced games with great framerates almost across the board, definitely superior to the offerings I mentioned. God of War 3 was considered the premium experience on ps3 at that time over the course of the generation really, 40-55fps all over the place with a fixed camera angle, which no other exclusive had and is much less taxing than the ability to rotate the camera at free will. It just seems to me that even though their ambition in some cases tried to relieve the ps3 from taxation, they still failed to deliver decent framerates when their sole focus was one console. Again, these are some of the best devs on the planet that delivered an even more ambitious God of War on ps4 with a solid fps using a tablet CPU lol
 

solidus12

Member
Hey, so it’s safe to say that the Cell was more powerful than the PS4’s Jaguar? Or is it more complicated than that?
 

Fafalada

Fafracer forever
God of War 3 was considered the premium experience on PS3 over the course of the generation, yet it ran 40-55fps all over the place with a fixed camera angle, which no other exclusive had and which is much less taxing than letting the player rotate the camera at will.
This kind of extreme necromancy really isn't necessary - but either way, Mr. Ericson (TD for God Of War 3 at the time) would have a word with you:
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429378

I can see you're taking a deep breath running up to the keyboard to write a clever/witty counter-argument to the dev - let me save you the time there too - that entire debate also happened already, with the same results:
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429389
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429394
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429406

As for the framerate claims - that gen was just poor, 360 exclusives included. Just because some exceptions exist (there were exclusives with stable fps on PS3 too), it doesn't change the fact that the majority weren't. That console gen just had its priorities a bit upside down with regards to framerate - as evidenced by how much better PS2/GC/OG Xbox all were in that regard.
 
PS3 had the more powerful CPU but the weaker GPU compared to the Xbox 360. It was also much more difficult to get the performance out of the CPU on PS3, so in general cross-platform games looked or performed worse on PS3.
The CPU was also weaker for CPU-centric tasks. In fact, for those it had a third of the processing power (1 general-purpose core vs 3, at the same frequency and based on the same architecture). It had a bigger deficit than the GPU; it just mattered less (especially for exclusives).
Hey, so it’s safe to say that the Cell was more powerful than the PS4’s Jaguar? Or is it more complicated than that?
It wasn't more powerful, unless you were trying to run code optimized for Cell on Jaguar.

Jaguar was leagues ahead of where Cell was for normal CPU tasks.
 

Romulus

Member
This kind of extreme necromancy really isn't necessary - but either way, Mr. Ericson (TD for God Of War 3 at the time) would have a word with you:
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429378

I can see you're taking a deep breath running up to the keyboard to write a clever/witty counter-argument to the dev - let me save you the time there too - that entire debate also happened already, with the same results:
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429389
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429394
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429406

As for the framerate claims - that gen was just poor, 360 exclusives included. Just because some exceptions exist (there were exclusives with stable fps on PS3 too), it doesn't change the fact that the majority weren't. That console gen just had its priorities a bit upside down with regards to framerate - as evidenced by how much better PS2/GC/OG Xbox all were in that regard.


I remember that. Geez, speaking of necromancy. It's dev nonsense. Do you really think devs don't lie or exaggerate to bolster their work? Or I guess you know him personally.

A fixed camera saves on resources; that's the end of the story. There's nothing clever or witty to say. They got shit for doing it back then because the game ran like crap, and that was part of the damage control. "But no, you can actually move the camera anywhere, we just don't allow it!" lol. Devs do it all the time and the only people that buy it are superfans. I've already mentioned the only 2 devs worth mentioning on 360, and they have solid framerates - very good, actually, by comparison. In terms of the environment, God of War is very much on rails, and if you really think allowing the user to see those expansive scenes with gods fighting each other and particle effects everywhere would be equally taxing, there's no hope. It's similar to when you look up at the sky in FPS games and the framerate skyrockets; they control most of what you see for a reason. As soon as they ditched the CELL, they could actually do things without so many limitations.

And just so you know, if you actually read the conversation at Beyond3D as I did, the main dev bailed out of the conversation when the guy who created the camera showed up. He outright said there were performance benefits to the fixed camera, which, for HIM to admit, means there were probably way more than we actually knew about.
https://forum.beyond3d.com/threads/...ds-and-optimisations-spawn.48156/post-1429410

It's just absolutely ridiculous to assume that being able to move the camera around in such a detailed action-packed world would have the same frame rate as a fixed camera.
 
I"m not sure if you actually combed through my examples though. The PS3 generation brought the super studios era into play that could focus on one console. Sony had a slew of pedigree developers and almost in every category, and they have issues with framerate that were just god awful in most cases. During that time, the 360 had fewer exclusive studios, and even less "top tier" devs. You might be able to argue Bungie and Turn 10 were good, not great. Those two studios produced games with great framerates almost across the board, definitely superior to the offerings I mentioned. God of War 3 was considered the premium experience on ps3 at that time over the course of the generation really, 40-55fps all over the place with a fixed camera angle, which no other exclusive had and is much less taxing than the ability to rotate the camera at free will. It just seems to me that even though their ambition in some cases tried to relieve the ps3 from taxation, they still failed to deliver decent framerates when their sole focus was one console. Again, these are some of the best devs on the planet that delivered an even more ambitious God of War on ps4 with a solid fps using a tablet CPU lol
Sorry, but none of that shows how Cell hampers the performance of their games. You presented videos, but they are no proof of what you claim; in fact you are talking about games, not the hardware, and you are not presenting anything other than some framerate problems without context. The exclusive games you mention do a lot of different effects not present in other games. For example, you see the Helghast eyes and the blue lights on the Earth soldiers? Those are real lights, and you can see each Helghast eye projecting a light onto the walls; instead of being limited by the number of hardware lights, there are plenty of light sources in the game - how many FPS games at that time did that? The G-buffer for KZ2 is 36 MB, way bigger than in your common forward-rendered game, among other drawbacks. Maybe it drops a couple more frames than other games in busy situations, but it's also delivering better graphics. Your logic reduces the merit of a game to how locked its framerate is instead of what the game is showing on the screen, and treats any framerate problem as the fault of the CPU or a closed piece of hardware rather than the ambition of the developer on that machine. And no, the situation is not different on other systems; other devs may take more care with framerate, but there is nothing special that hampers it other than the devs and their ambition. Each frame takes a different amount of time to render depending on what is in it - showing a couple of enemies on screen is not the same as showing 20 - so if there are a lot of enemies at a given moment the framerate may not hit its target. You are basically asking developers to refrain from doing complex situations or to make graphics worse, and pretending this is only a problem on PS3.
 

Lillie

Member
Instead of buying a PS5 I should have bought a PS3 to play the GoW Saga collection. Unfortunately I realized I can't buy and play them on PS5. Apparently the only way is streaming with PS+'s highest tier, which I don't want to pay for; I would rather just own the games.
 

Romulus

Member
Sorry, but none of that shows how Cell hampers the performance of their games. You presented videos, but they are no proof of what you claim; in fact you are talking about games, not the hardware, and you are not presenting anything other than some framerate problems without context. The exclusive games you mention do a lot of different effects not present in other games. For example, you see the Helghast eyes and the blue lights on the Earth soldiers? Those are real lights, and you can see each Helghast eye projecting a light onto the walls; instead of being limited by the number of hardware lights, there are plenty of light sources in the game - how many FPS games at that time did that? The G-buffer for KZ2 is 36 MB, way bigger than in your common forward-rendered game, among other drawbacks. Maybe it drops a couple more frames than other games in busy situations, but it's also delivering better graphics. Your logic reduces the merit of a game to how locked its framerate is instead of what the game is showing on the screen, and treats any framerate problem as the fault of the CPU or a closed piece of hardware rather than the ambition of the developer on that machine. And no, the situation is not different on other systems; other devs may take more care with framerate, but there is nothing special that hampers it other than the devs and their ambition. Each frame takes a different amount of time to render depending on what is in it - showing a couple of enemies on screen is not the same as showing 20 - so if there are a lot of enemies at a given moment the framerate may not hit its target. You are basically asking developers to refrain from doing complex situations or to make graphics worse, and pretending this is only a problem on PS3.


What is it then, the GPU? I only remember multiplatform devs complaining about the Cell, so I can only estimate it wasn't much better for exclusive devs. But yes, the GPU was underwhelming too, and the limited RAM didn't help either.
 

Crayon

Member
PS2 was a whole lot harder to work with, and Naughty Dog/Insomniac/Santa Monica/Polyphony were wringing every drop out of it. They've been gradually getting better, and at this point their graphics are in the top tier. Cell was a speed bump for them.
 

Romulus

Member
Well, you were so sure blaming Cell; maybe you should estimate less and investigate more next time.

No one in here is a PS3 developer. So no "investigation" would yield absolute results for any of those frame counters lol. You're being unrealistic. From a common sense perspective when a CPU is designed to do so much of the workload AND developers were complaining about it? Yeah, there's that. I was trying to get your take, but it seems you're bothered by the subject and can't look at it objectively without being defensive. On another note, when a console is such a POS from top to bottom, I guess you can't be completely sure in a field full of bottlenecks.
 

Melchiah

Member
What is it then, the GPU? I only remember multiplatform devs complaining about the Cell, so I can only estimate it wasn't much better for exclusive devs. But yes, the GPU was underwhelming too, and the limited RAM didn't help either.

You do realize that different developers have different targets? Bungie no doubt had framerate as a top priority, whereas single-player titles often prioritized image quality over performance. It's just different development targets, which have nothing to do with the capability of the hardware, considering that WipEout HD ran at 60fps, and at 1080p in most situations (sans explosions galore).

Personally, I generally prefer 30fps high quality version over 60fps with visual compromises. Lower framerate has never bothered me as much as visual deficiencies, like aliasing, flickering and blocky shadows. When TLOU remaster came out on PS4, I tried both quality and performance modes, and didn't notice any meaningful difference with a higher framerate, whereas the blocky shadows were immediately an eyesore.
 

Romulus

Member
Sorry, but none of that shows how Cell hampers the performance of their games. You presented videos, but they are no proof of what you claim; in fact you are talking about games, not the hardware, and you are not presenting anything other than some framerate problems without context. The exclusive games you mention do a lot of different effects not present in other games. For example, you see the Helghast eyes and the blue lights on the Earth soldiers? Those are real lights, and you can see each Helghast eye projecting a light onto the walls; instead of being limited by the number of hardware lights, there are plenty of light sources in the game - how many FPS games at that time did that? The G-buffer for KZ2 is 36 MB, way bigger than in your common forward-rendered game, among other drawbacks. Maybe it drops a couple more frames than other games in busy situations, but it's also delivering better graphics. Your logic reduces the merit of a game to how locked its framerate is instead of what the game is showing on the screen, and treats any framerate problem as the fault of the CPU or a closed piece of hardware rather than the ambition of the developer on that machine. And no, the situation is not different on other systems; other devs may take more care with framerate, but there is nothing special that hampers it other than the devs and their ambition. Each frame takes a different amount of time to render depending on what is in it - showing a couple of enemies on screen is not the same as showing 20 - so if there are a lot of enemies at a given moment the framerate may not hit its target. You are basically asking developers to refrain from doing complex situations or to make graphics worse, and pretending this is only a problem on PS3.

BTW, you're not really saying anything here at all. It's a word salad that does zero to counter anything I mentioned, talking about lights on Helghast eyes. What does that actually translate to onscreen? You have no idea. You're just saying things like "light sources" to come off like you have a real understanding of what taxed the hardware. So if they took away the eyes, would there be a big gain? You don't know. And the G-buffer being 36 MB is way bigger - how do you know that, and how did it specifically tax the hardware? What's your source compared to other games, and how does it affect performance exactly? You're trying to sound technical, but you have zero technical data to back this up as it pertains to the conversation.
 
PS2 was a whole lot harder to work with, and Naughty Dog/Insomniac/Santa Monica/Polyphony were wringing every drop out of it. They've been gradually getting better, and at this point their graphics are in the top tier. Cell was a speed bump for them.
And PS4 was a speed bump as well. No generation was ever a step back. Two PS2s duct-taped together would be a step forward by that logic, but not a dream scenario. I'm sure PS3 wasn't a dream to work with for anyone but Ken Kutaragi.

Naughty Dog/Insomniac/Santa Monica/Polyphony would have been just as happy with the Xbox 360, if not happier. And they would have been even happier with a good desktop PC spec, like they are now. The only downside is that more companies would reach really good results with way less R&D. But we've seen that this generation, and games from these devs still managed to punch above their weight.
BTW, you're not really saying anything here at all. It's a word salad that does zero to counter anything I mentioned, talking about lights on Helghast eyes. What does that actually translate to onscreen? You have no idea. You're just saying things like "light sources" to come off like you have a real understanding of what taxed the hardware. So if they took away the eyes, would there be a big gain?
Well, light sources always had a cost, and that's precisely the kind of workload where the SPEs in CELL could help. I'm sure they were using them for everything they could; otherwise the GPU alone would have had a low ceiling for a game as hyped as Killzone 2. Light sources always look nice and were part of what made Killzone 2 even "compete", as its texture resolution was below average.
You don't know. And the G-buffer being 36 MB is way bigger - how do you know that? What's your source compared to other games, and how does it affect performance exactly? You're trying to sound technical, but you have zero technical data to back this up as it pertains to the conversation.
Full buffers being 36 MB wasn't that uncommon for AA games that gen on Xbox 360; plenty of games had deferred lighting and shading passes, plus other complex render passes and workflows.

If anything, it was probably harder to pull that amount of memory on PS3 because, lacking the eDRAM setup, it just wasn't built for tiling (which was a bitch but actually helped on X360) and it already had bigger memory constraints from only having 256 MB allocated to video RAM tasks.

A 36 MB budget for the framebuffer out of a pool of 512 MB is different from 36 MB out of a pool of 256 MB, just saying.
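For a rough sense of where a figure like 36 MB comes from, here's a back-of-envelope check in C. The layout is my own assumption for illustration (four 32-bit colour targets plus a 32-bit depth/stencil target at 1280x720, stored with 2x multisampling), not any studio's published breakdown, but it lands in the same ballpark.

```c
/* Back-of-envelope G-buffer size check. The layout below is an assumption
 * for illustration (4 colour targets + depth/stencil, 32 bits each,
 * 1280x720 with 2x multisampling), not a published breakdown. */
#include <stdio.h>

int main(void)
{
    const long width = 1280, height = 720;
    const long targets = 5;          /* 4 colour RTs + 1 depth/stencil (assumed) */
    const long bytes_per_sample = 4; /* 32 bits per target per sample */
    const long samples = 2;          /* 2x multisampled G-buffer (assumed) */

    double mib = (double)(width * height * targets * bytes_per_sample * samples)
                 / (1024.0 * 1024.0);
    printf("approx. G-buffer footprint: %.1f MiB\n", mib); /* ~35.2 MiB */
    return 0;
}
```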
 

Fafalada

Fafracer forever
A fixed camera saves on resources; that's the end of the story.
You'll have to expand on this a bit if it's such a fact - how exactly do you save resources?
Since you've already said dev-view isn't applicable - I'm not qualified to comment myself - but do feel free to educate us.
Just remember not to reference any dev-articles/info about it - as those are nonsense.

He outright said there were performance benefits to the fixed camera
No - he specifically said there were production cost benefits (fewer assets to build if you never look at certain areas).
If you want a specific analogy - for an open-world game to match the visual quality of a more curated experience, production costs are substantially higher, because a lot more of the world needs to be built out. Or, to be more on the nose, the reason The Division reveal* looked better than the final game wasn't performance downgrades, but having an army of artists build a tiny sliver of NYC shown in linear fashion at E3 in a couple of months, and then spreading the same artists across building 100x more assets for the whole city without giving them 100x more time/money.

*This is actually a common scenario with a lot of demos - but I use The Division example because I had first hand insights into the demo and the tech.
 

Romulus

Member
You'll have to expand on this a bit if it's such a fact - how exactly do you save resources?
Since you've already said dev-view isn't applicable - I'm not qualified to comment myself - but do feel free to educate us.
Just remember not to reference any dev-articles/info about it - as those are nonsense.


No - he specifically said there were production cost benefits (fewer assets to build if you never look at certain areas).
If you want a specific analogy - for an open-world game to match the visual quality of a more curated experience, production costs are substantially higher, because a lot more of the world needs to be built out. Or, to be more on the nose, the reason The Division reveal* looked better than the final game wasn't performance downgrades, but having an army of artists build a tiny sliver of NYC shown in linear fashion at E3 in a couple of months, and then spreading the same artists across building 100x more assets for the whole city without giving them 100x more time/money.

*This is actually a common scenario with a lot of demos - but I use The Division example because I had first hand insights into the demo and the tech.

Do you even understand that assets onscreen aren't free for performance? Rendering more assets onscreen is more taxing. You're the one that tried to pitch dev comments as some gospel-worthy source to push your agenda; now when I turned it around on you, you're backing down. I showed you that if we're going to be talking about the camera, we might as well hear it from the guy that actually worked on it.


You're stating above that there was no performance gain and only "production" cost benefits lol. The sony dev that actually programmed it claimed the exact opposite:

performance benefits are that we can focus asset creation quality on what we want to look at. Sometimes this means that we don't need to build the back wall of a room. Or we can unload an area behind us that we can no longer see. Although often we'll just swap it for a lower detail version of the same area (think of the horse / chain section of GoW2).

Keep in mind the main dev bailed after this comment because it contradicted his original stance. If common sense prevailed, we could trust the technician who actually programmed the camera, but that might not align with our agenda in proving the ps3 was a secret powerhouse that could actually render those entire scenes with zero performance loss lol, when it was already struggling to begin with.

So I don't need your analogy when you're using weak diversionary tactics and pretending to not understand that assets onscreen aren't free for performance. At least I hope you're pretending that. Either way, ignore list you go.
 
BTW, you're not really saying anything here at all. It's a word salad that does zero to counter anything I mentioned, talking about lights on Helghast eyes. What does that actually translate to onscreen? You have no idea. You're just saying things like "light sources" to come off like you have a real understanding of what taxed the hardware. So if they took away the eyes, would there be a big gain? You don't know. And the G-buffer being 36 MB is way bigger - how do you know that, and how did it specifically tax the hardware? What's your source compared to other games, and how does it affect performance exactly? You're trying to sound technical, but you have zero technical data to back this up as it pertains to the conversation.

They could remove the eyes and the blue lights on the Earth soldiers - which don't actually make sense in a war, much like the green lights on Sam Fisher in the Splinter Cell games - but they look cool, and obviously they put them in because deferred rendering allowed them to have hundreds of lights on screen, something that is very costly in forward rendering, where more lights means more calculations. They said it was 36 MB in Eurogamer and in their presentations. Deferred rendering games use a big G-buffer, which consists of a lot of buffers that are combined to form the final image; they also change size based on resolution. The G-buffer requires more space and more bandwidth, which is why it's more taxing. They mentioned it's 36 MB because of the anti-aliasing; it's possible to have a smaller G-buffer (I think Resistance 2 or 3 used a smaller one, around 27 MB IIRC). In the case of Xbox 360 it's possible to mix the buffers using the redrawing capacity, as the eDRAM is very small - I don't remember the name of that technique, sorry.
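To make the "more lights means more calculations in forward rendering" point concrete, here is a toy cost model in C. The overdraw factor and per-light screen coverage are made-up illustrative numbers, not measurements from any game; the point is only the scaling difference.

```c
/* Toy cost model: forward shading pays for every light on every shaded
 * fragment, deferred shading only on the screen pixels each light touches.
 * Overdraw and per-light coverage below are made-up illustrative values. */
#include <stdio.h>

int main(void)
{
    const long   pixels   = 1280L * 720L;
    const double overdraw = 3.0;   /* assumed average overdraw in a forward pass */
    const int    lights   = 100;
    const double coverage = 0.05;  /* assumed screen coverage per light (5%) */

    double forward_evals  = pixels * overdraw * lights;
    double deferred_evals = pixels * coverage * lights;

    printf("forward : %.0f light evaluations per frame\n", forward_evals);
    printf("deferred: %.0f light evaluations per frame\n", deferred_evals);
    return 0;
}
```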
 
From a common sense perspective when a CPU is designed to do so much of the workload AND developers were complaining about it?

They were complaining about a CPU designed to support the GPU. You said developers complained about it, but what was their complaint exactly? Do you remember? Yes - that it's difficult to use. Nobody said it hampers performance; if it's so difficult, they don't use it, lose the advantage it gives them - that is common sense - and fall back to a simple render path from the SDK examples. If you read the presentations from the top-tier devs of the time that used the Cell, you will realize they used custom techniques integrating the SPUs to support the rendering. They basically used their knowledge and experience to develop ways to use the CPU as support, while your average dev had to wait for SDK updates with improved, established render paths that were easy to follow, newer libraries and good practices, because they didn't know how to do it, or it was costly to investigate, or they lacked the resources, or they just waited for a newer version of an engine. It was very similar to PS2, which offered I think three different render paths: one was very simple, intended to let devs release their first game quickly with most of the hardware unused, so that later they could investigate the hardware more deeply and use it better, particularly its redrawing capabilities and its texture streaming along with VU code. That is why in both cases you had devs complaining about the difficulty of the hardware, with underperforming ports, and yet games got better and better as the generation progressed.
 
no "investigation" would yield absolute results for any of those frame counters lol. You're being unrealistic.

You don't have to get absolute results; some investigation into what is involved in the game will give you more accurate conclusions than blaming blindly.
 

Romulus

Member
You don't have to get absolute results; some investigation into what is involved in the game will give you more accurate conclusions than blaming blindly.

You mean like bringing up helmet lights, as if that has any relevance to the conversation on performance? We have no idea how much any of those tax a specific scene. And then where will the "investigation" take us? To exclusive-developer conversations from 10 years ago, with people that rarely say anything bad about the hardware in the first place because they were under contract. But if helmet lights and vague info about the G-buffer is the idea, I think we're better off not doing that at all.
 

PaintTinJr

Member
You don't have to get absolute results; some investigation into what is involved in the game will give you more accurate conclusions than blaming blindly.
Going purely by memory from the demo, the game engine looked like an enhanced port of the PS2 engine they previously used, still leaning heavily on the graphics strengths of the PS2, with massive amounts of alpha FX for rendering the gore.

On that basis alone, I would guess the RSX's weakness at rendering alpha-FX-heavy scenes was the cause of the dips, and the choice was either to accept the dips, artistically rework that core element, or scale the alpha FX down to render to fewer pixels to stay in the render budget - and so, if that was the problem, they will have chosen the cheapest option that visually matched their target the best.

I'm sure with an extra 3-6 months they would have remedied the cause of the dips, but that probably would have been unlikely to improve the game's sales or profitability.
 
You mean like bringing up helmet lights, as if that has any relevance to the conversation on performance?

I said it was an example of a benefit of using deferred rendering, but such techniques have their own set of problems. If it's involved in the game, it is relevant in a conversation about performance - why do you think it's not?

We have no idea how much any of those tax a specific scene. And then where will the "investigation" take us? To exclusive-developer conversations from 10 years ago, with people that rarely say anything bad about the hardware in the first place because they were under contract. But if helmet lights and vague info about the G-buffer is the idea, I think we're better off not doing that at all.

But we know it taxes the scene, with the benefit of allowing a great number of lights, and it's something extra that other games don't use; in the case of Killzone it is even heavier than in other games that use similar techniques. If it's not used and the graphics are instead pared back, it's not unreasonable to assume performance will increase. Usually in their presentations devs give the milliseconds certain parts of the rendering take to complete; that may not represent absolute knowledge of everything in a specific scene of the game, but it's infinitely better than having nothing.

For example, GOW3 uses MLAA, a technique that gives you way better results than MSAA 2x. It takes 4 ms of Cell time when using 5 SPUs for it, instead of taking 5 ms from the GPU for MSAA 2x (which looks worse than MLAA). On a game that targets 60 fps, that is the difference between having good AA, using cheaper AA, or having no AA (like most Nintendo 60 fps games on Wii U and Switch), all at the level of quality GOW3 has. If you put that SPU time towards helping the RSX with other tasks and outright disable the AA, you will be able to improve the framerate. Are you going to claim the devs lie and MLAA is cheap? Maybe you prefer a better framerate in expensive scenes at the cost of its incredible AA, but that was a decision of the developer, not the machine, like you claim.
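Taking those MLAA numbers at face value, a quick bit of frame-budget arithmetic shows why the trade is attractive at 60 fps. The 4 ms / 5 ms figures below are the ones quoted above, not my own measurements.

```c
/* Frame-budget arithmetic using the figures quoted above: ~4 ms of SPU time
 * for MLAA versus ~5 ms of GPU time for 2x MSAA, against a 60 fps target. */
#include <stdio.h>

int main(void)
{
    const double frame_ms    = 1000.0 / 60.0; /* ~16.7 ms per frame at 60 fps */
    const double msaa_gpu_ms = 5.0;           /* quoted GPU cost of 2x MSAA   */
    const double mlaa_spu_ms = 4.0;           /* quoted SPU cost of MLAA      */

    /* MLAA runs on the SPUs in parallel with the GPU, so the GPU time that
     * MSAA would have consumed becomes available for other rendering work. */
    printf("frame budget at 60 fps: %.1f ms\n", frame_ms);
    printf("GPU time freed by SPU MLAA: %.1f ms (%.0f%% of the frame)\n",
           msaa_gpu_ms, 100.0 * msaa_gpu_ms / frame_ms);
    printf("SPU time spent instead: %.1f ms (spread over 5 SPUs)\n", mlaa_spu_ms);
    return 0;
}
```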

I know I am repeating myself, but the problem is that your comparison metric reduces the merit of a game to its framerate without considering anything else, because you don't investigate what is happening in the game - you ONLY take framerate as a metric. You appear to not understand, or refuse to understand, what is wrong with that. It has ridiculous implications: for example, it implies it's perfectly valid to compare a hypothetical Tetris game that runs at a rock-solid 60 fps and declare it has more merit than GOW3 because GOW3 can drop to 55 fps or lower. It's an extreme comparison, but it's the kind of comparison you expect to be taken seriously while declaring that no investigation is required or that "it won't take you anywhere". You are happy to consider any game as technically equal, with total disregard for what is involved in the game, and somehow you try to blame the Cell or the PS3 as the only place where this situation happens, when top-notch games on other systems also experience similar problems - which is something a small investigation would reveal. You refuse to consider information very pertinent to the discussion that you obviously didn't know, you refuse to investigate the tech involved in each game and disregard any explanation of it, its implications and its influence, and then you talk about having a "common sense perspective" and say I "cannot look at it objectively without being defensive". I am not trying to be defensive; I am making a big effort to be polite while discussing with you why your comparison is deeply flawed.
 

PaintTinJr

Member
I said it was an example of a benefit of using deferred rendering, but such techniques have their own set of problems. If it's involved in the game, it is relevant in a conversation about performance - why do you think it's not?



But we know it taxes the scene, with the benefit of allowing a great number of lights, and it's something extra that other games don't use; in the case of Killzone it is even heavier than in other games that use similar techniques. If it's not used and the graphics are instead pared back, it's not unreasonable to assume performance will increase. Usually in their presentations devs give the milliseconds certain parts of the rendering take to complete; that may not represent absolute knowledge of everything in a specific scene of the game, but it's infinitely better than having nothing.

For example, GOW3 uses MLAA, a technique that gives you way better results than MSAA 2x. It takes 4 ms of Cell time when using 5 SPUs for it, instead of taking 5 ms from the GPU for MSAA 2x (which looks worse than MLAA). On a game that targets 60 fps, that is the difference between having good AA, using cheaper AA, or having no AA (like most Nintendo 60 fps games on Wii U and Switch), all at the level of quality GOW3 has. If you put that SPU time towards helping the RSX with other tasks and outright disable the AA, you will be able to improve the framerate. Are you going to claim the devs lie and MLAA is cheap? Maybe you prefer a better framerate in expensive scenes at the cost of its incredible AA, but that was a decision of the developer, not the machine, like you claim.

I know I am repeating myself, but the problem is that your comparison metric reduces the merit of a game to its framerate without considering anything else, because you don't investigate what is happening in the game - you ONLY take framerate as a metric. You appear to not understand, or refuse to understand, what is wrong with that. It has ridiculous implications: for example, it implies it's perfectly valid to compare a hypothetical Tetris game that runs at a rock-solid 60 fps and declare it has more merit than GOW3 because GOW3 can drop to 55 fps or lower. It's an extreme comparison, but it's the kind of comparison you expect to be taken seriously while declaring that no investigation is required or that "it won't take you anywhere". You are happy to consider any game as technically equal, with total disregard for what is involved in the game, and somehow you try to blame the Cell or the PS3 as the only place where this situation happens, when top-notch games on other systems also experience similar problems - which is something a small investigation would reveal. You refuse to consider information very pertinent to the discussion that you obviously didn't know, you refuse to investigate the tech involved in each game and disregard any explanation of it, its implications and its influence, and then you talk about having a "common sense perspective" and say I "cannot look at it objectively without being defensive". I am not trying to be defensive; I am making a big effort to be polite while discussing with you why your comparison is deeply flawed.
Reading through your great comment, the part about MLAA inadvertently reminded me that the Cell BE (as a result of the SPUs and EIB) was really deterministic in how it processed things, and very deterministic in how it slotted in with the PPE and RSX.

Deterministic compute is rarely going to be the source of performance issues, especially with correctly utilised ring buses (like the EIB), because ring buses are used extensively in other fields like networking/telecoms to provide robust performance under heavy load at the centre of critical network topologies. So it is very unlikely, IMHO, that the later-gen game dips were caused by the Cell BE.
 
Do you even understand that assets onscreen aren't free for performance? Rendering more assets onscreen is more taxing. You're the one that tried to pitch dev comments as some gospel-worthy source to push your agenda; now when I turned it around on you, you're backing down. I showed you that if we're going to be talking about the camera, we might as well hear it from the guy that actually worked on it.


You're stating above that there was no performance gain and only "production" cost benefits lol. The sony dev that actually programmed it claimed the exact opposite:



Keep in mind the main dev bailed after this comment because it contradicted his original stance. If common sense prevailed, we could trust the technician who actually programmed the camera, but that might not align with our agenda in proving the ps3 was a secret powerhouse that could actually render those entire scenes with zero performance loss lol, when it was already struggling to begin with.

So I don't need your analogy when you're using weak diversionary tactics and pretending to not understand that assets onscreen aren't free for performance. At least I hope you're pretending that. Either way, ignore list you go.

You literally quoted exactly what Faf was claiming.

I assigned a neutral uniform prior to this, and after updating with the knowledge you provided, it's now sharply spiked in Faf's corner.

Doesn't look good for you, but I think you don't understand what you're quoting, and in this case the "common sense" position about the camera just isn't true. No big deal.
 

Ozrimandias

Member
Not played Siren, is it good? Awesome memories of first play for the others.
To me, it's one of the hidden gems of the PS3. Some people from the "Silent Team" were involved in Siren. It doesn't just scare you, it has some mind-blowing scenes (and a bizarre ending).
 

rnlval

Member
As everyone knows by now, the infamous Cell CPU in the PS3 was really hard and time-consuming to code for. There is a YT video by Modern Vintage Gamer who goes into detail about what was involved. The amount of code required just to send one command was a lot more than a typical core would need.

We saw how this affected the multiplatform games released on PS3, which ran a lot worse there than on the 360 for the majority of the generation.
In response to the trouble developers were having with the Cell, Sony put a lot of effort into the ICE teams to build the absolute best tools for taking advantage of the Cell and to help development of third-party games on the platform. From my understanding, the ICE team was drawn from Sony first-party teams such as Naughty Dog, GG and Santa Monica Studios.
By the end of the generation Sony's internal teams were putting out games that were amongst the most impressive of the generation.
Each Sony studio developed their own internal game engine, built from the ground up to take advantage of the parallel processing that the Cell offered.
As a result their current and recent projects are extremely well coded and efficient on multicore processors, and their engines keep up with the best of them, including id Tech and Unreal Engine.
The hard graft that these studios had to do when stuck with the Cell has given them a skill set and coding tools that are benefiting them today.

As someone who loves the tech side of things, I wonder what the Cell could have been if Sony had stuck with it and improved its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.

While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega etc.

This is my first thread on GAF, so go easy on me.
AMD RDNA 2 and X86-64 instruction sets are AMD's proprietary IP. AMD wouldn't even show the internal RISC-like instruction set after the X86 decoder stage. X86 instruction set acts like an abstraction layer for AMD's micro-architecture implementation.

Sega Saturn has Hitachi's SuperH-2. SuperH-2 is a replacement for Hitachi's 68000 licenses. There are many RISC instruction sets that replaced Commodore's MOS 65xx and Motorola's 68K instruction set.

Acorn's ARM replaced Commodore's MOS 65xx since that R&D roadmap was inferior to the competition. Later in the BBC Micro's shelf life, the ARM1 was used as a co-processor. Commodore didn't evolve the 65xx CPU family the way Intel evolved x86.

HP's PA-RISC replaced Motorola's 68K since Motorola killed off the 68K family as a high-performance CPU; after the 68K, Motorola pushed its customers towards the joint PowerPC effort with IBM. HP's Unix workstations and servers had been powered by the Motorola 68K CPU family. HP's PA-RISC was later replaced by Intel Itanium.

The problem with CELL is that its IPC falls below 1 and its pipeline length rivals the Pentium 4's long pipeline.

Remember, RISC's main concepts are separate load, store, and simple arithmetic operations with a minimum throughput of 1 IPC (instructions per clock). An X86 instruction can include memory load/store and arithmetic operations, hence it acts like instruction compression for modern X86 CPUs.

Meanwhile, GPUs went towards extremely complex instruction (CISC) methods with fixed-function graphics hardware, e.g. hardware rasterization that resolves floating point data against the integer pixel grid, texture processors, ROPS blenders, hardware ROPS MSAA, hardware T&L, hardware tessellation, hardware BVH raytracing and so on. GPUs adopted RISC's expanded-register-storage argument to the extreme, with thousands of registers.

GPU scatter and gather instructions are complex instructions, since they are a series of load and store operations from one instruction issue. GPU scatter and gather instructions wrecked RISC's atomic-instruction argument.

Advanced Computing Environment (ACE) was defined by an industry consortium in the early 1990s to be the next-generation commodity computing platform, the successor to personal computers based on Intel's 32-bit instruction set architecture. Advanced Computing Environment (ACE) was supported by Compaq, Microsoft, MIPS Computer Systems, Digital Equipment Corporation (DEC), and the Santa Cruz Operation (SCO). The CPU instruction set selection from the ACE consortium was MIPS and Alpha. Sony's PS1 and PS2 were powered by MIPS-based CPUs, but Sony didn't follow the ACE-defined hardware abstraction layer (HAL) and boot environments.

Advanced Computing Environment (ACE)'s existence was killed by Intel's CISC-RISC hybrid Pentium Pro (P6) that was supported by CISC-RISC hybrid X86 clones such as AMD K6 and Cyrix 6x86.

In the early 2000s, Intel attempted to replace 32-bit X86 with Itanium and it was later killed by second source insurance AMD's X86-64.
 

rnlval

Member
No.

Cell is the worst CPU Sony made...

I'm still pretty sure that if the PS3 had had an EE2/GS2, Sony's first party would have done a far better job (more tricks, more HD or Full HD games, more new shaders).
Even if first party created their own shaders on PS3 with the CELL (MLAA/physics for example), the GPU is such fking trash...
Even with the x86 CPU on PS4... you can't believe RDR2 and TLOU2 were made with a laptop-class CPU and a GPU from 2011/2012...

The "perfect" hardware (for its time) is still the PS2 for me.

EDIT: By worst, I mean especially how disappointed I am with how hard the Cell was to work with.
CELL is mostly IBM's IP, and the SPU instruction set is based on PowerPC's AltiVec/VMX. The PS4's "netbook" Jaguar out-of-order CPU sustains more than 1 IPC (instructions per clock), which is superior to the IBM PPE/SPE's IPC.

ATI/NVIDIA GPGPUs have progressed from the N64's simple array of SIMDs with tiny fixed-function graphics hardware.

Remember, ATI stands for Array Technology Inc which specialized in array co-processors.
 

rnlval

Member
You mean 200+ GFLOPS (8 FP ops per clock per SPU), and for that time it allowed a lot of flexibility and control that took a while to reach GPU shaders (async compute and all).


The buggy RSX (some bugs in vertex processing took a long while to work around) was, to be fair, not their initial plan, nor was the weird FlexIO connection (super slow in one direction, leading to imbalances compared to the original ideal scenario where CPU and GPU access each other's memory pools as if they were one).
CELL's 200 GFLOPS assumes 1 IPC instead of a real-world 0.5 IPC.
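For reference, the oft-quoted "200+ GFLOPS" number is just peak single-precision arithmetic (a 4-wide fused multiply-add per SPE per cycle counted as 8 flops); halving it is what a sustained 0.5 IPC would imply. A quick sketch of that arithmetic, with the usual assumptions spelled out:

```c
/* Peak-rate arithmetic behind the "200+ GFLOPS" figure: each SPE issues a
 * 4-wide single-precision fused multiply-add per cycle (8 flops/cycle).
 * These are theoretical peaks, not measured throughput. */
#include <stdio.h>

int main(void)
{
    const double clock_ghz      = 3.2;
    const double flops_per_clk  = 8.0;  /* 4-wide SP FMA per cycle per SPE */
    const int    spes_on_chip   = 8;
    const int    spes_for_games = 6;    /* PS3: 1 SPE disabled, 1 reserved for the OS */

    double peak_all   = clock_ghz * flops_per_clk * spes_on_chip;   /* 204.8 */
    double peak_games = clock_ghz * flops_per_clk * spes_for_games; /* 153.6 */

    printf("peak, 8 SPEs: %.1f GFLOPS\n", peak_all);
    printf("peak, 6 game-usable SPEs: %.1f GFLOPS\n", peak_games);
    printf("same at an assumed sustained 0.5 IPC: %.1f GFLOPS\n", peak_all * 0.5);
    return 0;
}
```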

For Xbox 360 GPU... notice the 8 simultaneous contexts in-flight at once.

[attached Xbox 360 GPU slide images]


Your narrative is based on Sony's narrative, yet GCN's design originated with the Xbox 360! Sony can't admit they are wrong!

AMD GCN (Direct3D) is backward compatible with Xbox 360 GPU (Direct3D) hence a no-brainer for Xbox One's backward compatibility with Xbox 360.


[attached images]



Nvidia's CUDA architecture still lives to this day with Ampere evolution.

ATI/AMD's many SIMD data processing Direct3D topology from Xbox 360 still live through GCN and RDNA 2 evolution.

ATI/AMD's VLIW-based GPUs are dead.

CELL's SPU... is dead.
 

Panajev2001a

GAF's Pleasant Genius
CELL's 200 GFLOPS assumes 1 IPC instead of a real-world 0.5 IPC.

For Xbox 360 GPU... notice the 8 simultaneous contexts in-flight at once.

[attached Xbox 360 GPU slide images]


Your narrative is based on Sony's narrative, yet GCN's design originated with the Xbox 360! Sony can't admit they are wrong!
What narrative? Who is talking about GCN? I just stated a very simple sentence 😂. What does cheap context switching and the ability to switch shaders and render state quickly have to do with… you are throwing factoids now (the GS in PS2 supported 2 contexts and was able to switch render state per primitive drawn… something something narrative).

AMD GCN (Direct3D) is backward compatible with Xbox 360 GPU (Direct3D) hence a no-brainer for Xbox One's backward compatibility with Xbox 360.


[attached images]



Nvidia's CUDA architecture still lives to this day with Ampere evolution.

ATI/AMD's many SIMD data processing Direct3D topology from Xbox 360 still live through GCN and RDNA 2 evolution.

ATI/AMD's VLIW-based GPUs are dead.

CELL's SPU... is dead.
Thanks for another word salad on the SPU’s vs GCN as you spotted something that was not in the expected hierarchy of PC >> anything Xbox >> PS ;). You will probably keep replying till you have the last word but it is an old topic and you are just picking a fight here, so have it.
 

rnlval

Member
What narrative? Who is talking about GCN? I just stated a very simple sentence 😂. What does cheap context switching and the ability to switch shaders and render state quickly have to do with… you are throwing factoids now (the GS in PS2 supported 2 contexts and was able to switch render state per primitive drawn… something something narrative).


Thanks for another word salad on the SPU’s vs GCN as you spotted something that was not in the expected hierarchy of PC >> anything Xbox >> PS ;). You will probably keep replying till you have the last word but it is an old topic and you are just picking a fight here, so have it.

You stated "control that took a while to get to GPU shaders (async compute and all)" narrative. It's well known late 2011 AMD GCN marketing made a big deal about Async compute.

For Battlefield 3, ATI Xenos is able to perform the deferred rendering via custom ALU mode similar to PS3's SPU deferred rendering.

Async compute is important for dispatching commands between the main CPU workload and sync-lock GPU workload.

IF a powerful desktop PC CPU is able to calculate at 120 hz from 30 hz targeted game console workload, that's a 4-fold tighter completion time for the CPU side, hence idle gap for the GPU side is about four times less.

Async compute has higher importance for low-performing CPUs.

When AMD Zen CPU was released, AMD reduced its Async compute marketing.


As for your "What does cheap context switching and the ability to switch shaders and render state quickly got to do with" statement...

Again, for Xbox 360 GPU... notice the 8 simultaneous contexts in-flight at once. The word simultaneous is important since the context tracking can be sequential. The 8 simultaneous contexts in-flight are in parallel. This is for reducing CPU context management overheads which is important for low-performing CPUs. Xenos 8 simultaneous contexts covers Xbox 360's six PPE CPU threads.
 

rnlval

Member
What narrative? Who is talking about GCN? I just stated a very simple sentence 😂. What does cheap context switching and the ability to switch shaders and render state quickly have to do with… you are throwing factoids now (the GS in PS2 supported 2 contexts and was able to switch render state per primitive drawn… something something narrative).


Thanks for another word salad on the SPU’s vs GCN as you spotted something that was not in the expected hierarchy of PC >> anything Xbox >> PS ;). You will probably keep replying till you have the last word but it is an old topic and you are just picking a fight here, so have it.

For the 1985-era hardware, Amiga 1000/500/600's Dread (Doom clone) game,

1. C2P (chunky to planar pixel format) conversion process via the hardware Billter that is running async from the CPU via DMA.

2. Copper (co-processor) running async changing color palette for "racing the beam" color raster tricks for hardware sprites graphics render. This trick increases the effective displayed color per frame. Hardware sprites are used to render the player's hand and weapon graphics. Copper hardware can modify the hardware sprite's location registers, not just from the CPU.

3. While graphics custom chips are running their workload, the 68000 CPU calculates the next frame.

Both the Blitter (reading chunky data from memory and slicing it into the planar pixel format that the Denise display chip expects) and the Copper (modifying Denise's color and sprite-position registers) live in the Agnus chip, so Agnus is juggling two concurrent contexts. The Blitter also has fixed-function adder ALUs for hardware-accelerated 2D line draws.

The goal is to run a Doom-like 3D game on an Amiga 1000/500/600 with a 7.1 MHz 68000 CPU, so the developer used every known trick to compress the render time.

The Amiga is used to smash patent trolls.

Unlike the competing Atari ST, the Amiga's graphics chipset is largely decoupled from the CPU, which enables CPU upgrades (e.g. an ARM Cortex-A53 @ 1.5 GHz via PiStorm/Pi 3A/Emu68) without wrecking raster-effect timings.
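For reference, the chunky-to-planar conversion mentioned in point 1 is simple to state even though the Blitter/CPU implementations use far cleverer merged bit tricks: each chunky pixel's bits are scattered across separate bitplanes. A naive sketch, with the 5-bitplane, 16-pixel line purely as an example:

Code:
#include <cstdint>
#include <vector>

// Naive chunky-to-planar (C2P) conversion for a line whose width is a multiple of 8.
// chunky: one byte per pixel (only the low `planes` bits used).
// planar: `planes` separate bitplanes, 1 bit per pixel, MSB = leftmost pixel.
void c2p(const uint8_t* chunky, int width, int planes,
         std::vector<std::vector<uint8_t>>& planar)
{
    planar.assign(planes, std::vector<uint8_t>(width / 8, 0));
    for (int x = 0; x < width; ++x) {
        for (int p = 0; p < planes; ++p) {
            if (chunky[x] & (1u << p)) {
                planar[p][x / 8] |= uint8_t(0x80 >> (x % 8));  // set this pixel's bit in plane p
            }
        }
    }
}

int main() {
    uint8_t line[16];
    for (int x = 0; x < 16; ++x) line[x] = uint8_t(x & 0x1F);  // 5 bitplanes = 32 colors
    std::vector<std::vector<uint8_t>> planes;
    c2p(line, 16, 5, planes);
    return 0;
}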
 
Last edited:

PaintTinJr

Member
CELL is mostly IBM's IP, given the SPU instruction set is based on PowerPC's AltiVec/VMX. The PS4's "netbook" Jaguar out-of-order CPU sustains more than 1 IPC (instructions per clock) and is superior to the IBM PPE/SPE's IPC.

ATI/NVIDIA GPGPUs have progressed from the N64 era's simple SIMD arrays with tiny fixed-function graphics hardware.

Remember, ATI stands for Array Technology Inc, which specialized in array co-processors.
Only if you ignore that, unlike any other multi-core CPU or GPU in a console or PC, the Cell BE was fairly unique in being truly multi-core: once each SPU had been kicked off with work, it ran on its own independently, whereas every other system runs under the control of the primary CPU's first core/thread.

The difference this makes is that a bottleneck on a system's primary core can slow every part of that system and cost efficiency, whereas the Cell BE was like having 7 standalone ASICs. Because each SPU had only a small amount of memory, the design tilted towards running algorithms deep enough to stay efficient, with data transfers that complemented the small amount of very low-latency local memory.
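That style of working, deep algorithms over a small, fast local memory, is essentially software pipelining: fetch chunk N+1 while chunk N is being processed. A minimal sketch of the pattern is below; plain memcpy stands in for the SPE's asynchronous DMA engine, and the chunk size and kernel are arbitrary choices for illustration:

Code:
#include <cstddef>
#include <cstring>
#include <vector>

// Double-buffered processing loop in the style of an SPE kernel: while one local
// buffer is being processed, the next chunk is fetched. Here memcpy is synchronous,
// so there is no real overlap; on an SPE the DMA get/put would run asynchronously
// alongside process().
constexpr std::size_t kChunk = 4 * 1024;            // pretend local-store chunk size (in floats)

static void process(float* buf, std::size_t n) {    // the per-chunk work
    for (std::size_t i = 0; i < n; ++i) buf[i] = buf[i] * 2.0f + 1.0f;
}

void run_kernel(float* main_mem, std::size_t count) {
    float local[2][kChunk];                          // two halves of the "local store"
    const std::size_t per = kChunk;
    const std::size_t chunks = count / per;
    if (chunks == 0) return;

    std::memcpy(local[0], main_mem, per * sizeof(float));                 // "DMA get" chunk 0
    for (std::size_t c = 0; c < chunks; ++c) {
        const int cur = int(c & 1), nxt = cur ^ 1;
        if (c + 1 < chunks)                                                // prefetch next chunk
            std::memcpy(local[nxt], main_mem + (c + 1) * per, per * sizeof(float));
        process(local[cur], per);                                          // compute on current chunk
        std::memcpy(main_mem + c * per, local[cur], per * sizeof(float));  // "DMA put" results back
    }
}

int main() {
    std::vector<float> data(kChunk * 4, 1.0f);
    run_kernel(data.data(), data.size());
    return 0;
}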
 
CELL's 200 GFLOPS assumes 1 IPC instead of a real-world 0.5 IPC.

For Xbox 360 GPU... notice the 8 simultaneous contexts in-flight at once.

I absolutely fear responding to you and going down this rabbit hole, but can I have several sources for the 0.5 "real-world" IPC on SPU code?

Also, I find it interesting that you talk about Cell not being 100% efficient in its SPUs given its memory architecture, yet automatically peg the XGPU at the 100%-optimal 8 contexts in all situations.
 

rnlval

Member
I absolutely fear responding to you and going down this rabbit hole, but can I have several sources for the 0.5 "real-world" IPC on SPU code?

Also, I find it interesting that you talk about Cell not being 100% efficient in its SPUs given its memory architecture, yet automatically peg the XGPU at the 100%-optimal 8 contexts in all situations.



PS3's 3.2 GHz Cell (5 SPEs used) shows a real-world IPC disadvantage when compared to PS4's Jaguar @ 1.6 GHz/1.75 GHz (six cores used).
----

Xbox 360's three PPE cores and the Xenos GPU are capable of pointer swaps for implied data transfer, i.e. handing over a pointer into shared memory instead of copying the data.

SPEs can't pointer-swap with the host CPU (data must be explicitly DMA'd into and out of their local stores), hence more programmer intervention, and hence CELL is not "fusion" as defined by AMD's Fusion.
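A toy contrast of the two models (the Job struct, function names and sizes here are made up for illustration, not any console's actual API): in a shared address space the producer simply hands over a pointer, while a local-store model forces an explicit, size-limited staging copy and the bookkeeping around it.

Code:
#include <cstddef>
#include <cstring>
#include <vector>

struct Job { const float* data; std::size_t count; };

// Unified/shared memory ("fusion"-style): hand over a pointer, no copy.
Job submit_by_pointer(const std::vector<float>& shared) {
    return Job{ shared.data(), shared.size() };    // consumer reads the same memory
}

// Local-store model: the consumer only sees its own small memory, so the
// producer (or the consumer's DMA) must copy the data in first.
Job submit_by_copy(const std::vector<float>& main_mem, float* local_store, std::size_t capacity) {
    const std::size_t n = main_mem.size() < capacity ? main_mem.size() : capacity;
    std::memcpy(local_store, main_mem.data(), n * sizeof(float));  // explicit transfer
    return Job{ local_store, n };                  // programmer manages the rest (chunking, sync)
}

int main() {
    std::vector<float> scene(1024, 1.0f);
    float local[256];                              // pretend small local store
    Job a = submit_by_pointer(scene);              // zero-copy handoff
    Job b = submit_by_copy(scene, local, 256);     // staged, size-limited copy
    (void)a; (void)b;
    return 0;
}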
 
Last edited:

rnlval

Member
Only if you ignore that, unlike any other multi-core CPU or GPU in a console or PC, the Cell BE was fairly unique in being truly multi-core: once each SPU had been kicked off with work, it ran on its own independently, whereas every other system runs under the control of the primary CPU's first core/thread.

The difference this makes is that a bottleneck on a system's primary core can slow every part of that system and cost efficiency, whereas the Cell BE was like having 7 standalone ASICs. Because each SPU had only a small amount of memory, the design tilted towards running algorithms deep enough to stay efficient, with data transfers that complemented the small amount of very low-latency local memory.
FYI, ATI Xenos has 8 concurrent contexts and 64 (hyper)threads for its 48 unified shaders. The lowest-latency data storage known to man is an SRAM register file, hence why GPGPUs have thousands of them.

For ATI Xenos (image attached):


CPU jobs can be kicked off from the Xenos GPU using MEMEXPORT.

AMD's GCN origins come from ATI Xenos.

----

Note that Amiga's Copper (short for co-processor) can run independently from the CPU since it can modify registers via DMA.

The 1990 Macintosh IIfx can take the Apple 8*24 GC NuBus video card, which includes an AMD Am29000 RISC processor running at 30 MHz (~22 MIPS), roughly 68040 @ 28 MHz-level performance; the Am29000 served as a video accelerator processor.
 
Last edited:

rnlval

Member
PS3 had the more powerful CPU, but weaker GPU when compared to xbox 360. It was also much more difficult to get the performance out of the CPU on PS3, so in general cross platform games looked or performed worse on PS3.
PS3 CELL's SPEs are "DSP-like", i.e. an SPE is not even at the level of the PPE's in-order general-purpose core.
 

rnlval

Member
I disagree completely. Again, Cell was build using the constrains of the time: 90nm, 234 million transistors and an area of about 221 square millimeters. Now we have 20,000 million transistors to play with. You don't have to fit an in-order core, you don't just need one PPE-type element. You don't need the EIB, you can swap it out for a cross-bar. You can beef up the SPE's.

Then again, a lot has changed, but the fact I'm trying to make is that heterogenous computing focusing on computational density with tightly integrated, high-bandwidth interconnects is a win in a closed box. The PS5 and XBox aren't that impressive compared to what's possible if you give a group of engineers like STI five years and millions of dollars with a tranistsor budget like that to make a game box. But those days are unfortunately over. Now, they are basically off-the-shelf mid-range PCs like Elon Musk just eluded to. It's all commodity. Fucking Apple is more exciting.

Again, as I said I posted a link earlier in this thread to a Peter Hofstee presentation where he shows Cell attaining good speed on OpenCL code verse hand-tuned Cell code, things today would be different.

And the in-order guTS derived PPE was an IBM thing. You could have done something else in Cell.



At the 28 nm process node, AMD's Jaguar was competitive with the ARM Cortex-A15 in chip area while holding a performance advantage over it.

For the PS4 and XBO era, IBM lost out to AMD's 2-for-1 (CPU and GPU from one vendor) deal.
 
Last edited:

Shut0wen

Member
As everyone knows by now, the infamous Cell CPU in the PS3 was really hard and time-consuming to code for. There is a YT video by Modern Vintage Gamer who goes into detail about what was involved. The amount of code required just to send one command was a lot more than a typical core would need.

We saw just how this affected the multiplatform games released on PS3, which ran a lot worse than on the 360 for the majority of the generation.
In response to the trouble developers were having with the Cell, Sony put a lot of effort into the ICE team to build the absolute best tools for taking advantage of the Cell and to help third-party development on the platform. From my understanding, the ICE team was drawn from Sony first-party teams such as Naughty Dog, GG and Santa Monica Studio.
By the end of the generation, Sony's internal teams were putting out games that were amongst the most impressive of the generation.
Each Sony studio developed its own internal game engine, built from the ground up to take advantage of the parallel processing the Cell offered.
As a result, their current and recent projects are extremely well coded and efficient on multicore processors, and their engines keep up with the best of them, including id Tech and Unreal Engine.
The hard graft these studios had to do when stuck with the Cell has given them a skill set and coding tools that benefit them today.

As someone who loves the tech side of things, I wonder what it could have been if Sony had stuck with the Cell and improved its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.

While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega etc.

This is my first thread on GAF, so go easy on me.
But their CPU killed off way more of their franchises though: SOCOM, MAG, ModNation Racers, Warhawk.
 