
The PS3's Cell made Sony's first-party studios what they are today.

As everyone knows by now, the infamous Cell CPU in the PS3 was really hard and time-consuming to code for. There is a YouTube video by Modern Vintage Gamer that goes into detail about what was involved. The amount of code required just to send one command was a lot more than a typical core would need.
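To give a rough idea of the boilerplate, here is a minimal sketch of what just pulling one chunk of data into an SPU looked like (going from memory of the Cell SDK's spu_mfcio.h; the mfc_* calls are the real SDK interface, but the buffer and function names are my own):

Code:
#include <spu_mfcio.h>

#define TAG 1

/* Local store buffer; DMA wants 16-byte (ideally 128-byte) alignment. */
static volatile char buffer[16384] __attribute__((aligned(128)));

/* Pull one 16 KB chunk from main memory into the SPU's local store. */
void fetch_chunk(unsigned long long ea)  /* ea = effective address in main RAM */
{
    /* Kick off the asynchronous DMA transfer (16 KB is the per-transfer max). */
    mfc_get(buffer, ea, sizeof(buffer), TAG, 0, 0);

    /* Select which tag group we want to wait on... */
    mfc_write_tag_mask(1 << TAG);

    /* ...and stall until that transfer completes. */
    mfc_read_tag_status_all();

    /* Only now is the buffer safe to touch. On a conventional core this
       whole dance is just a cached load. */
}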

We saw just how this affected the multiplatform games released on PlayStation, which ran a lot worse on the PS3 than on the 360 for the majority of the generation.
In response to the trouble developers were having with the Cell, Sony put a lot of effort into the ICE team to build the absolute best tools for taking advantage of the Cell and to help development of third-party games on the platform. From my understanding, the ICE team was drawn from Sony first-party teams such as Naughty Dog, Guerrilla Games and Santa Monica Studio.
By the end of the generation Sony's internal teams were putting out games that were amongst the most impressive of the generation.
Each Sony studio developed their own internal game engines, built from the ground up to take advantage of the parallel processing that the Cell offered.
As a result, their current and recent projects are extremely well coded and efficient on multicore processors, and their engines keep up with the best of them, including id Tech and Unreal Engine.
The hard graft these studios had to put in when stuck with the Cell has given them a skill set and coding tools that are benefiting them today.

As someone who loves the tech side of things, I wonder what the Cell could have been if Sony had stuck with it and fixed its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.

While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega, etc.

This is my first thread on GAF, so go easy on me.
 

SlimySnake

Flashless at the Golden Globes
It wasn't just Sony studios. By the end of the gen, almost everyone had figured out the Cell processor. Several studios were using MLAA on the PS3. Rockstar's GTA5 port ran pretty much on par with the 360, despite GTA4 having run way worse on the PS3.

I do agree that an upgraded Cell might have been better than the Jaguar CPUs, but the cost would've been higher, and who knows what would have happened to a $500 PS4.

The ps5 io is a fantastic and unique design. Its a shame no one is using it because this thing has the potential to be a game changer. But unlike the cell, no one seems to really want to extract the most of out it. Not even Sony’s own first party studios.
 

ManaByte

Member
The PS3 certainly set the tone for modern PlayStation, especially games like Uncharted and The Last of Us. It's crazy that Sony themselves don't have more reverence for that platform.
Well, the launch was a black eye for them. Tretton even said taking over Sony at the time was like being made captain of the Titanic.
 

bender

What time is it?
[image: giphy.gif]
 

Romulus

Member
Give those same Sony studios a more well-rounded, easier-to-develop-for machine and we would have seen even more, but I see your point. The PS3 was a complete mess of a machine: ridiculously restrictive RAM and a weak GPU, putting all the stress on a CPU that took devs years to figure out. They could have used that time somewhere else. All the PS3 games that people consider really impressive come down to things that don't need a Cell processor.
 
Last edited:

I Master l

Banned
As far as I understand, the Cell CPU provided some functionality that duplicated what GPUs provided at the time. Sony wanted to develop their own rasterizer, but that design failed, so they had to get a third-party GPU and selected Nvidia. I'm not sure what gains Sony would get from improving on the Cell's Frankenstein design.
 
Sony's mantra of making cinematic video games from the beginning of the PS3 era is why Sony is what it is today.
Yeah, it's Sony's thing, but the games from their top studios are very well optimised as well. They rarely have big framerate issues, for instance.
To get the best out of the Cell, you had to be able to utilise parallel cores and know how to split your work evenly across them.
I don't think there is any doubt that the internal Sony engines are developed to make this a priority.
 
As far as I understand, the Cell CPU provided some functionality that duplicated what GPUs provided at the time. Sony wanted to develop their own rasterizer, but that design failed, so they had to get a third-party GPU and selected Nvidia. I'm not sure what gains Sony would get from improving on the Cell's Frankenstein design.
I think originally Sony was going to have multiple Cells in the PS3, but then chose, for whatever reason, to go with one of them and a GPU.
 

Yoboman

Member
They were already headed that way, but the PS2 was a massive limiter on their ambitions.

Naughty Dog was doing some crazy stuff with the Jak games: the animation system, the first load-free open world, bump mapping that no other PS2 game was doing. People didn't really give it credit because of the cartoony art style, but it was no coincidence that they were in the lead graphically from their first PS3 game.

Polyphony was at the top of the game graphically in that era.

The Shadow of the Colossus team pulled off some insane lighting and animation feats that really shouldn't have even been attempted on PS2-era hardware.

Killzone wasn't a great game, but it was a graphical showpiece.
 

ParaSeoul

Member
The only way Sony's plan for the Cell would have worked is if Microsoft had not brought their A game that gen and actually given them a good fight. Their plan was to get devs to spend all their resources on the PS3 version and ignore the Xbox entirely.
 

nush

Gold Member
As everyone knows by now, the infamous Cell CPU in the PS3 was really hard and time-consuming to code for. There is a YouTube video by Modern Vintage Gamer that goes into detail about what was involved. The amount of code required just to send one command was a lot more than a typical core would need.

We saw just how this affected the multiplatform games released on PlayStation, which ran a lot worse on the PS3 than on the 360 for the majority of the generation.
In response to the trouble developers were having with the Cell, Sony put a lot of effort into the ICE team to build the absolute best tools for taking advantage of the Cell and to help development of third-party games on the platform. From my understanding, the ICE team was drawn from Sony first-party teams such as Naughty Dog, Guerrilla Games and Santa Monica Studio.
By the end of the generation Sony's internal teams were putting out games that were amongst the most impressive of the generation.
Each Sony studio developed their own internal game engines, built from the ground up to take advantage of the parallel processing that the Cell offered.
As a result, their current and recent projects are extremely well coded and efficient on multicore processors, and their engines keep up with the best of them, including id Tech and Unreal Engine.
The hard graft these studios had to put in when stuck with the Cell has given them a skill set and coding tools that are benefiting them today.

As someone who loves the tech side of things, I wonder what the Cell could have been if Sony had stuck with it and fixed its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.

While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega, etc.

This is my first thread on GAF, so go easy on me.

Most of these points also apply to the Sega Saturn. Back then the narrative was hard to code for = bad, but with the PS3 it's now hard to code for = good.
 

Dr.D00p

Gold Member
It's really annoying how many games are CPU-limited in the RPCS3 emulator, simply because studios had to shift so much of the rendering load onto the Cell, because Sony paired the PS3 with such a dog-shit GPU barely a year before launch.

Games which rely almost exclusively on that dog-shit GPU run pretty much flawlessly, neither CPU- nor GPU-limited, but as soon as you come across a game that started shifting stuff over to the Cell, which is pretty much all of the big triple-A stuff... well, watch those frame rates drop.
 

Drew1440

Member
Whatever benefit the Cell processor provided, the underpowered RSX took it away, with the Cell's SPEs having to assist with graphics rendering in order to get parity with the 360.

I wonder what the original Toshiba-designed GPU would have looked like. It was believed to be an iteration of the Graphics Synthesizer used in the PS2, with a few SPEs for rendering and a graphics core for rasterization, paired with eDRAM. My guess is they found performance to be very poor against the ATI cores Microsoft was going to use at the time and shelved it. Another issue with the PS2 graphics system was that it was too different from how other GPUs functioned, and Sony was aware of how poor some of the PS2 ports were, and of the issues raised in developer feedback.
Then again, had they delayed the PS3 and used a GPU like the GeForce 8800 GT, the PS3 would have wiped the floor with the 360 in the graphics department, but how much would such a console have cost? They were already pushing it with the stock PS3.
 

ZywyPL

Banned
I think originally Sony was going to have multiple Cells in the PS3, but then chose, for whatever reason, to go with one of them and a GPU.

Rumor says that initially the PS3 had two Cells and no GPU at all, but devs panicked as they had absolutely no idea how to use it, so Sony reached out to Nvidia very late into development and received basically a 6800GT with half the ROPs.

Whatever benefit the Cell processor provided, the underpowered RSX took it away, with the Cell's SPEs having to assist with graphics rendering in order to get parity with the 360.

It was actually the other way around: the Cell was supporting the GPU with the tasks the RSX couldn't handle. Same on the CPU side of things.

So, all in all, nothing a proper CPU+GPU combo couldn't do, hence Sony wasn't interested in continuing down this path for the PS4.
 
Most of these points also apply to the Sega Saturn. Back then the narrative was hard to code for = bad, but with the PS3 it's now hard to code for = good.
Not really. I think easier is always better, but what the Cell did for Sony devs is make them very good and efficient at parallel compute before it was the norm.
With the reduction in the ability to increase clock speeds linearly, the industry was always going to move to more cores rather than faster cores. To get enough performance out of the PS3 for games such as Killzone, God of War and TLOU, you had to be coding that Cell as effectively as possible.
The SPUs on the Cell were even harder to maximise than a conventional CPU core. They had to work in a synchronised way, otherwise it would make performance worse.

My point is that having to learn these coding methods and skills back in the PS3 days really gave their studios a head start. Their in-house engines were also developed with this requirement in mind.
I think there is a reason why Sony's main studios have kept their proprietary engines rather than adopting UE, for instance.
Personally, I think they are most likely better engines for CPU performance.
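To make the "split it evenly" point concrete, the general shape of the pattern looks something like this toy sketch (plain pthreads standing in for real SPU jobs; six workers because games had six SPEs to play with):

Code:
#include <pthread.h>
#include <stdio.h>

#define WORKERS 6        /* mirrors the six SPEs available to PS3 games */
#define ITEMS   1536

static float data[ITEMS];

typedef struct { int begin, end; } slice_t;

static void *worker(void *arg)
{
    slice_t *s = (slice_t *)arg;
    for (int i = s->begin; i < s->end; i++)
        data[i] = data[i] * 0.5f + 1.0f;   /* stand-in for real per-item work */
    return NULL;
}

int main(void)
{
    pthread_t threads[WORKERS];
    slice_t slices[WORKERS];
    int per = ITEMS / WORKERS;   /* equal-sized slices keep the workers in lockstep */

    for (int i = 0; i < WORKERS; i++) {
        slices[i].begin = i * per;
        slices[i].end   = (i == WORKERS - 1) ? ITEMS : (i + 1) * per;
        pthread_create(&threads[i], NULL, worker, &slices[i]);
    }
    for (int i = 0; i < WORKERS; i++)
        pthread_join(threads[i], NULL);   /* the sync point: nobody runs ahead */

    printf("done\n");
    return 0;
}

If one slice is much bigger than the others, every join waits on the slowest worker, which is exactly the "make performance worse" failure mode above.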
 

MonarchJT

Banned
Not really. I think easier is always better, but what the Cell did for Sony devs is make them very good and efficient at parallel compute before it was the norm.
With the reduction in the ability to increase clock speeds linearly, the industry was always going to move to more cores rather than faster cores. To get enough performance out of the PS3 for games such as Killzone, God of War and TLOU, you had to be coding that Cell as effectively as possible.
The SPUs on the Cell were even harder to maximise than a conventional CPU core. They had to work in a synchronised way, otherwise it would make performance worse.

My point is that having to learn these coding methods and skills back in the PS3 days really gave their studios a head start. Their in-house engines were also developed with this requirement in mind.
I think there is a reason why Sony's main studios have kept their proprietary engines rather than adopting UE, for instance.
Personally, I think they are most likely better engines for CPU performance.
The bold part is the opposite of the philosophy behind the PS5's design.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Rumor says that initially the PS3 had two Cells and no GPU at all, but devs panicked as they had absolutely no idea how to use it
Toshiba was making a pixel-shader-only GPU with gobs of eDRAM (with CELL taking over the vertex shading duties… innovative, but also taking the PS2 design and turning it up to 11; you can see where they wanted to go), but yes, it was ditched for RSX at the last minute, and the chip they got was bugged too.

Their approach could have worked had MS not launched early with such good HW performance (and architecture) and SW support.
The PS3, due to some issues, ended up feeling like less than the sum of its parts for a while, but devs who tamed CELL ended up liking it a lot.
 

nush

Gold Member
Cell did for Sony devs is make them very good and efficient at parallel compute before it was the norm.

The Saturn also had parallel coding way before it was the norm; that's one of the reasons it was difficult to code for. You're turning negatives into positives for a similar environment, just because of the platform. Both machines had developers who overcame those challenges and made great software, but not because the Cell was unique.
 

Panajev2001a

GAF's Pleasant Genius
The bold part is the opposite of the philosophy behind the PS5's design.
For the time, 3.2 GHz was fast for a console CPU clock too. 3.5 GHz now, on modern processes, versus what was available when the PS3 was being designed is also something that should not be hand-waved away.

The issue was hitting a power wall with the old approach of ever wider and more complex core designs. Faster clocks without compromises are always better, but that has not been an option for a long while now, hence why people are exploring simpler but more numerous cores as well as heterogeneous designs and system-integration optimisations (see the M1 SoC, for example).
 

winjer

Gold Member
The Cell CPU was a mistake for the most part. Yes, it had the advantage of being able to do some interesting things, and it was powerful for the time.
But the complexity created more problems than the advantages it brought. There is a clear reason why no one, not even Sony, made another console like that again.
It was a nightmare of a CPU to create games for, not only because of the complexity of the SPEs, but also because it was an in-order arch.
This increased the cost and time of making games for the platform. And studios that did not have the technical know-how and budget created sub-par versions of games.
And even today, the complexity of the Cell CPU still causes problems for Sony, as creating an emulator for PS3 games is so expensive and complex that Sony just gave up on it.

Now, I know some will point out that certain games did amazing things, but those were the exception, not the rule. And most of them were Sony first-party.

Sony should have ditched the Cell CPU and maybe gone with a setup similar to what MS did with the X360. Or even better, gone with an out-of-order x86 CPU from AMD or Intel.
This would have made developing games much easier and would have avoided all those terrible ports the PS3 had.
Also very important: not using that cut-down 7900GT for a GPU. That was an outdated GPU, using dedicated pixel and vertex shaders at a time when unified-shader architectures were being released.
The 7900GT, even for its time, was a bit weak at pixel shading. Even weaker than the X1900, which released at the same time and also had dedicated pixel and vertex shaders.
So the Cell had to pick up the slack from the 7900GT in most games.

[image: PrevGen-UT3.png]


Then there were also the issues with yields from producing the Cell, which forced the PS3 to be delayed by a year and made it so expensive at launch.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The Saturn also had parallel coding way before it was the norm; that's one of the reasons it was difficult to code for. You're turning negatives into positives for a similar environment, just because of the platform. Both machines had developers who overcame those challenges and made great software, but not because the Cell was unique.
Nobody said that twin SH-2s were a bad idea in and of themselves; people have problems with how they were integrated, how the SDK exposed them, and what maxing out that architecture got you versus maxing out the PSOne.

CELL as a processor forced the right patterns but was not designed to be completely general-purpose (despite the SPEs being very flexible). One could say that for the time it had a clever, efficient and flexible approach to vector computing and to how it could accelerate graphics, physics, sound, AI, etc. The way it was architected (PowerPC-based PPU, warts aside) and the patterns that sung on it once code was fine-tuned to it are the best practices people use to push the parallel GPU monsters of today.

Again, I do not remember many developers who pushed the Saturn hard and well singing its praises as a HW architecture, but developers who got the PS3 to sing seemed not that unhappy about CELL in and of itself.
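To make "the patterns" concrete, here is a toy example of the kind of layout CELL pushed people toward (my own sketch, not from any shipped engine): structure-of-arrays instead of array-of-structures, so a job can DMA one dense stream of exactly the fields it touches.

Code:
/* Array-of-structures: fields interleaved, so streaming just positions
   drags velocities and health across the bus too. */
struct entity_aos { float px, py, pz; float vx, vy, vz; int health; };

/* Structure-of-arrays: each field is its own contiguous block, which is
   exactly what an SPE wanted to DMA into local store (and what GPUs and
   SIMD units still prefer today). */
struct entities_soa {
    float *px, *py, *pz;
    float *vx, *vy, *vz;
    int   *health;
};

/* An integration job then touches only the position/velocity streams. */
void integrate(struct entities_soa *e, int n, float dt)
{
    for (int i = 0; i < n; i++) {
        e->px[i] += e->vx[i] * dt;
        e->py[i] += e->vy[i] * dt;
        e->pz[i] += e->vz[i] * dt;
    }
}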
 
Last edited:

nush

Gold Member
The Cell CPU was a mistake for the most part. Yes, it had the advantage of being able to do some interesting things, and it was powerful for the time.
But the complexity created more problems than the advantages it brought. There is a clear reason why no one, not even Sony, made another console like that again.

Wasn't Sony's long-term plan to have many products outside of gaming use the Cell? Then the PS3 showed that wasn't going to be feasible and they dropped the plan?
 

winjer

Gold Member
Wasn't Sony's long-term plan to have many products outside of gaming use the Cell? Then the PS3 showed that wasn't going to be feasible and they dropped the plan?

The Cell was a collaboration between several companies, including Sony and IBM.
And there were plans for it to have other uses besides the PS3, but its adoption was very limited.
Besides, by 2006 we had things like the G80 from Nvidia, which had more compute power. The Cell could do 179.2 GFLOPS of compute. An 8800 GTX could do 345.6 GFLOPS.
And by 2007 we had things like CUDA, which made programming for GPUs much easier. So for workstations, the Cell was already outdated for most uses.

But for games, the PS3 paradigm was the wrong choice. A strong CPU with a weak GPU is the wrong way of doing things.
And Sony paid the price that generation, almost losing it to the X360.
There is a reason why all modern consoles put greater emphasis on a powerful GPU than on the CPU.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The Cell was a collaboration between several companies, including Sony and IBM.
And there were plans for it to have other uses besides the PS3, but its adoption was very limited.
Besides, by 2006 we had things like the G80 from Nvidia, which had more compute power. The Cell could do 32 GFLOPS of compute.
You mean 200+ GFLOPS (8 FP ops per clock per SPU), which for that time allowed a lot of flexibility and control that took a while to reach GPU shaders (async compute and all).

An 8800 GTX could do 345.6 GFLOPS.
And by 2007 we had things like CUDA, which made programming for GPUs much easier. So for workstations, the Cell was already outdated for most uses.

But for games, the PS3 paradigm was the wrong choice. A strong CPU with a weak GPU is the wrong way of doing things.
And Sony paid the price that generation, almost losing it to the X360.
There is a reason why all modern consoles put greater emphasis on a powerful GPU than on the CPU.
The buggy RSX (some bugs in vertex processing took a long while to work around) was, to be fair, not their initial plan, nor was the weird FlexIO connection (super slow in one direction, leading to imbalances versus the original ideal scenario where CPU and GPU accessed each other's memory pools as if they were one).
 

M1chl

Currently Gif and Meme Champion
Jesus fucking Christ, just NO

First of all, developers are much more than just engine programmers. You wouldn't get much if everything depended on people who are good at micromanaging code and so on.

Second of all, the inclusion of a Zen 2 CPU, instead of some weak-ass netbook cores, shows that a "supercharged" architecture, meaning a CPU+GPU running CPU-like code, is bad. So that is a nail in the coffin for CELL-type programming, which was designed like that in one package (PPE+SPU).

Third of all, the hardness of the development meant that compromises had to be made, not that it proves anything or makes devs better. Devs get better because they have:

1) Good money from employer
2) Passion for programming

So no. What made the games from Sony studios better is passion for a good product, good artists, good writers, a good budget and, last but not least, good management.

The PS3 is a piece of shit and it's great that it's been left in the past.
 

winjer

Gold Member
You mean 200+ GFLOPS (8 FP ops per clock per SPU), which for that time allowed a lot of flexibility and control that took a while to reach GPU shaders (async compute and all).

Yes, I was not looking at the whole number for the PS3. It's actually 179.2 GFLOPS of FP32.
Much closer to an 8800 GTX.
Apologies for the error.
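For anyone wondering where the two figures come from, the back-of-the-envelope math (with a fused multiply-add counted as 2 ops, as usual) works out like this:

Code:
Cell (PS3): 7 usable SPEs x (4-wide FP32 SIMD x 2 ops FMA) x 3.2 GHz = 179.2 GFLOPS
8800 GTX:   128 stream processors x 2 ops (MAD) x 1.35 GHz           = 345.6 GFLOPS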
 
It made games that couldn't hold a stable 30fps? I don't think there's one PS3 exclusive that held 30 from beginning to end; even Killzone added motion blur to hide it. The PS3, IMO, is one of the worst consoles since the Virtual Boy: every third-party game ran like shit, Bethesda games were unplayable after 10 hours, and though it had some good exclusives, the majority had technical issues, except maybe Uncharted 2 and 3.
 

cireza

Banned
SEGA had been using two or more CPUs in their arcade boards twenty years before the PS3 was released.


The Sega Y Board had three 68K CPUs.

A more detailed list of all the embedded chips: https://segaretro.org/Sega_Y_Board

SEGA was used to this, and the architecture of the Saturn makes much more sense in that light. It was the norm for them.

The CELL certainly forced developers to deal with parallelism, and Sony, coming off the success of the PS1 and then the PS2, managed to push hardware that was very difficult to develop for twice in a row (the PS2, then the PS3).
 
Last edited:

squarealex

Member
No

The Cell is the worst CPU Sony ever made...

I'm still pretty sure that if the PS3 had had an EE2/GS2, Sony's first party would have done a far better job (more tricks, more HD or Full HD games, more new shaders).
Even though the first parties created their own shaders on the PS3 with the CELL (MLAA/physics, for example), the GPU is such fucking trash...
Even with the x86 CPU in the PS4... you can't believe RDR2 and TLOU2 were made on a laptop-class CPU and a GPU from 2011/2012...

The "perfect" hardware (for his time) still the PS2 for me..

EDIT: By worst, I especially mean how disappointed I am by how hard the Cell was to work with.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
SEGA had been using two or more CPUs in their arcade boards twenty years before the PS3 was released.


The Sega Y Board had three 68K CPUs.

A more detailed list of all the embedded chips: https://segaretro.org/Sega_Y_Board

SEGA was used to this, and the architecture of the Saturn makes much more sense in that light. It was the norm for them.

The CELL certainly forced developers to deal with parallelism, and Sony, coming off the success of the PS1 and then the PS2, managed to push hardware that was very difficult to develop for twice in a row (the PS2, then the PS3).
I think the argument is not that the PS3's CELL was the first attempt at parallel processing (I would not just count the number of processors in the system, though; the PSOne and PS2 had a lot of semi-autonomous cores), but that it went a bit further in terms of how autonomous each processor was and in how it let developers who handled data-oriented parallel algorithms efficiently get a great deal of performance.
 

Panajev2001a

GAF's Pleasant Genius
No

The Cell is the worst CPU Sony ever made...

I'm still pretty sure that if the PS3 had had an EE2/GS2, Sony's first party would have done a far better job (more tricks, more HD or Full HD games, more new shaders).
Even though the first parties created their own shaders on the PS3 with the CELL (MLAA/physics, for example), the GPU is such fucking trash...
Even with the x86 CPU in the PS4... you can't believe RDR2 and TLOU2 were made on a laptop-class CPU and a GPU from 2011/2012...

The "perfect" hardware (for its time) is still the PS2 for me.

EDIT: By worst, I especially mean how disappointed I am by how hard the Cell was to work with.
Considering that Toshiba promised to develop a GS on steroids with pixel shaders, why was CELL worse than the EE?

Was the PowerPC PPU much, much worse than the ~295 MHz R5900 MIPS CPU in the PS2? Was the presence of the sound chip and image decompressor/video decoder (the SPU2 and IPU units) a dealbreaker? Were the VUs, which could not DMA data in and out themselves, had 1/4 of the registers and 1/8 of the local memory (split into data and instructions to boot; I am also only looking at VU1, as VU0 had even less memory... I do like the VIFs, some people made crazy stuff with those programmable interfaces), could only deal with physical, not virtual, addresses, and had no way to talk to each other and sync autonomously, really better than the SPEs?
 
I agree, but not just because the Cell required more from devs; also because it put Sony in a position where they had to find new ways to make their console more attractive, so they started to invest heavily in their internal studios. Sony's bad decisions forced them into making some key good decisions.

The 360 was a lot cheaper, released one year earlier, and multiplatform games ran about the same on both consoles (or even better on the 360, especially early on). Sony also lost many key partnerships and exclusives to the 360 at that time.

The PS3's hardware was ambitious and had a lot going for it; it was very forward-thinking, but it was clearly a massive mistake that still gets in the way of things to this day (see PS3 games not running natively on PS5). The PS3 released one year later, was a lot more expensive, and still had a worse GPU than the 360 (I don't know whose fault that was, Nvidia's or Sony's, but that is by itself a massive failure).

Sony from 2009 onwards was at its best in a lot of ways, and that is what set the stage for the PS4. Today they are too conservative and focus too much on just their flagship releases. I'd prefer a little more variety from them: more new IPs, some smaller games, partnerships that result in interesting and unique games, and less reliance on moneyhats.
 
Last edited:

I Master l

Banned
Wasn't the PS3 the worst console ever in terms of performance per cost? $900 to make at launch, while the Xbox 360 cost half as much at the time.
 
Last edited:

MonarchJT

Banned
For the time, 3.2 GHz was fast for a console CPU clock too. 3.5 GHz now, on modern processes, versus what was available when the PS3 was being designed is also something that should not be hand-waved away.

The issue was hitting a power wall with the old approach of ever wider and more complex core designs. Faster clocks without compromises are always better, but that has not been an option for a long while now, hence why people are exploring simpler but more numerous cores as well as heterogeneous designs and system-integration optimisations (see the M1 SoC, for example).
Years ago, when I was young, all the chip manufacturers essentially chased every last MHz. Today, although they still manage to increase frequencies, development has obviously taken another road. As frequency increases, performance does not increase linearly, because speed differences between the various components leave the fastest ones waiting; and above all, neither does the TDP.

With this I am not saying that, on the same architecture, a higher frequency does not give better results; it does, 100%. But it is preferable (and in fact this is what all CPU and GPU manufacturers do) to increase parallelization, trying to divide the workload in the best possible way. Incidentally, not even this increases performance linearly, but the closer we get to a perfect subdivision of the workload between the cores/CUs, the closer we get to linear growth... let's say the limit is optimization... the advantages are absolutely clear. Obviously there is a limit to how far you can lower the frequency, even in a more parallel arch, before it is beaten by a system with fewer cores or CUs but a higher frequency.
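This is essentially Amdahl's law. A quick sketch of the math, assuming for illustration that 90% of the workload parallelizes:

Code:
#include <stdio.h>

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
   where p is the fraction of the workload that can be parallelized. */
int main(void)
{
    const double p = 0.90;   /* assumed parallel fraction, for illustration */
    for (int n = 1; n <= 16; n *= 2)
        printf("%2d cores: %.2fx speedup\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}

Doubling the cores never doubles the speedup (2 cores give 1.82x, 16 give only 6.40x here), which is exactly the sub-linear growth described above.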
 
Last edited: