
PS3 Cell made Sony's first party what they are today.

Fiber

Member
Cell was never really powerful.
The PS3's only real advantage was Blu-ray, which could hold pre-rendered scenes.
You could download God of War's fully pre-rendered cinematic scenes from warez sites and play them on a Pentium 4.

The pre-rendered scenes looked good thanks to the 50GB of disc space; they were never rendered on Cell.
 

Romulus

Member
PS3 did have an edge though



I disagree completely. Looking at the hardware now, it was complete trash even back then. And scaling back games for the 360 is hilarious considering most PS3 games suffered from horrible framerates, even late in the generation as the lead platform.
 
Last edited:

PaintTinJr

Member
..


I meant the market for the console itself. Chip-market at large would not have been negatively affected by different CPU paradigms, that'd be silly.
I had meant both too, in addition to the chip-market, because it would have been the graphics software that wouldn't have ported easily had there been a fork, much like Apple is intentionally doing IMO to blunt any regulation that would open up access to their general-purpose computing platforms (mobile, tablet, desktop) and threaten their license fees.
As mentioned earlier in this very thread - original Reality Synthesizer was a completely different paradigm from 'more SPEs everywhere'. It wasn't a bad approach either - but the engineering realities (or capabilities of teams building it) did not result in a realistic path to a commercial product. And like I said - the moment Sony killed that project, the doors to future custom-built GPUs were pretty much gone, especially given how disastrous PS3 turned out to be financially.
I also wouldn't over-subscribe to what Cell did - we eventually got to vast amounts of general-purpose compute anyway with a GPU-centric approach. It's the approach to rasterization that could have been radically different (or at least more diverse).
I still don't subscribe to that, based on how successful Roadrunner was for a decade in performance/power efficiency - it would have won every large US government compute contract if it had continued to be the most performant and most power-efficient option, at the expense of the Intel/Nvidia status quo. We only got to the point where the GTX 200 series became more flexible and performant across a wider range of uses because Nvidia had to compete with the Cell BE's ability to massively accelerate algorithms that didn't accelerate well on CPUs, on older GPUs, or on a combination of the two in tandem. A Cell BE 2 finished R&D, IIRC, from the info that was available around 2010, but Sony never used it. So the timeline for the RSX inclusion killing off the Cell BE doesn't really fit IMO - Toshiba would have been ready to provide their new RSX component to complement the STI group's Cell BE 2 if needed, I suspect - and from its inception the Cell BE was expected to be power-efficient enough for ARM's wheelhouse of smartphones/tablets/TVs/set-top boxes/IoT, etc., but it failed to match ARM's power/perf gains by the time Cell BE 2 would have been ready to use in that capacity. The Toshiba Regza Cell BE TVs weren't as power-efficient as hoped - despite providing amazing TV DSP - which was an earlier indicator of them missing their target IMO.
 
As mentioned earlier in this very thread - original Reality Synthesizer was a completely different paradigm from 'more SPEs everywhere'. It wasn't a bad approach either - but the engineering realities (or capabilities of teams building it) did not result in a realistic path to a commercial product. And like I said - the moment Sony killed that project, the doors to future custom-built GPUs were pretty much gone, especially given how disastrous PS3 turned out to be financially.
I also wouldn't over-subscribe to what Cell did - we eventually got to vast amounts of general-purpose compute anyway with a GPU-centric approach. It's the approach to rasterization that could have been radically different (or at least more diverse).

I just wanted to reiterate what Faf is saying.

The Reality Synthesizer and PS2 platform evolution through the GS Cube concepts would have been a drastic departure from what we saw in terms of rasterization and shading and memory hierarchies.

I would differ though in that Cell was a brilliant idea to utilize available die area on computational density and find an optimal region on the curve for a game-box on 90nm, if you graph out computational flexibility versus performance. Now, things are completely different and we shouldn't measure our paradigm of today's tens-of-billions-of-transistor designs against them. Keep constraints in mind.
 
Last edited:
Cell was never really powerful.

Here we have Cell outperforming the XBox360 CPU by 3X and keeping pace with the next-generation CPUs using much more advanced lithography.

Care to rephrase that assessment?!
 

squarealex

Member
I don't think so

A better GPU with a good CPU on PS3 and I'm pretty sure we would have gotten more games (like inFamous with better graphics) from first party, and especially more games from Japan.

Don't forget many games on PS3 used FMV, like all the Uncharted games, TLOU and God of War III (even FFXIII, with its in-engine renders).

Thanks to Cell? No... thanks to Blu-ray.
 
Last edited:

Romulus

Member
Here we have Cell outperforming the XBox360 CPU by 3X and keeping pace with the next-generation CPUs using much more advanced lithography.

Care to rephrase that assessment?!


Which translated to nothing in real-world results. Anyone can pull graphs out of their butt, it means nothing. Their best developers could rarely get decent framerates on anything. Multiplatforms sucked even harder. Even when the ps3 was the lead platform it could barely edge the 360, and the 360 wasn't even anything special.
 

Romulus

Member
Just a reminder of CELL powa. Best devs on the planet struggled with that thing.

God of War 3 has a fixed camera and still struggled hard.

Killzone 3 (KZ2 was even worse). All the player is doing is moving down linear paths and it's still struggling to hold 30fps.

GT5

Uncharted 3

Last of Us. Just walking around and doing absolutely nothing, dropping frames. Top-tier ND at the peak of PS3 development.

The point is we have actual evidence that it could not handle the games that were on it. In order to run those games, further downgrades would have been needed to hit even a bare-bones 30fps, Infamous included.
 
Last edited:

ReBurn

Gold Member
Here we have Cell outperforming the XBox360 CPU by 3X and keeping pace with the next-generation CPUs using much more advanced lithography.

Care to rephrase that assessment?!
It's interesting because in real world use cases (and not numbers for PowerPoint slides) most third party games ran better on Xbox 360 than they did on PS3. Cell may have been a more powerful CPU but it was hard to develop for and it was constrained by the rest of the PS3 architecture. Raw power doesn't do any good if you can't access it. So theoretically? Yeah this is probably accurate. Practically? Only Sony's first party ever got a real boost out of it.

In reality Cell's power proved to be largely irrelevant and it was pretty much a disaster for Sony. It ended up costing Sony nearly all of their gaming profit from the prior two generations and it ended up getting Crazy Ken booted in favor of Kaz to run the gaming business and Cerny's more practical approach to hardware architecture. But Sony first party was true wizardry at times during that generation.
 
If that's the case we need to go back and blow up the lab with the Cell in it in 2005. Jak (and Ratchet and Clank's original humor) would have been saved..
 

ReBurn

Gold Member
It definitely got developers on multi threaded coding and data management before others. Your coding had to be super efficient
Console game developers were doing multithreaded development on Xbox 360 a year before PS3 released. The IBM Xenon CPU in the 360 was 3 SMT cores, 2 threads per core and the cores were a variation of the PPE cores in the Cell. Multithreading in consumer desktop CPU's had been around since Pentium 4 released with hyperthreaded models in 2002.
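Just to illustrate the kind of work splitting I mean, here is a generic POSIX-threads sketch (nothing from either console's actual SDK; the six-worker split is only an assumption to mirror Xenon's 3 cores x 2 hardware threads):
Code:
/* Generic illustration only: plain POSIX threads, not code from either
   console's SDK. Each worker updates its own slice of an entity array,
   the kind of data-parallel split being described. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS  6      /* assumption: mirror 3 cores x 2 hardware threads */
#define NUM_ENTITIES 6000

static float positions[NUM_ENTITIES];
static float velocities[NUM_ENTITIES];

typedef struct { int begin; int end; float dt; } job_t;

/* worker: integrate positions for one slice of the entity array */
static void *update_slice(void *arg)
{
    job_t *job = (job_t *)arg;
    for (int i = job->begin; i < job->end; i++)
        positions[i] += velocities[i] * job->dt;
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    job_t jobs[NUM_WORKERS];
    const int slice = NUM_ENTITIES / NUM_WORKERS;

    for (int w = 0; w < NUM_WORKERS; w++) {
        jobs[w].begin = w * slice;
        jobs[w].end   = (w == NUM_WORKERS - 1) ? NUM_ENTITIES : (w + 1) * slice;
        jobs[w].dt    = 1.0f / 30.0f;
        pthread_create(&workers[w], NULL, update_slice, &jobs[w]);
    }
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(workers[w], NULL);

    printf("first entity position: %f\n", positions[0]);
    return 0;
}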

A lot of the ability for devs to write efficient code really came down to the SDKs that were available to developers. I'd be surprised if they were all able to code straight to the metal like they had in previous generations, considering the complexity of either CPU or GPU. There was a lot of criticism of the early tools Sony provided to developers for PS3. Sony's dev teams had the luxury of access to the hardware engineers that third parties didn't have.

The good thing is that Sony learned from their mistakes of that generation. It helped them deliver two solid, developer friendly platforms with PS4 and PS5.
 
Last edited:

PaintTinJr

Member
Console game developers were doing multithreaded development on Xbox 360 a year before PS3 released. The IBM Xenon CPU in the 360 was 3 SMT cores, 2 threads per core and the cores were a variation of the PPE cores in the Cell. Multithreading in consumer desktop CPU's had been around since Pentium 4 released with hyperthreaded models in 2002.

A lot of the ability for devs to write efficient code really came down to the SDKs that were available to developers. I'd be surprised if they were all able to code straight to the metal like they had in previous generations, considering the complexity of either CPU or GPU. There was a lot of criticism of the early tools Sony provided to developers for PS3. Sony's dev teams had the luxury of access to the hardware engineers that third parties didn't have.

...
Which games on the 360?

We heard directly from Epic in the Too Human lawsuit that graphics rendering on the other PPE cores wasn't available until much later, and it only made it into Gears because it was a beta feature they were still developing when Silicon Knights accused them of withholding the feature to gain an unfair sales advantage for Gears over games like Too Human, IIRC.

Sweeney was also on record saying the PS3 was easy - initially - to develop for, because developers could continue to use it as a single-core PC CPU plus a high-end graphics card, ignoring the SPUs at the beginning, which was the norm at the time. For the first 2 years of the Xbox 360's life, games were largely single-core and GPU - unless their engine had had a recent rewrite, like the one Capcom used for Lost Planet - because they were cross-gen ports from the original Xbox.

Had you bothered to investigate the Cell BE's publicly available SDK at the time with the PS3's OtherOS, which was extensively documented, or read the developer info published by Insomniac with advanced examples, you would know that developers had the same access to the Cell BE as using assembly language with any CPU - because the SPUs could still run the same code as the PPU, just slowly unless it was optimised for the SPUs - and the RSX provided developer-level access the same as any other console. So it was completely to the metal.
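To make it concrete what "optimised to the SPUs" actually meant, here is a rough sketch of the classic SPU-side pattern - DMA a chunk into the 256KB local store, crunch it with the vector intrinsics, DMA it back out. The intrinsic names follow the Cell SDK's spu_mfcio.h/spu_intrinsics.h headers as I remember them, so treat the exact signatures as approximate rather than a copy-paste listing:
Code:
/* Illustration only - SPU-side sketch of the get/compute/put pattern.
   Intrinsic names follow the Cell SDK headers from memory; treat the exact
   signatures as approximate. Built with spu-gcc, not a normal compiler. */
#include <spu_intrinsics.h>
#include <spu_mfcio.h>

#define CHUNK 4096                      /* bytes per DMA transfer */
#define TAG   1                         /* DMA tag used for completion waits */

/* local-store buffer; DMA likes 128-byte alignment */
static float buf[CHUNK / sizeof(float)] __attribute__((aligned(128)));

int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    (void)speid; (void)envp;

    /* 1. pull a chunk from main memory (effective address passed in argp)
          into local store, then block until that DMA tag completes */
    mfc_get(buf, argp, CHUNK, TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();

    /* 2. process it 4 floats at a time in the SPU vector registers */
    vec_float4 scale = spu_splats(2.0f);
    vec_float4 *v = (vec_float4 *)buf;
    for (unsigned int i = 0; i < CHUNK / sizeof(vec_float4); i++)
        v[i] = spu_mul(v[i], scale);

    /* 3. push the results back out to main memory and wait again */
    mfc_put(buf, argp, CHUNK, TAG, 0, 0);
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();
    return 0;
}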
 
Last edited:

Stooky

Member
Console game developers were doing multithreaded development on Xbox 360 a year before PS3 released. The IBM Xenon CPU in the 360 was 3 SMT cores, 2 threads per core and the cores were a variation of the PPE cores in the Cell. Multithreading in consumer desktop CPU's had been around since Pentium 4 released with hyperthreaded models in 2002.

A lot of the ability for devs to write efficient code really came down to to the SDK's that were available to developers. I'd be surprised if they were all able to code straight to the metal like they had in previous generations considering the complexity of either CPU or GPU. There was a lot of criticism of the early tools Sony provided to developers for PS3. Sony's dev teams had the luxury of access to the hardware engineers that third parties didn't have.

The good thing is that Sony learned from their mistakes of that generation. It helped them deliver two solid, developer friendly platforms with PS4 and PS5.
Not the same as coding the Cell SPEs. You had to code to the metal to get good performance from the PS3. SDKs wouldn't get you there alone. The PS3's Cell SPEs were a beast, nothing like the 360's hardware.
 
Last edited:
It's interesting because in real world use cases (and not numbers for PowerPoint slides) most third party games ran better on Xbox 360 than they did on PS3. Cell may have been a more powerful CPU but it was hard to develop for and it was constrained by the rest of the PS3 architecture. Raw power doesn't do any good if you can't access it. So theoretically? Yeah this is probably accurate. Practically? Only Sony's first party ever got a real boost out of it.

In reality Cell's power proved to be largely irrelevant and it was pretty much a disaster for Sony. It ended up costing Sony nearly all of their gaming profit from the prior two generations and it ended up getting Crazy Ken booted in favor of Kaz to run the gaming business and Cerny's more practical approach to hardware architecture. But Sony first party was true wizardry at times during that generation.

Faf or Panajev could speak to the programming far better than I, but from an architectural design point of view I stick by my assessment.

Again, many are falling into confusing [output] - which is a variable dependent on many things, from developer time investment and managerial prerogatives to corporate strategy and market economics - with what the hardware is capable of.

Let's look at this on several levels:

As a system: PS3 as sold was not optimal - I agree, and have said so throughout this thread. Ideally they either would have diverged with the RS and really spiced up the development situation or, if they went the moderate route, there was the G80, which was aligned with the launch window but would have needed more heavy, early investment -- so they got nVidia-fucked and went with what they could get and afford in time and cost. It sucks for the tech-forward crowd.

As a 'CPU': Cell was a neat and novel solution to the problems faced at that time. It widely outperformed the competition -- it was literally a generation ahead of its time, see the numbers -- and if it wasn't covering up for systemic failures with the RSX, it would have been able to apply 200 Gflop/sec to accelerate additional lighting and deferred shading, or advanced physics interactions, or AI, or post-processing, or whatever. Again, this is just a fact in objective reality; its computational density is a nice mix between CPUs and GPUs on the curve I described earlier and, for its time, was really neat. Now things have shifted with transistor budgets and architecture, and GPUs have assumed this role wonderfully, but again, you need to stay in a 2001-2005 mindset.

This is just a fact: Cell made it into supercomputing clusters on the Top500 (and dominated the Green500!); nobody in their right mind was using the XBox CPU.

EDIT: And my understanding is that the parts of Cell which fundamentally sucked basically sucked times 3 on the XBox CPU. The SPUs are pretty cut-and-dry and a super-set of the VUs which developers had on the EE, free of the hardwired idiosyncrasies of VU0, for example. And while their ISA wasn't overly verbose, I seem to have a memory of Gschwind and those guys basically going for the most optimal bang-for-buck instructions and chopping everything else out; they do their job. It's IBM's PPE which, correct me if wrong, was built on IBM Austin's work with the guTS architecture, that was basically lacking for game logic and the branchy code and such.
 
Last edited:
Not the same as coding the Cell SPEs. You had to code to the metal to get good performance from the PS3. SDKs wouldn't get you there alone. The PS3's Cell SPEs were a beast, nothing like the 360's hardware.

Did you work on it?

Again, this was the 2001-2005 timeframe. This wasn't modern-day where we have CUDA and OpenCL and TensorFlow, etc. CUDA didn't even exist until 2007. I'm an idiot and have worked in Quant Finance and now Neuroscience and run things daily on GPUs easily. In 2005 this was not so. Everything computationally dense that was run in real-time was basically relegated to 3D graphics (outside of the big supercomputing physics projects) and they were basically run at a lower-level when possible. Not everything, but many things. If I was doing what I do now in 2005, I'd be working on Cell for the speed-up, no question.

And when OpenCL got off the ground later, an example showed it would basically run at 80-90% of hand-coded Cell code as per a Peter Hofstee presentation that I posted earlier in this thread. Which isn't that bad. So, again, the toolsets just didn't exist. You need to put yourself into the period when saying these things.

Hopefully others can further explain.
 
Last edited:

LordOfChaos

Member
I love that Cell was so weird that we're still debating it in 2022. We're not even really debating what's in our current year old consoles now.

I would have loved to see that alternate universe where end of life Cell, when developers knew how to use it, wasn't just making up for the RSX being lackluster compared to Xenos. Something with unified shaders that could adjust to a scene's mix, unified memory or at least better made for FlexIO, perhaps eDRAM.

As interesting as it was, ultimately it was not a great choice for a gaming console imo. Or at least, not for one that didn't dominate like the PS2 did. It wasn't developer laziness; people vastly underestimate the explosion in code complexity jumping to a Cell SPU.

You take these 60 lines of code targeting general processors, with prefetching and caches and the stuff that makes stuff "automatically" fast:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* ... */

/* the graph */
vertex_t *G;

/* number of vertices in the graph */
unsigned card_V;

/* root vertex (where the visit starts) */
unsigned root;

void parse_input( int argc, char** argv );

int main(int argc, char **argv)
{
  unsigned *Q, *Q_next, *marked;
  unsigned  Q_size=0, Q_next_size=0;
  unsigned  level = 0;

  parse_input(argc, argv);
  graph_load();

  Q      = (unsigned *) calloc(card_V, sizeof(unsigned));
  Q_next = (unsigned *) calloc(card_V, sizeof(unsigned));
  marked = (unsigned *) calloc(card_V, sizeof(unsigned));

  Q[0]   = root;
  Q_size = 1;
  while (Q_size != 0)
    {
      /* scanning all vertices in queue Q */
      unsigned Q_index;
      for ( Q_index=0; Q_index<Q_size; Q_index++ )
        {
          const unsigned vertex = Q[Q_index];
          const unsigned length = G[vertex].length;
          /* scanning each neighbor of each vertex */
          unsigned i;
          for ( i=0; i<length; i++ )
            {
              const unsigned neighbor = G[vertex].neighbors[i];
              if ( !marked[neighbor] )
                {
                  /* mark the neighbor */
                  marked[neighbor]      = TRUE;
                  /* enqueue it to Q_next */
                  Q_next[Q_next_size++] = neighbor;
                }
            }
        }
      level++;
      /* swap current and next-level queues */
      unsigned *swap_tmp;
      swap_tmp    = Q;
      Q           = Q_next;
      Q_next      = swap_tmp;
      Q_size      = Q_next_size;
      Q_next_size = 0;
    }
  return 0;
}
The optimized example for SPUs, in turn, seems to have been deleted from the internet with age, but once you unroll the loops and do everything that makes an SPU fast, you end up hand-coding 1200 lines of code vs 60.
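The original optimized listing is gone, but just to give a flavour of the first step of that transformation, here is my own hand-rolled illustration (not the IBM version) of unrolling the inner neighbour loop from the listing above by four - and this is before the DMA into local store, the SIMD compares and the software pipelining that push it towards those 1200 lines:
Code:
/* Illustration only (not the IBM SPU version): a drop-in replacement for the
   inner neighbour loop in the listing above, manually unrolled by four. */
unsigned i = 0;
for ( ; i + 4 <= length; i += 4 )
  {
    const unsigned n0 = G[vertex].neighbors[i + 0];
    const unsigned n1 = G[vertex].neighbors[i + 1];
    const unsigned n2 = G[vertex].neighbors[i + 2];
    const unsigned n3 = G[vertex].neighbors[i + 3];
    if ( !marked[n0] ) { marked[n0] = TRUE; Q_next[Q_next_size++] = n0; }
    if ( !marked[n1] ) { marked[n1] = TRUE; Q_next[Q_next_size++] = n1; }
    if ( !marked[n2] ) { marked[n2] = TRUE; Q_next[Q_next_size++] = n2; }
    if ( !marked[n3] ) { marked[n3] = TRUE; Q_next[Q_next_size++] = n3; }
  }
/* remainder loop for neighbour counts that aren't a multiple of four */
for ( ; i < length; i++ )
  {
    const unsigned n = G[vertex].neighbors[i];
    if ( !marked[n] ) { marked[n] = TRUE; Q_next[Q_next_size++] = n; }
  }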

Time is money in development. If the PS3 had been the only game in town, or if it had been paired with a better GPU that could do the start-to-mid-generation heavy lifting while SPE programming got off the ground, it might have been interesting, but it was ultimately too much of a complication for most developers, though it was ahead of its time vs OpenCL, and libGCM was even the first modern low level graphics API with low draw call overhead

Sony learned all this well. Sometimes, by being too smart or too complicated, you can just outsmart yourself.

The performance though? On a Pentium 4 HT running at 3.4 GHz, this algorithm is able to check 24-million edges per second. On the Cell, at the end of our optimization, we achieved a performance of 538-million edges per second.
 
Last edited:

Romulus

Member
I love that Cell was so weird that we're still debating it in 2022. We're not even really debating what's in our current year old consoles now.


Mostly because with exotic hardware, you can claim said hardware had "untapped" potential without really needing to provide any evidence. Most consoles were untapped to an extent, but with ps3, that card can be played to death because of its developmental woes. PS3 is the king of graph charts and ridiculous numbers that accounted for absolutely zero. Their exclusives struggled to hit 30fps with the best devs on the planet.
 
It wasn't just Sony studios. By the end of the gen, almost everyone had figured out the Cell processor. Several studios were using MLAA on the PS3. Rockstar's GTA5 port ran pretty much on par with the 360 version, despite GTA4 running way better on 360 than on PS3.

I do agree that an upgraded Cell might have been better than the Jaguar CPUs, but the cost would have been higher, and who knows what would have happened to a $500 PS4.

The PS5 IO is a fantastic and unique design. It's a shame no one is using it, because this thing has the potential to be a game changer. But unlike the Cell, no one seems to really want to extract the most out of it. Not even Sony's own first party studios.
Give them time. When the PS4 is out of the picture, you will retract your words, you can bet on it. We are only two years in, and you have seen what they can do with PS5-only games.
The first one, PS5 Ratchet and Clank Rift Apart, only showed the tip of what the PS5 can do... you understand what a learning curve is, don't you?

One thing is for sure: Ken Kutaragi was a great hardware engineer, he developed the PS1 and the PS2, but he got too cocky with the PS3 and did not think about the developers, for whom coding for the Cell would be a pain in the ass.
 
Last edited:

Stooky

Member
Did you work on it?

Again, this was the 2001-2005 timeframe. This wasn't modern-day where we have CUDA and OpenCL and TensorFlow, etc. CUDA didn't even exist until 2007. I'm an idiot and have worked in Quant Finance and now Neuroscience and run things daily on GPUs easily. In 2005 this was not so. Everything computationally dense that was run in real-time was basically relegated to 3D graphics (outside of the big supercomputing physics projects) and they were basically run at a lower-level when possible. Not everything, but many things. If I was doing what I do now in 2005, I'd be working on Cell for the speed-up, no question.

And when OpenCL got off the ground later, an example showed it would basically run at 80-90% of hand-coded Cell code as per a Peter Hofstee presentation that I posted earlier in this thread. Which isn't that bad. So, again, the toolsets just didn't exist. You need to put yourself into the period when saying these things.

Hopefully others can further explain.
I did, loosely, when I optimized my work for a few PS3 projects. There was a lot of back and forth with the programmers, so I got a good understanding of the jobs the SPUs were handling. I don't know enough technically to get into the details. We would talk a lot about 360 and PS3 development.
 
Last edited:

Blade2.0

Member
The SPUs of the Cell were kind of like the cores in chips nowadays, right? Kind of crazy how far ahead the Cell was in terms of chip design. This was when manufacturers thought they'd just perpetually up the GHz in a chip.
 

Fafalada

Fafracer forever
I had meant both too, in addition to the chip-market, because it would have been the graphics software that wouldn't have ported easily had there been a fork
Graphics has been through multiple forks anyway (and we're watching in realtime an attempt to create another one - and I'm not referring to Raytracing). The impact might have happened on the companies involved, but the market itself would continue innovating, as it always has.

A Cell BE 2 finished R&D, IIRC, from the info that was available around 2010, but Sony never used it. So the timeline for the RSX inclusion killing off the Cell BE doesn't really fit IMO - Toshiba would have been ready to provide their new RSX component to complement the STI group's Cell BE 2 if needed, I suspect
I said it killed off Sony's ambitions to use a custom-developed chipset for their consoles. Obviously, they 'officially' switched to semi-custom chips somewhere around 2008 when Vita development started proper, but that's an aside.
As for 'what-if' scenarios like 'if NVidia failed to compete'? We might as well ask 'what if Larrabee, or even better - Talisman - actually shipped a product/succeeded?' I mean, any what-if like that can drastically change history, but that's not what happened.

I would differ though in that Cell was a brilliant idea to utilize available die area on computational density and find an optimal region on the curve for a game-box on 90nm, if you graph out computational flexibility versus performance. Now, things are completely different and we shouldn't measure our paradigm of today's tens-of-billions-of-transistor designs against them. Keep constraints in mind.
Yea sorry, I probably could have phrased that bit better. I was mainly alluding to the fact that the path to massively parallel, general-purpose compute was something inevitable by then; it was just a question of timelines.
But indeed Cell was substantially ahead of the curve there. I.e. the GPU market was nearly 2 decades late (1.5 if not counting the PS2) to deliver with mesh shaders what we could do in the early 00s on these highly-vectorized chipsets. And that's not the only aspect where the GPU market lagged - had we received PS3 Cell with the memory subsystem as originally intended, we could have 'properly' solved shadows over a decade ahead of the Raytracing acceleration that does the same today. Etc.
Though I still resent IBM engineers for that ISA - it was tough to take in after working with Allegrex/VFPU, or Emotion Engine/VUs.

and libGCM was even the first modern low level graphics API with low draw call overhead
Hey now - most of us had written our own low-level graphics APIs on the PS2. Ok not all of them had low-draw call overhead but... 🤭
 
Last edited:

ReBurn

Gold Member
Not the same as coding cell spe. You had to code to the metal to get good performance from ps3. Sdks wouldn’t get you there alone. ps3 cell spe were a beast nothing like 360s hardware.
I never said it was the same as coding the SPE. I said the Xenon was a 3 core/6 thread CPU and multithreaded development was happening on 360.

For all that Cell was a beast, its power went largely underutilized. The 360 consistently delivered better-performing multi-platform games. It doesn't matter how much power a CPU has if that power can't be utilized. So yeah, theoretically the Cell was more powerful. But practically most third party developers got better performance out of the 360. Having to code to the metal was a limitation of the development environment Sony provided. Like I said, Sony learned from their mistakes with PS3 and delivered more developer friendly platforms with PS4 and PS5.
 
Mostly because with exotic hardware, you can claim said hardware had "untapped" potential without really needing to provide any evidence.

There are tools that show you how much of the hardware is in use during a game. For the PS3 in particular there are very interesting interviews with developers about the techniques they used with the SPUs in their games. And it's not only the PS3 - even the Xbox 360's CPU was used to support its GPU, especially by the end of the generation. There are also differences in architecture that may favor one console or the other; for example, deferred rendering typically worked better on PS3 because of its memory arrangement compared to the eDRAM on the Xbox 360.


Their exclusives struggled to hit 30fps with the best devs on the planet.

I don't get it. Are you suggesting that if a game runs at 30 fps it is only because of the CPU? Are you suggesting that a game at 60 fps is automatically better tech-wise than a different game at 30 fps just because it has more fps?
 
Last edited:

Romulus

Member
There are tools that show you how much of the hardware is in use during a game. For the PS3 in particular there are very interesting interviews with developers about the techniques they used with the SPUs in their games. And it's not only the PS3 - even the Xbox 360's CPU was used to support its GPU, especially by the end of the generation. There are also differences in architecture that may favor one console or the other; for example, deferred rendering typically worked better on PS3 because of its memory arrangement compared to the eDRAM on the Xbox 360.




I don't get it. Are you suggesting that if a game runs at 30 fps it is only because of the CPU? Are you suggesting that a game at 60 fps is automatically better tech-wise than a different game at 30 fps just because it has more fps?

I haven't seen much actual evidence of these gains working in games. People can argue PS3 multiplatform titles near the end of the generation had only slight differences, but they outright ignore the often massive differences in the beginning and middle in favor of 360. Those are just "Ps3 development woes."

I'm saying they couldn't even hit bare-bones targets of 30fps. They needed to downgrade their games to achieve the absolute bottom of the barrel and still failed.
 
I haven't seen much actual evidence of these gains working in games.

There are lots of articles about the tech using the SPUs and specific techniques on PS3 - in fact a lot of them are on Eurogamer.

People can argue PS3 multiplatform titles near the end of the generation had only slight differences, but they outright ignore the often massive differences in the beginning and middle in favor of 360. Those are just "Ps3 development woes."

But is that relevant for a discussion about the potential of the hardware? The only thing that tells you is that the system was difficult to work with for most devs - is anyone disputing that?

By the end of the generation the games were very complex and used a lot of techniques compared to the first games, and that complexity cannot be achieved without a more proper use of the hardware (as in "underused hardware"). The Xbox 360 also had improvements during its lifecycle (not only the PS3) and used a lot of very specific techniques too. For example, you can compare both consoles on GTAIV, but whatever you conclude may change with GTAV, which is a much more complex game made by the same devs and in the same genre. Both consoles achieved a higher level of complexity in their games - isn't that more telling about the capacity of the hardware than a list of ports where devs had less knowledge of one system than the other at the time?



I'm saying they couldn't even hit bare-bones targets of 30fps. They needed to downgrade their games to achieve the absolute bottom of the barrel and still failed.

Sorry, but I still don't get it. Their exclusive games were top-notch compared to the competition back then. You are comparing them against tech demos and videos of their own development, not against actual games. That is like saying Zelda: Ocarina of Time was an awful game because it doesn't look like the Zelda 64 tech demos. The end game looked really good and competitive against the games available at its release, just like the PS3-exclusive Sony games.
 
Last edited:

Stooky

Member
I never said it was the same as coding the SPE. I said the Xenon was a 3 core/6 thread CPU and multithreaded development was happening on 360.

For all that Cell was a beast, its power went largely underutilized. The 360 consistently delivered better-performing multi-platform games. It doesn't matter how much power a CPU has if that power can't be utilized. So yeah, theoretically the Cell was more powerful. But practically most third party developers got better performance out of the 360. Having to code to the metal was a limitation of the development environment Sony provided. Like I said, Sony learned from their mistakes with PS3 and delivered more developer friendly platforms with PS4 and PS5.
You're wrong about coding to the metal, especially on a closed system like a console. Some things just work better coded that way. It's something Xbox devs complained about not having the capability to do. It's one of the things that made Sony first party stand out from the 360. From my knowledge, the 360's biggest advantage was its GPU.
 
Last edited:
Graphics has been through multiple forks anyway (and we're watching in realtime an attempt to create another one - and I'm not referring to Raytracing). The impact might have happened on the companies involved, but the market itself would continue innovating, as it always has.

Just curious, I haven't been as deep into rendering as I used to be, what are you referring to? The advent of ML approaches through the pipeline all the way to Imagen and DALL-E2?

I said it killed off Sony's ambitions to use a custom-developed chipset for their consoles. Obviously, they 'officially' switched to semi-custom chips somewhere around 2008 when Vita development started proper, but that's an aside.

As for 'what-if' scenarios like 'if NVidia failed to compete'? We might as well ask 'what if Larrabee, or even better - Talisman - actually shipped a product/succeeded?' I mean, any what-if like that can drastically change history, but that's not what happened.

Very well said. Intel and Larrabee *could* have shaken things up and Talisman would have changed everything. We shouldn't forget these projects and that rendering evolution has been pretty conservative and step-wise compared to what could have been. It's in that vein that I miss what Sony was doing, PS4 and 5 are great consoles, but from a technology standpoint this was the one place where there was the economic foundation to take risks....
 

Three

Member
Cell was severe parallel programming in its infancy, and while that and multicore, multithreaded code are indispensable now, and it was a really good crash course for them, I don't think the idea that Sony devs now are great because of it is true, for the simple fact that I don't think most senior devs remain at a single studio, XDEV, or ICE to begin with. They move about a fair bit. Sony's studios are talented because they hire talent and make good games with the time they need. I'd say it's just good consistent management and expectation.
 
Last edited:

Romulus

Member
Sorry, but I still don't get it. Their exclusive games were top-notch compared to the competition back then. You are comparing them against tech demos and videos of their own development, not against actual games. That is like saying Zelda: Ocarina of Time was an awful game because it doesn't look like the Zelda 64 tech demos. The end game looked really good and competitive against the games available at its release, just like the PS3-exclusive Sony games.


I'm not comparing them to 360 or tech demos. The actual results were awful in practice and that's my main point, even with the best devs in the world the ambitious concepts should have been downgraded to achieve serviceable framerates.


 

Stooky

Member
Cell was severe parallel programming in its infancy, and while that and multicore, multithreaded code are indispensable now, and it was a really good crash course for them, I don't think the idea that Sony devs now are great because of it is true, for the simple fact that I don't think most senior devs remain at a single studio, XDEV, or ICE to begin with. They move about a fair bit. Sony's studios are talented because they hire talent and make good games with the time they need. I'd say it's just good consistent management and expectation.
Believe me, it's true. Sony first party learning on the Cell - coding to the metal, keeping the SPUs full, scheduling, etc. - made that PS3 sing, and made them able to make the PS4 punch above its weight. A lot of the programmers are still there.
 
Last edited:

Three

Member
I'm not comparing them to 360 or tech demos. The actual results were awful in practice and that's my main point, even with the best devs in the world the ambitious concepts should have been downgraded to achieve serviceable framerates.


But you're directly correlating framerate drops in games (which a lot of games had back then) with the Cell processor. It would be like looking at Ocarina of Time's 14fps and saying going with SGI was awful for the N64.

So I'm not sure what you're trying to say. That the Cell was causing framerate drops because those games were CPU-bound, or that they should have just cut back on their good-looking games to get a completely consistent framerate?
Killzone on PS3 was one of the best looking games back then, even if the devs thought that drops from 30 to 27fps were OK. Not sure what it has to do with the Cell processor though.

With Killzone Shadow Fall and the PS4 they didn't maintain 60fps either, dropping to 56fps, and that was the most powerful console, with a regular x86 processor. Framerate drops don't tell you much.
 
I'm not comparing them to 360 or tech demos. The actual results were awful in practice and that's my main point, even with the best devs in the world the ambitious concepts should have been downgraded to achieve serviceable framerates.



OK, but you are blaming it entirely on the CPU. Do you have some kind of tool that shows the amount of frame time used on the CPU, GPU, or memory, to see what the bottleneck is?
 
Last edited:

Gankthenew

Member
The PS3 wanted to use tech that was beyond its time.
It failed, but it did give their studios a great lesson in how to write good game code.

But now... where is the real Japan Studio?
 

Fafalada

Fafracer forever
Just curious, I haven't been as deep into rendering as I used to be, what are you referring to? The advent of ML approaches through the pipeline all the way to Imagen and DALL-E2?
Yea, the ML path is one major potential diversion - kind of hard to even imagine where that can end up, but there are some really freaky possibilities.
Personally I wish more focus was put on leveraging ML for interactivity/simulation (since these are the areas where we've made so little progress in the last 30 years), but ah well.
The other, less sci-fi fork on the horizon is if things like Nanite take off - it could spell the end of fixed-function rasterization blocks.

Very well said. Intel and Larrabee *could* have shaken things up and Talisman would have changed everything. We shouldn't forget these projects and that rendering evolution has been pretty conservative and step-wise compared to what could have been. It's in that vein that I miss what Sony was doing, PS4 and 5 are great consoles, but from a technology standpoint this was the one place where there was the economic foundation to take risks....
Indeed - the console internals got a lot less interesting in the recent decade. I'm thankful we at least got VR to foster some kind of innovation (who would have thought we'd live to see eye-tracked displays in a consumer device this soon), but yea.
Funny thing about Talisman - I did play around with the concepts it proposed in the PS2 environment. If the GS had had a proper bi-directional memory interface, and double the VRam, the PS2 could have done a really compelling pass at that type of rendering; the rest of the hw was well suited for it. The OG XBox setup wasn't too bad either - but that was hampered by the ram interface and the lack of real low-level hw access.
 

Majukun

Member
More than that: during the PS3 era Sony was forced to do a lot of "gardening", cutting studios left and right, leaving only the best ones, and re-organizing its first party portfolio a lot.
 

winjer

Gold Member
It's funny to see people still defending Cell just because it had huge theoretical numbers.
But when even Sony and its first party studios struggled to get good performance out of it, that shows its big problems in a very nasty way.
Then we had the third party games, with so many disasters in performance, because most studios didn't have the time, budget or patience to optimize for such a complicated architecture.
 
As someone who loves the tech side of things, I wonder what it could have been if Sony had stuck with the Cell and improved its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.
That wasn't hard to beat; they were chosen for being low power and using less die space than a more powerful design.

If you updated the CELL and kept its core design you'd still be stuck with something hard to develop for and expensive. And if you got rid of in-order execution and shortened the stages to make it more efficient, you'd have an oven on your hands. Switching to x86 would also have been impossible, and doing an x86 CPU with the Cell ideology would basically be the equivalent of doing a Pentium 4 with compute units.

Cerny did some customization on the GPU to help CELL developers just move whatever they were doing onto the GPU: basically implementing more dedicated L1 cache for the CUs, inline with the GDDR5 pool instead of parallel to it (and increasing and dedicating the cache to specific units at that), and then adding shared L2 cache for every 4 CUs to be able to stream and quickly access stuff. This was, of course, a lot more similar to the SPEs (and a lot more efficient at that, with fewer bottlenecks) without having to have them. An infinitely better solution than actually having them.

I think they learned a bit, yes, but probably don't look back on it that fondly.


On the topic of the ICE team: Nintendo also shared a lot of tech internally, and sometimes with close third parties, although there was a clear communication issue with western devs initially. Mario Sunshine and Wind Waker and their sequels share the same engine, and they basically even decided to keep the GCN architecture for 3 generations just so they could keep adding to it vertically. They also had a website called WarioWorld that was managed by their US arm of development, Nintendo Software Technology, whose real importance wasn't developing games but tools, emulators, etc. Microsoft also had Rareware as a deployable studio that gave support and libraries to other studios and devs.
 
Last edited:
For the time, 3.2 GHz was fast for a console CPU clock too. 3.5 GHz now, with modern processes, versus when the PS3 was being designed is also something that should not be hand-waved away easily.
Well, the thing is modern CPUs often don't go that high (sustained) due to efficiency.

The PS3 ethos was already proven wrong by the time it launched, by the Pentium 4. The Pentium 4 was out-of-order, but needed a lot of MHz to match a CPU with fewer MHz: the Pentium 3 Tualatin at just 1.2-1.4 GHz wiped the floor with Intel's initial Pentium 4 offerings, as did AMD's K7/Athlon CPUs. So it was mostly useless to have that overhead in an era when we already knew dual core was in the cards. The main advantage was that they had the overhead to do SMT (simultaneous multithreading), a.k.a. hyperthreading, simulating a second CPU in some instances to curb the inefficiency, but it was not enough in the real world and code had to be written to take advantage of it.

You had both the Pentium 4 and the Pentium D (dual-core Pentium 4) being sold with 3.73 GHz stock clocks, and the dual-core variant (released in Q1 2006) got an extensive beating from dual-core 2.4 GHz AMD processors (the 4800+ X2) released in 2005, about a year prior.

What worked against it is that chasing more MHz than the competition meant you had to have more pipeline stages, and more stages meant more latency and more chances of a stall. A cache miss was a big problem because it would stall everything until the pipeline was cleared and the same work issued again. This is a simplistic explanation, but it is clearly something you don't want in an efficient architecture going forward. Cell was magnitudes worse than the Pentium 4 on that front because it was in-order and its branch prediction capabilities were hideous, as was its performance. The PPE at 3200 MHz was basically only as powerful as a PowerPC G3 at 1 GHz when it came to general processing stuff like running an operating system. Yes, it was that bad.
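As a generic illustration of the kind of rewrite that era forced (nothing PS3-specific, just the usual branch-avoidance trick): on a long in-order pipeline you would often replace an unpredictable branch with a bitmask select, so a mispredicted branch cannot force a pipeline flush.
Code:
/* Generic illustration, not PS3-specific code: replace an unpredictable
   branch with a bitmask select so there is no branch to mispredict. */
#include <stdio.h>

/* branchy version: a mispredict costs a full flush on a long pipeline */
static int pick_branchy(int cond, int a, int b)
{
    if (cond)
        return a;
    return b;
}

/* branch-free version: build an all-ones/all-zeros mask and blend */
static int pick_branchless(int cond, int a, int b)
{
    int mask = -(cond != 0);            /* -1 (all ones) if cond, else 0 */
    return (a & mask) | (b & ~mask);
}

int main(void)
{
    printf("%d %d\n", pick_branchy(1, 10, 20), pick_branchless(0, 10, 20));
    return 0;
}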

The SPUs of the Cell were kind of like the cores in chips nowadays, right? Kind of crazy how far ahead the Cell was in terms of chip design. This was when manufacturers thought they'd just perpetually up the GHz in a chip.
No, no. Cell had a single general-purpose core (the PPE), while the X360 had a 3-core variant of the same design plus VMX128 (to make up a little for the loss of floating point from the SPEs).

SPEs were floating-point cascade execution units (very similar to stream processors in GPU lingo); they were unable to run the processes a CPU does and often had to be manipulated by it, but they were fast if you wrote your code for floating-point results and parallelization. Back then GPUs already had this type of setup in place, they just weren't thinking about compute tasks yet. SPEs were, but they ended up being used for CPU-side GPU tasks.
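For anyone curious what "manipulated by it" looked like from the PPE side, here is a rough sketch using IBM's libspe2 as I remember it - the names, signatures and the "spu_kernel.elf" image name are approximations, illustration only:
Code:
/* Rough PPE-side sketch: the PPE loads an SPE program image, hands it the
   address of some work in main memory and kicks it off. Based on IBM's
   libspe2 as remembered - treat names and signatures as approximate. */
#include <libspe2.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* data in main memory the SPE would DMA in and out of its local store */
    static float work_buffer[1024] __attribute__((aligned(128)));

    /* hypothetical compiled SPE program (e.g. the SPU sketch earlier) */
    spe_program_handle_t *program = spe_image_open("spu_kernel.elf");
    if (!program) { perror("spe_image_open"); return EXIT_FAILURE; }

    spe_context_ptr_t spe = spe_context_create(0, NULL);
    if (!spe || spe_program_load(spe, program) != 0) {
        fprintf(stderr, "failed to set up SPE context\n");
        return EXIT_FAILURE;
    }

    /* run the SPE program; argp carries the effective address of the work */
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop_info;
    spe_context_run(spe, &entry, 0, work_buffer, NULL, &stop_info);

    spe_context_destroy(spe);
    spe_image_close(program);
    return 0;
}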

They were incredibly niche, more in line with video encode/decode tasks, and would have stayed there if not for the massive downfalls in the rest of the PS3 design.
 
Last edited: