
DF - RETRO: Sony PlayStation 3: Chasing the 1080p Dream (2006/2007) (UPDATE: PART 4)

PaintTinJr

Member
Wrong! Early games reflected how inferior RSX was to Xenos. It was only later in the generation, when developers started to use the SPUs to offload work from the RSX in order to make speed improvements, that PS3 games finally started to keep up with the 360. But despite all the effort with techniques like "SPU-based deferred shading" (which was no easy task, btw), the PS3 failed to beat its 360 counterparts even in the best-case scenarios, whereas the 360 was achieving it on Xenos alone and was still winning head-to-heads late in the generation. Developers described working on the Cell as "sweating blood" if they were ever going to achieve anything above what the 360 was already doing without the extra effort, and we still haven't even touched some of the widely unused special features in Xenos like MEMEXPORT.

Even John speaks about this at the 17:05 mark of the video.
That's nonsense, and anyone who understands a correctly wound model - roughly one vert per mesh polygon using quad strips - on a superscalar card with a 1.1 billion polys/s theoretical upper limit, versus the performance of unoptimised model meshes on the same hardware, would know it was the software, not the hardware, that was the limiting factor.

VF5b/VF5c and VT3 on both consoles - which StateofMajora and I have been discussing - prove this point, because the Sega Lindbergh arcade hardware that outperformed the 360 with VF5c doesn't have SPUs and has a weaker 7xxx-series GPU than the RSX, and the 360 game still doesn't even match the arcade for polygon counts and rendering precision - unlike the PPU + RSX.
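
To make the "roughly one vert per polygon" point concrete, here's a rough back-of-envelope sketch (my own illustration of standard strip arithmetic, not anything from an SDK):

// Rough vertex-count arithmetic for strips vs. an unindexed triangle list.
// Illustrative only; ignores the degenerate verts used to stitch strips together.
#include <cstdio>

int main() {
    const long long tris = 1000000;              // 1M triangles in a mesh
    const long long strip_verts = tris + 2;      // one long strip: n tris need n+2 verts
    const long long list_verts  = tris * 3;      // unindexed list: 3 verts per tri
    std::printf("strip: %.3f verts/tri, list: %.1f verts/tri\n",
                (double)strip_verts / tris, (double)list_verts / tris);
    return 0;
}

Feed the same hardware an unoptimised mesh - duplicated verts, no stripping - and the effective throughput drops long before the theoretical limit is in sight, which is the software-vs-hardware point above.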
 

Mung

Member
If I remember correctly, it wasn't about whether 1080p should be supported; it was about Sony saying they were making it a "standard" and claiming that only the PS3 was capable of FullHD™, and the toxicity that ensued after that. Sony execs referred to the 360 as a mere Xbox 1.5 when talking about the Xbox 2, and at the end of the day, 720p was the best way to go during the 360/PS3 era.

Ironically, it was the Xbox 360 that worked out for my display at the time. I had a TV that supported upconverting 720p broadcasts to 1080i when needed, but it would NOT upconvert 720p content connected via component cables, so PS3 games that were 720p native had to be played in 480p on my TV and on MANY gamers' TVs at the time - in fact it applied to early adopters more than anyone else. Xbox 360 engineers had the foresight to include a chip that would upconvert 720p games to 1080i; you just needed to select 1080i in the video settings and voila. The PS3 did not offer such a basic function, so even if you selected 1080i, games that were 720p would drop down to 480p :messenger_neutral:
There was this too, sure. But it was also about 1080p support. It's easy to find examples - you just need to watch the 1up show (RIP).

As a point of differentiation (initially) it became an issue to either big up or minimise/dismiss/mock.
 
That wouldn't solve the internal IO and lack of general purpose core count clusterfuck.

Rendering would improve a lot (perhaps they could pull 1280x720 with ease in multiplats), but a lot of third-party games would continue to run slowly despite that, because they would be CPU-bound. The PPE is more at fault than RSX a lot of the time.
True, but at least it wouldn't be a CPU power issue at that point, but a dev optimization problem.

Better dev tools would have helped.

I wouldn't be surprised to see a PS3 game at 1080p while the 360 hit 720p with an 8 series gpu.
 

JackMcGunns

Member
He was talking about theoretical performance. I'm not even going to get into that, as Nvidia usually inflates their specs. But even if you took both GPUs, made PC counterparts of them, removed drivers as a factor, and the X360 GPU took a beating in that scenario, it was totally different when they were working embedded in their respective systems.


My statement wasn't based on what John was saying, but the reality of that console generation as reflected in the games and expressed by developers.

Everyone remembers the GDC presentation where it was shown how Battlefield 3 used SPU-based deferred shading on the PS3.

However...

When you talk about one side of development and not the other, it's kind of hard to be objective, no? The GDC presentation was all about how much they were able to achieve from the PS3. In essence, the presentation was PS3-centric, with no direct comparison of how things were going to be done on 360 or whether it meant the 360 version would be inferior or not (head-to-heads actually showed advantages in 360 image quality).

OK, I present to you SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques), an annual conference on computer graphics founded in 1974 that is not nearly as mainstream as GDC. The SIGGRAPH slides from DICE on the Frostbite 2 engine that powered Battlefield 3 and Need for Speed: The Run touch on how they were able to achieve this shading on 360 without the need for SPUs, because Xenos was an ALU monster.

It starts on slide 63, but the real juicy stuff is on slide 69 (no pun intended).

[slide image from the DICE presentation linked below]


https://www.gamedevs.org/uploads/rendering-in-battlefield3.pdf
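
For anyone curious what the tile-based approach in those slides boils down to, here's a stripped-down CPU-side sketch of per-tile light culling (my own illustration of the general technique; the names and the 16x16 tile size are assumptions, not DICE's code):

// Minimal screen-space tile light-culling sketch (illustrative, not DICE's code).
// Each tile collects the lights whose projected screen bounds overlap it, so the
// shading pass only loops over lights that can actually affect that tile.
#include <algorithm>
#include <cstdint>
#include <vector>

struct LightBounds { float minX, minY, maxX, maxY; };   // projected screen-space rect

std::vector<std::vector<uint16_t>> cullLightsPerTile(
    const std::vector<LightBounds>& lights, int width, int height, int tileSize = 16)
{
    const int tilesX = (width  + tileSize - 1) / tileSize;
    const int tilesY = (height + tileSize - 1) / tileSize;
    std::vector<std::vector<uint16_t>> tiles(tilesX * tilesY);

    for (size_t i = 0; i < lights.size(); ++i) {
        // Clamp the light's rect to the tile grid, then append it to every tile it touches.
        const int x0 = std::max(0, (int)lights[i].minX / tileSize);
        const int y0 = std::max(0, (int)lights[i].minY / tileSize);
        const int x1 = std::min(tilesX - 1, (int)lights[i].maxX / tileSize);
        const int y1 = std::min(tilesY - 1, (int)lights[i].maxY / tileSize);
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                tiles[ty * tilesX + tx].push_back((uint16_t)i);
    }
    return tiles;
}

Whether those per-tile lists get built on SPUs (PS3) or in an extra pass on the GPU/CPU (360/PC) is exactly the platform split the two presentations are about.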
 

PaintTinJr

Member
Even if it was inferior in raw numbers such as polygon throughput, it wasn't in features. The X360 GPU, unlike its desktop counterpart, had forward-looking features such as unified shaders and a tessellation unit (mostly unused); it had the ability to communicate with the CPU directly with less latency than RSX/Cell, and the unified memory pool further reduced a bottleneck compared to the PS3. Also features like MSAA - Nvidia's solution was definitely behind, and Quincunx AA definitely cost more. High-quality rendering is useless if you're rendering fewer pixels because your hardware is stalling. They also supported compression in several parts of the rendering pipeline, the GPU/CPU could use compression when exchanging data, and even the sRGB gamut was "lossy" compared to PC and PS3 on purpose. Fewer coordinates/less precision = less data spent. These were all good decisions at the end of the day.
Okay, let's say I don't repeat what I've already written in this thread to correct the bolded part. How exactly does that help the 360 Xenos argument? The RSX, even just used primarily for H/W GLSL combined with the SPUs, is over a generation more flexible than Xenos + Xenon - it took until the GTX 2xx series before even Nvidia hardware could run most of the algorithms that could be accelerated on the Cell BE for graphics and compute, and it still couldn't do them all.

Tessellation - unless you are doing it slowly on the 2nd and 3rd Xenon PPEs - can't bypass the 400-650 Mpolys/sec acceleration limit that ATI cards of the time had. The MSAA on Xenos wasn't a free H/W acceleration feature - unlike the Quincunx AA on the RSX, which never got used after Richard at DF did a sterling job telling everyone it was worse than no AA - and if it was getting used in 360 games it was only because the eDRAM was being under-utilised - probably because the rendering was vertex-limited - and it was just using up spare eDRAM bandwidth, presumably just doing 9 backbuffer lookups with Gaussian kernel offsets and a write per final pixel.
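
Roughly what a 9-tap resolve like that boils down to per output pixel - my own guess at the filter, not actual XDK or hardware code:

// Rough sketch of a 9-tap (3x3) weighted resolve per output pixel.
// A guess at the kind of Gaussian-offset filter described above; not 360 XDK code.
#include <algorithm>

float resolvePixel(const float* backbuffer, int w, int h, int x, int y) {
    static const float k[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };   // weights sum to 16
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            const int sx = std::clamp(x + dx, 0, w - 1);   // clamp taps at the edges
            const int sy = std::clamp(y + dy, 0, h - 1);
            sum += k[dy + 1][dx + 1] * backbuffer[sy * w + sx];
        }
    return sum / 16.0f;
}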

The fact that PS3 lacked a full scaler function (it could only stretch horizontally in hardware) also fucked it massively against the X360, which is why a lot of games rendered differently - like 1280x720 on Xbox 360 against shit like 880x1080 on PS3.
Please provide any one example so we can rule out marketing deals or dev incompetence. The vast majority of games that don't tear, hit a constant frame-rate and are 720p native are on the PS3. Look at Split Second - from a 360-favourable developer - to see the real gulf in hardware, where the dev even managed probe GI lighting in a game - probably a world first - and that feature was exclusive to the PS3, while the 360 version was below native res, had tearing, a lower frame-rate, fewer polygons and lower-precision specular reflections, IIRC from playing it on both systems.
 
That's just one feature. It doesn't represent the reality of both GPUs' feature sets, and like they say, the Xbox had plenty of ALU against the PS3 because it had 3 CPU cores (and because it supported direct communication between CPU and GPU without the same heavy caveats the PS3 experienced).

I'm pretty sure they could have done DirectCompute on Xenos, seeing that R520 supported it and there was plenty of talk about the feasibility of general compute on it; the thing is, it was probably disabled or disallowed because it didn't perform well enough and devs didn't need it. It's commonplace for some calls to be disabled, disallowed or undocumented for such reasons.

This is par for the course for the architecture. Nvidia was almost at the point of unveiling CUDA (with the 8800 GT), while AMD wasn't. The reason is simple: AMD's architecture was VLIW, and VLIW did a lot of things right, but it wasn't a good fit for general compute. That was the reason for its demise in favour of GCN (AMD's GPU architecture in the PS4/Xbox One).

Here's evidence of Xbox 360's GPU doing General Compute,

Linear Genetic Programming GPGPU on Microsoft's Xbox 360:

-> https://citeseerx.ist.psu.edu/viewd...ing GPGPU on Microsoft's Xbox 360&osm=&ossid=

Anyway, and reiterating: I wouldn't say one 7th-generation console GPU is more advanced than the other just because it supported beta-CUDA. Ahead of its time on that front, sure. But it doesn't really tilt the scales.
 
Okay, let's say I don't repeat what I've already written in this thread to correct the bolded part. How exactly does that help the 360 Xenos argument? The RSX, even just used primarily for H/W GLSL combined with the SPUs, is over a generation more flexible than Xenos + Xenon - it took until the GTX 2xx series before even Nvidia hardware could run most of the algorithms that could be accelerated on the Cell BE for graphics and compute, and it still couldn't do them all.
I don't think I read it.

Regardless, I think my point was succinct, so if yours is going the other way I agree to disagree.
Tessellation - unless you are doing it slowly on the 2nd and 3rd Xenon PPEs - can't bypass the 400-650 Mpolys/sec acceleration limit that ATI cards of the time had. The MSAA on Xenos wasn't a free H/W acceleration feature - unlike the Quincunx AA on the RSX, which never got used after Richard at DF did a sterling job telling everyone it was worse than no AA - and if it was getting used in 360 games it was only because the eDRAM was being under-utilised - probably because the rendering was vertex-limited - and it was just using up spare eDRAM bandwidth, presumably just doing 9 backbuffer lookups with Gaussian kernel offsets and a write per final pixel.
See, I wrote "mostly unused" just so I wouldn't write "unused", because I don't know of instances where they did use it. But nothing ends up "mostly unused" if it's genuinely useful or usable.

Regardless, it was forward-thinking. The PS4 and Xbox One had tessellation units again, but devs didn't use them much even though they were more mature this time.

The fact that PS3 lacked a full scaler function (it could only stretch horizontally in hardware) also fucked it massively against the X360, which is why a lot of games rendered differently - like 1280x720 on Xbox 360 against shit like 880x1080 on PS3.
Please provide any one example so we can rule out marketing deals or dev incompetence. The vast majority of games that don't tear, hit a constant frame-rate and are 720p native are on the PS3. Look at Split Second - from a 360-favourable developer - to see the real gulf in hardware, where the dev even managed probe GI lighting in a game - probably a world first - and that feature was exclusive to the PS3, while the 360 version was below native res, had tearing, a lower frame-rate, fewer polygons and lower-precision specular reflections, IIRC from playing it on both systems.
The majority of games hitting 720p and 30fps on PS3 is something I can't really agree with. That wasn't the case even on the X360. It was a sub-HD generation early on for AAA titles.

You seem to be grasping at specific games that used PS3 features to their advantage, when the majority of cases actually went the other way. Nobody is saying the PS3 wasn't able to do crazy shit that was actually good.

But you're quoting me on the scaler so I'll just give you that:
Amongst the newer versions of the various tools included in the SDK lies a new function: the ability for developers to use some of the functionality of the fabled hardware scaler, a scaler many previously doubted existed at all. Interestingly enough, "some" is the key word when describing the unlocked functionality; SCEI only gave access to hardware accelerated horizontal scaling. Horizontal scaling on its own cannot upscale a 720p image into 1080p/i --this would require both horizontal and vertical scaling. Hence, the newly exposed scaler functionality is not enabled in the PS3's user interface directly, but instead will still require developer support to work.
Source: https://arstechnica.com/gaming/2007/01/6783/

Vertical scaling was never unlocked/made available through hardware because it didn't exist.

To output proper standardized resolutions like 720p or 1080p on PS3, anyone rendering at a sub-HD resolution would have to scale (at least the vertical part) in software, unless they were rendering at a native vertical resolution of 720 or 1080. Vertical scaling wasn't free and surely made a difference for a lot of games running with the same graphical make-up as they had on Xbox 360. Reserving one SPE was a known good way of doing it, if convoluted/less than ideal, but it added latency.

Of course, most devs on PS3 said screw it to reaching 1080p vertical resolution and just stuck to making it 720p, but even that wasn't transparent if they didn't have the resolution locked to that specific number. On the X360 it was a non-issue: just pick whatever resolution is best for you, scaling is free.
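
For anyone wondering what "scaling vertically in software" actually involves, here's a minimal sketch of a linear vertical upscale for one column of pixels - illustrative of the kind of job a team might push onto an SPE, not Sony's actual code:

// Minimal vertical linear upscale of a single column of pixels.
// Illustrative of software vertical scaling; not Sony's scaler or any SPU library code.
void upscaleColumnVertical(const float* src, int srcH, float* dst, int dstH) {
    for (int y = 0; y < dstH; ++y) {
        // Map the destination row back into source space (assumes srcH, dstH >= 2).
        const float fy = (float)y * (float)(srcH - 1) / (float)(dstH - 1);
        const int y0 = (int)fy;
        const int y1 = (y0 + 1 < srcH) ? y0 + 1 : y0;
        const float t = fy - (float)y0;
        dst[y] = src[y0] * (1.0f - t) + src[y1] * t;       // blend the two nearest source rows
    }
}

On the 360 the equivalent step happens in the display scaler for free, which is the asymmetry being argued about here.
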
Better dev tools would have helped.
Sony actually made one of my favourite moves that gen: they provided their devs with the free PhyreEngine and allowed them to use it for whatever they wanted (including multiplat/Xbox 360 games).

Remnants of it remain in Codemasters' Ego engine, which started from them using PhyreEngine for Colin McRae: Dirt; it was used by Square Enix on "remasters" like FFX/X-2 and FFXII, and it's still the core tech behind all the Soulsborne games since Demon's Souls.


I wish console manufacturers did more of that.
 
Sony actually made one of my favourite moves that gen: they provided their devs with the free PhyreEngine and allowed them to use it for whatever they wanted (including multiplat/Xbox 360 games).

Remnants of it remain in Codemasters' Ego engine, which started from them using PhyreEngine for Colin McRae: Dirt; it was used by Square Enix on "remasters" like FFX/X-2 and FFXII, and it's still the core tech behind all the Soulsborne games since Demon's Souls.

I wish console manufacturers did more of that.
Looks like they released that engine in 2008, so late, and it also doesn't help developers using their own engines.

Just better support for extracting performance out of the Cell was necessary. As well as not having such a bloated operating system, but that was eventually addressed.
 

Panajev2001a

GAF's Pleasant Genius
In the original UC box there was a making-of video, on disc or via a download link, and I'm 99.9% sure it's mentioned in the video that it's UE3, or it's on one of the screens in the studio when they're showing the advanced systems they've added to the game. That, or maybe it was from when Cambridge Studio was making Heavenly Sword and advertising in Develop magazine for staff; because they were PlayStation's core technology group for UE back then, IIRC, Uncharted might have been listed as a project gaining from the core technology.

Given that they've had GDC/SIGGRAPH-type presentations over the years, and were sharing know-how and code examples that could easily be incorporated into other publishers' game projects, the list of engines it could be based on is either UE or Unity, no? And you only need to look back at UE3 partnering with PlayStation for new rendering tech at a pre-launch PS3 E3 cinematics showing, and then look at UC cinematics of the time and how they were using the SPUs for new systems like weather in UC1, to see it either had to be a custom engine like the PhyreEngine that the Souls games use - as Sony had sold their other (Neon?) engine to Codemasters - or UE, as Unity wasn't at that cinematic level back then. And do we really believe it uses PhyreEngine, given the difference in fidelity between DeS and UC1?

Short of me finding the old video to check, someone that worked on it breaking an NDA, or the PS3 emulator exposing UE libraries in UC1-4 or The Last of Us that would be a good enough citation, a citation is going to be hard to produce for any debadged engine game, sadly.

I am pretty sure Uncharted was their own internal engine, but they may have collaborated on libraries and tooling with the studios behind Heavenly Sword (Ninja Theory and Sony Cambridge), where the other developer tooling for CELL came from (ICE was ND's solution, while the Edge tools were Sony UK ATG's baby).
 

RoadHazard

Gold Member
I am pretty sure Uncharted was their own internal engine, but they may have collaborated on libraries and tooling with the studio behind Heavenly Sword, where the other developer tooling for CELL came from (ICE was ND's, the Edge tools were Sony UK ATG's baby).

Yeah, I've never heard anything about UE being part of the ND engine, it has always been talked about as their own thing completely.
 

PaintTinJr

Member
I am pretty sure Uncharted was their own internal engine, but they may have collaborated on libraries and tooling with the studio behind Heavenly Sword, where the other developer tooling for CELL came from (ICE was ND's, the Edge tools were Sony UK ATG's baby).
If you can remember Carmack being asked about the rise of UE licensing compared to id Tech 3, he made the point that the main difference was that id Tech was primarily made for id Software's own games, with licensing it out just an added bonus, whereas UE had focused on being able to deliver amazing cinematics and middleware integration, which were becoming essential for the industry but less so for id Software.

At the time when UC was made, the only licensable or in-house engine that had shown cinematics at or around that level was UE3, in the 2005 E3 showcase, IMO.



So even though every line of code in ND's engine will be their own or middleware add-ons, I'm 99.9% confident that their engine was transformed from UE3 originally. If they had any in-house engine to rival this video, they would have used it IMO, as they were chasing the gen at that stage.
 

PaintTinJr

Member
...

The majority of games hitting 720p and 30fps on PS3 is something I can't really agree with. That wasn't the case even on the X360. It was a sub-HD generation early on for AAA titles.

You seem to be grasping at specific games that used PS3 features to their advantage, when the majority of cases actually went the other way. Nobody is saying the PS3 wasn't able to do crazy shit that was actually good.

But you're quoting me on the scaler so I'll just give you that:

Source: https://arstechnica.com/gaming/2007/01/6783/

Vertical scaling was never unlocked/made available through hardware because it didn't exist.

To output proper standardized resolutions like 720p or 1080p on PS3, anyone rendering at a sub-HD resolution would have to scale (at least the vertical part) in software, unless they were rendering at a native vertical resolution of 720 or 1080. Vertical scaling wasn't free and surely made a difference for a lot of games running with the same graphical make-up as they had on Xbox 360. Reserving one SPE was a known good way of doing it, if convoluted/less than ideal, but it added latency.

Of course, most devs on PS3 said screw it to reaching 1080p vertical resolution and just stuck to making it 720p, but even that wasn't transparent if they didn't have the resolution locked to that specific number. On the X360 it was a non-issue: just pick whatever resolution is best for you, scaling is free.
I was referring to the part where you said 'shit like ...'. Resolution is a choice, no different from considering whether to tessellate more on the horizontal or vertical axis when modelling; it's a question of balance and has implications for pixel quality. So my angle is: if you were saying the 720p or sub-720p 360 resolution - typically with tearing - was superior in a particular game that chose half horizontal res at 1080p on PS3, then let's discuss whether that was the wrong choice for better pixel quality.
Sony actually made one of my favourite moves that gen: they provided their devs with the free PhyreEngine and allowed them to use it for whatever they wanted (including multiplat/Xbox 360 games).

Remnants of it remain in Codemasters' Ego engine, which started from them using PhyreEngine for Colin McRae: Dirt; it was used by Square Enix on "remasters" like FFX/X-2 and FFXII, and it's still the core tech behind all the Soulsborne games since Demon's Souls.


I wish console manufacturers did more of that.
The Ego engine was produced by a completely different team, according to the articles I read at the time, when Sony effectively used the engine sale as a sweetener to let Codemasters buy the F1 licence rights from them. The wiki citation is from an article written by Eurogamer's DF author Richard, so take that information with a pinch of salt unless you know someone from Codies who can corroborate that it was PhyreEngine. IIRC Ego was either a US or Central European initiative, whereas PhyreEngine is Sony UK, London.
 
(...) At the time when UC was made, the only licensable or in-house engine that had shown cinematics at or around that level was UE3, in the 2005 E3 showcase, IMO.

So even though every line of code in ND's engine will be their own or middleware add-ons, I'm 99.9% confident that their engine was transformed from UE3 originally. If they had any in-house engine to rival this video, they would have used it IMO, as they were chasing the gen at that stage.
If it was never said, implied or hinted at, it's not only unlikely but deep in hearsay/unfounded-speculation territory. Might as well say that Killzone was also using Unreal Engine 3, because no one codes their own engine from the ground up, right? RIGHT?

Why would they pick the engine that was famously unfinished when they launched their platform (to horrible results even for devs on X360) and that ran worse on it...

Either Epic would have had a field day suing their ass like they did with Silicon Knights, or ND (or the ICE team) might as well have helped make UE3 run properly on the system. They didn't, because they didn't use UE3.
The Ego engine was produced by a completely different team, according to the articles I read at the time, when Sony effectively used the engine sale as a sweetener to let Codemasters buy the F1 licence rights from them. The wiki citation is from an article written by Eurogamer's DF author Richard, so take that information with a pinch of salt unless you know someone from Codies who can corroborate that it was PhyreEngine. IIRC Ego was either a US or Central European initiative, whereas PhyreEngine is Sony UK, London.
They started with PhyreEngine and then evolved it. The tech was improved, not switched. Teams share their existing tech all the time and improve upon it.

I'm not sure their current tech still uses anything from PhyreEngine, but Dirt 2/Dirt 3 or thereabouts? Yes, it is/was.

The CoD teams share tech despite not being in the same building and make custom implementations on top; on the car side of things, Turn 10 shares their engine with Playground... (I could go on - there are more examples of building upon what they have than of throwing it away for something that does the same; remember it's in their interest to salvage as much scripting and as many assets as possible between games.) It just makes more sense, it's common knowledge, and to my knowledge it has never been said otherwise. I don't need to corroborate it with sources; you do.
I was referring to the part where you said 'shit like ...'. Resolution is a choice, no different from considering whether to tessellate more on the horizontal or vertical axis when modelling; it's a question of balance and has implications for pixel quality. So my angle is: if you were saying the 720p or sub-720p 360 resolution - typically with tearing - was superior in a particular game that chose half horizontal res at 1080p on PS3, then let's discuss whether that was the wrong choice for better pixel quality.
You asked me for a source that proved the console lacked an internal vertical scaler.

Anyway, sure. But your choices were a lot more limited on PS3. It amounts to the same thing, people were forced to do creative solutions that weren't their first choice and that's when things worked out right.

Tearing is meaningless to this discussion/topic. Plenty of games on both consoles had it, and the PS3 didn't exactly have an advantage because it couldn't scale vertically.
 

PaintTinJr

Member
...

You asked me for a source that proved the console lacked an internal vertical scaler.
I didn't mean to convey that, much as I didn't mean to say "all games on PS3 were 720p" as you took it - just that of the games targeting 720p, the PS3 produced more games without tearing and holding 30fps at the 95th percentile or better than the 360 did, especially when you include first-party exclusives.
Anyway, sure. But your choices were a lot more limited on PS3. It amounts to the same thing, people were forced to do creative solutions that weren't their first choice and that's when things worked out right.
But a scaler - software or hardware - still adds latency, and because it was fixed hardware it's a little like the Quincunx AA situation - maybe better substituted with a superior algorithm running on 1 of the 6 SPUs. I'm sure if it were a real issue we'd have games to hold up as looking "shit" in comparison, no?
Tearing is meaningless to this discussion/topic. Plenty of games on both consoles had it, and the PS3 didn't exactly have an advantage because it couldn't scale vertically.
The eDRAM was too small - sized around a 1024x768 (Windows' de facto) resolution - to do double or triple buffering on Xenos at 720p30 with a full set of buffers, and that's the reason most 360 games needed to tear. The PS3 would have triple-buffered everything if DF hadn't made latency a Quincunx situation again. I never owned a game on PS3 that had tearing as its only option, so you'd have to point me to those games.
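
For reference, the usual back-of-envelope eDRAM numbers at 720p (my arithmetic, assuming a standard 32-bit colour target plus 32-bit depth/stencil):

// Back-of-envelope eDRAM budget at 720p (my arithmetic; RGBA8 colour + D24S8 depth assumed).
#include <cstdio>

int main() {
    const double MiB = 1024.0 * 1024.0;
    const long long pixels = 1280LL * 720LL;
    const int bytesPerSample = 4 + 4;                      // 32-bit colour + 32-bit depth/stencil
    for (int msaa : {1, 2, 4}) {
        const double size = (double)(pixels * bytesPerSample * msaa) / MiB;
        std::printf("%dxAA render target: %.2f MiB (Xenos eDRAM: 10 MiB)\n", msaa, size);
    }
    return 0;
}

So a single 720p colour+depth target fits, but add MSAA (or extra full-size buffers) and you're into tiling or main-RAM territory, which is where the trade-offs above come from.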
 

PaintTinJr

Member
If it was never said, implied or hinted at, it's not only unlikely but deep in hearsay/unfounded-speculation territory. Might as well say that Killzone was also using Unreal Engine 3, because no one codes their own engine from the ground up, right? RIGHT?

Why would they pick the engine that was famously unfinished when they launched their platform (to horrible results even for devs on X360) and that ran worse on it...

Either Epic would have had a field day suing their ass like they did with Silicon Knights, or ND (or the ICE team) might as well have helped make UE3 run properly on the system. They didn't, because they didn't use UE3.
Why would Epic sue a partner that had paid them a big flat fee and was contributing back to their code base in exchange for carte blanche with their engine? But now you mention it, maybe it was in the Silicon Knights coverage that I read Naughty Dog mentioned as one of the UE3 licensees getting preferential treatment on rendering tech - but I'm still 99% sure it's evidenced in the UC1 making-of video I watched back at the time.
They started with PhyreEngine and then evolved it. The tech was improved, not switched. Teams share their existing tech all the time and improve upon it.

I'm not sure their current tech still uses anything from PhyreEngine, but Dirt 2/Dirt 3 or thereabouts? Yes, it is/was.
Articles - IIRC on gamesindustry.biz and Gamasutra - at the time the deal was done made no mention of it being based on PhyreEngine, and the retrospective interview with the PhyreEngine team (I think it was a 2013 article I read a few months back, when Elden Ring came out and I was checking for up-to-date info on PhyreEngine) said that the engine, even when it hit version 2.0/3.0, still wasn't really feature-complete like a UE or Unity, requiring those adopting it to build substantially upon it. So how the Ego team could have built enough on top of PhyreEngine back then - when it was even less feature-complete - and added enough to transform it from a free engine into one worth selling to Codies for millions, makes it sound like a stretch to say it was PhyreEngine originally, IMO.
The CoD teams share tech despite not being in the same building and make custom implementations on top; on the car side of things, Turn 10 shares their engine with Playground... (I could go on - there are more examples of building upon what they have than of throwing it away for something that does the same; remember it's in their interest to salvage as much scripting and as many assets as possible between games.) It just makes more sense, it's common knowledge, and to my knowledge it has never been said otherwise. I don't need to corroborate it with sources; you do.
ND presentations have had some very low-level memory and processing stuff IIRC - from the PS4 UC4 one - which would surely require resources to be predictably available in other games - like ones using a well-known engine such as UE or Unity - for others to incorporate the techniques, don't you think?
 

GAF machine

Member
PS3 was my least favorite PS console. It took forever to finally see anything worth mentioning. The first mainstream console that got consistently beat in multiplatform performance by an older console.

XB360 was older than PS3 only if you go by launch dates, not GPU specs. Both consoles' GPUs are based on 2005 designs. PS3's GPU specs (revealed at E3 2005) weren't changed during the console's delay, so XB360 wasn't at a disadvantage when PS3 launched in 2006.

As for XB360 vs. PS3 multi-platform performance comparisons, they're largely meaningless. Out of the dozens of third-parties who developed for PS3, only a small fraction of them can honestly say that they fully exploited the system's architecture as intended. The vast majority of them avoided SPU programming like the plague for the first few years, and it wasn't until the back half of the gen that some third-parties took a development approach like the one Irrational Games adopted to benefit all platforms equally:

"We all know that the PS3 is powerful but unique console with its own strengths and challenges. But compared to the PC, the Xbox 360 is challenging too. So instead of declaring a 'lead platform' and porting the game to the others, we've instead changed the game engine so that all platforms look (to a programmer) more like a PS3. This means implementing a task-oriented task processor that assumes a NUMA (non-uniform memory access) design that mimics the PPU/SPU split of the PS3. Writing code this way is more difficult for us, but has a key advantage: it's both optimal for the PS3 *and* gives speed improvements on other platforms due to increased cache coherence and more efficient use of multiple processing units." -- Chris Klein
 

Romulus

Member
XB360 was older than PS3 only if you go by launch dates, not GPU specs. Both consoles' GPUs are based on 2005 designs. PS3's GPU specs (revealed at E3 2005) weren't changed during the console's delay, so XB360 wasn't at a disadvantage when PS3 launched in 2006. As for XB360 vs. PS3 multi-platform performance comparisons, they're largely meaningless. Out of the dozens of third parties who developed for PS3, only a small fraction of them can honestly say that they fully exploited the system's architecture as intended. The vast majority of them avoided SPU programming like the plague for the first few years, and it wasn't until the back half of the gen that some third parties took a development approach like the one Irrational Games adopted to benefit all platforms equally:

PS3 was the newer console. Basing them both on 2005 designs is not MS's fault, that's Sony's. Essentially the machine was cemented too early.

Saying something wasn't fully exploited is just an opinion; you would have to provide each case and why, and then give a developer's opinion on the reasons and the exact shortcomings that could have been improved. By the same token, it's possible 360 games weren't properly utilizing the 360. Third parties could have easily just done a dirty port to 360 without pushing it, so they could allocate resources to wrestling with the Cell. I know this happened during the OG Xbox generation, where a developer claimed it was common practice to just toss code at the Xbox and be done while focusing on the more exotic PS2.

How do you know most titles avoided SPU programming in the first few years?
 

GAF machine

Member
PS3 was the newer console. Basing them both on 2005 designs is not MS's fault, that's Sony's. Essentially the machine was cemented too early.

Saying something wasn't fully exploited is just an opinion; you would have to provide each case and why, and then give a developer's opinion on the reasons and the exact shortcomings that could have been improved. By the same token, it's possible 360 games weren't properly utilizing the 360. Third parties could have easily just done a dirty port to 360 without pushing it, so they could allocate resources to wrestling with the Cell. I know this happened during the OG Xbox generation, where a developer claimed it was common practice to just toss code at the Xbox and be done while focusing on the more exotic PS2.

How do you know most titles avoided SPU programming in the first few years?

Pardons for the delay. You seemed to suggest that PS3 being newer meant it had a performance advantage over the older XB360. My point was that PS3's one year delay wasn't so that SIE could give it a performance advantage over XB360. The XB360 and PS3 were, as you put it, "cemented" in 2005.

I find it hard to believe that the vast majority of third-parties went to the lengths Irrational Games did and used all six available SPUs to offload the PPE and RSX for all their titles. If that were the case, there would've been closer performance parity between all their XB360/PS3 titles from the start to finish. The state of multi-plats on PS3 in the front half of the gen versus the back half is fact enough for me that it wasn't the case. Sure 3rd parties could've done dirty ports to XB360 in order to put more weight behind trying to pin CELL to the mat; but even if, the XB360 was "just like a dumb PC" so if dirty ports were dumped on it they would still be more performant than the PS3 versions by default. You'll probably say that was SIE's fault too, but I don't fault them for trying to actually challenge Moore's law with exotic CPU architectures that assist or accelerate GPUs.

How do I know most 3rd-parties avoided SPU programming in the first few years?... It isn't that I know. It's that I believe based on various forum posts and interviews by/with this or that studio's engine architect or programmer, who stated that they were only using the PS3's PPE and RSX initially (if memory serves, PlatinumGames also took this approach with Bayonetta). Some of them went on to say that they would eventually look into using the SPUs to offload the PPE and/or RSX, but the general view was that dealing with SPUs would be a head-ache, nightmare, etc. Such opinion was held industry wide and it showed, particularly for the first few years.
 

PaintTinJr

Member
Pardons for the delay. You seemed to suggest that PS3 being newer gave it a performance advantage. My point was that PS3 didn't have a performance advantage over XB360 just because it launched a year after XB360.

I find it hard to believe that the vast majority of third-parties went to the lengths Irrational Games did and used all six available SPUs to offload the PPE and RSX for all their titles. If that were the case, there would've been closer performance parity between all their XB360/PS3 titles from the start to finish. The state of multi-plats on PS3 in the front half of the gen versus the back half is fact enough for me that it wasn't the case. Sure 3rd parties could've done dirty ports to XB360 in order to put more weight behind trying to pin CELL to the mat; but even if, the XB360 was "just like a dumb PC" so if dirty ports were dumped on it they would still be more performant than the PS3 versions by default. You'll probably say that was SIE's fault too, but I don't fault them for trying to actually challenge Moore's law with exotic CPU architectures that assist or accelerate GPUs.

How do I know most 3rd-parties avoided SPU programming in the first few years?... It isn't that I know. It's that I believe based on various forum posts and interviews by/with this or that studio's engine architect or programmer, who stated that they were only using the PS3's PPE and RSX initially (if memory serves, PlatinumGames also took this approach with Bayonetta). Some of them went on to say that they would eventually look into using the SPUs to offload the PPE and/or RSX, but the general view was that dealing with SPUs would be a head-ache, nightmare, etc. Such opinion was held industry wide and it showed, particularly for the first few years.
StateofMajora and I have been discussing that bolded point in this thread extensively, using Virtua Fighter 5 Version B (PS3), Version C on 360 and Virtua Tennis 3, with the Intel Pentium 4 + Nvidia GeForce 7600 GS Sega Lindbergh arcade hardware as a reference, and the exact opposite is true, because the RSX is more feature-rich and has higher polygon throughput for optimised models than Xenos by 2:1, given the 360 couldn't match the 7600 GS - never mind the RSX being a Quadro version of the 7800 GTX with H/W customisation as an OpenGL ES accelerator.
 

Romulus

Member
Pardons for the delay. You seemed to suggest that PS3 being newer should've given it a performance advantage over the older XB360. My point was that PS3's one year delay wasn't so that SIE could give it a performance advantage over XB360. The hardware was as you put "cemented" in 2005.

I find it hard to believe that the vast majority of third-parties went to the lengths Irrational Games did and used all six available SPUs to offload the PPE and RSX for all their titles. If that were the case, there would've been closer performance parity between all their XB360/PS3 titles from the start to finish. The state of multi-plats on PS3 in the front half of the gen versus the back half is fact enough for me that it wasn't the case. Sure 3rd parties could've done dirty ports to XB360 in order to put more weight behind trying to pin CELL to the mat; but even if, the XB360 was "just like a dumb PC" so if dirty ports were dumped on it they would still be more performant than the PS3 versions by default. You'll probably say that was SIE's fault too, but I don't fault them for trying to actually challenge Moore's law with exotic CPU architectures that assist or accelerate GPUs.

How do I know most 3rd-parties avoided SPU programming in the first few years?... It isn't that I know. It's that I believe based on various forum posts and interviews by/with this or that studio's engine architect or programmer, who stated that they were only using the PS3's PPE and RSX initially (if memory serves, PlatinumGames also took this approach with Bayonetta). Some of them went on to say that they would eventually look into using the SPUs to offload the PPE and/or RSX, but the general view was that dealing with SPUs would be a head-ache, nightmare, etc. Such opinion was held industry wide and it showed, particularly for the first few years.


Being new should give you a performance advantage, especially during that time. It's no one else's fault but sony's that it did not in this case. Being cemented is purely their fault. But it makes sense considering their thought process. Nothing they did made much sense in hindsight.

Well, you believe certain events to be true, but there's no evidence of them. Saying things like "most devs" weren't using SPU programming doesn't mean much at all based on a few forum posts etc. We don't know what went on behind the scenes at all those studios and how their specific engines were able to be offloaded. Too many variables. There were A LOT of developers at that time. Also, how do we know that the PS3's shortfalls weren't because of the RAM split? We know the consoles were RAM-starved, and Sony went the extra mile to make development even more difficult by splitting the pool. That was supposedly a nightmare to work with too. They just couldn't succeed anywhere. The CPU was too complex, the GPU was underpowered, and the RAM split was bizarre and unhelpful. The only thing that bailed them out (sort of) was the absolute best devs in the world and top-tier budgets.
 
One has to wonder why Sony didn't go for unified ram. I mean, the PS2 was unified. It would be interesting to know how much more it would have cost for a unified pool of xdr ram on a 256 bit bus. I can understand not having more RAM (although it needed it esp. given the somewhat higher peak performance vs. 360 when you take full, full advantage of PS3) because cost was already out of control.
 

GAF machine

Member
Being new should give you a performance advantage, especially during that time. It's no one else's fault but sony's that it did not in this case. Being cemented is purely their fault. But it makes sense considering their thought process. Nothing they did made much sense in hindsight.

Well, you believe certain events to be true, but there's no evidence of them. Saying things like "most devs" weren't using SPU programming doesn't mean much at all based on a few forum posts etc. We don't know what went on behind the scenes at all those studios and how their specific engines were able to be offloaded. Too many variables. There were A LOT of developers at that time. Also, how do we know that the PS3's shortfalls weren't because of the RAM split? We know the consoles were RAM-starved, and Sony went the extra mile to make development even more difficult by splitting the pool. That was supposedly a nightmare to work with too. They just couldn't succeed anywhere. The CPU was too complex, the GPU was underpowered, and the RAM split was bizarre and unhelpful. The only thing that bailed them out (sort of) was the absolute best devs in the world and top-tier budgets.

Agreed. We don't know the specifics or what went on behind the scenes. But I have to say, when industry veterans and fairly prominent programmers lined up to tar, feather and set PS3 ablaze in the press, it was easy to presume and conclude that "most" of the industry didn't try in earnest to extract all of the console's performance.

PS3's RAM was physically split, but a whitepaper SCEA published on deferred pixel shading says that despite the split PS3 has a unified memory architecture:

IV. PLAYSTATION®3 SYSTEM

(While the Cell/B.E. architecture specifies eight SPEs our system uses Cell/B.E.s with seven functioning SPEs in order to increase manufacturing yield.) The processors are connected to each other and to system memory through a high speed Element Interconnect Bus (EIB). This bus is also connected to an interface (IOIF) to the GPU and graphics memory. This interface translates memory accesses in both directions, allowing the PPE and SPEs access to graphics memory and providing the GPU with access to system memory. This feature makes the system a unified memory architecture since graphics memory and system memory both are visible to all processors within a single 64-bit address space. -- SCEA

If devs still found PS3's memory setup difficult, then I don't know what to say. Perhaps they were trying to put/keep everything CPU-related in system RAM and everything GPU-related in VRAM as is done on PC, even though Cell can use system RAM as VRAM when processing GPU work on the SPUs.

Maybe the root cause of their difficulties was in trying to force PS3 to process like a PC. It wouldn't be the first time that happened with a PS console. I'm reminded of a PS2 slide presentation where devs were told not to ignore the PS2's VU0 and VU1 and not to treat PS2 like a PC.

I don't view PS3's system/memory architecture as bizarre, just different. To me it looks like an architecture that was designed with scarcity, speed and efficiency in mind. And I don't think we've seen the last of it.
 

GAF machine

Member
1080p only test? What about 120 fps games? lol


PS3 could run at 120 fps​

According to Ken Kutaragi.

News by Ellie Gibson Contributor
Updated on 31 Oct 2005

Sony Computer Entertainment boss Ken Kutaragi has claimed that the PlayStation 3 will run games at an unprecedented (and perhaps rather pointless) 120 frames per second.

According to Japanese news service Nikkei BP, Kutaragi's comments were made at the Tokyo International Digital Conference last week where he turned up to extol the virtues of the PS3 and its Cell processor. And, of course, to make his rather astonishing claim.

It's particularly interesting because there isn't actually a TV in the world which can refresh the screen at a rate of 120 times per second. Kutaragi acknowledged this, but said he wants the PS3 to be ready to make the best of the technology once it finally arrives.

Kutaragi pointed out that the Cell chip can decode more than ten HDTV channels at a time, and can be used for rotating and zooming effects. He also discussed some of the different ways in which it could be used - to display actual-size newspaper pages, for example, to show more than one HD channel on the screen at a time, or for video conferences.

Kutaragi also explained how a processing power of 25.6 teraflops could be achieved - by creating a Cell cluster server with 16 units, each made up of eight Cell processors running at 2.5Ghz.

LOL, meanwhile most games struggled with their 30 or 60fps target.

PS3 could run at 120 fps. How is it you found that article, but you didn't find this one?


Anyway, Gibson/Eurogamer got Ken Kutaragi wrong. His comments weren't about PS3 running games at 120 fps. They were about PS3 processing video frames at 120 fps and using a high-speed camera with PS3 to analyze video frames for game inputs at 120 fps. The Nikkei article Gibson cited has a section titled "120フレーム/秒の映像を送出" ("Transmitting video at 120 frames/second"). A full translation of the section by DeepL reads:

Transmitting video at 120 frames/second

 Next, one of the possibilities of future technology is the frame rate of video display. In contrast to the 50 to 60 fields per second used in current TVs and the 72 to 90 frames per second used in PCs, PS3 will be able to transmit video at higher frame rates, such as 120 frames per second, with the evolution of video interface standards in the future.

Related to frame rate is the combination of video input and high-speed frame rate. For example, by connecting a high-speed camera, the PS3 can quickly analyze the meaning of the video and input the results into a game.
He suggested that such a combination could also be used in the boundary area between broadcasting and communications.--
Nikkei

There's nothing in the section about PS3 running games at 120 fps, and the only sentence with the word 'game' in it discusses using a high speed camera for inputs into a game. PS3 eventually got a high speed camera (i.e. PS Eye) for that purpose, and for use with PS Move. The camera has a 120 fps mode that PS3 uses to capture video frames at high-speed for object tracking. Per its creator:

The device supports both 640 x 480 resolution at 60 frames per second and 320 x 240 resolution at 120 frames per second. What are the scenarios where people might want to use one versus the other?
320 x 240 at 120 frames per second is a more specialized mode intended for high-speed tracking applications. Most TV display modes are limited to 60 frames per second, so the doubled framerate of the video will not be directly visible. But it means the PS3 can get twice as many video frames to process, which translates into being able to track things twice as fast, or to observing an object at twice as many points along the path it travels." -- Richard Marks


They didn’t call him Crazy Ken for nothing.
like a fox.

lol, jesus Sony really talked some major bullshit ahead of the PS3 launch... god damn...

You misspelled Eurogamer
 

01011001

Banned
you misspelled Eurogamer

Sony literally had a PowerPoint slide on stage saying the PS3 has 2 Teraflops of compute power, which is complete and utter nonsense of the highest order. even if you combine CPU and GPU and then multiply by 4 you're not at 2TF

then there was that "rumble is a last gen feature" nonsense when the real reason was a patent/licensing issue which also made them release rumbleless PS2 controllers for a period of time lol.

then there were all the fake trailers that were falsely claimed to be in engine when they were literally done by animation studios.

Sony lied like crazy ahead of the PS3's launch
 

Romulus

Member
PS3 could run at 120 fps, and it did. How is it you found that article, but you didn't find this one?

I'm sure it was possible, but they couldn't even get their flagships running at a decent fps.

God of War 3 has a fixed camera and still struggles.

Killzone 3 (KZ2 was even worse). All the player is doing is moving down linear paths and it's still struggling to hold 30fps.

GT5

Uncharted 3

The Last of Us. Just walking around and doing absolutely nothing, dropping frames. Top-tier ND at the peak of PS3 development.

 

PaintTinJr

Member
Sony literally had a PowerPoint slide on stage saying the PS3 has 2 Teraflops of compute power, which is complete and utter nonsense of the highest order. even if you combine CPU and GPU and then multiply by 4 you're not at 2TF

then there was that "rumble is a last gen feature" nonsense when the real reason was a patent/licensing issue which also made them release rumbleless PS2 controllers for a period of time lol.

then there were all the fake trailers that were falsely claimed to be in engine when they were literally done by animation studios.

Sony lied like crazy ahead of the PS3's launch
The 2TF claim is in the video I posted further up on this page, from the 2005 E3 show - which was at a time before people quoted FLOPS for GPUs the way they do today - and by today's definition it is clearly wrong and very misleading. However, looking at the context, they are talking about dot-product calculations and in reference to CPUs too, so they must be talking about the 1.1 billion polygons/s - each with a surface normal vector too - being calculated with programmable H/W T&L, compared to doing it on a CPU or the PS2, or on older pre-shader GPUs, and maybe they thought that number conveyed a good performance comparison, even against ATI cards like Xenos, which would be 0.9-1.3TF/s using the same weird definition.

I mean, other than PC CPUs or PS2 coprocessors, what else was really measured in FLOPS at the time? OpenCL/CUDA didn't exist in common parlance until after the Cell BE in the PS3, and the modern FLOPS metric - specifically for GPUs - has come to mean a general-purpose floating-point calculation via OpenCL/CUDA: a calculation that is almost as flexible as the old CPU FLOP.

This 2TF claim came at a time when a GPU (or an ASIC) beating a CPU at FLOPS-type work - and quoting it as a multiple of the CPU's FLOPS - would probably have been considered okay within the narrow limits of the comparison, because FLOPS work was CPU-only then; it was the only choice, because until shaders GPUs were non-programmable, or at best inflexible ASICs.

edit:
Actually, that still won't work: a billion is 1000x smaller than a tera, so it would have to be 1000 vertex shader ops and 1000 fragment shader ops per polygon to reach that number - which sounds possible, but it's a highly pointless comparison.
 

Bullet Club

Member



PART TWO OF A FOUR-PART SPECIAL! In an era defined by sub-HD resolutions and 'challenging' performance, the concept of PlayStation 3 delivering on its promised 1080p dream seems almost ridiculous. And it's true that only a tiny proportion of the library rendered at full HD. And yet, in this DF Retro Special, 85 games are tested - and a majority of them target 60fps! John Linneman is on top form here, presenting the lengthiest DF Retro project yet.
 

Panajev2001a

GAF's Pleasant Genius
One has to wonder why Sony didn't go for unified ram. I mean, the PS2 was unified. It would be interesting to know how much more it would have cost for a unified pool of xdr ram on a 256 bit bus. I can understand not having more RAM (although it needed it esp. given the somewhat higher peak performance vs. 360 when you take full, full advantage of PS3) because cost was already out of control.
PS2 was not fully unified: the GS did not have access to the main DRDRAM pool; it was the CPU constantly streaming data in to keep the 4 MB eDRAM pool filled with the relevant frame data (very heavy streaming - per-mipmap or partial-mipmap streaming was not invented just a few years ago :)). You could reverse the bus (GIF to GS), but I do not think any title did it, and all titles streamed data only one way.

PS3 was originally meant to have a lot more eDRAM than the GS (and very fast too) and possibly it might have been meant to hold more than 256 MB of XDR, but I bet it was costly.
 
PS2 was not fully unified: the GS did not have access to the main DRDRAM pool; it was the CPU constantly streaming data in to keep the 4 MB eDRAM pool filled with the relevant frame data (very heavy streaming - per-mipmap or partial-mipmap streaming was not invented just a few years ago :)). You could reverse the bus (GIF to GS), but I do not think any title did it, and all titles streamed data only one way.

PS3 was originally meant to have a lot more eDRAM than the GS (and very fast too) and possibly it might have been meant to hold more than 256 MB of XDR, but I bet it was costly.
Well, it was unified physically is what I meant, but that's interesting.
 

Panajev2001a

GAF's Pleasant Genius
Well, it was unified physically is what I meant, but that's interesting.
Technically yes, if you see the eDRAM as a kind of scratchpad/cache. For me a truly unified design is one where the CPU and GPU share the same address space and physical memory (so that you never move data from CPU-accessible memory to GPU memory and vice versa).
 
Technically yes, if you see the eDRAM as a kind of scratchpad / cache. For me a truly unified design is one where the CPU and GPU share the same address space and physical memory (so that you never move data from CPU-accessible memory to GPU memory and vice versa).
I guess I thought it worked like it does on GameCube/Xbox 360... But PS2 is certainly a unique beast!
 

jimmyd

Member
I remember being a bit confused when the PS3 games I was playing weren't 1080p, as marketing led me to believe that most games would be.
 

Romulus

Member
I remember being a bit confused when the PS3 games I was playing weren't 1080p, as marketing led me to believe that most games would be.


I remember the confusion when multiplatform comparisons started. We already had a 360, and when we got the PS3 we were expecting to play improved multiplatform versions. Ended up buying most everything on 360.
 

01011001

Banned
The 2TF claim is in the video I posted further up on this page, from the 2005 E3 show - which was at a time before people quoted FLOPS for GPUs the way they do today - and by today's definition it is clearly wrong



"2 Teraflops single precision floating point performance"

that's exactly how we use TFLOPs today.
single precision floating point = FP32

so he is clearly stating that the PS3 has 2TF of FP32 performance, which is about 1.7TF more than it actually has even if you combine the peak theoretical performance of the RSX and the Cell
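For rough context on that gap, here is a back-of-envelope sketch where both "actual" figures are ballpark assumptions (roughly 0.2TF for Cell's PPE plus SPEs and roughly 0.2TF of programmable FP32 shader throughput for RSX), not official measurements:

```python
# Gap between the 2 TF marketing claim and commonly cited peak theoretical
# FP32 figures. The two "actual" values below are rough assumptions; the gap
# lands near the ~1.7 TF mentioned above depending on how RSX is counted.
claimed_tf = 2.0
cell_tf = 0.2   # ~25.6 GFLOPS per SPE x 7 usable SPEs, plus the PPE (approx.)
rsx_tf = 0.2    # programmable FP32 shader throughput, very roughly (approx.)

actual_tf = cell_tf + rsx_tf
print(f"Claimed: {claimed_tf} TF, ballpark actual: {actual_tf} TF, "
      f"gap: ~{claimed_tf - actual_tf:.1f} TF")
```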
 

Panajev2001a

GAF's Pleasant Genius


"2 Teraflops single precision floating point performance"

that's exactly how we use TFLOPs today.
single precision floating point = FP32

so he is clearly stating that the PS3 has 2TF of FP32 performance, which is about 1.7TF more than it actually has even if you combine the peak theoretical performance of the RSX and the Cell

The claim came from nVIDIA, and they were not the only ones counting the "equivalent" FLOPS coming from the fixed-function HW at the time. They could have boasted about other numbers instead of the GPU FLOPS, but sure, let's keep the hate boner going for PS3 - the only console where the "greedy" HW maker was taking a $200-300 bath on each unit sold and included features like WiFi (which their competitor charged over $60 for) and HDMI / SACD / Blu-Ray [one of the best players you could get, too] that their competitor did not, etc… 🤷‍♂️.
 
Last edited:

01011001

Banned
The claim came from nVIDIA, and they were not the only ones counting the "equivalent" FLOPS coming from the fixed-function HW at the time. They could have boasted about other numbers instead of the GPU FLOPS, but sure, let's keep the hate boner going for PS3 - the only console where the "greedy" HW maker was taking a $200-300 bath on each unit sold and included features like WiFi (which their competitor charged over $60 for) and HDMI / SACD / Blu-Ray [one of the best players you could get, too] that their competitor did not, etc… 🤷‍♂️.
the PS3 is a shit system, everyone knows that, it almost lost against the 360 which had barely any sales in Japan, my point is that the lies told at the time were wild

and them saying 2TF fp32 performance is simply misleading nonsense
 
Last edited:

Paulxo87

Member
I still wonder what the PS3 could have been if the final specs had resembled figure 6 from the Cell (Broadband Engine) patent that was discussed on forums for two years prior to the PS3 even being unveiled. Would have been an absolute tank.
 
Last edited:

assurdum

Banned
I didn't mean to convey that, much like I didn't mean to say "all games on PS3 were 720p" as you took it - just that, of the games targeting 720p, PS3 produced more games without tearing and holding 30fps in the 95th-plus percentile than 360, especially when you include first-party exclusives.

But a scaler - software or hardware - still adds latency, and because it was fixed hardware it is a little like the Quincunx AA situation - maybe better substituted with a superior algorithm running on one of the six SPUs. I'm sure if it were an issue we'd have games to hold up as looking "shit" in comparison, no?

The eDRAM being too small - sized around a 1024x768 (the Windows de facto) resolution, which made double and triple buffering with a full set of buffers at 720p30 a squeeze on Xenos - is the reason most 360 games needed to tear. The PS3 would have triple buffered everything if DF hadn't made latency a Quincunx situation again. I never owned a game on PS3 that had tearing as its only option, so you'd have to point me to those games.
Many UE3 games, and probably all Ubisoft games, had more tearing on PS3 than on X360, eh. There were plenty of games with more tearing on PS3. And PS3 hardware was totally shitty at transparencies: RDR1 and Bayonetta were impossible to code properly on PS3 without heavy cutbacks compared to the X360 versions. In the end the X360 was definitely the better-balanced hardware; the PS3 was not at all, with its poor bandwidth and transparencies as its big limitations. You will never find a game on PS3 with vegetation or transparencies that doesn't make heavy compromises compared to the X360 hardware. If you go back and check, most multiplats have less grass on PS3 and a quarter of the resolution for explosions, fires and particles. Let's not even talk about the fact that most multiplats ran at a lower resolution on PS3. Honestly, it's the worst hardware Sony has designed. Not totally shitty, but too many bottlenecks created by the obsession with the Cell tech, which became a useless investment (as was predictable). Some of the concepts around the Cell were surely cool, I guess, but screwing up the balance of the whole hardware just for it was really a terrible idea. It nearly caused the death of the PlayStation brand.
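As a side note on the eDRAM sizing point quoted above, here is a minimal buffer-size sketch, assuming 32-bit colour plus 32-bit depth/stencil per sample and deliberately ignoring Xenos predicated tiling and any compression:

```python
# Does a 720p render target fit in the Xbox 360's 10 MB of eDRAM?
# Assumes 8 bytes per sample (32-bit colour + 32-bit depth/stencil).
EDRAM_MB = 10

def target_mb(width, height, msaa=1, bytes_per_sample=8):
    return width * height * msaa * bytes_per_sample / (1024 * 1024)

for msaa in (1, 2, 4):
    size = target_mb(1280, 720, msaa)
    verdict = "fits" if size <= EDRAM_MB else "needs tiling"
    print(f"1280x720 @ {msaa}xMSAA: {size:.1f} MB -> {verdict}")
```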
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
the PS3 is a shit system, everyone knows that,
“I hate hyperbolic statements” —> makes a hyperbolic statement and projects it as self-evident truth…

it almost lost against the 360 which had barely any sales in Japan, my point is that the lies told at the time were wild
PS3 did not exactly sell gangbusters in Japan either. PS3 came out more than a year later and still caught up / beat Xbox 360, but whatevs 🤷‍♂️.

and them saying 2TF fp32 performance is simply misleading nonsense
Misleading, agreed… nonsense, eh, not completely; we're not the only ones playing that game…
 

SkylineRKR

Member
You have to give Sony credit for actually overtaking the 360, which was immensely popular, much cheaper for a good while, and launched a full year earlier - and in Europe the release gap was even bigger.

The PS3 was initially offered at 599/499, which was at least 200 bucks more than MS' cheapest option. It got tons of bad rep: slaughtered in reviews, the "PS3 has no games" meme, etc. The fact it managed to sell 90 million units in the end is rather impressive. Look at what happened with the Xbox One, which was actually less of a clusterfuck than the PS3 was: it capped out at 50 million-ish.

The PS3 is in the top 10 best-selling systems, one place below the Wii and with not that many fewer units sold. It's just that the PS3 is the worst-selling Sony system, but if we're honest the PS1 barely had competition (basically only the N64 much later on, which was a wholly different system) and the PS2 also had no competition for a good while. The PS3 faced an MS that pulled out all the stops and invested a lot of money to challenge them from day one. I think Sony was always going to lose market share; the share and the third-party deals they had weren't sustainable anyway. But the PS3 lost too much because of fuckups, and it cost Sony dearly.
 

PaintTinJr

Member
Many UE3 games, and probably all Ubisoft games, had more tearing on PS3 than on X360, eh. ..
You'll need to list the ones you are referring to. I don't remember the major ones tearing on PS3 - occasional slowdowns, sure - because why would any competent dev do that on a system that can easily double or triple buffer?
 

PaintTinJr

Member
the PS3 is a shit system, everyone knows that, it almost lost against the 360 which had barely any sales in Japan, my point is that the lies told at the time were wild

and them saying 2TF fp32 performance is simply misleading nonsense
See, that's revisionist history, because GPUs weren't quoted in FLOPS terms before CUDA/OpenCL, so they clearly never meant 2TF on today's scale.

Take the scenario in the context they were using, comparing to CPUs: if rendering a (UE3 or RenderMan, etc.) FMV frame on a 20 GFLOP Pentium CPU took around 3.3 seconds, and you can render the same frame in just 33ms on RSX, then that ~100x difference in rendering time would have let you say the RSX delivers 100x the performance of 20 GFLOPS, which gives you your 2TF-of-CPU-compute statement.

Since that presentation, GPUs have gained their own independent FLOPS metric, so they are no longer indirectly rated by their effectiveness relative to CPU FLOPS.
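Putting that "CPU-equivalent FLOPS" framing into a quick sketch - the frame times and the 20 GFLOPS CPU figure are illustrative assumptions, not measurements:

```python
# The "GPU performance expressed as CPU FLOPS" style of claim described above.
cpu_gflops = 20.0         # hypothetical ~20 GFLOPS CPU of the era
cpu_frame_time_s = 3.3    # assumed time to software-render one frame on the CPU
gpu_frame_time_s = 0.033  # assumed time for RSX to render the same frame (30 fps)

speedup = cpu_frame_time_s / gpu_frame_time_s   # ~100x
equivalent_tf = cpu_gflops * speedup / 1000.0   # "CPU-equivalent" TFLOPS
print(f"~{speedup:.0f}x faster -> ~{equivalent_tf:.0f} TF of 'CPU-equivalent' compute")
```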
 