
Beyond3D Xenos Article

thorns said:
most of it is too much detail for me, but some quotes:



so Xenos is more like 382M transistors.


very interesting. Regardless of whether it is 332M or ~382M, it is going to be more than PS3's RSX, unless Nvidia was downplaying the transistor count or decides to beef up RSX in the final design.



also, this caught my eye

MEMEXPORT

In addition to its other capabilities Xenos has a special instruction which is presently unique to this graphics processor and may not necessarily even be available in WGF2.0 and this is the MEMEXPORT function. In simple terms the MEMEXPORT function is a method by which Xenos can push and pull vectorised data directly to and from system RAM. This becomes very useful with vertex shader programs as with the capabilities to scatter and gather to and from system RAM the graphics processor suddenly becomes a very wide processor for general purpose floating point operations. For instance, a shader operation could be run with the results passed out to memory, and then another shader could be performed on the output of the first shader, with the first shader's results becoming the input to the subsequent shader.

MEMEXPORT expands the graphics pipeline further forward and in a general purpose and programmable way. For instance, one example of its operation could be to tessellate an object as well as to skin it by applying a shader to a vertex buffer, writing the results to memory as another vertex buffer, then using that buffer run a tessellation render, then run another vertex shader on that for skinning. MEMEXPORT could potentially be used to provide input to the tessellation unit itself by running a shader that calculates the tessellation factor by transforming the edges to screen space and then calculates the tessellation factor on each of the edges dependant on its screen space and feeds those results into the tessellation unit, resulting in a dynamic, screen space based tessellation routine. Other examples for its use could be to provide image based operations such as compositing, animating particles, or even operations that can alternate between the CPU and graphics processor.

With the capability to fetch from anywhere in memory, perform arbitrary ALU operations and write the results back to memory, in conjunction with the raw floating point performance of the large shader ALU array, the MEMEXPORT facility does have the capability to achieve a wide range of fairly complex and general purpose operations; basically any operation that can be mapped to a wide SIMD array can be fairly efficiently achieved and in comparison to previous graphics pipelines it is achieved in fewer cycles and with lower latencies. For instance, this is probably the first time that general purpose physics calculation would be achievable, with a reasonable degree of success, on a graphics processor and is a big step towards the graphics processor becoming much more like a vector co-processor to the CPU.

Seeing as MEMEXPORT operates over the unified shader array the capability is also available to pixel shader programs, however the data would be represented without colour or Z information which is likely to limit its usefulness.

ATI indicate that MEMEXPORT functions can still operate in parallel with both vertex fetch and filtered texture operations.


so Xenos has at least one feature that might not be present even in WGF2.0 | Shader Model 4.0 | DirectX 10 | DirectX Next
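
To make the scatter/gather idea a bit more concrete, here is a rough CPU-side analogy in Python/NumPy (not actual Xenos shader code; the buffer sizes and names are made up for illustration): one pass writes its results out to arbitrary locations in a memory pool, and a second pass gathers from those same locations as its input, which is essentially the shader chaining the article describes.

```python
import numpy as np

# CPU-side analogy only (NumPy), not Xenos shader code: pass 1 runs over a
# vertex buffer and scatters its results to arbitrary addresses in a memory
# pool; pass 2 gathers from those addresses as its input, the way MEMEXPORT
# lets one shader's output feed a subsequent shader via system RAM.
verts = np.random.rand(1024, 4).astype(np.float32)    # input vertex buffer
pool  = np.zeros((4096, 4), dtype=np.float32)          # stand-in for system RAM
addrs = np.random.permutation(4096)[:1024]             # arbitrary write locations

pool[addrs] = verts * 2.0          # pass 1: "shader" result scattered to memory
out = pool[addrs] + 1.0            # pass 2: gather pass 1's output as input
print(out.shape)                   # (1024, 4)
```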
 

MetalAlien

Banned
midnightguy said:
very interesting. Regardless of whether it is 332M or ~382M, it is going to be more than PS3's RSX, unless Nvidia was downplaying the transistor count or decides to beef up RSX in the final design.

Yea but by their own admission 80m of whatever number it turns out to be is memory. On the PS3 it's logic.
 

TheDuce22

Banned
I was referring to this post.

they go to all the trouble of having eDRAM and it still has to write the frame/tile to the main RAM... and then write that to a separate display processor? And they were trying to say the PS3 would have bandwidth troubles... Grrr...

I don't know if it's true or not, I just figured since it took so long for someone to say something negative, especially given the bias certain people have, Xenos must be an awesome piece of equipment.
 

MetalAlien

Banned
TheDuce22 said:
I was referring to this post.



I don't know if it's true or not, I just figured since it took so long for someone to say something negative, especially given the bias certain people have, Xenos must be an awesome piece of equipment.


Oh that... Well yea it does piss me off that they have something as useful as edram, and then they design it to pass that information around 3 or 4 times over the much slower busses before it can be displayed... what the hell kind of design is that?

Hey it's going to look great so who cares right? I was just saying...
 

dorio

Banned
MetalAlien said:
Oh that... Well yea it does piss me off that they have something as useful as edram, and then they design it to pass that information around 3 or 4 times over the much slower busses before it can be displayed... what the hell kind of design is that?

Hey it's going to look great so who cares right? I was just saying...
I think they looked at bus requirements and designed the unit so that the bus allocations are placed where they're most needed. Transferring a 3rd of a frame between the daughter and parent die isn't that bandwidth-consuming for that part of the pathway, and tiling allows them to get 4X AA for free at high-def resolutions.

It seems like, at least on paper (which is kind of meaningless), MS came out with some nice tech for a console first out of the gate.

It would be nice to get a diagram of the traditional pipeline vs. the Xenos "pipeline", because I didn't understand a lot of that.
 

MetalAlien

Banned
dorio said:
I think they looked at bus requirements and designed the unit so that the bus allocations are placed where they're most needed. Transferring a 3rd of a frame between the daughter and parent die isn't that bandwidth-consuming for that part of the pathway, and tiling allows them to get 4X AA for free at high-def resolutions.

It seems like, at least on paper (which is kind of meaningless), MS came out with some nice tech for a console first out of the gate.

It would be nice to get a diagram of the traditional pipeline vs. the Xenos "pipeline", because I didn't understand a lot of that.


Yea but that's it though: because it must be written to the main RAM and then to a display processor (not the GPU BTW), it's eating bandwidth for nothing. I mean it will all work out in the end and it will be the bomb I'm sure.... but to take 3 steps forward and 2 back...

It just seems they took a different (more complex... sort of) route only to end up right where Sony will be...

EDIT: anyway, I'm going to shut up now because these machines are going to be so ridiculously powerful I can't argue specs with a straight face.. it's just pointless.. It's like trying to say which will do more damage, a 10 megaton bomb or a 12.... both will destroy any city on earth... LOL
 

MetalAlien

Banned
midnightguy said:
the only thing that seriously bothers me about both PS3 and Xbox 360 is their 128-bit main memory busses.


They explain that, saying it allows them to reduce the size of the chips as the console ages... the pins needed for a 256-bit wide bus being the problem... that's what they say anyway..

EDIT: here ya go..

http://www.beyond3d.com/articles/xenos/index.php?p=03

Both XBOX 360 and Playstation 3 feature UMA and graphics busses, respectively, that have been announced to use fairly fast 700MHz GDDR3 memory, but both only have a 128-bit interface. This is less of a surprise for XBOX 360, as Xenos's use of eDRAM will move the vast majority of the frame buffer bandwidth to the eDRAM interface, leaving the system memory bandwidth available primarily for texturing. It does seem odd that by the time the consoles are released the likelihood is that high end PC graphics will be using at least the same speed RAM but on double-wide busses. The primary issue here is, again, one of cost - the lifetime of a console will be much greater than that of PC graphics and process shrinks are used to reduce the costs of the internal components; 256-bit busses may actually prevent process shrinks beyond a certain level, as the number of pins required to support busses this wide could quickly become pad limited as the die size is reduced. 128-bit busses result in far fewer pins than 256-bit busses, thus allowing the chip to shrink to smaller die sizes before becoming pad limited - by this point it is also likely that Xenos's daughter die will have been integrated into the shader core, further reducing the number of pins that are required.
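
For a rough sense of what that 128-bit, 700MHz GDDR3 interface works out to, here is a back-of-the-envelope peak-bandwidth calculation in Python (assuming the usual double-data-rate transfers; sustained bandwidth will be lower):

```python
# Back-of-the-envelope peak bandwidth for a 128-bit GDDR3 interface at 700MHz.
# GDDR3 transfers data on both clock edges (double data rate).
bus_bits = 128
clock_hz = 700e6
transfers_per_clock = 2
bytes_per_transfer = bus_bits / 8

peak_gb_s = clock_hz * transfers_per_clock * bytes_per_transfer / 1e9
print(peak_gb_s)   # 22.4 GB/s of peak system memory bandwidth
```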
 

Lazy8s

The ghost of Dreamcast past
Fafalada:
What do you think Tile buffers are then?
Tile buffers don't have to hold a whole framebuffer. With the need for framebuffers to be large for good image quality and the need for embedded memory to be small for chip production practicality, trying to make room for a whole frame on a chip can lead to undesired compromise. Tile buffers, on the other hand, only have to deal with a small amount of data, so they can always use high precision and still not make the chip excessively complicated with lots of embedded memory. Tile accelerated rendering with tile buffers is a bandwidth saving approach which doesn't have to compromise quality or cost.
 
Hajaz said:
perceptions certainly have changed since Sony's initial "twice as powerful" claim at E3 :lol
Amongst internet chatting game geeks - myself included of course :) . The rest of the world remains blissfully ignorant, believing that PS3 has twice the power of X360. That perception will have a long shelf life I think, at least 12 months anyways.
 

kaching

"GAF's biggest wanker"
Hajaz said:
perceptions certainly have changed since Sony's initial "twice as powerful" claim at E3 :lol
And perceptions are all you're going to have until we know the full story on both pieces of hardware.
 

MetalAlien

Banned
Hajaz said:
perceptions certainly have changed since Sony's initial "twice as powerful" claim at E3 :lol


Well, Sony does seem to blow up what the numbers on paper mean; they can find a way to make it seem more powerful.... But the other side of that is that (so far) the demos they actually show have proven (for 2 generations) to be spot on...

With the PS2, as soon as the actual specs were released, especially the RAM specs, everyone knew it was going to be very hard to make the graphics look smooth. It seems the PS2 was built around 480i. They also seemed to bank everything on polygons and effects, but not high res textures (no real time compression to speak of, which is pretty odd I think). When the XB came out (supporting progressive scan graphics with little effort) developers had to rethink everything and shoot for resolutions the PS2 wasn't meant to do easily. I just don't think Sony thought people would care about jaggies. It was as if they suspected this machine would only be plugged into small standard definition TVs. :-/

So far this upcoming gen, neither side seems to have a chink in its armor.... It's going to be a street brawl this time because the playing fields will be level.
 

Kleegamefan

K. LEE GAIDEN
Not only does it not compare Xenos with PS3....it doesn't compare Xenos with *anything*


It just describes the features of the chip, in great, great detail...

We won't be able to benchmark it against anything else for a while and benchmarking Xenos will be important since it is so different from other GPUs...
 

dorio

Banned
So does the article answer the question posed by KK?

For example, some question where will the results from the vertex processing be placed, and how will it be sent to the shader for pixel processing. If one point gets clogged, everything is going to get stalled.
 

Pug

Member
Dorio, KK is not, in an interview, going to say that Xenos is an excellent unified GPU. He will try and pick holes in it, and to a certain extent that's his job. The fact is you shouldn't believe everything he says. From the B3D article posted by NoA,

"ATI, probably understandably, weren't too keen on giving many details out in regards to the prioritisation methodology, probably because there is some fairly proprietary logic behind it, but also because for the most part you shouldn't need to know much about it other than "it happens". From ATI's comments it sounds like a fairly complicated procedure, but conceptually it appears to monitor the vertex buffer and pixel export buffer (just before the transfer to the daughter die) and, depending on application program mix, there is an equation that prioritises between pixel shading and vertex shading dependant on the size of the buffers and how full they are."
 

mrklaw

MrArseFace
MetalAlien said:
Oh that... Well yea it does piss me off that they have something as useful as edram, and then they design it to pass that information around 3 or 4 times over the much slower busses before it can be displayed... what the hell kind of design is that?

Hey it's going to look great so who cares right? I was just saying...


Everything is a compromise.

They save a *lot* of bandwidth by having the FSAA stuff and pixel writes etc on the eDRAM. They only use external RAM when writing the final frame buffer out to main memory. Relatively speaking that's a tiny amount of bandwidth, so it's a reasonable compromise to make. And if they have to tile for hi-res and FSAA, they'd need to write out anyway (or put in 40MB of eDRAM, which isn't practical)

It's a very neat solution.
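
To put rough numbers on why tiling is needed at all (a back-of-the-envelope estimate assuming 720p with 4x multisampling and 32-bit colour plus 32-bit Z/stencil per sample):

```python
# Rough estimate: a 720p, 4xAA framebuffer with 32-bit colour and 32-bit
# Z/stencil per sample is far bigger than 10MB of eDRAM, hence rendering
# in roughly three tiles.
width, height = 1280, 720
samples = 4                     # 4x multisampling
bytes_per_sample = 4 + 4        # colour + Z/stencil

fb_mib = width * height * samples * bytes_per_sample / 2**20
print(fb_mib)                   # ~28.1 MiB, versus 10MB of eDRAM
```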
 

Nostromo

Member
Lazy8s said:
Tile buffers don't have to hold a whole framebuffer
Neither does Xenos's eDRAM/tile buffer have to hold a whole framebuffer.
With the need for framebuffers to be large for good image quality and the need for embedded memory to be small for chip production practicality, trying to make room for a whole frame on a chip can lead to undesired compromise.
Every architecture has to make compromises, since no one can afford unlimited complexity/transistor count.
Tile buffers, on the other hand, only have to deal with a small amount of data, so they can always use high precision and still not make the chip excessively complicated with lots of embedded memory.
A binning engine doesn't come for free, you know.

Tile accelerated rendering with tile buffers is a bandwidth saving approach which doesn't have to compromise quality or cost.
If we take this sentence as it is, there is absolutely no difference between Xenos and a PowerVR GPU.
Xenos uses a bigger tile buffer, that's all.
Xenos is not caching geometry; it just tags drawing commands (while it's populating the Z-buffer) with tile tags, in order to skip/cull drawing commands on a per-tile basis (it would not make any sense to cache all the geometry and bin at the primitive level with such a big tile buffer). Only the stuff that crosses tiles is reprocessed, but with such huge tiles, most of the time groups of primitives are going to lie within a single tile.
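
A crude sketch of that per-tile command culling idea in Python (illustration only, not real driver code; the tile size and command names are invented): each draw command gets tagged with the screen tiles its bounds overlap, and when a tile is rendered only the commands tagged with that tile are replayed.

```python
# Crude illustration of per-tile command culling (not real driver code):
# commands are tagged with the tiles their screen bounds overlap, and each
# tile pass skips commands that don't touch that tile.
def tiles_touched(xmin, ymin, xmax, ymax, tile_w, tile_h):
    return {(tx, ty)
            for tx in range(xmin // tile_w, xmax // tile_w + 1)
            for ty in range(ymin // tile_h, ymax // tile_h + 1)}

# 1280x720 screen split into three 1280x240 tiles stacked vertically.
commands = [
    ("draw_terrain", tiles_touched(0, 0, 1279, 719, 1280, 240)),    # spans all tiles
    ("draw_hud",     tiles_touched(0, 600, 1279, 719, 1280, 240)),  # bottom tile only
]

for tile in [(0, 0), (0, 1), (0, 2)]:
    replayed = [name for name, tags in commands if tile in tags]
    print(tile, replayed)
```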
 

dorio

Banned
Pug said:
Dorio, KK is not, in an interview, going to say that Xenos is an excellent unified GPU. He will try and pick holes in it, and to a certain extent that's his job. The fact is you shouldn't believe everything he says. From the B3D article posted by NoA,

"ATI, probably understandably, weren't too keen on giving many details out in regards to the prioritisation methodology, probably because there is some fairly proprietary logic behind it, but also because for the most part you shouldn't need to know much about it other than "it happens". From ATI's comments it sounds like a fairly complicated procedure, but conceptually it appears to monitor the vertex buffer and pixel export buffer (just before the transfer to the daughter die) and, depending on application program mix, there is an equation that prioritises between pixel shading and vertex shading dependant on the size of the buffers and how full they are."
I realize that, I was just trying to find out the solution to the problem he presented. I read the article; I was just trying to confirm my understanding of it. Thanks for the info.
 

Nostromo

Member
HokieJoe said:
So enlighten me some more here. What is the significance of this?
Current GPUs can't write data wherever you want: the GPU just walks the pixels that lie inside a primitive (a triangle) and executes a shader for each pixel, with the results written out to those pixels. You can't write data wherever you like, in any quantity you desire (even if SM2.0 lets you have up to 4 render targets at the same time).
Now Xenos can act as a general purpose processor here: it can read/write data wherever a shader dictates, through the MEMEXPORT mechanism.
This new feature makes it possible to implement on Xenos a whole world of algorithms that were never implemented on a GPU before (because it was simply not possible, or too tricky).
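
As another CPU-side analogy (again Python/NumPy, not shader code, with made-up sizes): without scatter, each result can only land at the pixel currently being shaded; with arbitrary writes, a single pass can do something like binning particles into grid cells, which the fixed "one output per shaded pixel" model can't express directly.

```python
import numpy as np

# CPU-side analogy of scattered writes, not shader code: one pass bins
# particles into grid cells at arbitrary addresses, the kind of operation a
# fixed "write to the pixel you're shading" pipeline can't express directly.
particles = np.random.rand(10000, 2).astype(np.float32)   # positions in [0, 1)
grid = np.zeros((16, 16), dtype=np.int32)

cells = (particles * 16).astype(np.int32)                  # cell index per particle
np.add.at(grid, (cells[:, 0], cells[:, 1]), 1)             # scattered accumulation
print(grid.sum())                                          # 10000
```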
 

MetalAlien

Banned
mrklaw said:
Everything is a compromise.

They save a *lot* of bandwidth by having the FSAA stuff and pixel writes etc on the eDRAM. They only use external RAM when writing the final frame buffer out to main memory. Relatively speaking that's a tiny amount of bandwidth, so it's a reasonable compromise to make. And if they have to tile for hi-res and FSAA, they'd need to write out anyway (or put in 40MB of eDRAM, which isn't practical)

It's a very neat solution.

I see what you're saying, but you did miss a step: once the frame is assembled it must pass over the bus once more to a display processor... which in the XB360 is not the GPU... I'm sure it only uses a tiny amount of bandwidth... but it just seems a weird way of doing it.
 
So...for those of us who have no idea what the hell is going on here, what does all this amount to? Is it safe to assume that initial Xbox360 shots don't SCREAM next gen due to just being early, or is it limitations of the hardware?

Or is it still too early to tell?
 

Hajaz

Member
morbidaza said:
So...for those of us who have no idea what the hell is going on here, what does all this amount to? Is it safe to assume that initial Xbox360 shots don't SCREAM next gen due to just being early, or is it limitations of the hardware?

Or is it still too early to tell?

read the conclusion of the article.

"It will be very interesting to see the performance and quality of graphics it is able to produce once developers have had decent access to development kits based on the final hardware, however we suspect that it won't be until the second generation of XBOX 360 titles before we see developers being able to seriously scratch the surface of understanding the processing capabilities of Xenos and the XBOX 360 as a whole"
 
Hajaz said:
read the conclusion of the article.

"It will be very interesting to see the performance and quality of graphics it is able to produce once developers have had decent access to development kits based on the final hardware, however we suspect that it won't be until the second generation of XBOX 360 titles before we see developers being able to seriously scratch the surface of understanding the processing capabilities of Xenos and the XBOX 360 as a whole"

I did, I was just hoping somebody could put it in concrete terms as far as referencing what we've seen so far and comparing the possibilities to previous generation gaps. Probably a bit too early for that though. We'll see in due time, I suppose.
 

HokieJoe

Member
Nostromo said:
Current GPUs can't write data wherever you want: the GPU just walks the pixels that lie inside a primitive (a triangle) and executes a shader for each pixel, with the results written out to those pixels. You can't write data wherever you like, in any quantity you desire (even if SM2.0 lets you have up to 4 render targets at the same time).
Now Xenos can act as a general purpose processor here: it can read/write data wherever a shader dictates, through the MEMEXPORT mechanism.
This new feature makes it possible to implement on Xenos a whole world of algorithms that were never implemented on a GPU before (because it was simply not possible, or too tricky).

Very cool, thanks for the clarification.

So how is ATI's use of PowerVR's tiling technique related to Microsoft's procedural synthesis?
 

Fafalada

Fafracer forever
So how is ATI's use of PowerVR's tiling technique related to Microsoft's procedural synthesis?
The tiling approach they use isn't PVR's, and it's not really related to procedural synthesis either. You should be looking at MEMEXPORT for connections to PS.

Speaking of which, since when was procedural synthesis "Microsoft's"? :)
 

Pimpwerx

Member
I don't think it's right to assume the PS3 will be twice the 360, but I don't think it's right to assume that they're gonna be even in power either. Cell already drubs the XeCPU. Ignoring bandwidth questions, RSX has a transistor and clock speed advantage, as well as a time advantage. RSX doesn't have to be massively more powerful than Xenos to be a hell of a chip. Just for an example, let's assume NVidia figured out how to make 64/128bit HDR useable at good framerates. That could translate to the lighting we saw in most of those PS3 demos. Most of what was seen in those videos was largely due to great lighting. There's so much debate about how much those videos represent the PS3 game graphics, but if we were to get that level of performance, I think it would constitute a noticeable step up in graphics, no? Until we know what RSX is packing, the graphics advantage could be anywhere from marginal to monumental. I think it can go either way.

BTW, there are compromises in Xenos beyond the eDRAM size. So devs will just code games to take advantage of its strengths. PEACE.
 

gofreak

GAF's Bob Woodward
Pug said:
Dorio, KK is not, in an interview, going to say that Xenos is an excellent unified GPU. He will try and pick holes in it, and to a certain extent that's his job. The fact is you shouldn't believe everything he says. From the B3D article posted by NoA,

"ATI, probably understandably, weren't too keen on giving many details out in regards to the prioritisation methodology, probably because there is some fairly proprietary logic behind it, but also because for the most part you shouldn't need to know much about it other than "it happens". From ATI's comments it sounds like a fairly complicated procedure, but conceptually it appears to monitor the vertex buffer and pixel export buffer (just before the transfer to the daughter die) and, depending on application program mix, there is an equation that prioritises between pixel shading and vertex shading dependant on the size of the buffers and how full they are."

I don't think this really answers KK's question though?
 

Lazy8s

The ghost of Dreamcast past
Nostromo:
Neither does Xenos's eDRAM/tile buffer have to hold a whole framebuffer.
I agree that they're tile buffers if they don't hold a whole backbuffer.

Being tile buffers, they could've been smaller and less costly to the chipset while still meeting the requirements of the job, if the tile accelerator had been strengthened to push through more, smaller tiles - but maybe Xenos's balance of embedded memory size and tiling will be good, like I was saying.
 

Hajaz

Member
Pimpwerx said:
I don't think it's right to assume the PS3 will be twice the 360, but I don't think it's right to assume that they're gonna be even in power either. Cell already drubs the XeCPU. Ignoring bandwidth questions, RSX has a transistor and clock speed advantage, as well as a time advantage. RSX doesn't have to be massively more powerful than Xenos to be a hell of a chip. Just for an example, let's assume NVidia figured out how to make 64/128bit HDR useable at good framerates. That could translate to the lighting we saw in most of those PS3 demos. Most of what was seen in those videos was largely due to great lighting. There's so much debate about how much those videos represent the PS3 game graphics, but if we were to get that level of performance, I think it would constitute a noticeable step up in graphics, no? Until we know what RSX is packing, the graphics advantage could be anywhere from marginal to monumental. I think it can go either way.

BTW, there are compromises in Xenos beyond the eDRAM size. So devs will just code games to take advantage of its strengths. PEACE.

well, that's one point of view.
ATI's point of view seems to be that the X360 will actually push better looking graphics than the PS3, due to Xenos's more flexible architecture.


Since I've never caught ATI in a lie, I tend to believe what they say.
Nvidia's credibility is *in my opinion* a lot worse.
First they announced NV30 as an 8-pipe part, while it actually only had 4 pipes, and then they did the whole 3DMark clipping planes / below-DX9-specs cheating thing. Not to mention they tried to discredit Futuremark.

The graphics difference could go either way indeed, but I wouldn't say it is definitely going to be in the PS3's favor
 

gofreak

GAF's Bob Woodward
Hajaz said:
Since I've never caught ATI in a lie, I tend to believe what they say.

:lol You'd expect them to tell the truth if their part wasn't as powerful? Like any company, they'll always present things in the best possible light. They're not going to say they're less powerful even if that's the case.

There are many questions remaining about relative power on both systems - Dave's article actually sheds little light on this issue, so there's not much new to work with here from that POV. We've a somewhat better idea in terms of capability, but little re. performance.
 
Pimpwerx said:
I don't think it's right to assume the PS3 will be twice the 360, but I don't think it's right to assume that they're gonna be even in power either. Cell already drubs the XeCPU. Ignoring bandwidth questions, RSX has a transistor and clock speed advantage, as well as a time advantage. RSX doesn't have to be massively more powerful than Xenos to be a hell of a chip. Just for an example, let's assume NVidia figured out how to make 64/128bit HDR useable at good framerates. That could translate to the lighting we saw in most of those PS3 demos. Most of what was seen in those videos was largely due to great lighting. There's so much debate about how much those videos represent the PS3 game graphics, but if we were to get that level of performance, I think it would constitute a noticeable step up in graphics, no? Until we know what RSX is packing, the graphics advantage could be anywhere from marginal to monumental. I think it can go either way.

BTW, there are compromises in Xenos beyond the eDRAM size. So devs will just code games to take advantage of its strengths. PEACE.


I don't think RSX has much of a transistor advantage, if any at all now.
RSX: 300M transistors, mostly logic and caches, no eDRAM.

Xenos is looking to be in the 350 to 382M range: 232M + 80M eDRAM, plus possibly up to 70M more of logic.


Yes, RSX will have some advantages over Xenos, and Xenos will have some advantages over RSX. I almost agree with J Allard that it is "a wash", at least as far as graphics goes.

and even though Cell trounces XeCPU in floating point, the XeCPU might have better general purpose CPU performance, integer performance. as far as pure cache, XeCPU has more: 1 MB - compared to Cell's 512K cache for the PPE. the 2 MB, er actually more like 1.8 MB (256K x 7) of Local Storage on Cell gives Cell more total memory than XeCPU, but LS has some disadvantages over cache, which XeCPU has more of.

ok enough rambling, it boils down to this: both PS3 and Xbox 360 are very powerful consoles, each will have advantages over the other and weaknesses compared to the other. It'll be up to developers to maximize the strengths of each, and hide the weaknesses.
 
Hajaz said:
well, that's one point of view.
ATI's point of view seems to be that the X360 will actually push better looking graphics than the PS3, due to Xenos's more flexible architecture.


Since I've never caught ATI in a lie, I tend to believe what they say.
Nvidia's credibility is *in my opinion* a lot worse.
First they announced NV30 as an 8-pipe part, while it actually only had 4 pipes, and then they did the whole 3DMark clipping planes / below-DX9-specs cheating thing. Not to mention they tried to discredit Futuremark.

The graphics difference could go either way indeed, but I wouldn't say it is definitely going to be in the PS3's favor

It's an efficiency issue more than a power issue. Both consoles will be extremely powerful; the question is will devs be able to tap that power at comparable cost and dev time while maintaining a level of graphical fidelity that equals the 360's. We know where the 360 stands in terms of both power and efficiency; with the RSX we know a little about one and nothing of the other.

At this point it would be silly to assume the PS3 has the right mix of power and efficiency to declare it might even be more powerful. The diplomatic choice of "they're about equal" is the safer course at this time.
 

gofreak

GAF's Bob Woodward
midnightguy said:
I don't think RSX has much of a transistor advantage, if any at all now.
RSX: 300M transistors, mostly logic and caches, no eDRAM.

Xenos is looking to be in the 380M range. 232M + 70M worth of logic, 80M eDRAM

This is based on a guess..

I could believe it since such an amount of transistors is far more doable when the die is split, but I'm not sure how solid that figure is for now.

midnightguy said:
integer performance.

If you mean mathematical integer performance...no.

midnightguy said:
as far as pure cache, XeCPU has more: 1 MB - compared to Cell's 512K cache for the PPE.

That's 512KB for one PPE versus 1MB for 3 cores (yes, the SPEs can access the PPE cache, but I don't think they'll be using it too much).

midnightguy said:
the 2 MB, er actually more like 1.8 MB (256K x 7) of Local Storage on Cell gives Cell more total memory than XeCPU, but LS has some disadvantages over cache

And some advantages. The 2x size difference in total mem size aside.
 
gofreak said:
This is based on a guess..

I could believe it since such an amount of transistors is far more doable when the die is split, but I'm not sure how solid that figure is for now.

but it is the best and most informed guess that we have, being that DaveB and Beyond3D are pretty reliable and he has been in direct contact with ATI.

at the very least, it is looking more and more that RSX will not outclass Xenos.
 

gofreak

GAF's Bob Woodward
midnightguy said:
but it is the best and most informed guess that we have, being that DaveB and Beyond3D are pretty reliable and he has been in direct contact with ATI.

Dave's said himself that his guess wasn't borne out of any info from ATi, so..we'll see.

midnightguy said:
at the very least, it is looking more and more that RSX will not outclass Xenos.

Although you're not saying this is the case, I think it's worth mentioning again that it's FAR too early to say much outside of "mights and maybes" with regard to RSX/Xenos comparisons. Questions remain over Xenos, let alone, more obviously, RSX. I wouldn't be shocked if they emerged as well-matched contemporaries, but I wouldn't be surprised either if one was more powerful than the other. Many things are too uncertain at this stage.

I think it's also worth mentioning that the question of system capabilities with regard to graphics is a subtly different one from that regarding GPUs, much more so this gen than previously, which should make things interesting.
 

Hajaz

Member
gofreak said:
:lol You'd expect them to tell the truth if their part wasn't as powerful? Like any company, they'll always present things in the best possible light. They're not going to say they're less powerful even if that's the case.

Ya, but that goes 2 ways. For all we know PS3 might actually be weaker than X360 graphically. Sony or NV certainly wouldn't tell us.

You've got to admit that the lie about NV30's number of pipes, and the Futuremark scandal, were too over the top to be considered simply "presenting a product in the best possible way". That was outright misleading of consumers, and damaging to the reputation of another company (Futuremark).
 

MmmBeef

Member
Sal Paradise Jr said:
That's already been disproven a thousand times over. It's what is affectionately referred to as "Quack" in the community.

Ah really? Well, that's what I get for having a selective memory.
 