
Interesting interview with Xenos dev

Hajaz

Member
http://www.bit-tech.net/bits/2005/06/10/richard_huddy_ati/1.html

"That mere 10 per cent clock speed that RSX has on Xenos is easily countered by the unified shader architecture that we've implemented," he claimed. "Rather than separate pixel and vertex pipelines, we've created a single unified pipeline that can do both."

"Providing developers throw instructions at our architecture in the right way, Xenos can run at 100 per cent efficiency all the time, rather than having some pipeline instructions waiting for others," Huddy explained. "For comparison, most high-end PC chips run at 50-60% typical efficiency. The super cool point is that 'in the right way' just means 'give us plenty of work to do'. The hardware manages itself."


"This time around, they don't have the architecture and we do, so they have to knock it and say it isn't worthwhile," he said. "But in the future, they'll market themselves out of this corner, claiming that they've cracked how to do it best. But RSX isn't unified, and this is why I think PS3 will almost certainly be slower and less powerful."








wow

It's as I've said all along... ATI has the most talented designers nowadays.
 

DarienA

The black man everyone at Activision can agree on
This time around, they don't have the architecture and we do,

But I thought the beef this generation was that the PS2 didn't have the architecture? And that's why it was hard to program for?

Goddamn I'm so confused....
 

Elios83

Member
"This time around, they don't have the architecture and we do, so they have to knock it and say it isn't worthwhile," he said. "But in the future, they'll market themselves out of this corner, claiming that they've cracked how to do it best. But RSX isn't unified, and this is why I think PS3 will almost certainly be slower and less powerful."


So he's accusing nVidia of doing what they themselves have done with pixel shader 3.0 until now... with the difference that ATI still doesn't have a PS3.0 part on the market, as their R520 seems to be the new NV30.
As for the rest of the comments, of course they're biased; ask nVidia what they think about this and we'd get the opposite situation.
 

Hajaz

Member
DarienA said:
But I thought the beef this generation was that the PS2 didn't have the architecture? And that's why it was hard to program for?

Goddamn I'm so confused....

I think they're comparing to past nVidia architectures, which has nothing to do with the PS2.
 
DarienA said:
But I thought the beef this generation was that the PS2 didn't have the architecture? And that's why it was hard to program for?

Goddamn I'm so confused....
ATi didn't make the Xbox GPU, Nvidia didn't make the PS2 GPU. Either way, be confused. Maybe they just mean last "gen"'s PC graphics cards? No clue, really...
 
DarienA said:
But I thought the beef this generation was that the PS2 didn't have the architecture? And that's why it was hard to program for?

Goddamn I'm so confused....

I think this is more of an ATI/Nvidia thing than anything to do with Sony.
 

Hajaz

Member
Elios83 said:
"This time around, they don't have the architecture and we do, so they have to knock it and say it isn't worthwhile," he said. "But in the future, they'll market themselves out of this corner, claiming that they've cracked how to do it best. But RSX isn't unified, and this is why I think PS3 will almost certainly be slower and less powerful."


So he's accusing nVidia of doing what they themselves have done with pixel shader 3.0 until now... with the difference that ATI still doesn't have a PS3.0 part on the market, as their R520 seems to be the new NV30.
As for the rest of the comments, of course they're biased; ask nVidia what they think about this and we'd get the opposite situation.

R520 is done; working silicon has been demonstrated. It's an SM3.0 part. Neither G70 nor R520 is in the hands of consumers.


That said, nVidia has stated that it wishes to move to a unified shader architecture in the future, but that they just couldn't get it working for RSX or G70.
 

gofreak

GAF's Bob Woodward
"Providing developers throw instructions at our architecture in the right way, Xenos can run at 100 per cent efficiency all the time, rather than having some pipeline instructions waiting for others," Huddy explained.

An odd choice of words given that part of Xenos's claim to fame is that it's supposed to fit more around your workload rather than vice versa. I mean, the way to reach best performance on any architecture is to map your algo to the hardware and "throw instructions at it in the right way", but Xenos is supposed to be more flexible than usual.

I'm also not sure about reaching 100% efficiency - ordinary realities aside, I thought the division of processing wasn't possible on a per-ALU basis. Initially there were reports that the whole chip had to be doing either vertex or pixel shading at any single moment, then that any "pipe" (one of the 3 groups of 16 ALUs) had to be doing all vertex or all pixel work at any one time. That coarser level of granularity would make 100% utilisation more difficult.

Chittagong said:
Seems over to Sony

Of course, you're asking ATi :lol
 

Vennt

Unconfirmed Member
Summary:

ATI say ATI chip is "Da bomb"

Hold the front page... Give up & go home nVidia ;)
 

Elios83

Member
DarienA said:
But I thought the beef this generation was that the PS2 didn't have the architecture? And that's why it was hard to program for?

Goddamn I'm so confused....

He's not talking about PS2.
He's saying that last time nVidia bashed them because they didn't have PS3.0, and now they can do the same to nVidia because nVidia doesn't have unified shaders.
 

akascream

Banned
An odd choice of words given that part of Xenos's claim to fame is that it's supposed to fit more around your workload rather than vice versa. I mean, the way to reach best performance on any architecture is to map your algo to the hardware and "throw instructions at it in the right way", but Xenos is supposed to be more flexible than usual.

There are probably just a few constraints to keep in mind so that the shader operation scheduling runs smoothly.
 

DarienA

The black man everyone at Activision can agree on
Oh so this is still an ATI/nVidia pissing fight... great.... just great...
 

gofreak

GAF's Bob Woodward
akascream said:
There are probably just a few constraints to keep in mind so that the shader operation scheduling runs smoothly.

Sure, but I didn't think he really got the message across the way he should have. He could have made more of a point about competing "fixed" architectures.

That's really Xenos's biggest selling point - flexibility to accommodate workload distributions efficiently beyond the "typical" enshrined in fixed architectures.

We'll have to see if it was a worthwhile gamble; there are certainly downsides as well as that upside, and such flexibility may be somewhat meaningless in the multiplatform world (but I suppose that's the case with most unique hardware aspects anyway!).
 

Mrbob

Member
gofreak said:
Sure, but I didn't think he really got the message across the way he should have. He could have made more of a point about competing "fixed" architectures.

That's really Xenos's biggest selling point - flexibility to accommodate workload distributions efficiently beyond the "typical" enshrined in fixed architectures.

We'll have to see if it was a worthwhile gamble; there are certainly downsides as well as that upside, and such flexibility may be somewhat meaningless in the multiplatform world (but I suppose that's the case with most unique hardware aspects anyway!).

Well, hopefully it translates into it being relatively easy for developers to take advantage of the X360 specs in multi-platform games.
 

gofreak

GAF's Bob Woodward
Mrbob said:
Well, hopefully it translates into it being relatively easy for developers to take advantage of the X360 specs in multi-platform games.

What it would translate into is that if a dev needs a particular proportion of vertex processing and a particular proportion of pixel processing, it should map more easily to Xenos's architecture, since at some level of granularity its execution units can be assigned to do either (although at what level now seems to be up in the air, so it may be more or less limiting). With something like RSX, it's best if your proportions match what's enshrined in the hardware. The reason I mention multiplatform games is that, for those, to get the best performance on both consoles it may make more sense to design around the "fixed" platform and then translate that over to the other, since the other doesn't mind what proportions you give it (but the fixed one does... and again, "doesn't mind" comes with caveats; it may not be as flexible as people first envisioned if you can't distribute work arbitrarily per ALU).

But yeah, it may be somewhat easier if you have a distribution of processing that's beyond the "typical" that fixed architectures aim to accommodate. How much easier is hard to tell until certain questions are answered (again, about at what level you can make the split between vertex and pixel processing).
 

Lord Error

Insane For Sony
This interview was put more in layman's terms than some hardcore stuff. I think anyone could make a point for ATI the way he did, just as anyone could make a counterpoint, considering that, going by spec sheets, RSX performs (a lot) more shader instructions per second.

Ati's own R520 is not a unified architecture, but will most likely be faster, or at least on par with R500 in performance.
 

Hajaz

Member
Marconelly said:
This interview was put more in layman's terms than some hardcore stuff.
much like your reply then ;)

Didn't ATI state that R500 was more advanced than R520? Didn't both ATI and NV state that they intend to move to a unified shader architecture in the future for their PC parts?
 

gofreak

GAF's Bob Woodward
Hajaz said:
Didn't ATI state that R500 was more advanced than R520?

The part uses an architecture of a next-next-gen chip. Don't confuse that with the performance of a next-next-gen chip. R520 will very likely have more raw horsepower than R500.

R500 arranges its power differently from R520, for want of a better expression. That doesn't mean it has more power, it likely doesn't (at least in most areas).
 

Elios83

Member
Hajaz said:
much like your reply then ;)

Didn't ATI state that R500 was more advanced than R520? Didn't both ATI and NV state that they intend to move to a unified shader architecture in the future for their PC parts?

nVidia seems to think that having dedicated pipelines for vertex and pixel shaders is a better thing performance-wise. But of course, since Microsoft has decided with WGF2.0 that the future will be unified shaders, they can't avoid implementing them when Longhorn is released.
 

Hajaz

Member
Hm? But if there's any truth to the above article, more raw power doesn't mean much, does it? The guy pretty much says developers only reach 60% efficiency on architectures like R520/G70.
 

gofreak

GAF's Bob Woodward
Elios83 said:
nVidia seems to think that having dedicated pipelines for vertex and pixel shaders is a better thing. But of course, since Microsoft has decided with WGF2.0 that the future will be unified shaders, they can't avoid implementing them when Longhorn is released.

Someone can correct me if I'm wrong on this, but I believe NVidia negotiated a compromise whereby unified shaders can be limited to the software layer; on the hardware layer, they can be discrete pixel or vertex units if they so wish, just as long as the hardware appears unified to the programmer. So when they feel the performance/flexibility tradeoff is worthwhile, they can migrate to unified shaders in hardware.
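
To make that distinction concrete, here's a purely schematic sketch of the idea (all class and method names are invented for illustration and have nothing to do with the actual WGF2.0 API): the programmer-facing layer exposes a single "unified" way to submit shader work, while the backend underneath can be either genuinely unified or split into discrete vertex and pixel units.

```python
# Schematic sketch only - invented names, not a real graphics API.
# The point: "unified" can live in the software layer even if the
# hardware underneath still has separate vertex and pixel units.

class DiscreteHardware:
    """Hypothetical chip with separate vertex and pixel units."""
    def run_vertex(self, job): print(f"vertex unit runs {job}")
    def run_pixel(self, job):  print(f"pixel unit runs {job}")

class UnifiedHardware:
    """Hypothetical chip where any ALU takes either kind of job."""
    def run(self, job):        print(f"unified ALU runs {job}")

class UnifiedShaderAPI:
    """What the programmer sees: one submit() call, no unit types exposed."""
    def __init__(self, hw):
        self.hw = hw
    def submit(self, job, kind):
        if isinstance(self.hw, UnifiedHardware):
            self.hw.run(job)                    # truly unified underneath
        else:                                   # driver hides the fixed split
            (self.hw.run_vertex if kind == "vertex" else self.hw.run_pixel)(job)

# Same program, either backend - the "unification" lives above the hardware.
for hw in (DiscreteHardware(), UnifiedHardware()):
    api = UnifiedShaderAPI(hw)
    api.submit("skinning shader", "vertex")
    api.submit("lighting shader", "pixel")
```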
 

Hajaz

Member
Elios83 said:
nVidia seems to think that having dedicated pipelines for vertex and pixel shaders is a better thing performance-wise. But of course, since Microsoft has decided with WGF2.0 that the future will be unified shaders, they can't avoid implementing them when Longhorn is released.

That's not what I read in the AnandTech interview with nVidia two weeks ago. It said something along the lines of:
"we tried unified shaders but we just couldn't get it working; we'll get it working in the future though"

That's quite different from

"we think unified shaders are poo but we'll be forced to use them because of MS"
 

gofreak

GAF's Bob Woodward
Hajaz said:
Hm? But if there's any truth to the above article, more raw power doesn't mean much, does it? The guy pretty much says developers only reach 60% efficiency on architectures like R520/G70.

It depends on the proportions of your vertex processing vs your pixel processing. I'm sure there are some games where the distribution used compromises efficiency. But "fixed" architectures do design around the "typical" distributions.

And efficiency issues in PC cards relate to many factors beyond workload distributions too..

If you map your work directly to the hardware, fixed or not, you'll get as close to its best performance as is possible. Xenos will see a lower drop in utilisation vs a fixed architecture if you don't map to the hardware, however.

BTW, on a side note relating to what I was saying earlier, apparently the division of labour between vertex and pixel shading isn't made on a per-ALU basis, but on a per-"pipe" basis (the 3 pipes of 16 ALUs each). So while more flexible, it's not arbitrarily so. Some issues are still up in the air though.
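
To illustrate why that granularity matters, here's a toy utilisation model - all the unit counts and the 30/70 workload split are made-up assumptions, not real chip data - comparing a fixed split of units, a unified design that can reassign work per ALU, and a unified design that can only reassign whole 16-ALU pipes.

```python
# Toy utilisation model - hypothetical numbers, not real chip data.
# Total shader work per frame is normalised to 1.0; `vertex_share` of it is
# vertex work, the rest is pixel work. Each unit does 1 unit of work per tick.

def utilisation(vertex_share, vertex_units, pixel_units):
    """Fraction of total ALU capacity doing useful work for a static split."""
    t_vertex = vertex_share / vertex_units        # time to finish vertex work
    t_pixel = (1 - vertex_share) / pixel_units    # time to finish pixel work
    frame_time = max(t_vertex, t_pixel)           # slower side gates the frame
    capacity = frame_time * (vertex_units + pixel_units)
    return 1.0 / capacity                         # useful work / capacity spent

workload = 0.30   # assume 30% vertex, 70% pixel work this frame

# (a) fixed architecture: 8 dedicated vertex units + 24 dedicated pixel units
print("fixed 8v+24p      :", round(utilisation(workload, 8, 24), 2))   # ~0.83

# (b) unified, per-ALU assignment: any of 48 ALUs can take either kind of
#     work, so nothing idles regardless of the split -> utilisation ~1.0
print("unified, per-ALU  :", 1.0)

# (c) unified but per-pipe: 3 pipes of 16 ALUs, each pipe all-vertex or
#     all-pixel. Best static assignment for a 30/70 split is 1 pipe vs 2.
print("unified, per-pipe :", round(utilisation(workload, 16, 32), 2))  # ~0.95
```

(A real scheduler would reassign pipes from batch to batch rather than once per frame, so the per-pipe penalty would be smaller in practice; the point is just the direction of the effect.)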
 

Elios83

Member
Hajaz said:
Hm? But if there's any truth to the above article, more raw power doesn't mean much, does it? The guy pretty much says developers only reach 60% efficiency on architectures like R520/G70.

PC and consoles are completely different architectures.
PC has a lot of bottlenecks and has the limit of the HAL, so I don't think it's correct to apply efficiency numbers from a PC environment when discussing consoles.
 

Lord Error

Insane For Sony
Hm? But if there's any truth to the above article, more raw power doesn't mean much, does it? The guy pretty much says developers only reach 60% efficiency on architectures like R520/G70.
The only comparison we have to go off right now was the Ruby demo at E3, quickly ported by ATI people from R520 to the (close to finished) R500, and it didn't quite run at the rate I'd expect it to run at on R520. It was 30FPS with some framedrops and no AA. I know this is not entirely the greatest comparison, but it gives some indication that R520 will at least match or outperform R500 despite any raw power/efficiency ratios.

Well... IMO nVidia chips always were weaker than ATI.
The 6800 series has better performance than ATI's X800 series. Tables turn quickly in these things, but I think Nvidia should sink into the earth from shame if they can't deliver a better-performing chip with six more months of dev time. Of course, specs and numbers alone won't tell much this time. The real test will be multiplatform games running under the Unreal 3 engine or something like that.
 

Elios83

Member
Hajaz said:
That's not what I read in the AnandTech interview with nVidia two weeks ago. It said something along the lines of:
"we tried unified shaders but we just couldn't get it working; we'll get it working in the future though"

That's quite different from

"we think unified shaders are poo but we'll be forced to use them because of MS"


http://www.extremetech.com/article2/0,1558,1745060,00.asp

David Kirk, Chief Scientist of nVidia, is not sure unified shaders are the right solution for the best performance, because he thinks the two operations need different optimizations.
The part about nVidia being forced to implement them anyway because of Microsoft and WGF2.0 is my personal consideration, sorry if that wasn't clear. I think nVidia clearly can't go against Microsoft's decisions on the future even if it doesn't agree, unless they want to act like 3dfx, which didn't care about the DX feature set and implemented the features they wanted.
 

akascream

Banned
Marconelly said:
Tables turn quickly in these things, but I think Nvidia should sink into the earth from shame if they can't deliver a better-performing chip with six more months of dev time.


Do you mean six more months of dev time... or a six-month technological advantage? It could very well be that nvidia was brought into this at 'the last minute', so to speak, and hasn't had as much time as ATI has had to put together an elegant solution for their respective console. The dual video outs are a big indication of this.

David Kirk, Chief Scientist of nVidia, is not sure unified shaders are the right solution for the best performance, because he thinks the two operations need different optimizations.
The part about nVidia being forced to implement them anyway because of Microsoft and WGF2.0 is my personal consideration, sorry if that wasn't clear. I think nVidia clearly can't go against Microsoft's decisions on the future unless they want to act like 3dfx, which didn't care about the DX feature set and implemented the features they wanted.

AFAIK, unified shaders aren't a prerequisite for WGF2.0 compliance.
 

Pimpwerx

Member
Hajaz said:
That's not what I read in the AnandTech interview with nVidia two weeks ago. It said something along the lines of:
"we tried unified shaders but we just couldn't get it working; we'll get it working in the future though"

That's quite different from

"we think unified shaders are poo but we'll be forced to use them because of MS"

I believe it was that they've been considering unified shaders for some time, but haven't gotten the performance up to snuff to beat a traditional arrangement of separate VS and PS pipes. That's different from not being able to get it to work. NVidia has been on the ball with every new tech revision, so I don't see why that would have changed now. Consoles are closed boxes. Unified shaders aren't a major necessity when devs know exactly what they're working with. Besides, if Cell's SPEs are running Cg shaders, then you need that flexibility even less, since you've got a major vertex shading supply on your CPU. PEACE.
 

Lord Error

Insane For Sony
Gofreak, clean up your private message box :)

It could very well be that nvidia was brought into this at 'the last minute', so to speak, and hasn't had as much time as ATI has had to put together an elegant solution for their respective console. The dual outputs are a big indication of this.
Being the exact opposite of elegant never stopped Xbox from being the most powerful console. Six months is not a short time. It's not exactly a long time either, but it would certainly paint a picture of incompetence on Nvidia's part if they don't deliver a superior product (especially after all the boasting of superiority they themselves did).
 
Marconelly said:
The 6800 series has better performance than ATI's X800 series. Tables turn quickly in these things, but I think Nvidia should sink into the earth from shame if they can't deliver a better-performing chip with six more months of dev time. Of course, specs and numbers alone won't tell much this time. The real test will be multiplatform games running under the Unreal 3 engine or something like that.


The real test will be from an engine optimized for Nvidia hardware from a company that has been heavily pro-Nvidia for what seems forever? I'll wait for more objective sources. :)
 

Pimpwerx

Member
akascream said:
Do you mean six more months of dev time... or a six-month technological advantage? It could very well be that nvidia was brought into this at 'the last minute', so to speak, and hasn't had as much time as ATI has had to put together an elegant solution for their respective console. The dual video outs are a big indication of this.

Dual outputs are evidence of nothing. NVidia and Sony have both claimed to have put over 18 months of time into this project. And that's ignoring the collaboration they had before the RSX project began. AFAIK, MS and ATI have only been working on Xenos for a bit over 2 years. If it was such a rush job, you'd think NVidia would have provided a larger team to Sony as well. Everything about the specifics of the deal points to it being that Sony preferred the NVidia design and went with it. There's no reason they couldn't have gotten a custom job if it was necessary. Besides which, they keep saying this is a custom design anyway. If it's got a different pipeline arrangement from the G70, what will people say? It's already clocked higher, with a supposedly higher transistor count. It's supposedly got some special HDR implementation. So how custom does a part need to be to be a custom part? It's not gonna have eDRAM or unified shaders, but that's never been a prerequisite for a custom (or at least modified) design. PEACE.
 

Elios83

Member
Pimpwerx said:
Dual outputs are evidence of nothing. NVidia and Sony have both claimed to have put over 18 months of time into this project. And that's ignoring the collaboration they had before the RSX project began. AFAIK, MS and ATI have only been working on Xenos for a bit over 2 years. If it was such a rush job, you'd think NVidia would have provided a larger team to Sony as well. Everything about the specifics of the deal points to it being that Sony preferred the NVidia design and went with it. There's no reason they couldn't have gotten a custom job if it was necessary. Besides which, they keep saying this is a custom design anyway. If it's got a different pipeline arrangement from the G70, what will people say? It's already clocked higher, with a supposedly higher transistor count. It's supposedly got some special HDR implementation. So how custom does a part need to be to be a custom part? It's not gonna have eDRAM or unified shaders, but that's never been a prerequisite for a custom (or at least modified) design. PEACE.

Agreed
 

thorns

Banned
Off topic, but all evidence suggests that nVidia started working with Sony on the PS3 quite late, even though they say otherwise.
 

gofreak

GAF's Bob Woodward
akascream said:
Do you mean six months of more dev time?...or a six month technological advantage. It could very well be that nvidia was brought into this at 'the last minute', so to speak, and haven't had as much time as ATI have to put together an elegant solution for thier respective console. The dual video outs is a big indication of this.

What they're putting into PS3 is more than the X months of dev time since Sony entered the equation. It's a variant of their PC chip, which had been worked on for quite some time, and while requirements between PC and console differ, the time given to customise it for the console is enough, I think. The Sony/NVidia deal was announced at the end of 2004; assuming they were working together seriously for 6 months prior to the announcement, which seems conservative if anything, that'll have been 18 months of work or more by the time PS3 goes into production. Whole chips (refreshes, certainly) are turned around in that time, let alone customisations. Not to mention Kutaragi's claim that his hand was in the design from the start.

And anyway, as already stated, elegance and "level of customisation" or whatever is a separate issue from performance and power.

edit - inbox clean, sorry marco
 

akascream

Banned
Marconelly said:
Gofreak, clean up your private message box :)


Being the exact opposite of elegant never stopped Xbox from being the most powerful console. Six months is not a short time. It's not exactly a long time either, but it would certainly paint a picture of incompetence on Nvidia's part if they don't deliver a superior product (especially after all the boasting of superiority they themselves did).


I didn't mean to imply that PS3 would be less powerful as a result. I was just confused about whether you were saying that nvidia will have 6 more months of time on their respective console project, or if you were pointing out the 6-months-later launch date of the PS3 (or both). I thought maybe you had some info on nvidia's involvement with the PS3.

Dual outputs are evidence of nothing.

You think dual video outs were actually part of the PS3 spec, and not a 'might as well'?
 

Lord Error

Insane For Sony
The real test will be from an engine optimized for Nvidia hardware from a company that has been heavily pro-Nvidia for what seems forever? I'll wait for more objective sources.
The U3 engine and Renderware are the two engines that we'll see used over and over in future multiplatform games, which is the only way I see to make any kind of direct comparison.
 
ATI v Nvidia: RSX, PS3 and the console wars

With the Xbox 360 Xenos core running at 500MHz, and the PlayStation3’s RSX graphics core running at 550MHz, the non-techie press are calling the specs a win for Sony. Is this really the case, though?

Richard is adamant that the extra graphics speed on paper is more than made up for by the differing architecture of the Xenos. “That mere 10% clock speed that RSX has on Xenos is easily countered by the unified shader architecture that we’ve implemented.

“Rather than separate pixel and vertex pipelines, we’ve created a single unified pipeline that can do both. Providing developers throw instructions at our architecture in the right way, Xenos can run at 100% efficiency all the time, rather than having some pipeline instructions waiting for others. For comparison, most high-end PC chips run at 50-60% typical efficiency. The super cool point is that ‘in the right way’ just means ‘give us plenty of work to do’. The hardware manages itself.”

Okay, that is all well and good. Saying that Xenos with its unified shader architecture is more efficient than a non-unified shader architecture is fine, and it's fine to say that the 50MHz clock speed advantage that RSX has is overcome by Xenos' unified architecture, but that doesn't take into account the architectural advantages that RSX might have over Xenos, like more logic transistors and probably more functional units, probably resulting in more processing resources. Arguments can be made in favor of either GPU. I believe that neither clock speed nor unified vs non-unified will determine which GPU is better, but rather the inner workings of each functional block (ALU, pipeline, etc.) and the amount of processing resources (ALUs, pipelines) of each GPU. In other words, not general things like clock speed or unified vs non-unified (that's the layout), but the quality and quantity of the chip architecture itself. I know I didn't say that as eloquently as I would've liked, but hopefully my post can be understood :)
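
To put rough numbers on that, here's a back-of-envelope throughput comparison. The clock speeds are the ones from the article; the ALU counts, ops per clock, and efficiency figures are purely hypothetical placeholders, only there to show how the different terms trade off against each other.

```python
# Back-of-envelope shader throughput: units x ops/clock x clock x efficiency.
# Clocks are from the article; every other number is a made-up placeholder
# just to show the comparison hinges on more than the 10% clock difference.

def effective_gflops(alus, ops_per_clock, clock_mhz, efficiency):
    return alus * ops_per_clock * clock_mhz * 1e6 * efficiency / 1e9

# Hypothetical "unified" part: 500MHz, 48 ALUs, near-full utilisation claimed.
unified_style = effective_gflops(alus=48, ops_per_clock=2, clock_mhz=500, efficiency=0.95)

# Hypothetical "fixed" part: 550MHz, more raw units, but lower utilisation.
fixed_style = effective_gflops(alus=56, ops_per_clock=2, clock_mhz=550, efficiency=0.70)

print(f"unified-style part : {unified_style:6.1f} GFLOPS effective")
print(f"fixed-style part   : {fixed_style:6.1f} GFLOPS effective")
# Swap in different guesses for unit counts or efficiencies and the winner
# flips - which is exactly why clock speed or "unified vs not" alone decides nothing.
```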
 
I think developers should bite their tongues until they know more on both sides. However, it would appear that the R500 chip inside the Xbox 360 is a monster that aims for 100% efficiency. The design of the console itself stresses getting the most out of what they put into it. Now, if only the developers who haven't touched a final beta kit for the Xbox 360 would shut up about 3x the performance for PS3, then we could all just wait for the games.
 

gofreak

GAF's Bob Woodward
midnightguy said:
Saying that Xenos with its unified shader architecture is more efficient than a non-unified shader architecture

Again, depending on the workload. With a workload that maps well or exactly to the fixed architecture, there'd be little or no gain in efficiency. And while you may be gaining efficiency on a high level with other workloads, you're also probably losing on a low level - unified ALUs by their nature aren't going to be as optimised or efficient as dedicated units.

I'd characterise comparisons between Xenos and other chips as one of flexibility vs whatever rather than efficiency vs whatever. They're trading off efficiency at different levels.
 

KingV

Member
The more I hear about the X360, the more it reminds me of the things we heard about the GameCube before its release. Mainly that it doesn't have the raw power of the other consoles, but is well optimized so that the end result is far superior to what you'd expect based on raw specs alone. I'm no developer, but from what I've seen of the GameCube, they either low-balled their specs or did a great job of optimizing the machine to make it as efficient as possible. This worked pretty well for them, graphically speaking (commercial success aside), as the top-tier GameCube games are pretty much on par with top-tier Xbox games, at a much lower launch price point. Time will tell if less = more for the X360 as well.
 
KingV said:
The more I hear about the X360, the more it reminds me of the things we heard about the GameCube before its release. Mainly that it doesn't have the raw power of the other consoles, but is well optimized so that the end result is far superior to what you'd expect based on raw specs alone. I'm no developer, but from what I've seen of the GameCube, they either low-balled their specs or did a great job of optimizing the machine to make it as efficient as possible. This worked pretty well for them, graphically speaking (commercial success aside), as the top-tier GameCube games are pretty much on par with top-tier Xbox games, at a much lower launch price point. Time will tell if less = more for the X360 as well.

WTF on the bold. I think Sony's got most of the gaming populace in a debilitating mental headlock after E3. It's more a case of Cell > Xenon CPU and Xenos > RSX, and not much else (and those aren't that clear cut either, since integer perf of the Xenon CPU > Cell, etc...).

After both the X360 and PS3 ship, I think this board's gonna be a boring place, since the power differential is going to materialize nowhere near as dramatically as people have deluded themselves into believing. In fact, I doubt the power difference will even manifest concretely in visuals at all.
 

dorio

Banned
The super cool point is that 'in the right way' just means 'give us plenty of work to do'. The hardware manages itself."

This must have been what Deano, the PS3 Heavenly Sword developer, was referring to when he said that if you throw enough pixels and vertices at the 360, he can see a situation where it outperforms the PS3.
 