
DF: Leadbetter interviews 4A's Oles Shishkovtsov about current gen consoles + PC

d00d3n

Member
Spiritual sequel to STALKER pleaseee

Although I would prefer some kind of middle ground with strong story and level design (which we know that 4A excel at), while experimenting with the simulation aspects that made the STALKER games so memorable.
 

Seanspeed

Banned
See, I'm not guessing anything. I did not make up an arbitrary number or something along those lines. The quote from the developer is pretty straightforward. IMHO it leaves very little room for interpretation like "oh he certainly only meant draw calls, etc.". As we already said, it's a rough estimation and a broad generalization, yet I grant him the benefit of the doubt and say that he might be right. Simply because the rest of his statements made sense, he has no obvious incentive to lie at that point, and he is qualified enough to make an educated guess about how an optimized game engine may perform. What is your personal knowledge that allows you to outright dismiss his statement?
I'm not completely dismissing the statement as I've said a couple times now.

And you are guessing. You are trying to go into detail about what the person said (saying it's an 'average'), but your argument was never based on anything but an appeal to authority in the first place, not any personal knowledge. Without that personal knowledge, it is only possible for you to guess.

Also, if you're saying it leaves little room for interpretation, why are you allowed to interpret it as them saying it's an 'average'?

Truth is, it does leave a lot of room for interpretation. It is one short little comment that doesn't go into any detail and isn't part of a larger, more in-depth discussion that would put anything into context. You saying it has no room for interpretation is merely what is convenient for your argument, once again.

I know I'm being a little harsh here, and I'm sorry for that, but it's pretty clear what you're doing, after being backed into a corner when discussing it with somebody who actually is personally knowledgeable on the subject (Durante, not me! lol).
 

martino

Member
I think The Order 1886 looks absolutely gorgeous, especially for a game coming from a small studio that has only made handheld titles in the past. They shouldn't be mocked.

Yeah, you can subjectively do that, but you're narrowing the scope with conditions here, and in the end if the tech is not at the same level in-game... it's not at the same level.
 

MaLDo

Member
If the budget, the dev time, and the minds behind The Last of Us on PS3 had instead been used to create from scratch a hypothetical PC version of the game, targeting specs with raw performance similar to the console's hardware, the game would have turned out virtually identical, imo.

Ninja edit:

When you start to develop for a specific, complex machine with its own set of tools, you have to work harder just to get started, but you end up making better use of the machine than if you develop for a standard machine where part of the work is already done for you. You will need less time in the second scenario, and you will spend less money. But for the same reason you won't squeeze out as much juice.
 
Although I would prefer some kind of middle ground with strong story and level design (which we know that 4A excel at), while experimenting with the simulation aspects that made the STALKER games so memorable.
Does 4A have many people who worked on STALKER, and specifically on the simulation aspects? Let me check.

EDIT: STALKER lead designer Andrew Prohorov is creative director on the Metro games. Lead designer Yuriy Negrobov isn't; he is part of West Games working on that Areal thing.

Level design is the key question, since STALKER was an open-world game and the Metro games are more linear. So, let's check where STALKER's level design team ended up: Metro games (Andrey Tkachenko as art director now, Alexander Pavlenko as lead env artist, Sergey Karmalsky as lead env artist), Acony Games (Evgeniy Zaitsev), NewFX Games (Suprun Bogdan), Crytek (Yuriy Petrovskiy on Crysis 2).

The lead programmer is the same person as in this interview, Oles Shishkovtsov, who's now part of 4A. The other programmers went to Vostok Games (Dmitriy Iassenev, Andrew Kolomiets, Ruslan Didenko, Alexander Plichko), 2033 (Konstantin Slipchenko), Samsung (Vladimir Tunduk), Logicking (Yuriy Dobronravin), Ubisoft (Viktor Reutsky), and Gaijin Entertainment (Roman Marchenko).

On the game design front, some are at Vostok Games (Alexey Sityanov), some worked on 2033 (Vyacheslav Aristov, Denis Volvach) and Last Light (Ivan Veretiannikov, Andrey Verpahovskiy), and others are at West Games (Peter Dushynsky) and Gestalt Games (Roman Shyshkin).

So yeah, it's possible that they could do a STALKER-like game, more on the level design side than the programming side, although they still have a heavyweight like Oles, who would just need to want to make a more open-world game.
 

jgf

Member
I'm not completely dismissing the statement as I've said a couple times now.
Well, you never mentioned it when talking to me. So you say that he might be right? That's all I'm asking for.

And you are guessing. You are trying to go into detail about what the person said (saying it's an 'average'), but your argument was never based on anything but an appeal to authority in the first place, not any personal knowledge. Without that personal knowledge, it is only possible for you to guess.

Also, if you're saying it leaves little room for interpretation, why are you allowed to interpret it as them saying it's an 'average'?

Truth is, it does leave a lot of room for interpretation. It is one short little comment that doesn't go into any detail and isn't part of a larger, more in-depth discussion that would put anything into context. You saying it has no room for interpretation is merely what is convenient for your argument, once again.
I really don't know what your problem with the word average is in this case.

Again, I say there is little room for interpretation, because I think it's the only way that it could be meant. He gives a rough estimate of the performance gain he expects. Of course some things (inside his game/engine) can't be made faster, while others may get 10x faster. On average, in his use case -- a game engine -- he expects it to be around 2x.

Just by sheer logic you can deduce that if someone says a certain system gets 2x faster, it automatically means that on average, when all parts of the system are combined, it gets around 2x faster. Some parts improved more, others less.
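To make that arithmetic concrete, here's a tiny back-of-the-envelope sketch (the breakdown and the numbers are completely made up by me, not from the interview): if different parts of a frame speed up by different factors, the overall figure is just the Amdahl-style combination of them.

// Back-of-the-envelope sketch with made-up numbers (not from the interview):
// how per-part speedups combine into one overall "Nx faster" figure for a frame.
#include <cstdio>

int main() {
    struct Part { const char* name; double timeShare; double speedup; };
    // Hypothetical breakdown of where frame time goes and how much each part
    // improves after platform-specific optimization.
    const Part parts[] = {
        { "draw-call submission", 0.30, 4.0 },   // big win from a thin API
        { "game/AI logic",        0.30, 1.3 },   // barely improves
        { "rendering/shading",    0.40, 2.0 },   // moderate win
    };

    double newTime = 0.0;
    for (const Part& p : parts)
        newTime += p.timeShare / p.speedup;      // each part's share of the frame shrinks by its own factor

    std::printf("overall speedup: %.2fx\n", 1.0 / newTime);   // ~1.98x with these numbers
    return 0;
}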

I know I'm being a little harsh here, and I'm sorry for that, but it's pretty clear what you're doing, after being backed into a corner when discussing it with somebody who actually is personally knowledgeable on the subject (Durante, not me! lol).

I think it's nice of you that you want to defend Durante, but he can pretty much take care of himself. Also, I never attacked him or said he doesn't know what he's talking about. So I don't get where you're coming from.
 
RAM is irrelevant.
Why is the RAM irrelevant?
Because RAM doesn't affect performance and because many games do not even implement the special streaming designed for consoles on PC
I must press you on this. How can system memory not matter?

This, surely, is a good example of the point John Carmack is making. Both 360 and PS3 included a cheap amount of memory (pitiful, really). Yet developer focus and gradual optimisation produced a very impressive solution. How else is GTA V possible on a console that has only 256 MB of system memory?

[image: GTA V screenshot meme]


Can anyone tell me what the minimum system requirements for GTA V on PC are? I'm willing to go out on a limb and say it requires at least 2 GB to perform on a similar level.
 

Henrar

Member
Because RAM doesn't affect performance and because many games do not even implement the special streaming designed for consoles on PC, like for example Crysis 2.
Additionally, the PC version has a different, bigger framebuffer layout for higher HDR precision.
Depends on the game. Some don't perform better with faster RAM, some scale well (like F1 2012)
http://www.tomshardware.com/reviews/memory-bandwidth-latency-gaming,3409-4.html

If you have a multi-GPU setup, faster RAM certainly makes a difference:
http://www.anandtech.com/show/7364/memory-scaling-on-haswell/8
 

KKRT00

Member
Depends on the game. Some don't perform better with faster RAM, some scale well (like F1 2012)
http://www.tomshardware.com/reviews/memory-bandwidth-latency-gaming,3409-4.html

If you have a multi-GPU setup, faster RAM certainly makes a difference:
http://www.anandtech.com/show/7364/memory-scaling-on-haswell/8

Ok, but I meant at 1080p 30Hz and standard settings. Of course, when you're going for bleeding-edge performance you will see the difference between fast and slow RAM.
Same goes if you go with high texture and framebuffer settings in game; the RAM configuration can bottleneck your performance, causing random slowdowns or stuttering.

But at 30Hz on normal settings, the RAM configuration is not a variable in performance comparisons.

---
Can anyone tell me what the minimum system requirements for GTA V on PC are? I'm willing to go out on a limb and say it requires at least 2 GB to perform on a similar level.
Very aggressive streaming. You need to consider that many games do not have additional streaming systems implemented in their PC versions, because there is no strict RAM limitation, so direct comparisons in terms of RAM configuration are not viable.
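Just to illustrate what I mean by aggressive streaming, here's a rough sketch I made up (not code from any actual engine): keep resident assets under a hard byte budget and evict the least-recently-used ones whenever something new has to come in.

// Rough sketch of a budget-bounded streaming cache (made up, not any real engine):
// assets are loaded on demand and the least-recently-used ones are evicted
// whenever a hard memory budget (think a console's fixed RAM) would be exceeded.
#include <cstddef>
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

class StreamingCache {
public:
    explicit StreamingCache(std::size_t budgetBytes) : budget_(budgetBytes) {}

    // Returns the asset, loading it if needed and evicting old assets until it fits.
    const std::vector<std::uint8_t>& request(const std::string& name) {
        auto it = index_.find(name);
        if (it != index_.end()) {                          // already resident: mark as recently used
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->data;
        }
        Entry e{ name, loadFromDisk(name) };
        while (!lru_.empty() && used_ + e.data.size() > budget_)
            evictOldest();
        used_ += e.data.size();
        lru_.push_front(std::move(e));
        index_[name] = lru_.begin();
        return lru_.front().data;
    }

private:
    struct Entry { std::string name; std::vector<std::uint8_t> data; };

    static std::vector<std::uint8_t> loadFromDisk(const std::string&) {
        return std::vector<std::uint8_t>(1 << 20);         // placeholder: pretend every asset is 1 MiB
    }

    void evictOldest() {
        used_ -= lru_.back().data.size();
        index_.erase(lru_.back().name);
        lru_.pop_back();
    }

    std::size_t budget_;
    std::size_t used_ = 0;
    std::list<Entry> lru_;                                  // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};

int main() {
    StreamingCache cache(256u << 20);                       // pretend only 256 MiB is available for assets
    cache.request("city_block_07");                         // hypothetical asset names
    cache.request("pedestrian_pack");
    return 0;
}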
 
I must press you on this. How can system memory not matter?

This, surely, is a good example of the point John Carmack is making. Both 360 and PS3 included a cheap amount of memory (pitiful really). Yet developer focus and gradual optimisation produced a very impressive solution. How else is GTA V possible on a console that has only 256 MB of system memory.

Through the streaming of assets? As I said earlier, RAM comparisons are pointless since we're comparing a general purpose machine with a dedicated games console.
 

jgf

Member
Very aggressive streaming. You need to consider that many games do not have additional streaming systems implemented in their PC versions, because there is no strict RAM limitation, so direct comparisons in terms of RAM configuration are not viable.

That means that the console essentially needs to do additional work (streaming) in order to compensate for its lesser system memory. There is no technical reason why a PC can't also use this aggressive streaming technology. It's probably not implemented because there is no need for it, as PCs in general have more RAM available. But at that point you're no longer comparing similarly specced systems.

In many cases you can trade space (memory usage) for time (cpu usage) when you're trying to optimize your algorithms. E.g. adding additional hash maps for a convenient quick lookup is one of those easy tricks.
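Here's a trivial illustration of that trade (my own toy example, nothing engine-specific): the same lookup done with no extra memory versus with an extra hash map kept purely so the lookup becomes O(1).

// Trivial illustration of trading memory for CPU time: an O(n) scan versus an
// extra hash map kept purely for fast lookups. The index costs RAM and assumes
// the entity list isn't resized afterwards (toy example).
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Entity { std::uint32_t id; std::string name; };

// Low-memory version: no extra data structure, linear search per lookup.
const Entity* findLinear(const std::vector<Entity>& all, std::uint32_t id) {
    for (const Entity& e : all)
        if (e.id == id) return &e;
    return nullptr;
}

// Space-for-time version: spend memory on an index once, look up in O(1) afterwards.
class EntityIndex {
public:
    explicit EntityIndex(const std::vector<Entity>& all) {
        for (const Entity& e : all) byId_[e.id] = &e;       // the extra memory is spent here
    }
    const Entity* find(std::uint32_t id) const {
        auto it = byId_.find(id);
        return it == byId_.end() ? nullptr : it->second;
    }
private:
    std::unordered_map<std::uint32_t, const Entity*> byId_;
};

int main() {
    std::vector<Entity> all = { {1, "door"}, {2, "crate"}, {3, "guard"} };
    EntityIndex index(all);
    return index.find(2) == findLinear(all, 2) ? 0 : 1;     // both find the same entity
}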

Through the streaming of assets? As I said earlier, RAM comparisons are pointless since we're comparing a general purpose machine with a dedicated games console.

So you're basically saying that a dedicated system needs lower specs to achieve the same thing as a general-purpose machine? Which in turn means that similarly specced systems will have a performance difference in practice.
 
So you're basically saying that a dedicated system needs lower specs to achieve the same thing as a general-purpose machine? Which in turn means that similarly specced systems will have a performance difference in practice.

In a sense. To be specific, in the same sense that a motorcycle needs only two wheels but a car needs four, because the first one is designed to carry two people and the second one four or five.
 

jgf

Member
In a sense. To be specific, in the same sense that a motorcycle needs only two wheels but a car needs four, because the first one is designed to carry two people and the second one four or five.

Going with your metaphor I would argue that optimization for a fixed platform (your motorcycle) is like kicking off that additional 2 people you would need to carry along in your general purpose vehicle ;)
 

cheezcake

Member
Can anyone tell me what the minimum system requirements for GTA V on PC are? I'm willing to go out on a limb and say it requires at least 2 GB to perform on a similar level.

Oh cool we can pull numbers out of our arses now?

The Last of Us would require at least 50 trillion naughtyflops of system power to run on PC
 

Seanspeed

Banned
Well, you never mentioned it when talking to me. So you say that he might be right? That's all I'm asking for.


I really don't know what your problem with the word average is in this case.

Again, I say there is little room for interpretation, because I think it's the only way that it could be meant. He gives a rough estimate of the performance gain he expects. Of course some things (inside his game/engine) can't be made faster, while others may get 10x faster. On average, in his use case -- a game engine -- he expects it to be around 2x.

Just by sheer logic you can deduce that if someone says a certain system gets 2x faster, it automatically means that on average, when all parts of the system are combined, it gets around 2x faster. Some parts improved more, others less.



I think it's nice of you that you want to defend Durante, but he can pretty much take care of himself. Also, I never attacked him or said he doesn't know what he's talking about. So I don't get where you're coming from.
I'm not trying to defend Durante. :/ Nor did I accuse you of attacking him. Again - :/

I just don't think you have a very good argument here. You say you've struck on the 'only' way to interpret it, which really just means 'this is the way I'm interpreting it and I am unwilling to accept any other explanation'.

As for whether the comment is right, I've said my part on that before. I can't say for sure, but I know that I'm not just going to accept anything someone tells me if the evidence doesn't seem to support it. Seems like it's a theoretical thing, but maybe never really reached due to practical reasons (like where Carmack talks about how long it would take to truly optimize code and it not being worth it)? Or, like some others have said, perhaps it's only talking about a few specific parts of the process, but not necessarily a total end result of 2x the performance overall.
 
Going with your metaphor I would argue that optimization for a fixed platform (your motorcycle) is like kicking off that additional 2 people you would need to carry along in your general purpose vehicle ;)

I understand the concept, but I still would need to see actual results. Comparing console and PC graphics hardware is pretty straightforward, both are designed with gaming in mind. The results we've seen so far couldn't be more clear: the PS4's 1.84 teraflop gpu performs almost exactly like a PC 1.84 teraflop gpu. Same for the Xbox One. This basically proves what was already shown with Mantle, that the reduction of overhead through 'coding to the metal' is almost exclusively about the CPU and it doesn't affect gpu performance significantly under normal conditions. We've seen Mantle provide big benefits in cases where a really weak cpu is paired with a strong gpu or on super high end systems with multiple graphics cards.

The benefit of that reduced overhead can already be seen on consoles, since it's what allows their really weak cpus to at least keep pace with the more powerful Intel and AMD offerings. But that's all. A 1.84 teraflop gpu is not going to perform the same as a 3.7 teraflop one. Not going to happen. Maybe HSA can provide some benefits; we'll see if that becomes a thing.
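A toy model of what I mean by the overhead being CPU-side (the renderer and the per-call cost here are hypothetical, just to show the shape of it): every individual draw call costs a fixed slice of CPU time in the driver. That per-call cost is what a thinner API attacks on console, and it's also why batching the same work into fewer calls helps, while the GPU's shading workload stays the same either way.

// Toy model (hypothetical renderer, not a real API): why API overhead is a CPU cost.
// Each individual draw call burns a fixed slice of CPU time in the driver; batching
// the same work into one instanced call removes that cost without changing GPU work.
#include <cstdio>

struct ToyDriver {
    double cpuMicrosPerCall = 25.0;          // made-up per-call validation/submission cost
    double cpuSpentMicros = 0.0;

    void draw(int instances) {               // one call, any number of instances
        cpuSpentMicros += cpuMicrosPerCall;  // CPU cost is per call, not per instance
        (void)instances;                     // GPU-side shading would scale with instances either way
    }
};

int main() {
    const int objects = 10000;

    ToyDriver naive;
    for (int i = 0; i < objects; ++i) naive.draw(1);    // 10,000 tiny draw calls

    ToyDriver instanced;
    instanced.draw(objects);                            // one instanced call for the same objects

    std::printf("naive:     %.1f ms of CPU in the driver\n", naive.cpuSpentMicros / 1000.0);
    std::printf("instanced: %.3f ms of CPU in the driver\n", instanced.cpuSpentMicros / 1000.0);
    return 0;
}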
 

martino

Member
I understand the concept, but I still would need to see actual results. Comparing console and PC graphics hardware is pretty straightforward, both are designed with gaming in mind. The results we've seen so far couldn't be more clear: the PS4's 1.84 teraflop gpu performs almost exactly like a PC 1.84 teraflop gpu. Same for the Xbox One. This basically proves what was already shown with Mantle, that the reduction of overhead through 'coding to the metal' is almost exclusively about the CPU and it doesn't affect gpu performance significantly under normal conditions. We've seen Mantle provide big benefits in cases where a really weak cpu is paired with a strong gpu or on super high end systems with multiple graphics cards.

Thanks for the perfectly understandable way of explaining it.
 

ethomaz

Banned
What? Completely different? I doubt it, and I don't know what your constant talk about blurriness means...
Means the game is a bit blurry... it was the first thing I noticed when I played it last year.

Different in presentation... sharper, additional details, better effects... everything else will be the same... well, maybe the gameplay gets a little better with the higher framerate on PC.
 

KKRT00

Member
That means that the console essentially needs to do additional work (streaming) in order to compensate for its lesser system memory. There is no technical reason why a PC can't also use this aggressive streaming technology. It's probably not implemented because there is no need for it, as PCs in general have more RAM available. But at that point you're no longer comparing similarly specced systems.
Sure, the reason for that is logical: it's a waste of time for devs.
And it can still be similar specs; as I said, RAM doesn't affect performance in any scenario we were talking about. And you won't ever compare the exact same specs, it's just not physically possible.
 

c0de

Member
Means the game is a bit blurry... it was the first thing I noticed when I played it last year.

Different in presentation... sharper, additional details, better effects... everything else will be the same... well, maybe the gameplay gets a little better with the higher framerate on PC.

Are you talking about the game not running at 1080p, or do you think they used blurriness intentionally? I really don't know what you mean.
 

ethomaz

Banned
Are you talking about the game not running at 1080p, or do you think they used blurriness intentionally? I really don't know what you mean.
PC pics released are sharper... so I guess it is resolution or AA tech (or a combination)... not intentional.

This is one of the PC pics they released:

 

c0de

Member
PC pics released are sharper... so I guess it is resolution or AA tech (or a combination)... not intentional.

Sure, but you started by saying that the game itself looked blurry and I wanted to know why. It could be that you think every game not running at 1080p is blurry, as your preference this gen is "1080p or won't buy".
Of course PC shots look sharper...
 

ethomaz

Banned
Sure, but you started by saying that the game itself looked blurry and I wanted to know why. It could be that you think every game not running at 1080p is blurry, as your preference this gen is "1080p or won't buy".
Of course PC shots look sharper...
The game is blurry because that's how it shows on the TV... the technical reason I don't know, but I can speculate it is resolution and/or AA... and it is not only blurry compared with PC games, it is also blurry compared with other games like Forza, for example (the AA is way better in Ryse than in Forza, btw).

And yes... I won't buy sub-1080p games this gen because it is unacceptable to me when the industry is already moving to 4k.
 

d00d3n

Member
Does 4A have many people who worked on STALKER, and specifically on the simulation aspects? Let me check.

*SNIP*

So yeah, it's possible that they could do a STALKER-like game, more on the level design side than the programming side, although they still have a heavyweight like Oles, who would just need to want to make a more open-world game.

Wow, that is some good research. And yeah, the level design track record is excellent, but I am not sure what the economic implications of doing a STALKER-like game would be for 4A. They seem to have gained momentum by streamlining and simplifying going from 2033 to Last Light, but I guess STALKER sold a lot of copies as well.
 

jgf

Member
I understand the concept, but I still would need to see actual results. Comparing console and PC graphics hardware is pretty straightforward, both are designed with gaming in mind. The results we've seen so far couldn't be more clear: the PS4's 1.84 teraflop gpu performs almost exactly like a PC 1.84 teraflop gpu. Same for the Xbox One. This basically proves what was already shown with Mantle, that the reduction of overhead through 'coding to the metal' is almost exclusively about the CPU and it doesn't affect gpu performance significantly under normal conditions. We've seen Mantle provide big benefits in cases where a really weak cpu is paired with a strong gpu or on super high end systems with multiple graphics cards.

The benefit of that reduced overhead can be already seen on consoles since it's what allows their really weak cpus to at least keep pace with the more powerful Intel and AMD offerings. But that's all. An 1.84 teraflop gpu is not going to perform the same as a 3.7 teraflop one.Not going to happen. Maybe HSA can provide some benefits, we'll see if that becomes a thing.

I completely agree with you that the early titles on current-gen systems show no indication of a 2x performance gain. New APIs like Mantle seem to open up PCs for a higher optimization potential, especially when trying to reduce bottlenecks in graphics performance due to a slow CPU. Is 2x still achievable on a fixed platform in this scenario? I don't know, but some engine developers say so. We may see indications of whether it's likely or not when we compare multiplatform titles in a few years, but not now. You obviously can't make a GPU that's 80% utilized suddenly run at 160%, but you may find tricks to offload computation to the CPU, utilize HSA, adapt algorithms to the specifics of the hardware, etc. I won't rule that out for now. Is it less likely to be achievable than in the previous generation? Due to PC-like hardware and new low-level APIs (Mantle), it probably is less likely. But I don't have enough experience in that field to judge, and I refuse to make predictions about the optimization potential of games releasing in 4 years by looking at benchmarks of games today. I think that's a reasonable point of view.

That's why I would like to focus on systems where we actually have that data, or at least can produce it: the old-gen systems that games have already been optimized for. Why don't we take games from the people who actually said those quotes? Like the 360 versions of Rage from Carmack and Last Light from Shishkovtsov. Then find a similarly specced PC with about the same CPU (kind of tricky with a PowerPC, but it should at least be comparable performance-wise), the same amount of RAM and a similar GPU, and see if it runs about 2x as well on the console. If the outcome is that there is no significant difference, then I'm more than willing to admit that the quote we are talking about is wrong and that I have been wrong too.

So I just came up with another idea. When we want to somehow measure the performance gain due to optimization on a fixed platform, how about comparing the performance of early-gen launch-window titles to late-gen showpieces? I would argue that early-gen titles pretty much show what the system is capable of doing without specialized optimization, whereas late-gen titles probably use most of the tricks available. So we should compare a barely optimized early-gen multiplatform title to a late-gen, fully optimized exclusive title. In my opinion the difference between them should give an indication of what performance gains are possible due to said optimization.
 

jgf

Member
I just don't think you have a very good argument here. You say you've struck on the 'only' way to interpret it, which really just means 'this is the way I'm interpreting it and I am unwilling to accept any other explanation'.

Of course I may be wrong, but I don't see any other way to interpret it. If you disagree, care to explain what, in your opinion, he meant by saying he "can get 2x performance gain over the equivalent PC spec"?
 

Metfanant

Member
And yes... I won't buy sub-1080p games this gen because it is unacceptable to me when the industry is already moving to 4k.
What industry exactly is making the move to 4k?...because it sure as hell isn't the content providers...

What is with this Leadbetter guy? I'd like to see 4A get The Last of Us running at 60fps with the same time constraints.

Why is it always a witch hunt with leadbetter?...yes I see how his comment could be interpreted as a jab...but did he say anything that is incorrect regarding Naughty Dog and or TLoU:R?
 

Hellshy.

Member
What industry exactly is making the move to 4k?...because it sure as hell isn't the content providers...



Why is it always a witch hunt with leadbetter?...yes I see how his comment could be interpreted as a jab...but did he say anything that is incorrect regarding Naughty Dog and or TLoU:R?

Never said he said anything incorrect, but it was a jab, and one that makes little sense. Comparing a multiplatform game with a PS3 exclusive that was engineered with one console in mind would appear to be naive, yet we know Leadbetter knows better than that. With all the proprietary tech designed with the PS3 in mind, he should be praising ND for getting the game running so well on PS4, not belittling their accomplishment.
 

Kezen

Banned
PC pics released are sharper... so I guess it is resolution or AA tech (or a combination)... not intentional.

This is one of the PC pics they released:

This looks much more detailed than the Xbox One build, and we can expect much more from the final PC build.
There will be a very significant objective difference between Ryse running on Xbox One and Ryse at very high settings on a high-end PC.

Can't wait.
 

infovore

Neo Member
I think we are all well aware of the hardware differences, but we should keep in mind that all the numbers given are theoretical max values which probably can't be achieved all the time.
For example, we know currently that the OSes of the PS4 and Xbone (afair) take a good amount of RAM (3GB, if I am not wrong) and two CPU cores. Then you have to keep in mind that both systems run multi-tasking OSes, so the scheduler is constantly stealing CPU time from your game (of course with priorities, but still).
The comparison between both systems with raw numbers is, in my opinion, of no real use this gen, as there are side effects on both systems whose performance hits we are uncertain of.

Actually, I'd sort of assumed that both Sony and Microsoft built their respective OS's so that they keep their worker threads restricted to the two cores that have been reserved for the OS, and do only the absolute minimum amount of work on the other six cores. This would be some interrupt handlers (tlb, timers, etc) but possibly very little else. For example, if games are expected to do cooperative multi-tasking among their own threads, then there would be no need for a pre-emptive scheduler with its attendant overhead to be running on the six game cores.
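For what it's worth, here's roughly what confining a worker thread to specific cores looks like using the plain Linux/glibc affinity API (the console SDKs obviously have their own mechanisms we never get to see, and cores 6 and 7 as the "reserved" pair is just my assumption for the example):

// Illustration only, using the Linux/glibc affinity API (console SDKs differ);
// cores 6-7 as the "reserved" pair is an assumption for the sketch.
#include <cstdio>
#include <initializer_list>
#include <pthread.h>
#include <sched.h>
#include <thread>

static void pinSelfToCores(std::initializer_list<int> cores) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c : cores) CPU_SET(c, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);   // confine the calling thread to the given cores
}

int main() {
    std::thread systemWorker([] {
        pinSelfToCores({6, 7});          // hypothetical reserved cores; the game keeps 0-5
        std::puts("system worker confined to the reserved cores");
        // ... background OS-style work would run here, never scheduled on the game cores ...
    });
    systemWorker.join();
    return 0;
}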
 

EGM1966

Member
What industry exactly is making the move to 4k?...because it sure as hell isn't the content providers...



Why is it always a witch hunt with leadbetter?...yes I see how his comment could be interpreted as a jab...but did he say anything that is incorrect regarding Naughty Dog and or TLoU:R?
Leadbetter would avoid the witch hunt stuff if he stopped making stupid comments like the TLOU/ND one.

It's a silly, cheap comment that seeks to produce an unnecessary sense of competition between ND and 4K, and by extension over Sony's ability on their own hardware.

The bulk of the interview is good and the content is good, but that was a poor approach and should have been rephrased or edited out.

There is no point trying to compare porting a code base so heavily tailored to the PS3 over to PS4 with updating a multi-platform code base built around a PC core to PC and the PS4/XB1, and drawing some sort of "who did the better job" conclusion.

He should simply have commended 4K for their efforts and asked what they felt underpinned such good work.

Cheap comments draw replies, so it's hardly a witch hunt; although personally I'd simply choose to ignore the comment or simply point out why it's cheap and unnecessary.
 

tuna_love

Banned
Leadbetter would avoid the witch hunt stuff if he stopped making stupid comments like the TLOU/ND one.

It's a silly, cheap comment that seeks to produce an unnecessary sense of competition between ND and 4K, and by extension over Sony's ability on their own hardware.

The bulk of the interview is good and the content is good, but that was a poor approach and should have been rephrased or edited out.

There is no point trying to compare porting a code base so heavily tailored to the PS3 over to PS4 with updating a multi-platform code base built around a PC core to PC and the PS4/XB1, and drawing some sort of "who did the better job" conclusion.

He should simply have commended 4K for their efforts and asked what they felt underpinned such good work.

Cheap comments draw replies, so it's hardly a witch hunt; although personally I'd simply choose to ignore the comment or simply point out why it's cheap and unnecessary.
4a
 

KKRT00

Member
@EGM1966
We've already discussed it, and it's 4A, not 4K.
Also, Metro 2033 and LL were not very multithreaded, and running them at almost 100Hz on low-frequency cores, which was required for a totally locked 60fps, is a great achievement.

---
What is with this Leadbetter guy? I'd like to see 4A get The Last of Us running at 60fps with the same time constraints.
I would like to see ND get both Metro games running at a locked 60fps with the same time constraints as 4A.
See how stupid this point is?
 

Kinthalis

Banned
So I just came up with another idea. When we want to somehow measure the performance gain due to optimization on a fixed platform, how about comparing the performance of early-gen launch-window titles to late-gen showpieces? I would argue that early-gen titles pretty much show what the system is capable of doing without specialized optimization, whereas late-gen titles probably use most of the tricks available. So we should compare a barely optimized early-gen multiplatform title to a late-gen, fully optimized exclusive title. In my opinion the difference between them should give an indication of what performance gains are possible due to said optimization.

This would also be a troublesome comparison because of the state of 3D rendering at the time and the state of the same now.

Back then brand new hardware and software technologies were just starting to be implemented all of which affected how modern 3D rasterization was being done. From asset development to rendering to lighting, etc, etc.

The amount of talent that knew how all of that could come together was virtually nil. It's WHY games look so drastically different at the start of the last gen and at the end. It's why something like an 8600GT with 1/2 the shader power of an Xbox 360 could play games at a better level of IQ than the Xbox when the gen started, but couldn't keep up once more modern engines relying on lots more shading performance started being developed.

This is NOT the case today by and large. Almost anyone in game development is likely to have a firm grasp of what modern hardware can do. That doesn't mean things like better use of compute on the GPU won't bring about more complex interactions in physics and lighting and who knows what else, but for the most part, developers are already knocking on the door of what can be done on the current hardware, whereas they weren't even close at the start of last gen. Not only that, but any improvements leveraging compute power will also benefit PC hardware; this isn't just a console thing.

But following your original train of thought, if we could compare similar PC GPU hardware to an Xbox 360 and show that the xbox 360 hardware outperforms it by anything close to 2X, then I would think you would have a point.

Except these comparisons have been done to death and they have repeatedly shown that that is simply NOT the case. Ignoring CPU overhead the games perform similarly.

Everyone always brings up optimization on consoles as this thing that magically makes games perform several times better on these platforms, except that the evidence suggests no such thing. And I'll take evidence over tweets and comments from anyone, any day of the week.
 

UnrealEck

Member
Can anyone tell me what the minimum system requirements for GTA V on PC are? I'm willing to go out on a limb and say it requires at least 2 GB to perform on a similar level.

The game as it is on the old consoles (and will be the same for the remaster on new consoles) will still use up the same amount of memory.
System requirements for memory take into account Windows services and leave headroom for other programs.

So yes, of course to play GTAV (exactly as it looks on last gen) on PC you would need more memory than is in the PS3 or X360. Because you will be running more services alongside it than would be running on either of those consoles.
New consoles are a bit different. Their OSes will have a lot more services, so a larger chunk of memory is needed. But of course they have more memory anyway, similar to how a PC will generally have more memory than a PS3 or X360.
This is why I was a bit puzzled about your comment.
 

Oemenia

Banned
A question to Durante, if you don't mind. What do you think of the current state of threading on multi-core CPUs? Is it true that it only goes as far as hiving off some tasks such as AI and physics, with the bulk all being done on one core?

Also do you think it will change with the more common architecture in the current-gen consoles?
 

mitchman

Gold Member
What industry exactly is making the move to 4k?...because it sure as hell isn't the content providers...

Well, Netflix has offered 4k versions of some series for a while, plus documentaries and some movies. Amazon Prime is starting up in October with 4k. A few other popular streaming services will also offer 4k soon. I expect Apple to follow suit with the next AppleTV and iTV.
As with all new display tech, content takes a while to catch up but it's getting better.
 

c0de

Member
Actually, I'd sort of assumed that both Sony and Microsoft built their respective OS's so that they keep their worker threads restricted to the two cores that have been reserved for the OS, and do only the absolute minimum amount of work on the other six cores. This would be some interrupt handlers (tlb, timers, etc) but possibly very little else. For example, if games are expected to do cooperative multi-tasking among their own threads, then there would be no need for a pre-emptive scheduler with its attendant overhead to be running on the six game cores.

I guess we can assume, but we still don't know exactly (and probably never will), and you can't take it out of scope when talking about theoretical versus real-world performance. It's always in the background when people again feel the need to post the "hard facts".
 

Hellshy.

Member
Why would that matter? He's just stating a fact. I don't get why people are so upset about that comment

I am not upset. I don't get upset over what gaming journalists say about people I don't know. I just had a thought. My thought was that he made a jab at ND that doesn't make much sense, considering one game was created with tech designed to scale to multiple platforms and the other game was designed with one console in mind that has a notoriously difficult architecture. This man is supposed to know his tech, so why would he choose ND and The Last of Us of all games to take a jab at?
 

ethomaz

Banned
What industry exactly is making the move to 4k?...because it sure as hell isn't the content providers...
All big content providers already have 4k or are moving to it before the end of the year...

The biggest issue is not content but 4k HDTVs, which are still expensive, but things will be better with the 2015 lineup.
 

Leb

Member
I am not upset. I don't get upset over what gaming journalists say about people I don't know. I just had a thought. My thought was that he made a jab at ND that doesn't make much sense, considering one game was created with tech designed to scale to multiple platforms and the other game was designed with one console in mind that has a notoriously difficult architecture. This man is supposed to know his tech, so why would he choose ND and The Last of Us of all games to take a jab at?

I have this totally crazy theory why he picked TLOU:R -- because it was either that or TR:DE and when you're comparing the technical brilliance of 4A to that of their closest competitor, well, nobody in their right mind is going to pick Crystal Dynamics as that competitor.
 

jgf

Member
This would also be a troublesome comparison because of the state of 3D rendering at the time and the state of the same now.

Back then brand new hardware and software technologies were just starting to be implemented all of which affected how modern 3D rasterization was being done. From asset development to rendering to lighting, etc, etc.

The amount of talent that knew how all of that could come together was virtually nil. It's WHY games look so drastically different at the start of the last gen and at the end. It's why something like an 8600GT with 1/2 the shader power of an Xbox 360 could play games at a better level of IQ than the Xbox when the gen started, but couldn't keep up once more modern engines relying on lots more shading performance started being developed.

That sounds like a fair point as to why simply comparing early gen to late gen does not work. That basically leaves us with comparing late-gen multiplatform titles.

This is NOT the case today by and large. Almost anyone in game development is likely to have a firm grasp of what modern hardware can do. That doesn't mean things like better use of compute on the GPU won't bring about more complex interactions in physics and lighting and who knows what else, but for the most part, developers are already knocking on the door of what can be done on the current hardware, whereas they weren't even close at the start of last gen. Not only that, but any improvements leveraging compute power will also benefit PC hardware; this isn't just a console thing.

But following your original train of thought, if we could compare similar PC GPU hardware to an Xbox 360 and show that the xbox 360 hardware outperforms it by anything close to 2X, then I would think you would have a point.

Except these comparisons have been done to death and they have repeatedly shown that that is simply NOT the case. Ignoring CPU overhead the games perform similarly.

Everyone always brings up optimization on consoles as this thing that magically makes games perform several times better on these platforms, except that the evidence suggests no such thing. And I'll take evidence over tweets and comments from anyone, any day of the week.

First of all, I would argue that putting a 360-like GPU into a modern PC is like strapping an old Volkswagen motor to the chassis of an F1 car and then comparing track times with the original car. Of course the driver of the F1 car can pretty much put the pedal to the metal without the need for some "optimized" driving skills. But then you say games are not CPU bound, so the only performance-relevant factor is the GPU (in that case the motor). Fair enough. I might argue that this is the case for modern CPUs that are pretty capable, but is this also true for CPUs as slow as the 360's XCPU?

So as I wanted to compare games from the people whose quotes we are discussing, I searched for a Rage benchmark.

I found this for PC: http://www.tomshardware.com/reviews/rage-pc-performance,3057-5.html

And that one for 360: http://www.eurogamer.net/articles/digitalfoundry-rage-face-off

I hope those are ok to use and not outdated in any way. So I assume that the lowest possible PC setting is somewhat comparable to 360 settings. Still we have 1280x1024 on lowest PC vs 1280x720 on 360.

Then I tried to find out what the equivalent GPU for the 360 is. Wikipedia says it's a Radeon X1800: http://en.wikipedia.org/wiki/Xenos_(graphics_chip)

I went to see benchmark results for a X1800 card and found this: http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+X1800+GTO

So around 140 G3D score. I hope that score is meaningful in any way.

Then I went to check said score for the graphics cards used in the PC benchmark, a Radeon HD 6450 and a GeForce GT 430. Those clock in at around 290 and 650 on the G3D scoreboard.
http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+6450
http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GT+430

So to me they seem quite a bit more powerful than the 360 GPU.

The CPU used for the benchmark was an overclocked i5-2600k Sandy Bridge with 4GB of system memory. So that's not what I would call similar specs to the 360 in any way.

So even after running Rage on a superior CPU and GPU and with more RAM, the game settles at roughly 33fps for the HD 6450 and 55fps for the GT 430. On 360 it runs at 60fps. Granted, at a lower resolution and with drops to 640x720, but the system as a whole also seems significantly less powerful than the slowest PC used in the benchmark.

So coming from that comparison I don't see how I should draw the conclusion that a similarly powered PC will run Rage with equal performance. But maybe I'm missing something here.
 

Leb

Member
So as I wanted to compare games from the people whose quotes we are discussing, I searched for a Rage benchmark.

I found this for PC: http://www.tomshardware.com/reviews/rage-pc-performance,3057-5.html

And that one for 360: http://www.eurogamer.net/articles/digitalfoundry-rage-face-off

I hope those are ok to use and not outdated in any way. So I assume that the lowest possible PC setting is somewhat comparable to 360 settings. Still we have 1280x1024 on lowest PC vs 1280x720 on 360.

Then I tried to find out what the equivalent GPU for the 360 is. Wikipedia says it's a Radeon X1800: http://en.wikipedia.org/wiki/Xenos_(graphics_chip)

I went to see benchmark results for a X1800 card and found this: http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+X1800+GTO

So around 140 G3D score. I hope that score is meaningful in any way.

Then I went to check said score for the graphics cards used in the PC benchmark, a Radeon HD 6450 and a GeForce GT 430. Those clock in at around 290 and 650 on the G3D scoreboard.
http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+6450
http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GT+430

So to me they seem quite a bit more powerful than the 360 GPU.

The CPU used for the benchmark was an overclocked i5-2600k Sandy Bridge with 4GB of system memory. So that's not what I would call similar specs to the 360 in any way.

So even after running Rage on a superior CPU and GPU and with more RAM, the game settles at roughly 33fps for the HD 6450 and 55fps for the GT 430. On 360 it runs at 60fps. Granted, at a lower resolution and with drops to 640x720, but the system as a whole also seems significantly less powerful than the slowest PC used in the benchmark.

So coming from that comparison I don't see how I should draw the conclusion that a similarly powered PC will run Rage with equal performance. But maybe I'm missing something here.

Well, I would have said that this was a pretty manifestly unsuitable comparison, considering that the console versions use dynamic resolution scaling to maintain a steady 60 fps (at a maximum resolution that is 70% of the PC at best and 35% of the PC at worst) while as far as anyone can tell, the PC version's Auto-Balancer does not utilize any such resolution scaling tech.
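For anyone unfamiliar with the idea, here's a very simplified sketch of how that kind of dynamic resolution scaling works (this is my own illustration, not id's actual auto-balancer, and the thresholds are made up): measure how long the GPU took last frame, nudge the render resolution down when it misses the 16.6 ms budget, and nudge it back up toward native when there's headroom.

// Very simplified dynamic resolution scaling (illustration only, not id's
// auto-balancer; all thresholds are made up).
#include <algorithm>
#include <cstdio>
#include <initializer_list>

struct DynamicRes {
    double scale = 1.0;                                   // fraction of native width
    double frameBudgetMs = 1000.0 / 60.0;                 // 60 fps target

    void update(double lastGpuFrameMs) {
        if (lastGpuFrameMs > frameBudgetMs)               // over budget: drop resolution
            scale *= 0.95;
        else if (lastGpuFrameMs < frameBudgetMs * 0.85)   // comfortable headroom: recover
            scale *= 1.02;
        scale = std::max(0.5, std::min(1.0, scale));      // never below half width, never above native
    }

    int renderWidth(int nativeWidth) const {
        return static_cast<int>(nativeWidth * scale);
    }
};

int main() {
    DynamicRes dr;
    for (double gpuMs : {14.0, 18.0, 19.0, 17.0, 15.0, 13.0}) {   // pretend GPU frame timings
        dr.update(gpuMs);
        std::printf("gpu %.1f ms -> render width %d\n", gpuMs, dr.renderWidth(1280));
    }
    return 0;
}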
 

infovore

Neo Member
Isn't full coherency the ultimate goal of HSA and heterogeneous compute?

Does that mean gaming is unlikely to benefit from such?

Consider the following scenario:

1) A scheduler thread writes a value in location A.
2) It writes a value in location B that tells other threads the value in A is valid.
3) A worker thread running on another core reads location B, and sees the value in A should be valid.
4) It reads the old value in A, and the program (or system!) crashes.

I've seen this happen on a system using Itanium CPUs. As far as I know, this cannot happen in exactly the same way on systems using x86 CPUs because the x86 memory model imposes more stringent coherency requirements, but these guarantees may not be present once you have CPUs and GPUs using the same memory. (The solution, in case you were wondering, is to insert a memory barrier between steps 1 and 2.)
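In C++11 atomics the same fix looks like this minimal sketch (a release store on the flag plays the role of the barrier after step 1, and the acquire load on the reader side completes the pairing):

// Minimal C++11 sketch of that fix: the release store to `ready` is the write to
// "location B" with the barrier folded in, and the acquire load on the worker side
// guarantees that the new value of "location A" (payload) is then visible.
#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;                          // "location A"
std::atomic<bool> ready{false};           // "location B"

void scheduler() {
    payload = 42;                                     // step 1: write A
    ready.store(true, std::memory_order_release);     // step 2: publish; nothing before this may be reordered after it
}

void worker() {
    while (!ready.load(std::memory_order_acquire))    // step 3: read B
        ;                                             // spin until the flag is set
    std::printf("payload = %d\n", payload);           // step 4: guaranteed to read 42, never the stale value
}

int main() {
    std::thread t1(scheduler), t2(worker);
    t1.join();
    t2.join();
    return 0;
}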

It is bugs like this, which cause "inexplicable" application or system crashes to happen once in a blue moon, that make memory coherency so valuable for HSA. Will gaming benefit from coherency? If your workload can easily be split like this:

- CPU
- barrier
- GPU
- barrier
- CPU
- ...

then the work ensuring coherency between CPU and GPU is just wasted effort. But with coherency it will be much easier to explore models that mix CPU and GPU work. I don't think we know yet whether this will be a net benefit or a dead end.
 

jgf

Member
Well, I would have said that this was a pretty manifestly unsuitable comparison, considering that the console versions use dynamic resolution scaling to maintain a steady 60 fps (at a maximum resolution that is 70% of the PC at best and 35% of the PC at worst) while as far as anyone can tell, the PC version's Auto-Balancer does not utilize any such resolution scaling tech.
Then let's just assume that the addition of the magic auto-balancer lets the slowest PC run at 60fps. Which means there will be heavy resolution drops to translate a 32fps average into 60fps. That still leaves us with a vastly >>30% superior system running the game in a 30% higher resolution than 360. So what is your point?
 

Easy_D

never left the stone age
Although I would prefer some kind of middle ground with strong story and level design (which we know that 4A excel at), while experimenting with the simulation aspects that made the STALKER games so memorable.

Something a bit closer to Deus Ex than Stalker? That could turn out to be something really special.
 

Durante

Member
Then let's just assume that the addition of the magic auto-balancer lets the slowest PC run at 60fps. Which means there will be heavy resolution drops to translate a 32fps average into 60fps. That still leaves us with a vastly >>30% superior system running the game in a 30% higher resolution than 360. So what is your point?
The point is that you are again making up arguments based on not directly comparable numbers, while you still dismiss all arguments based on actual comparisons at the same settings. Don't you see what's off about this?
 