
Nvidia GT300 + ATI Rx8xx info-rumor thread

tokkun

Member
His comments about Fusion being a 'deathblow' to CUDA is not consistent with what I've been hearing from people in ATI/AMD.
 

artist

Banned
godhandiscen said:
Charlie is beyond optimistic with his ATI fanboyism. Those are a couple of rumors he got right in how long? Every day Charlie has new BS to bash Nvidia; of course something must be true every once in a while. If ATI wins, cool, I want ATI to win. I want to lick Nvidia fanboy tears because it will be fun for a night at the bar. However, in the long run I would miss Nvidia, the strong competitor that delivered excellent products over these last two years. If the GT300 is such a flop, I can see ATI just OC'ing its card for Q2 of 2010, which would suck.
Eh, I didn't suggest you take his word as gospel truth. Neither did I imply that Nvidia (GT300) should go kaput :/

His sources' claim that the GT300 is behind schedule could be accurate, given his past record. However, his claim that RV870 will kick GT300's ass should be taken with a grain of salt, because he knows squat about either chip's computational power to be making those statements; if he had this info, why wouldn't he post it?

So like I said, read through the article and you'll know what parts are his speculation and what came from his sources.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
irfan said:
Eh, I didn't suggest you take his word as gospel truth. Neither did I imply that Nvidia (GT300) should go kaput :/

His sources' claim that the GT300 is behind schedule could be accurate, given his past record. However, his claim that RV870 will kick GT300's ass should be taken with a grain of salt, because he knows squat about either chip's computational power to be making those statements; if he had this info, why wouldn't he post it?

So like I said, read through the article and you'll know what parts are his speculation and what came from his sources.
Well, I am truly hoping the GT300 series gets delayed until Q1 of 2010 since I don't think I am growing attached to my GTX295.
 
M3d10n said:
The Doom 3 imp model. On left it's just subdivided, on the right it's using a displacement map. The base model has roughly the same amount of polygons as the original Doom3 model.

It will still require a bit of processing power to generate the displacement maps, and it certainly doesn't make modelling them easier ;p

Displacement maps always end up fucked for me when I generate them in zbrush and use them in Maya :(
 

camineet

Banned
Nvidia’s 512-core GT300 taped out at 40nm, already in A1 silicon
May. 18, 2009 (1:50 pm) By: Rick Hodgin

Nvidia’s next-gen Tesla GPGPU engine, the 40nm GT300 GPU, has been confirmed to be in A1 silicon at Nvidia’s labs, meaning it actually taped out sometime in January, February or March.

The first silicon produced would've been A0, meaning Nvidia is already through one stepping in pre-production, which is not uncommon. In fact, there may be a solid explanation for it: it was previously rumored that both ATI and Nvidia are having trouble with TSMC's 40nm process technology, and that could be affecting yields. If true, then the re-spin (moving from one stepping to another) could have been done not for performance reasons exactly, but rather to address TSMC's 40nm issues.

The GT300 is the Tesla part. There are additional Gx300yy chips, such as the G300, which will be the GeForce desktop card, along with the G300GL, which will be a Quadro part. The specs include 512 cores, a 512-bit memory interface, and 256 GB/s to 280 GB/s of memory bandwidth depending on whether or not the part is overclocked; the parts will target different thermal, power and performance envelopes based on intended use and relative clocks.
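
As a quick sanity check on those bandwidth figures, here is a minimal back-of-the-envelope sketch in Python; the per-pin GDDR5 data rates used below are assumptions chosen to match the quoted range, not confirmed GT300 specs.

# Peak memory bandwidth from bus width and effective per-pin data rate.
def bandwidth_gb_per_s(bus_width_bits, per_pin_gbps):
    # total bits per second across the bus, divided by 8 to get bytes per second
    return bus_width_bits * per_pin_gbps / 8.0

print(bandwidth_gb_per_s(512, 4.0))    # 256.0 GB/s -- the stock figure above
print(bandwidth_gb_per_s(512, 4.375))  # 280.0 GB/s -- the overclocked figure above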


Rick’s Opinion

Nvidia’s GT300 is believed also to be a cGPU, which is to say it shares traits with a CPU in addition to the traditional GPU engine. If true, the cGPU may begin to expose additional abilities which allow for more exciting gaming effects, more generic programming abilities (such as a different approach to PhysX integration), and many other compute possibilities–especially in Tesla or Quadro when used in a supercomputer configuration.

http://www.geek.com/articles/games/...d-out-at-40nm-already-in-a1-silicon-20090518/


BTW, to avoid any confusion about the GT300 or GeForce GTX300 series, nVidia's GT300 chip has several codenames. The GT300 silicon is destined to become a Tesla part; G300 is the desktop GeForce card, while G300GL is the upcoming Quadro part. nVidia's old-timers still call the chip NV70, and if you roam the halls of Graphzilla's Building C in Santa Clara, you might find papers with NV70 all over them. nVidia's current parts, such as the GeForce GTX 285, are all based on NV65 chips.

We saw what the board looks like and there are plenty of surprises coming for all the naysayers - expect the worldwide hardware media to go into a frenzied competition over who will score the first picture of a GT300 board. If not in the next couple of days, expect GT300 pictures to come online during Computex.

According to our sources, nVidia has no plans to show the GT300 to the stockholders, analysts and the selected invited press [no, we're not in that club], but you can expect that Jen-Hsun and the rest of the exec gang will be bullish about their upcoming products.



http://www.brightsideofnews.com/new...dy-taped-out2c-a1-silicon-in-santa-clara.aspx


GT300 is gonna be an absolute BEAST, but I'm pretty sure we won't see it until Q1 2010. Given that Larrabee won't be out until Q1 2010 also, that means ATI will be the only one with a DX11 GPU out in 2009.
 
280GB/s of memory bandwidth!?

Oh fap, fap, fap.

Kinda makes the ~20GB/s of RSX look a little pathetic, that's a 14x increase! Yet there are still some people who believe console technology is in the same realm as the high-end PC space; it's not, it's closer to the Wii than hardware like this.

For the record that's more bandwidth than the eDRAM in Xenos, which makes up something like a third of its die space. :lol

I'm actually scared to think how big GT300 is going to end up; they must be pushing more than 2 billion transistors with that thing, surely? I think the rumours of them ditching a lot of fixed-function hardware have to be true, otherwise I don't see how they're going to push 512 stream processors without making the thing the size of King Kong. GT200 was big enough as it is.
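
For what it's worth, here's a quick check of those ratios using the commonly quoted console figures (the console numbers are the usual public specs, assumed here rather than measured):

gt300 = 280.0        # GB/s, upper end of the rumoured GT300 range
rsx_gddr3 = 22.4     # GB/s, PS3 GDDR3 video memory
xenos_edram = 256.0  # GB/s, Xbox 360 Xenos eDRAM (daughter die)

print(gt300 / 20.0)          # 14.0x against the ~20 GB/s figure quoted above
print(gt300 / rsx_gddr3)     # ~12.5x against the full 22.4 GB/s RSX spec
print(gt300 > xenos_edram)   # True -- more than the Xenos eDRAM figure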
 

M3d10n

Member
Dabookerman said:
It will still require a bit of processing power to generate the displacement maps, and it certainly doesn't make modelling them easier ;p

Displacement maps always end up fucked for me when I generate them in zbrush and use them in Maya :(
It's because you don't have a DX11 card, my friend! I'm pretty sure future versions of programs like zbrush and mudbox will be able to use hardware displacement maps. Right now zbrush actually generates millions and millions of polygons using the CPU.
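
For anyone curious what the hardware would actually be doing, here's a minimal CPU sketch of displacement mapping in Python/NumPy (the function and variable names are mine, purely illustrative): each vertex of the subdivided mesh gets pushed along its normal by the value sampled from a greyscale displacement texture.

import numpy as np

def displace(vertices, normals, uvs, disp_map, scale=1.0):
    # vertices, normals: (N, 3) arrays; uvs: (N, 2) in [0, 1]; disp_map: 2D greyscale array
    h, w = disp_map.shape
    # nearest-neighbour sample of the displacement texture at each vertex's UV
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px][:, None]            # per-vertex displacement amount
    return vertices + normals * d * scale    # push each vertex along its normal

With DX11-style tessellation the dense vertices are generated on the GPU and displaced in the domain shader, so the sculpt-level mesh never has to be stored or uploaded as raw geometry.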
 
I'm currently waiting for the next generation standard cards. Maybe these will do the job?

TheExodu5 said:
Can't wait. Holding out with my 8800GT until this series comes out... I figure it's worth waiting out the current gen of cards.



Well, yes and no. With this kind of hardware, we can finally start to expect a constant, V-synced 60fps with full anti-aliasing in most games. Many games still don't maintain 60fps on today's high-end cards at the highest quality settings, especially when AA comes into play. I look forward to not having to make any graphical compromises.

Me and you both. Unless it's a low-tech game (Team Fortress 2) or optimized extremely well (Devil May Cry 4), a constant 60fps is impossible. I'm shocked that I get dips to 22fps at 720p in Lost Planet, even though DMC4 on max DX9 settings (only the second-highest level of AA) runs at a smooth 60fps.
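
(Those dips sound even worse in frame-time terms; a tiny illustrative check:)

def ms_per_frame(fps):
    return 1000.0 / fps

print(round(ms_per_frame(60), 1))  # 16.7 ms budget per frame at 60fps
print(round(ms_per_frame(22), 1))  # 45.5 ms at 22fps -- nearly three 60fps frames' worth of work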
 

camineet

Banned
brain_stew said:
280GB/s of memory bandwidth!?

Oh fap, fap, fap.

Kinda makes the ~20GB/s of RSX look a little pathetic, that's a 14x increase! Yet there are still some people who believe console technology is in the same realm as the high-end PC space; it's not, it's closer to the Wii than hardware like this.

For the record that's more bandwidth than the eDRAM in Xenos, which makes up something like a third of its die space. :lol

I'm actually scared to think how big GT300 is going to end up; they must be pushing more than 2 billion transistors with that thing, surely? I think the rumours of them ditching a lot of fixed-function hardware have to be true, otherwise I don't see how they're going to push 512 stream processors without making the thing the size of King Kong. GT200 was big enough as it is.


Believe it or not, I was gonna mention that 256~280 GB/sec of bandwidth is as much as, or more than, the EDRAM bandwidth of the Xbox 360's Xenos GPU (256 GB/sec), which is really insane! :lol Even the PS3's main XDR memory & GDDR3 graphics memory bandwidths (25.6 and 22.4 GB/sec respectively) are not as high as the last-gen PS2's EDRAM bandwidth (48 GB/sec).

The GT300 is gonna need all that bandwidth to feed all its shaders, texture units, ROPs, etc. because it's gonna be such a beast. I'm expecting GT300 to be at least 2.5 billion transistors, perhaps closer to 2.8 billion, which would be double that of GT200 (1.4 billion).

I read somewhere that ATI's RV870 (which is gonna be another mid-range GPU, used to make high-end dual-GPU X2 cards) is larger than GT200, and so it might come close to 2 billion transistors. If that's the case, another relatively "small" GPU that's incredibly efficient per square mm, then one could imagine Nvidia's latest large monolithic GPU coming in at around 3 billion transistors. Besides, Nvidia almost always doubles their transistor count each generation.
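
A rough die-size sketch to go with those transistor guesses, assuming the commonly cited 1.4 billion transistors / 576 mm^2 for GT200 on 65nm and ideal area scaling to 40nm (real processes scale worse, so actual dies would come out larger):

gt200_transistors = 1.4e9
gt200_area_mm2 = 576.0               # GT200 on TSMC 65nm
density_gain = (65.0 / 40.0) ** 2    # ~2.64x transistors per mm^2 with ideal scaling

for transistors in (2.4e9, 2.8e9, 3.0e9):
    area = transistors / gt200_transistors * gt200_area_mm2 / density_gain
    print(f"{transistors / 1e9:.1f}B transistors -> ~{area:.0f} mm^2")
# prints roughly 374, 436 and 467 mm^2 -- smaller than GT200 on paper, but only
# if 40nm actually delivers ideal density scaling, which is a big "if".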
 

Zero Hero

Member
nVidia already has boards with its chipsets, so why can't they just make their own boards with their graphics chip on board? The heatsink and fan could cover both the CPU and GPU, like in the PS3.
 
Zero Hero said:
nVidia already has boards with its chipsets, so why can't they just make their own boards with their graphics chip on board? The heatsink and fan could cover both the CPU and GPU, like in the PS3.

Good luck finding a heatsink capable of cooling both a Nehalem and GT300! :lol :lol
 

Nirolak

Mrgrgr
Truespeed said:
And also that Cell copycat Larrabee :lol
And in this exhibit, we witness a person who has no idea how either chipset is structured.

They're completely different, outside of having many cores.
 
brain_stew said:
That's like comparing a 360 with a Geforce 2 (five years difference) and proclaiming that "the ~ 7GB/s of the Geforce 2 look a little pathetic". Hell, the difference there is even bigger, 34x.
 

SapientWolf

Trucker Sexologist
Aizu_Itsuko said:
That's like comparing a 360 with a Geforce 2 (five years difference) and proclaiming that "the ~ 7GB/s of the Geforce 2 look a little pathetic". Hell, the difference there is even bigger, 34x.
It's really apples and oranges, because consoles are a closed platform designed for one thing that they can optimize the hell out of. A lot of power is going to waste in PC gaming.
 

Zaptruder

Banned
While it's great and all to hear about ever-faster graphics tech... what exactly are they going to use it on?

I mean, resolution seems to be converging on 1080p as a standard, and frame rates are already consistently high...

Is this why they've introduced things like 3D Vision?
"If the content developers aren't going to create the assets to push the technology, then we'll have to up the ante by rendering at twice the frame rate... and 3D is the best way for the frame rate increase to mean anything!"
 
SapientWolf said:
It's really apples and oranges, because consoles are a closed platform designed for one thing that they can optimize the hell out of. A lot of power is going to waste in PC gaming.

No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol

They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compete with new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.
 

camineet

Banned
brain_stew said:
No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol

They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compensate for new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.


Exactly.

100% agreed. There is no arguing this. It's fact.

Console optimisation might allow consoles to compete with PCs that are, say, several times more powerful, but NOT 10x more powerful. That's an order of magnitude difference. An entire console generation. Upcoming high-end PC components are gonna be as much of a leap beyond 360/PS3 as 360/PS3 are beyond Wii.
 

camineet

Banned
Zaptruder said:
While it's great and all to hear about ever-faster graphics tech... what exactly are they going to use it on?

I mean, resolution seems to be converging on 1080p as a standard, and frame rates are already consistently high...

Framerate and resolution are only the tip of the graphical iceberg. What needs to be improved is the detail/complexity of each frame, with far better lighting, post-processing, effects, etc., beyond what DX9 and DX10 cards can do today.



Yes, 3D Vision can greatly benefit from an Nvidia GPU that's at least twice as powerful as the current strongest GPU, but that's only one thing. Instead of needing two cards or a dual-GPU card, a single GPU will be able to handle it better than SLI.
 
camineet said:
Exactly.

100% agreed. There is no arguing this. It's fact.

Console optimisation might allow consoles to compete with PCs that are, say, several times more powerful, but NOT 10x more powerful. That's an order of magnitude difference. An entire console generation. Upcoming high-end PC components are gonna be as much of a leap beyond 360/PS3 as 360/PS3 are beyond Wii.

Um, even that's quite a stretch. Twice? Maybe in a worst-case scenario, sure, but don't expect much more than 50%, if that, most of the time. A 1.83GHz Core 2 Duo system with an X1900 Pro will run decent 360-to-PC ports (like Unreal Engine 3 games, for example) essentially the same as on the consoles, and many would put that level of hardware on par with a 360, if not a little above it, nowhere near two to three times ahead.

Poor optimisation on the PC side is way overblown; it's not as if many (if any) 360 developers code to the metal anymore, they just use a standard DirectX API for their graphics functions, as they do on the PC.

RAM usage is poorly optimised on the PC side, sure, but when 4GB can be had for $40 and 3GB is standard even on bottom-of-the-barrel laptops these days, it really doesn't make a difference anyway.


Truespeed said:
And also that Cell copycat Larrabee :lol

Say what now!?

Larrabee is a GPU

Cell is a CPU.

That's a pretty fucking fundamental difference right there. I'm not going to go into all the other fundamental differences; suffice it to say, they follow very different design philosophies and are meant for totally different functions.

Ugh, yeah, Sony invented the concept of a manycore processor design and all others are just derivative copies of it, sure whatever helps you sleep at night.
 

Zaptruder

Banned
camineet said:
Framerate and resolution are only the tip of the graphical iceberg. What needs to be improved is the detail/complexity of each frame, with far better lighting, post-processing, effects, etc., beyond what DX9 and DX10 cards can do today.



Yes, 3D Vision can greatly benefit from an Nvidia GPU that's at least twice as powerful as the current strongest GPU, but that's only one thing. Instead of needing two cards or a dual-GPU card, a single GPU will be able to handle it better than SLI.

I know how the graphics can get better.

My point was more that asset developers (i.e. game developers) are unable (for market reasons) to keep up with the pace at which graphics technology is moving... and that 3D Vision may be the method by which Nvidia continues to create a viable purpose for ever-increasing chip speeds.
 
It's not as if this technology makes older games uglier, quite the opposite in fact, since it'll improve them across the board. The games will come, they always do; hardware developing faster than software is hardly a new phenomenon, and in the end it ultimately drives the software to improve as consumers demand games that make use of their new hardware.
 

SRG01

Member
brain_stew said:
Say what now!?

Larrabee is a GPU

Cell is a CPU.

That's a pretty fucking fundamental difference right there. I'm not going to go into all the other fundamental differences; suffice it to say, they follow very different design philosophies and are meant for totally different functions.

Ugh, yeah, Sony invented the concept of a manycore processor design and all others are just derivative copies of it, sure whatever helps you sleep at night.

Yeah, don't pay any attention to him. :lol I love my PS3 and all, but I love computer architectures even more. Cell is to Larrabee as white wine is to red. Never mind that I like white wine better... :lol

Honestly though, Cell and Larrabee are positioned in entirely different markets and may, in the future, reposition themselves in others. Cell was originally conceived as a main CPU for servers and desktops (both as a quasi-derivative of IBM's POWER architecture and R&D from STI), but it gradually found its way into accelerator applications for multi-blade servers.
 

SapientWolf

Trucker Sexologist
brain_stew said:
No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol

They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compete with new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.
Yeah, but all that extra horsepower won't do any good unless devs take advantage of it. Which might not happen if they develop games with the current generation of consoles in mind.
 

ymmv

Banned
SapientWolf said:
Yeah, but all that extra horsepower won't do any good unless devs take advantage of it. Which might not happen if they develop games with the current generation of consoles in mind.

Also keep in mind that most developers target their games for the most popular current graphics card, not for the latest and greatest DX11 cards like the GT300 or Rx8xx that only a minority of the total gaming population will own. Games like Crysis aren't the norm. It will take at least 1.5 - 2 years before we see games that will take advantage of the additional power of these new cards. The greatest immediate benefit will be that current games will run smoother on more powerful hardware. It will take a while before you can see games that do more. Once that happens we'll be enjoying a new console generation too.
 

Durante

Member
brain_stew said:
Say what now!?

Larrabee is a GPU

Cell is a CPU.
I'm sorry this is off-topic, but actually looking at the two chips, that's not a "fundamental difference", it's playing with semantics. In fact, with the main architectural difference arguably being cache coherency (well, maybe second to heterogeneity), one could well argue that Cell is more GPU-like than LRB.
 

gofreak

GAF's Bob Woodward
brain_stew said:
No amount of optimisation is going to make up for 14x the bandwidth and 10x the compute. :lol

They're absolutely comparable, since Sony used a cut down off the shelf Nvidia GPU. Optimisation is nice and all, but can never compete with new generation silicon. Good luck in getting the Wii to match a PS3, because that's essentially what you're proposing.

There's absolutely no doubt about the relative gap in technology.

But the advantage consoles have is that there is a larger number of developers there willing to squeeze the machines for everything they've got, to use it as their baseline.

Of course the PC benefits from this via console ports, and with one of these GPUs you'll enjoy better resolution/texture-filtering etc.

But there are precious few PC devs who are willing to use the latest nVidia or AMD as their baseline, or to even optimise for those chips. Many if not most PC devs seem to target far more modest specifications, with more powerful chips 'only' providing higher-resolution/better-filtered versions of those games.

Of course there are developers willing to target the high end (e.g. Crytek), but they seem few and far between relative to the console space, where you often have whole stables of first-party devs (at least) willing to aim really high with their games. The number of high-end-looking games on PC still doesn't seem to match consoles; the platform has the occasional title that matches and exceeds what's available on consoles, but the breadth of such titles doesn't seem to be the same.

Or am I wrong? I'll admit I haven't exhaustively surveyed what's coming up in the near future in terms of native PC games, so I could well be...

Anyway, this is kind of OT. As a tech whore, those reports about the GT300 are mouthwatering.
 
gofreak said:
There's absolutely no doubt about the relative gap in technology.

But the advantage consoles have is that there is a larger number of developers there willing to squeeze the machines for everything they've got, to use it as their baseline.

Of course the PC benefits from this via console ports, and with one of these GPUs you'll enjoy better resolution/texture-filtering etc.

But there are precious few PC devs who are willing to use the latest nVidia or AMD as their baseline, or to even optimise for those chips. Many if not most PC devs seem to target far more modest specifications, with more powerful chips 'only' providing higher-resolution/better-filtered versions of those games.

Of course there are developers willing to target the high end (e.g. Crytek), but they seem few and far between relative to the console space, where you often have whole stables of first-party devs (at least) willing to aim really high with their games. The number of high-end-looking games on PC still doesn't seem to match consoles; the platform has the occasional title that matches and exceeds what's available on consoles, but the breadth of such titles doesn't seem to be the same.

Or am I wrong? I'll admit I haven't exhaustively surveyed what's coming up in the near future in terms of native PC games, so I could well be...

Anyway, this is kind of OT. As a tech whore, those reports about the GT300 are mouthwatering.
I would think you're right. Even with the more powerful (in comparison to consoles) PCs we see today, I really haven't seen a boatload of games that look so much better than a console game that I would consider upgrading my GPU and playing on PC.
 

Xdrive05

Member
I'll try to wait for the mid-range version to come out to replace my reliable 8800gt superclocked. If I can hang in there that is.
 
Yeah... Waiting for the mid-range, small-die GT300 series that would fit in a small form factor PC to hit before I jump in.

Ha.

Hahahaha.

Ha.
 

Fafalada

Fafracer forever
M3d10n said:
- Less VRAM spent on geometry, since nurbs/bezier/subdivision/displacement need far fewer vertices/control points.
That's actually false - there are a lot of reasons why HOS haven't seen widespread use even though hardware has been capable of useful implementations for over 10 years now - and one is that they don't really save memory (outside of contrived scenarios, like certain types of terrain).
Subdivision+displacement is better - but anyway - tessellation-capable hardware has been mainstream for so long now that it's getting really silly it's still used in PR.

At any rate - there are other cool things you can do with programmable tessellation (that don't involve a redesigned art pipeline and a lot of headaches), like, for instance, aliasing-free shadow maps (we might not have to wait for LRB to get those after all).
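
To put some purely illustrative numbers on the memory point (all sizes below are assumptions, not figures from any real asset), here's roughly why subdivision + displacement can pay off while plain HOS mostly just trade vertices for control points:

def mesh_bytes(num_verts, bytes_per_vert=32):   # position + normal + UV, roughly
    return num_verts * bytes_per_vert

dense_mesh = mesh_bytes(1_000_000)       # ~32 MB: store the detailed mesh directly
control_cage = mesh_bytes(10_000)        # ~0.3 MB: coarse subdivision cage
disp_map = 2048 * 2048 * 2               # ~8 MB: 16-bit displacement texture

print(dense_mesh / 1e6)                  # 32.0 MB
print((control_cage + disp_map) / 1e6)   # ~8.7 MB with subdivision + displacement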
 

zoku88

Member
Durante said:
I'm sorry this is off-topic, but actually looking at the two chips, that's not a "fundamental difference", it's playing with semantics. In fact, with the main architectural difference arguably being cache coherency (well, maybe second to heterogeneity), one could well argue that Cell is more GPU-like than LRB.
This is an old comment, but I wouldn't really buy that argument. One just needs to look at the particular instructions each processor is designed for to see the big fundamental difference.
 

Manager

Member
Images of the RV870?
http://www.chiphell.com/2009/0728/89.html

[Image: alleged RV870 board shot]


Awesome watermark...

Apparently it's around 28cm long, which is longer than any other ATI card; the 2900XT is 24cm. They say it has two 6-pin connectors, meaning around 225W (75W from the PCIe slot plus 75W per 6-pin connector). (Source: http://forum.beyond3d.com/showthread.php?t=49120&page=53)
 

jmdajr

Member
As long as they are still supporting CUDA, I look forward to what these monsters can do for video rendering/encoding/editing.
 

Tom Penny

Member
Xdrive05 said:
I'll try to wait for the mid-range version to come out to replace my reliable 8800gt superclocked. If I can hang in there that is.

I'm in the same boat and have the same card. I'm just not sure what the best possible bang for the buck is right now. Prices are really coming down.
 

dionysus

Yaldog
Tom Penny said:
I'm in the same boat and have the same card. I'm just not sure what the best possible bang for the buck is right now. Prices are really coming down.

4890 imo. I've seen those as low as $160 if you wait for deals.
 

JudgeN

Member
I'm going green this time around; I'm not really liking how some games don't play nice with ATI drivers (looking at you, Last Remnant and RE5), so bring on the GT300 so I can sell my 4870 to someone.
 
Any idea of the prices for a top model?

My dual 8800 GT setup still serves me well (since I never go over 1080p), but it'd be nice to see the future. Will Nvidia's cards support PureVideo HD? Any hardware H.264 decoders? My CPU chugs on 1080p (AMD dual-core 5600+).

Also, I've been loyal to Nvidia for years because of their *nix support. My hand-me-down cards will eventually make their way to a Linux box, because of how far ahead Nvidia's drivers are. But, IIRC, didn't ATI release some open-source drivers a while back? How far have those progressed?

Because that Rx8xx looks fucking delicious.

EDIT: PureVideo, not true vision. And combined my posts.
 

Songbird

Prodigal Son
Are the ATI R8xx cards just going to keep slipping down the calendar? I'm in the market for a new PC, and whenever the X2 cards turn up I'll probably get one of those. I have the specs I want picked out, but not the cash or the guts to put one together.
 

artist

Banned
camineet said:
http://www.geek.com/articles/games/...d-out-at-40nm-already-in-a1-silicon-20090518/






http://www.brightsideofnews.com/new...dy-taped-out2c-a1-silicon-in-santa-clara.aspx


GT300 is gonna be an absolute BEAST, but I'm pretty sure we won't see it until Q1 2010. Given that Larrabee won't be out until Q1 2010 also, that means ATI will be the only one with a DX11 GPU out in 2009.
:lol

These guys are absolutely hilarious, claiming tape-outs 8 weeks before the actual date. Whatever they're smoking must be nice. :lol

http://www.semiaccurate.com/2009/07/29/miracles-happen-gt300-tapes-out/

And BS News (what a name... :D Bull Shit News) gets caught red-handed; the admin (Theo) has been caught in the past too.

http://www.semiaccurate.com/2009/07/27/plagiarism-rampant-it-journalism/
http://www.semiaccurate.com/2009/07/27/when-caught-dont-destroy-evidence/
 

Wollan

Member
I'm like a button click away from buying Nvidia 3D Vision.
I will most likely do it before the day is out; I just need to plan some stuff first.

edit: And ordered.
 