
Nvidia GT300 + ATI Rx8xx info-rumor thread

camineet

Banned
GT300 is meant to be Nvidia's first completely new architecture (because it's DX11) since the introduction of G80 / 8800 in 2006.

Likewise, Rx8xx is meant to be ATI's first completely new architecture (it's also DX11) since the introduction of R600 / HD2900 in early 2007.

Both GPUs look to be a major advance in performance and features over GT200 and RV770.

Nvidia GT300

http://www.brightsideofnews.com/news/2009/...-cgpu!.aspx


nVidia's GT300 specifications revealed - it's a cGPU!
4/22/2009 by: Theo Valich


Over the past six months, we heard different bits'n'pieces of information when it comes to GT300, nVidia's next-gen part. We decided to stay silent until we had information confirmed from multiple sources, and now we feel confident enough to disclose what is cooking in Santa Clara, India, China and other nVidia sites around the world.

GT300 isn't the architecture that was envisioned by nVidia's Chief Architect, former Stanford professor Bill Dally, but this architecture will give you a pretty good idea why Bill told Intel to take a hike when the larger chip giant from Santa Clara offered him a job on the Larrabee project.

Thanks to Hardware-Infos, we managed to complete the puzzle of what nVidia plans to bring to market in a couple of months from now.
What is GT300?

Even though it shares the same first two letters with GT200 architecture [GeForce Tesla], GT300 is the first truly new architecture since SIMD [Single-Instruction Multiple Data] units first appeared in graphical processors.

The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. But the bigger difference between the two is that GT300 parts ways with the SIMD architecture that dominates GPU design today. GT300 cores rely on MIMD-like functions [Multiple-Instruction Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and computing operations on the fly. We're not exactly sure whether we should continue to use the terms "shader processor" or "shader core", as these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.

GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. This number alone raises the computing power of GT300 by more than 2x when compared to the GT200 core. Before the chip tapes out, there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have over double the computing power.

If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we are talking about no less than 3 TFLOPS of single-precision throughput. Double-precision performance is highly dependent on how efficient the MIMD-like units turn out to be, but you can count on a 6-15x improvement over GT200.
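
For reference, here is the back-of-the-envelope arithmetic behind that 3 TFLOPS figure. This is only a sketch: it assumes Nvidia keeps counting 3 floating-point operations per core per clock (dual-issue MAD + MUL), the way GT200's peak was usually quoted, and the 2 GHz shader clock is the article's hypothetical, not a confirmed spec.

```python
# Back-of-the-envelope peak single-precision throughput for the rumored GT300.
# Assumes GT200-style counting: 3 FP ops per core per clock (MAD + MUL dual issue).
cores = 512             # 16 clusters x 32 cores (rumored)
shader_clock_ghz = 2.0  # hypothetical clock from the article
flops_per_clock = 3     # assumption carried over from how GT200 peak was quoted

peak_tflops = cores * shader_clock_ghz * flops_per_clock / 1000
print(f"Peak SP throughput: {peak_tflops:.2f} TFLOPS")  # ~3.07 TFLOPS
```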


This is not the only change - cluster organization is no longer static. The scratch cache is much more granular and allows for greater interactivity between the cores inside a cluster. GPGPU, i.e. GPU computing, applications should really benefit from this architectural choice. When it comes to gaming, the question is obviously: how good can GT300 be? Please do bear in mind that this 32-core cluster will be used in next-generation Tegra, Tesla, GeForce and Quadro products.

This architectural change should result in a dramatic increase in double-precision performance, and if GT300 packs enough registers, performance of both single-precision and double-precision workloads might surprise all the players in the industry. Given the timeline when nVidia began work on GT300, it looks to us like the GT200 architecture was a test run for the real thing coming in 2009.

Just like a CPU, GT300 gives direct hardware access [HAL] for CUDA 3.0, DirectX 11, OpenGL 3.1 and OpenCL. You can also program the GPU directly, but we're not exactly sure whether developing such a solution would be financially feasible. The point, though, is that now you can do it. It looks like Tim Sweeney's prophecy is slowly, but certainly, coming to life.



Rumour: Nvidia GT300 architecture revealed
Author: Ben Hardwidge
Published: 23rd April 2009

How do you follow a GPU architecture such as Nvidia's original G80? Possibly by moving to a completely new MIMD GPU architecture. Although Nvidia hasn’t done much to the design of its GPU architecture recently - other than adding some more stream processors and renaming some of its older GPUs - there’s little doubt that the original GeForce 8-series architecture was groundbreaking stuff. How do you follow up something like that? Well, according to the rumour mill, Nvidia has similarly radical ideas in store for its upcoming GT300 architecture.

Bright Side of News claims to have harvested “information confirmed from multiple sources” about the part, which looks as though it could be set to take on any threat posed by Intel’s forthcoming Larrabee graphics processor. Unlike today’s traditional GPUs, which are based on a SIMD (single instruction, multiple data) architecture, the site reports that GT300 will rely on “MIMD-similar functions” where “all the units work in MPMD mode”.

MIMD stands for multiple instruction, multiple data, and it's an approach often found in SMP systems and clusters. Meanwhile, MPMD stands for multiple program, multiple data. An MIMD system such as this would enable you to run an independent program on each of the GPU's parallel processors, rather than having the whole lot running the same program.
Put simply, this could open up the possibilities of parallel computing on GPUs even further, particularly when it comes to GPGPU apps.
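
As a rough illustration of the difference, here is a CPU-side analogy (not actual GPU code): in a SIMD model every lane applies the same operation across a batch of data, while an MIMD/MPMD machine lets each processor run its own program. The worker functions (shade, physics, audio, ai) are made up purely for illustration.

```python
# Toy CPU-side analogy of SIMD vs MIMD/MPMD scheduling (illustrative only).
from multiprocessing import Pool

data = list(range(8))

# SIMD-style: one operation applied to every element in lockstep.
def shade(x):
    return x * 2 + 1

simd_result = [shade(x) for x in data]   # same "program" for every lane

# MPMD-style: each worker can run a *different* program on its own data.
def physics(x):  return x ** 2
def audio(x):    return x + 100
def ai(x):       return -x

programs = [shade, physics, audio, ai]

if __name__ == "__main__":
    with Pool(processes=len(programs)) as pool:
        # apply_async lets each process execute an independent function
        jobs = [pool.apply_async(fn, (i,)) for i, fn in enumerate(programs)]
        mpmd_result = [j.get() for j in jobs]
    print(simd_result, mpmd_result)
```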

Computing expert Greg Pfister, who's worked in parallel computing for 30 years, has a good blog post about the differences between MIMD and SIMD architectures, which is well worth a read if you want to find out more. Pfister makes the case that a major difference between Intel's Larrabee and an Nvidia GPU running CUDA is that the former will use a MIMD architecture, while the latter uses a SIMD architecture. "Pure graphics processing isn't the end point of all of this," says Pfister. He gives the example of game physics, saying "maybe my head just isn't built for SIMD; I don't understand how it can possibly work well [on SIMD]. But that may just be me."

Pfister says there are pros and cons to both approaches. “For a given technology,” says Pfister, “SIMD always has the advantage in raw peak operations per second. After all, it mainly consists of as many adders, floating-point units, shaders, or what have you, as you can pack into a given area.” However, he adds that “engineers who have never programmed don’t understand why SIMD isn’t absolutely the cat’s pajamas.”

He points out that SIMD also has its problems. "There's the problem of batching all those operations," says Pfister. "If you really have only one ADD to do, on just two values, and you really have to do it before you do a batch (like, it's testing for whether you should do the whole batch), then you're slowed to the speed of one single unit. This is not good. Average speeds get really screwed up when you average with a zero. Also not good is the basic need to batch everything. My own experience in writing a ton of APL, a language where everything is a vector or matrix, is that a whole lot of APL code is written that is basically serial: One thing is done at a time." As such, Pfister says that "Larrabee should have a big advantage in flexibility, and also familiarity. You can write code for it just like SMP code, in C++ or whatever your favorite language is."
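
Pfister's point about "averaging with a zero" is essentially Amdahl's law: even a small serial fraction drags the effective speed of a wide SIMD unit way down. A quick sketch, with the 32-wide unit and the serial fractions chosen purely for illustration:

```python
# Effective speedup of a wide SIMD unit when some fraction of the work is serial
# (Amdahl's law). The 32-wide unit and the serial fractions are illustrative.
def effective_speedup(simd_width, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / simd_width)

for serial in (0.0, 0.01, 0.10, 0.50):
    print(f"{serial:4.0%} serial -> {effective_speedup(32, serial):5.1f}x of peak")
# 0% -> 32.0x, 1% -> 24.4x, 10% -> 7.8x, 50% -> 1.9x
```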

Bright Side of News points out that this could potentially put the GPU's parallel processing units "almost on equal terms" with the "FPUs inside latest AMD and Intel CPUs." In terms of numbers, the site claims that the top-end GT300 part will feature 16 groups that will each contain 32 parallel processing units, making for a total of 512. The site also claims that the GPU's scratch cache will be "much more granular", which will enable a greater degree of "interactivity between the cores inside the cluster".

No information on clock speeds has been revealed yet, but if this is true, it looks as though Nvidia's forthcoming GT300 GPU will really offer something new to the GPU industry. Are you excited about the prospect of an MIMD-based GPU architecture with 512 parallel processing units, and could this help Nvidia to take on the threat from Intel's Larrabee graphics chip? Let us know your thoughts in the forums.


http://www.bit-tech.net/news/hardware/2009...-architecture/1





ATI RV870 / R800 (HD 5850, HD5870, HD 5850X2, HD 5870X2)

http://www.neoseeker.com/news/10564-specs-...-5870-turn-up-/

Specs for ATI HD 5870 turn up
Kevin Spiess - Friday, April 24th, 2009 | 11:38AM (PT)


Seems reasonable; coming in July

Following the rumors we went over earlier in the week, it does seem that ATI's next flagship GPU, the RV870, will be landing sometime later this summer, possibly July.

Today some specs turned up for the RV870 on German site ATI-Forum.de.

The RV870 will be a 40nm part, meaning that it will draw less power than current-generation 55nm GPUs. It will have:

* 1200 shader processors (compared with 800 on the current HD 4870)
* 32 ROPS (compared with 16 on the HD 4870)
* 48 TMUs (compared with 40 on the HD 4870)
* 2.1 TFlops of effective computational potential (impressive - just about double the TFlops offered by the HD 4870!)

The core clock speed for the HD 5870 appears to be 900 MHz, with the 512MB (or possibly 1GB) of GDDR5 running at 1100 MHz (4400 MHz effective, thanks to GDDR5's quad data rate). The RV870 will be DirectX 11 compatible as well.
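
To sanity-check the 2.1 TFLOPS claim, here is the rough arithmetic. It assumes ATI keeps counting 2 FP ops (one multiply-add) per stream processor per clock, as it does for the HD 4870, and the 256-bit memory bus is an assumption carried over from the HD 4870, not something the leak specifies.

```python
# Rough peak-throughput arithmetic for the rumored HD 5870.
# Assumes ATI-style counting: 2 FP ops (one multiply-add) per stream processor per clock.
sps = 1200               # rumored stream processors
core_clock_ghz = 0.9     # 900 MHz core clock from the article

peak_tflops = sps * 2 * core_clock_ghz / 1000
print(f"Peak SP throughput: {peak_tflops:.2f} TFLOPS")   # ~2.16 TFLOPS

# GDDR5 transfers four data words per clock, hence the 4400 MHz "effective" figure.
mem_clock_mhz = 1100
effective_mhz = mem_clock_mhz * 4
bus_width_bits = 256     # assumption: same bus width as the HD 4870
bandwidth_gbs = effective_mhz * 1e6 * bus_width_bits / 8 / 1e9
print(f"Effective data rate: {effective_mhz} MHz, ~{bandwidth_gbs:.0f} GB/s")  # ~141 GB/s
```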

It is presumed the RV870 will come in the same variants that the last few generations of ATI cards have. That is to say, HD 5850 and HD 5870 parts will launch first, followed by an HD 5870 X2. Perhaps most interesting here, though, is that many more board partners will be making an HD 5850 X2 card, unlike the current generation, where Sapphire was the only company to put an HD 4850 X2 together.

Looking at these specs, if someone were to take a guess at the HD 5870's performance, factoring in shader processor improvements, it seems that an HD 5870 will offer somewhere around 155%-160% of the performance of the HD 4870 -- which seems hard to believe at first. That would put one HD 5870 around the power of two HD 4850 cards.

These specs are all from a "very trusted source" according to ATI-Forum.de, and they seem reasonable.

Certainly NVIDIA will have something equally fast and powerful to compete against the HD 5870 with. We'll post more rumors as they become available.



http://www.brightsideofnews.com/news/2009/...s-revealed.aspx

ATI Radeon 5870 and 5870X2 specs revealed?
4/24/2009 by: Theo Valich


German site ATI-Forum probably scored the coup of 2009 - according to their sources, ATI's RV870-based cards are already out at selected partners.
We cannot say whether this leak was a reaction to our joint-exclusive story about nVidia's GT300 architecture, but one thing is for sure - ATI wants to bring out its Cypress board as soon as possible, with a launch planned for July 2009.

The alleged specifications of RV870 reveal that this chip is not exactly a new architecture, but rather a DirectX 11-specification tweak of the RV770 GPU architecture. Just like nVidia's GT300 architecture, the actual RV870 chip is manufactured in TSMC's 40nm half-node process, packing more transistors than GT200 chips. Regardless of what ATI says about nVidia and large dies, the fact of the matter is that ATI is making a large die as well - but the company will continue to use the dual-GPU approach to reach high-end performance.

The RV870 chip should feature 1200 cores, divided into 12 SIMD groups of 100 cores each [20 "5D" units per group], while RV770 was based on 10 SIMD groups of 80 cores each [16 "5D" units per group, each unit consisting of one "fat" ALU and four simpler ones]. Thus, it is logical to conclude that when it comes to the execution cores, not much has happened architecturally - ATI's engineers increased the number of registers and made other demanding architectural changes in order to comply with Shader Model 5.0 and DirectX 11 Compute Shaders. The core is surrounded by 48 texture mapping units, meaning ATI is continuing to shift the ROP:core:TMU ratio. For the first time, ATI is shipping a part with 32 ROP [Raster OPeration] units, meaning the chip is able to output 32 pixels per clock.

When it comes to products, ATI plans to launch four parts: the Radeon HD 5850 and 5850X2 in the more affordable pricing bracket, and the HD5870 and HD5870X2 as the high-end parts. While there were no clocks for the Radeon HD 5850/5850X2 parts, the alleged clocks for the HD5870 and HD5870X2 reveal that, for the first time, an X2 part is clocked higher than the single-GPU part. Whether this was a requirement of the SidePort memory interface, we do not know at the moment. German site Hardware-Infos placed all of the data in a very convenient table, which we are running here with permission. Their story also contains more data about the upcoming ATI RV870 architecture.

[Table: ATI 4870 vs 5870 specifications, courtesy of Hardware-Infos]

These units should result in 2.16 TFLOPS for the HD5870 and 4.56 TFLOPS for the dual-GPU part. Yes, you read that correctly - we are going from a 1 TFLOPS chip to 4.6 TFLOPS within 13 months. Is it now clear that CPUs are at a standstill when it comes to performance improvements? There is no doubt that ATI pulled another miracle out of its hat with brilliant on-time execution, releasing a 40nm part that should be relatively cheap to manufacture. The biggest question, though: can it beat nVidia's GT300, and by how much?
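
The quoted TFLOPS figures follow directly from the unit counts, and you can even work the clocks back out of them. A quick sketch: the ~950 MHz per-GPU clock for the X2 is a back-calculation from the 4.56 TFLOPS figure, not a quoted spec, though it is consistent with the claim that the X2 is clocked higher than the single-GPU card.

```python
# Working the quoted TFLOPS figures back to clocks (2 FP ops per SP per clock).
sps = 1200

hd5870_tflops = 2.16
hd5870_clock_ghz = hd5870_tflops * 1000 / (sps * 2)
print(f"HD 5870 implied clock: {hd5870_clock_ghz * 1000:.0f} MHz")        # 900 MHz

hd5870x2_tflops = 4.56
x2_clock_ghz = hd5870x2_tflops * 1000 / (2 * sps * 2)   # two GPUs on one board
print(f"HD 5870 X2 implied clock per GPU: {x2_clock_ghz * 1000:.0f} MHz")  # ~950 MHz (back-calculated)
```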

Some journalists allegedly have miracle 8-balls and claim that the ATI cards will blow nVidia out of the water. We are not so certain... stay tuned.
 

Slightly Live

Dirty tag dodger
I can't decipher what that means to me as a casual PC gamer who might splash cash on some new graphics hardware in the next 12 months or beyond.

Better eye candy, right?
 

mr stroke

Member
I really hope ATI pulls out the 58xx line in the summer at a good price(I need an upgrade asap)

only problem is wtf are we going to use it for outside of Crysis? I can't think of anything that a current card can't handle until maybe Rage/Doom 4?
 

Darklord

Banned
Dani said:
I can't decipher what that means to me as a casual PC gamer who might splash cash on some new graphics hardware in the next 12 months or beyond.

Better eye candy, right?

Price ++++
Performance ++

When new cards like this are announced I never bother. You know the upgraded versions are 6 months away at a cheaper price.
 

Nirolak

Mrgrgr
That's... quite the increase. o_O

On the note of DirectX 11 actually, You wouldn't happen to have any rumors on DirectX 11 games per chance would you?

I've been wanting the new DX11 ATI cards for a while, but currently the only game that's confirmed to support DX11 is Battlefield: Bad Company 2, and that's not out until 2010.

Mainly I'm curious because of:
Darklord said:
When new cards like this are announced I never bother. You know the upgraded versions are 6 months away at a cheaper price.
Since well, if no games are going to support it for six months, I'm feeling quite patient.
 

thuway

Member
Nirolak said:
That's... quite the increase. o_O

On the note of DirectX 11 actually, You wouldn't happen to have any rumors on DirectX 11 games per chance would you?

I've been wanting the new DX11 ATI cards for a while, but currently the only game that's confirmed to support DX11 is Battlefield: Bad Company 2, and that's not out until 2010.

Mainly I'm curious because of:

Since well, if no games are going to support it for six months, I'm feeling quite patient.


Agreed. I'm happy these things are coming to fruition, perhaps Sony and Microsoft should look at the graphics market and hold off future IPs with these specs in mind.
 

Haunted

Member
I don't care for the technical mumbo-jumbo (although I can tell that these numbers are quite a bit higher than those numbers from the previous cards) - someone just tell me what the next 8800GT will be, because that's the best and longest-lasting bang for your buck graphics card I've ever bought.
Better than a 3dfx Voodoo 2. Yes, I said it.
 

Xavien

Member
Depending on performance, I might just replace my two 8800GTs with these guys.

But whatever happens it looks like this'll be the "leap" generation of graphics cards that we occasionally get (Geforce 8000 series and ATI 3XXX were the last "big change" generation of graphics cards).
 

navanman

Crown Prince of Custom Firmware
Hmm, you can see how nVidia wants to move into the CPU market with the new GT300 GPU.
It sounds very similar in design to the basics of Intel and AMD CPUs, and probably points to a future of possible CPU/GPU integration.
 

careful

Member
Even though it's kinda useless at this point, my measuring stick will still be Crysis VH 19x12 res 60+ fps
And you know what, even with twice the power of current cards, I still don't think that'll be enough.

Someone already said it though, the new tech is nice, but I don't see many games pushing current tech all that hard nowadays. The current push seems to be towards games that can run on bare minimum hardware specs. In a way, it's cool that you don't have to upgrade so often, but at the same time it's making the graphics whore in me cry a bit inside. :'(
 

camineet

Banned
thuway said:
I also wonder how Cell 2 will compete with it :lol .


Cell 2 probably won't be directly competing with GT300, R8xx or Larrabee in the GPU / GPGPU / cGPU market - unless Cell 2 is designed to accelerate graphics more than the current Cell is, something that IBM actually said they would take a look at.

Jim Kahle:

We will push the number of special processing units. By 2010, we will shoot for a teraflop on a chip. I think it establishes there is a roadmap. We want to invest in it. For those that want to invest in the software, it shows that there is life in this architecture as we continue to move forward.

DT: Right now you’re at 200 gigaflops?

Jim Kahle: We’re in the low 200s now.

DT : So that is five times faster by 2010?

Jim Kahle: Four or five times faster. Yes, you basically need about 32 special processing units.
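
For context, here is the rough arithmetic behind those figures. This is only a sketch: it uses the commonly cited 25.6 GFLOPS per SPE at 3.2 GHz (4-wide SIMD with fused multiply-add), and the clock needed to reach a full teraflop with 32 SPEs is a back-calculation, not anything Kahle stated.

```python
# Rough arithmetic behind the "low 200s today, ~1 TFLOP with 32 SPUs" quote.
# Assumes the commonly cited 25.6 GFLOPS per SPE at 3.2 GHz (4-wide SIMD, FMA).
gflops_per_spe_at_3p2 = 25.6

today = 8 * gflops_per_spe_at_3p2          # current Cell: 8 SPEs
print(f"8 SPEs @ 3.2 GHz:  {today:.0f} GFLOPS")   # ~205 GFLOPS ("low 200s")

future = 32 * gflops_per_spe_at_3p2        # hypothetical 32-SPE Cell at the same clock
print(f"32 SPEs @ 3.2 GHz: {future:.0f} GFLOPS")  # ~819 GFLOPS

# Back-calculated clock needed to hit a full teraflop with 32 SPEs (not a quoted spec).
needed_ghz = 1000 / (32 * 8)               # 8 flops per SPE per clock
print(f"Clock needed for 1 TFLOPS: ~{needed_ghz:.1f} GHz")  # ~3.9 GHz
```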


DT: AMD bought ATI Technologies and they signaled that a combined CPU and graphics processor is not so far off. They are going to do an initial crack at it for emerging markets in 2007. Is that something you see coming and is Cell anticipating this world already?

Jim Kahle: If you look at a gaming system, there is obviously a close relationship between graphics and the main processing elements. Over time we will look to see how effectively we can make the main processor and graphics tie together. I won’t go beyond that.

DT: With Cell and PlayStation 3, was there a lot of thought about whether you needed a graphics chip?

Jim Kahle: We explored that to understand the bounds of what we could do with the architecture. If you look at some of our ray tracing, ray casting techniques, they are very effective. People have worked on some software caches to help out the ray tracing. I wouldn’t say that is graphics processing because ray tracing is a little different. We’ve explored the bounds on this to understand where it can contribute with pure graphics processing. Over time, we have been exploring that.

DT: With Moore’s Law, is it inevitable that they will wind up on one chip?

Jim Kahle: If you look at the PlayStation 2, eventually the graphics did get integrated into the Emotion Engine. Sony has talked about that. Definitely from a cost reduction view. Now we have to look at it from a performance point of view too. That is something we have to study for the future. Even beyond PlayStation 3. I don’t know if it is inevitable. We have to understand the pros and cons of it.

http://web.archive.org/web/20061031...curynews.com/aei/2006/10/the_playstation.html
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
I think there is way too little info to start speculation. I am excited, but it is not "JUNE" yet.
 

camineet

Banned
godhandiscen said:
I think there is way too little info to start speculation. I am excited, but it is not "JUNE" yet.


I disagree, there have been quite a few articles on GT300 and RV870 in recent days/weeks. ATI will most likely launch in the summer, and Nvidia will launch in the fall. ATI's part should be right around the corner.
 

dionysus

Yaldog
I have no idea what any of the technical mumbo jumbo means, but I know what it implies for me. By the end of 2009 I will probably be upgrading my 8800GT. That and a SSD should improve performance nicely.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
Why is ATI beating Nvidia to the launch this time? If I learned anything from last generation, it is that I am not buying a card until both companies launch their products. Prices decrease dramatically and the most informative benches get released.
 

Gaogaogao

Member
dionysus said:
I have no idea what any of the technical mumbo jumbo means, but I know what it implies for me. By the end of 2009 I will probably be upgrading my 8800GT. That and a SSD should improve performance nicely.
agreed on both accounts
 

Nirolak

Mrgrgr
To put this into perspective a bit in terms of computing power:

PlayStation 2 Emotion Engine: 0.0062 Teraflops
PS3 CPU + GPU Combined: 0.46 Teraflops
8800GT: 0.504 Teraflops
HD 5870 X2: 4.6 Teraflops

And that's ignoring all the hardware-based bonuses that come with newer technology.

Edit: Fixed the PS3 flops.
 

mr stroke

Member
Nirolak said:
To put this into perspective a bit in terms of computing power:

PlayStation 2 Emotion Engine: 0.0062 Teraflops
8800GT: 0.504 Teraflops
PS3 CPU + GPU Combined: 2.0 Teraflops
HD 5870 X2: 4.6 Teraflops

And that's ignoring all the hardware-based bonuses that come with newer technology.

wow had no clue thanks :)
funny to think a 5870x2 will be twice as powerful as a PS3, just sad that devs will make everything multiplatform :(
I hope someone steps up to the plate, we need something other than Crysis to push tech
 

camineet

Banned
Nirolak said:
To put this into perspective a bit in terms of computing power:

PlayStation 2 Emotion Engine: 0.0062 Teraflops
8800GT: 0.504 Teraflops
PS3 CPU + GPU Combined: 2.0 Teraflops
HD 5870 X2: 4.6 Teraflops

And that's ignoring all the hardware-based bonuses that come with newer technology.


The PS3 figure is marketing flops. You're comparing the programmable performance of everything else to PS3's NvFlops, for the most part.

The PS3's actual programmable performance is
PS3's CELL CPU: 218 GFLOPs + RSX GPU: 200~250 GFLOPS
Thus, CPU + GPU combined: 0.418 ~ 0.468 TFLOPS (somewhere under half a TFLOP)


mr stroke said:
wow had no clue thanks :)
funny to think a 5870x2 will be twice as powerful as a PS3, just sad that devs will make everything multiplatform :(

5870x2 is many times more powerful than PS3, about TEN TIMES more :)

4.6 TFLOPS vs 0.46 TFLOPS.

It would be silly to think that PS3 is even remotely close to 1/2 the performance of a 5870X2 :)
 

Nirolak

Mrgrgr
camineet said:
The PS3 figure is marketing flops. You're comparing the programmable performance of everything else to PS3's NvFlops, for the most part.

The PS3's actual programmable performance is
PS3's CELL CPU: 218 GFLOPs + RSX GPU: 200~250 GFLOPS
Thus, CPU + GPU combined: 0.418 ~ 0.468 TFLOPS (somewhere under half a TFLOP)




5870x2 is many times more powerful than PS3, about TEN TIMES more :)

4.6 TFLOPS vs 0.46 TFLOPS.

It would be silly to think that PS3 is even remotely close to 1/2 the performance of a 5870X2 :)
I thought there was something horribly wrong with that PS3 number. Thanks. Updated the list.
 

camineet

Banned
Nirolak said:
I thought there was something horribly wrong with that PS3 number. Thanks. Updated the list.


Welcome.

And just to be fair regarding consoles, the X360's performance, CPU+GPU combined, is 0.355 TFLOPS
(CPU: 115 GFLOPS + GPU: 240 GFLOPS) which is more realistic compared to the 1 TFLOP marketing figure.

For those that are curious about their little Wii, it gets 0.015 TFLOPS

CPU+GPU combined: 15.75 GFLOPS
(Broadway CPU: 2.85 GFLOPS + Hollywood GPU: 12.9 GFLOPS)

which is exactly 50% more than GameCube's 10.5 GFLOPS
(Gekko CPU: 1.9 GFLOPS + Flipper GPU: 8.6 GFLOPS)
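
Pulling those per-chip numbers together (all figures are the estimates quoted in this thread, and the HD 5870 X2 figure is the rumored 4.56 TFLOPS), a quick sketch of the ratios being thrown around:

```python
# Combined CPU+GPU figures quoted in this thread (all numbers are forum estimates).
gflops = {
    "PS3":      218  + 225,   # Cell + RSX (midpoint of the 200-250 GFLOPS range)
    "Xbox 360": 115  + 240,   # Xenon + Xenos
    "Wii":      2.85 + 12.9,  # Broadway + Hollywood
    "GameCube": 1.9  + 8.6,   # Gekko + Flipper
}
hd5870x2 = 4560               # rumored HD 5870 X2, in GFLOPS

for name, g in gflops.items():
    print(f"{name:9s} {g:7.2f} GFLOPS  (HD 5870 X2 would be ~{hd5870x2 / g:.0f}x faster)")

# Wii vs GameCube: exactly 50% more combined throughput.
print(f"Wii vs GameCube: {gflops['Wii'] / gflops['GameCube'] - 1:.0%} more")
```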


On another note, now imagine what Nintendo could do with a low-end to midrange GPU that is at least one GPU-generation beyond Rx8xx.

AMD/ATI is not only working on the R9xx generation but the R1000 as well.

The Flipper/Hollywood GPU architecture is now 10 year old tech.

http://cube.ign.com/articles/099/099520p1.html
IGNcube: You say you began talking to Nintendo® in 1998. So from white paper designs and initial design to final mass production silicon how long was the development process?

Greg Buchner: Well, there was a period of time where we were in the brainstorm period, figuring out what to build, what's the right thing to create. We spent a reasonable amount of time on that, a really big chunk of 1998 was spent doing that, figuring out just what [Flipper] was going to be. In 1999 we pretty much cranked out the gates, cranked out the silicon and produced the first part. In 2000 we got it ready for production, so what you saw at Space World last year was basically what became final silicon.


The Wii to Wii HD/Wii 2 could represent an 11-12 year leap in GPU technology O_O
 

Log4Girlz

Member
Labombadog said:
I don't care what other people think, but these rumors got me aroused. That's right, I said it. ;)

Let us touch our erect penises.

Ok kidding...but man to think that next-gen consoles will be at least 1 step above these cards.
 

aeolist

Banned
Intel's supposed to be pushing Larrabee out the door late this year right?

Looks like they'll be late to the party.
 

artist

Banned
camineet said:
The Wii to Wii HD/Wii 2 could represent an 11-12 year leap in GPU technology O_O
I'd be totally shocked if they adopt something as powerful as the RV740 in Nintendo's next console, let alone this "jump".

aeolist said:
Intel's supposed to be pushing Larrabee out the door late this year right?

Looks like they'll be late to the party.
Early is not Intel's priority, support is.
 

sankao

Member
These are impressive figures all around. I'm interested to see how "programmable" the GT300 will be. MIMD seems a little vague. Did they add an instruction fetch/cache for each of the 512 units? If so, it's pretty impressive. If it's just a new way to sell their previous SIMT approach, well, good for them I guess. Guess we will have to wait until CUDA 3.0 is released to know.

I hope the game programmers follow up and exploit this programmability. There is no more excuse to use a vertex/pixel pipe "because it's optimized in hardware". Voxel engines or any other creative engine will be welcome. No need to use an antiquated Z-buffer and rasterizer either - hello proper phase handling, goodbye aliasing!
 

Zaptruder

Banned
camineet said:
Welcome.

And just to be fair regarding consoles, the X360's performance, CPU+GPU combined, is 0.355 TFLOPS
(CPU: 115 GFLOPS + GPU: 240 GFLOPS) which is more realistic compared to the 1 TFLOP marketing figure.

For those that are curious about their little Wii, it gets 0.015 TFLOPS

CPU+GPU combined: 15.75 GFLOPS
(Broadway CPU: 2.85 GFLOPS + Hollywood GPU: 12.9 GFLOPS)

which is exactly 50% more than GameCube's 10.5 GFLOPS
(Gekko CPU: 1.9 GFLOPS + Flipper GPU: 8.6 GFLOPS)


On another note, now imagine what Nintendo could do with a low-end to midrange GPU that is at least one GPU-generation beyond Rx8xx.

AMD/ATI is not only working on the R9xx generation but the R1000 as well.

The Flipper/Hollywood GPU architecture is now 10 year old tech.

http://cube.ign.com/articles/099/099520p1.html



The Wii to Wii HD/Wii 2 could represent an 11-12 year leap in GPU technology O_O

That's all well and neat, but if the Wii has proven anything this gen, it's that graphics have become a commodity and no longer a feature in a console system.

At least that's what Nintendo will think.

More likely, what's happening is that there's a division in the market between what traditional gamers (18-30+ males) want and what extended-market gamers (i.e. women, children who rely on their parents to buy them consoles, older people, lapsed gamers, etc.) want.

Still, it's good to know that if next-gen consoles release in 2012 or later, they'll have easy access to technology that is 10 times more powerful than the most powerful hardware of the last generation.

The problem of good graphics will then rest pretty much solely with the content creators and the artists. Hopefully their tools will keep up, to help them deliver an aggregate level of quality that exceeds what we've come to see this gen.
 

tokkun

Member
GPU makers' calculations of 'cores' and FLOPs are such marketing bullshit.

Anyway, after being forced to wrestle with CUDA again for the past few weeks I look forward to trying this new MPMD model in the GT300.
 

Zyzyxxz

Member
Darklord said:
Price ++++
Performance ++

When new cards like this are announced I never bother. You know the upgraded versions are 6 months away at a cheaper price.

pretty much.

Avoid the 1st gen of new cards and buy the refreshes which always run cooler, use less power, and perform the same if not a few percentage points better
 

Stop It

Perfectly able to grasp the inherent value of the fishing game.
Zyzyxxz said:
pretty much.

Avoid the 1st gen of new cards and buy the refreshes which always run cooler, use less power, and perform the same if not a few percentage points better
And are usually cheaper to boot.

Still, since 2006/2007 we've seen the same DX10 parts refreshed and refined, it's about time that we see DX11 hit the fray, architecture wise. Will buy in 2010 once the 2nd gens come out.
 

Jaagen

Member
irfan said:
I'd be totally shocked if they adopt something as powerful as the RV740 in Nintendo's next console, let alone this "jump".

I guess they probably will, seeing as a Wii 2 release won't be here for at least a couple of years, maybe three. And assuming they keep a small form factor, I guess they can shrink it down enough to be cool as hell for the Wii 2. However, I don't think they will shoot higher than this.
 

artist

Banned
aeolist said:
Not really following you here
Intel is already late to the GPU segment (captain obvious) and they don't seem to be in a rush to push LRB out the door. Their paramount effort and resources are being devoted to garnering wide dev support. After all, LRB is a bunch of x86 cores with a software layer doing all the magic.

Zyzyxxz said:
pretty much.

Avoid the 1st gen of new cards and buy the refreshes which always run cooler, use less power, and perform the same if not a few percentage points better
Not always true. It all depends on when the new generation launches and when its corresponding refresh comes out. For example, people who bought R300 (9700 Pro) or G80 (8800GTX) possibly made the best purchase of their lives.
 

DaCocoBrova

Finally bought a new PSP, but then pushed the demon onto someone else. Jesus.
irfan said:
For example, people who bought R300 (9700 Pro), G80 (8800GTX) possibly made the best purchase of their lives.


That'd be me. :D
 

D4Danger

Unconfirmed Member
Any chance the 2XX series will drop in price with this news?

My 8800GTS packed up and I'm currently using a 7800GT :lol
 

TheExodu5

Banned
Can't wait. Holding out with my 8800GT until this series comes out...I figure it's worth waiting out the current gen of cards.

careful said:
Even though it's kinda useless at this point, my measuring stick will still be Crysis VH 19x12 res 60+ fps
And you know what, even with twice the power of current cards, I still don't think that'll be enough.

Someone already said it though, the new tech is nice, but I don't see many games pushing current tech all that hard nowadays. The current push seems to be towards games that can run on bare minimum hardware specs. In a way, it's cool that you don't have to upgrade so often, but at the same time it's making the graphics whore in me cry a bit inside. :'(

Well, yes and no. With this kind of hardware, we can finally start to expect a constant 60fps, V-Synced, with full anti-aliasing in most games. Many games still don't maintain 60fps on today's high-end cards at the highest quality settings, especially when AA comes into play. I look forward to not having to make any graphical compromises.
 

Talamius

Member
I'll wait for the 2nd or 3rd refresh. $100 cards can pull 1680x1050 comfortably in most games these days.

Don't waste your money.
 

TheExodu5

Banned
Talamius said:
I'll wait for the 2nd or 3rd refresh. $100 cards can pull 1680x1050 comfortably in most games these days.

Don't waste your money.

I'll spend my money how I want, thanks.

Sure $100 cards get you by, but you're not getting pristine performance. You still need to deal with tearing/stuttering, quality compromises (usually in the form of AA), lower memory usage in a few games...

I'm not saying my 8800GT isn't still a good card...it is...but it's not quite enough to satisfy me at this point.
 
Death Dealer

I'm still getting by on an X1950 without problems, although I will need a complete rebuild sometime in the next year. (Alan Wake, Diablo 3, Wolfenstein)

I have to laugh about DX11. :lol

Was DX10 ever relevant ? Not IMO.
 

Mindlog

Member
Death Dealer said:
I'm still getting by on an X1950 without problems, although I will need a complete rebuild sometime in the next year. (Alan Wake, Diablo 3, Wolfenstein)

I have to laugh about DX11. :lol

Was DX10 ever relevant ?

I'm still on a 9800 Pro. All the games I was planning to upgrade for came out buggy and shitty. I will also be transitioning straight from a DX9 part to a DX11 part.
 

Nemo

Will Eat Your Children
Exactly how next gen would these be? I'm a bit out of the loop since I upgraded my PC. Would they worthy enough to upgrade my 4850?
 

TheExodu5

Banned
Teetris said:
Exactly how next gen would these be? I'm a bit out of the loop since I upgraded my PC. Would they worthy enough to upgrade my 4850?

Going simply by the numbers here, the high end will probably be around 2x the performance of the current high-end. So, yeah, it'd be worth it. No confirmation until the products are released though.

That means you'd be looking at about 3x the performance of the 4850 if you'd get a high-end card.
 
Death Dealer

Teetris said:
Exactly how next gen would these be? I'm a bit out of the loop since I upgraded my PC. Would they worthy enough to upgrade my 4850?

If you've got a 4850, you must have upgraded not very long ago. Does it not do everything you need ? Need to play new upcoming games in 1080P with high AA?

Then yeah it's probably worth an upgrade.

The software is what should force you to upgrade, and frankly since this console generation began, PC software technology has been pretty stagnant.
Nothing really pushes PC hw anymore, unless the code is horribly unoptimized.
The last time I was still using a 3 year old card - a GeForce 1 in 2002 - I couldn't play new games at a resolution higher than 800x600 with most detail off or turned way down. An 8800GT will still max out most games at reasonable resolutions.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
Death Dealer said:
If you've got a 4850, you must have upgraded not very long ago. Does it not do everything you need ? Need to play new upcoming games in 1080P with high AA?

Then yeah it's probably worth an upgrade.

The software is what should force you to upgrade, and frankly since this console generation began, PC software technology has been pretty stagnant.
Nothing really pushes PC hw anymore, unless the code is horribly unoptimized.
I would say there are a couple of software titles pushing for an upgrade if you are an enthusiast who wants all the bells and whistles turned on in a game. Empire III, Crysis, Stalker Clear Sky, World in Conflict, and GTA4 are games in which you will not achieve 60fps at 1080p once you crank up all the settings, regardless of your hardware. An 8800 is fine for an experience a step above consoles, but it isn't for the enthusiast anymore.
 