
Caution: G70 FUD within. Enter at your own risk...

Pimpwerx

Member
I'm gonna warn you that I don't really know what to make of this info, and don't want to say one way or another. Just thought it would be interesting to consider since it is at least supported by some screen grabs.

Source: bit-tech.net
Nvidia's Chief Scientist, David Kirk, has suggested that "People just don't know as much as they think they do" when it comes down to the many clocks within the GeForce 7800, aka G70.

"It's somewhat hard for us to say 'the core clock in G70 is this single number'", says Kirk. "We didn't want to be accused of exaggerating the clock speed, so we picked a conservative number to talk about the core clock speed. But, yes, that is just one of the multiple clocks."

David's comments come as he speaks exclusively to bit-tech about the issue, which has become a hot topic amongst the community over the past couple of days, following the discovery that RivaTuner was reporting varied clock speeds for the 7800. We met with David in central London today, and he talked to us about many different issues. We'll have the full interview for you tomorrow, but we couldn't sit on this one until then.

"People have said that G70 doesn't have any new architecture, but that's not really true. It has new architecture, it's just not always visible.

"The chip was designed from the ground up to use less power. In doing that, we used a lot of tricks that we learned from doing mobile parts. The clock speeds within the chip are dynamic - if you were watching them with an oscilloscope, you'd see the speeds going up and down all the time, as different parts of the chip come under load."

So why haven't we heard about this feature before?

"We haven't talked about this feature before now because we wanted the technology to speak for itself," says Nvidia's PR Manager Adam Foat. "People noticed its effect - that the 7800 is amazingly quiet and fantastically cool - and that's what we wanted. "

You can pretty much bet that the other reason Nvidia haven't talked about the technology is that they didn't want ATI to find out about it and copy it.

We asked David what the three visible clocks did (that's the ROP clock, pixel clock and geometry clock if you're still playing catchup). "You're making the assumption there's only three clocks," was his cryptic reply. "The chip is large - it's 300m transistors. In terms of clock time, it's a long way across the chip, it makes sense for different parts of the chip to be doing things at different speeds."

What of the speculation that certain parts of the chip only overclock in multiples of more than 1MHz, appearing to restrict overclocking? "Well, the chip is actually better for overclocking, since it's so low-power and low-heat," Kirk tells us. "We're going to have to work with the guys at RivaTuner, because it could be that it makes sense for overclocking tools only to offer options that are really going to give a performance benefit, rather than letting users hunt around for the best combinations and multiples that work. Because of the way the chip works, it makes sense for different parts to be working in multiples."

So there you have it - there are an undisclosed number of individual clockspeeds within G70, possibly more than 3. Those clocks scale up and down to save power, and this is one of the big features that has kept G70 to a single-slot-heatsink design, and an astoundingly quiet one at that. This is proprietary Nvidia tech, and they're incredibly pleased with how well it works. Nvidia is going to work to iron out issues with overclocking and RivaTuner, but don't expect too much more to be given away - we think that Nvidia see this as a great technology advantage over their rivals.

Our thanks to David for talking to us, especially as we began to become aware of the events unfolding around us. Check back on the site for the full interview tomorrow, where we discuss unified shaders, the PlayStation 3, HDR, and the next generation of GeForce cards.
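
For the curious, here's a toy sketch of the load-driven, per-domain clocking Kirk is describing - every domain name and MHz value in it is a hypothetical illustration, not an actual G70 spec:

```python
# Toy model of load-driven, per-domain clocking: each block's clock
# ramps between an idle floor and a peak as load changes, which is why
# an oscilloscope would show the speeds "going up and down all the
# time". All names and numbers here are hypothetical illustrations.

def domain_clock(floor_mhz, peak_mhz, load):
    """Scale one domain's clock linearly between its idle floor and peak."""
    load = min(max(load, 0.0), 1.0)  # clamp load to [0, 1]
    return floor_mhz + (peak_mhz - floor_mhz) * load

# The three visible domains from the thread; Kirk hints there are more.
domains = {"rop": (250, 430), "pixel": (250, 430), "geometry": (275, 470)}

for name, (floor, peak) in domains.items():
    clocks = [domain_clock(floor, peak, load) for load in (0.0, 0.5, 1.0)]
    print(name, [f"{c:.0f}MHz" for c in clocks])
```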

Essentially, an asynchronous clock, which I believe is becoming more and more common in CPUs these days. Wasn't aware GPUs did this yet, but it could be interesting. It's about an 8% increase over the default clock. @ 550MHz, that would place the bump at roughly 600MHz. What does this mean? I don't know. All we have is a RivaTuner anomaly and some NVidia PR comments.
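
A quick sanity check on that arithmetic (550MHz is just the hypothetical figure above, and the ~8% ratio is the unconfirmed RivaTuner observation):

```python
core_mhz = 550.0         # hypothetical clock figure from the post above
ratio = 1.08             # the unconfirmed ~8% bump seen via RivaTuner
print(core_mhz * ratio)  # 594.0 -> the "roughly 600MHz" ballpark
```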

Here's another piece of this same topic.

And far be it from Asus to make this a bullet-point. The thing is, the specs probably stay the same, so there might not actually be any gains. But it sure does make the chip look faster than it is.

If you have the vertex shaders running 8% faster than the pixel shaders (not confirmed), then it throws off the extrapolated figures for RSX. The dot prod and FLOPS figures would be wrong. :? I'm gonna wait and see what officially comes of it, but since it popped up on B3D a short while ago, I thought I'd post it here too. Thoughts? PEACE.
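
To make the concern concrete, here's a minimal sketch of why a faster vertex domain would skew any clock-based extrapolation. The unit count and ops/cycle below are made-up placeholders, not real RSX or G70 specs; only the 550MHz and the ~8% come from this thread:

```python
# Any "FLOPS = units x ops/cycle x clock" estimate built on the single
# advertised clock undercounts units living in a faster domain.

CLOCK_MHZ = 550.0
VERTEX_RATIO = 1.08  # unconfirmed ~8% vertex-domain bump

def gflops(units, ops_per_cycle, clock_mhz):
    return units * ops_per_cycle * clock_mhz * 1e6 / 1e9

flat = gflops(8, 10, CLOCK_MHZ)                   # one flat clock assumed
bumped = gflops(8, 10, CLOCK_MHZ * VERTEX_RATIO)  # vertex domain 8% faster
print(f"flat-clock estimate:  {flat:.1f} GFLOPS")
print(f"faster vertex domain: {bumped:.1f} GFLOPS (+8%)")
```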
 

Kleegamefan

K. LEE GAIDEN
I don't understand how this could be FUD??

Seems like he is just clarifying features of G70 that weren't talked about before??
 

gofreak

GAF's Bob Woodward
With their figures they seem to have always gone with the lowest clock - at least NVidia has (not sure about the board manufacturers). The issue is that people were finding parts of the chip running faster than the claimed core clock, not slower (I believe?). So effectively I can't see how anyone would complain, since AFAIK your performance will only be better, not worse.
 

fart

Savant
running multiple async clocks is hardly proprietary to nvidia. this stuff has been in production, and probably in the literature, for years now.

these tricks tend to hurt performance, though. this is good evidence that their chips are just getting too damned big, and that they're hitting an envelope on how far they can scale clock speeds if these tricks are what's buying them better performance.
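
for what it's worth, the cost here is easy to ballpark: every signal crossing between unrelated clock domains goes through a synchronizer (typically a two-flop chain or an async FIFO), which adds latency per crossing. a rough sketch, all numbers hypothetical:

```python
# Back-of-envelope model of clock-domain-crossing latency. A two-flop
# synchronizer costs roughly two destination-clock cycles per crossing.

def crossing_latency_ns(dest_clock_mhz, sync_stages=2):
    """Latency added by one clock-domain crossing, in nanoseconds."""
    cycle_ns = 1000.0 / dest_clock_mhz
    return sync_stages * cycle_ns

# Say a pixel's data crosses three domain boundaries on its way through:
total_ns = 3 * crossing_latency_ns(dest_clock_mhz=550.0)
print(f"~{total_ns:.1f}ns of pure synchronizer latency")  # ~10.9ns
```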
 

gofreak

GAF's Bob Woodward
Apparently the clock speed differences scale up with overclocking. At 500MHz, it goes up to 540MHz.

Also, from that article:

Check back on the site for the full interview tomorrow, where we discuss unified shaders, the PlayStation 3, HDR, and the next generation of GeForce cards.

I wonder what he'll be able to say about PS3..SOMETHING new would be nice.

edit - actually, there's conflicting info out there about how these clock variations behave when the chip is overclocked.. seems it might take a little more time to figure out entirely.
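
One possible source of the conflicting info: at a 500MHz core, the two obvious models agree exactly, so that single data point can't distinguish them. A quick comparison - the +40MHz fixed-offset model is inferred from the 500 -> 540 figure, not anything Nvidia has confirmed:

```python
# A fixed +40MHz offset and a proportional 8% bump both predict 540MHz
# at a 500MHz core, and only diverge at other core clocks.

for core in (450, 500, 550):
    fixed = core + 40             # fixed-offset model
    proportional = core * 1.08    # proportional model
    print(f"core {core}MHz: +40MHz -> {fixed}MHz, x1.08 -> {proportional:.0f}MHz")
# core 450MHz: +40MHz -> 490MHz, x1.08 -> 486MHz
# core 500MHz: +40MHz -> 540MHz, x1.08 -> 540MHz
# core 550MHz: +40MHz -> 590MHz, x1.08 -> 594MHz
```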
 

rastex

Banned
fart said:
this is good evidence that their chips are just getting too damned big,

I suspect this is correct as well. One of the biggest causes of high power consumption is high fanout, and when they have as many pipelines as they do, running them all off one clock would produce HUGE fanout and thus very high power consumption (and heat output). Gotta appreciate how they're making the uninformed techies think this is something really cool and special, though.
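
The fanout point maps directly onto the standard CMOS dynamic power formula, P = a * C * V^2 * f. A rough sketch - every number here is illustrative, not a measured G70 value:

```python
# Fanout drives the switched capacitance C, so one clock tree spanning
# every pipeline toggles far more capacitance per edge than several
# smaller domains that can throttle independently.

def dynamic_power_w(activity, cap_nf, volts, mhz):
    return activity * (cap_nf * 1e-9) * volts**2 * (mhz * 1e6)

V = 1.4
# One big domain: all 100 units of capacitance toggling at full rate.
monolithic = dynamic_power_w(1.0, 100.0, V, 550.0)
# Four smaller domains, two of them nearly idle at this moment.
partitioned = sum(dynamic_power_w(a, 25.0, V, 550.0) for a in (1.0, 1.0, 0.1, 0.1))
print(f"monolithic:  {monolithic:.1f} W")   # ~107.8 W
print(f"partitioned: {partitioned:.1f} W")  # ~59.3 W
```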
 

gofreak

GAF's Bob Woodward
There are some (wrongly) thinking this is all new and innovative, but I think more people are surprised that NVidia appears to be lowballing their paper figures, i.e. sticking with the lowest clock.

Makes me wonder about the RSX figures and whether the same methodology was adopted - whether the real numbers may be greater under circumstances where the clock goes up.
 