Wuh-oh, just when we thought we knew where we stood:

http://www.pcmag.com/article2/0,2817,2326689,00.asp

In the mobile space, five versions of the new GPUs are available, Nvidia said. From top to bottom, there is the GeForce 9800M GTX, GeForce 9800M GTS, and GeForce 9800M GT, plus the GeForce 9700M GTS and GeForce 9700M GT. These join the existing GeForce 9600M and 9500M GPUs for mid-range and mainstream notebooks, the company said.

The top two, the 9800M GTX and the 9800M GTS, contain 112 processing units, on par with the 9800 GT. Each supports PhysX, Nvidia SLI, Hybrid SLI, PureVideo, and the new MXM 3.0 graphics module specification, which allows a notebook's graphics chip to be upgraded.

Incidentally, these are all compatible with the new Nvidia chipset that AppleInsider reports Apple will probably use in the next MacBook Pros.

Time for a TDP hunt.

EDIT: These cards are all enabled for CUDA.
 
Would love to see the 9800 GTX as a BTO option in the new MBPs, but I know that it won't happen. :(

Don
 
Do you think Apple is going to use an MXM card that lets users replace the video card? I might see it happening with a case redesign, if it's more user-friendly.
 
And the 9700M GT is the only one that we could ever see, but a 9650 GS is far more likely.

How is the 9700M GT even possible at 45 W... that's more than twice the 8600M GT's 20 W. Even the 9650 GS is 45% higher: 29 W vs. 20 W.

Seems unlikely, dunnit?
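
Quick back-of-the-envelope in Python, using the wattages quoted above (these are the figures floating around this thread, not confirmed Nvidia specs):

[code]
# TDP ratios relative to the 8600M GT, using the numbers cited above.
tdp_watts = {"8600M GT": 20, "9650 GS": 29, "9700M GT": 45}

baseline = tdp_watts["8600M GT"]
for gpu, watts in tdp_watts.items():
    print(f"{gpu}: {watts} W ({watts / baseline:.2f}x the 8600M GT)")
# 9650 GS comes out 1.45x (the "45% higher"), 9700M GT 2.25x.
[/code]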
 
Que sera sera... so, what, a 9600 GT, then? Sounds about right.

S'what I was thinking. It's not really that bad... shaders are clocked considerably higher, and the mem gets a little boost. The only difference as far as I can tell is the core speed... less important these days (it would seem), and obviously overclockable. Cores usually hit reasonable overclocks... at least in my experience.
 
Time for a TDP hunt.
<Darth Vader voice>Don't underestimate the power (consumption) of these GPUs.</Darth Vader voice>

How is the 9700M GT even possible at 45 W... that's more than twice the 8600M GT's 20 W. Even the 9650 GS is 45% higher: 29 W vs. 20 W.

Seems unlikely, dunnit?
It'll only happen if Apple uses a 25 W CPU instead of a 35 W CPU, and a better cooling system.
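
Rough budget math (GPU wattages are the ones quoted in this thread, not official specs):

[code]
# Thermal budget: swapping to a 25 W CPU saves 10 W, but a 45 W
# 9700M GT adds 25 W over the 8600M GT, so the cooling system
# still has to absorb the difference.
current  = 35 + 20  # 35 W CPU + 20 W 8600M GT = 55 W
proposed = 25 + 45  # 25 W CPU + 45 W 9700M GT = 70 W

print(f"extra heat to dissipate: {proposed - current} W")  # +15 W
[/code]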

The only difference as far as I can tell is the core speed... less important these days (it would seem), and obviously overclockable. Cores usually hit reasonable overclocks... at least in my experience.
Apple might underclock the GPU. :(
 
Yeah, but I meant for the next update. Montevina or what have you.

Or is it just Nehalem that will bring 25 W chips? I'll bet you it's that...
Montevina brings 25 W for 2.27/2.4/2.53 GHz (3 MB L2). Montevina also has 35 W for 2.53/2.8 GHz (6 MB L2).

Nehalem brings 35/45 W at the same price zones as Montevina. Those TDPs are 10 W higher than those of equivalent Montevina CPUs because of the integrated GPU and chipset changes. So 25 W Montevina = 35 W Nehalem etc.
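
As a sketch of that equivalence (the +10 W offset is the rule of thumb described above, not an official Intel figure):

[code]
# Mobile Nehalem folds GPU/chipset work onto the CPU package, so its
# TDP reads roughly 10 W higher than the comparable Montevina part.
for montevina_tdp in (25, 35):
    print(f"{montevina_tdp} W Montevina ~= {montevina_tdp + 10} W Nehalem")
[/code]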
 
It'll only happen if Apple uses a 25 W CPU instead of a 35 W CPU, and a better cooling system.

Unlikely, if they want to have any speed ramp at all... Apple never released a laptop line during the G3 and G4 era (at least since the PowerBook G3s were introduced) that didn't include a processor bump: 250 --> 233/250/292 --> 233/266/300 --> 333/400 --> 400/500 --> 550/667 --> 667/800 --> 800/867 --> 867/1 GHz --> 1/1.25 --> 1.25/1.33 --> 1.5/1.67 GHz

Of course, with Intel's snail's-pace clock increases, they just might. Honestly, what irritates the hell out of me is their insistence on using so much cache. All that does is increase cost, die size, and heat, particularly since you need to volt them higher. Why not clock them 20-30% higher and far outstrip the meaningless performance increase that comes with cache sizes over 4 MB? It's ridiculous. I don't understand the strategy.

You'd get cooler, cheaper, faster chips. Am I missing something..?

Apple might underclock the GPU. :(

Apple always underclocks their GPUs below PC standards, especially on the 15" models... but they even do it on desktop GPUs. They'd do better to find a stable, lower voltage to cut down on heat and power, since voltage is what drives both up most significantly. I dunno, maybe that's just not feasible.
 
Unlikely, if they want to have any speed ramp at all... Apple never released a laptop line during the G3 and G4 era (at least since the PowerBook G3s were introduced) that didn't include a processor bump: 250 --> 233/250/292 --> 233/266/300 --> 333/400 --> 400/500 --> 550/667 --> 667/800 --> 800/867 --> 867/1 GHz --> 1/1.25 --> 1.25/1.33 --> 1.5/1.67 GHz
They kept the same speed on the last generation of the PowerBook G4s. And this would still be a very small speed bump (like the November 2007 MacBooks).

Of course, with Intel's snail's-pace clock increases, they just might.
I read in an article that 30% of speed improvements are due to increased clock speed, and 70% are due to architectural improvements.

Honestly, what irritates the hell out of me is their insistence on using so much cache. All that does is increase cost, die size, and heat, particularly since you need to volt them higher. Why not clock them 20-30% higher and far outstrip the meaningless performance increase that comes with cache sizes over 4 MB? It's ridiculous. I don't understand the strategy.

You'd get cooler, cheaper, faster chips. Am I missing something..?
Intel's lack of an IMC (integrated memory controller) is one reason; AMD chips have an IMC, which is why they get away with less cache. But even they have quite a bit of cache.

Logic has a higher defect rate than cache. And I doubt you'd get a 20-30% performance boost (for the same heat) with less cache; otherwise Intel would have done it.
 
I read in an article that 30% of speed improvements are due to increased clock speed, and 70% are due to architectural improvements.

Yes, architecture is what matters. If you go back a few years to when the Pentium 4 was competing with the Athlon 64, you see that a 2 GHz Athlon is roughly as fast as a 3.8 GHz P4. Intel tried the brute-force way of pumping clocks, but saw the light and went the smart way too.
The gap can be even bigger if you compare floating-point calculations between CPUs and GPUs, where GPUs are many times faster at a quarter of the clock.
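
The arithmetic behind that comparison, as a sketch (performance ~ IPC x clock):

[code]
# If a 2.0 GHz Athlon 64 keeps pace with a 3.8 GHz Pentium 4, its
# per-clock throughput (IPC) must be roughly 1.9x the P4's.
p4_ghz, athlon_ghz = 3.8, 2.0

ipc_ratio = p4_ghz / athlon_ghz
print(f"Athlon 64 needs ~{ipc_ratio:.1f}x the IPC of the P4 to match it")
[/code]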
 
They kept the same speed on the last generation of the PowerBook G4s. And this would still be a very small speed bump (like the November 2007 MacBooks).

I know... I'd actually written "except that" but apparently removed it. But that was a chip switch, and there was still a significant performance increase in certain applications... particularly AltiVec-aware ones, of course, and those that took advantage of the greater floating-point power of the FPU the G4 took from the 604e, which saved a cycle on certain operations.

I read in an article that 30% of speed improvements are due to increased clock speed, and 70% are due to architectural improvements.

I don't know how they measured that, but I'd have to guess it's a very subjective test... there are long stretches where all that's happening is clock increases (or minor tweaks; think especially MMX), and there are things like AltiVec which are only used sometimes, but give great boosts when they are. That would be kind of hard to define and measure.

Intel's lack of an IMC (integrated memory controller) is one reason; AMD chips have an IMC, which is why they get away with less cache. But even they have quite a bit of cache.

Right... although even AMD, with their IMC, could definitely benefit from a more efficient L2 cache. Actually, apparently they could do with a more efficient IMC too, as it seems Intel has beaten them at their own game with previews of Nehalem's IMC.

Logic has a higher defect rate than cache. And I doubt you'd get a 20-30% performance boost (for the same heat) with less cache; otherwise Intel would have done it.

I dunno, that was just a guess... but it would cut down greatly on transistor count, and therefore heat and power... and in a mobile solution that means being able to clock up CPUs that have the headroom but would otherwise run too hot. Dropping the voltage lets you ramp the clock, assuming you're using less power per clock, and still come out even or under in terms of heat, right?
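
The voltage/clock trade works because dynamic power scales roughly as C * V^2 * f; here's a sketch with made-up illustrative numbers (not real chip specs):

[code]
# A ~8% voltage drop roughly offsets a ~21% clock bump, since power
# depends on voltage squared but only linearly on frequency.
def dynamic_power(cap, volts, freq_ghz):
    return cap * volts**2 * freq_ghz

baseline = dynamic_power(1.0, 1.20, 2.4)
tweaked  = dynamic_power(1.0, 1.10, 2.9)  # lower voltage, higher clock

print(f"baseline: {baseline:.2f}  tweaked: {tweaked:.2f}")  # nearly equal
[/code]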
 
Yes, architecture is what matters. If you go back a few years to when the Pentium 4 was competing with the Athlon 64, you see that a 2 GHz Athlon is roughly as fast as a 3.8 GHz P4. Intel tried the brute-force way of pumping clocks, but saw the light and went the smart way too.
And Intel had a process lead on AMD half the time.

I don't know how they measured that, but I'd have to guess it's a very subjective test... there are long stretches where all that's happening is clock increases (or minor tweaks; think especially MMX), and there are things like AltiVec which are only used sometimes, but give great boosts when they are. That would be kind of hard to define and measure.
Probably an estimate or an average based on testing.

I dunno, that was just a guess... but it would cut down greatly on transistor count, and therefore heat and power... and in a mobile solution that means being able to clock up CPUs that have the headroom but would otherwise run too hot. Dropping the voltage lets you ramp the clock, assuming you're using less power per clock, and still come out even or under in terms of heat, right?
Cache uses less power than cores, at least according to the attachment (Pentium M Banias chip). And I think cache has fewer defects than cores despite the high transistor count; even where defects do occur, bits of the cache can be disabled without a big performance loss, unlike cores (at least until core counts get into the double digits).

And, at least for server chips, cache is actually needed. When Intel was designing the Xeon MP Dunnington processor, they evaluated a 4-core chip with tons of cache and an 8-core chip with minimal cache. Intel settled on a 6-core chip with a moderate amount of cache because it had the best balance between core count and cache size.
 
EDIT: These cards are all enabled for CUDA.


Just FYI, Apple doesn't plan to use CUDA, but OpenCL. It would be very stupid of Apple to use Nvidia-owned CUDA for core functions of their OS.
And AFAIK OpenCL should be backwards compatible, so it should work even on current-gen hardware.
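
For a taste of why OpenCL is vendor-neutral: the kernel source is compiled at runtime for whatever device is present. A minimal sketch using the third-party PyOpenCL bindings (assuming an OpenCL driver is installed; the kernel name here is illustrative):

[code]
import numpy as np
import pyopencl as cl

a = np.arange(8, dtype=np.float32)

ctx = cl.create_some_context()  # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is plain C source, built at runtime for the chosen device,
# which is what makes it portable across GPU vendors.
prg = cl.Program(ctx, """
__kernel void double_it(__global const float *a, __global float *out) {
    int i = get_global_id(0);
    out[i] = 2.0f * a[i];
}
""").build()

prg.double_it(queue, a.shape, None, a_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(result)  # [0, 2, 4, ...]
[/code]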
 