On my late 2008 unibody MBP, when it runs the test on the 9400 GPU the mouse becomes pretty unresponsive. I assumed this was because the 9400 can't handle updating the screen at the same time as handling other computer tasks.

However, when I change Energy Saver to "Higher Performance" to bring in the 9600 GPU, I assume that the graphics are now being handled by the 9600. Yet when the test runs on the 9400 GPU, the mouse still goes unresponsive and other screen updates pause / become less frequent. When it moves on to run the test on the 9600 GPU, the screen updates fine.

Does that imply that my slower 9400 GPU is handling screen updating all of the time regardless of energy saver settings? It would explain why I see no difference between the two settings other than how hot the machine runs and battery life. Has anyone else with a unibody MBP noticed this?
 
On my late 2008 unibody MBP, when it runs the test on the 9400 GPU the mouse becomes pretty unresponsive. I assumed this was because the 9400 can't handle updating the screen at the same time as handling other computer tasks.

However, when I change Energy Saver to "Higher Performance" to bring in the 9600 GPU, I assume that the graphics are now being handled by the 9600.

You would assume wrong. Changing which GPU handles graphics requires you to log out and back in again.
 
Could these performance increases be leveraged by virtualisation apps? I suspect not; I just thought it'd be funny to have a big performance increase in Windows (etc.) that isn't present when it runs natively.
 
Real world improvements?

OK, I'm not a technical person :eek:, but technologies like OpenCL and Grand Central Dispatch (GCD) interest me. Actually, anything that promises to speed up my system interests me.

Now for a real world question...

I have a lot of bookmarks in Firefox. When I click on bookmarks on the menu, it takes about a second to drop down. And yes, I'll probably be switching to Safari. But if I were to continue to use Firefox, would this sort of thing speed up? Will the overall system interface become more fluid? Will the pretty icons in the dock change size faster and more smoothly?

I know it may sound silly, but one of the reasons I was so impressed by Macs was how smoothly the interface operates. That's what I'm going to be using almost every second I'm on my computer, so that's what I'm most interested in.

Thanks.
 
When will Apple release an update for Logic that allows it to offload some of the effects and synth plugins to the GPU?

No need for dedicated DSP solutions anymore (except as a hardware dongle): Universal Audio on a MacBook Pro without the ExpressCard slot.
 
I too have an HD 2600 iMac, bought 09/2007, and I don't believe my iMac to be obsolete just because ONE single feature of 10.6 is not supported. My machine is still plenty fast for my tasks (mostly graphic design). I can still install Snow Leopard, which, even without OpenCL, should make the system a bit faster.

We don't even know OpenCL's full potential yet because there are no useful apps out there that use it. I'm not mad because I can't run some benchmarking tool. :rolleyes:

The fact that some systems may get speed gains in some apps in the future won't make my system run slower. And I'll happily continue to use it for the next 2-3 years for Adobe CS4 (CS5/6 sometime), Final Cut Express, Cinema 4D, ...
 
Does that imply that my slower 9400 GPU is handling screen updating all of the time regardless of energy saver settings? It would explain why I see no difference between the two settings other than how hot the machine runs and battery life. Has anyone else with a unibody MBP noticed this?
I'm not sure why the benchmark causes these effects, and I did notice the same type of lag on my 2008 uMBP; however, I am sure that there is a noticeable difference in performance when running a game such as X-Plane on the 9400 vs. the 9600GT. It renders much more detail, and at a higher framerate, on the 9600.
 
Oops! Seems I forgot how much you guys hate the GMA950, and Intel graphics in general! Must be because I don't have one. :p
 
FWIW, my results. Early 2008 MBP.


Number of OpenCL devices found: 2
OpenCL Device # 0 = GeForce 8600M GT
Device 0 is an: GPU with max. 940 MHz and 32 units/cores
Now computing - please be patient....
time used: 3.084 seconds

OpenCL Device # 1 = Intel(R) Core(TM)2 Duo CPU T8300 @ 2.40GHz
Device 1 is an: CPU with max. 2400 MHz and 2 units/cores
Now computing - please be patient....
time used: 16.009 seconds

Now checking if results are valid - please be patient....
:) Validate test passed - GPU results=CPU results :)
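
(Aside: the name / MHz / "units/cores" lines above map onto standard OpenCL device queries. Here's a minimal C sketch of how such a listing could be produced; it's my own illustration, not the benchmark's actual source, and error handling is omitted.)

/* list_devices.c - enumerate OpenCL devices and print name, clock, cores.
   Build on Snow Leopard with: cc list_devices.c -framework OpenCL */
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_device_id devices[8];
    cl_uint count = 0;
    cl_uint i;

    /* Ask for every device type: GPUs and CPUs alike. */
    clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, 8, devices, &count);
    printf("Number of OpenCL devices found: %u\n", (unsigned)count);

    for (i = 0; i < count; i++) {
        char name[256];
        cl_uint mhz = 0, units = 0;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_CLOCK_FREQUENCY,
                        sizeof(mhz), &mhz, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(units), &units, NULL);
        printf("Device %u = %s, max. %u MHz, %u units/cores\n",
               (unsigned)i, name, (unsigned)mhz, (unsigned)units);
    }
    return 0;
}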
 
Number of OpenCL devices found: 2
OpenCL Device # 0 = GeForce 8800 GS
Device 0 is an: GPU with max. 1250 MHz and 64 units/cores
Now computing - please be patient....
time used: 0.942 seconds

OpenCL Device # 1 = Intel(R) Core(TM)2 Duo CPU E8235 @ 2.80GHz
Device 1 is an: CPU with max. 2800 MHz and 2 units/cores
Now computing - please be patient....
time used: 13.031 seconds

Now checking if results are valid - please be patient....
:) Validate test passed - GPU results=CPU results :)

- My 2008 iMac. I'm surprised how well this machine is measuring up against much more expensive machines!
 
Oops! Seems I forgot how much you guys hate the GMA950, and Intel graphics in general! Must be because I don't have one. :p
No kidding, the GPU test quits after being unable to get the GPU string ...
And the CPU test fails with wrong processor type...

(Core Duo, GMA 950 Mac Mini) :p
 
Wow, this is incredible.
This means small, inexpensive laptops with SL installed can suddenly beat big, fat Mac Pros without SL.

It will be interesting to see what comes about in terms of speed increases over the next 12 months. It sounds like Apple has laid a lot of groundwork - if the developers manage to harness that power, we could be getting some pretty amazing stuff.
 
You would assume wrong. Changing which GPU handles graphics requires you to log out and back in again.

I realise that I need to log out and back in again to switch graphics adaptors. The point still stands, though: screen updates are a bit screwy when the 9400 GPU is working on computation tasks, regardless of which GPU is supposed to be handling screen updates. It's a relief to see that someone else sees the same thing.
 
I'm a little bummed that my ATI X1600 isn't supportable. I would really like to play with this feature some. I've been working with GCD this weekend, and it is really awesome.

I know I can do OpenCL programming with just my CPU, but it isn't the same.

Definitely looking harder at a MacBook Pro for my next computer, if OpenCL can use both GPUs at once. While it won't make a difference to 97% of the programs out there, the other 3% interests me A LOT!

:D
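
(For the record, targeting just the CPU is a one-line difference at device-selection time. A rough sketch, assuming the standard OpenCL C API, with error checks omitted:)

/* Request the CPU device instead of a GPU; the same kernels run unchanged. */
cl_device_id cpu;
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);
cl_context ctx = clCreateContext(NULL, 1, &cpu, NULL, NULL, NULL);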
 
Wow, this is incredible.
This means small, inexpensive laptops with SL installed can suddenly beat big, fat Mac Pros without SL.

... and then the big, fat Mac Pro gets fully loaded with NVIDIA graphics cards and updated to SL, and it will run circles around even expensive laptops!
 
And the answer is... compute shaders.

http://forums.amd.com/forum/messageview.cfm?FTVAR_FORUMVIEWTMP=Linear&catid=328&threadid=116102

>Pixel Shader code (if not using some special stuff like double precision) runs on all cards. Compute shaders only on the HD4000 series.


Apple could have implemented OpenCL for older Radeons via pixel shaders and GLSL, but they didn't want to.
I knew that the original Brook compiled to pixel shaders, allowing GPGPU work on DX9-level GPUs, but I didn't know Brook+ still does it with DX10 GPUs. In any case, ATI being able to compile their own Brook+ language into pixel shaders doesn't necessarily mean that OpenCL could work, especially if the problem is an incompatible memory structure in the HD2000 and HD3000 series. Plus, if the HD4000 series doesn't have great performance in OpenCL when it can talk natively to the GPU, I doubt the performance of OpenCL emulated in pixel shaders on older-generation GPUs would be stellar, so it probably wouldn't be worthwhile even if it were technically possible.
 
Number of OpenCL devices found: 1
OpenCL Device # 0 = Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
Device 0 is an: CPU with max. 2200 MHz and 2 units/cores
Now computing - please be patient....
time used: 16.994 seconds

Now checking if results are valid - please be patient....
:) Validate test passed - GPU results=CPU results :)

Not sure what it means for me.
 
Agreed. This is pretty much a worthless benchmark. There's nothing complex taking place, no hard memory thrashing, no difficult calculations.

It's just taking 2 arrays with 5000 numbers in them and adding them together into a new array of 5000 numbers: simple add operations on 2 numbers, over and over and over. The more cores you have, the more ways you can split the array up (4 cores = each core processes 1250 items; 32 cores = each core processes 156.25 items), and a faster clock means that each item gets processed faster.
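
If that description is right, the kernel is about as simple as OpenCL C gets; roughly something like this (my own guess at its shape, not the benchmark's actual code):

/* One work-item per array element; each just adds two floats.
   Nothing here stresses memory bandwidth or the ALUs. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] + b[i];
}

The host would enqueue this with a global work size of 5000 and let the runtime carve the range up across however many cores the device has.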

This is NOT AT ALL indicative of a real OpenCL app that will be doing hundreds of thousands of difficult computations, with dependencies between the data, working across huge datasets.

I'd say ignore every result we get out of this app. It isn't in the least indicative of real-world performance.

I was waiting for someone to say this. This 'test' is rather worthless. It does no 'real' testing. It 'stresses' nothing. It 'tests' nothing but floating point performance. That is perhaps an 'indication' of future performance but not direct and meaningful 'proof' that one board would be faster than another.

It's a simplistic 'test' that misses the whole performance potential of the card and the ancillary services that make up the card.

I.e.: you could have a card with a screaming GPU and a very slow memory bus that couldn't be tested by this program, resulting in impressive test numbers and yet crappy real-world performance.

A truer test, if possible, would be for the program to have the card display a complex graphic, using many colours and textures/polygons, etc and then time the production of the result. But then you still can't get away from the principle that just by measuring the production of the test, you are also influencing the result to some degree.

A test only tests what it was designed to test. Nothing more. These numbers are meaningful on their own but really don't prove anything except for potential. Nothing tests better than the real world...
 
I was waiting for someone to say this. This 'test' is rather worthless. It does no 'real' testing. It 'stresses' nothing. It 'tests' nothing but floating point performance. That is perhaps an 'indication' of future performance but not direct and meaningful 'proof' that one board would be faster than another.

It's a simplistic 'test' that misses the whole performance potential of the card and the ancillary services that make up the card.

I.e.: you could have a card with a screaming GPU and a very slow memory bus that couldn't be tested by this program, resulting in impressive test numbers and yet crappy real-world performance.

A truer test, if possible, would be for the program to have the card display a complex graphic, using many colours and textures/polygons, etc and then time the production of the result. But then you still can't get away from the principle that just by measuring the production of the test, you are also influencing the result to some degree.

A test only tests what it was designed to test. Nothing more. These numbers are meaningful on their own but really don't prove anything except for potential. Nothing tests better than the real world...
I'm assuming a real world application would also not use the GPU in isolation so it isn't really a GPU vs CPU competition. The GPU and CPU would probably have to work together, passing data back and forth. As you mentioned, it is also important to see how well the OpenGL and OpenCL pipelines work together for visualization applications. I think it'd also be interesting to see how smart the OpenCL scheduler is if there are multiple OpenCL applications and say Core Image/Animation applications requesting GPU time.
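
One concrete form of that cooperation is already in the API: OpenCL can write straight into an OpenGL buffer object, so visualization data never round-trips through the CPU. A hedged sketch in C, assuming the context was created with GL sharing enabled and that ctx, queue, kernel, and the GL buffer name vbo are already set up (all of those names are my own placeholders):

/* Share a GL vertex buffer with OpenCL: the kernel writes vertex data
   that the GPU then draws directly. Error checks omitted for brevity. */
cl_int err;
size_t global_size = 5000;
cl_mem shared = clCreateFromGLBuffer(ctx, CL_MEM_WRITE_ONLY, vbo, &err);

clEnqueueAcquireGLObjects(queue, 1, &shared, 0, NULL, NULL); /* CL takes ownership */
clSetKernelArg(kernel, 0, sizeof(shared), &shared);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &shared, 0, NULL, NULL);  /* hand back to GL */
clFinish(queue);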
 
Did you not even bother to read the article? It stated clearly that only 1 GPU is active at a time. With the unibody MBPs only 1 GPU can be accessed at a time, and you have to log out to switch between them. I don't get why people think that wouldn't be the case with OpenCL. The OS only sees 1 of the GPUs at a time.

Ummm, maybe you need to read the article again. It says:

"Most interesting is that for owners of high end MacBook Pros which contain both 9400M and 9600M GT graphics cards, both GPUs can be used at any time by OpenCL. In contrast, both of these GPUs can not be used for general graphics processing and requires a Mac OS X logout to switch from one to another.
 
We are not switching; we use BOTH graphics chips. :rolleyes:

I guess I don't see what you are implying here. The OpenCL test can use both graphics chips when the machine is set to 'Higher Performance.'

However, if you are set to 'Better Battery Life', then only the 9400 is available, and neither graphics nor OpenCL can use the 9600, thus using only one GPU, not both.

In order to switch between 'Higher Performance' and 'Better Battery Life' you must log out and back in.

Personally, I always run my machine in 'Better Battery Life' since I don't need the 9600's performance for graphics. And I think the 9600 is unavailable in this mode, even for OpenCL, for the obvious reason that it draws more power. However, I'd love to see an option to let OpenCL see/use the 9600 in 'Better Battery Life' when the machine is plugged in, so that the rare tasks that use OpenCL could get that extra boost.
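
That lines up with how it looks at the API level: in 'Higher Performance' a GPU device query should simply return two entries, and one context can span both. A rough sketch with standard OpenCL calls (the device counts are my assumption for the unibody MBP, not something I've verified):

/* Ask for every GPU and build one context over all of them.
   Presumably: 2 devices (9400M + 9600M GT) in 'Higher Performance',
   1 device in 'Better Battery Life'. */
cl_device_id gpus[4];
cl_uint count = 0;
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 4, gpus, &count);
cl_context ctx = clCreateContext(NULL, count, gpus, NULL, NULL, NULL);
/* One command queue per device would let work be split across both GPUs. */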
 
You know the event on the 9th is just an iPod event.

Oh yeah. I forgot. They introduced the iPhone 3GS with a bunch of other stuff. Call the Keystone Cops (the FTC)...

What could be more complementary to a screaming-fast iPod that will make pancakes and French toast than a ripping-fast iMac with either a Core i7 or a quad-core processor! (Or a Xeon?)
 
I'm assuming a real world application would also not use the GPU in isolation so it isn't really a GPU vs CPU competition. The GPU and CPU would probably have to work together, passing data back and forth. As you mentioned, it is also important to see how well the OpenGL and OpenCL pipelines work together for visualization applications. I think it'd also be interesting to see how smart the OpenCL scheduler is if there are multiple OpenCL applications and say Core Image/Animation applications requesting GPU time.

Like a runner on a treadmill. Yeah, they are fast for a 5 minute 'run'. Put them on an outdoor 5k? In Denver? Well, that's different, ain't it...
 