I said *disk* intensive tasks. Read posts carefully and quote people accurately. One such example is booting. If you don't understand that, there's not much we can do for you.

Not all SATA drives are created equal, even within their respective revisions (2.0, 3.0), but it is safe to say that a fast SATA 2.0 drive is generally trounced by a fast SATA 3.0 drive. The following article shows that the OWC SATA 3.0 drive doesn't just beat the MBA's fastest SATA 2.0 SSD (the Samsung), it THRASHES it. To save you the read, it was found to be 243% faster on average.

http://www.storagereview.com/owc_mercury_aura_pro_express_6g_review
That doesn't matter, since it was obvious you were talking about disk intensive operations. Booting up is not really disk intensive in the big scheme of things. That's why it's a poor yardstick for comparing SSD speeds: you're likely to see only about a 1-2 second difference between different SSDs.

I could put you in front of computers running different SSDs and you wouldn't be able to tell the difference in booting up or most common tasks. I've tried to show you the real-world benchmarks, and numbers like 243% are meaningless there.

You're so fixated on synthetic benchmarks, but you don't understand what they are saying or what they actually mean in the real world. It's also pretty clear that you did not read my previous post and that you're as stubborn as a mule, hence any further conversation is futile.
 
From what I gather, it makes a lot of sense to write complex parallel tasks in OpenCL. Then, depending on the user's system, the program will decide at run time which processor(s) to use for a given task.

The user with the Air won't notice anything different, while a user on an MBP, iMac or Mac Pro will get the added benefit of the GPU from the same program code.
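
That run-time decision can be made in the OpenCL host code itself. A minimal, hypothetical sketch (the device-selection logic is standard OpenCL API usage; everything else is illustrative): ask for a GPU device first and fall back to the CPU if none is available, so the same kernels run either way.

[CODE]
/* Hypothetical sketch: pick a GPU device at run time if one is
 * available, otherwise fall back to the CPU. The same kernel source
 * would then run on whichever device was found. */
#include <stdio.h>
#include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU; if none is present (e.g. an IGP without OpenCL
     * support), ask for a CPU device instead. */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL device found\n");
        return 1;
    }

    char name[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Running kernels on: %s\n", name);
    return 0;
}
[/CODE]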

My take as an erstwhile developer would be to factor in how many of my potential users would get an actual advantage from the work effort. Assuming, of course, there are processes in my application naturally suited to spawning parallel tasks. I'd guess that it would be easier to invest that effort once the vast majority of Apple's lines included OpenCL support -- via Ivy Bridge for the IGP crowd. Classic "chicken or the egg" problem.
 
But the primary benefit of writing OpenCL code is to offload specific processing to a processor other than the CPU.

No, not by definition. If you use a 3D app like Cinema 4D (which uses all available GPU resources), and some of your own programs should run only on the CPU, then you can write parts of your programs in OpenCL (portable code). If you only target Mac OS X, then you can use C/C++ with GCD instead.

OpenCL is not only portable, it is also very flexible (as shown above). The language is not limited to GPUs.
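
For the Mac-only GCD path mentioned above, a rough sketch of what that looks like in C (dispatch_apply() and the global queue are standard libdispatch calls; the array and scale factor are made up for illustration):

[CODE]
/* Minimal sketch of the Mac-only path: a data-parallel loop written
 * with Grand Central Dispatch instead of OpenCL. dispatch_apply()
 * spreads the iterations across the CPU cores, but this only builds
 * where libdispatch and blocks are available. */
#include <stdio.h>
#include <dispatch/dispatch.h>

int main(void)
{
    enum { N = 1024 };
    static float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    /* Run N iterations concurrently on a global queue; each block
     * invocation scales one element, much like an OpenCL work-item. */
    dispatch_apply(N,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) {
                       data[i] *= 2.0f;
                   });

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */
    return 0;
}
[/CODE]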

----------

Why would anyone care about OpenCL at the CPU level? It really offers no benefit.

It offers a benefit, because:
a) it is portable
b) GCD (for CPUs) is not part of other OSes. Yeah, I know you can use your own libdispatch.
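
To illustrate point a), here is a hypothetical minimal host program that runs a kernel on a CL_DEVICE_TYPE_CPU device; the kernel source, buffer size and names are invented for the example, and error checking is omitted for brevity. Nothing in the kernel is GPU-specific, so the same source runs unchanged on a GPU device on any platform with an OpenCL implementation.

[CODE]
/* Hypothetical sketch: an OpenCL kernel executed on the CPU.
 * Error checking trimmed for brevity. */
#include <stdio.h>
#include <OpenCL/opencl.h>   /* <CL/cl.h> elsewhere */

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= k;"
    "}";

int main(void)
{
    enum { N = 1024 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */
    return 0;
}
[/CODE]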
 