
Starfyre

What does OpenCL have over CUDA if anything? They both sound like they are using the graphics card for "computational" tasks.
 
OpenCL is like it sounds, an open source programming language so you can build your programs to make use of internal GPUs. CUDA is the same except it was developed by Nvidia for Nvidia GPUs. I wouldn't worry about it. Just two different languages doing a similar task, except one is open source and one is Nvidia specific.
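
Just to make that concrete, here's a rough sketch of what a trivial OpenCL kernel looks like (a made-up vector add, the name and arguments are purely for illustration):

    // OpenCL C: adds two arrays element by element
    __kernel void vec_add(__global const float* a,
                          __global const float* b,
                          __global float* out)
    {
        int i = get_global_id(0);   // index of this work-item
        out[i] = a[i] + b[i];
    }

The host program compiles that source at run time and hands it to whatever OpenCL device is available.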
 
Hard to say, probably CUDA as I believe it has been around longer, since they introduced the unified shader architecture in the 8000 series. But I imagine OpenCL will quickly gain traction as it has more backers.
 
OpenCL isn't open source, it's an open specification agreed upon by "key players" in the industry.

Because the specification is agreed upon by committee and designed to be used across hardware platforms, it is likely less tailored to any specific vendor's hardware and almost certainly less efficient at some tasks than it could be.

CUDA is NVIDIA's language, and maps more directly onto the NVIDIA hardware and pipeline.

More popular? I dunno. NVIDIA has done quite a bit of pushing their solutions into render farms and the scientific community compared to ATI and Intel, etc. It may be that CUDA is more widely used in the wild in compute farms where efficiency of computation has a direct correlation to cost. OpenCL, however, is supported by "everybody", and if you're writing software for the masses, it doesn't make too much sense to target CUDA and limit your audience.
 
Schnort summed that up better. I didn't know quite how to put it: "open specification", in other words, means it's malleable and not tightly controlled by one player like CUDA is with Nvidia.
 
The thing about OpenCL is, it sounds like it is only for integrated chips, right? Which, by default, makes it slightly more "unpopular" than CUDA, as CUDA runs on dedicated graphics?

Or is Nvidia planning to adopt OpenCL into their cards, which would effectively negate any need for CUDA for the reasons stated... it being everyone's combined standard as opposed to only Nvidia's or any single company's? I can't see why OpenCL would be popular at all if all it runs effectively on now is Intel's integrated GPUs.
 
The thing about OpenCL is, it sounds like it is only for integrated chips, right?

That's not the case. Support for integrated graphics is new in Mavericks but discrete chips still work (and probably offer better performance).
 
The thing about OpenCL is, it sounds like it is only for integrated chips, right? Which, by default, makes it slightly more "unpopular" than CUDA, as CUDA runs on dedicated graphics?

Or is Nvidia planning to adopt OpenCL into their cards, which would effectively negate any need for CUDA for the reasons stated... it being everyone's combined standard as opposed to only Nvidia's or any single company's? I can't see why OpenCL would be popular at all if all it runs effectively on now is Intel's integrated GPUs.

Nvidia's chips do run OpenCL. The reason Iris Pro is being called a better choice for OpenCL is not specifically because Nvidia doesn't support it; it's because the consumer-level Kepler chips used in Nvidia's GPUs this year and last simply have fairly poor compute performance in general. Nvidia's interest in continuing to promote CUDA over OpenCL is based on selling more GPUs: if you need CUDA, you have to buy an Nvidia GPU. The more software that uses CUDA, the more people demand Nvidia GPUs in their computers.
 
Because the specification is agreed upon by committee and designed to be used across hardware platforms, it is likely less tailored to any specific vendor's hardware and almost certainly less efficient at some tasks than it could be.

CUDA is NVIDIA's language, and maps more directly onto the NVIDIA hardware and pipeline.

Well, GPUs are now generic enough that such things don't really matter. Both CUDA and OpenCL are 'high-level' and very similar programming languages (besides a few dialectal differences), and a smart compiler should be able to generate optimal code for the hardware at hand most of the time. A few years ago, CUDA used to be faster than OpenCL on many kernels, even if the code was 99.9% identical (just changing CUDA idioms to OpenCL ones) - so the culprit was the compiler. Nowadays, as compilers have matured, there shouldn't be much difference. CUDA might still have an edge in provided libraries and tools, though.
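
To show how close the dialects really are, here's the CUDA version of the same made-up vector-add kernel posted earlier in OpenCL C; the only real differences are the qualifiers and how you get the thread index:

    // CUDA C: __kernel becomes __global__, the __global pointer qualifier goes away,
    // and get_global_id(0) becomes blockIdx.x * blockDim.x + threadIdx.x
    __global__ void vec_add(const float* a, const float* b, float* out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        out[i] = a[i] + b[i];
    }

Everything else (the memory model, the idea of launching a grid of threads over the data) maps almost one-to-one.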
 
OpenCL is open to being implemented on any CPU or GPU that exists or may exist in the future, e.g. you can run OpenCL on an i7 CPU, an Iris GPU and a discrete GPU at the same time.
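
As a rough host-side sketch of what that looks like (plain C against the standard OpenCL API, error checking omitted, array sizes arbitrary), this lists every device the runtime exposes - CPU, integrated GPU and discrete GPU all show up the same way:

    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>   /* link with -framework OpenCL on OS X */
    #else
    #include <CL/cl.h>           /* link with -lOpenCL elsewhere */
    #endif

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; p++) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            /* CL_DEVICE_TYPE_ALL returns CPU, integrated GPU and discrete GPU devices */
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
            if (num_devices > 16) num_devices = 16;

            for (cl_uint d = 0; d < num_devices; d++) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("Device: %s\n", name);
            }
        }
        return 0;
    }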

CUDA is Nvidia proprietary and limited to Nvidia GPUs only.

CUDA has a head start in terms of development, but you can be sure it will be phased out in coming years as OpenCL matures. You can bet Apple will continue to push OpenCL, as being dependent on CUDA ties them to Nvidia. And Apple doesn't like being held to ransom by anyone.
 
The thing about OpenCL is, it sounds like it is only for integrated chips, right? Which, by default, makes it slightly more "unpopular" than CUDA, as CUDA runs on dedicated graphics?

Nope, all modern GPUs (Nvidia, AMD and Intel) run OpenCL. Iris Pro simply has more raw theoretical (and practical) performance than current-gen Nvidia cards, and it is also more flexible hardware for many algorithms. Its only problem is the limited memory bandwidth. Coupled with some fast GDDR5 memory, it should have similar performance in games as the 650M. Still, for most GPGPU applications, the memory bandwidth plays a lesser role, because the algorithms spend more time processing data than copying it (it is often the other way around for games).
 
Nope, all modern GPUs (Nvidia, AMD and Intel) run OpenCL. Iris Pro simply has more raw theoretical (and practical) performance than current-gen Nvidia cards
They seem on paper to be better than the last-gen GT 650M and slightly above the current-gen 750M.

But they fall behind the 660M or 760M, much less the higher-level solutions.

Coupled with some fast GDDR5 memory, it should have similar performance in games as the 650M.
I don't think that is a valid configuration. As far as I know, Haswell can't connect to GDDR5 memory; just DDR3.

The GT3 (and GT3e) solutions are very compelling for what they are, but they're not the highest-performing solutions out there, particularly for games.
 
I don't think that is a valid configuration. As far as I know, Haswell can't connect to GDDR5 memory; just DDR3.

I was speaking in theoretical terms (as in, if Intel were to build the Iris Pro as a dGPU) :) AFAIK, AMD's next CPU/GPU platform will be a solution like that, with the possibility of some extra on-board GDDR5 RAM for the iGPU.
 