
bilboa

macrumors regular
Original poster
Jan 16, 2008
As many of you may know, Apple has released a new concurrency API with Snow Leopard, called Grand Central Dispatch (GCD). You may also know that, contrary to some of the hype about GCD being a totally new facility for programmers, a similar concurrency abstraction was already available to Cocoa programmers before Snow Leopard: Operation Queues (OQ), built around the NSOperation and NSOperationQueue classes. What's more, the OQ API has been updated in Snow Leopard to accept blocks as operations.

I've been reading the GCD documentation to see how it differs from OQ, and I'm wondering what advantages, if any, there are to using GCD instead of OQ.

Let me start off by listing some differences I'm already aware of.

  • OQ is a Cocoa API, whereas GCD is a C API, so obviously if the concurrent code you're writing is not Cocoa code, then only the GCD API will be useful to you.
  • GCD has a facility called Dispatch Sources, which submit new tasks whenever certain events occur, such as timers firing, file descriptor activity, or process events. OQ has no equivalent facility.
  • OQ has a more powerful and flexible way of specifying dependencies among tasks than GCD does. With OQ you can give an operation any number of other operations as dependencies, meaning those operations must all complete before it can run. With GCD, your only choice is whether to put tasks on a serial or a concurrent dispatch queue. Say I have tasks A, B, C and D, and A, B and C must complete before D can proceed. With OQs, I can add A, B and C as dependencies of D and then add all four operations to a queue; the queue then knows it's free to run A, B and C concurrently, but that D has to wait until A, B and C are done. It's not as straightforward to express the same thing with GCD (see the sketch just after this list).
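To make that concrete, here's a rough sketch of how the A/B/C-before-D relationship could be expressed with each API. The taskA through taskD block names and the queue variable are just placeholders, and the GCD half uses a dispatch group as the closest equivalent I know of, so treat it as an illustration rather than the one true way:
Code:
// Operation Queues: D lists A, B and C as dependencies, then everything
// goes onto the queue and the queue works out the ordering.
NSBlockOperation *opA = [NSBlockOperation blockOperationWithBlock:taskA];
NSBlockOperation *opB = [NSBlockOperation blockOperationWithBlock:taskB];
NSBlockOperation *opC = [NSBlockOperation blockOperationWithBlock:taskC];
NSBlockOperation *opD = [NSBlockOperation blockOperationWithBlock:taskD];
[opD addDependency:opA];
[opD addDependency:opB];
[opD addDependency:opC];
[queue addOperations:[NSArray arrayWithObjects:opA, opB, opC, opD, nil]
   waitUntilFinished:NO];

// GCD: run A, B and C in a dispatch group, and let dispatch_group_notify
// submit D once the group has drained.
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, q, taskA);
dispatch_group_async(group, q, taskB);
dispatch_group_async(group, q, taskC);
dispatch_group_notify(group, q, taskD);
dispatch_release(group);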

I guess what I'm really wondering is: assuming I'm not concerned about the first two points listed above, is there any reason to prefer GCD over OQ? Is there any performance benefit to GCD over OQ? I gather from some of the OQ documentation that in Snow Leopard OQs may in fact use GCD under the hood anyway, which would mean they get the same advantages as far as global scheduling of concurrent tasks goes. However, I haven't seen that stated explicitly. I read through Apple's new Concurrency Programming Guide, which covers both APIs, but it doesn't really give any guidance about why you'd choose one over the other. Thanks for any ideas.
 

ManiG

macrumors member
Aug 11, 2009
I read the same docs and I think you pretty much nailed it. The key sentence for me was: "An operation queue is the Cocoa equivalent of a concurrent dispatch queue and is implemented by the NSOperationQueue class." (http://developer.apple.com/mac/libr...html#//apple_ref/doc/uid/TP40008091-CH100-SW9), which does indeed imply that NSOperationQueue now uses GCD under the hood.

The problem is, words like "I think" and "imply" aren't good enough. :) Have you tried asking anyone at Apple?
 

bilboa

macrumors regular
Original poster
Jan 16, 2008
Thanks, ManiG. Another thing that implies even more strongly that OQs are using GCD underneath is this passage in the documentation for NSBlockOperation, where it says:
When it comes time to execute an NSBlockOperation object, the object submits all of its blocks to the default-priority, concurrent dispatch queue.
(Emphasis added by me.) The fact that it uses a dispatch queue pretty much confirms, I think, that it is in fact built on top of GCD. So I guess we should think of Operation Queues as a Cocoa layer around Dispatch Queues now.
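Just to illustrate what that quote is describing (the blocks and queue here are made-up examples, nothing more), an NSBlockOperation can hold several blocks, and all of them get handed to that default-priority concurrent dispatch queue when the operation executes:
Code:
NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"first block");
}];
// A second block attached to the same operation; per the quote above,
// both blocks are submitted to the concurrent dispatch queue when the
// operation runs.
[op addExecutionBlock:^{
    NSLog(@"second block");
}];
[queue addOperation:op];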

What prompted me to post this is that I had already been using NSOperationQueue since 10.5 came out. I read the hype about GCD and how it provided a totally new abstraction for developers to easily parallelize their code, but when I read the actual API, it looked mostly like a C version of the already existing NSOperationQueue API, so I felt like I must be missing something. I think the difference may be that GCD is a new kernel-level feature, whereas prior to Snow Leopard, NSOperationQueue was just a library-level feature without any special support from the kernel. So from a programmer's point of view it's not much different, aside from the programming language used. However, since GCD is a kernel-level feature, it can load-balance tasks intelligently across all processes on the system through the use of the global concurrent queues, which is something the old, presumably library-level, NSOperationQueue class couldn't do.
 

ManiG

macrumors member
Aug 11, 2009
IMHO, the big "new" feature to come along with GCD is blocks. The way they let you pass around a captured scope plus a function with such ease and brevity, while maintaining efficiency, is pretty cool and new. I have a feeling the real benefits of blocks with regard to concurrent programming have yet to be explored ...

Anyway, good luck and have fun. :)
 

bilboa

macrumors regular
Original poster
Jan 16, 2008
As a long-time user and fan of functional programming languages, I'm certainly glad to see closures (blocks) added to Apple's version of C. However, I see the blocks feature as orthogonal to GCD. GCD accepts either blocks or plain function pointers for tasks, so removing blocks wouldn't decrease GCD's power; it would just remove some convenience. Plus, NSBlockOperation lets you use blocks with operation queues too, so the use of blocks doesn't distinguish GCD from operation queues.
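For instance (a minimal sketch with placeholder names; somequeue and do_work aren't anything official), the same task can be submitted either as a block with dispatch_async() or as a plain C function pointer plus a context argument with dispatch_async_f():
Code:
// Plain C function, no blocks involved.
static void do_work(void *context) {
    // ... do the work ...
}

// Block form of the submission...
dispatch_async(somequeue, ^{
    // ... do the work inline ...
});

// ...and the function-pointer form of the same submission.
dispatch_async_f(somequeue, NULL, do_work);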

I'm actually more pleased to see that blocks have been integrated into other parts of the Cocoa APIs. For instance NSArray now has enumerateObjectsUsingBlock: and indexOfObjectPassingTest: methods which accept blocks as arguments.
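For example (contrived data, just to show the shape of the calls):
Code:
NSArray *names = [NSArray arrayWithObjects:@"Al", @"Bo", @"Carla", nil];

// Visit every element along with its index.
[names enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    NSLog(@"%lu: %@", (unsigned long)idx, obj);
}];

// Index of the first name longer than three characters ("Carla" here).
NSUInteger match = [names indexOfObjectPassingTest:^BOOL(id obj, NSUInteger idx, BOOL *stop) {
    return (BOOL)([obj length] > 3);
}];
NSLog(@"first long name at index %lu", (unsigned long)match);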
 

savar

macrumors 68000
Jun 6, 2003
District of Columbia
I don't know anything about OQ, but I have read most of the articles about GCD.

My interpretation is that the implementation is the key feature here, not the API. GCD's thread pools are sized dynamically to the underlying hardware and then shared among all running applications. Individual apps can't perform system-wide load balancing, so they may create too many or too few threads to run at maximum efficiency. GCD is designed to handle all of that at the system level and transparently "make it work".

Sorry if I'm preaching to the choir... just didn't see that point of view mentioned yet.
 

Catfish_Man

macrumors 68030
Sep 13, 2001
Portland, OR
In addition to the automatic scaling of the number of threads, I would say a more important aspect is the *very very* low overhead. In many common cases, GCD avoids calling malloc(), taking any locks at all, or making a userspace/kernel-space transition.

Be aware, though, that dispatch_async() will (if necessary) copy the block it's passed onto the heap. So this:
Code:
// The block literal gets copied to the heap on every pass through the loop.
for (NSUInteger i = 0; i < 100; ++i) {
    dispatch_async(somequeue, ^{
        // do something
    });
}
is slower than this:
Code:
// Copy the block to the heap once, up front, and reuse it on every iteration.
dispatch_block_t work = [^{
    // do something
} copy];
for (NSUInteger i = 0; i < 100; ++i) {
    dispatch_async(somequeue, work);
}

Of course, for this particular example, the way to go is:
Code:
// One call; GCD passes each index to the block and runs the iterations for you.
dispatch_apply(100, somequeue, ^(size_t i) {
    // do something with i
});
which is fast, concise, expressive, and avoids the possibility of off-by-ones :)
 