Re: Regarding Multiprocessing and OS X
Originally posted by bentmywookie
Ok, for those arguing about the single processor vs. dual processor issue, all I want to say is that an OS (at its lowest level) needs to do resource allocation/management. It's the interface between the hardware and the software.
True.
So, to characterize the scenario, a piece of software comes along and tells the OS, "hey I need to cook this meal, here's the recipe, now off ya go!" And the processor looks and sees, "well I have two ovens, 3 bowls, 5 fridges, etc." and basically it uses whatever it has to get the job done. Ok, enough with this analogy (for now).
Bad analogy, because the recipe most software follows is "do this, THEN do this, THEN do this", not "do this and this and this in any order". A string of things that must be done in order is called a thread of execution. MOST, though not all, software is written with a single thread of execution for its "main task". Like baking bread: you have to mix the ingredients, then knead the dough, then let it rise (iterate), then bake. If you bake it first, you end up with fried eggs and singed flour. No matter how many cooks you have in the kitchen making one loaf of bread, the process cannot be sped up, because the labor cannot be divided up and done in parallel instead of in series.
That is not to say that most software runs as a single thread. In fact, I would be surprised if any OS X app, or indeed any post-Win95 software that does any real amount of work, were single-threaded, simply because the app would then be unresponsive to both the user and the OS while it did its work. Hence, apps that "do work" tend to do it in a background thread while the "main" thread continues to respond to user and OS messages. In this common case it is possible (though not typical) for the message-handling thread to execute on one processor while the "work" thread executes on the other, leaving the work thread a bit more "room" on its CPU. It is not typical because the OS will preferentially place the threads of a single application on the same processor (unless they both use full timeslices and the other CPU is relatively unused): threads within an app are far more likely to share data with each other (and to require cross-thread signalling and mutexes) than threads in different apps, and these things are more efficiently done when the threads are on the same physical processor.
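A rough sketch of that background-worker pattern, in Python purely for illustration (the names here are made up for the example, not any real OS X API): the "work" thread grinds away while the main thread stays free to handle messages.

```python
import threading
import queue

results = queue.Queue()

def do_heavy_work(n):
    # Stand-in for the app's long-running "work": sum the first n squares.
    total = sum(i * i for i in range(n))
    results.put(total)

# Spawn the background worker; the main thread is NOT blocked by it.
worker = threading.Thread(target=do_heavy_work, args=(100_000,), daemon=True)
worker.start()

# The main thread stays responsive: here it just waits in short slices,
# but in a real app this is where the event loop would keep handling
# user and OS messages between checks.
while worker.is_alive():
    worker.join(timeout=0.01)

answer = results.get()
```

The point is the division of labor: the worker owns the long computation, and the main thread never blocks for more than a moment at a time.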
What multiple processors buy you, in a single-threaded-application world, is the ability to run two separate applications on two separate processors. If you had one app running full-bore and consuming massive swaths of CPU time, and a dozen other apps, the kernel will schedule the hog on one CPU and the other apps on the other. Likewise, if you have two CPU-hog apps running, each will be scheduled on its own CPU, and the rest of the processes in the system will divide themselves up between the two CPUs on top of the two hogs.
Third scenario: an app is written to divide its "main work" between two threads. The OS initially schedules both threads on one CPU, then quickly sees that both are "CPU hog" threads and pushes one over to the other CPU on the next timeslice. Thus the OS can use its two processors quite efficiently even though the app itself wasn't strictly written to an SMP API.
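A minimal sketch of that split-the-work-into-two-threads structure, again in Python for illustration (note that CPython's global interpreter lock prevents these two threads from truly running in parallel; the point is the two-runnable-threads shape, which is what the scheduler sees):

```python
import threading

def partial_sum(lo, hi, out, idx):
    # One half of the "main work"; with two such runnable threads, an SMP
    # kernel can migrate one onto the second CPU after a timeslice or two.
    out[idx] = sum(range(lo, hi))

N = 100_000
out = [0, 0]
t1 = threading.Thread(target=partial_sum, args=(0, N // 2, out, 0))
t2 = threading.Thread(target=partial_sum, args=(N // 2, N, out, 1))
t1.start()
t2.start()
t1.join()
t2.join()
total = out[0] + out[1]
```

Nothing here asks the kernel for a particular CPU; the app just makes its work divisible, and the scheduler does the rest.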
Fourth scenario: like the third, except the app uses the SMP API. When the second "work" thread is spawned, the application tells the kernel that it will be a CPU hog and should, if possible, be placed on a different CPU from the first. This removes the couple of timeslices during which both threads share the same CPU before the kernel sorts out that they need to be separated.
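There is no portable way to express that placement hint from Python, but Linux exposes a process-level analogue, os.sched_setaffinity. The sketch below is illustrative only (it is not the Mac OS X MP API) and degrades to a no-op on systems without affinity support:

```python
import os

def hint_cpu_placement(cpu):
    # Best-effort placement hint, analogous in spirit to telling the
    # kernel up front which CPU a hog should live on. os.sched_setaffinity
    # exists only on some platforms (notably Linux); elsewhere, or if the
    # kernel refuses, we simply fall back to doing nothing.
    if hasattr(os, "sched_setaffinity"):
        try:
            os.sched_setaffinity(0, {cpu})  # 0 means the current process
            return True
        except OSError:
            return False
    return False

hinted = hint_cpu_placement(0)
```

Whether the hint took effect or not, the program behaves identically; the hint only saves the scheduler a few timeslices of guesswork.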
Fifth, an improvement on the fourth: the app asks the kernel whether a low-usage CPU is available, and only if a second CPU is free for processing does it spawn the second work thread. This is a bit more difficult to program, but it often significantly improves performance in the single-CPU case while not hurting multiple-CPU performance.
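The fifth scenario's check can be sketched as: query the CPU count first, and only split the work when a second processor actually exists (Python, illustrative; the function names are made up for the example):

```python
import os
import threading

def plan_worker_threads(ncpus):
    # Only spawn a second work thread when there is a second CPU to run it;
    # on a single CPU, two hog threads would just fight over one processor.
    return 2 if ncpus and ncpus >= 2 else 1

def run_split(n, nthreads):
    # Divide the "main work" (summing 0..n-1) across nthreads threads.
    out = [0] * nthreads
    step = n // nthreads

    def part(i):
        lo = i * step
        hi = n if i == nthreads - 1 else lo + step
        out[i] = sum(range(lo, hi))

    threads = [threading.Thread(target=part, args=(i,)) for i in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(out)

nthreads = plan_worker_threads(os.cpu_count())
total = run_split(10_000, nthreads)
```

The result is identical either way; all the CPU-count check buys you is avoiding the overhead of a second thread that would have nowhere to run.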
So, going from one single-threaded app, to multiple single-threaded apps, to a multi-threaded app, to a multi-threaded app that is SMP-aware, you get increasingly efficient use of the second processor.
OS X does a pretty good job of dividing up the work here, and of course you'll never see a "single" app running on OS X, because OS X itself consists of multiple background processes. But you won't see your main app run significantly faster on a dual-proc machine than on a single-proc machine if it is single-threaded. The only advantage the dual-proc machine has is that OS X's background processes are shunted over to one CPU, and thus your main app has the other CPU to itself (but, of course, not the whole system bus or memory bus or disk, etc.).
Now, a word about benchmarks: benchmarks tend to be sequential, not parallel. By this I mean one task is executed while the machine is doing nothing else, then another task, then another, and so on. If the apps handling the tasks are single-threaded, you have the first scenario above (one CPU-hog thread on one CPU and the OS threads on the other), which shows very little advantage to having multiple CPUs. Because most popular applications spend most of their time with a single "work" thread, this skews benchmarks heavily in favor of single-CPU systems. This type of benchmarking is realistic for single-task servers whose only interaction with users is how fast the job got done. However, "real life" with a desktop or workstation computer is generally not like this. People multi-task, and get annoyed when their computers do not. Yes, they want to browse the web and print that report and listen to iTunes while their CD is burning. Dual (or more) processors allow this to happen, and that is never shown in benchmarks.