Great background, but it misses the point. What Intel is saying (and what Grand Central seems to be going after) is increasing the percentage of work that is accelerated by multiple cores.
DOS got zero speedup from multiple cores, so the speed gain was 0. As MP systems became available, a few very CPU-intensive tasks were written to use multiple processors. More things are written that way now. Intel is saying that MOST things should be written in such a way as to benefit from multiple processors. The closer Grand Central gets to that ideal, the better for Apple.
The problem is that programmers will very quickly hit a wall where tasks cannot be run in parallel because they depend on data output by one another. There can only be so many parallel tasks running at once for a program, and I hardly think this will scale to utilizing hundreds or thousands of cores.
Only partially true. If properly written, much of the needed data will be in cache and accessible all the time. That's clearly one of the things that Intel wants people keeping in mind. But it really depends on what percentage of the time the processor NEEDS to be waiting for data.
Nonetheless, Intel's point is that you need to make everything that could possibly benefit work with massive numbers of processors. If there's something that can't benefit, then it can't be helped, but that's no excuse for failing to do what you can.
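To make the distinction concrete, here's a toy sketch (not anything from Intel or Apple, just Apple's libdispatch API): a recurrence where each step needs the previous result stays serial no matter how many cores you have, while an element-wise transform has no such dependency and can be fanned out across all of them.

```swift
import Dispatch

// Inherently serial: each step depends on the previous result, so the
// iterations have to run one after another no matter how many cores exist.
var x = 0.1
for _ in 0..<1_000_000 {
    x = 3.9 * x * (1.0 - x)   // logistic-map style recurrence
}

// Independent per element: each output slot depends only on its own input,
// so the iterations can be spread across as many cores as are available.
// (Real code would batch these into larger chunks to amortize dispatch overhead.)
let input = (0..<1_000_000).map { Double($0) }
var squares = [Double](repeating: 0, count: input.count)
squares.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: buffer.count) { i in
        buffer[i] = input[i] * input[i]
    }
}
print(x, squares.count)
```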
This seems like a grand concept, but until developers can exploit the full potential of these multicore chips there is no point in investing in such high-end multiprocessor systems.
That is not true. First, almost every real-world computer program has SOME tasks that can be sped up. Second, even if you have a task which is truly linear with no possible gain from MP, extra cores still allow you to run other tasks at full speed without impacting the speed of the first task.
There is Amdahl's Law, which puts a limit on how many cores you can use effectively.
You're again missing the key point. Amdahl's Law gives the limit on speed gain for a given level of 'parallelizability'. If you change the percentage of the work that benefits from multiple cores, the achievable speed gain improves. Sure, if 5% of the work is truly impossible to make parallel, then we're limited to a 20x speedup. I guess I can live with that. But Intel's and Apple's point is that current software design may leave 50% of the code unable to benefit from MP, when, if the software were coded differently, that percentage would drop, improving the speed gains.
The real lesson of Amdahl's Law is not the limit to the speed gains. It is that you need to design your code so that only a very small percentage does not use multiple processors.
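For reference, Amdahl's Law in its standard form, where p is the fraction of the work that can run in parallel and n is the number of processors; the 20x figure above is just the limit for p = 0.95:

```latex
S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}},
\qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p} = \frac{1}{0.05} = 20 \quad \text{for } p = 0.95
```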
Hardware designers will also experience difficulties creating memory switch architectures that can handle 16+ cores accessing the same memory.
Which is why Intel is giving them advance warning.
I appreciate the Wiki entry on Amdahl's Law, but the page has a serious flaw: it assumes that the smallest possible chunk the task can be divided into is 1/20 of the whole (leaving 95% of the task parallelizable). Naturally, that corresponds to an (almost) maximum speed increase of 20x. If a task could be divided into 100 chunks, then the maximum speedup would be 100x using the formula.
Here's a list of different chunk counts along with the corresponding speedup using 512 processors:
2000 chunks = 407x speedup
1000 chunks = 338x speedup
500 chunks = 253x speedup
100 chunks = 83x speedup
50 chunks = 45x speedup
20 chunks = 19x speedup
Clearly, the big challenge is finding ways to make the chunks smaller. It seems impossible, but I doubt that it is.
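For what it's worth, those figures fall straight out of Amdahl's formula if the serial portion is taken as one chunk out of k; a quick sketch in Swift (truncating to whole numbers, as in the list above) reproduces them:

```swift
// Amdahl's Law: speedup on n processors when a fraction p of the work is parallel.
func amdahlSpeedup(parallelFraction p: Double, processors n: Double) -> Double {
    return 1.0 / ((1.0 - p) + p / n)
}

// Treat "k chunks" as meaning the serial portion is 1/k of the work.
for chunks in [2000.0, 1000.0, 500.0, 100.0, 50.0, 20.0] {
    let speedup = amdahlSpeedup(parallelFraction: 1.0 - 1.0 / chunks, processors: 512.0)
    print("\(Int(chunks)) chunks = \(Int(speedup))x speedup")
}
```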
Exactly. Finally someone gets it. The whole point of Intel's announcement is getting developers thinking differently so that a much greater percentage of their code is written in small, MP-capable chunks.
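As a rough sketch of what "small MP-capable chunks" can look like with the libdispatch API behind Grand Central Dispatch (the workload here is made up for illustration): the work is cut into independent slices and handed to DispatchQueue.concurrentPerform, which spreads the slices over however many cores the machine has.

```swift
import Dispatch

// Made-up workload: sum the squares of a large data set.
let records = (0..<2_000_000).map { Double($0) }
let chunkSize = 10_000
let chunkCount = (records.count + chunkSize - 1) / chunkSize

var partialSums = [Double](repeating: 0, count: chunkCount)
partialSums.withUnsafeMutableBufferPointer { sums in
    // Each iteration handles one independent chunk; libdispatch decides how
    // many run at once based on the number of available cores.
    DispatchQueue.concurrentPerform(iterations: chunkCount) { c in
        let lo = c * chunkSize
        let hi = min(lo + chunkSize, records.count)
        var s = 0.0
        for i in lo..<hi {
            s += records[i] * records[i]   // stand-in for real per-record work
        }
        sums[c] = s
    }
}

// A short serial step combines the per-chunk results.
let total = partialSums.reduce(0, +)
print(total)
```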
Indeed, they do. But that doesn't change the fact that they need to sell chips, and the only chips they can make are the kind that need parallel code to be useful. So they are going to sell them until people don't want them, or a new use for them is discovered. Today, they are useless.
This is silly. Ask anyone who uses several apps at the same time if multicore processors are useless. Or a scientist or engineer doing heavy computations.
-unless the program you are using is optimized for duo, quad, "octagonal" or... cores, your processing speed will not drastically improve with more cores.
-this is half the reason why Windows runs like a dog; Windows is not, and never will be, optimized for each and every computer configuration.
Which is exactly the point of Intel's announcement and Grand Central. It's time for developers to start thinking in terms of large numbers of cores.