
MacRumors

macrumors bot
Original poster


Intel's research blog recently discussed the direction that developers should plan for in the coming years.

Intel's Anwar Ghuloum describes how developers have been responding to Intel's announcements that its processors will increasingly be multi-core. At this point, Intel suggests that developers program for as many cores as possible, even if that is more cores than are found in currently shipping products.
Ultimately, the advice I’ll offer is that these developers should start thinking about tens, hundreds, and thousands of cores now in their algorithmic development and deployment pipeline. This starts at a pretty early stage of development; usually, the basic logic of the application should be influenced because it drives the asymptotic parallelism behaviors.
Multi-core processing has been an ongoing trend in modern processors. Each "core" acts independently from the others and can essentially be considered its own individual processor. All shipping Macs are now at least dual-core, while the high-end Mac Pro ships with 8 cores across two 4-core processors. Intel's forthcoming Nehalem processor will introduce up to 8 cores per processor. If combined into a dual-processor configuration, we could see a 16-core Mac Pro in the near future.

While each core acts independently of the others, a dual core processor is not necessarily twice as fast as a single core processor due to inefficiencies in splitting up tasks. In recognizing this trend, Apple has announced that a major feature (Grand Central) of the next version of Mac OS X (Snow Leopard) will specifically focus on optimizing for multi-core processing.

Article Link
 
I hope that programmers begin to do this. Is there a theoretical limit on how many cores there can be?
 
"the basic idea…program for as many cores as possible, even if it is more cores than are currently in shipping products. "

Apple has this in mind. It looks like Grand Central and OpenCL should hopefully take some of the weight off developers' shoulders when it comes to making optimal use of multiple cores and multiple processors...

The great thing is that you can buy a multi-core system now and then get a performance boost when Snow Leopard comes out next year.
Apple is surely lining things up so that the Nehalem chips due soon get a decent performance boost from Snow Leopard. Improving performance on a Core 2 Duo is easy. A dual-socket Nehalem machine with 8 cores per socket? That's what Snow Leopard seems to be aiming squarely at (and it doesn't stop there, seeing as it's more about multiple cores than about using more than two sockets for now).
In terms of theoretical limits, Intel already has prototypes with hundreds of cores.
 
I've personally noticed this need

I love my 8-core Mac Pro. It just SCREAMS! BUT many, many tasks are single-threaded, so it's "technically" only as fast as a single core. Even so, the benefit of having the multi-core system is that I can still run many HEAVY single-threaded apps at the same time without it affecting the performance of each individual task.

So while encoding footage to different formats I can still use Final Cut Pro, Motion, DVD Studio Pro, Safari, etc. at full speed while my box is cranking away at several intense tasks.

BUT with all that said, I would love to see more apps become multi-threaded. It will need a different way of thinking. When encoding footage, lots of formats depend on a previous frame to know what to base the current frame on. In those situations the encoder has to break the footage into blocks aligned to the "keyframe" boundaries, so that each processor can be assigned its own range of frames to encode and the pieces can be stitched back together at the end. There would be too much stalling if you tried to chain all the processors together on the same subset of frames.
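Roughly, the idea is something like this (just my own toy Python sketch; the GOP size, frame count, and the encode_chunk stand-in are made up for illustration, not how any real encoder works):

from concurrent.futures import ProcessPoolExecutor

GOP_SIZE = 30          # assumed keyframe interval, in frames
TOTAL_FRAMES = 3000    # assumed clip length
WORKERS = 8

def encode_chunk(frame_range):
    start, end = frame_range
    # Stand-in for a real encoder call; each chunk starts on a keyframe,
    # so no worker needs frames owned by another worker.
    return "encoded[%d:%d]" % (start, end)

def keyframe_aligned_chunks(total_frames, gop, workers):
    # Round the per-worker chunk size down to a multiple of the GOP length
    # so every chunk boundary falls on a keyframe.
    per_worker = max(gop, (total_frames // workers // gop) * gop)
    chunks, start = [], 0
    while start < total_frames:
        end = min(start + per_worker, total_frames)
        chunks.append((start, end))
        start = end
    return chunks

if __name__ == "__main__":
    chunks = keyframe_aligned_chunks(TOTAL_FRAMES, GOP_SIZE, WORKERS)
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        pieces = list(pool.map(encode_chunk, chunks))  # encode chunks in parallel
    print(len(pieces), "chunks encoded, ready to stitch back together")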

Anyway, I'm very excited about apple working on "Grand Central". This will be essential to the future of computing.
 
It is just incredible to think about what is going to be in production when I am in the market for a new computer in 2010. Wow.
 
Can't wait to see what kind of performance improvements Snow Leopard brings. Even on older machines (with multi-core processors) there should be significant gains, I should think. :D
 
The interesting thing is that multiple cores potentially aren't even the main course. If Apple can effectively crack using GPUs for general-purpose computing, then there is an insane speed bump coming our way (primarily for desktops, I'd imagine, due to thermal issues).

And if you want to extrapolate, imagine what it could do in a couple of Xserves, or with Snow Leopard Macs linked up into a loose distributed-processing system.
https://forums.macrumors.com/threads/511801/
 
The problem is that programmers very quickly will hit a wall where tasks cannot be run in parallel because they depend on data output from one another. There can only be so many parallel tasks running at once for a program, and I hardly think that this will scale to utilizing hundreds or thousands of cores.

I hope Intel isn't gearing up for "The Megahertz Myth Part Deux".
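That wall is basically what Amdahl's law describes: the serial, dependent part of a program caps the speedup no matter how many cores you add. A quick back-of-the-envelope Python sketch (the 5% serial fraction is made up for illustration):

# Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores)
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Illustrative only: if even 5% of the work is stuck in dependent, serial steps,
# hundreds or thousands of cores top out at roughly a 20x speedup.
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.05, cores), 1))
# prints: 2 -> 1.9, 8 -> 5.9, 64 -> 15.4, 1024 -> 19.6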
 
The problem is that programmers very quickly will hit a wall where tasks cannot be run in parallel because they depend on data output from one another. There can only be so many parallel tasks running at once for a program, and I hardly think that this will scale to utilizing hundreds or thousands of cores.

I hope Intel isn't gearing up for "The Megahertz Myth Part Deux".

That is the way Cray supercomputers have worked for years as well as most other supercomputers. There are already computers with upwards of 1,000 processors in them.

I'll worry about it when software has the ability to keep up with a Core 2 Duo, let alone 4, 6, 8, or 16 cores. Until the software is developed, it's just a waste buying multi-core processors.

It is pretty easy to find software that will max out an 8 core computer. Just ask any theoretical physicist or chemist.
 
I hope that programmers begin to do this. Is there a theoretical limit on how many cores there can be?

From the sound of it, there could be a lot more than the existing maximum of 8! :eek: This is gonna be interesting. However, could more cores mean that Moore's Law is seeing the beginning of the end?

One thing a programmer has to keep in mind is how finely a problem can be divided before splitting it up and putting it back together costs more than it saves. Take this parallel sorting example:

One way a parallel sort works is that the list is broken into N parts, where N is the number of threads you want to spawn. Each thread then sorts its own part. After all the threads have finished sorting, the original program takes the individual sorted parts and merges them together.

Now, say you have a 16 item list and 8 cores available. In this case, it simply would not make sense to split up the 16 item list into 8 parts because each thread would only be sorting two items; the overhead of spawning the thread would overcome any gains you'd have from parallelism.

In reality, most programmers don't deal with only 16 item lists, but this kind of illustration may hold as the number of cores increases.
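A minimal sketch of that split-sort-merge idea (my own illustrative Python, with an arbitrary worker count and random data, not production code):

import heapq
import random
from concurrent.futures import ProcessPoolExecutor

def parallel_sort(items, workers=8):
    # Split the list into roughly equal parts, one per worker...
    size = max(1, (len(items) + workers - 1) // workers)
    parts = [items[i:i + size] for i in range(0, len(items), size)]
    # ...sort each part in its own process...
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_parts = list(pool.map(sorted, parts))
    # ...then merge the already-sorted parts back together.
    return list(heapq.merge(*sorted_parts))

if __name__ == "__main__":
    data = [random.randint(0, 10_000) for _ in range(100_000)]
    assert parallel_sort(data) == sorted(data)

With a 16-item list the process spawning alone would dwarf the sorting, which is exactly the point above.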


Nice pull.
 
This seems like a grand concept, but until developers can exploit the full potential of these multi-core chips there is no point investing in such high-end multi-processor systems.

1.) Also, as someone pointed out above, most applications process data in multiple steps, and the output of one step is fed as the input to the next step, and so on. These interdependent steps cannot be processed in parallel, so they can't take full advantage of multiple cores.

2.) Splitting a single-threaded process into an equivalent multi-threaded one adds computational overhead, and it will in fact increase the execution time when the amount of data to be processed is small. In that case a traditional single-threaded process would execute much faster.

This is where Snow Leopard's Grand Central would come into play. It will supposedly decide the best way to execute a task. How Apple implements Grand Central's engine for processing a task will be crucial. I have faith that Apple will do an awesome job.

That said, for most real-world programming and development the core platforms are still Windows and Linux, and the next few years will decide who stays afloat in the multi-core era. With Snow Leopard, Apple has made a very smart move to lay a very strong foundation and is already ahead in the game.
 
I predict 10 years from now Intel will release a 352-core processor at 3.4 GHz and people are gonna complain because the previous processor was 320 cores.

By then (probably thanks to Apple) software will be able to use all those cores so elegantly and flawlessly that it won't feel like they are separate cores at all; it'll be more like one super-powerful processor, although by that day's standards it won't be considered super powerful.
 
Now, say you have a 16 item list and 8 cores available. In this case, it simply would not make sense to split up the 16 item list into 8 parts because each thread would only be sorting two items; the overhead of spawning the thread would overcome any gains you'd have from parallelism.

In reality, most programmers don't deal with only 16 item lists, but this kind of illustration may hold as the number of cores increases.

There is actually a sorting algorithm (merge sort) that does just that: it breaks the list to be sorted into 2 chunks, which are then broken into 2 chunks, and so on until the chunks are only a couple of values big; it then sorts them and merges the chunks back together in order.

As a result, this kind of algorithm lends itself very nicely to this kind of problem, and to multiple cores full stop.
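For reference, a plain recursive merge sort looks roughly like this (illustrative Python, not tuned); the two recursive calls are independent, which is exactly what could be handed to separate cores:

def merge_sort(items):
    # Keep splitting until a chunk is trivially sorted...
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # these two calls don't depend on
    right = merge_sort(items[mid:])   # each other, so they could run on separate cores
    # ...then merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 3, 8, 1, 2]) == [1, 2, 3, 5, 8]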

The real problem with having a huge number of cores is that it becomes nigh on impossible to program for them, because, as stated earlier in the thread, some tasks just can't be broken down into tiny bits...
 
That is the way Cray supercomputers have worked for years as well as most other supercomputers. There are already computers with upwards of 1,000 processors in them.



It is pretty easy to find software that will max out an 8 core computer. Just ask any theoretical physicist or chemist.


Supercomputers are efficient because they do a lot of processing compared to a desktop. However, if you ask a supercomputer to run a parallel program that does simple math (like 2*5=?), the overhead of dividing the task up, sending the pieces to each node of the supercomputer, and then collecting and integrating the results from all the nodes will be greater than just doing it on a desktop. I have done a little bit of parallel computing, so I have seen this in action. That said, for some large-scale applications multiple cores can be exploited to give you the best performance; however, in most end-user applications a single-threaded program will do the job faster than its multi-threaded equivalent.

This is why it will be crucial to have the right processing engine in your OS and/or applications, one that can dynamically decide whether the task at hand will run faster in single-threaded or multi-threaded mode and act accordingly.
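A toy illustration of that kind of decision (entirely my own sketch in Python; the threshold is made up, and this has nothing to do with how Grand Central actually works): run small jobs serially, and only fan out the big ones.

from concurrent.futures import ThreadPoolExecutor

# Illustrative threshold only; a real runtime would measure rather than guess.
PARALLEL_THRESHOLD = 10_000

def run_task(func, items, workers=8):
    items = list(items)
    # Small jobs: the cost of splitting and re-joining outweighs any gain,
    # so just run them on one core.
    if len(items) < PARALLEL_THRESHOLD:
        return [func(x) for x in items]
    # Large jobs: fan the work out across a pool and collect the results in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

# run_task(str, range(100)) stays serial; run_task(str, range(1_000_000)) uses the pool.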
 
It'll be interesting to see how much of Apple's work on such matters has been kept under wraps. Wasn't there some indication at WWDC of how far along Grand Central and the other related technologies for using multi-core, multi-processor machines are?
 