Real-world benefits?

From what I am reading so far, since only a minority of applications are truly well threaded, the real-world benefit of 8 cores is the ability to run 2-4 large, complicated programs simultaneously, multiple instances of the same program (some have talked about running 4 copies of Handbrake), and multiple OSes at once.

All those things require vast amounts of memory as well, so a Mac Pro or Xserve is the only way to go now to address 16GB+.

Apple has always had memory-crippled computers on the low end. If they could do ONE thing in the coming 64-bit world, I would ask them to make the motherboards at least able to address FUTURE RAM options, as the cost always drops rapidly and the requirements always seem to be predominantly RAM-based.

Rocketman
 
I wonder how Handbrake, iDVD encoding, or QuickTime encoding will take advantage of the extra cores?

For some time, Handbrake didn't use more than two cores - owners of Quad G5s reported CPU usage of exactly 50 percent, then someone changed it and Quad G5s reported 100 percent CPU usage.

What we don't know: Was the code changed to use up to four processors, or as many processors as are available? Developers are usually very unwilling to ship code that they haven't been able to try out, so expect a version using eight cores about two days after the developers have access to an eight core machine.

In the case of Handbrake, encoding to MPEG4 seems already limited by the speed of the DVD drive; you can't encode faster than you can read from the DVD. H.264 is still limited by processor speed. Using eight cores is not too difficult; for example, if you encode 60 minutes of video, just give 7 1/2 minutes to each core.
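Conceptually the split is simple. Here's a minimal C/pthreads sketch (encode_segment is a hypothetical stand-in for the real encoder, which obviously does far more):

/* Split a 60-minute encode into one chunk per core. */
#include <pthread.h>
#include <stdio.h>

#define CORES 8
#define TOTAL_MIN 60.0

typedef struct { double start, length; } segment;

/* hypothetical stand-in: encode [start, start+length) minutes of video */
static void *encode_segment(void *arg) {
    segment *s = arg;
    printf("encoding minutes %.1f to %.1f\n", s->start, s->start + s->length);
    return NULL;
}

int main(void) {
    pthread_t t[CORES];
    segment seg[CORES];
    double chunk = TOTAL_MIN / CORES;   /* 7.5 minutes per core */

    for (int i = 0; i < CORES; i++) {
        seg[i] = (segment){ i * chunk, chunk };
        pthread_create(&t[i], NULL, encode_segment, &seg[i]);
    }
    for (int i = 0; i < CORES; i++)
        pthread_join(t[i], NULL);       /* wait for every chunk to finish */
    return 0;
}

Presumably the hard part in practice is stitching the chunks back together cleanly at the seams, not the threading itself.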
 
The 8-core Mac Pro won't be cheap, and it most definitely will not come in at the entry-level price point of $2,500. I am sure you guys knew that already, though.

Most applications are multi-threaded; that isn't the issue. The difference between 4-core and 8-core will be negligible, as you can see from the benchmarks. The 8-core Mac Pro will shine when multitasking multiple multi-threaded applications.

You will have more power all around, so you can effectively do more at once with less slowdown.
 
If all you do is email and type freakin Word documents, why the heck would you spend so much money on a new Mac Pro? You could have been fine buying an iMac or even a MacBook :confused:

For applications like After Effects, Photoshop, Flash, and other media apps, these 8-core computers will ANNIHILATE my render times and cut production times in half, if not chop them into little pieces and spontaneously combust.

Obviously these machines are geared towards video editing, 3D animation, and motion graphics... hence the PRO after the MAC.

I'll take all the cores I can get, for this will be a huge improvement!!

Lol, I think you missed the sarcasm dripping off of his comment...
 
For some time, Handbrake didn't use more than two cores - owners of Quad G5s reported CPU usage of exactly 50 percent, then someone changed it and Quad G5s reported 100 percent CPU usage.

What we don't know: Was the code changed to use up to four processors, or as many processors as are available? Developers are usually very unwilling to ship code that they haven't been able to try out, so expect a version using eight cores about two days after the developers have access to an eight core machine.

In the case of Handbrake, encoding to MPEG4 seems already limited by the speed of the DVD drive; you can't encode faster than you can read from the DVD. H.264 is still limited by processor speed. Using eight cores is not too difficult; for example, if you encode 60 minutes of video, just give 7 1/2 minutes to each core.

Applications should be, and most likely are, written to take advantage of available resources. A developer should be writing applications to take advantage of 8 cores already; they don't need an 8-core machine to do so.
 
I wonder how much of a performance boost (if any) there would be if someone made a whole operating system from scratch. Totally new compiler, new programming libraries, new everything to take full advantage of all of today's technologies. This would take several years and hard work, I know, so don't flame me.

I was a programming major in college (though I sucked at it). I know that a lot of the libraries I used (like iostream and string) have been around since the '80s. Back then, consumer computers didn't have 4-core, 64-bit processors, high-end video cards, or broadband internet. While the libraries have been updated a little to keep working, they're not optimized for all the new technologies we have now.
 
Applications should be, and most likely are, written to take advantage of available resources. A developer should be writing applications to take advantage of 8 cores already; they don't need an 8-core machine to do so.

I agree. I wonder how idle the graphics card is when you're not running games. It would really help if more programmers were able to write programs that take advantage of the graphics card and audio card. Too bad Sound Blaster cards are Windows-only. I wonder how much faster ripping CDs and converting to different audio formats in iTunes would be if the instructions got offloaded to a Sound Blaster or other sound card.

I also heard of a company called Aspex Semiconductors (www.aspex-semi.com) that designs PCI cards to speed up video processing, with something like OpenGL but called OpenRL. Would be cool if Aspex & Apple teamed up to make a card for Mac Pros to speed up Final Cut Pro & iMovie. Just my 2 cents.
 
That really depends on the program, on how "parallelizable" the application is.

The simplest way to think of it is like this: Let's say you have a program that first has to calculate A. Then, when it's done that, it uses the result of A to calculate B. Then, when it's done that, uses the result of B to calculate C, then C to D, and so on. That's a *serial* problem there. The calculation of B can't begin until A is done, so it doesn't matter how many processors you have running, all computation is held up on one spot.

On the other hand, let's say you have an application that needs to calculate A, B, C and D, but those four values are not dependent on each other at all. In that case, you can use four processors at the same time, to calculate all four values at the same time.

Think of it like baking a cake. You can't start putting on the icing until the cake is done baking. And you can't start baking the cake until the ingredients are all mixed together. But you can have people simultaneously getting out and measuring the ingredients.

So that problem is partially parallelizable, but the majority of its workload is a serial process.

Some software applications, just by their very nature, will never be able to do anything useful with multiple processors.
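To make that concrete, here's a minimal C/pthreads sketch of the two cases (slow_square is just a hypothetical stand-in for real work):

#include <pthread.h>
#include <stdio.h>

/* hypothetical stand-in for an expensive calculation */
static long slow_square(long x) {
    for (volatile long i = 0; i < 100000000; i++)
        ;                                /* burn time */
    return x * x;
}

/* Serial: each step needs the previous result, so only one core works. */
static long serial_chain(void) {
    long a = slow_square(2);             /* A */
    long b = slow_square(a);             /* B depends on A */
    return slow_square(b);               /* C depends on B */
}

/* Parallel: A, B, C, D are independent, so each gets its own thread. */
static void *worker(void *arg) {
    long *v = arg;
    *v = slow_square(*v);
    return NULL;
}

int main(void) {
    printf("serial chain: %ld\n", serial_chain());

    long vals[4] = { 2, 3, 4, 5 };
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, &vals[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("parallel: %ld %ld %ld %ld\n", vals[0], vals[1], vals[2], vals[3]);
    return 0;
}

No matter how many cores you throw at serial_chain, it takes three full steps; the four independent values finish in roughly the time of one.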

OK, I'm hardly a programmer (PHP doesn't really count), but that's the exact same description I've heard applied to what it takes to vectorize a program (i.e., make it AltiVec-optimized) [that, and the process of making loops that can be unrolled]. So I've got to ask: is there some difference between those two concepts? If not, it sure seems like we would have a lot more multi-core enabled apps out there already...
 
Is there a chance that they replace the two dual-core Xeons with only one quad-core Xeon in the Mac Pro and drop the price? Speed-wise it should be about the same, and it should definitely be cheaper to make.
 
yup, and my webpages will load in the blink of an eye... definitely worth whatever apple will charge. ;)

seriously though, how hard is it to get a program to multi-thread? (if that's the right term; being a complete programming novice, I've no idea)

You answered your own question, in a way. Most people's tasks are not computational. A faster CPU will not make a page load faster; the bottleneck is the speed of the Internet connection. Same with word processing and email. CPU speed is not required.

Where this WILL help is video editing and photography. Batch conversions of RAW images will go faster, and for those jobs the applications are already multi-threaded.

As for how easy it is to multi-thread a program: I've done it. Basically you need to design the system from the ground up. There are some special cases where you can add it later; this applies to programs that do a computation on a large stack of data while the user waits. There you could just swap out the computation module. But in general it is pretty much a redesign.
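For that special case, the swap can literally be one function behind an unchanged interface. A hypothetical sketch (process_item stands in for the real per-item computation):

#include <pthread.h>

#define NTHREADS 4

/* the unchanged per-item work (trivial stand-in here) */
static void process_item(double *item) { *item *= *item; }

/* original serial module */
void process_all_serial(double *items, int n) {
    for (int i = 0; i < n; i++)
        process_item(&items[i]);
}

/* drop-in threaded replacement: same signature, data split into strips */
typedef struct { double *items; int begin, end; } strip;

static void *run_strip(void *arg) {
    strip *s = arg;
    for (int i = s->begin; i < s->end; i++)
        process_item(&s->items[i]);
    return NULL;
}

void process_all_threaded(double *items, int n) {
    pthread_t t[NTHREADS];
    strip s[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        s[i] = (strip){ items, n * i / NTHREADS, n * (i + 1) / NTHREADS };
        pthread_create(&t[i], NULL, run_strip, &s[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
}

This only works because each item is independent; the moment the items share state, you're back to a redesign.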
 
How can this get negative votes? In fact, how do a lot of perfectly benign threads get negative votes? Are there just members out there who vote negative on everything?

Actually, it's a little known fact that Steve Ballmer frequents this site.... So I attribute all the negative votes to him clicking the "negative" button until he gets tired.... as you can see he doesn't have very much endurance....
 
My wife and I each have a ton-o-spam to process, and leave our Mail.apps open all the time. Mail.app uses up a full core for a few minutes at a time (G5 2.0GHz). So occasionally the computer is fully loaded just from the suckiness of Mail.app. It's very disruptive to doing anything else on the computer -- watching videos becomes pretty much impossible. The kids sometimes leave their Safaris pointed at some flashy website which, between the two kids, takes another half-core.

Eight cores makes me think seriously of upgrading.
 
How can this get negative votes? In fact, how do a lot of perfectly benign threads get negative votes? Are there just members out there who vote negative on everything?
It could be the fact that the 8-core Mac Pro got butchered in the iTunes encoding and Quake 4 tests? I'm shocked myself that the Mac Pro tied for the lowest score in the iTunes test.
 
That really depends on the program, on how "parallelizable" the application is.

The simplest way to think of it is like this: Let's say you have a program that first has to calculate A. Then, when it's done that, it uses the result of A to calculate B. Then, when it's done that, uses the result of B to calculate C, then C to D, and so on. That's a *serial* problem there. The calculation of B can't begin until A is done, so it doesn't matter how many processors you have running, all computation is held up on one spot.

On the other hand, let's say you have an application that needs to calculate A, B, C and D, but those four values are not dependent on each other at all. In that case, you can use four processors at the same time, to calculate all four values at the same time.

Think of it like baking a cake. You can't start putting on the icing until the cake is done baking. And you can't start baking the cake until the ingredients are all mixed together. But you can have people simultaneously getting out and measuring the ingredients.

So that problem is partially parallelizable, but the majority of its workload is a serial process.

Some software applications, just by their very nature, will never be able to do anything useful with multiple processors.

This is true, but there are still many, many ways to take advantage of multi-core processors that aren't currently being used.

For example, I am waiting for a program to compile right now. Although I have a dual core in my computer, the compiler only compiles one file at a time and usually takes about 10 min to do a full compile. If I had an 8-core computer with a multi-threaded compiler, I could cut the total time to just over a minute, plus a couple of seconds of linking time.
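That's essentially what a parallel build driver (e.g. make with its -j flag) does: run the independent per-file compiles at once and only serialize the link. A rough POSIX sketch, with hypothetical file names:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* hypothetical source files; each .c compiles independently */
    const char *src[] = { "a.c", "b.c", "c.c", "d.c",
                          "e.c", "f.c", "g.c", "h.c" };
    int n = sizeof src / sizeof src[0];

    /* fork one child per file, each running "cc -c file.c" */
    for (int i = 0; i < n; i++) {
        if (fork() == 0) {
            execlp("cc", "cc", "-c", src[i], (char *)NULL);
            _exit(1);                    /* only reached if exec fails */
        }
    }

    /* the link step depends on every object file, so it must wait */
    while (wait(NULL) > 0)
        ;
    printf("all objects built; now run the (serial) link step\n");
    return 0;
}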

I think the main problem with multi-threading a program is that it is difficult to implement, especially for coders who only know high-level languages. Multi-threading in a low-level language such as C is not easy, but at least it is straightforward. But trying to multi-thread a high-level language such as VB or C# can give you a big headache, since everything is abstracted from the programmer. To do that, you need to get into unsafe code and call a bunch of DLLs, and it's easy to get memory leaks. Basically it can start to get very complicated, very quickly.
 
...Most applications are multi-threaded; that isn't the issue. The difference between 4-core and 8-core will be negligible as you can see from the benchmarks...

Uh... maybe we were looking at two different articles.

First off, most applications are not multi-threaded. It's only Pro-level applications that tend to be, and even there, plenty aren't. So, multi-threading is an issue.

Second, you say that the difference between 4-core and 8-core is negligible? Take a look at the PyMOL molecular modeling rendering performance! Under OS X with 4 cores, it took 11.18 seconds, whereas with 8 cores it took 6.8 seconds. That's a raw improvement of about 65%! It's a clock-speed-weighted improvement of about 85%! How on Earth can you consider gains like THAT negligible?!?
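For anyone checking the math (assuming the machines benchmarked were the 3.0 GHz quad and the 2.66 GHz octo):

raw speedup = 11.18 s / 6.8 s ≈ 1.64, i.e. roughly 65% faster
clock-weighted = 1.64 × (3.0 GHz / 2.66 GHz) ≈ 1.85, i.e. roughly 85% faster per clock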

Sheesh!

Edit: Corrected a math error.
 
Is there a chance that they replace the two dual-core Xeons with only one quad-core Xeon in the Mac Pro and drop the price? Speed-wise it should be about the same, and it should definitely be cheaper to make.

Interesting question, but AFAIK two separate chips would perform better (at the same GHz).
Any chance that there will be an update of the Mac Pro before 2007?

If so, will the current models get more RAM, a different GPU, or a lower price?

I'd appreciate any educated guesses, since I have to buy in 2006 for tax reasons.
 
Apple and RAM options

To be fair, Apple has been generally "above average" in building computers that handled large amounts of RAM. I was amazed when I realized a couple years ago I could take an old PowerMac 7300 desktop and stuff 1GB of RAM in it. Couple that with a G4 upgrade and PCI card to give it Ultra ATA 100/133 hard disk support, and you had a pretty viable machine for running OS X (using XPostFacto to force it to install on something that outdated).

The biggest "problem" is probably just that people tend to use their older Macs a lot longer than people use their old Windows PCs. So they end up wanting to upgrade them far further than anyone anticipated.


Apple has always had memory-crippled computers on the low end. If they could do ONE thing in the coming 64-bit world, I would ask them to make the motherboards at least able to address FUTURE RAM options, as the cost always drops rapidly and the requirements always seem to be predominantly RAM-based.

Rocketman
 
31%?!

31% is a little disappointing for 2x the number of cores. I'm hoping that particular benchmark just isn't tuned for multiple cores. I was thinking 60-70% would be more likely. I don't see where all the overhead is coming from. Or is it because these aren't true quad-cores, but really just dual-duals in the same package?
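A back-of-the-envelope Amdahl's law estimate (treating the 4-core time as the baseline and assuming the parallel portion scales perfectly when the cores double):

speedup = 1 / ((1 - p) + p/2)

where p is the fraction of the 4-core runtime that parallelizes further. A 31% gain means speedup ≈ 1.31, which solves to p ≈ 0.47. In other words, if barely half the workload scales, you get exactly this result without any packaging overhead at all.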
 
try reading the article...

well, OS X whooped XP for multicore usage then
On PyMOL, yes.

If you look at the full article, XP bested OS X on several other programs.

Pretty much even, overall.

They don't report software versions or other useful details (like how many FB-DIMMs were in the systems), so any of the "wins" and "losses" could easily come down to differences in software versions (is PyMOL on OS X exactly the same version, compiled with the same optimizations on the same compiler?) or other details.

For example, what if PyMOL on OS X x86 is optimized for Core and later chips, and the XP version is optimized for Pentium III systems (and doesn't take advantage of Pentium 4 and Core 2 improvements)? If that's the case, it's not fair to say OS X is faster than XP - although it's clearly reasonable to state that OS X is a faster choice for PyMOL.
 