Correct. Geekbench just measures CPU and memory performance; it doesn't take HDD or GPU performance into account.

The iMac i5 beats an upgraded 2.3 GHz i7 quad-core MBP in tasks like encoding videos, which are CPU tasks.

Only in synthetic benchmarks like Cinebench and Geekbench does it lose to the MBP. So the fake tests are obviously flawed and shouldn't be taken seriously.
 
You're doing orchestral composing, I see, if you need a machine like that for audio production.

Is disk streaming with SSDs in RAID not fast enough, instead of pushing all those samples into your RAM? The iMac has 4 memory slots, so it can have 32 GB of RAM in total? I know the MBP can use 8 GB RAM modules, so the iMacs can probably do it too.

A single ATI 6xxx card can support up to 3 monitors, I believe, so we're close. With Thunderbolt you can add another monitor, and then we are at 4.

Although some sample libraries allow you to push everything to RAM, it's impossible with a big orchestral library that takes up hundreds of GBs. I had 8 HDDs before, 6 of them used for samples, and I was getting clicks and pops when things got busy enough. When I upgraded to the SSDs (and decreased the number of drives to 4), that problem went away completely.

As for Thunderbolt, I wonder if it would be fast enough to support external SSDs, other monitors, plus an audio interface. I doubt that it could unless there are two Thunderbolt ports, each with its own controller and bandwidth.
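A rough back-of-the-envelope, in case it helps. The numbers here are assumptions for illustration (first-generation Thunderbolt at 10 Gb/s per channel, two channels per port, and ballpark bandwidth guesses for each device), not measurements:

```python
# Crude Thunderbolt bandwidth budget. All figures below are illustrative
# assumptions: first-gen Thunderbolt is taken as 10 Gb/s per channel with
# two channels per port, and each device's need is a rough guess.

GBIT = 1_000_000_000  # bits per second

per_channel = 10 * GBIT
channels_per_port = 2
budget = per_channel * channels_per_port

# Hypothetical downstream devices and rough bandwidth needs (bits/s)
devices = {
    "external SSD #1 (sustained reads)": 3 * GBIT,   # ~375 MB/s
    "external SSD #2 (sustained reads)": 3 * GBIT,
    "2560x1440 display @ 60 Hz": 6 * GBIT,           # uncompressed DisplayPort stream
    "audio interface (32ch/24-bit/96kHz)": 32 * 24 * 96_000,
}

total = sum(devices.values())
print(f"Requested {total / GBIT:.1f} Gb/s of a {budget / GBIT:.0f} Gb/s port budget")
for name, need in devices.items():
    print(f"  {name}: {need / GBIT:.2f} Gb/s")
print("Fits on one port" if total <= budget else "Needs a second port/controller")
```

With these made-up figures it squeaks in under one port's two channels, but two SSDs plus a display already exceed a single channel, so how the controller shares the bandwidth is exactly the question.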
 
The entry-level Mac Pro should be around $2000.

It should be about $1500 considering 4C Xeon chips are cheap and easy to come by. The price doesn't jump exponentially until you put in a 6C or 2x 4C chips.

It pained me in 2009 to pay $2200 for my base 2.66 quad Mac Pro when I knew there was only about $1100 in parts inside, but as I've stated before, I hate the glossy iMac screens, so I didn't have much of a choice.
 
It should be about $1500 considering 4C Xeon chips are cheap and easy to come by. The price doesn't jump exponentially until you put in a 6C or 2x 4C chips.

It pained me in 2009 to pay $2200 for my base 2.66 quad Mac Pro when I knew there was only about $1100 in parts inside, but as I've stated before, I hate the glossy iMac screens, so I didn't have much of a choice.
A single-socket hex-core is not all that expensive either, though Apple might want to show off cores and clock speed options for BTO.
 
Would one of you, or several of you, explain to a non-propeller-head the relative merits of clock speed versus cores? Do fewer cores at a higher GHz get things done in FCP (pre X) quicker or slower than more cores at a lower GHz? Assume you're explaining it to an utter idiot and you won't go far wrong. Thanks in advance...
 
Would one of you, or several of you, explain to a non-propeller-head the relative merits of clock speed versus cores? Do fewer cores at a higher GHz get things done in FCP (pre X) quicker or slower than more cores at a lower GHz? Assume you're explaining it to an utter idiot and you won't go far wrong. Thanks in advance...

Old-school FCP was pretty poor at spreading its work amongst a number of cores, so if you have more than 2 or 3 cores it's not going to make much use of them.

So you're a lot better off getting one of the single-CPU Mac Pros with 4 or 6 fast cores, instead of wasting money on the dual-CPU Mac Pros with 8 to 12 slower cores in total.
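If it helps to see the trade-off in numbers, here's a minimal Amdahl's-law sketch. The clock speeds and parallel fractions are made-up illustrative values, not measurements of FCP or any real Mac:

```python
# Minimal Amdahl's-law sketch of the cores-vs-clock trade-off.
# The parallel fractions and clock speeds are illustrative guesses,
# not measurements of FCP or of any particular machine.

def relative_speed(clock_ghz, cores, parallel_fraction):
    """Clock speed scales all of the work; extra cores only speed up
    the fraction of the work that can actually run in parallel."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

for label, frac in [("app that is ~40% parallel (old-FCP-like)", 0.40),
                    ("app that is ~95% parallel (a good encoder)", 0.95)]:
    print(label)
    print("   4 fast cores @ 3.33 GHz:", round(relative_speed(3.33, 4, frac), 1))
    print("  12 slow cores @ 2.66 GHz:", round(relative_speed(2.66, 12, frac), 1))
```

With the mostly serial app the faster quad wins; with the highly parallel one the slower 12-core pulls well ahead. That's the whole question in a couple of lines of arithmetic.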
 
What are you talking about?

PowerMac Dual Core 2.3 GHz G5 - 2163
MacBook Air C2D 1.6 GHz - 2358
Mac Mini C2D 2.26 GHz - 3204
MacBook Pro Core i5 2.3 GHz - 5910

http://www.primatelabs.ca/geekbench/mac-benchmarks/

*rolls eyes*

Are you really comparing machines using GeekBench? Especially when two of the tests are based on memory bandwidth, where the G5 has 533 MHz DDR2 and the C2Ds/i5s have up to 1 GHz + DDR3... something that impacts the system far less than it impacts the GB test?

Come up with actual real-world benchmarks.

Besides...

http://browse.geekbench.ca/geekbench2/view/504908
http://browse.geekbench.ca/geekbench2/view/481944

----------

Although the Mac Pro looks like a nice machine to have, it still seems crazy to me to drop $4-5k on a machine that will be outperformed by a MacBook Pro or iMac a year or a year and a half later, for a fraction of the price.

I see it as you're not really paying for the power, but for having that power a year or a year and a half in advance. And it seems like people buying Mac Pros don't upgrade every year; a lot of them plan to keep theirs for something like 5 years, so it doesn't make sense to me.

For the same price, you could buy the best iMac every year and resell it, and you would end up:
1) Having a more powerful machine on average
2) Paying less
3) Always having a warranty
4) Getting the cool new stuff first (Thunderbolt, 27" IPS display, FaceTime HD...)

Am I missing something?

You are missing something, yes.

Number one... point out to me which model of MBP is as powerful in threaded applications as any 2010 MP? That isn't going to happen for a few years; the six-core may be eclipsed soon, but not the 8- and 12-core models (the latter of which are the "$4-5k machines"). The most powerful iMac still comes in at half the GB score of the top MP... and is far slower in real-world applications that take advantage of all those cores.

Also, you can't throw 128 GB of RAM in an iMac. You also can't customize/upgrade the screen or the graphics card, and you can't throw in four hard drives (more, with certain kits). And you don't have any PCIe slots on the iMac, either.

The iMac's expandability is practically zero. I could go on...
 
I CANNOT wait for the new machines. I write massively parallel scientific software and can use every core they can put in a machine.

Like other posters, if it weren't for OS X I would already have jumped over to a Linux workstation with 32 cores. My current 12-core hyperthreaded Mac Pro pales in comparison. At least with the new processors I'll be able to go to 16 hyperthreaded cores...

To give my Mac Pro a speed bump, I just filled it up with four 256 GB SATA3 SSDs in RAID 0 (yes, I know my current Mac Pro doesn't have SATA3... but I went that direction so they are ready to go for the updated machine I'm going to order as soon as it's available!). And yes, that is ~$2400 in drives... That just shows how important performance is to me...

Please let these chips finally come out soon....
 
Like other posters, if it weren't for OS X I would already have jumped over to a Linux workstation with 32 cores. My current 12-core hyperthreaded Mac Pro pales in comparison. At least with the new processors I'll be able to go to 16 hyperthreaded cores...

I honestly can't understand why people needing a lot of number crunching even consider the MacPro. Displaying your X-terminals on an iMac and doing the crunching on a big dumb Linux machine is surely quicker, cheaper and more flexible?
 
Old-school FCP was pretty poor at spreading its work amongst a number of cores, so if you have more than 2 or 3 cores it's not going to make much use of them.

So you're a lot better off getting one of the single-CPU Mac Pros with 4 or 6 fast cores, instead of wasting money on the dual-CPU Mac Pros with 8 to 12 slower cores in total.

Many thanks, Firestarter. Happy Hallowe'en.
 
I honestly can't understand why people needing a lot of number crunching even consider the MacPro. Displaying your X-terminals on an iMac and doing the crunching on a big dumb Linux machine is surely quicker, cheaper and more flexible?

If he's making this scientific software at a university, his IT may not officially support Linux. That's often one good reason to pay a bit of a premium for a Mac Pro. Though if you think about it, $1500 for an iMac plus (say) $3000 for the big dumb Linux box isn't really going to save you any money anyway. And while some universities have clusters that could be used, I can certainly see how software development may not be too fun on them.
 
I honestly can't understand why people needing a lot of number crunching even consider the MacPro. Displaying your X-terminals on an iMac and doing the crunching on a big dumb Linux machine is surely quicker, cheaper and more flexible?

Note that I'm not "just" doing number crunching... I'm _developing_ number-crunching software. OS X is still a much better "desktop" OS for actually using all day: developing software, writing Word documents, doing PowerPoint presentations (yes, even in the science world we can't get away from those tasks!) AND doing number crunching.

The fact that it's UNIX underneath means we can do all of our development there... and our software will compile and run on our big Linux machines (and other, more esoteric OSes) on our clusters.

On that note I do use "big dumb Linux machines" all day. My main workhorse is a 12,000 core beast (was just running a job on all 12,000 cores last week). But you don't use machines like that to actually write code!

Everyone who thinks the MP line will be going away: there is no way. The profit margin is huge. In my department we have about 30 MPs... each one costing well over $6,000. We upgrade every time Apple puts out an update... and we're not the only ones. Sure, they don't make the same revenue as on iPhones... but the profit is just too good to walk away from.
 
Geekbench is not a real test of how fast computers are. MBPs are beating iMacs in Geekbench, but in real tests the MBP loses to the iMacs. ;)

What do you think synthetic benchmarks do? They process things. "Real" tests can vary in which platform they show being faster too.
 
What are you talking about?

PowerMac Dual Core 2.3 GHz G5 - 2163
MacBook Air C2D 1.6 GHz - 2358
Mac Mini C2D 2.26 GHz - 3204
MacBook Pro Core i5 2.3 GHz - 5910

http://www.primatelabs.ca/geekbench/mac-benchmarks/

*rolls eyes*

Are you really comparing machines using GeekBench? Especially when two of the tests are based on memory bandwidth, where the G5 has 533 MHz DDR2 and the C2Ds/i5s have up to 1 GHz + DDR3... something that impacts the system far less than it impacts the GB test?

Come up with actual real-world benchmarks.

Besides...

http://browse.geekbench.ca/geekbench2/view/504908
http://browse.geekbench.ca/geekbench2/view/481944


So ignore the memory tests and just look at integer and floating point performance.
http://browse.geekbench.ca/geekbench2/view/504864

You can try to justify the G5 as much as you want, but clock for clock it is slower than new processors. Throwing in some faster memory isn't going to magically make it a 2011 machine.
Heck, you are comparing it to the C2D, which is a 2006 processor; comparing it to a 2.3 GHz Sandy Bridge or Ivy Bridge isn't even fair.


The iMac i5 beats an upgraded 2.3 GHz i7 quad-core MBP in tasks like encoding videos, which are CPU tasks.

Only in synthetic benchmarks like Cinebench and Geekbench does it lose to the MBP. So the fake tests are obviously flawed and shouldn't be taken seriously.

That's most likely due to the hyper-threading on the MBP and not on the iMac. A lot of things can't take advantage of hyper-threading, but if they could, it would be faster on the MBP. But yes, for most uses today the iMac's higher-clocked i5 would be faster.
 
I honestly can't understand why people needing a lot of number crunching even consider the MacPro. Displaying your X-terminals on an iMac and doing the crunching on a big dumb Linux machine is surely quicker, cheaper and more flexible?


Also, an iMac doesn't cut it. I have results files for 3D simulations that are over 300 GB. I need tons of RAM, multiple RAIDed HDs and big-ass video cards to visualize those results. The iMac can't do any of that. Not to mention that more cores help when running small/medium-sized jobs on my workstation... and directly speed up compiling.

iMacs are great, though. I have a 27" i5 iMac at home for photography (mainly Lightroom) and I absolutely love it! However, it takes FOREVER to compile software on there (only 4 cores instead of 24 total threads).

Right tool for the right job....

----------

Sure do. Apple tax again. I'll keep building my own for about half the cost. Linux is working just fine. When this gets launched shortly .......
http://www.lightworksbeta.com/ , won't need Apple anymore.

Ummm... no Apple tax here. Go spec out a Xeon workstation anywhere else. They are expensive no matter what.

Now, you can certainly build a machine that would outperform a MP for 50% of the cost _for some workflows_. But if you do actually need a dual-processor Xeon workstation with tons of RAM that is expensive no matter where you get it.
 
I didn't say to run that stuff on an iMac. Run the simulations on Linux and host the X-session on an iMac. That's the way 'big iron' computing has been done for years. An X-Windows client is pretty lightweight.

Displaying huge 3D results back through a remote X-session is not great; it's much better to do that locally. When the results get so huge that we can't even load them on our workstations, we have to use visualization clusters... but the clients still run locally... and you still want more beef than an iMac...

You did see my reply above stating that we _do_ use huge clusters for crunching, right? But there are pieces of my job for which a MP is the BEST fit by far.
 
do fewer cores but higher GHz get things done in FCP (pre X) quicker or slower than more cores but lower GHz?

But is FCP the full extent of the workload? For instance, previous (http://www.kenstone.net/fcp_homepage/compressor_multi_cores_stitzer.html) and current versions of Compressor can utilize multiple cores.

There are at least as many folks who need to convert, tag, pre/post process etc. their input/output files as those sitting in FCP doing serial edits.

As long as the computer's workload is purely driven by a single user (have to click, point, drag, etc.) on a single, very specific task, then fewer, faster cores have some traction. However, if the single user can invoke several long-term, heavy-duty tasks to be worked on at the same time, then more cores (and memory) help.

The other issue being swept under the rug is that if the workload needs more than 32GB of memory, then two CPU packages are typically more cost effective than just one. The memory is hooked to the CPU package and each gets 4 slots. If you need lots of memory, then you probably need lots of slots. (Over time that need goes away because more memory fits in fewer slots, but if over time your jobs get bigger and bigger... it can be like being on a treadmill... you will always need more in the future.)


Some of this also boils down to whether you are committed to older software or newer. The new FCP (FCPX) does spread out the workload. The root-cause problem in many cases is not the hardware; it is the software. However, many advocate throwing higher-clocked hardware at software that is fundamentally causing the problem. The other approach is to use better software.

The excuse that software often lags hardware gets tossed around, but at this point the whole iMac lineup has 4 cores, several of the MBP models have quad cores, and the Mac Pro has had 4+ cores for several years. At this point it is more like "asleep at the wheel" developers than "waiting for widespread roll-out" for software which is still stuck on one core.
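To illustrate the "it's the software" point: extra cores only help if the application deliberately splits its work up. A toy sketch of that, where the per-frame work function and the frame count are invented placeholders rather than anything FCP actually does:

```python
# Toy illustration: cores only help once the software farms its work out.
# process_frame() and the frame count are invented placeholders.

from multiprocessing import Pool, cpu_count

def process_frame(frame_number):
    # Stand-in for real per-frame work (encode, filter, etc.)
    total = 0
    for i in range(200_000):
        total += (frame_number * i) % 7
    return total

if __name__ == "__main__":
    frames = range(240)  # pretend: ten seconds of 24 fps footage

    # Single-core style: one frame after another; extra cores sit idle.
    serial = [process_frame(f) for f in frames]

    # Multi-core style: the same frames spread over every available core.
    with Pool(processes=cpu_count()) as pool:
        parallel = pool.map(process_frame, frames)

    assert serial == parallel  # identical results, very different wall time
```

Old FCP is stuck in the first pattern for most of its work; FCPX and Compressor do a lot more of the second.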
 
Now, you can certainly build a machine that would outperform a MP for 50% of the cost _for some workflows_. But if you do actually need a dual-processor Xeon workstation with tons of RAM that is expensive no matter where you get it.

Apple gets a sweetheart deal on processors. I'm presuming, given what Mac Pros cost, that they don't pass any of those savings on to consumers. I could see them keeping half for profit like any business would. RAM is outrageously overpriced from Apple, hence why I call it the Apple tax.
If you do your homework shopping, and know what you're doing, you can save big time. My dual X5670, 24GB RAM, gold-rated PSU, Blu-ray ROM, 2 HDs and an ATI 6950 was around $3400. Oh, Win7 64 too. Now I use unbuffered RAM to keep costs down. I never have any issues with any programs I use. Most folks don't need registered ECC RAM. But I'm limited to 48GB. If I go registered, I can go to 196GB.
Anyways, if you're dependent on OS X, then there's not much of a choice. You pay the piper. I try not to get pinned down by any system because you have to pay to play with them. To each their own. Just my 2 cents.
 

Mid-size tower, SB 2700, 8GB, 1TB 7,200 rpm HD, midrange NVIDIA or AMD GPU and a choice of optical drive.
For old times' sake, call it the Mac Pro Quadra.

That's about $200 more than a comparable PC, but it's got an Apple sticker on it.
Need those higher margins for a poor, struggling company.
$100 billion in the bank.
Just do it Tim. :apple:

I don't think they realize how many people would switch if they offered something like this. I can't recommend any of their desktop offerings right now because they waste too much money on form factor rather than power. If they offered this I would have no problem telling people to buy one. Maybe even offer a lower-end model at around $800 with a lower-end quad core.
 
A system is much more than its logical core count

I CANNOT wait for the new machines. I write massively parallel scientific software and can use every core they can put in a machine.

Like other posters, if it weren't for OS X I would already have jumped over to a Linux workstation with 32 cores. My current 12-core hyperthreaded Mac Pro pales in comparison. At least with the new processors I'll be able to go to 16 hyperthreaded cores...

Did you know that a Sandy Bridge processor is only 3% faster on multi-threaded SPEC fp with hyper-threading enabled vs. hyper-threading disabled?

Did you know that many applications are actually slower with hyper-threading enabled vs. hyper-threading disabled?

There are several main reasons for this:
  • if you have fewer active threads than logical processors, the scheduler may put two threads on the same physical core, while other physical cores are idle
  • with hyper-threading, each logical core effectively has half the shared cache of the same system with hyper-threading disabled - so some apps will lose more to the diminished cache than they gain from the extra logical cores
  • if the threads are limited by memory bandwidth, additional cores (even additional physical cores) won't help - and will aggravate the issue with the effective reduction in cache size
(The obvious reason that each pair of logical cores uses the same set of execution units, and when the threads need the same execution units there's less benefit, doesn't even need to be stated.)

Rather than assuming that hyper-threading is good, it is really a smart idea to benchmark your application with your data, with hyper-threading enabled and disabled, to see if it hurts or helps.

I've done the tests, and have decided to disable hyper-threading on most of my systems. Only a few run workloads where hyper-threading is a benefit. (And none of the workstations have it enabled - it's a loser on all of our workstations. Why enable something that makes your system faster 2% of the time and slower 98% of the time?)

In the best case, hyper-threading gives you higher throughput at the cost of higher latency. In the worst case, you get lower throughput and higher latency.
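Hyper-threading itself has to be flipped at the firmware/OS level, but a quick proxy on a running machine is to time the same job with worker counts matched to physical vs. logical cores. A minimal sketch of that kind of test; the busy loop is a placeholder for your real workload, and the core counts in the loop assume a 12-core/24-thread box:

```python
# Minimal sketch: time a CPU-bound job at different worker counts
# (e.g. half the physical cores, all physical cores, all logical cores).
# work() is a placeholder busy loop; substitute your real kernel.

import time
from concurrent.futures import ProcessPoolExecutor

def work(_):
    x = 0.0
    for i in range(1, 2_000_000):
        x += (i % 97) * 1.0000001  # placeholder floating-point churn
    return x

def run(workers, tasks=48):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, range(tasks)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (6, 12, 24):  # assumes a 12-core, 24-thread machine
        print(f"{workers:>2} workers: {run(workers):.2f} s")
```

If the 24-worker run isn't meaningfully faster than the 12-worker run (or is slower), that workload isn't getting anything out of the extra logical cores.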
 

TwitchOSX said:
Speaking of....

Since we do have to buy a Mac Pro before the end of the year... should I consider getting a used one somewhere for cheap instead of plopping down $3499 on a brand new dual 4 core?

Get a refurbished one, or find one here on the Marketplace.
 
Did you know that a Sandy Bridge processor is only 3% faster on multi-threaded SPEC fp with hyper-threading enabled vs. hyper-threading disabled?

Did you know that many applications are actually slower with hyper-threading enabled vs. hyper-threading disabled?

There are several main reasons for this:
  • if you have fewer active threads than logical processors, the scheduler may put two threads on the same physical core, while other physical cores are idle
  • with hyper-threading, each logical core effectively has half the shared cache of the same system with hyper-threading disabled - so some apps will lose more to the diminished cache than they gain from the extra logical cores
  • if the threads are limited by memory bandwidth, additional cores (even additional physical cores) won't help - and will aggravate the issue with the effective reduction in cache size
(The obvious reason that each pair of logical cores uses the same set of execution units, and when the threads need the same execution units there's less benefit, doesn't even need to be stated.)

Rather than assuming that hyper-threading is good, it is really a smart idea to benchmark your application with your data, with hyper-threading enabled and disabled, to see if it hurts or helps.

I've done the tests, and have decided to disable hyper-threading on most of my systems. Only a few run workloads where hyper-threading is a benefit. (And none of the workstations have it enabled - it's a loser on all of our workstations. Why enable something that makes your system faster 2% of the time and slower 98% of the time?)

In the best case, hyper-threading gives you higher throughput at the cost of higher latency. In the worst case, you get lower throughput and higher latency.

Definitely good info for everyone out there... it is totally true that each application should be tested with hyperthreading, and that it never really gives great benefits.

Of course I'm aware of this... and it's one of the things I like least about my mac pro. I really wish it had more real cores in it (hence why I'm excited about getting 4 more cores (2x8 vs 2x6) with these new processors). This is also one of the reasons I went with the i5 for my home iMac instead of the i7.

Even though hyperthreads don't help with FP (because of lack of duplication of floating point units) and as you say can actually be detrimental... they can help out with integer bound processes. For my workflow, that equates to compiling. Compiling while using 24 compile processes (using all addressable execution pipelines in my dual 6 core, hyperthreaded MP) is about 20% to 25% faster than just compiling with 12 processes.

That might not look like a huge increase, but when you spend all day compiling it adds up!

Now, for actually running simulation jobs, I definitely never use more than 12 processes (or threads). Performance goes south immediately over 12. However, those extra integer pipelines are still useful in this scenario because they allow me to do _other_ things (like more compiling!) while I'm spinning all 12 processors on a simulation. We did some testing with hyperthreads disabled and the machine is definitely less useful while it's fully loaded.
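For anyone who wants to make the same split, a small sketch of picking the job count from the machine's core counts; hw.physicalcpu and hw.logicalcpu are the usual sysctl keys on OS X, and the make invocation is only an example, not any particular project:

```python
# Pick a parallel job count from the Mac's core counts.
# hw.physicalcpu / hw.logicalcpu are the usual OS X sysctl keys;
# the make target is only an example.

import subprocess

def sysctl_int(key):
    return int(subprocess.check_output(["sysctl", "-n", key]).decode().strip())

physical = sysctl_int("hw.physicalcpu")   # real cores, e.g. 12
logical = sysctl_int("hw.logicalcpu")     # with hyper-threading, e.g. 24

# Integer-heavy work like compiling tends to benefit from every logical
# core; FP-heavy solvers usually top out at the physical core count.
compile_jobs = logical
solver_procs = physical

print(f"compiles: -j{compile_jobs}, solver runs: {solver_procs} processes")
subprocess.run(["make", f"-j{compile_jobs}"], check=True)
```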

BTW - Since Snow Leopard, OSX has gotten really good at managing processes in a hyperthreaded environment. It won't do anything stupid like assign two processes to the same physical core if there are other physical cores available. Lion is even better in this regard. In our testing there is no longer any penalty for leaving hyperthreading on (only if you actually try to use them with more FP processes than you have real cores).

Lesson Is: It depends. Hyperthreads can help, but they certainly aren't as good as real cores!
 