Just two posts before yours is a graph that has:

2008 Octo 2.8 at 18,907 (about a 5.8x speedup)
2009 Octo 2.26 at 20,138 (about a 6.4x speedup)

Where are your numbers coming from?

I'm sorry, whenever I think about the 2.8 my mind goes to my own overclocked one. That actually beats the 2.26 in Cinebench multicore and in various Geekbench multicore tests.
So as long as there's no OC option available for the Nehalems, I'll stick with this one.
 
Eh?
Before, they were wrong because the scaling factor for parallel work is better for the 2.26. Now they are wrong because you don't want to look at parallel work at all; just the scalar / single-core numbers are off.

You got that wrong. In my first post I pointed out that the 2.26 GHz machine cannot possibly scale with a factor of 7.81 when all the other machines only scale with a factor of about 6. Which just means that _some_ number must be wrong. You tried to give an explanation why the 2.26 might scale better than the 2.93. The argument is that a slow machine is less affected by bottlenecks elsewhere than a machine with higher clock speed. But following exactly the same argument, the "work per GHz" should be better no matter how many cores you use, and that is not the case according to the numbers shown.
 
Looking at your numbers, if we divide the single-thread scores by the processor speed for the Nehalems we get:

4074/2.93 = 1390 per GHz
3572/2.66 = 1343 per GHz
2039/2.26 = 1022 per GHz

Why is the 2.93 36% faster than would be predicted by the clock difference compared to the 2.26 alone? The 2.26 Nehalem is even 10% slower than the Harpertowns (~1150 per GHz) when compensating for clock frequency. What is going wrong with the 2.26, and shouldn't Turbo Boost be doing something here!?

I can offer one explanation: first, the code in question is not meant to run single-threaded. In real life, you would want to get maximum performance, so you would use multiple threads. People just switched it to single-threaded to get a benchmark number out of it.

So if this app runs slower than it should with four threads on a quad core, or with eight threads on an eight core machine, then the developers would care about it and fix it. If it runs slower than it should with a single thread on an eight core machine, then the developers don't care. Why should they? They write an app to get work done, not to make benchmarkers happy.

One possibility is that the application does some work X that isn't mentioned and doesn't turn up in the benchmark results, and then fills the remaining time in a single thread with work Y that is benchmarked. So the 2.26 GHz machine would spend less real time on work that the benchmark measures. Just a possibility.
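
For anyone who wants to redo the per-GHz arithmetic quoted above with other scores, it's nothing fancier than this (a quick Python sketch; the 4074 / 2.93 pair is simply the figure from the quote, nothing else is assumed):

```python
def per_ghz(single_thread_score, clock_ghz):
    """Normalize a single-thread benchmark score by clock speed."""
    return single_thread_score / clock_ghz

# Using the 2.93 GHz Nehalem figure quoted above:
print(per_ghz(4074, 2.93))  # ~1390 "points per GHz"
```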
 
Looks nice, but not $5000+ nice :p

LOL, that seems to be escaping most people. At this price range and beyond it had better be stomping the available competition, which, btw, doesn't have access to the chip yet. When they do, I want to see their pricing.
 
I'm sorry, whenever I think about the 2.8 my mind goes to my own overclocked one. That actually beats the 2.26 in Cinebench multicore and in various Geekbench multicore tests.
So as long as there's no OC option available for the Nehalems, I'll stick with this one.

How high do you have yours clocked?
 
Divide the score by the processor speed to determine if the score scales linearly with clock. Your first graph had a 2.26 GHz Nehalem score which was anomalously low when compared to the other Nehalem scores...

I didn't do anything like that. It was an actual score that some user actually recorded. He even posted a screen shot of Cinebench displaying the score.
 
Looks nice, but not $5000+ nice :p

Yes, I completely agree! I dunno if it's Apple's fault or Intel's, but the price points of most of their machines, across the board, are completely out of whack now, even for Apple. In the past, all one had to do was select identical parts and one could see that DIY or similar pre-builts were either the same price or more expensive.

The near-death position of the Core 2 Duo as used in desktop units and the identicality (sic) of the Core i7 and X3500 processors now preclude such comparisons from making any sense at all.

Some Apple machines are now thousands of dollars out of whack while others are only technologically impaired at a higher price point.

It's like Steve Jobs stepped out for a second and the overly incompetent stepped in and made a mess (again!!!). :(
 
[iBug], How high do you have yours clocked?

I'm in serious doubt of iBug's claims. He says he has it OC'd with ZDNet's utility and has benchmarked it. But that utility doesn't allow benchmarking. It speeds up the system clock to the same degree as the processors, so the measurements become equalized and meaningless. It also keeps the clock running at the higher rate. One would have to run a utility in the background that reset the clock every 5 or 10 seconds in order to be able to use the clock at all. And you can actually see the second hand spinning at an increased speed! :p So file time-stamps, cron-like timers and scheduled events all break. There is no way that I know of to reset the clock speed separately, as ZDNet's magic is tied to the same hardware registers - so there's just no way. Then he says "after you reboot - it works", but after you reboot ZDNet's utility is gone, and loading it again or at startup just incurs the same problems.

So my questions are:

How does he get his benchmarks?
Why would anyone use a utility that messes up the timing (for music and recording too) system wide?
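
To give an idea of the kind of background hack I mean, something like this is roughly it (just a sketch, assuming the classic ntpdate tool is installed and this is run as root; I have not actually tried it alongside the ZDNet utility):

```python
# Sketch of a background loop that steps the wall clock back to NTP time every
# few seconds, so timestamps stay roughly sane while the tick rate is wrong.
# Assumes ntpdate is installed and this runs with root privileges; untested
# against the ZDNet overclocking utility.
import subprocess
import time

NTP_SERVER = "time.apple.com"   # any reachable NTP server
RESYNC_SECONDS = 10             # the "every 5 or 10 seconds" idea from above

while True:
    subprocess.call(["ntpdate", "-u", NTP_SERVER])  # -u: use an unprivileged source port
    time.sleep(RESYNC_SECONDS)  # note: sleep() itself runs off the fast clock,
                                # so this fires a bit more often than 10 real seconds
```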
 
I'm in serious doubt of iBug's claims. He says he has it OC'd with ZDNet's utility and has benchmarked it. But that utility doesn't allow benchmarking. It speeds up the system clock to the same degree as the processors, so the measurements become equalized and meaningless. It also keeps the clock running at the higher rate. One would have to run a utility in the background that reset the clock every 5 or 10 seconds in order to be able to use the clock at all. And you can actually see the second hand spinning at an increased speed! :p So file time-stamps, cron-like timers and scheduled events all break. There is no way that I know of to reset the clock speed separately, as ZDNet's magic is tied to the same hardware registers - so there's just no way. Then he says "after you reboot - it works", but after you reboot ZDNet's utility is gone, and loading it again or at startup just incurs the same problems.

So my questions are:

How does he get his benchmarks?
Why would anyone use a utility that messes up the timing (for music and recording too) system wide?


I thought you had read my last reply to you. If you reboot your computer after overclocking, the system clock resets to regular speed, and then you can do your benchmarking. But you need to reboot once.
This was also explained on ZDNet's website. By reboot I mean restart, not shut down and boot again. If you shut down, the ZDNet settings are lost.
 
3185 MHz.

So you went from 2.8 -> 3.2. That isn't horrible. With the cooling that the Mac Pro has, you should have been able to hit 3.5 - 3.8. Doesn't the ZDNet tool also increase the memory bus speed? I seem to recall that being a problem with FB-DIMMs; they don't overclock well.
 
So you went from 2.8 -> 3.2. That isn't horrible. With the cooling that the Mac Pro has, you should have been able to hit 3.5 - 3.8. Doesn't the ZDNet tool also increase the memory bus speed? I seem to recall that being a problem with FB-DIMMs; they don't overclock well.

That is the only problem. The CPU would allow much higher overclocking if the tool could OC the CPU only. My CPU temperature doesn't go any higher when I OC to 3185. It could, like you say, easily go to 3.4-3.8.

But the RAM starts to overheat and cause problems. For me 3185 is the limit. I have 8x1 GB sticks and one of them heats up; the others stay around 158 Fahrenheit, while that one maxes out around 176. If I lower the amount of memory and use only the best ones, I can go to 3220 or so.

But there's also an advantage to OC'ing the whole system instead of the CPU only: it speeds up the computer even further than just OC'ing the CPU. So a 2.8 OC'd to 3.2 actually is a faster machine than a non-OC'd 3.2.

I wanted to try Kingston HyperX modules, but their heatsinks are quite big and wouldn't fit in the Mac Pro's "cleverly" designed slots. But those modules can be OC'd much better than regular ones.
[attached image: Kingston HyperX FB-DIMM]
 
It seems like the only people complaining about this one:

A) have never owned an Apple pro desktop.

B) are trying to justify their last generation Mac Pro.

C) or have no clue on the GHz factor.

Fact is, it's a pretty decent update, all things considered. With more and more apps moving towards multi-core optimization, the new Nehalem MPs are a good buy for professionals and prosumers who plan to use them for the long run.
 
Some Apple machines are now thousands of dollars out of whack while others are only technologically impaired at a higher price point.

It's like Steve Jobs stepped out for a second and the overly incompetent stepped in and made a mess (again!!!). :(

I'm sure all of these product plans have been in the works for many months, since long before Jobs went on LOA.
 
You got that wrong. In my first post I pointed out that the 2.26 GHz machine cannot possibly scale with a factor of 7.81 when all the other machines only scale with a factor of about 6. Which just means that _some_ number must be wrong. You tried to give an explanation why the 2.26 might scale better than the 2.93. The argument is that a slow machine is less affected by bottlenecks elsewhere than a machine with higher clock speed. But following exactly the same argument, the "work per GHz" should be better no matter how many cores you use, and that is not the case according to the numbers shown.

We had a bit of a disconnect, but I was responding more to what you wrote in post 81 as opposed to the "root cause" of post 77 being a sampling error. In post 81 you claimed it could not possibly happen because they had the same technology. Combining the two posts, I suppose I shouldn't have interpreted that as "the same technology could not possibly lead to a large gap." It can.

The scaling factor of the 2.26 is larger. The scaling factor does get smaller as you increase GHz. One could probably get into a low 7-ish range by dropping Turbo Boost on the 2.26 or going to an even slower frequency. Keeping the same memory tech, it is possible. Impossible and unlikely are two different things. However, you are right in that a 7.8 score was unlikely and indicative of a measurement error (just because the gap in GHz is nowhere near as large as the gap between the scaling factors).


On the ' "work per GHz" is better no matter how many cores you use' issue... that seems to be your theory. Not mine.
For a fixed number of cores you have less of a memory bottleneck problem as you decrease the GHz (the gap between core speed and memory speed is what drives the problem). Adding more cores or more GHz makes the problem worse if you keep the memory channels constant. Add more memory channels as you go "up" (in GHz or cores) and you can manage the problem.

At some point, with the faster GHz, the caches, prefetch, and branch prediction (if held constant) are going to become less effective on real-world workloads. CPUs/cores already spend lots of time doing nothing but no-ops because they're waiting for data to show up before they can do work. It just gets worse if you push the boundary even more.
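
To put a number on the direction of that argument, here is a toy back-of-the-envelope model (the constants are made up, so don't read the outputs as predictions of the actual Geekbench results; it only shows why the lower-clocked part can end up with the bigger scaling factor):

```python
# Toy model: time per unit of work = compute time (shrinks with clock speed)
# + memory stall time (roughly fixed per access, and contended when many cores
# share the same memory channels). Constants are made up for illustration.
def scaling_factor(ghz, cores, compute=10.0, mem_stall=0.5):
    """Estimated multi-core throughput gain over a single core."""
    t_single = compute / ghz + mem_stall          # one core, no contention
    t_multi = compute / ghz + cores * mem_stall   # crude model: stalls pile up
    return cores * t_single / t_multi

for ghz in (2.26, 2.66, 2.93):
    print(f"{ghz} GHz, 8 cores: scaling ~ {scaling_factor(ghz, 8):.2f}x")
# The output trends downward as GHz goes up: the faster clock loses relatively
# more of its time to the (fixed) memory stalls, so its scaling factor is smaller.
```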
 
It seems like the only people complaining about this one:

A) have never owned an Apple pro desktop.

B) are trying to justify their last generation Mac Pro.

C) or have no clue on the GHz factor.

Fact is, it's a pretty decent update, all things considered. With more and more apps moving towards multi-core optimization, the new Nehalem MPs are a good buy for professionals and prosumers who plan to use them for the long run.

This. Most professional apps have been taking advantage of multi-threading for a long time (which is why we've seen Apple and most other companies offer multi-processor workstation configurations since the G4 days and even before). Even as a single processor, the Nehalem performs much better than its predecessor in these types of programs, and the gap only increases when you introduce multi-processor systems, since QuickPath offers far better scalability than before.

The bottom line is that you will almost certainly see large improvements in performance if you're buying the new Mac Pro for professional, work-related reasons. You won't see as large a jump in performance if you're playing games or doing office work, but you shouldn't be buying a Mac Pro for those things in the first place.
 
Errrrr.
Most stuff right now is single-threaded if you only do one thing at a time. Not sure why folks are going to buy 8-core boxes just to run single-threaded stuff on a single core. Seems like overkill to me. The whole point of having 8 cores is to have competing instruction streams. Otherwise, you have bought a whole lot of transistors that spend most of their time in 'sleep/energy saving' mode.

There is a difference between running just one program/thread at a time and having multiple, possibly limited, single-threaded apps running at the same time.

Every metric in the memory section is scalar and the 2.26 wins most of those too. The only loser is the stdlib Write (surprise: the "older" OS possibly doesn't leverage the "new" approach to CPU architecture/NUMA).


Every metric in the stream section is scalar and the 2.26 wins.

Off the top of your head, what do you think are the top 10 apps
that would take advantage of these new 2.93 octo cores????:confused:

I'd really like to know... anybody???
 
Off the top of your head, what do you think are the top 10 apps
that would take advantage of these new 2.93 octo cores????:confused:

I'd really like to know... anybody???

Pretty much any professional stuff will be well optimized for multi-threading and able to take advantage of the new processors. In terms of pro stuff: Cinema 4D, 3ds Max, LightWave, Maya, FCP, After Effects, Photoshop CS4, plus most encoding stuff (e.g. DivX, WME, x264), plus technical stuff (e.g. Mathematica, Pro/E Mechanica, etc.).
 
Pretty much any professional stuff will be well optimized for multi-threading and able to take advantage of the new processors. In terms of pro stuff: Cinema 4D, 3ds Max, LightWave, Maya, FCP, After Effects, Photoshop CS4, plus most encoding stuff (e.g. DivX, WME, x264), plus technical stuff (e.g. Mathematica, Pro/E Mechanica, etc.).

Yes, basically most pro apps use multiple cores, but not all of them will scale as well as the Geekbench tests.
 
Pretty much any professional stuff will be well optimized for multi-threading and able to take advantage of the new processors. In terms of pro stuff: Cinema 4D, 3ds Max, LightWave, Maya, FCP, After Effects, Photoshop CS4, plus most encoding stuff (e.g. DivX, WME, x264), plus technical stuff (e.g. Mathematica, Pro/E Mechanica, etc.).

OK, so I've got CS4 on the way, and I plan on getting FCP when the upgraded/new version comes out, I hope soon... After Effects sounds interesting...

How about CorelDRAW? iPhoto and Aperture? I've got like 8000+ pics; slow as mole-asses just to do anything, or search for anything...
 
I thought PowerPC was so superior to Intel chips. That they "toast" the Intel chips.... What's with all this droolin' over an Intel i7 processor? I mean, Steve Jobs said PowerPC toasts Intel. How can he be wrong? No way he is wrong. You are all doped....
 
I thought you had read my last reply to you. If you reboot your computer after overclocking, the system clock resets to regular speed, and then you can do your benchmarking. But you need to reboot once.
This was also explained on ZDNet's website. By reboot I mean restart, not shut down and boot again. If you shut down, the ZDNet settings are lost.

Oh I see. You must have a newer Mac.

http://www.zdnet.de/enterprise/feedbacks/0,39023838,39192217+20083736-20000004c,00.htm?PROCESS=show
Opinion:
If you have a MacPro3.1 (2008 model) you can reboot to get the clock running correctly. For earlier versions of the Mac Pro there is currently no solution.

Opinion:
Overclocking my MacPro results in my system clock running too fast. At the end of one day my clock will be 20 minutes ahead.

Know of any other options? Someone said something about resetting it from the shell or something???
 
This. Most professional apps have been taking advantage of multi-threading for a long time (which is why we've seen Apple and most other companies offer multi-processor workstation configurations since the G4 days and even before). Even as a single processor, the Nehalem performs much better than its predecessor in these types of programs, and the gap only increases when you introduce multi-processor systems, since QuickPath offers far better scalability than before.

The bottom line is that you will almost certainly see large improvements in performance if you're buying the new Mac Pro for professional, work-related reasons. You won't see as large a jump in performance if you're playing games or doing office work, but you shouldn't be buying a Mac Pro for those things in the first place.

I think you and eeboarder are either confused by bar-graphs, are working for Apple, or have a very different idea of what "significant improvement" means.

Between the new 2009 MP 2.66 and my ancient MacPro v1.1 at 2.66 x8 there's a 35% to 40% increase when all the cores (16 and 8 respectively) are hitting 100% load! That's dismal! The rule of most professionals has been to upgrade when there's a 100% increase (AKA twice as fast) for about the same price.

There's only a 20% to 25% increase for something like Photoshop or pretty much anything that isn't maxing out your cores.

The new 2009 2.66 is $4,700 in its wimpiest configuration. My v1.1 MacPro was, I think, $2,500, and $2,600 all together after upgrading. That's an 80% price increase for a 25% speed differential. Sorry, that totally sucks!

What was the 2008 Mac Pro v3.1 at 2.8 GHz? Oh yeah, $2,800.

The new 2009 MacPro 2.66 is between 10% slower and 10% faster in everyday use than the 2008 2.8. No question about it, and Snow Leopard will probably NOT be able to recover enough ground to make up the difference, if it makes up any at all. When all cores are at 100%, the new 2.66 is just 21% faster than the older 2.8 octad.

The price difference is 68%. You really think a 68% price increase is justified for a machine that is sometimes slower and only between 10% and 21% faster in some cases???

Me? Nope!!! No way. I just did real numbers and real percentages based on real benchmarks and real prices, and the results are in. This year's Mac Pros are a total rip-off compared to last year's. I really mean a total screaming-in-your-face, raising-hell, boycotting kind of rip-off too. So I guess we can expect lots and lots of noise on the forums about all this. I won't add to it, but I bet others will! PS: And it's even worse if we go outside Apple and look at machines from other vendors.

This really IS the year to skip upgrades IMO. Maybe next year if they leave the prices and speeds the same but offer 6 or 8 cores per chip.
Last year you could upgrade to a machine that was MORE than twice as fast for just $200 above the previous year's. This year we're bending over and taking an almost $2,000 ramming and getting a box which is sometimes even slower on top of that!
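
If anyone wants to check my math, here it is spelled out (the prices and speed figures are the same ones I used above, nothing new; Python just to keep me honest):

```python
# Prices from above: my upgraded MacPro v1.1, the 2008 2.8 octad, and the
# cheapest 2009 2.66 octad configuration.
old_2006_price = 2600    # v1.1 after upgrades (approx.)
old_2008_price = 2800    # 2008 Mac Pro 2.8 GHz octad
new_2009_price = 4700    # 2009 2.66 octad, base configuration

jump_vs_2006 = (new_2009_price / old_2006_price - 1) * 100   # ~80%
jump_vs_2008 = (new_2009_price / old_2008_price - 1) * 100   # ~68%

print(f"vs. my v1.1:  +{jump_vs_2006:.0f}% price for roughly 20-40% more speed")
print(f"vs. 2008 2.8: +{jump_vs_2008:.0f}% price for -10% to +21% speed")
```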

 