(octo-harpertown Vs quad-nehalem) + 10.6 = ???

Hello,

Strange and concise title, but it says it all. Right now, the quad 2.26 nehalem is slightly more powerful than the octo 2.8 harpertown (on most tests, based on what I've read).

But if 10.6 comes in and allows the machine to make much better use of the 8 cores, couldn't that balance shift?

I'm planning on buying a Mac Pro soon, and I've been a fan of refurbs for a long time now. There's a very nice octo-2.8 Harpertown Mac Pro in there right now, cheaper than the quad-2.26 Nehalem.

Can anyone help me figure out grand central? Is there even enough info on GC yet?

Thanks
 
10.6 won't magically make things run faster. Code has to be written from the ground up to use multiple cores. Snow Leopard comes with OpenCL built in, so writing parallel code should be easier, and in the years down the line programs should start using the extra cores. This will take time, though. People seem to think that Snow Leopard is a magic wand that will make single-threaded code magically run faster by using multiple cores. If that were true, Apple would destroy the market, since it's basically the dream target of many programmers and compiler writers.
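Just to make "written from the ground up" concrete, here is a toy sketch of my own (plain C with POSIX threads; the 4-thread split is just an assumption for a quad-core box, and none of this comes from Apple) showing the bookkeeping a developer has to add by hand even for a trivial array sum:

/* Toy sketch: summing an array with POSIX threads. The single-threaded
 * version is one loop; the threaded version needs explicit splitting,
 * thread creation, and joining. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4          /* assumption: a quad-core machine */

static double data[N];

struct chunk { size_t start, end; double partial; };

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->partial = 0.0;
    for (size_t i = c->start; i < c->end; i++)
        c->partial += data[i];
    return NULL;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t tid[NTHREADS];
    struct chunk ch[NTHREADS];

    for (int t = 0; t < NTHREADS; t++) {
        ch[t].start = (size_t)t * N / NTHREADS;
        ch[t].end   = (size_t)(t + 1) * N / NTHREADS;
        pthread_create(&tid[t], NULL, sum_chunk, &ch[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += ch[t].partial;
    }
    printf("sum = %f\n", total);
    return 0;
}

The single-threaded version of this is a three-line loop; everything else is overhead the developer has to design, write, and debug, which is exactly why it won't happen overnight.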

A lot of single-threaded software simply can't be sped up with multithreading anyway. Generally speaking, stuff that currently uses only one core will never be able to successfully use more than one. If you're using software that can already noticeably make use of two or more cores, then that might get some speed boosts.

Look at what you're doing at the moment. What tools and software do you use? Go through your standard everyday tasks with Activity Monitor open and see how much processor time you're using. For most users, even "pro" users, even 4 cores is currently overkill. Now, if you do specific tasks that *can* use more cores, then 8 cores is great: if you have to run parallelised N-body simulations, it's great; if you have to run multiple instances of single-threaded tasks, it's great. But if you're just doing things like applying Photoshop filters and general web browsing, you're not going to use those 8 cores, be it under 10.5 or 10.6.

Honestly, you'll know if you can use 8 cores. If you only think you might, then chances are you can't, and you'd be better off with fewer, faster cores.
 
I don't know what tests you read; maybe they depend heavily on memory bandwidth.

8 virtual cores can't beat 8 real cores in terms of CPU performance (for example, Skulltrail systems beat a single Nehalem in Cinebench).

Two virtual cores in one physical CPU share execution units, so if one virtual core is already using (for example) the FPU, the second one has to wait.
 
Hello,

Strange and concise title, but it says it all. Right now, the quad 2.26 nehalem is slightly more powerful than the octo 2.8 harpertown (on most tests, based on what I've read).
There is no quad 2.26, and no quad-anything Nehalem will outperform last year's octo 2.8.
 
Hello,

Strange and concise title, but it says it all. Right now, the quad 2.26 nehalem is slightly more powerful than the octo 2.8 harpertown (on most tests, based on what I've read).

I don't know what tests you read; maybe they depend heavily on memory bandwidth.

8 virtual cores can't beat 8 real cores in terms of CPU performance (for example, Skulltrail systems beat a single Nehalem in Cinebench).

Two virtual cores in one physical CPU share execution units, so if one virtual core is already using (for example) the FPU, the second one has to wait.

He's probably talking about this graph that I assembled:

And if so, he's talking about 8 physical cores vs. 16 virtual cores (on 8 physical). So it's 8 vs. 8.

But even with all that, I don't believe for a second that the 2.26 octad is faster than the 2.8 octad. I think the one person who submitted the benchmark for the 2.8 probably got the lowest score possible for that test in his particular environment (maybe he had only 2GB of RAM and background tasks running, etc.), and the 2.26 user's submission was the best possible, making it appear as if the 2.26 beat the 2.8.

What needs to be considered here is that these benchmarks were submitted by users here with wildly different environments, not at all well suited for proper comparisons. The graph was meant to give a VERY general impression of how the various machines performed for the single task of rendering in a 32-bit rendering engine. This is 32-bit math with a little memory copying, running as fast as the processor will allow.

What it really shows more than anything else is how the individual processors scale between single-thread execution and multi-thread execution. Perhaps the most important mark here is the Multicore Speedup percentage.
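(For what it's worth, the pattern behind that speedup percentage is roughly Amdahl's law. The little calculation below is purely my own illustration with an assumed 80% parallel fraction; it is not how the graph's numbers were produced.)

/* Toy illustration of Amdahl's law: if a fraction p of a job parallelises
 * perfectly and the rest is serial, the best possible speedup on n cores
 * is 1 / ((1 - p) + p / n). */
#include <stdio.h>

int main(void)
{
    double p = 0.80;                      /* assumed parallel fraction */
    int cores[] = { 1, 2, 4, 8, 16 };

    for (int i = 0; i < 5; i++) {
        int n = cores[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d cores -> %.2fx speedup\n", n, speedup);
    }
    return 0;
}

Even with 80% of the work parallelised, 8 cores only buy you roughly a 3.3x speedup, which is why the multicore numbers never scale with the core count.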

Of course the 2.8 octad is going to feel faster and be faster at almost everything we do with our machines (over the 2.26 octad, I mean). This is indicated somewhat by the green bar in the graph linked above, where the 2.8 is shown to be very much faster than the 2.26.



About Grand Central: it's supposed to speed up all or most aspects of OS X. It's unclear to me how much impact, if any, it will have on applications that weren't compiled to take advantage of it.
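From what Apple has said so far, Grand Central (the libdispatch C API) still needs the application itself to hand it work. Below is my rough sketch of the kind of call a developer would have to add, based on the publicly described C interface, so treat the details as provisional rather than as Apple's final word:

/* Rough sketch of handing a loop to Grand Central Dispatch (libdispatch).
 * The point is that the application code has to be changed to make this
 * call; the OS can't do it for you. Needs a blocks-aware compiler. */
#include <dispatch/dispatch.h>
#include <stdio.h>

#define N 1000000

static float in[N], out[N];

int main(void)
{
    for (size_t i = 0; i < N; i++)
        in[i] = (float)i;

    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* GCD decides how many worker threads to use based on the cores
     * available; iterations of the block may run on different cores. */
    dispatch_apply(N, q, ^(size_t i) {
        out[i] = in[i] * 2.0f;
    });

    printf("out[10] = %f\n", out[10]);
    return 0;
}

If an application never makes a call like this (or uses something built on top of it), Grand Central has nothing to parallelise for it, which is why I doubt existing binaries will see much benefit.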
 
Measuring single-threaded apps is pointless. It's a dying standard, much like supporting PowerPC code is now and 68K code was 10 years ago.
 
Measuring single-threaded apps is pointless. It's a dying standard, much like supporting PowerPC code is now and 68K code was 10 years ago.

I don't think that is a good analogy. Though there will be more multi-threaded applications, some processes are inherently serial in nature. As the saying goes, though one woman can have a baby in nine months, nine women can't produce a baby in one month. A good book to read is "The Mythical Man Month" by Brooks. (Though this is about the human side of things rather than computer processors.)

Even given the advances predicted in Snow Leopard, it will be a major investment for, say, Adobe to rewrite Photoshop, and they will only do so as and when it makes economic sense. I think single-threaded application speed will be important for quite a few years yet.

It is a very different situation from the underlying architecture changing (as in PowerPC to Intel): such changes have to happen (the developer has little choice) and are also relatively straightforward (much of the code can simply be recompiled). Single-threaded applications won't suddenly stop running on multi-core architectures.
 
Hello,

Thx for all the replies... Sorry for mixing up the numbers for the quad 2.66 nehalem with the octo 2.26 nehalem...

I'm still surprised that nobody sees a 2.8 octo harpertown catching up with the 2.66 quad nehalem once 10.6 hits.

Maybe I'd been putting too much hope into grand central...

@madisontate: can you give me more info on how nehalem is crippled pre-10.6? And are the harpertowns so crippled as well?

Thanks
 
Measuring single-threaded apps is pointless. It's a dying standard, much like supporting PowerPC code is now and 68K code was 10 years ago.

PPC is far from dead. OS X on PPC may be dead, but IBM is still making mainframe and workstation systems with PPC-based chips (POWER5/POWER6-series CPUs). Not to mention the Xbox 360, PS3, and Wii are all running PPC variants (also built by IBM, of course). Terrasoft also produces the PowerStation, which uses a 970FX-based CPU (the same CPU as the Apple G5s).

Also, people still use the Motorola 68000 architecture on many different platforms, such as phones and PDAs. Not two years ago I learned assembly on the 68000 in college.
 
Measuring single-threaded apps is pointless. It's a dying standard, much like supporting PowerPC code is now and 68K code was 10 years ago.

No. Most applications will never be multithreaded. Ever. They physically and logically cannot be with present processor architecture. Until processors radically change, neither will this.

As just one very simple example of this, consider how a computer calculates Pi: it calculates one value and uses that to calculate the next, that result is used to calculate the next, and so on. In this very simple example, one of the cores would have to time-travel in order for multithreading to be possible. There are many such procedural algorithms in very many applications, making it impossible for them to multithread on current processor architecture.
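A toy version of what I mean, using one of the textbook series for Pi (this is purely my own illustration):

/* Toy illustration of a loop-carried dependency: summing the series
 * pi/2 = 1 + 1/3 + (1*2)/(3*5) + (1*2*3)/(3*5*7) + ...
 * Each term is built from the previous one, so as written the loop
 * cannot simply be split across cores. */
#include <stdio.h>

int main(void)
{
    double term = 1.0;   /* the k = 0 term */
    double sum  = 1.0;

    for (int k = 0; k < 60; k++) {
        term *= (k + 1.0) / (2.0 * k + 3.0);  /* depends on previous term */
        sum  += term;
    }
    printf("pi ~= %.12f\n", 2.0 * sum);
    return 0;
}

As written, iteration k can't even start until iteration k-1 has produced its term, so simply splitting the loop across cores gets you nothing.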

Pretty much what we have today, speaking of multithreaded apps, is all we're gonna get. With very few exceptions, what can be multithreaded already has been.

We as Apple users are in for some big speed increases as more and more apps go 64-bit, though!
 
PPC is far from dead. OS X on PPC may be dead, but IBM is still making mainframe and workstation systems with PPC-based chips (POWER5/POWER6-series CPUs). Not to mention the Xbox 360, PS3, and Wii are all running PPC variants (also built by IBM, of course). Terrasoft also produces the PowerStation, which uses a 970FX-based CPU (the same CPU as the Apple G5s).

Also, people still use the Motorola 68000 architecture on many different platforms, such as phones and PDAs. Not two years ago I learned assembly on the 68000 in college.
None of them run OS X, do they?

No. Most applications will never be multithreaded. Ever.
Clearly you have your mind set on this answer. You will be very easily proven wrong, as you already have been.
 
Clearly you have your mind set on this answer. You will be very easily proven wrong, as you already have been.


It sounds set because this has been stated hundreds and thousands of times by developers already, yet still, after 8 years, there seems to be this body of users who think that very soon now all (or even most) of their apps are going to magically take full advantage of MT/HT.

Hehe.. it's been going to happen "real soon now" for the past 6 or 7 years. :p

And still the developers say to their alpha and beta teams: Nope, never gonna happen. So you can see it's not really a matter of being set in some opinion or caring about being right or wrong. I'm just sharing what I've learned.

I have a vested interest in being wrong actually. All my machines are multi-core / multi-processor. I want every developer to spend large portions of their budget catering to me by redesigning their application bases in order to squeeze that extra 10% of performance out of them, heck yeah!
 
"real soon now" as in "already happened". Try finding programs today that don't take advantage of at least 2 processors.
 
None of them run OS X, do they?

If you read my post, I answered that ("OS X on PPC may be dead, ..."). The issue is that you claimed coding for these platforms is dead, which is wrong: both platforms are alive and kicking, just not so much in the Mac world.
 
Tesselator is right on here. For a number of years to come (certainly more than the practical working lifetime of either the 2.8 octad or new nehalem 2.26 octad) the great majority of applications will fail to take much, if any, advantage of multiple cores. A lot of people really do NOT understand what's involved to make that happen. It is far from trivial. It in no way compares to PPC vs. Intel.
 
Tesselator is right on here. For a number of years to come (certainly more than the practical working lifetime of either the 2.8 octad or new nehalem 2.26 octad) the great majority of applications will fail to take much, if any, advantage of multiple cores. A lot of people really do NOT understand what's involved to make that happen. It is far from trivial. It in no way compares to PPC vs. Intel.

Exactly. Tesselator knows what he's talking about. If anyone has any doubts about programming multithreaded applications, go ahead and give it a try. Sure, sometimes it can be as simple as throwing in an OpenMP directive or two, but most of the time it takes a serious amount of work, which often doesn't even scale all that well across multiple processors.
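For anyone who hasn't seen it, this is what the "easy" OpenMP case looks like. It's a minimal example of my own, and it only works because every loop iteration is independent:

/* The "easy" case: a loop whose iterations are independent, parallelised
 * with a single OpenMP directive. Compile with an OpenMP-capable compiler
 * (e.g. gcc -fopenmp). Most real code is not this tidy. */
#include <stdio.h>

#define N 10000000

static double a[N];

int main(void)
{
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        a[i] = (double)i * 0.5;   /* each iteration touches its own element */
        sum += a[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}

Real application code is rarely this tidy: shared state, ordering requirements, and I/O all get in the way, and that's where the serious work starts.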

What most people don't seem to realise is that multithreading and multiprocessing are only being used because the performance of single cores can't keep increasing with Moore's Law anymore. Silicon manufacturers are rapidly approaching the limits of how small things can be made using the same techniques as for the last 20-odd years; quantum interference will start to become a problem. The speed of individual processors is also becoming an issue. Why do you think processors haven't really got much over 3GHz over the last decade? First they ramped the MHz up, then they started to hit a limit and so ramped up the amount of work a processor could achieve per MHz, and now that they're approaching those limits, the only other option is to use more processors.
 
"real soon now" as in "already happened". Try finding programs today that don't take advantage of at least 2 processors.

Hahahaha. Ya. Were you born yesterday?

P.S. Even if every single program today did use at least 2 processors... that means it took 10 years to get here. So by that logic, in 10 more we'll see everything using 4 cores.
 