maxvamp:

I have thrown down the gauntlet.
What are you trying to prove by comparing lists of programs? You've dug up some high-profile programs with intermittent threading, but that doesn't say anything about how useful two processors are, let alone four. Like I said, it's a case of serious diminishing returns, and you are way too excited about it.

Please provide a list of non-threaded Mac apps.
Consider the applications that you didn't list: games, for example, plus most command-line tools, compressors, encryptors, image viewers, and PDF viewers. And in your list, web browsers and Toast don't count (nothing there takes meaningful advantage of multiple CPUs).

( DOS == singleThreaded ) && ( OSX != DOS );
This is obnoxious. DOS implies single-threaded, but single threads do not and never will imply DOS, or even anything bad. Single threads are the natural state of programs, and single-thread performance will remain very important.
 
Here is the core point

I have seen it written several times on this thread alone that most, if not nearly all, programs are single threaded. I then get told I am getting too excited.

This argument I cannot let stand, as the base argument that all apps are single threaded is absolutely not true.

Now, there is a difference between effective parallel processing and just plain ole threading, but both can be very useful in a multithreaded environment such as OS X.

If an application can have a main thread and several monitors, then the workload can be split across several processors, or time-sliced if needed. If a user only ran one application at a time, multiple processors would be a waste in this case. I, however, take the stand that people do many things at once with their machines. They have iTunes playing while working on bills, maybe Word open, and Safari up to do some online banking.

While none of these apps will bring down a processor alone, the monitors they have, if all crammed into a single thread, would bog down the system with just these select few apps open. Threading will keep the system responsive, even though the processors are not staying at 100%.
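
A quick Java sketch of what I mean (class name and numbers made up): move the long-running work onto its own thread, and the main thread stays free to respond while the OS schedules the two threads on whatever processors are available.

public class Responsive {
    public static void main(String[] args) throws InterruptedException {
        // Worker grinds through a long job on its own thread.
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 2_000_000_000L; i++) sum += i;
            System.out.println("worker done: " + sum);
        });
        worker.start();

        // Meanwhile the main thread keeps responding.
        for (int tick = 0; tick < 5; tick++) {
            System.out.println("main thread still responsive, tick " + tick);
            Thread.sleep(200);
        }
        worker.join();
    }
}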

There are other aspects to threading besides driving the CPUs at 100% for every action, and most apps now are multithreaded. To say otherwise, no matter the context, is a lie.

One final note. Several of you basically, indirectly use Amdahl's Law to try to say all apps are single threaded, and to send the message that multiple procs are a waste. I have to comment that yes, while Amdahl said that there are diminishing returns from multi-proc systems, there is still some performance to be gained. You do him a great disservice when making such comments. Read the link I have posted.
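
For reference, here is Amdahl's Law worked out with some made-up numbers (the 80% parallel fraction is purely illustrative), as a little Java arithmetic:

public class Amdahl {
    // Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n),
    // where p is the parallel fraction and n the processor count.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.8; // illustrative parallel fraction
        for (int n : new int[] {1, 2, 4, 8}) {
            System.out.printf("n=%d -> %.2fx%n", n, speedup(p, n));
        }
        // Prints 1.00x, 1.67x, 2.50x, 3.33x; the ceiling as n grows is
        // 1/(1-p) = 5x. Diminishing returns, but not zero returns.
    }
}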

Max.
 
ddtlm

Consider the applications that you didn't list: games, for example, plus most command-line tools, compressors, encryptors, image viewers, and PDF viewers. And in your list, web browsers and Toast don't count (nothing there takes meaningful advantage of multiple CPUs).

The big reason people buy Macs is not to sit at command lines or play games. Those tasks are more for the UNIX and Windows worlds. Toast does video encoding, and it has several offspring apps, such as Disk Doctor, that make use of multithreading.

I think what needs to be defined here is the argument. Here are some facets of what this thread has become:

1. All apps are single threaded.
2. All apps are multi threaded.
3. Multi threaded apps don’t make use of multiple processors.
4. Most multi threaded apps do not effectively make the best use of multi proc systems.
5. Only the science community can make use of multiprocessors.
6. Shakespeare don't know English right.

We should narrow our focus on the topic, because I am thinking at this point that arguments 1 -> 5 are stepping on each other. I only disagree with points 1, 3, 5, and 6.

Max.
 
Response vs. performance

'Responsiveness' of an interface has two elements: The size of a timeslice, and the event handling model. It has nothing, directly, to do with SMP. UNIX folks have long enjoyed 'responsiveness,' and x86 folks generally went through this phase of thread-obsession in 1992 when OS/2 2.0 was released; again when NT 3.1 came out in 1993, and again when Win95 brought threading to the average Joe.

You should note that SMP was not common at those times.

The primary reason for the increased responsiveness of these systems was that the GUI no longer ran in a single 'thread' or 'process' (these are almost the same thing to a CPU), therefore an app that spent a long time processing an activity would not prevent the GUI from updating or responding. I'm sure the old Macs suffered from this in some way.

The key factor of responsiveness at the OS-GUI level became how quickly a thread could become the active thread; the longer a timeslice given to a thread, the longer it takes for the next thread to get active. The tradeoff is that you can have very short timeslices, and then waste all of your CPU in the switching overhead. The only benefit you get from SMP is that since you obviously have twice as many timeslice switches, you can theoretically cut your response time in half. The tradeoff still exists though; instead of simply doubling your speed with the same number of context switches (one CPU twice as fast), you've doubled your speed, but also doubled your context switches (two CPUs, same speed). This is an example of Amdahl's Law.
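
To put rough numbers on it (all made up, assuming a strict round-robin scheduler): a newly runnable thread can wait behind every other runnable thread, so doubling the CPUs roughly halves the worst-case wait while leaving the per-CPU switching overhead alone.

public class Dispatch {
    public static void main(String[] args) {
        int runnable = 10;     // threads currently wanting the CPU
        double sliceMs = 10.0; // length of one timeslice
        for (int cpus : new int[] {1, 2}) {
            double worstWaitMs = (runnable - 1) * sliceMs / cpus;
            System.out.printf("%d CPU(s): worst-case wait ~%.0f ms%n",
                    cpus, worstWaitMs);
        }
        // ~90 ms on one CPU, ~45 ms on two: response time is halved, but
        // each CPU still performs the same number of context switches.
    }
}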

Now, the event model is the other key factor, and it is generally a given that you are going to at least not kill the whole OS-GUI once you have a true multitasking OS. The app creator can continue the effort and ensure that the app itself never freezes its own GUI either by multithreading itself. This is not really done to benefit from SMP; the benefit is achieved on a single processor as well. The actual code of what the app does is not automatically multithreaded, even if some GUI threads happen to automatically be created by the frameworks. The programmer would have to decide it was worth the effort to write certain parts in a multithread manner, and it would still be subject to Amdahl's Law.
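
A hedged Java/Swing sketch of that division of labor (all names made up): the framework hands you the event-dispatch thread for free, but the app only stays responsive during heavy work because the programmer explicitly moved that work onto its own thread.

import javax.swing.*;

public class NoFreeze {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            JButton button = new JButton("crunch");
            JLabel status = new JLabel("idle");
            button.addActionListener(e -> {
                status.setText("working...");
                new Thread(() -> {              // the programmer's decision
                    double x = 0;
                    for (int i = 1; i < 50_000_000; i++) x += Math.sqrt(i);
                    final double result = x;
                    SwingUtilities.invokeLater( // hand result back to GUI thread
                            () -> status.setText("done: " + result));
                }).start();
            });
            frame.add(button, java.awt.BorderLayout.NORTH);
            frame.add(status, java.awt.BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);
        });
    }
}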

I'm not sure what 'monitors' are; perhaps you refer to Java VM, perhaps to waiting for something to happen in general. But they are almost never implemented as a continuous CPU intensive polling effort. Instead, they almost always end up simply waiting for the OS to inform them that something has happened. This requires very little CPU power.
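
If 'monitors' means waiting for events, the usual shape is something like this Java sketch (names made up): the waiting thread blocks in wait() and burns essentially no CPU until it is notified, rather than polling in a loop.

public class MonitorDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    void await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {
                lock.wait(); // sleeps; the OS wakes it on notify
            }
            System.out.println("event received");
        }
    }

    void signal() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll(); // wake the waiter; no polling needed
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo m = new MonitorDemo();
        Thread waiter = new Thread(() -> {
            try {
                m.await();
            } catch (InterruptedException ignored) {
            }
        });
        waiter.start();
        Thread.sleep(100); // the waiter consumes no CPU during this
        m.signal();
        waiter.join();
    }
}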

It also is rare that apps are 'given' a single CPU (called CPU affinity). Solaris allows it (perhaps Linux 2.6?), but the application (or admin) also has to specifically ask for it. It is common for applications where the cache is more important than CPU speed; databases are a common case. Instead, what usually happens is that threads end up hopping among CPUs, so the shorter your timeslices, the greater the chance the CPU cache gets thrown out and wasted. Another example of Amdahl's Law.

The bottom line is that multithreaded responsiveness is not a justification for SMP; SMP is only worth it when you've got specific apps that do not suffer as much from Amdahl's Law, such as some of the video apps you mentioned. The industry is being forced into SMP by the difficulty of making a single CPU run faster. If SMP were better than faster CPUs, Intel would have happily sold twice or four times as many CPUs in each computer since 1996, when NT 4.0 started showing up on advanced users' desktops.

maxvamp said:
I have seen it written several times on this thread alone that most, if not nearly all, programs are single threaded. I then get told I am getting too excited.
...
While none of these apps will bring down a processor alone, the monitors they have, if all crammed into a single thread, would bog down the system with just these select few apps open. Threading will keep the system responsive, even though the processors are not staying at 100%.

There are other aspects to threading besides driving the CPUs at 100% for every action, and most apps now are multithreaded. To say otherwise, no matter the context, is a lie.
...
Max.
 
Amdahl,

I don't entirely disagree with you. A classic example of an application that spawns more than 1,000 threads is Exchange 2000. Multiple processors do not improve this application by anywhere near 100% per processor; most of those threads are just waiting to do something, sitting in a thread pool. iMovie, on the other hand, spawns a new thread for every mix edit you put on the timeline and actually starts a render in the background. Multiple procs will cut the time it takes for multiple renders.
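
A minimal Java sketch of the thread-pool pattern (sizes and names are made up): most workers just sit idle waiting for tasks, the way those Exchange threads do, while the submitted renders can genuinely overlap on multiple processors.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RenderPool {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 3; i++) {
            final int clip = i;
            pool.submit(() -> {
                // stand-in for rendering one timeline edit in the background
                double x = 0;
                for (int j = 1; j < 20_000_000; j++) x += Math.sqrt(j);
                System.out.println("clip " + clip + " rendered (" + x + ")");
            });
        }
        // The fourth worker just waits in the pool, costing almost nothing.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}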

As I was trying to point out in earlier posts, there is no shortage of multithreaded, dual-processor-capable apps for the Mac platform. More fortunately, they happen to be the mainstream apps that people buy a Mac for (multimedia).

One final point.

We need to establish context. If we talk about app-specific threading and tasking, then on a per-app basis a multiple-CPU system may seem to be a waste. For people doing multiple simultaneous things, the context changes. Since the OS is now dealing with more processes and threads, the system as a whole can make better use of multiple processors.

I guess it breaks down to this:

Grandma in Peoria writing an email probably won't see any benefit from dual procs, or anything above a 600MHz machine for that matter. She probably bought a Dell.

Grandson, though, working in the basement writing the next great program, making millions through web design, or making the next great American film, will clearly be able to make good use of both processors for the variety and number of tasks he will be doing.

Son or daughter down at NIST will need XGrid **and** multi-procs to get their work done. They might need a generator too for all that extra power draw.

Max.
 
Intel, for 20 years, has been selling kHz, MHz and GHz.

Amdahl said:
'Responsiveness' of an interface has two elements: The size of a timeslice, and the event handling model. It has nothing, directly, to do with SMP. UNIX folks have long enjoyed 'responsiveness,' and x86 folks generally went through this phase of thread-obsession in 1992 when OS/2 2.0 was released; again when NT 3.1 came out in 1993, and again when Win95 brought threading to the average Joe.

...

The bottom line is that multithreaded responsiveness is not a justification for SMP; SMP is only worth it when you've got specific apps that do not suffer as much from Amdahl's Law, such as some of the video apps you mentioned. The industry is being forced into SMP by the difficulty of making a single CPU run faster. If SMP were better than faster CPUs, Intel would have happily sold twice or four times as many CPUs in each computer since 1996, when NT 4.0 started showing up on advanced users' desktops.

Intel has a long history of selling MHz deltas as a reason to buy.

I trust IBM to not be directed by the sales department, but by people competent in computer science.

I think that Java programmers and C/C++ programmers can and do make effective use of multiprocessor machines. It could be argued that the JVM is happiest, and was really designed, to run on a quad-processor box.

VB programmers still haven't caught on to multithreading, so on this point you are right. But I believe the fault can be laid at Microsoft's doorstep. It's interesting that they don't teach multithreading in the MCSD courses. (Yes, I know they teach "how to use a thread" in a GUI, but they don't teach "how to write a thread"; see the sketch below.) I believe this is an interesting omission.
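
For what I mean by "writing a thread", a minimal Java sketch (names made up): you supply the run() body yourself instead of just calling start() on something a framework handed you.

public class Worker implements Runnable {
    private final String name;

    Worker(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        // The code that actually executes on the new thread.
        for (int i = 0; i < 3; i++) {
            System.out.println(name + " step " + i);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Worker("background")); // writing the thread
        t.start();                                       // using the thread
        t.join();
    }
}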

You bring up databases, again another example of an application that can make effective use of dual processors. There are also web servers, plus the other apps mentioned.

It seems to me that if you use any development tools or a pro-level app, there is a clear advantage to a dual-core chip, especially when all the chip builders are hitting the ceiling on GHz.

The question isn't whether a dual-core 750MHz processor would be faster than a 1.5GHz G4, but whether a dual-core 1.5GHz G4 would be faster than a single-core 1.5GHz. I think the answer is yes.
;)
 
maxvamp:

I have seen it written several times on this thread alone that most, if not nearly all, programs are single threaded. I then get told I am getting too excited. This argument I cannot let stand, as the base argument that all apps are single threaded is absolutely not true.
So is this a counter-argument to me? 'Cause I don't believe I've made any claims about everything being single threaded.

I, however, take the stand that people do many things at once with their machines. They have iTunes playing while working on bills, maybe Word open, and Safari up to do some online banking.
This is not a scenario that would show a dual being noticeably faster than a single.

There are other aspects to threading besides driving the CPUs at 100% for every action, and most apps now are multithreaded. To say otherwise, no matter the context, is a lie.
Amdahl did a good job on this one.

Several of you basically, indirectly use Amdahl's Law to try to say all apps are single threaded, and to send the message that multiple procs are a waste.
If you are trying to argue with me, then you've totally missed my point. Note I keep accusing you of being too excited about it, which is a different thing from claiming that multiple processors are useless.

The big reason people buy Macs is not to sit at command lines or play games.
Well in that case I hope Apple likes its niche market making movies and whatnot.

MikeBike:

It could be argued that the JVM is happiest, and was really designed, to run on a quad-processor box.
I'm sceptical.

You bring up databases, again another example of an application that can make effective use of dual processors. There are also web servers, plus the other apps mentioned.
Only matters for high-traffic servers, not workstations.

The question isn't whether a dual-core 750MHz processor would be faster than a 1.5GHz G4, but whether a dual-core 1.5GHz G4 would be faster than a single-core 1.5GHz. I think the answer is yes.
How about a dual 1.5 vs. a single 1.7 with a couple megs of extra on-die cache? Otherwise the dual is larger and costs more. The single will be faster most of the time.
 
This thread is threaded

Yes, I think we are in general agreement. At this point, it is just down to which apps, which markets, and how much does it cost.

maxvamp said:
Amdahl,

I don't entirely disagree with you.
...
Max.
 
In Defense of Java

I think there should be no doubt that the JVM can make effective use of a multiprocessor, let alone a multi-core multiprocessor. Remember, Sun has certified Java to run on a 72-processor Solaris box. I mean, what's the point if Java didn't successfully scale on such a beast?

I think people don't appreciate that Java truly was designed for the enterprise arena, especially with more benchmarks coming in indicating that the JVM produces faster code than most C/C++ programmers.
To beat the JVM on a server, you now must profile your code.
And even then there's no guarantee that your compiled code will beat the JVM's compiled code, especially if you don't profile all the hot paths through your app. I know in our small web apps there are at least 16+ hot paths.

Anyway, a dual-core G4 laptop would make a he** of a platform for developing Java with JBuilder X (JDataStore, MySQL, Oracle, OpenBase), plus iTunes, Mail, iCal, iAddress and iChat all running at once.

It's good to be on Apple.

;)
 
MikeBike said:
Especially with more benchmarks coming in indicating that the JVM produces faster code than most C/C++ programmers.
;)
What?!
 
MikeBike:

I think there should be no doubt that the JVM can make effective use of a multiprocessor, let alone a multi-core multiprocessor. Remember, Sun has certified Java to run on a 72-processor Solaris box. I mean, what's the point if Java didn't successfully scale on such a beast?
Note that this does not address the issue of any relevant Java application benefiting from 4 processors, and it does not address your possible implication that Java is potentially a better justification for many processors than C is.

I think people don't appreciate that Java truly was designed for the enterprise arena, especially with more benchmarks coming in indicating that the JVM produces faster code than most C/C++ programmers.
In some applications, at a large cost in memory and start-up time.

To beat the JVM on a server, you now must profile your code.
To make good Java code, some say you must profile it as well.

Anyway, a dual-core G4 laptop would make a he** of a platform for developing Java with JBuilder X (JDataStore, MySQL, Oracle, OpenBase), plus iTunes, Mail, iCal, iAddress and iChat all running at once.
Why do people constantly equate "many applications == faster on many processors" when most of their applications have undetectably small processor usage? You'd almost certainly do a lot better with one faster processor, such as a G5. Further, I encourage you to shelve your fast dual-core G4 dreams. To quote the company making the product:

The e600 core is instruction set and pin compatible with the G4 core used in the award-winning, high-performance MPC74xx family of PowerPC processors
It's pin compatible... that means the same old FSB. Not exactly the beast of a processor some people have been expecting. Perhaps the e700 will come through on that.

http://www.freescale.com/webapp/sps/site/overview.jsp?nodeId=02VS0l72156402
 
Interesting thread.

It may very well be that a dual processor feels faster because the OS enjoys not just two processors but the L1 and L2 caches that go with them.
So, a smart OS can keep twice the cache filled with the active processes.
But it must implement some kind of cache/processor preference.

Still, I can't help but believe a whole other processor, on the chip, would make Java run like greased lightning. Plus, all those Folding at Home guys would love 4 AltiVec units to really run up the numbers.
I just wonder what the heat penalty would be with Folding running everything at 100%.

Maybe a single processor would feel as fast if it had the extra cache memory to manage. Again, it comes down to your standard workload and how you work. But if your head is always in a pro application or two, it seems a dual-core processor would be advantageous.

I'm not worried about the performance of the e600. The G4 at 1.5GHz is a bit faster than the G5 at 1.6. So, I won't have G5 envy unless IBM builds a dual-core G5 they can put into a laptop.
 
MikeBike:

Interesting thread.
Yeah, though the readership seems to have dropped off a lot. ;)

It may very well be that a dual processor feels faster because the OS enjoys not just two processors but the L1 and L2 caches that go with them.
I'm still not seeing how 4 cores is gonna do a lot for responsiveness vs. 2 cores. The more I think about it, the more I am inclined to believe that IBM is not preparing a dual-core 970 and is instead working on a Power5-lite. Even with 2MB L2 it would still probably be smaller than the 970 dual core, while being more useful to most people.

Plus, all those Folding at Home guys would love 4 AltiVec units to really run up the numbers.
Yeah, that would certainly keep the transistors flipping. I don't think F@H uses AltiVec though; seems like if it did there would be a big effect vs. PCs.

I'm not worried about the performance of the e600. The G4 at 1.5GHz is a bit faster than the G5 at 1.6.
Yeah, that's a real shame in my humble opinion. I blame Apple's memory controller; a programmer I have spoken to online claims it has huge latency compared to the old G4 (DDR) controller, something like 100ns vs. 135ns turn-around time. But in any case, that 166MHz FSB is eventually gonna put the brakes on G4s...
 
ddtlm said:
I'm still not seeing how 4 cores is gonna do a lot for responsiveness vs. 2 cores. The more I think about it, the more I am inclined to believe that IBM is not preparing a dual-core 970 and is instead working on a Power5-lite. Even with 2MB L2 it would still probably be smaller than the 970 dual core, while being more useful to most people.
...

Why would a Power5-lite be smaller than a 970, which is based on the POWER4?

Team Mac OS X has climbed from around 27th to 14th place in the rankings.
I'm a bit surprised by the rapid rise. I would have thought that x86 folders would have migrated to AMD-based systems and stayed in the race.

Intel's dual processors only get about a 40% boost in performance.
I think AMD systems see more like an 80% boost.

More cores also means more cache: each CPU will have its own independent cache, and a properly written OS should be able to keep a much higher level of cache "coherency"?

Anyway it's nice to see Apple/Ibm stay in the race.
At least we have something to talk about.
 
After reading through a combined 15 pages on this subject, I am hopeful for a summary.

IF your current system was a bit old and tired, but still working fairly well, and you had saved for 3 years to purchase a new machine... would you go ahead and purchase the current G5 2.5 system with the hope that it will hold you for at least 5 years, or WAIT just a bit longer till the new improved 970MP dual cores are available?
 
FFTT said:
After reading through a combined 15 pages on this subject, I am hopeful for a summary. ... Would you go ahead and purchase the current G5 2.5 system with the hope that it will hold you for at least 5 years, or WAIT just a bit longer till the new improved 970MP dual cores are available?

You would have to wait till about mid-2005...
 
ddtlm said:
MikeBike:
I'm still not seeing how 4 cores is gonna do a lot for responsiveness vs. 2 cores. The more I think about it, the more I am inclined to believe that IBM is not preparing a dual-core 970 and is instead working on a Power5-lite. Even with 2MB L2 it would still probably be smaller than the 970 dual core, while being more useful to most people.

A dual-core 970FX would come in handy for Apple to move more seriously into the enterprise. The current 970FX, with its 250KB of L2, is limited as a server processor.

ddtlm said:
But in any case, that 166MHz FSB is eventually gonna put the brakes on G4s...

The upcoming 90nm G4 won't have a 166MHz FSB. I have seen an internal Motorola document that describes the next G4 as having DDR and DDR2 capability. That puts the FSB far beyond 166MHz. I would expect this chip to be announced within the next two months, and it could be either dual-core or just single-core at that time, with the dual-core arriving somewhat later.
 
titaniumducky said:
You would have to wait till about mid-2005...

Given Apple's history of announcing updates at major events, I'd expect a revised 970 to be announced in January, and before that the 970FX should hit 3GHz. Apple has stated that IBM told them the 970FX production problems would be alleviated in the fourth quarter of this year, so that should mean 3GHz chips. And no, the 970FX-MP chips are not likely to go beyond 3GHz.
 
Specialized Processors

I think it's important to note the role of specialized processors as the computing industry matures. In the early days of the GUI, when the Amiga was actually the far superior product, it achieved that through a combination of efficient multitasking (software) and customized graphics subsystems/processors. Today, what the Amiga achieved with several processors is all but consolidated into one GPU... but for years the GPU has been underutilized in the desktop environment, until perhaps Mac OS X 10.4, code-named Tiger.

With all the talk we have been hearing about multiple processors, it is interesting to note that OS X 10.4 will be using the GPU to do a lot of the mathematical computations that the GPU is specialized for. In other words, instead of using the CPU, which is not as good at GPU-style computations, it will offload a significant amount of processing to the GPU, doing things in real time that would have slowed a CPU-based solution to a crawl even a couple of years ago. I think it's important to realize how monumental OS X's strides to offload CPU tasks to the GPU really are. The reality is that GPUs are far better at a lot of things than CPUs are.

The result is that we will see real-world performance gains far exceed the simple scaling of CPU count and speed. I think the next generation of G5s will probably use a single CPU that is dual-core, making the machines more affordable. They'll also be much faster than if we just scaled today's machines up to tomorrow's CPU GHz. And this approach makes sense, too! The human brain has several specialized areas that handle specialized computation: visual, auditory, reflex, reason and logic, etc. Why not make our computers this way? It's a start, and I think we may see more computers take advantage of this kind of thinking.

The future looks bright for the Mac, if you ask me.
 
Phinius said:
And no, the 970FX-MP chips are not likely to go beyond 3GHz.

What's your source? The articles that I have read seem to indicate the multi-core 970 will be introduced around 3GHz, not top out there. I think the 970MP is designed to replace the need for 2 individual CPUs, so if the next revision is 3GHz, then the multi-core 970 will be 3GHz. Now, I could be wrong. I'll admit that. But I don't think any of the above is far off the beaten path, and it is actually likely.

I don't care how much spin you put on it: unless they stop advertising GHz ratings on their computers, there is no way Apple will release a machine with GHz lower than or equal to what they have now and call it an upgrade, no matter what the real-world performance is. It's a marketing nightmare.
 
MikeBike said:
Interesting thread.

It may very well be that a dual processor feels faster because the OS enjoys not just two processors but the L1 and L2 caches that go with them.
So, a smart OS can keep twice the cache filled with the active processes.
But it must implement some kind of cache/processor preference.
Well, a smart OS would always be able to schedule multiple threads across a single CPU, so why couldn't a "really really really smart" OS split a single thread across multiple CPUs? It would be nice.
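
The OS can't discover that split on its own (it has no idea which parts of a thread's work are independent), but a programmer can make the split by hand. A made-up Java sketch: one "single-threaded" sum divided into halves that the OS can then schedule on separate CPUs.

public class SplitSum {
    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[50_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        long[] partial = new long[2];
        Thread lo = new Thread(() -> {
            long s = 0;
            for (int i = 0; i < data.length / 2; i++) s += data[i];
            partial[0] = s;
        });
        Thread hi = new Thread(() -> {
            long s = 0;
            for (int i = data.length / 2; i < data.length; i++) s += data[i];
            partial[1] = s;
        });
        lo.start(); hi.start(); // each half can run on its own CPU
        lo.join(); hi.join();
        System.out.println("sum = " + (partial[0] + partial[1]));
    }
}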
 
FFTT said:
After reading through a combined 15 pages on this subject, I am hopeful for a summary. ... Would you go ahead and purchase the current G5 2.5 system with the hope that it will hold you for at least 5 years, or WAIT just a bit longer till the new improved 970MP dual cores are available?

I would buy now, or wait until the new machines come out and buy a refurb unit. Even if they can achieve linear performance increases (which didn't happen this time), the 3.0GHz machine is only 16% faster. Last time, the 2.0 to 2.5GHz jump was 25% in GHz, but only about 16% in actual performance. So that means we could see a real-world performance increase of roughly 10%. You won't perceive that difference in your day-to-day tasks. In fact, unless you're running render farms or long-running tasks, 10% won't amount to much. If we were talking about a 2.5 to 3.5GHz increase, maybe that starts to make sense... but I wouldn't hold my breath on that! :)

I would base your decision more on the other capabilities of the upcoming machines. Sadly, we don't know much yet... but since you're looking at keeping the machine for 5 years, I would say you should weigh the expansion options more heavily. The CPU speed won't be that big of a deal. Maybe the new RAM would be, if it were DDR2? Maybe a PCI Express graphics bus? Aside from that stuff, not much will change in my mind. Maybe someone else has some ideas about architectural improvements, but I think most of them already occurred in the jump from G4 to G5.
 
A dual-core 970FX would come in handy for Apple to move more seriously into the enterprise. The current 970FX, with its 250KB of L2, is limited as a server processor.

Actually the PPC 970 has always had 512KB of L2 cache. The 970MP would double this to 1MB per core.

Even if they can achieve linear performance increases (didn't happen this time) the 3.0 GHz machine is only 16% faster. Last time the 2.0 to 2.5 GHz jump was 25% in GHz, but only about 16% in actual.

2.5GHz to 3GHz is a 20% increase. We will not be able to accurately compare the 2.5GHz duals to a 3GHz dual-core because of the cache size differences and the faster chip-to-chip links in a dual-core system. Also, it seems that the 970MP may have lengthened some pipelines, which would allow it to clock higher but would change the IPC.
 
Frobozz said:
I would buy now, or wait until the new machines come out and buy a refurb unit. ...


I must admit that the 2.5 machine is already 8 times faster than my 300MHz G3 tower and would probably hold up well, but that also makes me wonder which configuration would have more lasting stability: the individually water-cooled 2.5s, or the future, far more complicated dual core?

After waiting this long, another few months is nothing if the benefits of waiting mean a significant long-term improvement.

At this point it is not worth the expense to upgrade this system further.
So I will be getting something new soon.

Frustrated, Dazed and Confused
 
Frobozz said:
What's your source? The articles that I have read seem to indicate the multi-core 970 will be introduced around 3GHz, not top out there. I think the 970mp is designed to replace the need for 2 individual CPU's, so if the next revision is 3GHz, then the multi-core 970 will be 3GHz. Now, I could be wrong. I'll admit that. But I don't think any of the above is far off the beaten path and is actually likely.

I don't care how much spin you put on it, unless they stop indicating GHz ratings on their computers there is no way Apple will release a machine with lower GHz or equal GHz as they have now and call it an upgrade-- no matter what the real world performance is. It's a markting nightmare.
Consider the same source people were pointing at indicating that the IBM PPC970 is close to notebook-capable, and that dual core is an option: Norman Rohrer, chief designer of the PowerPC 970FX.

A dual-core version of the processor could also boost performance at slower speeds. Rohrer declined to confirm whether a dual-core PowerPC is on the company's roadmap, although he did say that a dual-core chip would remove some of the pressure to constantly push clock speeds higher.
A single core would likely be the MHz speed demon, while the dual cores run at a significantly lower clock, with the two operating at similar heat/power to each other.
 