ClimbingTheLog said:
On my 1.2 GHz G4 AppleMail is CPU bound. It downloads much faster than it indexes.

That's too bad ;) .

On the other hand, this could be a good example of the problems and successes of multithreading.

If indexing a mailbox is CPU-intensive, and if each mailbox is indexed separately - then threading will be easy and effective. Each mailbox can be assigned a different thread, and on an SMP system the mailboxes can be indexed in parallel. Fairly easy to program, but speeds the real system up noticeably.

On the other hand, if there's only one mailbox - threading doesn't do much for you.

On the third hand - if there are multiple mailboxes and a single index, it can be done somewhat, but it's more work. Each mailbox can be separately indexed (in parallel), and then when all the individual indices are ready they can be merged into the single master index. The merge, of course, is serial and can only be done in one thread (unless you wanted to use a recursive, threaded parallel merge - which would still come down to a single thread at the very end).
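To make that concrete, here's a minimal sketch (in Go, purely illustrative - obviously not how Mail is actually implemented) of the "index each mailbox on its own thread, then merge serially" idea. The names and toy data are all made up:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// indexMailbox builds a toy word-count index for one mailbox.
// This is the CPU-bound part that can run independently per mailbox.
func indexMailbox(messages []string) map[string]int {
	idx := make(map[string]int)
	for _, msg := range messages {
		for _, w := range strings.Fields(strings.ToLower(msg)) {
			idx[w]++
		}
	}
	return idx
}

func main() {
	mailboxes := map[string][]string{
		"Inbox":   {"dual core G5 rumor", "IBM PowerPC roadmap"},
		"Sent":    {"threading is not trivial", "the merge is serial"},
		"Archive": {"AltiVec and the G5", "dual core heat"},
	}

	var wg sync.WaitGroup
	var mu sync.Mutex
	partial := make(map[string]map[string]int)

	// Parallel phase: one thread (goroutine) per mailbox.
	// On an SMP box these can run on different CPUs at the same time.
	for name, msgs := range mailboxes {
		wg.Add(1)
		go func(name string, msgs []string) {
			defer wg.Done()
			idx := indexMailbox(msgs) // no shared state touched here
			mu.Lock()
			partial[name] = idx
			mu.Unlock()
		}(name, msgs)
	}
	wg.Wait()

	// Serial phase: merge the per-mailbox indices into one master index.
	// This part runs in a single thread no matter how many CPUs you have.
	master := make(map[string]int)
	for _, idx := range partial {
		for w, n := range idx {
			master[w] += n
		}
	}

	fmt.Println("master index entries:", len(master))
}
```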

Note that I've never claimed threading is bad or is not done. I objected to the "Bull!!!" statement and its implication that creating additional threads is trivial. It is *not* trivial to get real speedups with multi-threading in many applications.


ClimbingTheLog said:
If I had 4 or 8 CPUs, real or virtual, it would busy one with fetching, leave the UI on another, and spend the rest indexing, ideally.

Typically it would just spawn as many threads as it can, and let the OS schedule the threads as they need CPU.

Dedicating processors (logical or physical) to particular threads is usually a net loss compared to letting the OS do dynamic scheduling.
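As a tiny sketch of what that looks like (Go again, purely illustrative): ask how many logical CPUs are visible, spawn that many workers, and leave placement entirely to the scheduler rather than pinning anything.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	n := runtime.NumCPU() // logical CPUs visible to the process (real or "virtual")
	fmt.Println("spawning", n, "workers")

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		// No affinity/pinning: the OS and runtime decide which CPU runs what,
		// and rebalance as threads block on I/O or finish early.
		go func(id int) {
			defer wg.Done()
			sum := 0
			for j := 0; j < 50_000_000; j++ { // stand-in for CPU-bound work
				sum += j % 7
			}
			fmt.Printf("worker %d done (%d)\n", id, sum)
		}(i)
	}
	wg.Wait()
}
```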
 
ProfSBrown said:
It says both cores will share a 1GHz bus... right now each CPU in a dual 2.5 gets 1.25GHz of its own bandwidth, but with that, each 3GHz core would have to share only 1GHz. Isn't that a step backwards? Or are we getting dual dual-cores? :confused: :(

both cores HAVE a 1MB L2 cache. each. individually. :)
 
Interesting take on the Apple-IBM partnership

http://www.linuxinsider.com/story/34994.html

Wintel's Dilemma and Apple's Problem

For the last three weeks I've been talking about the impact the new Sony, Toshiba and IBM cell processor is likely to have on Linux desktop and datacenter computing. The bottom line there is that this thing is fast, inexpensive and deeply reflective of very fundamental IBM ideas about how computing should be managed and delivered. It's going to be a winner, probably the biggest thing to hit computing since IBM's decision to use the Intel 8088 led Bill Gates to drop Xenix in favor of an early CP/M release with kernel separation hacked out.

Sun has the technology to compete. Its throughput-computing initiative -- coupled with some pending surprises on floating point -- give it the hardware cost and performance basis needed to compete on software where it has the best server-to-desktop story in the industry.
No one else does. Microsoft's software can't take x86 beyond some minor hyperthreading on two cores without major reworking -- and Itanium simply doesn't cut it. The Wintel oligopoly could spring a surprise -- a multicore CPU made up from the Risc-like core at Xeon's heart, along with a completely rewritten Longhorn kernel to use it. But no one has reported them stuffing this rabbit into their hat. So, for now at least, they seem pretty much dead ended.

If, as I expect, the Linux community shifts massively to the new processor, Microsoft and its partners in the Wintel oligopoly will face some difficult long-run choices. It's interesting, for example, to wonder how long key players like Intel and Dell can survive as stand-alone businesses once the most innovative developers leave them to Microsoft's exclusive mercy.

Wintel's dilemma is, however, a fairly long-term issue. Much closer at hand is Apple's immediate problem. Just recently Steve Jobs has had to apologize to the Apple community for not being able to deliver on last year's promise of a 3-GHz G5 by mid-2004. IBM promised to make that available, but has not done so.
A lot of people have excused this on the grounds that the move to 90-nanometer manufacturing has proven more difficult than anticipated, but I don't believe that. PowerPC does not have the absurd complexities of the x86, and 90-nanometer production should be easily in reach for IBM. The cell processor, furthermore, is confidently planned for mass production at 65-nanometer sizes early next year.

This will get more interesting if, as reported on various sites, such as Tom's Hardware, IBM has been burning the candle at both ends and also will produce a three-way, 3.5-GHz version of the PowerPC for use on Microsoft's Xbox.
Whether that's true or not, however, my belief is that IBM chose not to deliver on its commitment to Apple because doing so would have exacerbated the already embarrassing performance gap between its own server products and the higher-end Macs. Right now, for example, Apple's 2-GHz Xserve is a full generation ahead of IBM's 1.2-GHz p615, but costs about half as much.


Consequences of Apple Decision


Unfortunately this particular consequence of Apple's decision to have IBM partner on the G5 is the least of the company's CPU problems. The bigger issue is that although the new cell processor is a PowerPC derivative and thus broadly compatible with previous Apple CPUs, the attached processors are not compatible with Altivec and neither is the microcode needed to run the thing. Most importantly, however, the graphics and multiprocessor models are totally different.

As a result, it will be relatively easy to port Darwin to the new machine, but extremely difficult to port the Mac OS X shell and almost impossible to achieve backward compatibility without significant compromise along the lines of a "fat binary" kind of solution.
In other words, what seemed like a good idea for Apple at the time, the IBM G5, is about to morph into a classic choice between the rock of yet another CPU transition or the hard place of being left behind by major market CPU performance improvements.

Look at this from IBM's perspective and things couldn't be better. Motorola's microprocessor division -- now Freescale Semiconductor -- is mostly out of the picture, despite having created the PowerPC architecture. Thus, if Apple tries to stay with the PowerPC-Altivec combination, it can either be performance starved out of the market or driven there by the costs of maintaining its own CPU design team and low-volume fabrication services.
If, on the other hand, Apple bites the bullet and transitions to the cell processor, IBM will gain greater control while removing Apple's long-term ability to avoid having people run Mac OS on non-Apple products. Either way, Apple will go away as a competitive threat because the future Mac OS will either be out of the running or running on IBM Linux desktops.


Apple-Sun Partnership


I think there'll be an interesting signal here. If IBM thinks Apple is going to let itself be folded into the cell-processor tent, it will probably allow as many others to clone the new Cell PC as it can make CPU assemblies for. If, on the other hand, IBM thinks Apple plans to hang in there as an independent, it might just treat the Cell PC as its own Mac and keep the hardware proprietary. Notice, in thinking about this, that they don't have to make an immediate decision: There will be CPU assembly shortages for the first six months to a year if not longer.

So what can Apple do? What the company should have done two years ago: Hop into bed with Sun. Despite its current misadventure with Linux, Sun isn't in the generic desktop computer business. The Java desktop is cool, but it's a solution driven by necessity, not excellence. In comparison, putting Mac OS X on the Sunray desktop would be an insanely great solution for Sun while having Sun's sales people push Sparc-based Macs onto corporate desktops would greatly strengthen Apple.
Most importantly, Sparc is an open specification with several fully qualified fabrication facilities. In the long term, Apple wouldn't be trapped again, and in the short term the extra volume would improve prospects for both companies. Strategically, it just doesn't get any better than that.


Some Important Footnotes


I am not suggesting that Sun buy Apple, or Apple buy Sun. Neither company has adequate management bandwidth as things stand. I'm suggesting informed cooperation, not amalgamation.

The transition to Sparc would be easier than the transition to Cell. It might look like the bigger change, but the programming model needed for cell is very different, whereas existing Mac OS software, from any previous generation, need only be recompiled to run on Sparc.
In particular, the graphics libraries delivered with the Cell PC will likely focus on Gnome-KDE compatibility to make porting applications for them easy, but Apple would have to redo its interface-management libraries at the machine level -- something it would not face in a move to Sparc where PostScript display support is well established.

In addition, existing Sun research on compiler automation suggests that multithreaded CPUs like Niagara and Rock could automatically convert PowerPC and even MC68000 executables to Sparc on the fly -- meaning that "fat binaries" would not be needed, although a Mac OS 9.0 compatibility box would probably still make sense.


Sun's Throughput-Computing Initiative


People I greatly respect tell me that Sun's throughput-computing direction isn't suited to workstations like the Mac where single-process execution times are critical to the user experience. The more I study this question, the more I disagree. Fundamentally this issue is about software, not hardware.
Consider, for example, what could be achieved with the shared-memory access and eight-way parallelism inherent in the lightweight process model Sun is building into products like Niagara. This won't matter for applications like Microsoft Word, where the 1.2-GHz nominal rate is far faster than users need anyway, but can make a big difference on jobs like code compilation, JVM operations or image manipulation in something like Adobe's Photoshop.

Given the much higher cache hit rates and better I/O capabilities offered by the relatively low cycle rate, theory suggests that truly compute-intensive workstation software could hit somewhat better than 85 percent system use -- meaning that an eight-way Niagara-1 running at 1.2 GHz would easily outperform a Pentium 4 at 8 GHz.
Making that happen would, of course, take serious software change, but if the preprocessors now thought to be under development at Sun work as expected, most of that would be automated -- thereby greatly reducing the barriers to effective CPU use on the Mac for PC-oriented developers like Adobe.
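(For what it's worth, the arithmetic behind that last comparison seems to be 8 threads × 1.2 GHz × 0.85 ≈ 8.2 GHz of aggregate throughput -- which only beats an 8 GHz Pentium 4 if the workload really does scale across all eight threads, something most desktop software doesn't do.)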

Should Apple be thinking different?
 
skunk said:
http://www.linuxinsider.com/story/34994.html

"In other words, what seemed like a good idea for Apple at the time, the IBM G5, is about to morph into a classic choice between the rock of yet another CPU transition or the hard place of being left behind by major market CPU performance improvements"

Should Apple be thinking different?
Did the author completely forget that Apple hopped onto the POWER branch of the tree and off the PPC branch (sort of), and that IBM does have a long-term plan for those POWER processors?

So it doesn't look like the G5 is an instant dead end after the current G5 runs its course -- because IBM has already announced the GR-UL (aka the Power5-UL) and given an outline for the POWER6.

So it doesn't look like Apple will be left behind by IBM's CPU improvements.

These CPUs do support the majority of the PowerPC ISA, and the addition of AltiVec did make the transition to this class of CPU rather painless for Apple -- except for VirtualPC.

Take a midrange/server-class chip, drop some of the expensive buses and interconnects (which made it too expensive), and put it in a desktop -- and soon into an iMac.

A few years ago the thought of putting a Power4-class CPU in an iMac would have been sort of silly, but with IBM looking to deliver a dual-core PPC970 it doesn't look silly any more.

POWER to the people
 
ClimbingTheLog said:
No, XP is just egotistical arrogance from Microsoft.

XP is the traditional Christian abbreviation for "Jesus Christ", from Latin.

Getting quite OT, but I think that is an urban legend.
XP is supposedly for "experience".

However, the legend is funnier :D
 
StudioGuy said:
Getting quite OT, but I think that is an urban legend.
XP is supposedly for "experience".

However, the legend is funnier :D

This is the first time that I have heard XP being an abbreviation for Jesus Christ. OT is an abbreviation for the Old Testament though.

I'm more interested in knowing if this dual core PPC 970 will be worth waiting till MWSF '05.
 
wdlove said:
This is the first time that I have heard XP being an abbreviation for Jesus Christ. OT is an abbreviation for the Old Testament though.

I'm more interested in knowing if this dual core PPC 970 will be worth waiting till MWSF '05.

I meant "off topic" :D , but nice transition!

I agree with the worth, but are tired of waiting - time to buy for us if we can afford it. :rolleyes: they'll always be something better making us want to wait...
 
jouster said:
I don't have the CPU smarts to address this myself, but I will point out that this article has been extensively rubbished on various forums.
Well, I don't have the patience to read the whole thing but I skimmed to this bit...
A lot of people have excused this on the grounds that the move to 90-nanometer manufacturing has proven more difficult than anticipated, but I don't believe that. PowerPC does not have the absurd complexities of the x86, and 90-nanometer production should be easily in reach for IBM.
That's enough to tell me the author doesn't have a freaking clue.
The problems with moving to .09 micron have NOTHING to do with the complexity of the instruction set and everything to do with physics. The .09 micron transition has gone poorly for IBM and Intel because, though they expected quite a few problems with current leakage, they didn't expect the level of signal crosstalk that they have been seeing.
I don't give two craps what the author "believes" or doesn't, but he/she shouldn't be passing themselves off as someone worth listening to if they can't be bothered to do some basic research.
 
skunk said:
Should Apple be thinking different?
Yeah, right..... That's all Apple needs to do, align itself with a dead-end company like Sun.

The Sparc processor is a slow piece of junk..... And anyone who says otherwise doesn't have a clue what they are talking about...

Sun has consistently been producing overpriced hardware, and has consistently been losing customers.

Their hardware provides the worst cost/performance ratio in the industry.
And don't even talk about reliability.... Because I have first-hand knowledge of their underhanded dealings with me as a customer...
We had hundreds of their systems and they would crash when the wind blew....
We even had to sign an NDA or they wouldn't fix their problem...
I'm so freaking glad that we finally got rid of every Sun piece of junk that we owned.
 
Maxx Power said:
Try playing any latest game and tell PC gamers that... nuff said

I'd bet a dual 2.5GHz with a GeForce 6800 Ultra would be able to keep up nicely, even if it might still be slightly slower. Keep in mind that the Alienware machines we're speaking of are finely tuned for gaming, whereas the Power Mac is finely tuned for creative production, etc.

I think that Macs are highly competitive on raw power these days. The only thing I can possibly complain about is out of my control -- Half-Life 2... will it ever come to the Mac? We know Doom 3 is, but my DP 1GHz G4 isn't likely to handle it. :)
 
wdlove said:
This is the first time that I have heard XP being an abbreviation for Jesus Christ. OT is an abbreviation for the Old Testament though.

I'm more interested in knowing if this dual core PPC 970 will be worth waiting till MWSF '05.

It's not really "XP". It's the Chi-Rho, which is the Christogram (the symbol for Christ). The Greek letters chi and rho happen to look like the Latin X and P.

The P is usually set inside the X. Look it up, it's a pretty cool story: the Christogram came to Constantine in a vision, and may have been what made him a Christian.

http://www.forumancientcoins.com/forvm/Articles/Constantine_Ch_Rho_files/Constantine_Ch_Rho.htm
 
StudioGuy said:
I meant "off topic" :D , but nice transition!

I agree with the worth, but are tired of waiting - time to buy for us if we can afford it. :rolleyes: they'll always be something better making us want to wait...

Definitely the waiting is becoming very tedious. As of now the wait has been over a year.
 
ddtlm said:
nuckinfutz:


Lemme give you a secret to estimating the feasibility of a rumor relating to a large R&D investment: you need to establish that the people spending on R&D are going to recoup that cost. For years now I've used that simple rule to correctly predict the non-arrival of crazy things that everyone else seemed to swallow hook, line and sinker. Apply that rule to the idea of there being both a 970-dual and a Power5-lite. When I apply it, I see that the two chips aim at the same market, harming profit. Note: neither chip is aimed at the market that will be increasingly held by the 970fx, so I see no conflict there.

It is possible that a G5-dual could occupy the market for a year or more before being replaced by a Power5-lite. There are some questions there: I'd wonder whether going from true dual core to SMT would be a step backwards, and I also wonder why Apple would be designing for SMT already.


The only problem with your theory is that you've forgotten the potential reward for success in this area. The huge Intel market would eventually be up for grabs. A market IBM once owned... :cool:
 
jeffbax said:
Ugh...

Game consoles do not provide a better gaming experience than a PC dammit!

It's the worst Mac cop-out ever!

Games are what keep me locked to the PC, that's for sure.

One day my precious...

You should try a "game" that could pay off for you in the future...
GarageBand for example.
I hear it's a good gig, being a rock and roll star.
 
MikeBike:

Heh, the only thing people predict the demise of more often than Apple is x86. ;) Won't happen.
 
ddtlm said:
MikeBike:

Heh, the only thing people predict the demise of more often than Apple is x86. ;) Won't happen.

I'm not predicting the demise of x86.
Why is AMD chasing after Intel?
Seems to me the potential money pot is one big reason.

Smart people go after the money.
IBM should be looking at every Apple sale, even if they lose a potential Linux sale, as a win. IBM should be busting their collective butt to make Apple succeed. Linux servers on POWER / Apple on PPC / even AMD on x86 all put money in IBM's pockets.
 
skunk said:
http://www.linuxinsider.com/story/34994.html

Should Apple be thinking different?

The article you quote is indeed thought-provoking, but I chalk it up to the wishful thinking of one individual. There really is no basis for his prioritization of the Cell architecture over the G5. To top that off, IBM is a bit more intelligent than that -- if Cell was ever going to be something they could sell more of to Apple, they would have made it that way. IBM is a megacorporation looking for profits above all else.

I think his conjecture about the 90nm wall was totally off base. While it is true that perhaps the die size and etching features of the G5 are less complex than those of an Intel x86 chip, that has nothing to do with the problem they cite. I'm not a chip architect, but let's assume my previous statement is indeed true. It doesn't make good business sense to flat-out lie to your investment community about such things, as they are probably audited on statements like that. If IBM said they ran into lamination issues or heat-dissipation problems (which caused delamination?), then that's 99.9% likely to be the cause. Again, they are a corporation out to make money FIRST, so there's no reason to withhold anything from a buyer.
 
So a G5 with two of these new chips and a 6800 Ultra would be a...

dual dual dual dual G5? Ha!

Anybody say that yet?
 
I am quite surprised that no one has really mentioned how in the world IBM and Apple are supposed to get over the heat problems produced by a dual-core PowerPC 970-series chip at 3GHz per core. Apple has already seen fit to use liquid cooling on a single-core PowerPC 970 at 2.5GHz. A dual-core 3GHz part sounds pretty darn hot to me, and I am curious how the heat problem will be overcome. Don't tell me it will be at 65nm, because I doubt we will see 65nm until the end of 2005 at the earliest, and this rumor article states that it would see production in early 2005 -- which would mean a 90nm fab.

Remember, dual core was meant to control power consumption and heat while striving to offer better performance. From sources I have read, most CPU manufacturers planning dual-core versions of their single-core chips would actually clock each core slower than its single-core counterpart; that is where the power and heat savings are supposed to come from while still offering better performance. An example: the fastest single-core Athlon 64 might top out at 3GHz while the dual-core version runs at 2GHz per core, in effect producing 4GHz combined, which would in most cases offer increased performance with less heat and power consumption. The same goes for Intel and their upcoming dual-core Xeon processors, and for IBM and the 970.

Basically, what it boils down to is this: if they can't even give us a single-core 3GHz chip, what makes one think they can produce a dual-core 3GHz chip and have it ready sometime early next year?
 
Good points

Little Endian said:
I am quite surprised that no one has really mentioned how in the world IBM and Apple are supposed to get over the heat problems produced by a dual-core PowerPC 970-series chip at 3GHz per core.

Do remember, though, that one of the reasons for the liquid cooling is that the PPC970fx chip is physically rather small. The problem is both the total amount of heat generated (watts), and the concentration of the heat (watts/mm^2).

A dual core chip would physically be about twice the size of the single core, so the heat density wouldn't be much different from the current chips. While more heat is generated, it isn't a new problem.

Bigger radiators/fins.... :rolleyes:

Also, much of the commentary about dual cores refers to the slowing of MHz increases. So the dual-core 3.0GHz can be thought of as taking the place of a 4GHz single core, and it therefore produces a much lower heat density than that 4GHz chip would.
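To put rough numbers on it (purely illustrative, not real 970 figures): if a single core burns 60W over a 65mm^2 die, that's about 0.9 W/mm^2. A dual-core version at the same clock is roughly 120W over ~130mm^2 -- more total heat, but still about 0.9 W/mm^2 to pull out of each square millimetre. Pushing that single core to 4GHz instead (dynamic power scales roughly with clock, and worse if the voltage has to go up too) would put well over 1 W/mm^2 through the same small die, which is the harder cooling problem.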


Little Endian said:
...[dual core] which would in most cases offer increased performance...

For multi-threaded MP-aware applications, yes. And for running multiple non-MP-aware apps simultaneously.

It would not help a single non-MP-aware app; that would run slower.

I know that you say "most", but the words "many" and "some" would also apply.

A bit of skepticism about the benefits of multi-processing is good to have. You'll win sometimes with this approach (that is, the slower dual-core), but sometimes you'll lose.
 