Processor speed is the critical factor in computing, but in this thread only one processor is being acknowledged--the central processor. Isn't anyone excited by 10.2's attempts to offload work from the CPU to the GPU? I am excited by this because it is a wonderful example of Apple (and NVIDIA) engineers using ingenuity to offset the obvious shortcomings of the Motorola CPUs. If the CPU can't handle the load, send the load elsewhere. I hope to see more of this kind of innovation in the future, with more task-specific chips cooperating to get the job done instead of throwing all instructions at a CPU--even a Power4-derived one. QE is the wonderful result of scouring the machine to identify and tap into potential that was already in place. Go Apple.
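To make concrete what kind of work gets offloaded: QE treats each window's backing store as an OpenGL texture and lets the GPU do the per-pixel compositing. Here's a minimal numpy sketch of that compositing math (the "over" blend); the buffer names and sizes are my own illustration, not Apple's API.

```python
import numpy as np

def composite_over(dest, src, alpha):
    """Porter-Duff 'over' blend: lay a translucent window onto the desktop.

    This per-pixel multiply-add, done for every window on every refresh,
    is the work QE hands to the GPU as textured-quad drawing.
    """
    return src * alpha + dest * (1.0 - alpha)

# Hypothetical 1024x768 RGB buffers with values in [0, 1].
desktop = np.zeros((768, 1024, 3))          # black desktop
window  = np.full((768, 1024, 3), 0.8)      # light-gray window contents

desktop = composite_over(desktop, window, alpha=0.5)
print(desktop[0, 0])   # -> [0.4 0.4 0.4]
```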

Now that I've complimented Apple, I have to get in a jab at them too. Why did they choose to equip all of their high-end machines with dual processors when both processors share a single bus? In high-end PCs and workstations, dual and quad machines are dramatically more expensive than single-processor machines because each processor has a replicated set of equipment attached to it. From what I can find, the Apple dual-processor machines aren't designed that way. They seem to just sprout two processors at the end of the bus. How is this optimal? Shouldn't Apple be doing everything humanly possible on its high-end machines to work around the known limitations of the bus feeding the CPU, including replicating the bus for each processor? They just designed a new controller chip for the motherboard; why didn't they include an obvious improvement like this?
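To put a toy number on the complaint: with one shared front-side bus, two CPUs split one bus's peak bandwidth at best; with a replicated bus, each gets the full amount. A rough sketch (the figures are illustrative assumptions, not Apple's specs):

```python
# Toy bus-bandwidth model; numbers are illustrative, not measured.
BUS_MHZ   = 133   # assumed front-side bus clock
BUS_BYTES = 8     # 64-bit data path

peak_gbs = BUS_MHZ * 1e6 * BUS_BYTES / 1e9   # peak GB/s on one bus

print(f"shared bus, two CPUs : {peak_gbs / 2:.2f} GB/s per CPU (best case)")
print(f"one bus per CPU      : {peak_gbs:.2f} GB/s per CPU")
# Real contention is worse than an even split: arbitration and snoop
# traffic eat into the shared number too.
```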
 
Originally posted by Quixcube
Processor speed is the critical factor in computing, but in this thread only one processor is being acknowledged--the central processor. Isn't anyone excited by 10.2's attempts to offload work from the CPU to the GPU? I am excited by this because it is a wonderful example of Apple (and NVIDIA) engineers using ingenuity to offset the obvious shortcomings of the Motorola CPUs. If the CPU can't handle the load, send the load elsewhere. I hope to see more of this kind of innovation in the future, with more task-specific chips cooperating to get the job done instead of throwing all instructions at a CPU--even a Power4-derived one. QE is the wonderful result of scouring the machine to identify and tap into potential that was already in place. Go Apple.

I was going to say something like this in my previous post, but I didn't want it to be too long; plus, I knew hardly anyone would read it or care, since they all want to talk about G5s and such.

I believe Apple is doing a good job of innovating around the technological bottlenecks that surround it.

BTW, I think the new motherboard helps offload the CPU even further by letting the GPU use some of the DDR RAM bandwidth that the CPU can't.
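Roughly the arithmetic behind that, assuming DDR333 main memory behind a 167 MHz single-data-rate CPU bus (my assumed figures):

```python
# Back-of-envelope bandwidth headroom; figures are assumptions.
ddr_peak = 167e6 * 2 * 8 / 1e9   # DDR333: 167 MHz, double-pumped, 64-bit
cpu_peak = 167e6 * 1 * 8 / 1e9   # G4 MaxBus: same clock, single-pumped

print(f"DDR memory peak      : {ddr_peak:.2f} GB/s")
print(f"CPU bus peak         : {cpu_peak:.2f} GB/s")
print(f"headroom for GPU/DMA : {ddr_peak - cpu_peak:.2f} GB/s")
```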


Now that I've complimented Apple, I have to get in a jab at them too. Why did they choose to equip all of their high-end machines with dual processors when both processors share a single bus? In high-end PCs and workstations, dual and quad machines are dramatically more expensive than single-processor machines because each processor has a replicated set of equipment attached to it. From what I can find, the Apple dual-processor machines aren't designed that way. They seem to just sprout two processors at the end of the bus. How is this optimal? Shouldn't Apple be doing everything humanly possible on its high-end machines to work around the known limitations of the bus feeding the CPU, including replicating the bus for each processor? They just designed a new controller chip for the motherboard; why didn't they include an obvious improvement like this?

I'm not an expert on this stuff, but I believe I read a comparison somewhere between putting two processors on one bus and giving each its own bus. The two designs were certainly not equal, but each had its advantages. If I find my source I'll post it, but I think the gist was that with one bus, one processor can take over part of a job when the other is busy, while with two buses the processors tend to be handed two separate jobs. Could be wrong on this.

-- Bert :cool:
 
Originally posted by Quixcube
Processor speed is the critical factor in computing, but in this thread only one processor is being acknowledged--the central processor. Isn't anyone excited by 10.2's attempts to offload work from the CPU to the GPU? I am excited by this because it is a wonderful example of Apple (and NVIDIA) engineers using ingenuity to offset the obvious shortcomings of the Motorola CPUs. If the CPU can't handle the load, send the load elsewhere. I hope to see more of this kind of innovation in the future, with more task-specific chips cooperating to get the job done instead of throwing all instructions at a CPU--even a Power4-derived one. QE is the wonderful result of scouring the machine to identify and tap into potential that was already in place. Go Apple.

Absolutely! The x86 Mac folks say they want Mac hardware to become more and more like PC hardware. I say Apple should slope the playing field as much as it can and use as innovative a system design as it sees fit, no matter how much that design happens to deviate from the x86 world.
Now that I've complimented Apple, I have to get in a jab at them too. Why did they choose to equip all of their high-end machines with dual processors when both processors share a single bus? In high-end PCs and workstations, dual and quad machines are dramatically more expensive than single-processor machines because each processor has a replicated set of equipment attached to it. From what I can find, the Apple dual-processor machines aren't designed that way. They seem to just sprout two processors at the end of the bus. How is this optimal? Shouldn't Apple be doing everything humanly possible on its high-end machines to work around the known limitations of the bus feeding the CPU, including replicating the bus for each processor? They just designed a new controller chip for the motherboard; why didn't they include an obvious improvement like this?
I guess it's a limitation of the MPC7450 CPU and the way it is not fully MERSI-compliant. I don't think Apple crippled the memory bus like this because it wanted to. I'm still awaiting word on whether this limitation resides on the motherboard or on the processor card, because if it resides on the CPU card, the current systems could possibly be upgraded to whatever future CPU supports a more efficient bus.
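For anyone curious what "not fully MERSI-compliant" costs you, here's a toy snoop model. MERSI's extra R (Recent) state marks the most recent reader as the supplier of a shared line, so later readers can get it cache-to-cache instead of from memory. Hugely simplified, and the workload is made up:

```python
# Toy model: memory fetches when N CPUs read the same cache line in turn.
def memory_fetches(n_cpus, protocol):
    holders = set()   # caches currently holding the line
    fetches = 0
    for cpu in range(n_cpus):
        if protocol == "MESI":
            # Plain four-state MESI with no shared intervention:
            # no cache "owns" shared data, so each new reader hits memory.
            fetches += 1
        elif protocol == "MERSI":
            # The R state makes the last reader the supplier, so only
            # the very first read touches memory.
            if not holders:
                fetches += 1
        holders.add(cpu)
    return fetches

for proto in ("MESI", "MERSI"):
    print(proto, "->", memory_fetches(4, proto), "memory fetches for 4 readers")
```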

Alex
 
Originally posted by Quixcube
If the CPU can't handle the load, send the load elsewhere. I hope to see more of this kind of innovation in the future, with more task-specific chips cooperating to get the job done instead of throwing all instructions at a CPU--even a Power4-derived one. QE is the wonderful result of scouring the machine to identify and tap into potential that was already in place. Go Apple.

Exactly! Quartz Extreme is an extremely innovative use of the hardware and of OpenGL. Modern GPUs have lots and lots of calculation power but usually sit idle unless you're playing UT or something.

It's interesting, though; this used to be more common in the industry. The Amigas had a few coprocessors for graphics and audio, the NeXTcubes had an on-board DSP and the NeXTdimension board had an Intel i860 for Display PostScript (DPS) processing, etc. Even the Apple Quadra AVs had an AT&T 3210 DSP for AV and telephony features.

I hope we see more of this cuz it's got potential. Heck, we could even name the chips like in the Amiga. lol ;)
 
IBM Power4 chip

The IBM chip is real. A quote from IBM: "The POWER4 chip provides the processing power for eServer p690, the recently introduced high-end, IBM 64-bit POWER-architecture, 8-to-32-way server system." So, IBM's chip is not only in production, but has been implemented in one of their high-end servers.

The current PowerMacs have an excess power supply capacity of 250 watts. The Power4 chip running at 1.1 GHz consumes 115 watts! With everything else staying equal (die size, applied voltage, number of transistors), at 2 GHz the chip would consume roughly 210 watts. This would just about eat up all of that excess PM power supply capacity, and would explain why the PM has such a huge heat sink, out of proportion to current needs. Maybe the next-gen PM will use a Power4. Also, the Power4 implements, and extends, the AIM PowerPC instruction set.
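The scaling assumption there is just the dynamic-power rule of thumb, P ≈ C·V²·f: at fixed voltage and capacitance, power grows linearly with clock. Worked out:

```python
# Linear frequency scaling of dynamic power (P = C * V^2 * f, V fixed).
p_known, f_known = 115.0, 1.1   # watts at 1.1 GHz (IBM's published figure)
f_target = 2.0                  # GHz

p_target = p_known * f_target / f_known
print(f"~{p_target:.0f} W at {f_target} GHz")   # ~209 W
```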

The Power4 has 174 million transistors! That explains the power consumption. We'll probably never see it in a PowerBook, unfortunately.

Personally, I see a Power4 chip inside Macs before a Moto G5 or an x86 chip from Intel or AMD.
 
Power4 Architecture

Talk about numbers of processors and such... according to IBM, the Power4 implements two identical processors on one chip. Neither of these is the AltiVec-equivalent vector processor that speculation claims IBM is now integrating onto the Power4. Also, the chip scales to 8, 16, 24, or 32 processors (4 Power4 chips = 8 processors). Of course, we are talking about a differently designed chassis to accommodate all of the required power and heat dissipation. This would be a very exciting approach, especially given OS X's (UNIX's, and the Mach kernel's) ability to wring all of the available processing out of many processors (SMP).
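The processor counts fall straight out of the packaging: two cores per POWER4 die, four dies per multi-chip module (MCM):

```python
# POWER4 packaging arithmetic: cores per die times dies per MCM.
CORES_PER_CHIP = 2   # two identical processors per POWER4 die
CHIPS_PER_MCM  = 4   # four dies per multi-chip module

for mcms in (1, 2, 3, 4):
    print(f"{mcms} MCM(s): {mcms * CHIPS_PER_MCM * CORES_PER_CHIP} processors")
# -> 8, 16, 24, 32-way, matching the p690 configurations
```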
 
Re: IBM Power4 chip

Originally posted by jerryobrecht
The IBM chip is real. A quote from IBM: "The POWER4 chip provides the processing power for eServer p690, the recently introduced high-end, IBM 64-bit POWER-architecture, 8-to-32-way server system." So, IBM's chip is not only in production, but has been implemented in one of their high-end servers.

This is not the chip we will potentially be getting.

For the simple reasons evident in each and every example you listed below that quote, plus the fact that the current Power4 requires 700 lbs of insertion force for its socket.

It's a mainframe server chip design, not a desktop server chip design.

IBM's upcoming announcement is of a Power4 variant for use in desktops and entry-level servers. In other words, it will incorporate design elements of the Power4, but toss those that get in the way of cost effectiveness or impede hardware design for those markets.

The end result will not be a true Power4. However, if we're lucky, it won't be a 486SX either...
 
SGI and Sun?????

Apple is faced with a menacing obstacle to enticing a chip maker to build it a competitive CPU: low market share. As mentioned many, many times in this forum, Motorola sells a hell of a lot more CPUs to embedded widget makers than to Apple. Consequently, Apple doesn't necessarily get in the front row when it comes to Motorola setting design priorities.

In comes IBM with Junior, the Power4 offspring with a sleeker profile. We talk about IBM's new plant in New York, implying a direct connection between that plant and Apple.

This is ridiculous insofar as Apple alone is way too small a fish to justify a multibillion-dollar investment. Before you object: yes, of course, the sum of sales to Apple and to other customers is the driver for IBM. Also, Junior, if indeed manufactured at the New York plant, will undoubtedly be one among many different chips that IBM manufactures there.

While I seem to be undermining my own argument, it's now time to get to my point regarding Sun and SGI in my subject line. Surely IBM has more customers in mind than Apple and itself for Junior.

Rumors not too long ago suggested that SGI was considering Motorola's fabled G5. Also, Sun, I BELIEVE (not sure), had considered embracing the Itanium but jumped ship having seen the initial results.

Though Sun and SGI both design their CPUs and outsource their production to other companies to fab, their core competencies really aren't CPU design. That's not to say they're not good at it. I'm just saying that their customers would buy Sun or SGI systems whether they had SPARC or MIPS chips in them or not. They buy overall system performance.

The Itanium and fabled G5 tanked from their perspective so they have been sticking with their in-house designed semiconductors.

That said, if presented with a third-party source of CPUs that offered both performance and sound economies of scale, I'm confident that they'd happily embrace it. Naturally, they wouldn't jump overnight and transition the way one changes from one T-shirt to another. It would be a major strategic move with tectonic implications for adapting their software and OSes.

All that said, I'll be watching for signs of Sun and SGI being interested in Junior. I'm not predicting per se; I'm suggesting that such signs would be positive reinforcement for Apple embracing Junior, and for Apple's long-term prospects for hardware competitive with the x86 folk.

The three of them have very similar needs from their CPUs, and all three could potentially benefit from a significant boost in economies of scale from Junior--POTENTIALLY.

But Sun and SGI undoubtedly don't want to open themselves to renewed and emboldened competition from Apple. They would share a common CPU family with Apple only if they could maintain a means of differentiating themselves from it.

Again, I'm not predicting the formation of this Troika. To be honest, I know very little about Sun and SGI. But, you can clearly see how Apple could benefit from such a Troika.

You know, Book E being highly modular suggests that Sun, SGI, and Apple might share the same Junior core but use alternate CPU components such as memory controllers and whatnot. Obviously, Sun and SGI might go with considerably larger L1 and L2 caches than Apple (at higher cost).

If I were Steve Jobs, I'd have deployed some of my brain trust to find commonality with Sun and SGI to determine if an IBM Junior-centric Troika could be practical and beneficial to all concerned parties.

Can you guys think of how Sun and/or SGI might be interested in moving to Junior? I'm sure it's easier to think of reasons for them not to. I don't mean to stick our collective heads in the sand by ignoring the 'cons'. I'd like to ask the folks here where common needs may exist for the potential Troika.

A Troika would mean that IBM would invest very heavily in Junior, relative to an Apple-and-IBM-only market for it. So, do you see where I'm coming from? If by some unlikely chance the Apple brain trust hasn't even looked at forming a Troika, our discussion of it would not go unnoticed by Apple.

Cheers,

Eirik
 
Re: SGI and Sun?????

Originally posted by eirik
While I seem to be undermining my own argument, it's now time to get to my point regarding Sun and SGI in my subject line. Surely IBM has more customers in mind than Apple and itself for Junior.

Perhaps, but... I don't know, it seems like it could make sense. It's not a completely new chip - it's just a 64-bit PPC for desktops derived from a chip that already exists. So it can't be that huge of an engineering effort. If its only intended recipients are Apple, IBM, and a few minor embedded device and rack-mount server vendors, I don't think that would be too far-fetched. If all these companies combined could ship a million units in a quarter (didn't Apple sell 800k Macs last quarter?), I think that would be enough to sustain the platform. Maybe. :)
Rumors not too long ago suggested that SGI was considering Motorola's fabled G5. Also, Sun, I BELIEVE (not sure), had considered embracing the Itanium but jumped ship having seen the initial results.

I'm not too familiar with Sun, but I do have some familiarity with SGI. I remember the rumors about IRIX 7.0 being ported to PowerPC, but I don't remember them coming from particularly reputable sources. Ryan Meader's macosrumors.com was where I read it, so if that's where the rumor came from, I'd say it's probably about as likely as a comet crashing into the moon and knocking it into the Pacific Ocean in the next 18 hours.
Though Sun and SGI both design their CPUs and outsource their production to other companies to fab, their core competencies really aren't CPU design. That's not to say they're not good at it. I'm just saying that their customers would buy Sun or SGI systems whether they had SPARC or MIPS chips in them or not. They buy overall system performance.

From a hardware obsolescence standpoint, it's true that customers buy overall system performance, but when a company forks over $50k+ for a Sun or SGI server, they are also making an investment in that platform's future. They usually don't want to see their vendor move to another CPU platform, as that would invalidate their hardware and software upgrade path. One of the big attractions of SGI systems is their great scalability; when a company buys a 4-way Origin server, they know they will be able to grow it over many years to fit their gradually increasing performance needs. There are still Challenge servers from 1994 in use today thanks to the fact that software and hardware compatibility with early-'90s SGIs is still intact. A move from MIPS to PowerPC would break this; so either PowerPC would have to coexist for a good while alongside MIPS on the roadmap, or it would have to replace MIPS, causing existing SGI customers to scream.

SGI is no longer as proud or influential as it once was, but it still has its niches in the high-end scientific, technical and visualization markets, and those niches appear to be pretty well locked up. So I certainly don't think MIPS is going anywhere. (And SGI doesn't either, as they've got the whole next decade covered in their roadmap, to the R18000 and beyond.) So this reinforces my opinion that a PowerPC SGI would have to coexist with the MIPS lineup.

The current desktop SGIs get their butts kicked by PCs in raw CPU performance, but they still own their respective niches because they can do things PCs just can't do. One thing SGI and Sun don't want to do is step into the ring (again) against commodity x86 machines. They don't have much choice but to add value to their products elsewhere. If SGI's machines were available with fast, x86-crushing PPC processors but cost the same very high prices, would they sell much better than they do now (enough to justify the transition)? I'm not sure they would. A faster processor would be nice, but if it weren't available, would this fact hurt sales? I'm not sure it would. But the situation as it pertains to Sun may be different.

The fastest CPU available in an SGI today runs at 600 MHz and performs very well in SPEC (at least for its clock speed), besting the 1 GHz Pentium III by a good amount. But one thing to consider is that it consumes only 18 watts of power. SGI makes a big deal of this; its VP of whatever says that they've given up competing for raw performance and are instead aiming for 16-20 watts in their designs. Why? Because SGIs have fantastic multiprocessing capabilities, performance increases in direct, linear proportion to the number of CPUs added, and these CPUs can be packed very densely into racks, whereas Itanium 2s and POWER4s and SPARCs cannot because of their cooling requirements. So MIPS is not a blazing-fast architecture at the moment, but it is still unique and healthy, with a bright future.
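The density argument is easy to put numbers on: CPUs per rack is just the rack's thermal budget divided by watts per chip. A quick sketch (the 115 W POWER4 figure is from earlier in the thread; the rack budget is my assumption):

```python
# Toy rack-density comparison; the thermal budget is an assumption.
RACK_BUDGET_W = 3600

for name, watts in (("MIPS part @ 18 W", 18),
                    ("POWER4 @ 115 W", 115)):
    print(f"{name:18s}: ~{RACK_BUDGET_W // watts} CPUs per rack")
# 18 W parts pack roughly 6x as densely as a 115 W part.
```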
Can you guys think of how Sun and/or SGI might be interested in moving to Junior? I'm sure it's easier to think of reasons for them not to. I don't mean to stick our collective heads in the sand by ignoring the 'cons'. I'd like to ask the folks here where common needs may exist for the potential Troika.

I'll let someone else handle this one... for me, it's a lot easier to think of reasons why Sun/SGI wouldn't be interested in moving to "Junior," but that's only me. :)

Here's one more reason: Wouldn't it be weird if Sun and SGI used IBM CPUs in the same machines that are competing directly against IBM's machines? :) Sun and IBM are bitter rivals. I'm not sure what the SPARC roadmap looks like - do they need a new CPU that badly?

Alex
 
ddtlm & kenohki...

ddtlm wrote:

This time around, though, the rumors center around an actual chip that has actually been announced. This contrasts starkly with the G5 rumors last year, as well as the endless talk about some sort of 7470 or 7500 G4, or a full DDR FSB.

I do see your point. My problem is simply that this seems too good to be true. I mean, this is beyond-incredible news. Every time I hear something like this, it slowly disappears and fades into the darkness of the collective Mac-user memory, or it's continually recycled and fanned to great intensity every six months just before an Expo.


The fact is that the Power4 clocked in at 1.3 GHz more than a year ago on 180nm technology. It is easy to believe it will hit 2.0 GHz next year. I fully expect it to turn up during 2003, perhaps not at 2.0 GHz, but then again it doesn't need that clock speed to perform well. We may not catch the P4... but we will be so much closer.

Even closer would be wonderful. Shoot, I'd happily get a new Mac. Unfortunately, until it's officially announced as a Mac processor, I'll reserve my excitement. I mean, I hope you're right. But recent history does not make me optimistic.


kenohki wrote:

I doubt Apple will take the Intel route. Things aren't too rosy on that side of the fence either. The P4 has a high clock rate, but look at how Intel is getting sued over that now. x86-64 (Athlon, Opteron) hasn't proven itself yet, and neither has IA-64. (Plus there's a lot of risk associated with IA-64. It's a whole different school of thought and another thread on its own.)

It doesn't have to be a 64-bit x86 processor that Apple buys. The most recent 32-bit Athlons announced are somewhere around 2.2 GHz. I'd welcome that in a second.


However, quad-processor G4s won't happen at least until the 7470, and really only if that processor goes back to using the full MERSI cache coherency protocol. The current batches of G4-class chips use only the four-state MESI, which makes them less efficient in quad-processing environments without separate cache-controller hardware, software, etc.

Charming. Oh well!


POWER4 Lite (this new PPC from IBM) looks, from what has been released about it so far, like it could be a winner. It would provide binary compatibility (investment protection), and a familiar environment (PPC ISA). It also has gobs of memory bandwidth in comparison to what we have today and should be a screamer if it's comparable to POWER4 (even if you do have to crank up the clock to make up for the fact that you don't have an enormous cache like on POWER4). It is also much less risky for Apple than the Intel or AMD strategy.

Agreed. Jumping over to x86 would be much more challenging. I just don't believe that this gorgeous piece of IBM hardware is going to be found in any Mac any time soon, if ever. But whatever Apple does, they're going to have to do it soon. (Of course, we've been saying THAT for how long now? Apple survives by hanging on to a thread that refuses to break.)


Hopefully this chip will come sooner rather than later (complete with auto-vectorizing compilers). What I find interesting is that the other desktop processors being discussed in that group (according to the schedule for the forum) are somewhat near term (Itanium 2, Athlon/Opteron). Maybe this is a good sign for this chip to see the light of day soon. *shrug* We can only hope.

Indeed. I hope my pessimism is misguided. But only time will tell for certain.
 