Let me say this again:
WHAT do you need multiple processors for? There are many reasons with many solutions. The problem with Mac people (regardless of how much I love you all) is they stop at the box in front of them. There is this concept that the boys from Bell and MIT were messing around with a while back; it's called networking. Have a look at the real MP systems like the SGI Origin series servers and you will notice that they don't waste millions of bucks designing quad (or higher) processor system boards, because they solved the problem of bandwidth in the interconnecting fabric between the CPU cards. And yes, each CPU can snoop the others' caches, so stop rabbiting on about L3 cache and start looking at branch prediction cache instead.

Now I will admit I need to do some homework on the new Opteron and how it functions, but I have to remind all and sundry that the POWER4 and its descendant the G5 are superscalar RISC CPUs, not instruction-heavy like the Intel Pentiums. In other words there is literally less requirement for cache size, which allows for more steps in the pipe. What I don't know is if the branch prediction can re-order the cache mid-flight or execute instructions out of order like a MIPS R12K... I'll get back to you on that. The point is, if CISC CPU (x) needs 120 cycles to figure out one operation and RISC CPU (y) needs only 5 or 6 as it reorders its cache, then you do the math on which CPU wins the little fluffy toy.

Now let's put this all back together. If you have a network of computers and you are running a network operating system with a network file system (let's say, um, let me think, BSD) and you run it all on lots of small, fast RISC CPUs, then you have one large computer that has as many processors as you would like. THE ISSUE IS APPLICATION! Or, as I said before, WHY.
 
ZetaPotential:

Have a look at the real MP systems like the SGI Origin series servers and you will notice that they don't waste millions of bucks designing quad (or higher) processor system boards, because they solved the problem of bandwidth in the interconnecting fabric between the CPU cards.
I've handled some older Origins (2000s and 2100s) and it's true that they keep the CPU count to two per CPU card. Newer SGIs seem to run at least 4 per card; for example, see the Origin 350, which holds up to 4 CPUs but at 2U is pretty much too small to have CPU boards like the older SGIs do. Looks like the Origin 3000 can run 16 CPUs in 4U, so again it's likely that they have at least 4 on a single board in there. I've never handled those machines, so I don't know for sure.

But anyway, you're pretty much speaking nonsense. What you are describing as a networked computer is in fact just a NUMA computer, which is not news at all. In fact the Opterons are NUMA machines. Large NUMA machines are actually easier to build than large SMP machines, and that's been known for quite some time. This is not the same thing as just networking distinct computers together; that's clustering.
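
To make the distinction concrete, here's a minimal sketch (plain POSIX C; the names are made up). On an SMP or NUMA box every processor sees one shared memory image, and the coherence hardware keeps the caches in sync behind your back; a pile of networked computers gives you nothing of the sort.

```c
/* Sketch: shared memory is what makes an SMP/NUMA box "one computer".
 * Build with: cc -pthread shared.c */
#include <pthread.h>
#include <stdio.h>

#define NCPU 4

static long counter = 0;  /* one memory image, visible from every CPU */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    /* Any CPU can touch this variable directly; the cache-coherence
     * ("snooping") hardware keeps every CPU's view consistent. */
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[NCPU];
    for (int i = 0; i < NCPU; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NCPU; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);  /* prints 4 */
    return 0;
}
```

On a cluster there is no shared `counter` at all: each node has its own private memory, and the value would have to travel over the network. That difference is the whole point.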

And yes, each CPU can snoop the others' caches, so stop rabbiting on about L3 cache and start looking at branch prediction cache instead.
A processor with L3 is faster than the same processor without it. Just because the G5 doesn't have L3 doesn't mean you can dismiss it. The G5 is not perfect.

And what the heck are you talking about with "branch prediction cache"? No such thing exists. Branch prediction is handled in the CPU core and has next to nothing to do with cache.
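
That said, the predictor itself is easy to watch from plain C. A rough demo (assuming the compiler emits a real conditional branch rather than a conditional move): the array and its memory footprint are identical in both runs, so the speed difference comes from branch prediction, not from cache.

```c
/* Sketch: the same loop, fast when the branch is predictable,
 * slow when it isn't. Build with: cc -O2 branch.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static long run(const int *data)
{
    long sum = 0;
    for (int pass = 0; pass < 50; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] < 128)  /* the branch the predictor learns */
                sum += data[i];
    return sum;
}

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    clock_t t0 = clock();
    long s1 = run(data);              /* random order: ~50% mispredicts */
    clock_t t1 = clock();

    qsort(data, N, sizeof(int), cmp); /* same values, now sorted */

    clock_t t2 = clock();
    long s2 = run(data);              /* nearly perfect prediction */
    clock_t t3 = clock();

    printf("unsorted %.2fs, sorted %.2fs (sums %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, s1, s2);
    return 0;
}
```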

Now I will admit I need to do some homework on the new Opteron and how it functions, but I have to remind all and sundry that the POWER4 and its descendant the G5 are superscalar RISC CPUs, not instruction-heavy like the Intel Pentiums. In other words there is literally less requirement for cache size, which allows for more steps in the pipe.
The number of possible instructions is not related to the memory space that the instructions take. In fact x86 code is typically smaller than PPC code doing the same thing, because it takes more PPC instructions to do the same thing. In case you've forgotten, the first "C" in "CISC" stands for complex, as in it does more than just one basic operation.
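
A schematic illustration of that (the assembly in the comments is typical of what a compiler might emit for each architecture, not exact output): x86 can fold a whole load-modify-store into one complex instruction, while a load/store RISC like the PPC spells out every step, and each PPC instruction is a fixed 4 bytes.

```c
/* One C statement, two very different encodings (schematic): */
void bump(int *p, int n)
{
    *p += n;
    /* x86 (CISC):  addl %edx, (%eax)   -- one instruction, ~2 bytes
     * PPC (RISC):  lwz r0,0(r3)        -- load
     *              add r0,r0,r4        -- modify
     *              stw r0,0(r3)        -- store: 3 instructions, 12 bytes */
}
```

So the denser x86 encoding gets more work per byte of instruction cache, which is why "fewer possible instructions" doesn't translate into "less cache needed."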

What I don't know is if the branch prediction can re-order the cache mid-flight or execute instructions out of order like a MIPS R12K... I'll get back to you on that.
Reordering the cache? I think you're confused; cache doesn't need to be reordered. The G5, P4, and Opteron can all reorder instructions in flight, however. About the only powerful modern CPUs that can't are the US3/4 and Itaniums.
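
For the curious, here's roughly what "reorder instructions in flight" buys you at the source level (a hand-waved sketch; the actual reordering happens invisibly in hardware):

```c
/* The two statements below are independent, so an out-of-order core
 * (G5, P4, Opteron) starts the multiply while the memory load is
 * still pending; an in-order core would simply stall. */
long demo(const long *slow_ptr, long x)
{
    long a = *slow_ptr;  /* may miss cache: potentially hundreds of cycles */
    long b = x * 7;      /* independent: executes in the miss's shadow */
    return a + b;        /* only this add truly has to wait for the load */
}
```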

The point is, if CISC CPU (x) needs 120 cycles to figure out one operation and RISC CPU (y) needs only 5 or 6 as it reorders its cache, then you do the math on which CPU wins the little fluffy toy.
You're pulling that out of thin air. It's true that some x86 instructions take a long time, but on average the difference is nowhere near 120 cycles to 6.

If you have a network of computers and you are running a network operating system with a network file system (let's say, um, let me think, BSD)
Are you talking about networking support, like every other darn OS on the market? Neither BSD nor any other mainstream OS allows task sharing between separate computers; special software exists to do that (clustering software).
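
For the record, here's the shape of what that special software does, as a bare-bones sketch (plain POSIX sockets; both "nodes" are faked on one machine with fork(), the port number is arbitrary, and error checks are omitted). The OS gives you the network pipe; chopping work into messages and gathering results is entirely the clustering layer's job:

```c
/* Sketch: a "master" ships one work unit to a "worker" and collects
 * the result -- the pattern clustering software builds on. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(5555);   /* arbitrary port for the demo */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    if (fork() == 0) {                         /* the "worker node" */
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (struct sockaddr *)&addr, sizeof addr);
        int task = 0, result;
        read(c, &task, sizeof task);           /* receive a work unit */
        result = task * task;                  /* "render" it */
        write(c, &result, sizeof result);      /* ship the result back */
        close(c);
        _exit(0);
    }

    int conn = accept(srv, NULL, NULL);        /* the "master node" */
    int task = 7, result = 0;
    write(conn, &task, sizeof task);
    read(conn, &result, sizeof result);
    printf("worker returned %d\n", result);    /* prints 49 */
    close(conn);
    close(srv);
    wait(NULL);
    return 0;
}
```

Real clustering software (MPI, render-queue managers, and the like) layers scheduling, failure handling, and byte-order conversion on top of this, which is exactly why BSD alone doesn't turn a rack of boxes into one computer.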

and you run it all on lots of small, fast RISC CPUs, then you have one large computer that has as many processors as you would like.
No, then you've got a whole bunch of separate computers.

THE ISSUE IS APPLICATION! Or, as I said before, WHY.
You're obviously not someone who has written programs to utilize multiple processors, let alone multiple computers.
 
Originally posted by ddtlm
[ddtlm's point-by-point reply, quoted verbatim above, snipped]

Owned.

---------------------------

Anyway, to add to this fray, I don't think quad procs are really the way to fly. I think dual and single chips are as far as Apple will go for practical, consumer-based applications. Though, if they decide to take on the higher-end server markets, they may develop units with more procs inside.

This is all speculation, of course.
 
G5orbust:

I think dual and single chips are as far as Apple will go for practical, consumer-based applications.
Yeah, I'd settle for a dual-processor POWER5 derivative with SMT. :) Four threads per CPU, wasn't it?

Sedulous:

How about a quad 970FX system? These chips are supposedly much cooler.
Yeah, but the system controller with 4 FSBs and a 256-bit memory interface is still gonna be trouble. There's a reason Intel runs quad Xeons on a single FSB.
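
Rough numbers behind that (a back-of-the-envelope sketch; the ~8 GB/s per-FSB figure and DDR400 memory are assumptions drawn from published G5 specs of the era):

```c
/* Why a quad-970FX system controller is "trouble": the four FSBs can
 * ask for far more bandwidth than the RAM can supply. */
#include <stdio.h>

int main(void)
{
    const double fsb_gbs  = 8.0;       /* per-CPU FSB: ~32 bits each way at 1 GHz (assumed) */
    const int    cpus     = 4;
    const double chan_gbs = 3.2;       /* one 64-bit DDR400 channel */
    const int    channels = 256 / 64;  /* "256 bits of RAM" = 4 channels */

    printf("CPU-side demand: %.1f GB/s\n", fsb_gbs * cpus);       /* 32.0 */
    printf("memory supply:   %.1f GB/s\n", chan_gbs * channels);  /* 12.8 */
    /* The controller must arbitrate a ~2.5x shortfall across four
     * point-to-point buses; a single shared FSB (Intel's quad-Xeon
     * approach) at least keeps that arbitration simple. */
    return 0;
}
```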
 
How about an Xserve Cluster Node Lite, a bare-bones dual G5 without all the server-friendly features? Drop the ECC RAM, drop the PCI-X slots, drop the Mac OS X Server software. Get rid of the serial port and just leave 1 FW 800, 1 FW 400, 1 USB 2.0, and 1 gigabit Ethernet. Make it 2 RU high, so less fancy cooling is needed. Then price it at something like $1,799. Sure, their margins would not be huge at this price, but I bet they'd still make a profit and sell a lot of them. This would be great for smaller shops looking for a cost-effective render/encoding/compiling farm, where $3,000-$4,000 per box is just a bit too steep to justify. My order for three of them would be entered within minutes! 12 GHz of G5 rendering capability for $5,400, mmmmm.....
 
Originally posted by HiRez
How about an Xserve Cluster Node Lite, a bare-bones dual G5 without all the server-friendly features? Drop the ECC RAM, drop the PCI-X slots, drop the Mac OS X Server software. Get rid of the serial port and just leave 1 FW 800, 1 FW 400, 1 USB 2.0, and 1 gigabit Ethernet. Make it 2 RU high, so less fancy cooling is needed. Then price it at something like $1,799.

So you want a rack-mounted... I don't know what to call it.

The machine you describe isn't really very server-class if you strip away things like ECC RAM, redundant power supplies (if they were there to start with), and the rest to make it cheap. Aren't you concerned with reliability and data integrity?

The machine you describe is priced like a consumer machine, but it isn't really very workstation- or consumer-friendly if it is housed in a 2U rack-mountable enclosure... unless you lay it across two short filing cabinets and use it as the desktop...

Umm.. hmm.. With the exception of the processor class, it sounds like you want Apple to just recycle the original Xserve concept--except have them price it at what it is actually worth this time. The first two Xserve revisions really didn't have anything fancy going for them. Come on, Apple! Non-ECC memory in a server? What makes a server anything other than a repackaged workstation? The form factor?

If size and cost are the only considerations when shopping for a server, why not use the ultra small form factor PCs that most vendors sell and run Linux--or if you must, run Darwin--instead? Talk about processor density. And cheap! Why not? Because they don't have server-class hardware inside? Servers shouldn't be the fastest Macs made, they should be the most rock-solid Macs made--and that isn't cheap.

I say let the idea of the ultra low-cost "server" die and be glad Apple is bringing at least a little more serious hardware to their "server-class" effort than the last time. Some things just shouldn't go together, and "cheap" and "server" are two of them. Isn't that what network appliances are for anyway?
 
Originally posted by Quixcube
So you want a rack-mounted... I don't know what to call it.

The machine you describe isn't really very server-class if you strip away things like ECC RAM, redundant power supplies (if they were there to start with), and the rest to make it cheap. Aren't you concerned with reliability and data integrity?
No, the machine I describe is NOT A SERVER (nor an engineering box). That's the whole point: it's a stripped-down network render box (Cinema 4D, Maya, After Effects, etc.), basically, but it could be used for MPEG encoding, distributed Xcode compiles, or whatever other distributed CPU tasks you might need (probably Xgrid will help make more of them available in time). I'm not suggesting they scrap the Xserve; this would be a similar but separate product, one targeted towards people who just want raw G5 processing power but don't need all the extra stuff that demanding server and scientific computing environments require. The G5 Xserve Cluster Node is a step in the right direction, but it still costs as much as a full dual-G5 PowerMac. I'm saying take that concept to the extreme. It wouldn't necessarily even have to be rack-mountable (though the option would be nice); put some feet on them and make them stackable on a table or the floor.
 
Originally posted by HiRez
No, the machine I describe is NOT A SERVER (nor an engineering box). That's the whole point: it's a stripped-down network render box (Cinema 4D, Maya, After Effects, etc.),

...

It wouldn't necessarily even have to be rack-mountable (though the option would be nice); put some feet on them and make them stackable on a table or the floor.

Uh oh, the headless G5 iMac thread is spreading :) Just kidding... Yeah, I understand what you want, but I don't think there is much demand for an extremely gutted version of the Xserve. Most corporations that would be interested in a group of dedicated rendering machines want an industrial piece of hardware even if it isn't the full-fledged high-end model (industrial meaning redundant power, ECC RAM, redundant disks unless netbooting, etc.).

Making do with reality, if a bunch of Xserves aren't in the budget, I would just fill a lab or a department with entry-level single-processor G5 PowerMacs and render on them while people are getting their Microsoft Word on all day long. They will never even know it... The biggest hidden cost would be in gigabit switches to feed every desk, but that could be optional, I guess, depending on the nature of the distributed task. When our university filled up the new student learning center, I would love to have seen them purchase 1100 Macs instead of 1100 Dells. It is amazing what a university can do with 1100 G5s nowadays. :) Of course the students would object to not knowing how to use the computers for anything, but nothing is perfect.

If Apple were to build a fantasy product that met the need for massive processing power on a shoestring budget, though, I would like to see something configured like a strand of Christmas lights--only replace the bulbs with G5s. They would glow when hot and make a festive mood while delivering massively parallel processing power. Just don't let the cat bite the cord.
 
I read that some clone maker back in the 90s had a quad-PowerPC Mac, but it was very expensive.

I don't really see the need for one for consumers, but I could see the need for a dual G4 PowerBook for some high-end consumers/professionals on the go.

It might suck up too much battery time, but I wouldn't think it would cost that much more to make in terms of parts: the extra processor and a possibly slightly larger case to hold all that stuff.

I think someday Apple will have dual-processor laptops, once the battery issue gets solved to a manageable level.
 
Wow, another ancient thread back from the dead. But, since it's alive now...

Originally posted by jefhatfield
I read that some clone maker back in the 90s had a quad-PowerPC Mac, but it was very expensive.
The company was called DayStar, and they made a few different four-processor, 604e-based towers. They ran between $5,000 and $10,000, but were the ultimate in speed at the time. Apple (as well as Umax) also made dual-processor 604e boxes back then--they just disappeared with the move to the G3, because it wasn't multiprocessor-capable.

Nowadays, I'd say there's at least a chance of a return to the quad-processor Mac, if Apple decides to go workstation or maybe do a super-dense cluster node (four processors, only one hard drive or something). The 970FX makes it possible in terms of heat and power draw, but I wonder how big the market really is--they'd be expensive no matter how you look at it, and it's just not something there's huge demand for.

The advances of the 970 and beyond, assuming IBM can keep the momentum up, are far more compelling even with dual processors than a four processor G4 ever would've been, and would make a four-processor G5 mostly unnecessary as well.

By the way, jefhatfield, if IBM can get the 970FX running at reasonable clock rates with a laptop-scale power draw, there's absolutely no reason for Apple to go dual in its laptops--what you've got then is a portable workstation, and while cool, I just don't think there's a big market for such a beast, since people wanting that much power are probably going to want all the other advantages of a tower. It'd also go entirely against Apple's laptop design philosophy of slim, elegant, and long battery life.
 
As for why: it's an old argument. If you do video, 3D animation, or anything else that requires rendering, faster is better. And not everyone wants to have more than one machine in a distributed system. So a quad processor would give you more bang for your buck...

I'd love to have a 16 processor machine myself :D

D
 
Originally posted by Quixcube
Uh oh, the headless G5 iMac thread is spreading :) Just kidding... Yeah, I understand what you want, but I don't think there is much demand for an extremely gutted version of the Xserve.
Well now you've gone and made me create it! I call it the PowerNode. Come on, you know you want one!

[image: powernode_small.jpg]


OK fine, so I was bored :p Here's a larger pic if you want to see it:

PowerNode Home Page
 
[rant]

Why do people still talk about water cooling...? The reason they use circulated air is because it releases the heat right out into the environment... are these water-cooled computers gonna come with a big fVcking water tank to recycle the heated water with colder water...? This topic comes up all the time... I don't get it... :mad:

[/rant]
 
Originally posted by Mr. Anderson
It's more efficient and quieter than air cooling--but you need that radiator..... I don't ever see it happening, even though some machines (supercomputers) have had it in the past.

Yes... more efficient (although there are lots of liquids more suitable for it than H2O), quieter (um, possibly? I guess it would depend on the circulation method and how they removed the heat from the water)... but for a laptop? Or even a desktop? Who wants to carry around an extra water tank? Ugh...
 
Originally posted by Makosuke


By the way, jefhatfield, if IBM can get the 970FX running at reasonable clock rates with a laptop-scale power draw, there's absolutely no reason for Apple to go dual in its laptops--what you've got then is a portable workstation, and while cool, I just don't think there's a big market for such a beast, since people wanting that much power are probably going to want all the other advantages of a tower. It'd also go entirely against Apple's laptop design philosophy of slim, elegant, and long battery life.

G5 laptop? Hey, I am all for it, but I'm not sure it will come out that soon.

In the meantime, a dual G4 could be possible, and be sleek, but the battery time is what may be hard to get around ;)

...but if it could be done, it would be a great stopgap measure if it takes another year to get the G5 into a laptop.
 