Originally posted by Kethoticus


Hence the reason I usually come in here and put this nonsense down. Heck, we don't even know IF this processor--assuming it really exists--will ever find its way into Macs. And even if it does, we've been hearing of speeds of 1.6-2.0GHz. Wowee. Intel and AMD will probably be closing in on 4GHz at that point, but clock speed doesn't count, right?

One of the main workstation platforms used by 3D animation studios and high-end CAD designers who require advanced technology is SGI, with MIPS R14000A 64-bit processors. The FASTEST those clock is 600MHz.

Similarly, IBM makes the RS/6000 and IntelliStation POWER series with SMP configurations and 604e PowerPC processors. The highest any of these clocks is 450MHz, and they cost anywhere from $8,000 to $47,000.

So, depending on the chip, sometimes MHz doesn't matter. If Apple and IBM are REALLY teaming up to put a 64-bit processor into Apple boxes, then I might just wet myself with excitement. Let P4 users get all happy about 4GHz. Big freakin' deal. It seems to me that they have hit an R&D plateau, and are trying to save themselves by upping the MHz number on the box so the drones will continue to buy their product.

Besides, with Apple being 64-bit, maybe they would have the capability to compete with SGI in workstation performance. I'd love to see Pixar switch from SGI to Apple.

However, I do agree that it would be nice to see it sooner than late 2003.
 
Apple CPUs

Assuming IBM's new CPU, the so-called GPUL, shows up in Macs in late 2003, I'd be quite happy to see a new G4 in the meantime. The new G4 would have an on-board memory controller for DDR, and it would be fully MERSI-compliant, enabling quad configurations.

Even if such a new G4 were to intro at only 1.5 GHz, it would still power one hell of a box.

Unfortunately, the rumors suggest that Motorola is not working on a new G4. However, one rumor does suggest that a Book E CPU might make its way into Macs. But I fear this Book E rumor is very unlikely to pan out.

Eirik
 
Originally posted by agreenster


It seems to me that they have hit an R&D plateau, and are trying to save themselves by upping the MHz number on the box so the drones will continue to buy their product.

Oh, have they ever. The tales of woe surrounding the Itanium would be hilarious if they weren't true. The only chip on the Windows side that shows promise comparable to this new IBM chip is AMD's Hammer.
 
Originally posted by Hawthorne


Oh, have they ever. The tales of woe surrounding the Itanium would be hilarious if they weren't true. The only chip on the Windows side that shows promise comparable to this new IBM chip is AMD's Hammer.

Is this a Pentium-compatible chip, or will current Windows apps need to be emulated on it?
 
Originally posted by Bonte
Is this a Pentium-compatible chip, or will current Windows apps need to be emulated on it?

AMD's Hammer-based processors are supposed to be compatible with current 32-bit x86 processors. Intel's Itanium is not compatible.
 
Hawthorne:

Eh? The Itanium 2 @ 1GHz is quite possibly the fastest processor ever made, and Intel has a lot of money to keep dumping on the whole Itanium family for quite some time.

Just because IBM might make a fast chip for Macs does not make them good and everything they do divine. Just because Intel makes the chips that defeat G4s through "unfair" means like high clock speed, or just because Intel chips run Windows, does not make Intel evil and their every product a sham.
 
I hadn't realised that L2 takes quite so much die space. Making L2 and L3 smaller seems to be a sensible way to go, then.

The web site seems a little contradictory on the L3 size - one table says L3 can be 32MB, 64MB or 128MB, whereas the bulk of the text addresses only a 32MB L3 cache. I guess this is a difference between what IBM is actually building versus the design capabilities.

I hope there's some detailed info on GPUL (or whatever!) after the MPR presentation - it took ages to get a really detailed description of Power4 online.

And as a silly aside - I think I met one of the Power4 designers in the Rabbit Ridge vineyard in CA a few years ago. Zinfandel was the subject, though!

Cache is expensive, not only in monetary value but also in space, so it is always one of the first areas to be looked at if you want to cut costs.

The differing values on the web come from the setup of the POWER4. One chip, which is a pair of processor cores, has 32MB of L3 cache. You can then have four chips that form a module with 128MB of combined L3 cache.
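To make the arithmetic concrete, here's a toy C sketch (the 32MB-per-chip and four-chips-per-module figures are the ones quoted above; the code itself is purely illustrative):

#include <stdio.h>

int main(void) {
    const int cores_per_chip = 2;   /* POWER4: two cores per die   */
    const int l3_per_chip_mb = 32;  /* 32MB of L3 behind each die  */
    const int chips_per_mcm  = 4;   /* four dies per module (MCM)  */

    printf("cores per module: %d\n", cores_per_chip * chips_per_mcm);  /* 8     */
    printf("L3 per module: %dMB\n", l3_per_chip_mb * chips_per_mcm);   /* 128MB */
    return 0;
}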

There will be information out if only because of the attention being focused on it. I'd also expect IBM to publish a few journal articles in the very near future on it.

I have a great deal of respect for the chip designers. I had the displeasure of doing materials work a while back, and it's perhaps the most difficult field I have ever had to deal with - maybe because electrical engineering isn't my primary area, or maybe because it just takes some really bright guys to design these things :)

Hmm. Transistor count isn't necessarily the same as die area. Any comments on how these two relate, Telomar?
From memory it takes up around 40% of the die area and ~50% or so of the transistors.


Oh, have they ever. The tales of woe surrounding the Itanium would be hilarious if they weren't true. The only chip on the Windows side that shows promise comparable to this new IBM chip is AMD's Hammer.
The only real advantage to the Hammer that I have seen is that it retains 32-bit compatibility. It is more like a stepping stone, where Itanium is just a blind leap. It really will be a matter of whether Intel is forcing people to change too much too fast, and whether they will introduce a stepping stone to aid the transition.

Itanium is, at least since its latest revision, a good chip, and I expect it will get a lot better. I'd say it still trails the POWER4 somewhat in the real world, though at their respective prices you would hope so. Time will tell how long that lasts. Certainly the gap isn't so great now, but the POWER4 is waiting on a revision.

I had heard that software written for 32-bit processors won't work on 64-bit ones. Is this true?
The Itanium uses some rather poor emulation for its 32-bit mode and is effectively just a 64-bit processor.

The Opteron is 32-bit compatible, all PPCs have the potential to be designed 32-bit compatible, and I imagine the 64-bit desktop processor IBM announces definitely will be.
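A quick way to see why 32-bit binaries can't simply run in a 64-bit environment without a hardware compatibility mode (Hammer, PPC) or emulation (Itanium's slow x86 mode): the basic C type widths differ between a typical 32-bit (ILP32) ABI and a typical 64-bit (LP64) ABI. A minimal sketch - the exact sizes depend on the compiler and platform:

#include <stdio.h>

int main(void) {
    /* Typical 32-bit (ILP32) ABI: int=4, long=4, pointer=4 bytes.
       Typical 64-bit (LP64) ABI:  int=4, long=8, pointer=8 bytes.
       A binary laid out for one can't blindly run against the other. */
    printf("int:     %u bytes\n", (unsigned)sizeof(int));
    printf("long:    %u bytes\n", (unsigned)sizeof(long));
    printf("pointer: %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}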
 
My understanding is that the core(s) in Power4 is actually a G3. If that's true, then stripping cache, reducing the number of cores to one, etc., might simply cripple the chip back to G3 levels. Taking out all the goodies of a PIII made a dog of a Celeron chip, no?

Could someone in the know give some feedback on this aspect of the issue?
 
Originally posted by matznentosh
My understanding is that the core(s) in Power4 is actually a G3. If that's true, then stripping cache, reducing the number of cores to one, etc., might simply cripple the chip back to G3 levels. Taking out all the goodies of a PIII made a dog of a Celeron chip, no?

Could someone in the know give some feedback on this aspect of the issue?

No, that's wrong. POWER4 implements the PowerPC and POWER ISAs. It's also 64-bit. Those two features in themselves (among many, many others) pretty much preclude it being the same core.

You may be thinking of the G4, which in its original implementation was pretty similar to the G3 except for the addition of AltiVec, higher-precision math, and some cache coherency stuff.
 
Originally posted by ddtlm
Hawthorne:

Eh? The Itanium 2 @ 1GHz is quite possibly the fastest processor ever made, and Intel has a lot of money to keep dumping on the whole Itanium family for quite some time.

Just because IBM might make a fast chip for Macs does not make them good and everything they do divine. Just because Intel makes the chips that defeat G4s through "unfair" means like high clock speed, or just because Intel chips run Windows, does not make Intel evil and their every product a sham.

Not quite. The Itanium 2 IS fast, but it also costs around $3500 when you can even find one OEM.
It isn't, by all accounts I've seen, faster than the Power4 though. I don't believe it is faster than the HP 8800 either.

Making chips clock faster isn't unfair, but it is a design decision. The P4 was designed for clock speed above all else; it wasn't designed to perform well per clock. Instructions take too many ticks to complete, and the pipe is very long...
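A toy model shows why. All the numbers below are made up purely for illustration: a longer pipe lets the clock scale up, but every mispredicted branch flushes more stages, so delivered performance grows more slowly than the clock does.

#include <stdio.h>

/* Toy model: average time per instruction when each branch
   mispredict costs a full pipeline refill. Illustrative numbers only. */
int main(void) {
    double ghz_short = 1.0, depth_short = 10.0;  /* short pipe, low clock  */
    double ghz_long  = 2.0, depth_long  = 20.0;  /* long pipe, high clock  */
    double miss_rate = 0.05;                     /* mispredicts per instr  */

    double ns_short = (1.0 + miss_rate * depth_short) / ghz_short;
    double ns_long  = (1.0 + miss_rate * depth_long)  / ghz_long;

    printf("short pipe: %.2f ns/instr\n", ns_short);  /* 1.50 ns */
    printf("long pipe:  %.2f ns/instr\n", ns_long);   /* 1.00 ns */
    /* Doubling the clock bought only a 1.5x speedup in this model. */
    return 0;
}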
Funny thing is, Intel is now publicly stating that they need to revise their design goals to aim for more than just clock speed. Intel's focus for next-gen processors seems to be work per cycle above raw clock. Interesting, considering they currently have the fastest desktop processor. Perhaps they see the end of the line already.

...ffakr.
 
Originally posted by Kethoticus


Hence the reason I usually come in here and put this nonsense down. Heck, we don't even know IF this processor--assuming it really exists--will ever find its way into Macs. And even if it does, we've been hearing of speeds of 1.6-2.0GHz. Wowee. Intel and AMD will probably be closing in on 4GHz at that point, but clock speed doesn't count, right?

If we get 2GHz by the Christmas 2003 season and Wintels go to 4GHz, who really cares?

Because we are still in the MHz range on some Macs while the Wintel world is in the gigahertz range on all models, well, that looks bad.

But who cares after a certain speed? Features and price will be the main selling points by then; the clock speed of the processor won't be the main issue like it still seems to be now.

At one time, hard drive size was a major concern, but that has diminished since HD space is not a major concern for most shoppers at places like CompUSA and Circuit City.
 
Originally posted by ddtlm
Hawthorne:

Eh? The Itanium 2 @ 1GHz is quite possibly the fastest processor ever made, and Intel has a lot of money to keep dumping on the whole Itanium family for quite some time.

Just because IBM might make a fast chip for Macs does not make them good and everything they do divine. Just because Intel makes the chips that defeat G4s through "unfair" means like high clock speed, or just because Intel chips run Windows, does not make Intel evil and their every product a sham.

Hey, if I could make it up, I would. It's not that Intel is evil (they're one of the biggest local employers here; the second biggest is Motorola....:D ), it's just that the Itanium has developed into the silicon version of bloatware. Its problems have been well-documented elsewhere (free subscription required). There may be hope for Itanium 2, but with enough power drain from one chip to blow the average household circuit, the Marines would call the Itanium a Charlie-Foxtrot.....:D
 
Isn't the heatsink on the new DPs really huge? Did anyone ever find out the reason for that? Could this rumor be related in any way? Sort of "getting things in order" for the new chip?
 
There's a lot of BS in this thread

It would definitely be great if Macs got a new, better PowerPC chip, but the Intel-bashing in this thread is ridiculous. A few points:

1) As someone else already pointed out, Intel's MHz advantage isn't just a creation of marketing; it was a design decision. They purposely decided to make a chip that was not very efficient per clock but upon which it would be easy to increase the clock speed, believing that was the right decision in the long run. So far that hasn't been the right decision - AMD beats Intel on price-to-performance. BUT, the P4 is as fast as the AMDs - it's just more expensive too. And in the long run, it is possible that it will turn out to be the right decision. At the very least, it is silly to say that the P4 is a bad chip - it is a very, very fast chip, it is just overpriced.

2) It's total bull**** to say "I hate the MHz myth! PC users are so dumb" and then turn around and say "Macs will kick PC ass next year because they will have 64-bit chips." There is nothing automatically better about a 64-bit chip. I could build a ****ty 64-bit chip just like I could build a chip that runs at 10GHz but is still slow. It's not even clear why Macs would need a 64-bit chip. All current 64-bit chips are aimed at very high-end markets - primarily data warehousing and massive data processing. Macs don't fit that market.

3) The New York Times article (linked by Hawthorne) claiming that Itanium is a failure was so blatantly wrong that it was painful to read. The whole premise - that the Itanium is a failure because Google doesn't buy Itaniums - is total garbage. Anyone who knows anything about how Google works knows that the idea behind Google is that instead of having a few supercomputers, they have decided to buy thousands of very cheap PCs running Linux. Google found that this was much cheaper than buying a few expensive, powerful computers.

4) This, by the way, is why agreenster shouldn't get his hopes too high about future Macs taking over rendering farms for 3D graphics. Yes, SGI used to be the **** when it came to rendering farms. But nowadays more and more companies are going Google's route - buying lots of cheap PCs and rolling their own Linux distro and software. It's possible that the Xserve can make inroads into this market, but it will be tough because price/performance is the key metric, and Macs are losing the price/performance war right now.

5) The Itanium 1 was no good, but Intel made it clear that it was practically a "beta test" - for people who wanted to get a head start on learning to program and creating hardware around the Itanium. The Itanium 2 (McKinley) has been very well received, on the other hand - its SPECfp and SPECint scores were very, very respectable. And this is just revision two. New chips take a while to gain entrance into the high-end market. For example, AMD has had a hard time getting corporations to buy their chips, just because they don't have the right "rep" in corporate circles. Well, the task is even tougher for a new high-end server chip, because no one is going to just replace their existing million-dollar Power4 investment with Itaniums. So Itanium's weak showing so far doesn't mean that the Itanium is a failure. It just means that Intel needs to keep pushing the technology. And Intel has a lot of money, while PA-RISC and Alpha are being put out to pasture. It'll be Sparc, Opteron, Itanium and Power4 competing for the 64-bit crown. If I were a betting man, I'd bet on Itanium winning this war in the long run - just because it's hard to bet against Intel's coffers. They have a lot more money than Sun and AMD, and while IBM has the money to push Power chips, it doesn't have quite the incentive that Intel does, since IBM can make money selling Itanium systems, whereas Intel can't make money selling Power systems.
 
ffakr:

Not quite. The Itanium 2 IS fast, but it also costs around $3500 when you can even find one OEM. It isn't, by all accounts I've seen, faster than the Power4 though. I don't believe it is faster than the HP 8800 either.
Price is not part of being "fastest", so let's not mention it again. Anyway, the HP chip is certainly slower than the Itanium, and HP is gung-ho about going all Itanium, as I'm sure you are aware. The Power4 is certainly a fast chip, and Alphas at 1.2GHz or so are said to be pretty damn fast too. Hence my use of:

quite possibly the fastest processor ever made
See? Got my bases covered. :)

Hawthorne:

Bloated for sure, but Intel may just be ahead of their time. It has not yet been shown whether or not their EPIC is better than conventional design approaches.

jgalun:

Right on! :D
 
Originally posted by ddtlm
ffakr:

Price is not part of being "fastest", so let's not mention it again.

TCO is important, however. I don't want to get into the whole TCO debate here because it tends to be a Pandora's box, but in response to this and jgalun's post: if you can't compete on TCO, datacenters won't buy your chips. If your chips aren't bought, there is only so long that even Intel will throw money at a platform before deciding to kill it. Remember, these guys are out to make a profit, not make the fastest chip just because they can.

Power consumption is becoming a big concern in the datacenter. Chips that aren't power efficient cost disproportionately more. Not only do you have to expend more energy to power them, but also to condition that power, cool the machines, monitor and replace failed components, and use higher-grade materials to compensate for the added heat. If datacenters can get comparable or "good enough" performance from a processor architecture that gives them a significant savings in TCO and could possibly be a less expensive capital investment, why use Itanium?
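As a crude sketch of that compounding (every number here is hypothetical, just to show the shape of the math):

#include <stdio.h>

/* Hypothetical datacenter cost sketch: every watt the CPU burns must
   also be conditioned and cooled, so CPU power is multiplied through. */
int main(void) {
    double cpu_watts    = 130.0;        /* a hungry 64-bit server chip, say  */
    double overhead     = 1.8;          /* cooling + conditioning multiplier */
    double usd_per_kwh  = 0.08;
    double hours_per_yr = 24.0 * 365.0;

    double kwh  = cpu_watts * overhead * hours_per_yr / 1000.0;
    double cost = kwh * usd_per_kwh;
    printf("~$%.0f per year per CPU, before any hardware costs\n", cost);
    return 0;
}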

Because Itanium is a statically scheduled architecture, many people are wondering what Intel is going to do about power consumption and management. Write it into the compiler? That seems a little iffy to me. Most of the good power management schemes that are being discussed right now by CPU vendors rely heavily on dynamic scheduling. Intel needs to address this issue.

Bloated for sure, but Intel may just be ahead of their time. It has not yet been shown whether or not their EPIC is better than conventional design approaches.

First off, I think it was HP that was ahead of their time. They just needed Intel's marketing clout to reach the economies of scale the idea needed. Remember that Itanium started life as a VLIW cousin of PA-RISC.

However, I do see significant potential in the Itanium architecture as far as performance goes. In only its second revision it has managed to deliver world-class performance: second only to POWER4 in integer and second to none in FP.

However, everyone thought RISC ideas would totally supplant CISC thinking back in the early days of RISC. Instead, you got a nice blending of both in modern architectures, depending on application. Possibly we'll see EPIC-like thought going into newer RISC chips and some RISC-type thought going into EPIC.
 
Re: There's a lot of BS in this thread

Originally posted by jgalun
4) This, by the way, is why agreenster shouldn't get his hopes too high about future Macs taking over rendering farms for 3D graphics. Yes, SGI used to be the **** when it came to rendering farms. But nowadays more and more companies are going Google's route - buying lots of cheap PCs and rolling their own Linux distro and software. It's possible that the Xserve can make inroads into this market, but it will be tough because price/performance is the key metric, and Macs are losing the price/performance war right now.

Go back and re-read.

I never mentioned rendering farms. Being a professional animator, I'm constantly looking for a more powerful WORKSTATION, which allows me to animate in real time with the least amount of lag possible. I want to use Apple (since I really like OS X, and I'm pretty pleased with everything about the computer minus the speed) but am stuck using an XP box. Two Xeons and a 128MB Quadro4 video card let Maya float right along.

Most people in this industry use 64-bit IBM (w/Linux) or SGI boxes for workstations. I want to use Apple, but not the G4 chip, because it is simply too slow. (And please, don't tell me a dual 1GHz G4 will run faster than a dual 2.0GHz Xeon running Maya. We've tried it many a time. The G4 runs choppier, renders slower, and is generally slower.)
 
Originally posted by agreenster


One of the main workstation platforms used by 3D animation studios and high-end CAD designers who require advanced technology is SGI, with MIPS R14000A 64-bit processors. The FASTEST those clock is 600MHz.

Besides, with Apple being 64-bit, maybe they would have the capability to compete with SGI in workstation performance. I'd love to see Pixar switch from SGI to Apple.


No, the only animation studios still using SGIs are using them for legacy software. For most commercially available software, like Maya, an AMD Linux box with current nVidia graphics is way faster.

Pixar is doing a lot with Linux too.
 
Originally posted by Tue12
If the CPU reaches Apple, their marketing department is going to have one tough message to communicate.

Not only will they have to communicate the idea of the MHz Myth, they will also need to communicate the idea of 32-bit vs 64-bit. :)

I think the 32-bit vs 64-bit message is easy to get across. Call bits "power" and you've got people drooling over 64 instead of 32... maybe side-by-side comparisons of "16-bit Intel" (DOS screen) vs "32-bit Mac" (OS X), and "32-bit Intel" (Win XP/BSOD) vs "64-bit Mac" ("... come to an Apple Store to find out ...").

Of course, technogeeks like us know that bit width != power, but it's a natural assumption for most consumers.

Also, note that Intel is gearing up to push lower-clocked CPUs (Banias, Itanium 2, etc.), so we won't be up against a "MHz is everything" P4 marketing effort a year from now.
 
Originally posted by sentinal


No, the only animation studios still using SGIs are using them for legacy software. For most commercially available software, like Maya, an AMD Linux box with current nVidia graphics is way faster.

Pixar is doing a lot with Linux too.

Maybe for a single-user modeling workstation, but SGIs in their scaled-up implementations are still far more powerful than these machines. Move into visualization and it's another ballpark. SGIs have gobs of bandwidth, much more than you can get on any "workstation"-class PC. SGI's higher-end boxes use a switched architecture with cache-coherent non-uniform memory access (ccNUMA).

When you need things done in real time, in a single system image, SGI is where it's at for graphics. Granted, render farms are something completely different, but let's be more specific here about the application we're talking about.
 
Originally posted by nixd2001
6.4GB/s of memory bandwidth is great if you can manage it, but if commodity DRAM parts won't let this be achieved, this will be simplified as well to make it cheaper.


Note that 6.4GB/s is the BUS bandwidth, not memory bandwidth. Bus bandwidth is basically "CPU -> everything else", meaning memory bandwidth plus the PCI bus plus AGP plus... So 6.4GB/s is a valid bus bandwidth. I would imagine the memory interface would be DDR333, to take advantage of as much of this bus as possible.

Also, note that DMA allows devices to access memory directly (i.e., the network interface pulling data straight from main memory, or a CD-R reading from or writing to memory, etc.). On current PowerMacs this doesn't affect CPU performance at all, as the CPU bus is so much smaller than the memory bandwidth. With a 6.4GB/s CPU bus you've got the reverse situation: DMA and the like might affect CPU performance by starving the CPU of memory access. Of course the CPU will still be getting memory access faster than currently, but the relative dynamics would change and some now-optimized code might have to be rethought.
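A quick back-of-the-envelope check on those figures (assuming DDR333 on a 64-bit data path; the actual memory controller is anyone's guess):

#include <stdio.h>

int main(void) {
    /* Peak bandwidth = transfers per second * bytes per transfer.
       DDR333: ~333 million transfers/sec on a 64-bit (8-byte) path. */
    double ddr333_gbs = 333e6 * 8.0 / 1e9;  /* ~2.7 GB/s per channel */
    double bus_gbs    = 6.4;                /* quoted CPU bus figure  */

    printf("DDR333 peak: %.1f GB/s\n", ddr333_gbs);
    printf("CPU bus:     %.1f GB/s\n", bus_gbs);
    printf("channels needed to saturate the bus: %.1f\n", bus_gbs / ddr333_gbs);
    return 0;
}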
 
Originally posted by beatle888
Isn't the heatsink on the new DPs really huge? Did anyone ever find out the reason for that? Could this rumor be related in any way? Sort of "getting things in order" for the new chip?

People have claimed that the new heatsink is huge overkill, but I do not see that. While the heatsink on my 1GHz is quite large, it also gets quite warm. I think the heatsink is large so that the fan does not have to run as much or as fast. I have to imagine that the heatsink needs to be its size for the 1.25GHz machines, and that to cut costs Apple just used the same configuration for all of them.
 
Originally posted by sentinal


No, the only animation studios still using SGIs are using them for legacy software. For most commercially available software, like Maya, an AMD Linux box with current nVidia graphics is way faster.

Pixar is doing a lot with Linux too.

You are wrong, but that's okay.

The only major studio I know of that isn't using SGI is PDI/DreamWorks. They are using HP Linux boxes.

Besides, genius, the first box to run Maya was an SGI running IRIX. I even learned to use Maya on an SGI.
 