
View Full Version : PowerPC 970 requires process shrink to reach 2.5Ghz


gaomay
Mar 5, 2003, 04:48 AM
According to this article on ZDNet;

http://news.zdnet.co.uk/story/0,,t269-s2131244,00.html

Pushing the 970 to 2.5GHz will require a process shrink to .09 micron. So it looks like we won't be getting there until early 2004.

timbloom
Mar 5, 2003, 11:45 AM
no biggie; considering the 970 is "supposed" to be 3x faster per MHz than the current G4, just having the chip running in my mac would be worth it. 1.8GHz is still faster than what we have.

jefhatfield
Mar 5, 2003, 11:50 AM
if apple could get a machine running on the ibm 970 for us this year, that will be great...even if it only tops out at 1.8 ghz using the .13 micron process

a .09 micron process will really be cool and i wouldn't be surprised if it took until 2004 to get that...but then the intel chips will be at 4 ghz

i can see pro apple desktops with the 970s, the pro laptops with G4s, and everything consumer with G4s at the end of this year

what i am waiting for is to see a laptop with a 970 in it but that will most likely be next year when the wintel world will have 3+ ghz mobile, low power chips

...i guess we can never catch up:rolleyes:

WannabeSQ
Mar 5, 2003, 11:59 AM
does this mean we will still have duals? I want a dual so much because of how people rave over their performance. And Pro Tools now supports duals, so i would get more bang for my buck. MMMM Dual 1.8 970 /drool

timbloom
Mar 5, 2003, 12:16 PM
The 970 can have dual-core versions, iirc, which means two processors on one chip. I think it would be smart of Apple to still offer duals, but the need for them with these faster chips may only be on the top end, due to the rather high prices of the 970.

jefhatfield
Mar 5, 2003, 12:24 PM
Originally posted by timbloom
The 970 can have dual-core versions, iirc, which means two processors on one chip. I think it would be smart of Apple to still offer duals, but the need for them with these faster chips may only be on the top end, due to the rather high prices of the 970.

when any new chip comes out, the price is very high at first but then drops fairly quickly

apple will have to keep the prices more competitive to keep up market share

but us macheads being who we are, apple could stay afloat with less than a 1 percent market share...it's just that they won't really grow much and more likely than not, they will shrink

but as long as apple inc breaks even or makes a profit, there is a reason for them to survive

i think apple saw its all-time market share high with the crt imac back in the late-90s, and that great episode in apple's history will never be repeated since it also corresponded with the growth spurt of the internet and the dot-com revolution

if apple stays a small, efficient company like bmw is to the car world, that will be ok but not optimal

...optimal would be apple inc slowly gaining market share until they became the leader in hardware and operating systems (like they used to be more than a couple of decades ago)...but with steve jobs at the helm, that will not likely happen since he is a good short term sprinter, but not a marathon runner like bill gates and microsoft

strider42
Mar 5, 2003, 01:03 PM
Originally posted by timbloom
The 970 can have dual-core versions, iirc, which means two processors on one chip. I think it would be smart of Apple to still offer duals, but the need for them with these faster chips may only be on the top end, due to the rather high prices of the 970.

can you point me to a link where anything says the 970 has dual cores as a possibility? the power4 is a dual-core chip, and although the 970 is based on it, it's a single-core cpu. I suppose it might be possible to make it dual core, but I haven't read anything indicating that could happen or is likely to happen. My feeling is that it won't happen.

KingArthur
Mar 5, 2003, 02:30 PM
I would like to see that site, too. I think what has happened is that people have been using the two chips so synonymously that they have forgotten the difference. True, a dual-core (and note the usage of "dual" ;), Ryan) 970 MIGHT be a possibility, but I don't think we will see it. Remember, the 970 is more of a consumer/low-end server chip, unlike the Power4, and I don't think they are going to stress dual cores in blade servers, considering blade servers are designed to be racked like the Xserve. I think the 970 will be focused more on higher GHz than on efficiency features like dual cores or SMT (Hyperthreading). I do think we will see SMT eventually, which will be a weak replacement for dual cores, but I don't think we will see it until Intel gets the bugs worked out of theirs. I envision dual 970s in PowerMacs for quite a while. Plus, Apple didn't design an OS to operate with multiple processors (up to 32) just to abandon that feature as soon as possible.

Frobozz
Mar 5, 2003, 03:34 PM
Originally posted by timbloom
The 970 can have dual-core versions, iirc, which means two processors on one chip. I think it would be smart of Apple to still offer duals, but the need for them with these faster chips may only be on the top end, due to the rather high prices of the 970.

According to the information I have read from both rumor and real news sites, the 970 will be a cheaper part than the G4. The 970, if I'm not mistaken, will only have a single core.

Frobozz
Mar 5, 2003, 03:38 PM
Originally posted by KingArthur
I would like to see that site, too. I think what has happened is that people have been using the two chips so synonymously that they have forgotten the difference. True, a dual-core (and note the usage of "dual" ;), Ryan) 970 MIGHT be a possibility, but I don't think we will see it.

Yeah, I agree. I think that it makes more sense to provide an SMP-capable single-core chip. It's a more flexible design. They can be used in a variety of applications, either as one chip, in tandem, in fours, etc. But the customer, Apple, has that option and is not forced to pay higher prices, produce more heat, and eat more power than the single-core chips.

type_r503
Mar 5, 2003, 04:06 PM
This seems to be sandbagging by Apple. They don't want IBM to jump the gun. This will reduce our hopes and increase the wow factor when they introduce a 2.5GHz DP 970 PMac in June.

Apple probably got word of the PR and had IBM take it down and leak some contradicting info. This seems to contradict what apple is saying about market share and revenue predictions.

jefhatfield
Mar 5, 2003, 04:11 PM
Originally posted by type_r503
This seems to be sandbagging by Apple. They don't want IBM to jump the gun. This will reduce our hopes and increase the wow factor when they introduce a 2.5GHz DP 970 PMac in June.

Apple probably got word of the PR and had IBM take it down and leak some contradicting info. This seems to contradict what apple is saying about market share and revenue predictions.

i don't think a rather small 7,000 person company like apple could have any leverage over a giant company like ibm

but whatever the speed of the 970, as long as it is even marginally faster than the g4, it will still wow the mac crowd because it will be the next-generation pro processor

KingArthur
Mar 5, 2003, 04:34 PM
Let us not forget that the 970 is a 64-bit processor, too, designed to move large amounts of data and do double the precision of existing 32-bit processors. We probably won't see a big difference in the speed of programs at first, other than what the faster processor itself provides, but once companies start developing 64-bit software, we might see some big improvements. I don't know exactly how they build programs in machine code, but imagine if, when a program had to add the same thing to two different numbers, it could combine them into one number, add the value to both halves, and then separate the number back into two parts. I know it probably doesn't work that way, but at least it is something to think about. There has to be some way for things to be optimized for the 64-bit architecture.

MrMacMan
Mar 5, 2003, 04:35 PM
I'd like to see the 970 come out this year. I mean, I don't expect the 2.5GHz part to come this year, but there are few sources close to this chip that are leaking, so there's not much to say.

Dont Hurt Me
Mar 5, 2003, 04:42 PM
I wish someone had that original IBM post. I thought it said they had reached the 2.5GHz mark on the current process and that ZDNet is misquoting. IBM pulled that thing fast. Someone correct me, but didn't that IBM release say they had hit 2.5 on the current process? Maybe that's why it was pulled: because IBM misquoted themselves.

ffakr
Mar 5, 2003, 04:59 PM
I think the ZDNet article is crap. They seem to misrepresent a number of points. I had the distinct feeling that some junior writer [who didn't understand the tech, or didn't bother to research well] penned the piece.

The page at IBM Germany that leaked the 2.5 GHz blade servers SPECIFICALLY noted that the 970s in the blades would be built on .13 micron SOI copper. I tend to believe IBM over ZDNet.

Of interest, however, is the claim that IBM will be showing off the blades at CEBit in Germany... pre-production units, of course. I just noticed today that CEBit is next week! We may get news of how far along the CPU is and how powerful it is in under a week's time!

OK... stuff I didn't like about the ZDNet article...
...PowerPC 970 chip that eventually will reach 2.5GHz...
The poor choice of words here implies that the 970 is expected to top out at 2.5GHz, though no one has any idea how fast the CPU will go before the design is maxed out.

.... expected to arrive on the market in the second half of this year and reach 2.5GHz speeds in a future incarnation ....
Even worse: now the implication is that the 970 won't even make 2.5GHz, but that a later version may scale that fast.

... at last October's Microprocessor Forum, and at the time said the chip would run up to 1.8GHz, suggesting that a manufacturing process upgrade would be necessary before higher clock speeds could be achieved....
This reads like the author trying very hard to read into what IBM said. It doesn't sound like fact (I certainly don't remember IBM stating this 'fact'). Inferences read into statements belong in rumor posts... not in supposedly professional reporting.


... IBM says a later PowerPC 970 will reach 2.5GHz using a 90-nanometre manufacturing process. ...
Funny, I've been keeping up on any 970 news and I've never heard IBM say this... In fact the IBM Germany link specifically states this is not the case.

... Even at the lower end of its range, 1.8GHz, the upcoming chip will run nearly twice as fast as IBM's quickest existing PowerPC chip, the 1GHz 750FX...
I think this would entirely depend on what metric you use. "As fast as" is a very bad way to compare MHz across totally different architectures. How about SPEC? ...some other bench? I don't care if the 970 runs at 50MHz as long as its computational power is multiples of a G3's.

... In a few years, enthusiast home users will be asking for greater amounts of memory. ...
What the hell is this writer smoking? The P4 and G4 have 38-bit memory addressing (Athlons probably do too). You can buy Macs with 2GB of RAM now, and 32-bit PC servers with 6GB of RAM. Macs could support more than 2GB now if Apple wanted to.
How much memory will a consumer need in a few years? I have 768MB and I consider myself a power user (i.e. someone running sendmail, apache, mysql, and at least a dozen apps at any given time). I NEVER run out of memory. I have the distinct feeling that if I had 2GB, I'd be doing just fine with Office XI and Safari 3.0.
Consumers aren't going to need 64-bit memory addressing in a few years.
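For scale, here's the arithmetic behind those addressing limits (a quick sketch of mine, not from the thread): each extra address bit doubles the addressable memory.

```python
# Addressable memory for various address widths (using binary units).
def addressable(bits: int) -> str:
    total = 2 ** bits  # bytes addressable with this many address bits
    units = [("EB", 2**60), ("PB", 2**50), ("TB", 2**40), ("GB", 2**30)]
    for name, size in units:
        if total >= size:
            return f"{total // size} {name}"
    return f"{total} B"

for bits in (32, 36, 38, 64):
    print(f"{bits}-bit: {addressable(bits)}")
# 32-bit: 4 GB, 36-bit: 64 GB, 38-bit: 256 GB, 64-bit: 16 EB
```

So extended 36- or 38-bit addressing already covers far more RAM than any 2003-era desktop ships with; the jump to 64-bit is about headroom, not immediate need.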

ffakr
Mar 5, 2003, 05:10 PM
Originally posted by KingArthur
Let us not forget that the 970 is a 64-bit processor, too. Designed to move large amounts of data and do double the precision of existing 32-bit processors. We probably won't see a big difference in speed of programs at first other than that caused by the faster processor, but once companies start developing 64-bit software, we might see some big improvements. I don't know exactly they build programs in machine code, but think if they could optimize a program that, when having to add the same thing to two different numbers, it combined them into one number, added the number to both sides of the decimal, and then seperated the number back into two parts. I know it probably doesn't work that way, but at least it is something to think about. There has to be some way for things to be optimized for the 64-bit archectecture.

It doesn't work this way: 64-bit chips don't pack registers the way vector units pack registers.
64-bit processors offer larger memory addresses (a full 64-bit address space is 16 exabytes). They also offer greater-precision integer math. BTW, you can do 64-bit integer math on a 32-bit processor, it just takes a lot longer.

Average users won't benefit much from the fact that the 970 (or the Athlon 64) is a 64-bit processor. They'll benefit from the fact that these are brand-new designs drawing on decades of research. They will be inherently more powerful because they are designed better, not because they have wider integer registers.

Now, 64 bits will benefit the hell out of scientists, researchers, 3D artists, CAD designers, database developers, or anyone doing very high-precision and/or memory-intensive work. It may benefit encryption/decryption speed, depending on the algorithm... but I'm not an expert in such things.
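To illustrate the "it just takes a lot longer" point (my own sketch, not from the thread): a 32-bit CPU has to emulate a 64-bit add in two 32-bit halves, propagating the carry by hand.

```python
# 64-bit addition the way a 32-bit CPU has to do it: split each
# operand into two 32-bit words and propagate the carry manually.
MASK32 = 0xFFFFFFFF

def add64_on_32bit(a: int, b: int) -> int:
    lo = (a & MASK32) + (b & MASK32)   # add the low words (may overflow 32 bits)
    carry = lo >> 32                   # carry into the high word
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

# Several instructions' worth of work instead of one native 64-bit add.
assert add64_on_32bit(2**40 + 5, 2**40 + 7) == 2**41 + 12
```

A native 64-bit ALU does the same thing in a single operation, which is the whole advantage for wide integer math.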

timbloom
Mar 5, 2003, 05:28 PM
Well, I tried to google for any sources about a possible dual-core 970, and I did not find anything. So I could easily be completely mistaken... don't mind me :rolleyes:
I have just heard it so many times from people in these forums.

ryan
Mar 5, 2003, 05:33 PM
Originally posted by timbloom
no biggie, considering the 970 is "supposed" to be 3x faster per Mhz than the current G4, just having the chip running in my mac would be worth it. 1.8 ghz is still faster than what we have.
Actually, I think the estimates are that the 970 is "supposed" to be 2x faster per MHz than the G4. Regardless, having the equivalent of *only* a 2.8GHz G4 in a few months works for me.

ryan
Mar 5, 2003, 05:34 PM
Originally posted by timbloom
The 970 can have dual-core versions, iirc, which means two processors on one chip. I think it would be smart of Apple to still offer duals, but the need for them with these faster chips may only be on the top end, due to the rather high prices of the 970.
I thought the 970 was a dual core chip too until somebody (on this forum I believe) slapped me down and had a good link stating that it wasn't.

DarkNovaMatter
Mar 5, 2003, 05:38 PM
Ok, I am wondering one thing: when are the blade servers coming out? I thought they would be coming out soon. Why would IBM announce servers based on the 970 at up to 2.5GHz if they need a .09 micron process? Does this mean the .09 process is coming sooner than originally said? Or are the servers not coming out till next year, when the .09 process was supposed to be used?

nuckinfutz
Mar 5, 2003, 05:38 PM
Hmmm interesting times.


ffakr, I noticed the PR *did* mention 2.5GHz at .13 micron. Hell, we all pretty much could assume that the PPC 970 could hit 2.5 at .09 micron. The big stir was whether IBM could pull those "megahurts" out of the current process.

I look forward to Cebit :cool:

As for Dual Cores no I have not seen one iota of evidence supporting Dual Cores.

I think this may happen, perhaps in late 2004. Dual cores would be preferable to two separate CPUs because of the more efficient cache coherency between the cores. Sure, SMP systems handle that, but that's more traces and pins to account for.

I'm pretty sure we'll see SMT coming. Keep in mind Intel's version of SMT (Hyperthreading) is different from IBM's. The Power5 will have SMT, and we'll get to see how it affects performance and how efficient it is. IBM has been rumored to claim "up to 80% efficiency," but as always, take that with a grain of salt until you see it overall.

I do think that SMT is going to be the next thing. Intel is using it already(The P4 kind of needs it since it doesn't support SMP).

SMT will be a natural for dual-core procs. Imagine dual core with 2-4 threads per core and you have multitasking at its best.

I expect Apple to eventually sync with BSD 5.0, which offers enhanced SMP support (fine-grained locking). I hope this can be brought over to Darwin.

Let's just say come 2006...we'll be running some damn good Hardware!

macmunch
Mar 5, 2003, 05:52 PM
I think it's nearly clear we will see one more revision, or even two, before we see the first PPC 970 in a Mac.

According to the last Moto roadmap, that seems not so bad, I think.

See --> the next PowerMac will have a PPC 7457; we don't yet know whether it will have full DDR. It will clock up to 1.8GHz.

The next revision of the PowerMac will be the one with... (I forgot the name) the G4 revision with RapidIO, and that will surely have full DDR. It will reach 2GHz, I think, or even more.

So the January revision of the PowerMac will, I think, top out at dual 2GHz, with RapidIO and full DDR. It will be very fast, I think. And the then-coming PPC 970 will boost the Mac out in front.

They'll start using the PPC 970 at 0.09, so they'll wait one revision and then adopt it at the higher clock rates that Moto will never reach with the G4.

nuckinfutz
Mar 5, 2003, 06:23 PM
I think it's nearly clear we will see one more revision, or even two, before we see the first PPC 970 in a Mac.

According to the last Moto roadmap, that seems not so bad, I think.

See --> the next PowerMac will have a PPC 7457; we don't yet know whether it will have full DDR. It will clock up to 1.8GHz.


That wouldn't make sense, though. Even if you add DDR support to the MPX bus, it's still not going to speed up Macs enough to be competitive. Keep in mind the SPEC scores show a HUGE disparity:

PPC 970 @ 1.8GHz: SPECfp 1050, SPECint 937

PPC 75xx @ 1GHz: SPECfp 187, SPECint 306

There simply is NO contest here. Why would Apple and Motorola redesign the MPX bus to support DDR for only one or two generations? It doesn't make sense.

The next revision of the PowerMac will be the one with... (I forgot the name) the G4 revision with RapidIO, and that will surely have full DDR. It will reach 2GHz, I think, or even more.

Not going to happen. Moving a 7-stage-pipeline G4 to .13 micron will probably yield a 20% increase in clock speed. We'll be lucky to hit 1.6-1.8GHz with a G4 this year. Moto won't be shipping these chips for a while, and in the meantime PPC 970-based systems will be demoed in a week ;)

So the January revision of the PowerMac will, I think, top out at dual 2GHz, with RapidIO and full DDR. It will be very fast, I think. And the then-coming PPC 970 will boost the Mac out in front.

The PPC 970 is roughly equivalent to a 3.4GHz G4 if SPEC scores are any indication, so unless two G4 chips plus 1-2MB of level 3 cache are cheaper than a PPC 970 system (hint: they're not), Apple would be shooting itself in the foot.

People, it's altogether possible that we might not see PPC 970-based Macs in 2003, but it's unlikely.

1. IBM is showing PPC 970 Blades at Cebit

2. IBM is already booting Power5-based systems

There's no reason to think that a late summer release isn't possible.

phampton81
Mar 5, 2003, 06:54 PM
I think it is possible that the confusion about the 970 being dual core may have originated from the thread on the Power5 and its possible derivative. Correct me if I am wrong, but I believe there was much speculation saying that if, or when, the Power5 derivative was made, it would have dual-core capabilities.

nuckinfutz
Mar 5, 2003, 07:06 PM
Originally posted by phampton81
I think it is possible that the confusion about the 970 being dual core may have originated from the thread on the Power5 and its possible derivative. Correct me if I am wrong, but I believe there was much speculation saying that if, or when, the Power5 derivative was made, it would have dual-core capabilities.

Yeah probably.

Dual cores aimed at the markets Apple touches would most likely need to be 90nm. Even then we may be waiting a while... but it is the future. Even Intel will be bringing dual cores to the masses within the next few years.

ffakr
Mar 5, 2003, 09:03 PM
I believe IBM specifically said that the 970 would be single core and that dual core was not on the roadmap.
Making a dual-core 970 defeats the purpose of making it a Power4 lite. You'd have to add extra logic to make each core play nicely with the other, and you'd more than double the size of the die. Cost and power consumption would go up, and you'd have a Power4 with Altivec... not the best processor for a small server or desktop box.

When IBM shrinks to .09, anything is possible. The die will shrink quite a bit, and dual core may be a possibility. But I doubt IBM would redesign the processor for this when the Power5 (and any possible derivatives) is only slightly over a year out at this point. If there are changes to the 970 after the die shrink, I'd wager it would simply be more on-die cache.

Flynnstone
Mar 5, 2003, 09:34 PM
This is my understanding of the IBM Power4, Power5, and 970.
The Power4 is dual core: it has two PowerPC cores on it, plus cache. The Power5 is dual core as well, but supports something equivalent to Hyperthreading (as on higher-end P4s), so it would appear to have 4 processors on board.
The 970 is a scaled-down version of the Power4: basically yank out one processor core and add an Altivec unit. Other changes as well.

Just my $0.02.

suzerain
Mar 5, 2003, 10:21 PM
Too many people seem to be confusing the POWER4 and the PowerPC 970.

The PowerPC 970 is derived from POWER4 technology, but it is not the same thing. IBM learned some things while making the POWER4 and applied them to create this all-new chip for lower-end computers.

There's too much publicly available, nonspeculative information for so many people to be this confused. Here's what's known about the processor, because IBM announced it at the Microprocessor Forum last year. I'll quote an article written the day after IBM's announcement:

The PowerPC 970 triples the length of the PowerPC pipeline, which translates into a higher clock speed: 1.4 to 1.8 GHz at the core's introduction.

OK, so first: they lengthened the pipeline of the PowerPC to scale megahertz higher. Now, we know from the leaked blade server press release that they may be on the verge of scaling speeds even higher...great, this is good news. The numbers are less important than the fact that it's capable of going where no PowerPC has gone before.

...the front-side bus can transfer up to 7.2 Gbytes per second, roughly four times the bandwidth of the current Pentium 4 front-side bus...The front-side bus electrically runs at 450-MHz, double-clocked to an effective rate of 900-MHz, generating a peak bandwidth of 7.2 Gbytes or 6.4 Gbytes/s of useable bandwidth after transaction overhead is taken into account

Well, this is huge. Obviously, our bus limitations on the Mac are an even bigger problem than our CPUs. This chip will allow Apple to make bus speeds...well...a hell of a lot faster, probably using HyperTransport, since it runs at the same speed.
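The arithmetic behind that 7.2 Gbytes/s figure checks out (my own sketch, assuming, as widely reported, that the 970's front-side bus is a pair of unidirectional 32-bit links):

```python
# Rough arithmetic behind the quoted 7.2 GB/s peak bandwidth.
# Assumption (not from the quote itself): two one-way 32-bit links.
effective_mhz = 900            # 450 MHz, double-clocked
bytes_per_beat = 4             # each link is 32 bits wide
links = 2                      # one link in each direction
peak = effective_mhz * 1e6 * bytes_per_beat * links   # bytes/second
print(peak / 1e9)              # 7.2 (GB/s)
```

The 6.4 GB/s "useable" figure in the quote is just that peak minus transaction overhead.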

"Our goal in designing the PowerPC 970 was to enable (symmetric multiprocessing) while still supporting 32-bit code with a high level of performance," said Peter Sandon, senior processor architect within the PowerPC organization at IBM Microelectronics.

OK, so YES, these chips support SMP. In other words, Apple can make dual processor 970-based Macs if they want to.

Second, the fact that the chip is 64 bit is irrelevant to current software. Apparently, our current 32 bit software will run just fine. When it's reworked to be 64 bit, so much the better.

My speculation: I read somewhere, and can't provide a link, that the 970 in fact supports 8-way multiprocessing. Note that Apple spent time last year buying up high-end Hollywood-quality software, yet lacks a workstation to go along with it. I think there's a very, very, very good chance we'll see high-end 5-10 thousand dollar Apple pro workstations, and rackmounted servers, to sell to film production companies and the like.

The core, as defined, contains 64 Kbytes of instruction cache, 32 Kbytes of data cache, and 512 Kbytes of 8-way set associative level 2 cache. Unlike the Power4, the core does not apparently contain an onboard cache controller to enable the use of off-chip L3 cache.

OK, so no L3 cache in these future machines. Obviously, since bus speed will be like 7 times faster than we're used to, this won't be much of an issue; the chip will stay fed.

Performance-wise, IBM believes the chip can record a benchmark of 932 on SPECint2000 and a score of 1051 on SPECfp2000, both at 1.8 GHz. Peak SIMD GFLOPs should be about 14.4, Sandon said. Using Dhrystone MIPS, the chip should output a score of 5,220, or 2.9 DMIPS/MHz. IBM expects the chip should test 18 million RC5 keys per second.

OK, so... it's... umm... fast. We can all understand that. Yes, the 1.8GHz scores pretty much match the 3.2GHz Pentium 4, so if it goes up to 2.5GHz, that'll be... well... awesome.

IBM will use a 0.13-micron SOI process with 8 levels of copper to manufacture the chip, which should require a 576-pin package; Sandon did not disclose the die size. IBM expects the chip will draw between 19 watts and 42 watts, depending on whether a 1.2-GHz (1.1 volt operating voltage) or 1.8-GHz (1.3 volt) clock speed is used.

The current G4 draws about 30 watts, so it's safe to assume that the 1.8GHz part, at 0.13 micron, will be stretching the limits of the PowerBook. But the 1.2GHz chip, if it's as low as 19 watts? Wow. Keep in mind that this chip should theoretically perform 2-3 times as fast as a current G4, clock for clock... and at 19 watts, it makes it possible to either create dual-processor PowerBooks or increase battery life, either of which would be just fine with me.

Of course, they could sacrifice megahertz for wattage, and, as they move to .09 micron, power consumption will undoubtedly improve.

A note about the press release: again, this is my speculation. We don't know how far in advance their marketing department creates press releases for future products. In other words, it *could* mean they will release blade servers soon, but it's more likely that it'll be several months away.

Remember, Apple began selling 1.42 Ghz machines two months ago, and they're just starting to ship now. So, assuming this press release wasn't meant to come out until next week, and it's only a product announcement, it could be as much as 6 months or more before those blade servers even see the light of day outside of IBM testing. Then again, they could release them next week, too, but I think that's optimistic, given that they are going out of their way to say the current blades are prototypes.

Here are some resources:
ExtremeTech (http://www.extremetech.com/article2/0,3973,636154,00.asp), eWeek (http://www.eweek.com/article2/0,3959,634695,00.asp), IBM (http://www-3.ibm.com/chips/techlib/techlib.nsf/techdocs/A1387A29AC1C2AE087256C5200611780)

suzerain
Mar 5, 2003, 10:30 PM
Here you go. This is a single core chip. In fact, that's one of the main things that differentiates the 970 from the POWER4:

KingArthur
Mar 5, 2003, 10:35 PM
If the derivative of the Power5 had dual cores and SMT, Mac OS X could still take 8 of those things (remember, OS X can support 32 processors, so 8 chips appearing as 4 processors each is 32)! Just amazing to think of that much power behind an OS. And an OS that is that good at scheduling work across multiple processors.

Oh, and I still say that we are going to see one or two speed boosts of the G4 before a 970 (I am betting one). I wouldn't be surprised to see a G4 iBook coming soon, though. Probably the 14" one will have it to start. Apple is going to have to start revving up the lower laptop line if they plan on raising the stakes on the high-end line. I also don't think it would be a wise strategy for them to use three different processors at a time in brand-new machines. I bet we are going to see the retirement of the old iMac once and for all, a boost of the iBook to G4, and after that is all said and done, then we will see the 970. Apple has to tie up loose ends before making new ones ;)

KingArthur
Mar 5, 2003, 10:51 PM
Speaking of laptops... remember, although the 970 could consume less power than the G4, we must also factor in that this is JUST the processor. What about the bus? The RAM? Apple will have to work out a way to make those variable, like switching between 450MHz regular mode and double-pumped mode. I know that DDR supports a slow-down mode where it drops the double-data-rate feature (essentially making it our old friend, PC133). Battery life depends on a lot more than just the processor; hell, the LCD screen is the biggest power consumer. I say we need to just sit back, relax, and wait for a month or so. Then we may be able to speculate better. Hehe, I remember when there was just as much hype about a G5 last summer, and look at what great G5s we have! *looks around at a bunch of G4s* Anyway, my soap-box is done for the evening. Have a good one.

suzerain
Mar 5, 2003, 10:52 PM
I happened to be perusing the IBM PPC 970 PDF, and noticed that their very rough bar graph contains a 1GHz 970. So it provides an opportunity to compare clock for clock. I found some G4 numbers by trawling the Net.

The numbers look like this, for both processors at 1 Ghz:

SPECint2000
G4: 306
970: ~550

SPECfp2000
G4: 187
970: ~700

Well, I'll let you guys break out your calculators... but in my mind, that's one hell of a difference at the same clock speed.

Sounds to me like a 1.8GHz 970 ought to be the equivalent of about a 3GHz G4.

OK...sorry for the flurry of postings; I just get annoyed when so many people talk out of their ass when there's plenty of officially announced information around to start drawing a picture.
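Breaking out the calculator on those numbers (my own back-of-the-envelope check; the ~550/~700 figures for the 970 are eyeballed from IBM's bar graph, so treat the output as rough):

```python
# Clock-for-clock SPEC ratios at 1 GHz, scaled to a 1.8 GHz 970.
g4_int, g4_fp = 306, 187           # G4 @ 1 GHz (figures quoted above)
ppc970_int, ppc970_fp = 550, 700   # 970 @ 1 GHz (eyeballed from the graph)

int_ratio = ppc970_int / g4_int    # ~1.8x per clock on integer
fp_ratio = ppc970_fp / g4_fp       # ~3.7x per clock on floating point

clock = 1.8  # GHz
print(f"SPECint-equivalent G4: {clock * int_ratio:.1f} GHz")   # ~3.2
print(f"SPECfp-equivalent G4:  {clock * fp_ratio:.1f} GHz")    # ~6.7
```

Which matches the "about a 3GHz G4" integer estimate, and shows the floating-point gap is far larger still.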

bones
Mar 6, 2003, 01:11 AM
Originally posted by suzerain

Sounds to me like a 1.8GHz 970 ought to be the equivalent of about a 3GHz G4.


With a much faster bus.

I think you all are seriously underestimating how fast these 970 machines will be.

nuckinfutz
Mar 6, 2003, 02:03 AM
I'm not underestimating.

It all makes too much sense.

1. Motorola announced 85xx processors but no Altivec. Rumors surface that the G5 is cancelled.

2. Suddenly, IBM announces a new fab in East Fishkill.

3. IBM announces the PPC 970 at the Microprocessor Forum.

4. The SIMD unit is Altivec.

5. Rumors of a new processor interconnect surface from TGB; it's rumored to be called ApplePI.

6. Apple's CFO announces that they expect to increase revenue to 8 billion and market share to 5% (a 66% increase).

Well, it doesn't take Einstein to realize that Apple expects to have enticing HW coming.

The PowerMacs are not going to see another G4 rev. The 130nm G4s aren't due yet, and even then they would pale in comparison to even the lowliest PPC 970.

Hell, IBM is already booting code on the Power5; what makes someone rationalize that the PPC 970 won't be available until next year? This chip is "simple" compared to fabbing a Power4 or Power5.

The question is, does Apple go with dual processors? I guess yields will determine that. I'm an optimist when it comes to this. I mean, you can generally get a sense with Apple (if you've followed them for some time) of whether they are moving forward or standing still. They are definitely moving forward.

Speculation is fine, but it must be based on a shred of common sense.

suzerain
Mar 6, 2003, 02:10 AM
Originally posted by bones
With a much faster bus.

I think you all are seriously underestimating how fast these 970 machines will be.

I think, for me at least, I'm just erring on the side of pessimism, since I've been repeatedly let down on the speed front ever since Steve Jobs took over Apple. The year before, Power Computing had the fastest personal computer on the planet (even including PCs), as 604e clock speeds were higher than Pentiums'. Motorola was ready to release the G3 a FULL YEAR before Apple eventually did (that's why Apple killed the clone makers). If Motorola had been allowed to do that, the speed advantage over PCs would have been fantastic (for a while, at least).

I have been quite happy in the second Jobs era with product innovation, however; Apple has made great products in the past few years, and the software is incredible, but I wonder what kind of impact the cloning rift had on Motorola's priorities vis-a-vis processor development, since they were, themselves, a Mac clone maker. For its part, IBM was really reluctant to work with AltiVec, so they weren't much help, either...until now.

So, I'm a pessimist.

But, if I were to be an optimist for a second...

I said before that the 1.8 Ghz 970 would equal a 3.0 Ghz G4. That was based, obviously, on looking at the SPECint numbers.

The floating point numbers, though, look more like a 1.8 Ghz 970 is the equivalent of a 6.7 Ghz G4, plus it still has AltiVec. (!)

And ON TOP OF ALL THAT, as bones pointed out, these should have a bus speed that's 4-5 TIMES the speed of a current top of the line G4, plus faster RAM to go with it...

...Then you factor in Apple going further with Quartz Extreme, in terms of offloading graphics code to the GPU, thus freeing up more CPU power...

...and yes, as an optimist, it's easy to imagine a computer that can do real-world things like encode video into MPEG4 and burn onto DVD in real-time, or rip a CD into AAC in like 30 seconds, or achieve astronomical frame rates in Doom 3.

Plus, to everyone around here who keeps saying "the average user doesn't need 64 bit": all you need to do is wait a year, and programmers will invent reasons. In short, the 64 bit revolution will allow computers to just plain do things they can't today.

In 2-3 years, it will become common, for example, to have like 100 GB of RAM in your computer, thus enabling games to get truly photorealistic, or perhaps containing the entire OS in RAM, increasing speed of file access like 100 times.

Anyway...back to pessimistic mode. If Apple releases these machines, and they're 2x faster, I'm happy. Plus Motorola's got faster 7457s coming for laptops and iMacs.

More speed is only good.

KingArthur
Mar 6, 2003, 03:05 AM
lol. I used to keep an operating system in RAM. OS 8.6! I had 64MB of RAM, and made a bare-bones OS along with Norton SpeedDisk and saved it all in a folder. Then, whenever I wanted either a blazing fast response time or to optimize my hard drive, I would just make a RAM disk, put the contents of my folder into that RAM disk, go to start-up disk, select the RAM disk, and set the preferences to save the RAM disk to the HD when the computer shut down. I tell ya, I have never started up a computer so fast.

It would be nice to be able to do that again. Set it so that, upon power-up, everything in the system folder (all 1GB) would be read directly into RAM. Sure, this would take a while to load, but HDs are not that slow when just being read from. Think of it, then: if you did a software update, it would just change the stuff on the HD without any hindrance to the system. Just restart when you felt like it; or maybe the OS would keep everything in RAM except the things that changed, and on restart it would only have to read from the HD whatever changed. Restart time would be a matter of 10-20 seconds, since the only thing requiring time would be reading 1-10MB of data into RAM and processing the files. Think if all of the temporary internet files could be kept in RAM and written to the hard drive, so that if you have to restart, they can all be read back into RAM quickly.

Ok, so what if I am talking about the use of over a GB of RAM just for the OS. Hell, if there were a tower that could hold that much RAM and an OS that could do that, I would jump on the bandwagon as soon as I had the money.

Anyway, back to the present. Yes, I think that people are being over-optimistic, and no, I don't think that we are going to see 970s yet. Remember, Apple has to be as economical as possible, and having a line-up of three different processors is just unrealistic right now. With their fingers in too many pies, either you go the way of IBM, who would be harder to uproot than Microsoft, or you go the way of the failed companies, and I don't think there are many people at all who would want to see Apple go the latter way.

The reason Motorola canceled or at least delayed the G5 is because, quite frankly, they are having some bad economic times. The G4 is where the money is for them right now, and they don't really need a 64-bit processor with a bunch of bells and whistles when a 32-bit one with a little more speed will work just fine.

If you also read rumors and more legit places, Apple is investing a lot more in the software division right now than the hardware. They are trying to make it appealing for people to switch. You can have all the hardware you want, but without a good selection of good software, no one in their right mind is going to buy your product. Right now, I would expect to see a lot more iApps coming out of Apple, and maybe some more educational software. Also, I would expect a lot more focus on speech recognition and graphics rendering because they can't afford to lose any more of the graphics crowd.

I am highly pessimistic as of late b/c my best friend and I are having problems, so I guess I may not be thinking as optimistic as I should.

This is the best thing I have heard all day, though, so I figure, what the h e l l, might as well post it somewhere:

" and maybe that's why Billy is strangling his mother, because of sentences just like this one, which have no discernible goals or perspicuous purpose and just end up anywhere, even in mid "

suzerain
Mar 6, 2003, 03:27 AM
...as I was searching around the Net tonight for speed comparisons, I decided to compare IBM's RC5 cracking claims against current G4s.

OK, two things jumped out at me from the speed comparison on distributed.net's site (http://n0cgi.distributed.net/speed/query.cgi?cputype=all&arch=2&contest=all):

(1) On the topic of the PPC 970 vs. the G4, RC5 code cracking is something that's handled by AltiVec, the way the software's written. It appears the performance vis-a-vis the SIMD unit will scale linearly, according to IBM's claims:

970 1.8 Ghz: 18 million keys/sec
G4 1.6 Ghz: 17 million keys/sec
G4 1.25 Ghz: 13 million keys/sec.

This tells me that the SIMD unit is essentially exactly the same, and not significantly improved in any particular way over the one in the G4.

Note that the POWER4 chip at 1.3 Ghz only manages 8 million keys/sec. AltiVec really makes a difference for matrix math. The Pentium at 3.14 Ghz manages a respectable 12 million...at least there's one place the PowerPC still reigns supreme.
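Those figures can be sanity-checked with a few lines of arithmetic: if the SIMD unit really is unchanged, keys/sec divided by clock should come out roughly flat across the chips. A quick Python sketch, using only the numbers quoted above (the labels and variable names are mine, just for illustration):

```python
# Keys/sec per MHz for the RC5 figures quoted in this thread.
# If AltiVec throughput scales linearly with clock speed, these
# should all land in the same neighborhood (~10,000 keys/sec/MHz).

chips = {
    "PPC 970 @ 1800 MHz": (18_000_000, 1800),
    "G4 @ 1600 MHz":      (17_000_000, 1600),
    "G4 @ 1250 MHz":      (13_000_000, 1250),
}

for name, (keys_per_sec, mhz) in chips.items():
    print(f"{name}: {keys_per_sec / mhz:,.0f} keys/sec per MHz")
```

All three land within a few percent of 10,000 keys/sec per MHz, which is what suggests the SIMD unit itself hasn't changed.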

(2) Gee, you think it's worthwhile to note that distributed.net...ahem...has a speed rating for a 1.6Ghz G4 (http://n0cgi.distributed.net/speed/query.cgi?cputype=99&cpumhz=1600&recordid=1&contest=all&multi=0)?!

I don't know who reported it, or what, but the machine was running OS X 10.1.

Hmmm...

nuckinfutz
Mar 6, 2003, 03:59 AM
Yes, I think that people are being over-optimistic, and no, I don't think that we are going to see 970s yet. Remember, Apple has to be as economical as possible, and having a line-up of three different processors is just unrealistic right now.

Define "yet" that is very vague. No one expect Apple to announce PPC 970 based systems next week but it's very plausible that a late summer release can happen. The number of processor lines doesn't matter as long as they all can run in the same motherboard. That's why apple has standardized on the UMA Motherboards. G3/G4's are well supported. So in essence you're only adding new Mobo tech with the 970.

The reason Motorola canceled or at least delayed the G5 is b/c quite frankly, they are having some bad economical times.

You think so? I tend to think Apple chose the PPC 970 over a year ago, causing Motorola to scuttle any G5 desktop development. This isn't the first time it has happened. Apple drop-kicked Motorola and went to the IBM PowerPC 601 chip after Moto sloughed off on the 880xx series chips.

This tells me that the SIMD unit is essentially exactly the same, and not significantly improved in any particular way over the one in the G4.

I agree that the SIMD unit is the same. However I think that Altivec will have a noticeable speed increase in the PPC 970 as currently Altivec is "starved" by the current Memory Bus.

Booga
Mar 6, 2003, 04:07 AM
Originally posted by ffakr
What the hell is this writer smoking? P4 and G4 have 38bit memory addressing (athlons probably do also). You can buy Macs with 2GB of ram now and PC servers (32bit) with 6GB of ram. Macs could support more than 2GB now if Apple wanted to.
How much memory will a consumer need in a few years? I have 768 MB and I consider myself a poweruser (ie. someone running sendmail, apache, mysql, and at least a dozen apps at any given time). I NEVER run out of memory. I have the distinct feeling that if I had 2GB, I'd be doing just fine with Office XI and Safari 3.0.
Consumers aren't going to need 64bit memory addressing in a few years.

I happen to agree with the writer. 1GB is becoming very commonplace, and 1.5-2.0GB "Power User" levels. If you work with Photoshop, 4GB isn't completely unreasonable, and server machines that max out at 4GB are becoming "low end". In addition, the "38 bit memory addressing" may be there as a hack, but if you can't fit an address in a register or single OP, you're going to slow some things down or limit processes in various ways. And if you're talking about lots of A/V and home movie making, the more the merrier. With RAM fab techniques continuing to make more RAM cheaper, I think the bet is a good one. Currently AMD and IBM are betting home users will want more, while Intel is the holdout, agreeing with you that it won't affect sales until 2008-9.

Originally posted by ffakr
it doesn't work this way. 64bit chips don't pack registers like vector units pack registers.


Actually, that's not really true either. If you can fill an 8 byte general purpose register with image data fast, you can do a lot of operations on it fast. Altivec can do this, obviously, but is somewhat limited in its available operations. I suspect once Photoshop is 64-bit-ized, you'll see acceleration in many of the operations that are not Altivec-able. In essence, it doubles the ultra-high-speed "working space" that an image filter programmer has to work with.

Of course, the memory architecture of the 970 is where Macintosh Photoshop users will really drool, and probably one of the primary reasons it's currently so much faster on a Wintel machine.

Which brings me to the more interesting point. A lot of people are comparing it against the G3 and G4, which are pretty ancient technology. The real bonus will be that the 970 will be in the same performance ballpark as the x86 world. According to current estimates, it will just about match the offerings from AMD and Intel-- the first time a PowerPC has done this in years, marketing demos notwithstanding. (Although AMD's original roadmap would have left the 970 behind, IBM seems to have gotten lucky with the current delays at AMD, and if they don't suffer delays themselves should be able to keep up.)

ktlx
Mar 6, 2003, 06:33 AM
Originally posted by Booga
Actually, that's not really true either. If you can fill an 8 byte general purpose register with image data fast, you can do a lot of operations on it fast. Altivec can do this, obviously, but is somewhat limited in its available operations. I suspect once Photoshop is 64-bit-ized, you'll see acceleration in many of the operations that are not Altivec-able. In essence, it doubles the ultra-high-speed "working space" that an image filter programmer has to work with.

ffakr is correct in this and you are confusing what AltiVec and general purpose registers are used for. AltiVec is a SIMD unit and by definition is designed to perform the same operation on multiple pieces of data without interference. A general purpose register is not designed this way. That is why things like AltiVec and SSE were invented in the first place.

But if you insist, do not take either of our words for it. Please pick up one of the PDFs floating around that define the operations available on the general purpose 64-bit registers. Then take two 32-bit numbers or two pairs of 32-bit numbers and try to perform the operations in parallel as a single 64-bit register or two 64-bit registers as the PDF describes the instruction set.

You will find that operations such as compares, rotates, multiplies and divides do not work anywhere close to correctly. You will find operations like adds, subtracts and shifts require several operations to extract the correct values (basically a bunch of post processing). The only instructions I have seen work correctly in parallel are bit-wise ands, ors and exclusive ors.

Except for the bit-wise operations, it is faster to serialize the 32-bit operations than to try to fake a SIMD unit with general purpose registers.

KingArthur
Mar 6, 2003, 07:21 AM
Ok, there seems to be a little confusion in what I said earlier. When I made the example of the 64-bit processor, I knew that it doesn't operate scalably like the AltiVec vector processing unit. AltiVec can do 4x32-bit, 8x16-bit, etc. I was merely trying to say that along with the new 64-bit code comes possibilities that most programmers are probably not even aware of. Who knows how they can use the 64-bit processor for most programs. Maybe they can find ways of speeding up certain functions that usually require more than one pass through the processor to complete. I am just saying that a 64-bit processor leaves a lot to be optimized, and we don't know yet how programs are going to utilize it.

Also, I still don't believe that Apple is going to switch to the 970 until they get rid of the G3. It isn't economical! Hell, we all know how Apple tries to keep inventory really tight to minimize loss. That isn't to say they won't be selling the already-built iBooks; it is to say that I think they will put a G4 into the iBook before they go to the 970. Everything after that involving the G3 will probably just be Apple depleting its stock of G3s. True, the iBook looks to be one of the next things to be updated since it, the eMac, and the iPod are the only things that are not "new" according to the Apple Store. We may see the new G4 iBook soon, but then again, if we don't, I wouldn't expect to see the 970 for quite a while longer. It is like the groundhog seeing his shadow: if we don't see the G4, then we know it is another six weeks of winter;) (figuratively speaking, for those who try to take that literally).

Also, do you keep up on the stock quotes for Moto? Well, I do (well, I did until I stopped getting the paper about a month ago), and Moto's stock had been plummeting well before the G5 fiasco. Right now, they are more concerned with keeping costs down and riding out the recession than spending the money to research a chip for a small company (Apple). They probably decided that there was little use for them to use the chip, and since they are their own biggest buyer, Moto probably decided to scrap the idea until the need arises for a new chip. Remember, everything said here is purely speculative on my part, and the same goes for anybody else's, unless you actually work for Apple's R&D or sales depts.

Anyway, I have only one question left: Is the SIMD processor in the 970 128-bit or 256-bit? A 256-bit AltiVec engine would definitely be nice, but I don't know if code would have to be rewritten to fully utilize it.

Rincewind42
Mar 6, 2003, 08:04 AM
Originally posted by KingArthur
Also, I still don't believe that Apple is going to switch to the 970 until they get rid of the G3. It isn't economical!

You're missing the point. All indications show that the PPC970 will be a cheaper part than the PPC74xx. Why? 1) The .13 process makes chips cheaper to produce (and IBM is working on other process improvements to drive this down further). 2) No L3 cache; those things are expensive as hell, easily costing as much as the CPU itself. Yes, Apple will have to design the motherboard around the new chip, and we will see other components go up in price (DDR400 RAM is around 50% more expensive than DDR333, but that only comes out to about $20 per 256MB), but overall the systems will be of similar cost, if not cheaper, than a G4 system. Should Apple move to the PPC970, it would be expected that G3 systems would have already been retired, either before or with the update. What is really up for debate is whether G4 systems would be retired as well =).

Anyway, I have only one question left: Is the SIMD processor in the 970 128-bit or 256-bit? A 256-bit AltiVec engine would definately be nice, but I don't know if code would have to be rewritten to fully utilize it.

The AltiVec engine on the PPC970 is 128 bits. It would be possible to enlarge it to 256, but this would require a compatibility mode similar to that of 32-bit vs 64-bit mode, and would require code to be written specifically for it.

nuckinfutz
Mar 6, 2003, 08:28 AM
Also, I still don't believe that Apple is going to switch to the 970 until they get rid of the G3. It isn't economical! Hell, we all know how Apple tries to keep invitory really tight to minimize loss. That isn't to say that they won't be selling the already-built iBooks, it is to say that I think they will put a G4 into the iBook before they go to the 970

Yes normally I'd agree with you but

1. Apple is not going to shoehorn a .18um G4 into an iBook

2. As you can see below the .13um G4's don't ship until Q4 and they're really not cheap.

Apple knows there is pent-up demand for a fast Power Mac. Those purchases are where Apple makes its money; they have the highest margin in the lineup. I can assure you it will be in Apple's economic interest to ship fast 970 systems as soon as possible. They cannot wait for faster G4's.


Motorola's PPC 7457 .13um G4 Press Release (http://finance.lycos.com/home/news/story.asp?story=31437784)

Pricing and Availability
Alpha samples of the MPC7457 and MPC7447 PowerPC processors are available
today to selected customers. General market sampling is planned for March,
with production expected to commence in Q4 2003. Suggested retail pricing for
the MPC7457 at 1 GHz is expected to be $189 (USD) in quantities of 10,000.

ftaok
Mar 6, 2003, 08:52 AM
Originally posted by nuckinfutz
snip
2. As you can see below the .13um G4's don't ship until Q4 and they're really not cheap.

snip

Motorola's PPC 7457 .13um G4 Press Release (http://finance.lycos.com/home/news/story.asp?story=31437784)

At $189 per 10,000, the 7457s are a whole lot cheaper than the 7455s. I think the 7455s were priced around $300 per 10,000.

Also, the 7447's will be priced very cheaply as well. They could easily put these in the iBooks and iMacs.

Hattig
Mar 6, 2003, 10:09 AM
Where would I like to see Apple's product range this time next year?

Portable

Ultra-lite (12" PB, new products): G4 7457 @ 1.2 GHz (maybe 1.4 GHz)

iBook: G4 7457 @ 1.0 GHz (maybe 1.2 GHz)

Powerbook: 1.2 GHz IBM 970 (maybe 1.4 GHz)

Consumer Desktop

iMac: 1.2 GHz IBM 970 to 1.6 GHz IBM 970

Power Desktop

PowerMac: 1.8 GHz IBM 970 to 2.5 GHz IBM 970

I don't believe that the PowerMacs with the IBM processors will be dual, after looking at the SPEC scores - especially if the processor is released up to 2.5 GHz. Maybe if it is only available at 1.8 GHz a dual would be necessary... but ...?

Power Workstation

"PowerMacPro": Dual IBM 970's (1.8 GHz to 2.5 GHz)

As you can see, I expect Apple to fully drop the G3 and G4 throughout their entire product range, except for the very low power 7457 in cheap and low-power systems. On the other hand, maybe IBM can make a low-voltage 1.0GHz PPC 970 that uses 10W ... who knows?

ffakr
Mar 6, 2003, 10:23 AM
Originally posted by ktlx
ffakr is correct in this

thanks ktlx. Maybe I can explain this a bit more clearly. Let's look at a couple of examples.

I'm going to shorten the size of the numbers to make this easier.. but the example still holds.

imagine a 4-bit processor. 4-bit unsigned integers cover all numbers in the set [0...15]; in binary, [0000...1111].

if you add two 4 bit unsigned ints.. you get...
2 + 2 = 4
or
0010 + 0010 = 0100 = 4

If you add two bigger numbers you get...
10+10 = 4 (it rolls over: it counts up to 15, wraps to 0, then on up to 4)
or, in binary...
1010+1010 = 0100
This behavior is predictable.

Now imagine an 8-bit processor [00000000 ... 11111111].
pack it with two operations like the above...
1010 & 1010 + 1010 & 1010
(think of this as two 10+10 operations)
is the same as...
170 +170
which equals 340
which equals 01010100 in binary (again, it rolls over...)
if you break that up into two 4 bit numbers you get
0101 and 0100
or 5 and 4.
So... if you pack registers in a regular integer unit... you get 10+10 = 5 and 10+10 = 4.

it just doesn't work right. AltiVec, OTOH, is designed so that when you put 16 8-bit integers into it, each section behaves properly. That's what makes it a vector engine and not a big, wide FP or integer unit.

Hope this helps.
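If you want to play with this yourself, here's a minimal Python sketch of the same example. The function names are mine, invented for illustration; they just simulate an 8-bit register holding two packed 4-bit lanes versus a lane-aware vector-style add:

```python
# Simulation of the example above: packing two 4-bit values into one
# 8-bit register and adding, versus a SIMD-style add that keeps the
# lanes independent (the way AltiVec treats its vector elements).

def packed_add_8bit(a_hi, a_lo, b_hi, b_lo):
    """Add two 8-bit registers, each holding two packed 4-bit lanes."""
    a = (a_hi << 4) | a_lo
    b = (b_hi << 4) | b_lo
    total = (a + b) & 0xFF          # one 8-bit add: carries cross the lane boundary
    return (total >> 4) & 0xF, total & 0xF

def simd_add_4bit(a_hi, a_lo, b_hi, b_lo):
    """Lane-wise add: each 4-bit lane wraps independently."""
    return (a_hi + b_hi) & 0xF, (a_lo + b_lo) & 0xF

# 10 + 10 in each lane:
print(packed_add_8bit(10, 10, 10, 10))  # (5, 4): the low lane's carry leaked into the high lane
print(simd_add_4bit(10, 10, 10, 10))    # (4, 4): correct modulo arithmetic in both lanes
```

The packed version reproduces the wrong 10+10 = 5 in the upper half, which is exactly why general-purpose registers can't substitute for a real vector unit.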

Originally posted by Booga
I happen to agree with the writer. 1GB is becoming very commonplace, and 1.5-2.0GB "Power User" levels. If you work with Photoshop, 4GB isn't completely unreasonable, and server machines that max out at 4GB are becoming "low end". In addition, the "38 bit memory addressing" may be there as a hack, but if you can't fit an address in a register or single OP, you're going to slow some things down or limit processes in various ways...
I really have to disagree here. 1GB isn't commonplace right now. Dell still sells base machines with 128MB (as do many others). Most average users don't get machines with more than 256MB of ram. I'm an exception with 768, as are video and graphic professionals.
The article in question didn't say that certain professionals need more than 2GB of ram... it said average people will need more within a few years and I simply don't see this. Consider these arguments...

RAM prices continue to fall, but this is (to some extent) offset by the introduction of new, higher-priced RAM standards. 2GB of DDR333 [Micron] is only $228 for 4 DIMMs now: the best price at Pricewatch for Micron DDR, and MUCH less than what a manufacturer would charge an average user for a boxed machine. RAM prices have fallen a tremendous amount over the years, but this has leveled off to some extent. A year ago, you could probably have found a gig of SDR for this much. A good decrease, but it seems this has been slowing. There is a minimum amount that must be charged for one stick of memory in order for people to make money on its production, and there are technological limits to how much memory can fit on a stick. Right now, 4GB in 4 DIMMs would cost $943 at Pricewatch. Even if this fell in half every year, it'd take a long time before the average user felt compelled to put 2, 4, or more GB of RAM in a machine. At the prices retailers mark up memory, getting a Dell with 2GB a year from now would probably still mean the memory cost as much as the rest of the computer.

38-bit addressing isn't a hack; it is the width of the physical address in current chips. 32-bit addresses can directly access 4GB of RAM; 38-bit addresses can natively reach over 270GB. If Apple wanted to, they could design G4 systems that natively address hundreds of GB of RAM. As an example of this capability, I pointed out that some x86 systems already allow users to install and address over 4GB of RAM (currently some Xeon servers). Does anyone expect the average user to really need more than 4GB of RAM, let alone 270GB, in the next 3 years? Let's not get silly here.

The article I referred to clearly indicated that the author felt the average user would require 64-bit memory addressing (beyond current limits) within 3 years. I stated that I thought this was silly, and I have to stand by it. My mother, my nieces and nephews, the secretaries I support: these are average users. They won't need 4GB of RAM in 3 years... any more than they need 1GB now.
This isn't to say that there are sectors that need huge memory addressing... but the article didn't address these users, just the average joes. The article was poorly thought out.
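The addressing arithmetic behind those figures is easy to check. A short Python sketch (the helper name is mine; decimal GB is where the "over 270GB" figure for 38 bits comes from):

```python
# How much memory an N-bit physical address can reach.
# 2**32 bytes is ~4.3 GB (decimal); 2**38 bytes is ~275 GB,
# which matches the "over 270GB" figure quoted above.

def addressable_gb(bits, decimal=True):
    """Bytes reachable with an N-bit address, in decimal GB or binary GiB."""
    divisor = 10**9 if decimal else 2**30
    return (2**bits) / divisor

for bits in (32, 36, 38, 64):
    print(f"{bits}-bit: {addressable_gb(bits):,.1f} GB")
```

Note that in binary GiB, 2**38 bytes is exactly 256 GiB; the 270+ number comes from using decimal gigabytes.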

Hattig
Mar 6, 2003, 11:55 AM
Originally posted by ffakr
38bit addressing isn't a hack.. it is the size of the memory registers in current chips. 32bit registers can directly access 4GB of ram. 38bit registers can natively address over 270GB of ram. If Apple wanted to, they could design G4 systems that could natively address hundreds of GB of ram. As an example of this capability, I pointed out that some x86 systems already allow users to install and address over 4GB of ram (currently some Xeon Servers). Does anyone expect the average user to really need more than 4GB of ram, let alone 270GB of ram in the next 3 years? Let's not get silly here.

The article I refered to clearly indicated that the author felt that the average user would require 64bit memory addressing (over current limits) within 3 years. I stated that i thought this was silly and I have to stand by this. My mother, my nieces and nephews, the secretaries I support... these are average users. They won't need 4 GB of ram in 3 years... any more than they need 1GB now.
This isn't to say that there are sectors that need huge memory addressing... but the article didn't address these users, just the average joes. The article was poorly thought out.

36-bit addressing (Intel) is a hack. It is little better than bank-switching on 8-bit machines.

Pointers are still 32-bit (and this is the same for the G4 even with 38-bit addressing), so you don't have a flat 36 (or 38) bit address space, but one that you can only see 4GB of at the most. Most applications that use PAE (36-bit addressing) use the extra memory as a cache - e.g., SQL Server.

Consider that the OS likes to have some of the memory set aside, and you find that your maximum process size is 2, or 3 GB.

Today, this isn't an issue.

In two years time this may be an issue, when 1GB DIMMs are affordable, and 2GB not out of the price-range of the enthusiast.

Intel want to go 64-bit in 2008 for the consumer. That is too late in my opinion. The consumer will already have been enticed by the marketing talk of AMD and Apple for 4 years by then, regarding 64-bits - and will have purchased because they desired it, not out of any particular need.

64-bit integers are great for certain applications, particularly encryption and decryption like SSL.

64-bit addressing is in demand on workstations already for content creation applications. I couldn't imagine myself using more than 2 GB today, although I can see a need for 1 GB. Was it only 5 years ago that me and my mates were ahhhing over 128MB that someone had in their machine?

Maybe in 5 years time he will have 16 GB, and be working on images or raw video with 64 or 128-bit per pixel colour depth... or running several apps at the same time that all require a lot of memory, as you edit multiple images on your 2 OLED 2560x2048 128-bit displays ...

noverflow
Mar 6, 2003, 11:55 AM
Originally posted by suzerain
...970 1.8 Ghz: 18 million keys/sec
G4 1.6 Ghz: 17 million keys/sec
G4 1.25 Ghz: 13 million keys/sec.

(2) Gee, you think it's worthwhile to note that distributed.net...ahem...has a speed rating for a 1.6Ghz G4 (http://n0cgi.distributed.net/speed/query.cgi?cputype=99&cpumhz=1600&recordid=1&contest=all&multi=0)?!

I don't know who reported it, or what, but the machine was running OS X 10.1.

Hmmm...



they are talking about a dual 800... the 1.25 is a single processor... the dual 1GHz QS were doing 21M keys/sec... check out this page


http://www.xlr8yourmac.com/archives/jan02/013002.html

ffakr
Mar 6, 2003, 12:05 PM
Originally posted by Hattig
Pointers are still 32-bit (and this is the same for the G4 even with 38-bit addressing), so you don't have a flat 36 (or 38) bit address space

But this is a limitation of the development language, not the hardware... isn't it?
Maybe in 5 years time he will have 16 GB, and be working on images or raw video with 64 or 128-bit per pixel colour depth... or running several apps at the same time that all require a lot of memory, as you edit multiple images on your 2 OLED 2560x2048 128-bit displays ...
I doubt any average user (and again, the article in question mentions average users) will be editing images or video with 64-bit or 128-bit color depths in a few years (again, the article said 3 [a few] years). Where is the real advantage in working on 128-bit color video? Can your eyes see the difference? Can your monitor or TV display at that depth? Maybe if you absolutely need a perfectly smooth fade of one color on an HDTV while still having a full palette... then maybe you'd need a bazillion-color palette.
Personally, I'm pretty hard pressed to see the difference between 24bit and 32bit color. Do you need 3.4 x 10^38 colors?

noverflow
Mar 6, 2003, 12:12 PM
I agree... if we can't see 64-bit color... why would we want to edit in it?

but on a diff note... I have 1.5GB of RAM, and I do run out when I'm doing big stuff in Photoshop. 20 by 30 inches at 266 dpi with 5 layers takes about 1.6GB of RAM, and because I have 1.5, my swap goes crazy and I can't even get the doc till the swap is cleared. And I would like to do bigger, more complex things... I would like at least the option to have 4GB of RAM.

ddtlm
Mar 6, 2003, 12:26 PM
KingArthur:

Ok, there seems to be a little confusion in what I said earlier. When I made the example of the 64-bit processor, I knew that it doesn't operate scalable like the AltiVec vector processing unit. The altivec can do 4-32bits, 8-16bits, ect. ... Maybe they can find ways of speeding up certain functions that usually require more than one pass through the processor to complete.
No no no, you're still saying exactly the same thing as ktlx very eloquently told you was wrong. 64 bits allows certain things to be sped up, such as huge bitwise ops, and working with integers too large to store in 32 bits. Everything else is a vector op, seems to me, and we have 128-bit AltiVec for that.

I was merely trying to say that along with the new 64-bit code comes possibilities that most programmers are probably not even aware of.
64-bit machines have been out since 1993 or so.

Rincewind42:

All indications show that the PPC970 will be a cheaper part than the PPC74xx.
Hardly. Even on 130nm the PPC-970 is larger than a 180nm 7455, and it uses a more complex and therefore more expensive FSB. Don't forget that the more complex FSB also means a more complex system controller, and since CPUs cannot share FSBs, dual-CPU or greater machines will require more complex motherboards (multiple FSBs routed on them) and system controllers with multiple FSB ports. Eek.

No L3 cache, those things are expensive as hell, easily costing as much as the CPU itself
The off-die L3 should not be that expensive. It is very unremarkable stuff.

nuckinfutz:

Apple is not going to shoehorn a .18um G4 into an iBook
Except for the 12" TiBook. :)

ftaok:

At $189 per 10,000, the 7457's are a whole lot cheaper than the 7455's. I think the 7455's were priced around $300 per 10,000.
The price depends on the clockspeeds, which you didn't mention for either chip.

ffakr, ktlx:

Boy am I glad to see that you two are here shooting down these endless "64 bit is twice as fast" posts. I'm just getting frustrated by them all... shoot down one, and another pops on tomorrow in another thread... and it may be written by the same guy. Just imagine what will happen when Apple's PR starts to roll, it's gona be like D-Day holding off the RDF-ed invaders (except this time the good guys are the defenders of course).

Hattig:

Intel want to go 64-bit in 2008 for the consumer. That is too late in my opinion. The consumer will already have been enticed by the marketing talk of AMD and Apple for 4 years by then, regarding 64-bits - and will have purchased because they desired it, not out of any particular need.
Right now Intel is just trying to discredit AMD and IBM while trying to save their Itanium, but I bet they'll have a 64-bit consumer chip ready to go long before 2008... they'll just disable the 64-bit support (like they did with Hyperthreading) until the time is right.

64-bit integers are great for certain applications, particularly encryption and decryption like SSL.
But is AltiVec better?

Maybe in 5 years time he will have 16 GB, and be working on images or raw video with 64 or 128-bit per pixel colour depth... or running several apps at the same time that all require a lot of memory, as you edit multiple images on your 2 OLED 2560x2048 128-bit displays ..
You know, some new video cards (GF-FX) are doing 16-bit and higher FP per color already. I expect that idea will spread to software some day.

Kid Red
Mar 6, 2003, 12:32 PM
Originally posted by ddtlm
KingArthur:

Hardly. Even on 130nm the PPC 970 is larger than a 180nm 7455, and it uses a more complex and therefore more expensive FSB. Don't forget that the more complex FSB also means a more complex system controller, and don't forget that since CPUs cannot share FSBs, dual-CPU or greater machines will require more complex motherboards (multiple FSBs routed on them) and system controllers with multiple FSB ports. Eek.


The off-die L3 should not be that expensive. It is very unremarkable stuff.

Well, I have heard the 970 may be cheaper too. Not sure if it's IBM, or that more people are going to be using the chip, or that IBM is making the chip anyway, or something. The tech on the 970 doesn't facilitate its price, from what I read somewhere.

ddtlm
Mar 6, 2003, 01:01 PM
Kid Red:

Well, I have heard the 970 may be cheaper too.
Yeah, probably multiple times on this thread alone. Unfortunately that does not settle the issue of which chip is actually cheaper.

The tech on the 970 doesn't facilitate its price, from what I read somewhere.
I don't know what you mean by this.

ftaok
Mar 6, 2003, 01:15 PM
Originally posted by ddtlm
ftaok:


The price depends on the clockspeeds, which you didn't mention for either chip.

My bad. In both cases, the prices were for the 1ghz chip. I'm not sure how much the 1ghz 7455 costs these days. That was back in January 2002.

reyesmac
Mar 6, 2003, 01:17 PM
Steve said this was the year of the laptop, so whatever PowerMac comes out, it probably won't be the fastest computer out there, just a speedbump. We won't be crossing the 2-gig barrier this year.

ffakr
Mar 6, 2003, 01:32 PM
Originally posted by reyesmac
Steve said this was the year of the laptop, so whatever PowerMac comes out, it probably won't be the fastest computer out there, just a speedbump. We won't be crossing the 2-gig barrier this year.

Steve won't give us ANY hint of what's coming out on the desktop.
It was the "Year of the Laptop" because they started taking orders on the 17" Powerbook.

When the 64bit xServes arrive, it will be the year of the internet app... or some such thing.

When the 970 based desktops arrive, it will be the year of the workstation or some such catch phrase.

He could have had a warehouse full of 3GHz 970s and he would have only mentioned laptops if that's all they were ready to announce.

ffakr
Mar 6, 2003, 01:39 PM
I had a thought a while back...

IBM has announced Blades with 970. They demo next week. They will likely ship in the not too distant future.

Apple has the xServe and the xRaid. They are the first step toward Apple entering the enterprise. Unfortunately, they fit a very specific need. They are small-form, full-featured servers with an impressive amount of internal storage. They aren't designed to be big number crunchers; they aren't designed to be 'computationally dense', to possibly coin a phrase.

Apple is, however, courting sectors that require low cost, dense computational power... film, life sciences...

What is to stop Apple from releasing OS X or OS X Server (or OS X Cluster) on the IBM Blade servers???????
The instruction set is the same. most of the underlying architecture is the same (just need drivers for the chipsets).
Apple automatically gets blade servers overnight. They are also designed for specific applications that the xServe is not well suited for, so they don't cannibalize the market; they expand presence in enterprise server rooms. Best of all, all the current software still works. You don't screw developers over by moving to x86, yet you still manage to expand your client base.

It'd be a brilliant move.

what cha all think?

ddtlm
Mar 6, 2003, 01:45 PM
ffakr:

What is to stop Apple from releasing OS X or OS X Server (or OS X Cluster) on the IBM Blade servers?
Well, IBM would have to play along if Apple intends to rebadge the servers. I assume IBM would be making a nice chunk of change, and that Apple would then have somewhat thinner margins than what they are used to. Still, it would be a cheap way for Apple to invade a new market, and I'd say it sounds like a good idea.

suzerain
Mar 6, 2003, 02:43 PM
Originally posted by noverflow
they are talking about a dual 800... the 1.25 is a single processor... the dual 1GHz QS were doing 21M/s.... check out this page


http://www.xlr8yourmac.com/archives/jan02/013002.html


Ummm...did you even go to the site? According to their FAQ, they split the multiprocessor setups onto a separate page, which is where your xlr8yourmac numbers come from.

The speed I quoted is from a page ostensibly listing single processor speeds. Now, it's entirely possible someone submitted a bogus entry, I suppose, but it wasn't from the duals page.

Just go here (http://n0cgi.distributed.net/speed/) and you'll see that.

MacRETARD
Mar 6, 2003, 03:24 PM
Originally posted by bones
With a much faster bus.

I think you all are seriously under-estimating how fast these 970 machines will be.

The 970 SPEC scores currently are about the same as a 3GHz P4's. That's a pretty standard benchmark. Also, the P4 and Intel will have an 800MHz bus with dual 400MHz DDR, or faster Rambus memory, within a few months. Next on the Intel roadmap is a 1000 and 1200MHz bus with an improved P4 core and improved HT.

Yeah, it will be fast and competitive, but it's not going to be an x86 killer.

beatle888
Mar 6, 2003, 03:34 PM
Originally posted by ffakr
Where is the real advantage in working on 128 bit color video?


the future, maybe there are possibilities that we will only see when we get 128 bit.

Dont Hurt Me
Mar 6, 2003, 03:47 PM
Just want to say it again: as long as Apple keeps dicking around with the stagnating G4, they are going to lose more and more marketshare. I will not buy another G4 Mac. I may upgrade my Mac with a faster G4, but will never buy a new Mac with this CPU. Apple had better wake up and replace that Motorola laggard. The 970 is the answer to all of their hardware problems; without it, it's just more of the same old crap. I still think that all that xServe architecture is prepping those machines for the 970.

Rincewind42
Mar 6, 2003, 04:13 PM
Originally posted by ffakr
But this is a limitation of the development language, not the hardware... isn't it?

It is a limitation of the hardware. A machine's pointer size is equal to the size of the registers used by the integer unit, so on a 32-bit platform a pointer is always going to be 32 bits. Now, using chip- and OS-specific techniques you can emulate larger memory areas in software. This is how DOS used to address more than 64k of memory, and why it uses a segmented memory architecture (and additionally how it broke past the 1MB barrier...). Using a segmented architecture the latest x86 can address 36 bits' worth of memory, but not in one application without OS hooks.

Personally, I'm pretty hard pressed to see the difference between 24bit and 32bit color. Do you need 3.4 x 10^38 colors?

That would be because there isn't one =). 24-bit and 32-bit both address 16 million colors, but 32-bit is typically used because it gives better memory access characteristics.

That said, there is a place for greater than 32-bit color, however it isn't very useful on the desktop yet.

Yeah, it will be fast and competitive, but it's not going to be an x86 killer.

One to tie, two to kill =). That, and with Apple able to say that they have the same chip in desktops & laptops (which Intel won't even do anymore) I can see the old PowerBook G3 commercials coming back in new form.

Another data point that may be interesting to pro users: while a 1.8GHz 970 uses 42 watts, two 1.2GHz 970s would only use 38...

nuckinfutz
Mar 6, 2003, 04:15 PM
The 970 SPEC scores currently are about the same as a 3GHz P4's. That's a pretty standard benchmark. Also, the P4 and Intel will have an 800MHz bus with dual 400MHz DDR, or faster Rambus memory, within a few months. Next on the Intel roadmap is a 1000 and 1200MHz bus with an improved P4 core and improved HT.

Yeah, it will be fast and competitive, but it's not going to be an x86 killer.

Those are "Estimated" SPEC scores, by the way.

The PPC 970 will support up to a 900MHz FSB and 6.4GB/s of bandwidth.

It can support the same DDR AND run in SMP configurations. If Apple had the balls to create a dual 1.8GHz 970 system, then yes, it would be faster overall than whatever Intel has.

Intel's Hyperthreading "needs" improvement. Currently it is not very efficient. I expect IBM's implementation to be superior in the POWER5.

Needless to say, if Apple is aggressive, Mac users won't be having Intel envy for a while. That's something to look forward to.

ffakr
Mar 6, 2003, 04:17 PM
Originally posted by beatle888
the future, maybe there are possibilities that we will only see when we get 128 bit.
My point was actually that the human eye is not acute enough to discern a large palette of colors.

The JPEG format can store images that contain up to 16.7 million colors (termed as "24-bit" or "true-color" since the human eye can not differentiate between 2 colors that are next to each other in a "true-color" spectrum).
... from... http://dp3.lib.ndsu.nodak.edu/~nem/archive/

64-bit color depths would allow for a palette of 1.844 x 10^19 colors (that's 18,440,000,000,000,000,000 colors), while so-called 'True Color' provides slightly more than 16,000,000 colors.

Even on the 'OLED 2560x2048' displays... there are only 5,242,880 pixels. You would need 3,518,437,208,884 of those monitors to display every color available. If you were playing 30 frame/sec video of smooth color blends... your video would play for 3,719 YEARS before it was able to display all of those colors.
If you had a 128 bit color panel, it would take the same movie 6.860277220920572e+22 years to play! (hope I got my math correct ;-))

Anywho... 64-bit and 128-bit color palettes are enormous. Too big to be useful to people.
Perhaps if you wanted to create some cinema classic filmed (rendered) entirely in subtle variations of one shade of aqua. ;-)

So... I was basically saying that I REALLY doubt that people will be running iMovie on 128bit video in 3 years. There is no benefit.

Additionally, assume a video stream at 640x480 resolution, 30 frames per second, 128-bit color depth (fairly low rez, super high quality). You'd need over 140 MByte/second of constant bandwidth to stream that video. That would require a solid state drive or striped 15K SCSI drives (those MIGHT be fast enough).
BTW... if you had a FireWire camera that could do that... it'd require the majority of a FW 1600 bus. FW 800 wouldn't come close, and neither would a Gigabit Ethernet connection.

... just playing with my calculator. :-)

ffakr
Mar 6, 2003, 04:43 PM
Originally posted by Rincewind42
It is a limitation of the hardware. A machine's pointer size is equal to the size of the registers used by the integer unit, so on a 32-bit platform a pointer is always going to be 32 bits. Now, using chip- and OS-specific techniques you can emulate larger memory areas in software. This is how DOS used to address more than 64k of memory, and why it uses a segmented memory architecture (and additionally how it broke past the 1MB barrier...). Using a segmented architecture the latest x86 can address 36 bits' worth of memory, but not in one application without OS hooks.
I was about to concede this to you since you appear to know better than I...
but then I checked.
Technically we are both wrong. (I said 38-bit, and it's 36-bit.)

36-bit physical address space for direct addressability of 64 Gigabytes of memory from http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7455&nodeId=03M943030450467M98653

MaxArturo
Mar 6, 2003, 04:47 PM
I can't believe they would do this

Clockwork
Mar 6, 2003, 04:49 PM
What is to stop Apple from releasing OS X or OS X Server (or OS X Cluster) on the IBM Blade servers???????

I reckon that would be a very bold move by both Apple and IBM if that should happen.
Apple have just ventured into a market where few thought they would ever see an Apple-branded product, but I think that if Apple were to do this, they would buy the chips from IBM and develop their own blade systems under their own market name.
Apple continues to impress a lot of people, and all they need is speed. That is the only thing holding Apple back in every aspect.
The OS and the overall software portfolio have never been stronger, and they will surely continue to develop.

Someone said something about Intel moving to a 1200MHz FSB on an improved P4. wtf?!? Are you talking about the proposed P5 named "Prescott"? It will start out at 800MHz, and from there on who knows... The 970 will start out at 900MHz, so that should give us a head start.
And something else people should think about when talking about how the 970 leveled with a 2.8GHz P4: SPECint leaves out AltiVec, something known to be the real power behind the G4, and I'm sure it will throw the 970 into the #1 position in areas that really matter. Science, 3D, film and sound :p not that other areas don't matter, but this is where the real power-hungry community is.

MarkCollette
Mar 6, 2003, 04:58 PM
Originally posted by ffakr
But this is a limitation of the development language, not the hardware... isn't it?
I doubt any average user (and again, the article in question mentions average users) will be editing images or video with 64-bit or 128-bit color depths in a few years (again, the article said 3 [a few] years). Where is the real advantage in working on 128-bit color video? Can your eyes see the difference? Can your monitor or TV display at that depth? Maybe if you absolutely need to have a perfectly smooth fade of one color on an HDTV while still having a full palette... then maybe you'd need a bazillion-color palette.
Personally, I'm pretty hard pressed to see the difference between 24bit and 32bit color. Do you need 3.4 x 10^38 colors?

128-bit video will not be using integral values like 24-bit and 32-bit video. Instead, the 128 bits are used to define the alpha, red, green, and blue components as floating point. This is part of the new OpenGL standard, and in the DirectX "standard". The reason is maintaining precision over many, many calculations. Apparently this is required for photorealistic rendering.

ffakr
Mar 6, 2003, 04:59 PM
Originally posted by Clockwork
I reckon that would be a very bold move by both Apple and IBM if that should happen.
Apple have just ventured into a market where few thought they would ever see an Apple-branded product, but I think that if Apple were to do this, they would buy the chips from IBM and develop their own blade systems under their own market name.
Apple continues to impress a lot of people, and all they need is speed. That is the only thing holding Apple back in every aspect.
The OS and the overall software portfolio have never been stronger, and they will surely continue to develop.

Well, the beauty is, Apple doesn't need to spend R&D or worry about building, warehousing, or selling an IBM Blade server with OS X.
They just need to write the drivers for OS X to run on the blade. It should run on the processor with zero changes (though a recompile for 64bit would make sense).
Apple can simply release a blade version of OS X and IBM can sell it.
'Get your PowerPC 970 blade with Linux, AIX, or OS X. Run all in the same enclosure. Purchase the Xeon blade and you can even add Windows to the mix!'
The old slogan for IBM is "No one ever got fired for buying IBM". If Apple could get IBM to acknowledge that OS X runs on their blades... or even endorse it!... that would be a major coup for Apple as far as increasing Apple's enterprise reputation.

It'd be worth the work porting it even if they barely sold any copies... just the PR and image would be awesome.

ffakr
Mar 6, 2003, 05:07 PM
Originally posted by MarkCollette
128-bit video will not be using integral values like 24-bit and 32-bit video. Instead, the 128 bits are used to define the alpha, red, green, and blue components as floating point. This is part of the new OpenGL standard, and in the DirectX "standard". The reason is maintaining precision over many, many calculations. Apparently this is required for photorealistic rendering.

I did realize that CAD/CAM was interested in higher bit depths for precision. I had heard 48bit was being talked about.
I didn't realize that a 'color depth' would include alpha info for so called 64bit and 128 bit color. Usually there is a seperate value associated with the alpha channel.
I still don't see how you'd need 128bits though... especially for raw video or image editing.
32bit color with 16bits of alpha channel is considered high end right now. Even 64bits of room is vastly larger than that.

Thanks for the info though.

... stuborn old ffakr.

macrumors12345
Mar 6, 2003, 05:16 PM
Originally posted by Hattig
Where would I like to see Apple's product range this time next year?

Portable

Ultra-lite (12" PB, new products): G4 7457 @ 1.2 GHz (maybe 1.4 GHz)

iBook: G4 7457 @ 1.0 GHz (maybe 1.2 GHz)



Well, at least you are not crying for 970s across the board. I actually think you are shooting low here. This time next year? I would want the iBook to be at 1.3 Ghz 7457 and 12" PB at 1.6 Ghz 7457 (12" PB would also have faster memory bus, or L3 cache, or something to distinguish it from iBook).

Powerbook: 1.2 GHz IBM 970 (maybe 1.4 GHz)

Fine. Unless they transition as quickly to 90nm as AMD does with the Athlon 64 (kinda doubt it)... then I would expect 1.6 to 1.8GHz.

Consumer Desktop

iMac: 1.2 GHz IBM 970 to 1.6 GHz IBM 970

NO. The iMac will NOT have a 970 in it this year. And almost surely not at the beginning of next year either. Nor should it!! I would go for 1.8+ Ghz 7457 in this case, w/L3 cache and a good memory bus. Look, there ARE users who want a nice, affordable Mac that is fast but doesn't need to have supercomputing speed. Unless the 970 is cheaper across the board than the 7457 (which, remember, should be cheaper than the 7455 because of the process shrink), it makes no sense to force them to buy the more expensive 970 when a high clocked G4 meets the processing needs of many people. It doesn't make sense for the consumers, and it doesn't make good business sense to have no segmentation in your product line. If you go to the Dell website, you will see that only a small minority of their machines ship with 2.8 and 3.0 Ghz P4s. In fact, they sell many desktops with 1.8 and 2.0 Ghz Celerons. A G4 running at nearly 2 Ghz should be pretty competitive with a Celeron clocked as high as 3 Ghz, so (unlike today) Apple's low end machines would actually be quite competitive with low end PCs (albeit several hundred dollars more expensive). And this is as it should be. Honestly, if you really need a workstation class 64 bit chip in your iMac, then you should probably be looking at a Pro Machine. I think it could make sense for Apple to offer a single, high-end 970 iMac like the current 1 Ghz iMac. But nothing more than that.

If you want the 970, buy a Pro Machine. If not, sit back and watch as Apple doubles the clockspeed on the iMac in a matter of months. For many users, that is more than enough power. So why should they be forced to pay for more (for 970s across the board)?


Power Desktop

PowerMac: 1.8 GHz IBM 970 to 2.5 GHz IBM 970

Power Workstation

"PowerMacPro": Dual IBM 970's (1.8 GHz to 2.5 GHz)


Agree with these.


As you can see, I expect Apple to fully drop the G3 and G4 throughout their entire product range, except for the very low power 7457 in cheap and low-power systems.

I don't. Eventually, yes. But not this year, and not even next year. The G4 has been out since 1999, but the consumer laptops are still using the G3 (albeit the G3 is currently fabbed on a lower-power 0.13 micron process). They are going to want to differentiate their systems. And that makes sense.

Hattig
Mar 6, 2003, 05:30 PM
Originally posted by macrumors12345
I don't. Eventually, yes. But not this year, and not even next year. The G4 has been out since 1999, but the consumer laptops are still using the G3 (albeit the G3 is currently fabbed on a lower-power 0.13 micron process). They are going to want to differentiate their systems. And that makes sense.

This does depend entirely on the price of the IBM PPC 970 processor at different speed grades, which depends on IBM's fabrication yields for the processors, and how many of the things they can make.

I think that it is simple to have the same processor throughout the range, and to differentiate on processor speed, expandability and number of processors.

If 1.2GHz PPC 970 processors from IBM are cheaper than 1.6GHz 7457's from Motorola, then I don't see why Apple would want to use the 7457, when it could get cheaper processors and the advantage of 64-bit marketability.

So:
iMac3: 1.2GHz, 1.4GHz, 1.6GHz PPC970
PowerMac: 1.8GHz, 2.1GHz, 2.4GHz PPC970
PowerMacPro: 2x1.8GHz, 2x2.4GHz PPC970

However, if IBM aren't going to ship below 1.8GHz (as the recent article could be read), then the iMacs will of course use faster 7457's.

And I did say that is what I would like, not what Apple will probably mess up with.

I agree, I was shooting a bit low with the iBook and PowerBook speeds :) But I do think that Apple will use a low-voltage 1.2GHz PPC 970 in the top of the range PowerBook, because it is only 20W or so. Maybe not in March, but by mid-year 2004.

MacRETARD
Mar 6, 2003, 05:31 PM
Originally posted by nuckinfutz
Those are "Estimated" SPEC scores, by the way.

The PPC 970 will support up to a 900MHz FSB and 6.4GB/s of bandwidth.

It can support the same DDR AND run in SMP configurations. If Apple had the balls to create a dual 1.8GHz 970 system, then yes, it would be faster overall than whatever Intel has.

Intel's Hyperthreading "needs" improvement. Currently it is not very efficient. I expect IBM's implementation to be superior in the POWER5.

Needless to say, if Apple is aggressive, Mac users won't be having Intel envy for a while. That's something to look forward to.

A few things:

1) Apple has announced no new chip.
2) Intel does support dual CPUs as well. Actually, Intel supports up to 32 CPUs via SMP.
3) Yes, Intel's Hyperthreading could use improvement; anything could use improvement, but it is a feature that is out NOW, and is a feature that does help.
4) Intel supports SSE2, which in some benchmarks will blow away the G4 and AltiVec, just like in other benchmarks (RC5?) the G4 blows away the P4. These specialized benchmarks mean little; you have to look at a bunch if you want to get a good idea of performance.

What's my point? The legendary holy grail of Apple computing is not out yet. I think too many people have "G5 syndrome". How long have we been talking about the legendary Pentium-killer G5? To bash something that is out and is useful (Intel's HT) while comparing it to something that is NOT out and can only be speculated on is stupid.

If Apple or AMD finally come out with 64-bit chips for the desktop that support SMP at reasonable prices, you can bet Intel will magically enable the P4 to be SMP capable. The only thing keeping the P4 from supporting SMP now is a hardware restriction Intel adds so they can charge more for the Xeon CPUs.

noverflow
Mar 6, 2003, 05:36 PM
Originally posted by suzerain
Ummm...did you even go to the site? According to their FAQ, they split the multiprocessor setups onto a separate page, which is where your xlr8yourmac numbers come from.

The speed I quoted is from a page ostensibly listing single processor speeds. Now, it's entirely possible someone submitted a bogus entry, I suppose, but it wasn't from the duals page.

Just go here (http://n0cgi.distributed.net/speed/) and you'll see that.


However... if you look at a dual 800 and this 1600, the numbers are the same. Also, when the 1.0 QS came out, someone posted 2.0 speeds... people just add the numbers up and submit them.

Dont Hurt Me
Mar 6, 2003, 05:38 PM
Originally posted by MacRETARD
A few things:

1) Apple has announced no new chip.
2) Intel does support dual CPUs as well. Actually, Intel supports up to 32 CPUs via SMP.
3) Yes, Intel's Hyperthreading could use improvement; anything could use improvement, but it is a feature that is out NOW, and is a feature that does help.
4) Intel supports SSE2, which in some benchmarks will blow away the G4 and AltiVec, just like in other benchmarks (RC5?) the G4 blows away the P4. These specialized benchmarks mean little; you have to look at a bunch if you want to get a good idea of performance.

What's my point? The legendary holy grail of Apple computing is not out yet. I think too many people have "G5 syndrome". How long have we been talking about the legendary Pentium-killer G5? To bash something that is out and is useful (Intel's HT) while comparing it to something that is NOT out and can only be speculated on is stupid.

If Apple or AMD finally come out with 64-bit chips for the desktop that support SMP at reasonable prices, you can bet Intel will magically enable the P4 to be SMP capable. The only thing keeping the P4 from supporting SMP now is a hardware restriction Intel adds so they can charge more for the Xeon CPUs.

He makes a point. Apple has resorted to 2 CPUs to make up for the G4, and it still don't do it, whatever math you use. The G4 has been left to stagnate by that who-cares Motorola. The G4 had all the potential, only a company behind it who couldn't care less. Why else would Apple have to resort to marketing, OSX, xServe architecture, etc., just to keep it on the same page? Motorola SUCKS! SUCKS! SUCKS! Just another company run by some damn bean counters with no vision. BRING ON THE 970 and DO IT NOW! INCREASE YOUR MARKETSHARE APPLE!

macrumors12345
Mar 6, 2003, 05:40 PM
Originally posted by Hattig

If 1.2GHz PPC 970 processors from IBM are cheaper than 1.6GHz 7457's from Motorola, then I don't see why Apple would want to use the 7457, when it could get cheaper processors and the advantage of 64-bit marketability.

If that turns out to be true, then yes, I agree, Apple should offer the 1.2GHz 970 instead of the 1.6GHz 7457. But I seriously doubt that will be true. My understanding is that the 7455 die is as large as or larger than the 970 die, but the 7455 is on a 180nm process and the 970 is on a 130nm process. I would expect that the 130nm 7457, which I believe has significantly fewer transistors than the 970, will be cheaper than a 130nm 970. At the end of the day, the 970 is a workstation chip, and the 7457 is an embedded chip. It's unlikely that the former will be cheaper than the latter, even at lower clock speeds. But you never know, of course.



However, if IBM aren't going to ship below 1.8GHz (as the recent article could be read), then the iMacs will of course use faster 7457's.

Yeah, well, I suspect that the 1.8-2.5 Ghz speed range is on the 90 nm process, but we'll see. You never know, but I'm not *that* optimistic. Still, the 970 will definitely be a good chip - much better than what we have now.

Frobozz
Mar 6, 2003, 08:27 PM
Originally posted by reyesmac
Steve said this was the year of the laptop, so whatever Powermac comes out, it probably wont be the fastest computer out there, just a speedbump. We wont be crossing the 2gig barrier this year.

Sorry to inform you, but Steve will do what makes Apple money. Apple will make money off of both strong portable sales and a revamped Pro line.

ddtlm
Mar 6, 2003, 08:41 PM
macrumors12345:

My understanding is that the 7455 die is as large as or larger than the 970 die, but the 7455 is on a 180nm process and the 970 is on a 130nm process.
Arstechnica has this information nicely presented in table form at the link that follows. Note that the 970 is 14% larger than the 7455.

http://www.arstechnica.com/cpu/02q2/ppc970/ppc970-1.html

macrumors12345
Mar 6, 2003, 09:31 PM
Originally posted by ddtlm
macrumors12345:


Arstechnica has this information nicely presented in table form at the link that follows. Note that the 970 is 14% larger than the 7455.

http://www.arstechnica.com/cpu/02q2/ppc970/ppc970-1.html

Thanks for the link. The die size of the 130 nm 970 is slightly larger than the 180 nm 7455, so it is safe to say that the 130 nm 7457 will be substantially smaller than the 130 nm 970. Furthermore, for the 970 the transistor count is 52 million, for the 7455, 33 million. But the 7457's transistor count should rise somewhat with the doubling of L2 cache from 256k to 512k.

MarkCollette
Mar 7, 2003, 10:37 AM
Originally posted by ffakr
I did realize that CAD/CAM was interested in higher bit depths for precision. I had heard 48-bit was being talked about.
I didn't realize that a 'color depth' would include alpha info for so-called 64-bit and 128-bit color. Usually there is a separate value associated with the alpha channel.
I still don't see how you'd need 128 bits though... especially for raw video or image editing.
32-bit color with 16 bits of alpha channel is considered high end right now. Even 64 bits of room is vastly larger than that.

Thanks for the info though.

... stubborn old ffakr.

First of all, when I mention the alpha value, that does not apply to video capture or final representation, but is instead applicable to any component of the video, i.e., fading from one scene to another requires two separate video sources, each with their own alpha value.

Secondly, when talking about floating point values, one must keep in mind that they are really just some integral value plus a power, i.e., 12.34 = 1234x10^-2 is stored as [1234, -2]. Of course the computer actually uses base 2, but I'm trying to get at the fact that with a single color (say red) taking up 32 bits, only, say, 20 bits might actually be for a precise number, and the rest is used for the exponent, so that one can better differentiate a really faint candle on one side of the room from the sun shining through a crack elsewhere. (I don't know the IEEE floating point standards by heart, so it's probably not 20 bits, but it's something close to that.)

Ok, so now we see that to use 128 bits for each pixel only gives us ~20 bits of precision per color, which isn't so far from the current high end of 16 bits of precision. Of course everyone will be using 64 bit color for a while before moving to 128 bit, but it's best to make the standards well beforehand.

3G4N
Mar 7, 2003, 02:29 PM
Of course everyone will be using 64 bit color for a while before moving to 128 bit,

I would really like to see links to even possible uses of 128-bit color. There is NO NEED for that kind of precision. And everyone using 64-bit color? I don't think so.

As others have mentioned, our eyes can only detect the equivalent of 10 bits per color channel (30-bit color), max. This would be analogous to those uber-audio-guru geeks who claim they can tell the difference between pristine analog and 16-bit, 44kHz CD-quality sound (read: it ain't easy). We just can't discern the differences at higher bit depths.

Hollywood only uses 10-bit per channel color (maybe 12-bit) for film and HD. These are typically housed in 16-bit channels, with the remaining bits "unused", resulting in RGB=48-bits, and RGBA=64-bits of information that needs to be crunched/manipulated.

That 64-bits for RGBA even includes Alpha channel info that isn't really presented to the eye in the form of color, and it's still more than enough. 8-bit/channel is all anyone really needs for 98% of video projects.

I meet people all the time that can't tell the difference between an 8-bit indexed color GIF and a 24-bit RGB picture, so I just can't see humans needing 128-bit color.

ffakr said: Additionally, assume a video stream of 640x480 resolution, 30 frames per second, 128bit color depth (fairly low rez, super high quality). You'd need over 140 MByte/second constant bandwidth to stream that video.

fwiw - 140 MB/s is in the ballpark of uncompressed HDTV (1920x1080, 30p, 8-bit, not 10-bit, works out to roughly 180 MB/s, or about 6 MB/frame), and roughly 6x uncompressed NTSC.
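ffakr's figure is easy to check; here is a quick sketch of the bandwidth arithmetic (the function name is just for illustration):

```python
def uncompressed_rate_mb_s(width, height, fps, bits_per_pixel):
    """Raw video bandwidth in MB/s (1 MB = 2**20 bytes)."""
    return width * height * fps * bits_per_pixel / 8 / 2**20

# ffakr's example: 640x480, 30 fps, 128-bit color
print(uncompressed_rate_mb_s(640, 480, 30, 128))   # 140.625 MB/s

# 8-bit-per-channel HDTV (1920x1080, 30p) for comparison
print(uncompressed_rate_mb_s(1920, 1080, 30, 24))  # ~178 MB/s
```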

See? Wouldn't you want more physical resolution in the form of more pixels, and a bigger TV (that you can brag about), rather than more color that you can't even see on your present TV?? (try bragging to your buddies about that -- they'll look at you funny).

ddtlm
Mar 7, 2003, 03:25 PM
3G4N:

128-bit color (composed of 4x 32-bit FP components) is actually used in video cards already, so that pixels can be passed through again and again without meaningful loss of data, and also so that extreme differences such as sunlight vs candlelight vs darkness can actually be handled properly. It's even part of the DirectX 9 spec as far as I know. Both the Radeon 9700 and GeForce FX support it.

ffakr
Mar 7, 2003, 03:34 PM
Originally posted by ddtlm
3G4N:

128-bit color (composed of 4x 32-bit FP components) is actually used in video cards already, so that pixels can be passed through again and again without meaningful loss of data, and also so that extreme differences such as sunlight vs candlelight vs darkness can actually be handled properly. It's even part of the DirectX 9 spec as far as I know. Both the Radeon 9700 and GeForce FX support it.

I could be wrong here, but I thought the new Radeon and GF only support 48-bit.

MarkCollette
Mar 7, 2003, 05:49 PM
Originally posted by 3G4N
I would really like to see links to, even possible, uses of 128-bit color. There is NO NEED for that kind of precision. And everyone using 64-bit color? I don't think so.

As others have mentioned, our eyes can only detect the equivalent of 10-bits per color channel (30-bit color), max.

...


Thank you for illustrating exactly what I said. With 64-bit color, one would have 16 bits per ARGB channel. Since they're floating point, the exponent probably takes from 4 to 6 bits, leaving 10 or 12 bits for the actual precision of the color, precisely what you say our eyes can perceive.

Most radiation is detected by our body in a logarithmic fashion, not linear. That is why using more bits in an integral fashion is useless, but adding more bits for an exponent is useful.
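This is also why floating point is a good fit for perceptual data: a float's quantization step scales with the magnitude of the value, so relative precision stays roughly constant, whereas an integer's step is a fixed absolute size. A minimal Python sketch, using the standard library's `math.ulp` (Python 3.9+):

```python
import math

# Integer encoding: quantization steps are uniform in absolute terms.
# One code out of 1023 is ~0.1% of full scale, but near black
# (a value of 1/1023) a single step is a 100% relative change.
step = 1 / 1023
print(step / 1.0)    # relative step near white: ~0.001
print(step / step)   # relative step near black: 1.0 (i.e. 100%)

# Float encoding: the gap to the next representable value (math.ulp)
# scales with magnitude, so relative precision stays roughly constant
# across brightness levels -- a logarithmic-like behavior.
for v in (1e-4, 1e-2, 1.0):
    print(v, math.ulp(v) / v)   # all around 1e-16 for doubles
```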