Re: Re: When will they be in the machines you think?

Originally posted by pseudobrit
The newer processors usually hit the PowerMacs first. I would wager it would take about 10 - 18 months to get the 970 in a PowerBook, along with a total redesign.

I remember that G3-based desktops and laptops were released around the same time, thanks to the G3's low heat and power consumption. If the PPC 970 is similarly modest in heat and power, we may see a PowerMac and a PowerBook using the 970 released at the same time.

I hope...

But then again, I may be wrong...
 
Originally posted by Dont Hurt Me
Please, anyone, tell me: will the newer architecture support the 970 as-is?

No.

The System Controller, at the least, would have to be replaced. It is possible that other systems on the motherboard would also have to be re-arranged, but replacing the SC is the big hurdle in implementing any new chip FSB architecture (the 970 has a radically different "front-side" bus from the G3 and G4).
 
Originally posted by Abercrombieboy
Hey, I am just waiting for Steve Jobs to do his Photoshop test with a dual 2.5GHz PowerMac 970!


I'm sitting there waiting for Steve to walk out and unveil the new PowerMacs. He starts to go through his normal routine of showing how the new Mac beats the PC in the usual Altivec Photoshop routines. Yeah, yeah, seen it before (polite applause).

Steve then sets up the Mac vs. PC for an extensive performance benchmark that runs across a whole bunch of applications (not just the usual Photoshop BS) and explains that it will take a while to run, so he goes on to whatever other product announcements...

An hour later, Steve is wrapping things up, and he wanders back over to the Mac -vs- PC benchmark test. Hey! Good news: the Mac ran this app faster, matched the PC for that app, etc...a very even performance match all the way across the board. Steve smiles, says "thankyouverymuch..." etc.

As the lights come up in the auditorium, it is heard: "Oh, one more thing..."

Steve goes back over to the Mac. He reveals that it had run all the benchmarks as Windows applications under VirtualPC!

The audience is confused. They stand there, silent and stunned.

Steve sits back down, closes VPC and says "Let's run those benchmarks again, shall we?" :D


Suddenly I wake with a start, in an ice-cold sweat. Dang, it's just that same recurring dream again.



-hh
 
Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by hacurio1
Funny!!! I still remember studies saying that the transition from 16 to 32 bits would take longer than it actually took. People used to say, "there is no need for 32 bits yet, 16 is fine," and here we are!! The truth is that there will always be the need for more. When 16 bits was mainstream there was no need for 32, because programs were small and people weren't interested in editing video or keeping 30GB music libraries on their computers. What will be out a year from now? Two years from now? Probably something that will benefit from 64 bits. The problem with studies is that they assume the programs and uses for computers stay constant.

When I bought a Power Mac 9600 in 1997 I was called crazy many times. I spent more than 5 grand on that computer, and people used to say there was no way I could ever use all its power. Today, that computer can't even run OSX. I'm writing from the first dual-processor Power Mac G4 with 2GB of RAM. So far so good, but sometimes I feel I need more RAM. The 500MHz G4 is no big deal today either, but 4 years ago some of you might have asked, why do you need so much power? :confused:

Of course 4GB of memory is expensive today!!!! About $1,000, but wait two years, heck… just one more year and 4GB of memory will cost about $500. When I bought my G4 in November 1999, 1.5GB of RAM cost about $1,500… today, 2GB of PC133 costs only $319.
 
Originally posted by nuckinfutz
We'll have bragging rights. The 970 bus runs at half the core frequency. A 1.8GHz PPC 970 has a 900MHz FSB. 100MHz faster than Prescott ;)
That means that the processors should be 1.35GHz, 1.8GHz, 2.25GHz, and 2.7GHz. Also, the Prescott will be 200x4 if I'm not mistaken, like the P4's "bus speed".
 
Re: Re: Re: Business

Originally posted by applejilted
Precisely!!!! Read my post on the previous page... Apple is losing lots of sales by NOT pre-announcing... people can make do with current systems if the promise of a blazingly fast new system is just around the corner. I for one would even thank Apple, as I wouldn't buy a slow system now, but I wouldn't abandon ship either...

People who are perpetually waiting for the next big thing aren't people who really need the computer for real work. It's more along the lines of a 'status symbol' for them.

Apple cannot pre-announce. At least, not without cannibalizing existing sales, which tends to hurt the bottom line. About the only times Apple pre-announced were the first iMac and the iPod. At those points there were no existing sales to cannibalize, and the only sales Apple hurt were those of the iMac's and iPod's competitors.

The people who need a pre-announcement to keep from 'defecting' are the people who were going to 'defect' anyway. Apple is like the US: we'll take 'defectors', but we won't stop anyone who has already made up their mind to 'defect'. I guess that makes Microsoft the commies. :D
 
Originally posted by DavPeanut
That means that the processors should be 1.35GHz, 1.8GHz, 2.25GHz, and 2.7GHz. Also, the Prescott will be 200x4 if I'm not mistaken, like the P4's "bus speed".

The Prescott will have (I believe) one 64-bit wide bi-directional bus running at 800 MHz.

The 970 will have two 32-bit wide uni-directional buses (one from the chipset, one to the chipset) running at 900 MHz each.
 
Originally posted by DavPeanut
That means that the processors should be 1.35GHz, 1.8GHz, 2.25GHz, and 2.7GHz. Also, the Prescott will be 200x4 if I'm not mistaken, like the P4's "bus speed".

erm... I think you misunderstood. The bus frequency is half the core frequency. So, for a 2500MHz 970, it'll have a 1250MHz bus. For a 2000MHz 970, it'll have a 1000MHz bus. A 1.8GHz 970 and Prescott have the same effective bus speed (800MHz. The 970's is actually 900MHz, but there's some overhead). A 2.25GHz 970 would have a 1.125GHz bus, which seems a bit unwieldy.
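For anyone following along, the half-core-frequency rule being discussed can be sketched in a few lines of Python. The core clocks below are the speculated lineup from this thread, not announced parts:

```python
# Hypothetical sketch of the "bus runs at half the core frequency"
# rule reported for the PPC 970. Core clocks are the rumored lineup
# from this thread, not confirmed products.

def bus_speed_mhz(core_mhz):
    """FSB clock under the half-core-frequency rule."""
    return core_mhz // 2

for core_mhz in (1800, 2000, 2250, 2500):
    print(f"{core_mhz} MHz core -> {bus_speed_mhz(core_mhz)} MHz bus")
```

Remember that the 1.8GHz part's 900MHz bus is only ~800MHz effective once protocol overhead is taken into account, as noted above.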
 
Re: Re: Blade

Originally posted by beatle888
MY GOD. Have you seen that disgusting site? And the blade server looks like something found at a garage sale... hahaha, did you see those cheapo handles on the side? Oh man... how can IBM let themselves be identified with such crap? God, their website is so sad. Anyway, looking forward to the 970.

How charmingly provincial.

prat
 
Originally posted by szark
The Prescott will have (I believe) one 64-bit wide bi-directional bus running at 800 MHz.

The 970 will have two 32-bit wide uni-directional buses (one from the chipset, one to the chipset) running at 900 MHz each.
I thought the 970 had bi-directional buses?
 
This may be a long shot, but does anyone feel that the move from duals across the line to a single low-end and dual mid/high-end models on the PowerMac suggests they will keep this configuration if/when the 970 comes out? I think it makes sense: I would be extremely happy with even one 970, and others would be willing to shell out another grand or so for a second one.
 
Blade prototypes at Cebit

There's a news article from Heise in Germany stating that prototypes of the new Blade servers will be shown at the CeBIT conference (March 12-19, 2003).
Article

P.S. Heise publishes the well-known and well-respected computer journal c't.
 
Originally posted by Raiden
My question is, if Apple announces the 970 at MWNY, saying the new processor will be available in 2-3 months...

Then it will really be available in about 6 months.

While I'd love to see Apple jump to the 2GHz range, they seem content to milk every dollar out of every 0.2GHz climb. So: don't hold your breath.
 
IBM PowerPC 970 is 64bit...

Let's see... my copy of The PowerPC Architecture: A Specification for a New Family of RISC Processors is copyright 1993... in the book, they talk about 32-bit and 64-bit implementations.

It's now 2003... so 10 years for the 32-bit to 64-bit jump...

I guess it will be 2013 when the PowerPC runs out of steam and we have to jump to 128-bit processing, and/or a new architecture. :D
 
Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by hacurio1
Funny!!! I still remember studies saying that the transition from 16 to 32 bits would take longer than it actually took. People used to say, "there is no need for 32 bits yet, 16 is fine," and here we are!!

(Sorry if this is Intel-ish instead of Motorola... I wasn't on the "Mac scene" during Moto's 32-bit transition)

Really? I don't remember any studies saying that the 32-bit transition on Intel would take ~10 years (from the introduction of the 386 in 1985, a fully 32-bit processor, to the introduction of Windows 95, the first 32-bit-ish OS from MS).

Yes, people will always underestimate our ability to use power, and hindsight will always show that such predictions were foolish. On the other hand, certainly by the time Windows 3.1 was out (1992) the computer industry as a whole had a well-defined idea that 32-bit processing was necessary for many activities. Once Win 95 came out, yes, there were "experts" who claimed that 16-bit software was still okay, but the reality of mode-switching (the CPU had to switch between 32-bit and 16-bit code, which is reportedly not the case on the 970 between 64-bit and 32-bit code) meant that running 16-bit software on a 32-bit OS was overall a pretty bad idea.

Now, contrasting 32 bits with 64 bits: when Intel switched their processors over to 32 bits, it was because users were already bumping into the limitations of 16 bits (memory allocation and management had been kludged to address more than 16-bit pointers allowed, but even those kludges' boundaries were being hit by the majority of developers). By and large, developers are happy with 32 bits today, except in a few areas of development (video processing and databases foremost among them).

When MS switched their OS over to 32 bits, many applications that required 32-bit calculations had already been written to thunk the processor into 32-bit mode while they were in control and thunk it back down to 16-bit mode when they surrendered CPU control. This was incredibly bad both for OS stability (applications often left the CPU in a bad state) and for multitasking (such apps tended to relinquish control sparingly, to avoid the performance hit of thunking/dethunking the processor when no other app needed the timeslice).

While there are certainly apps that make use of 64-bit ints today (although obviously not 64-bit memory addressing, etc.), 64-bit processing support amidst regular 32-bit code is quite well supported on most processors today. It's not efficient (operations take roughly 5x as long as 32-bit int operations), but such operations cannot leave the CPU in a bad state, nor do they require the expensive mode-switching that makes developers want to hoard the CPU for as long as possible while they have it.

The 128-bit discussion here is more ludicrous: 64-bit pointers can address more memory than has been produced in the entire history of the personal computer (16 billion billion bytes), and it will be a while before we have enough memory on our desktop to even take full advantage of 64-bit addressing; I can think of no applications today that use 128-bit integer calculations - though I am sure they exist, they certainly are nowhere near mainstream. Will 128 bits be needed eventually? Eventually. I just don't see it as being as quick as the 16->32 bit transition (~10 years) or even the 32->64 bit transition (~16 years). Integer/pointer bit width, once a bottleneck and constraining factor for the majority of computing applications, is no longer the bottleneck.
 
Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by hacurio1
Funny!!! I still remember studies saying that the transition from 16 to 32 bits would take longer than it actually took. People used to say, "there is no need for 32 bits yet, 16 is fine," and here we are!! The truth is that there will always be the need for more. When 16 bits was mainstream there was no need for 32, because programs were small and people weren't interested in editing video or keeping 30GB music libraries on their computers. What will be out a year from now? Two years from now? Probably something that will benefit from 64 bits. The problem with studies is that they assume the programs and uses for computers stay constant. When I bought a Power Mac 9600 in 1997 I was called crazy many times. I spent more than 5 grand on that computer, and people used to say there was no way I could ever use all its power. Today, that computer can't even run OSX. I'm writing from the first dual-processor Power Mac G4 with 2GB of RAM. So far so good, but sometimes I feel I need more RAM. The 500MHz G4 is no big deal today either, but 4 years ago some of you might have asked, why do you need so much power? :confused:

Let's do a little math, shall we?

2^32 = 4,294,967,296 bytes = 4 Gigs

2^64 = 18,446,744,073,709,551,616 bytes = about 17,179,869,184 Gigs

Consider that an entire film can usually fit on a 4-gig DVD disk.

The point that I am trying to make is that the "bitness" of an OS goes up exponentially, whereas other system measures, such as clock speed, go up sub-linearly (a 1000MHz chip is usually not 2x as fast as a 500MHz chip in practice). So the argument that "people said we'd never need 32 bits", while historically accurate, doesn't take into account the interaction between the exponential curve of addressable space and the relatively linear increase in the need for addressable space (outside of databases).
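The address-space arithmetic can be checked in a few lines of Python (using binary gigabytes, i.e. 2^30 bytes):

```python
# Addressable memory for an n-bit pointer is 2^n bytes.
GIB = 2 ** 30  # bytes per (binary) gigabyte

def addressable_gib(pointer_bits):
    """How many gigabytes an n-bit pointer can address."""
    return 2 ** pointer_bits // GIB

print(addressable_gib(32))  # 4 GB with 32-bit pointers
print(addressable_gib(64))  # 17179869184 GB with 64-bit pointers
```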

In reality, it is RAM prices that will constrain memory size in PCs for the next few years rather than system bitness.

All of this is not to say that the 970 won't be a kick-ass chip, but it is to say that 64-bitness will not be a significant contributor to that. The system bus and the deep-and-wide approach that the 970 brings are far more important.

Cheers,
prat
 
Originally posted by DavPeanut
How fast are the buses for the 970's supposed to be?

From IBM's press release:

6.4 billion bytes per second.

That works out to a 64-bit-wide bus at 800MHz (though I'm not sure whether it's actually 64 or 128 bits wide).

This is identical to the original press release which touted an 800MHz bus on the 1.8GHz part.

In contrast, the fastest PC bus today is the Pentium 4's, at 533MHz (4.2GB/s). However, Intel is developing an 800MHz FSB as well, so if IBM gains the FSB speed crown it will be a short-lived triumph.
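The bandwidth figures quoted here fall straight out of bus width times clock; a rough sketch:

```python
# Peak FSB throughput = bus width (in bytes) x clock rate.
# The 6.4 GB/s figure implies a 64-bit (8-byte) path at 800 MHz;
# the Pentium 4's quad-pumped 533 MHz bus comes out the same way.

def peak_gb_per_s(width_bits, clock_mhz):
    """Peak throughput in billions of bytes per second."""
    return (width_bits // 8) * clock_mhz / 1000

print(peak_gb_per_s(64, 800))  # 6.4  (970, per IBM's press release)
print(peak_gb_per_s(64, 533))  # ~4.3 (Pentium 4, 533 MHz)
```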
 
Originally posted by nuckinfutz
We'll have bragging rights. The 970 bus runs at half the core frequency. A 1.8GHz PPC 970 has a 900MHz FSB. 100MHz faster than Prescott ;)

Where are you getting that? IBM's press release says (after you run the numbers) that the FSB is 800MHz, not 900MHz, and does not hint that the 2.5GHz part might have a faster FSB than the 1.8GHz part.

Generally, the FSB rate is not tied to CPU core frequency directly, although the CPU core frequency is usually a basic multiple (ie, not necessarily integral, but not a highly complex multiple either) of the FSB.
 
Re: Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by hacurio1
Of course 4GB of memory is expensive today!!!! About $1,000, but wait two years, heck… just one more year and 4GB of memory will cost about $500. When I bought my G4 in November 1999, 1.5GB of RAM cost about $1,500… today, 2GB of PC133 costs only $319.

Note also that the large parts (1GB memory sticks) do not have enough buyers to attain mass-market pricing, and are thus more expensive than they would be if everyone and their cousin wanted 4GB of RAM on their machine (which, of course, would first and foremost require that their machines could accept 4GB of RAM, which is a rarity even in the Intel world still ...)

Memory prices, from a non-scientific guesstimate, appear to be following roughly an inverse Moore's law, if not better. I'd say that the capacity of memory you can buy for a given sum roughly doubles from year to year. That would jibe with the figures above: 4GB of memory being mass-market affordable (~$250) within two years.
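A sketch of that guesstimate, assuming the price of a fixed capacity halves each year (the halving rate is the post's rough model, not measured data):

```python
# Project the cost of 4GB of RAM under a "price halves yearly" model.
# The $1,000 starting point is the figure quoted in this thread.

def projected_price(start_usd, years, yearly_factor=0.5):
    """Price after applying the yearly decay factor."""
    return start_usd * yearly_factor ** years

print(projected_price(1000, 1))  # 500.0
print(projected_price(1000, 2))  # 250.0
```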

However, when there is a barrier to people using an amount of memory, the more dense sticks tend to be significantly more expensive to purchase than the less dense sticks. For instance, just a few years back a single 128MB stick of SDRAM was running about 1.5-2x the cost of two 64MB sticks. If you are going to be putting 4GB in a PC, you'll need at least 1GB per stick, possibly 2GB per stick (depending on how many slots the motherboard has for RAM), which puts you in premium-priced memory densities. This will only change when the mass market moves to 64-bit processors ... unfortunately, probably a year or so after the introduction of the 970.
 
Re: Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by jettredmont
The 128-bit discussion here is more ludicrous: 64-bit pointers can address more memory than has been produced in the entire history of the personal computer (16 billion billion bytes), and it will be a while before we have enough memory on our desktop to even take full advantage of 64-bit addressing; I can think of no applications today that use 128-bit integer calculations - though I am sure they exist, they certainly are nowhere near mainstream. Will 128 bits be needed eventually? Eventually.

By the time we are using 128-bit computing, Apple, IBM, Microsoft and Intel will have merged into one company, called Cyberdyne, and the first application will be the Model 101.

Hasta La Vista, Baby. :D :D :D
 
Originally posted by jettredmont
Ummm ... the 1.8 chip is supposed to be fall-2003 to Q1 2004, and that date estimate seems to be moving more towards Fall 2003.

If IBM is claiming the 970 in their soon-to-debut PowerPC Blade servers runs up to 2.5GHz, then you can expect that at most there will be a 3-month delay between general introduction and 2.5GHz parts being available. IBM tends to not play games with press releases.

Therefore, assuming that this press release was not a typo, we can expect 2.5GHz parts no later than this time next year (March 2004).

If the 1.8GHz part gets the estimated SPEC scores, the 2.5GHz part should get approximately (although not exactly) 40% higher scores. This is some very good news. If nothing else, IBM's blade servers should be selling like hotcakes!
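The ~40% figure is just linear clock scaling, which is optimistic since memory speed doesn't scale with the core; the arithmetic:

```python
# Naive clock-scaling estimate: a 2.5 GHz 970 vs. a 1.8 GHz 970.
# Real SPEC scores would scale somewhat less than linearly.

speedup = 2.5 / 1.8
print(f"{(speedup - 1) * 100:.0f}% higher")  # 39% higher
```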

This is HIGHLY CONSISTENT with Apple January release notices and late Feb, early Mar ship dates.

Seems the writing is on the wall indeed. Save your pennies, but hoard OS9 machines NOW.

Rocketman
 
Re: Re: Re: Re: Re: Re: Re: Re: When will this happen?

Originally posted by praetorian_x
Let's do a little math, shall we?

2^32 = 4,294,967,296 bytes = 4 Gigs

2^64 = 18,446,744,073,709,551,616 bytes = about 17,179,869,184 Gigs

Consider that an entire film can usually fit on a 4-gig DVD disk.

The point that I am trying to make is that the "bitness" of an OS goes up exponentially, whereas other system measures, such as clock speed, go up sub-linearly (a 1000MHz chip is usually not 2x as fast as a 500MHz chip in practice). So the argument that "people said we'd never need 32 bits", while historically accurate, doesn't take into account the interaction between the exponential curve of addressable space and the relatively linear increase in the need for addressable space (outside of databases).

In reality, it is RAM prices that will constrain memory size in PCs for the next few years rather than system bitness.

All of this is not to say that the 970 won't be a kick-ass chip, but it is to say that 64-bitness will not be a significant contributor to that. The system bus and the deep-and-wide approach that the 970 brings are far more important.

Cheers,
prat

Your DVD example misses the point. Yes, the contents of a DVD will fit in about 4GB, but compressed (MPEG-2). Now, how long will it take you to compress 17GB of uncompressed video into MPEG-2? I know that it mostly depends on the CPU, but it would be really nice to load more than 4GB of data into RAM to keep the CPU's idle time to a minimum. I know that today most people don't do this on a daily basis, but five years ago it was unimaginable for an average user to do this kind of thing.

Tell me something: how much RAM did you have on your previous computer? How much RAM do you have on your current computer? That is my point: we are getting closer and closer to the point where 4GB will not be enough!!!! My first Mac, a Performa CD, had 4MB of RAM; my second Mac, a Performa 6400, had 16MB; my Power Mac 9600 had 96MB; and my G4 has 2GB. In about 10 years my computers went from 4MB to 2GB.

And please explain: why do you think RAM prices will constrain memory size in the next few years? :confused:
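For what it's worth, that 4MB-to-2GB progression over roughly ten years works out to a doubling about every 13 months; a quick check (machine specs are the ones quoted in the post, years approximate):

```python
import math

# 4MB (~1993, Performa CD) to 2GB (~2003, dual G4): how often did
# this poster's RAM double? Figures are taken from the post above.
start_mb, end_mb, years = 4, 2048, 10

doublings = math.log2(end_mb / start_mb)
months_per_doubling = years * 12 / doublings

print(round(doublings))            # 9
print(round(months_per_doubling))  # 13
```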
 