Thanks Rocketman

Good advice. As soon as the next one comes out, I'm "in" like that guy whose name rhymes with "in".

The reason I like the 12" is for its portability, and I find the 14" not as crisp to look at. I tried using the 12" at the store a few times and found it very easy to read. I especially liked the keyboard, being the same size as the 14"'s.

You don't find the resolution on the 14" a bit wanting? Just curious.
 
Maxx Power said:
That must be it, I'm not the one going around calling people names.

Maxx Power said:
Yep, in other words, unless you want to satisfy your greedy desires for luxury computing, an ever-shrinking niche, regular computing will suffice.

PLZ TO BE USING CONSISTENCY, KTHXBAI! ^^
 
I don't think there will ever be a G5 PowerBook.

Maybe Apple should just forget the G5 PowerBook - by the time they've figured out how to fit one into such a small casing, the G5 technology will be old and even slower than the competition. :(
 
MacSA: Hear, hear, well spoken, Bruce.

You've hit the nail on the head, sir. Perhaps they'll build a G6 before they can fit the G5 in the PowerBook. Or maybe they'll build a special G5a, or something, that resembles the G5 but isn't one. They're stuck on MHz and size at the same time. It just seems that the PowerBook and the G5 were not meant to be. :(

It's like a guy who really likes a girl who doesn't like him and never will. Sooner or later he'd better get on with his life and find a new girl, before life passes him by and he's dead. :eek:
 
thatwendigo said:
The current 970 system design uses the U3 system controller as a northbridge, which mediates the connection between the RAM and the processor. They'd need the newer, higher-pin sockets for DDR2 (which would block traditional 184-pin DDR memory), the wider data paths between the sockets and the chip, and a system controller that could access and use the new banks. It's not a huge engineering feat, but it's more than just changing the place you plug your RAM sticks in.

Okay. Guess this isn't far away, but we won't see it in just a little speed bump. Maybe in the next bigger revision then?

thatwendigo said:
I'm not sure we'll see the 3.0 GHz chips before we see the 970MP, actually. There's a huge push amongst all the chipmakers to be the first to market with a dual-core design, and IBM's already on the bandwagon. A dual-core 2.0-2.5 GHz machine could very well perform better than a dual-processor 3.0 GHz machine would.

It all comes down to the implementation.

Oh yes, forgot the dual-cores...the 970MP thread was quite interesting, I remember. Waiting for dual-core "consumer-processors" (desktop, not server)...

thatwendigo said:
Yes, it is. :D

Cool. :)
 
G4-power said:
Okay. Guess this isn't far away, but we won't see it in just a little speed bump. Maybe in the next bigger revision then?

It all depends, really. This requires a major motherboard redesign because the system controller will have to change, the data paths for the memory have to be widened (DDR2 uses lower frequency, parallel paths and routines to accomplish the same thing), and that might mean moving things around somewhat on the internal positioning.

One advantage DDR2 has is that it's more scalable in frequency, cooler, and generally workable with modern architectures and their fast memory architectures. The downside is that it uses a different process to manufacture and costs more for the same chips.

Oh yes, forgot the dual-cores...the 970MP thread was quite interesting, I remember. Waiting for dual-core "consumer-processors" (desktop, not server)...

I think there could very well be a good argument made for making the e600 the G5-M or something similar, using the MPC8641D as the mobile processor and a future derivative of the 970 (or the POWER5) as the desktop chip. In one space, you try to balance heat and performance, and in the other you merely crank for the highest output possible.

There's nothing wrong with modifying a high-powered server processor to put in a professional and consumer machine, but expecting it to also fit in a laptop is a bit much.
 
thatwendigo said:
It all depends, really. This requires a major motherboard redesign because the system controller will have to change, the data paths for the memory have to be widened (DDR2 uses lower frequency, parallel paths and routines to accomplish the same thing), and that might mean moving things around somewhat on the internal positioning.

Yes, of course. And it would be okay to wait until PowerMac G6. Right?

thatwendigo said:
One advantage DDR2 has is that it's more scalable in frequency, cooler, and generally workable with modern architectures and their fast memory architectures. The downside is that it uses a different process to manufacture and costs more for the same chips.

Yeah, I guessed these wouldn't be as cheap; of course, when they get more common the price will drop.

thatwendigo said:
I think there could very well be a good argument made for making the e600 the G5-M or something similar, using the MPC8641D as the mobile processor and a future derivative of the 970 (or the POWER5) as the desktop chip. In one space, you try to balance heat and performance, and in the other you merely crank for the highest output possible.

The e600 as G5-M is a good idea. The only downside of the naming is that I, at least, think of the G5 as "the 64-bit chip". But otherwise a good idea.

thatwendigo said:
There's nothing wrong with modifying a high-powered server processor to put in a professional and consumer machine, but expecting it to also fit in a laptop is a bit much.

Well, that's not exactly what I meant. I mean that we have yet to see a desktop (i.e. PowerPC or normal x86) dual-core chip. Not POWER or Itanium 2 (if the Itanium 2 even is dual-core, no idea).
True, putting a straight POWER derivative into a laptop - no way.
 
guylafleur said:
You've hit the nail on the head, sir. Perhaps they'll build a G6 before they can fit the G5 in the PowerBook. Or maybe they'll build a special G5a, or something, that resembles the G5 but isn't one. They're stuck on MHz and size at the same time. It just seems that the PowerBook and the G5 were not meant to be. :(

It's like a guy who really likes a girl who doesn't like him and never will. Sooner or later he'd better get on with his life and find a new girl, before life passes him by and he's dead. :eek:

I still have faith that Apple will overcome the technical obstacles. It isn't over until Apple admits that it can't. They definitely have made an advance with the G5 iMac. Hopefully we will have an answer by MWSF.
 
thatwendigo said:
Given that IBM had such horrible issues at 90nm and missed their target clocks because of them, I find it unlikely - at best - that we'll see another process shrink in the near future. In case you didn't bother to do much reading around the time that the G5 was hovering at 2.0 GHz, IBM and other companies were saying how there were unexpected leakage and crosstalk issues with the smaller parts. These were so bad that they're all worried about the next move, because it's likely to be exponentially worse than this most recent one, due to the ever-shrinking logic gates and their more tightly packed locations.

Freescale is going ahead with their research on it at the Crolles2 plant, and you can bet that AMD, Intel, and IBM are on the same trail. That doesn't mean that it'll happen, though, just like the Intel P-8 core turned into a heat monster.

I thought I heard here that IBM was working on both the 65nm and the 90nm at the same time with 2 teams.
What's 25 nanometers among friends!

I believe IBM, of all people, can do it.
But I'd like Freescale to come back from the dead also,
just as long as I get my dual-core PowerBook.
Note: AMD and Intel aren't talking about a dual-core chip for a laptop!
 
MikeBike said:
I thought I heard here that IBM was working on both the 65nm and the 90nm at the same time with 2 teams.
What's 25 nanometers among friends!

Obviously, you've not read up much on how badly the transition to 90nm has hurt the scalability of chips. Let me try to put this simply... The way that a processor works is by putting electrons through things called gates, which are tiny channels that allow conductivity through the chip. Each gate has a "state" that's basically binary, in that it can be on or off. The problem with process shrinks is that they put gates closer and closer together, while also making them smaller, and that leads to leakage of electrons and interference from the neighbors. This is called crosstalk, and it can really, really screw with the stability and functionality of a processor. There's also a massive addition of heat as the electrons move through a smaller space, leaving less surface area to transfer it away through a heatsink.

Shifting from 130nm to 90nm was abysmally hard for everyone in the industry, and another jump is likely to be exponentially harder without some major technological innovation.

Note: AMD and Intel aren't talking about a dual-core chip for a laptop!

Wrong.

Intel announces official plans for Yonah, a 65nm dual-core mobile processor.

Before Yonah, 90nm Merom is supposed to show, possibly with more than one core

AMD is only talking about the server and workstation market, with 90nm Opteron dual-cores coming in the near future, but it wouldn't surprise me at all if they were working on a low-power Athlon 64 and Athlon 64-M dual core.
 
thatwendigo said:
Shifting from 130nm to 90nm was abysmally hard for everyone in the industry, and another jump is likely to be exponentially harder without some major technological innovation.

This major technological innovation might be just behind the door. I read about it at our local library; it was in a Finnish computer magazine (Finns are quite big on technology, don't mean to brag, but Linux is made by a Finn, and Nokia phones started in Finland). Anyway, it had an article about some new laser technology that would allow for a 38-nanometer process. There was some comparison with the current 90 nm process, and the new technology was much simpler and more accurate.

They couldn't give any accurate dates for when this new technology will be usable - probably 2006 or later.

That's a pretty interesting thing, but I think we'll be seeing the 65 nm process way before that.
 
G4-power said:
This major technological innovation might be just behind the door. I read about it at our local library; it was in a Finnish computer magazine (Finns are quite big on technology, don't mean to brag, but Linux is made by a Finn, and Nokia phones started in Finland). Anyway, it had an article about some new laser technology that would allow for a 38-nanometer process. There was some comparison with the current 90 nm process, and the new technology was much simpler and more accurate.

They couldn't give any accurate dates for when this new technology will be usable - probably 2006 or later.

That's a pretty interesting thing, but I think we'll be seeing the 65 nm process way before that.

For a good overview of die shrinks and the physics behind them, refer to: http://www.sudhian.com/showdocs.cfm?aid=610&pid=2294

I'll also quote one important part for you, to save you some time:

Anatomy of a Die Shrink:

When we refer to a CPU as “90nm (.09 micron)”, we’re referring to the space between CPU traces. To put this in perspective, human hair varies between 40 microns and 120 microns in thickness. To calculate how much of a difference this is from existing 130 nm, we take the square of each, arriving at a shrinkage of 47%. In other words, moving from 130nm to 90nm more than halves the size of the gap between CPU traces.

The reason this does not translate linearly to a direct CPU die shrink is because not all components in a CPU shrink by this amount. In this case, Winchester’s approximately 84 mm sq die is roughly 61% the size of Newcastle’s 144 mm sq die. This is obviously still a significant reduction.

Shrinking the gap between the CPU trace lengths, however, has several effects.

Increased Thermal Leakage: As the gaps between the CPU traces shrink, the amount of current that ‘leaks’ out of the transistors increases. This translates into heat, which translates into a hotter-running CPU.

Increased Thermal Density: This is a key factor that can’t be overlooked. As the surface area of a CPU shrinks, the amount of heat that has to radiate out of that area does not. This means that, all else being equal, a 130nm CPU has a lower thermal density than a 90nm CPU. It also means that a smaller chip runs hotter—all else being equal. Up until now, all else hasn’t been equal, which is why we’ve seen the improvements that we have.

Decreased Operating Voltage: Die shrinks typically allow for lower operating voltages because less power is required to bridge the smaller gap. Note, however, that thermal leakage can work against this—if power is leaking out of the transistor, obviously voltage can’t be lowered by as much as if the transistor leaked less.

The answer (in very broad terms) to why we haven’t seen the 90nm problem before now is because the positive effect of being able to lower operating voltages has outweighed the negative effects of increased thermal leakage and thermal density. Other technology upgrades (such as improved substrate technologies) have also helped. The question is, have they helped enough?

SOI and Efficiency vs. Netburst vs. Physics:

For all that reviewers have compared them to each other, AMD and Intel both are finding their approaches to computing tested by a third foe more implacable than either corporation could ever be—the laws of physics. Unfortunately for both companies, Physics is not impressed by marketing terms, does not care about full color ads, and is uninterested in either corporation’s bottom line. The question, in this case, is whether or not AMD’s decision to bet both on a more-efficient approach to computing as well as IBM’s SOI technology has paid off.
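The die-shrink arithmetic quoted above is easy to sanity-check. Here's a quick sketch in Python (the die sizes are the figures from the quoted article; the "ideal" number assumes everything scales with the square of the feature size, which real chips don't):

```python
# Sanity check of the die-shrink arithmetic from the quoted article.
# Ideal 2-D scaling: area scales with the square of the feature size.
old_nm, new_nm = 130, 90
ideal_ratio = (new_nm / old_nm) ** 2
print(f"Ideal scaled area: {ideal_ratio:.1%} of the 130nm original")

# Real dies shrink less than ideally, since not every structure scales.
# Figures from the article: Winchester (90nm) vs. Newcastle (130nm).
winchester_mm2, newcastle_mm2 = 84, 144
actual_ratio = winchester_mm2 / newcastle_mm2
print(f"Actual die-size ratio: {actual_ratio:.1%}")
```

The ideal figure comes out around 48%, and the measured ratio closer to 58% (the article rounds it to roughly 61%); either way the point stands that real dies shrink less than the square law predicts.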
 
G4-power said:
(Finns are quite big on technology, don't mean to brag, but Linux is made by a Finn, and Nokia phones started in Finland).

I wouldn't really count Nokia as an achievement, if I were you. As far as Linux goes, I have to give more than a little credit to Torvalds, but he was standing on the shoulders of giants. Much of the ideation and technique in the kernel comes from decades of Unix development, even if the code was written from scratch later on. Thousands of people contribute to the codebase, worldwide, so it can hardly be called a Finnish project anymore, if it ever could have been.

I'm not downing your country, but some perspective ought to be held. ;)


Anyway, it had an article about some new laser technology that would allow for a 38-nanometer process. There was some comparison with the current 90 nm process, and the new technology was much simpler and more accurate.

Sounds like a new lithography technique to me. Is it something like this?


That's a pretty interesting thing, but I think we'll be seeing the 65 nm process way before that.

We'll see.
 
thatwendigo said:
I wouldn't really count Nokia as an achievement, if I were you. As far as Linux goes, I have to give more than a little credit to Torvalds, but he was standing on the shoulders of giants. Much of the ideation and technique in the kernel comes from decades of Unix development, even if the code was written from scratch later on. Thousands of people contribute to the codebase, worldwide, so it can hardly be called a Finnish project anymore, if it ever could have been.

I'm not downing your country, but some perspective ought to be held. ;)

Yeah, Nokia nowadays is like a Japanese or Chinese company (just the headquarters are in Finland), but in the early nineties Nokia phones that were designed (and even manufactured) in Finland were an achievement. And the user-friendly "operating system" was great too, before they started to put too much stuff in the phones.
I know, Linus Torvalds wrote Linux with UNIX and Minix in mind, but it's quite nice to see it be accepted worldwide. Today, most Linux development is done outside of Finland, true.

thatwendigo said:
Sounds like a new lithography technique to me. Is it something like this?
If I'm not completely suffering from Alzheimer's, I'd say that's it.
 
12" Powerbook G4

What do you guys think about getting a 12" PowerBook G4 for 1400, or should I wait, or even get the 12" iBook? Can you guys please help me out (give me some peace of mind that I'm not going to buy something already outdated)?

Much thanks
 
guylafleur said:
Any of you folks want to tackle this small, but pleasant conundrum of mine?

I'm starting my grad thesis at the end of October, which is just about the time the new iBooks should be coming out. I'm no big power user or creativity maven, but I've used my friend's Mac, just fooling around, and grew to like it so much I decided the laptop I need is going to be a Mac.
I'll be traveling around only enough to justify my purchase while doing the thesis, and thought the 12" iBook would be just the purrrfect excuse. :cool:

Now the conundrum: the low-power, high-speed G4 outlined in this rumor would be sweeeet, but I'd have to wait at least 6 months, maybe a year, for it. Do I dare wait???? :confused:

I have a Dell P4 1.7 desktop, and it works OK - I've only had to reformat the hard drive 3 times in the past 2 years because of viruses and bad drivers... Just installed XP SP2 to find it causing some nice freeze-ups, but I can get by. Saving my thesis to diskette and to the now multi-megabyte free mail at Myway and Yahoo is my plan of attack in case things go wrong... and you just know they will. :D

Wait or buy? Bueller? Bueller? Anyone?
My advice-- wait as long as you possibly can, but no longer... Sounds trite, I know, but the thing is that if you can get by with what you currently have, the technology will continue to improve for the price, and you've got the money in the meantime. If you wait too long, you'll have been suffering for little real gain.

I would absolutely advise against scheduling a purchase around rumors like this-- Apple hasn't even indicated they'll be using the chip, let alone when. If they did say they'd use it there's still no guarantee they'll hit their target. Look at all the folks who were thinking there'd be dual 3GHz machines in Sept.
 
G4-power said:
One thing considering DDR2. Apple is probably not supporting DDR2 in the PB's before PowerMac's. Any idea how big changes must they do to the PowerMac architecture to get DDR2 in it?
I wouldn't be so sure of that statement-- while DDR2 does help boost memory bandwidth a bit, the biggest advantage I see is the reduction in power consumption, which is significant. Add to that that DDR2 is going to be more expensive, at least to start, and that laptops can usually support higher-priced parts, and I think it is likely that the PBs will get it first.

My theory when the G5 first came out was that people shouldn't watch the IBM process steps to guess when it'll move to the PowerBook, but should watch the DDR2 market. I don't think it can fit in a PB without the power reduction afforded by lower power memory.

Now that this dual-core G4 is in the mix, and looking more real, I tend to think that's the chip to watch for high-performance portables for the next couple of years. The 970 missed its window of opportunity, I think.
 
Analog Kid said:
I wouldn't be so sure of that statement-- while DDR2 does help boost memory bandwidth a bit, the biggest advantage I see is the reduction in power consumption, which is significant. Add to that that DDR2 is going to be more expensive, at least to start, and that laptops can usually support higher-priced parts, and I think it is likely that the PBs will get it first.

My theory when the G5 first came out was that people shouldn't watch the IBM process steps to guess when it'll move to the PowerBook, but should watch the DDR2 market. I don't think it can fit in a PB without the power reduction afforded by lower power memory.

Now that this dual-core G4 is in the mix, and looking more real, I tend to think that's the chip to watch for high-performance portables for the next couple of years. The 970 missed its window of opportunity, I think.

Hmm, come to think of it, I quite agree with everything you said. So, what speeds of DDR2 are currently available?
Micron's site has something about DDR2. I see that they produce 1 GB modules at DDR2-400 (200 MHz) and DDR2-533 (267 MHz).
Yep, and Crucial has, in addition to 533, DDR2-667 (333 MHz).
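For reference, those speed grades map to peak bandwidth like this - a back-of-the-envelope sketch assuming a standard 64-bit (8-byte) memory channel:

```python
# DDR2 naming: "DDR2-400" means 400 mega-transfers/s, i.e. a 200 MHz bus
# clock with two transfers per cycle. Peak bandwidth for one 64-bit
# channel = transfers/s x 8 bytes per transfer.
grades = [(400, 200), (533, 267), (667, 333)]  # (rated MT/s, bus MHz)
for mt_s, bus_mhz in grades:
    bandwidth_mb = mt_s * 8  # MB/s peak
    print(f"DDR2-{mt_s}: {bus_mhz} MHz bus, ~{bandwidth_mb} MB/s peak")
```

The module marketing names round these figures: DDR2-400 is sold as PC2-3200, DDR2-533 as PC2-4200, and DDR2-667 as PC2-5300.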
 
brgionta said:
What do you guys think about getting a 12" PowerBook G4 for 1400, or should I wait, or even get the 12" iBook? Can you guys please help me out (give me some peace of mind that I'm not going to buy something already outdated)?

Much thanks

I just bought a 12" PowerBook for the lady. It was the 1.33 GHz version with Combo drive for 1399 (educational discount). According to Apple's news releases, there is not going to be a G5 PowerBook this year. The G5 PowerBook would be the next logical extension of the G4 currently in the PowerBook line. Yes, there are rumors about a dual-core G4 processor that might be able to fit into the PowerBook line, but I haven't seen Apple make any statements about this being implemented in future products. While there is always uncertainty, it is also a fact that the product refresh cycle is nearly at its end, meaning a product update is about to happen soon. This may be something minor, such as updating the 12" G4 PowerBook to 1.5 GHz, adding a larger HDD, and reducing the price by a hundred bucks. I'm confident it is not going to be a G5 that's added to the PowerBook line in the next product refresh cycle.

Hypothetically speaking, if the next product refresh cycle happens in the next 30 days (which would be an extrapolation from the past), the G5 is very likely not to be added, so the next scheduled refresh would be sometime in March 2005, which could then bring the G5 to the PowerBooks.

Given that you're buying today, you'll likely have a fairly up-to-date product for the next 6 months. I bought mine hoping there wouldn't be a G5 before March 2005, so I'm up to date for 6 months. I didn't care whether the G4 PowerBook gets updated in the meantime. I mean, imagine buying a Windows PC that's not outdated in the next 6 months - that's pretty unusual.

However, I think I gave you a good overview of what may or may not happen, and I hope you'll feel more comfortable making a decision on when and what to purchase.
:)
 
Geeze

You know, not to be rude (always the sign of someone about to let loose), but I wish people would quit whining about G5 PowerBooks. Do you know how long it took Apple to jam a G4 into a PowerBook after the G4 PowerMac release?

Two years! Yes, folks, that's two years. Based on that, we'd be looking at a WWDC release, or maybe a Paris Expo release.

Laptops are about compromise. What you gain in portability you lose in features and performance. Those are the facts of life.

The G5 processor simply runs too hot and is too big for a PowerBook right now. Yes, there are Athlon 64-Ms out, but they're all hulking huge. Apple isn't in the business of creating such laptops; they rarely appeal to any group but gamers, or people who want a computer that can be moved around the house easily. People rarely take laptops like that further away than their garden.
I've got news for you: Apple has a computer like that, it's called the iMac.

Which brings me on to another point: yes, the iMac is a consumer machine, but it's a consumer desktop. The fact is that the iMac will more than likely be someone's primary machine. It's going to be the machine they use to do everything with. PowerBooks will likely be a secondary machine - to a PowerMac. So in all likelihood, if there's something that your PowerBook can't do, you'll have a PowerMac set up at camp ready to do it instead. No big deal.

Yes, OK, people video edit, Photoshop, etc. on the road sometimes. A PowerBook is particularly useful for bands on tour because they can edit mixes and so on, but the PowerBook pretty much gives the power needed to do all this anyway. If it doesn't, I suspect most professionals would choose smaller size and longer battery life over a 1.6 GHz G5, which is probably the fastest you'd be able to get out of the theoretically super-speedy PowerBook G5 that we seem to so want to part with our cash for.

Yes, I want a G5 in a PowerBook - who doesn't? But let's be realistic here. The G4 is not the world's fastest chip. It's not gonna set any records, but it will get you by happily enough. The 1.5 GHz G4 chugs at a decent pace at everything I've seen thrown at it, and I bet it still will in a few years too; unlike other Wintel machines, I've noted that Macs tend to still feel decently fast for longer on in their life. I'd say that most people probably need a new Wintel every 2-3 years, and a new Mac every 3-4, maybe 5 if they push the boat out. Look on eBay and the early PowerMacs are only just starting to be thrown out en masse. Those are a good four years old.

Wait, be patient - yes, it's taken a while. These things do. But the fact that Apple has managed to get G5s into an Xserve and then an iMac shows progress is being made. And note: there's a new G5 on the horizon. 3.5 GHz, dual core, 65 nm - the 970MP is giving me wet dreams. It's speedy as hell, yet at lower clocks it should be perfect for PowerBook operation. If IBM can deliver on its promises on this one, we may be in for a fun ride.

Until that time arrives, these new Freescale chips should be enough for most PowerBook users' needs.
 
CTerry said:
Which brings me on to another point: yes, the iMac is a consumer machine, but it's a consumer desktop. The fact is that the iMac will more than likely be someone's primary machine. It's going to be the machine they use to do everything with. PowerBooks will likely be a secondary machine - to a PowerMac. So in all likelihood, if there's something that your PowerBook can't do, you'll have a PowerMac set up at camp ready to do it instead. No big deal.

OK, but then there are those people who only have money for one computer, and who need both speed and portability. 'Books give portability; iMacs and PowerMacs give speed. PBs aren't only bought to be a secondary machine, but a portable workhorse. Those who don't need power get an iBook. So nobody should say that nobody needs powerful laptops.
Still, I don't mean that the current PBs aren't fast; they just aren't wicked fast.
 
Nice to see those dual-G4s!!

Well, I for one am hoping to see the next update (anywhere from now till the end of November, probably) with a dual-core G4 lineup - that would be great! As far as getting the G5 into the PBs, I think realistically that won't happen until at least this time next year. There are numerous heat and power-consumption problems with squeezing a G5 into a small form factor, and it is doubtful that Apple will want to come out with something that is sub-par. Besides, if the dual-core G4s get into the PBs, then we're talking a much faster system bus (a REAL requirement for PBs) on a chip that can also run at higher frequency and with less power. That sounds like it is meant for portability to me. The G5 can wait as long as needed until it is done right in the laptop.
 
CTerry said:
Yes, I want a G5 in a PowerBook - who doesn't? But let's be realistic here. The G4 is not the world's fastest chip. It's not gonna set any records, but it will get you by happily enough.

Actually, the MPC74xx core is quite a bit more efficient than the IBM PowerPC 970 and its derivatives, has much better power and thermal characteristics, and hangs in competitively despite architectural advances applied to the newer motherboards. Even without Serial ATA, full access to DDR channels, AGP 8x, the huge frontside bus, and other advantages that the PowerMac G5 has, you'll find that the top-of-the-line PowerMacs of the previous generation hold their own against G5s that outclock them - to a roughly linear level. In several benchmarks, the 1.5 GHz MPC7447A beats the single 1.6 and 1.8 GHz 970 in the iMac.

The MPC7448 doesn't change much of that, though it does raise the bus to 200 MHz, increases the clock (theoretically to 1.8 GHz), and keeps power consumption down to a relatively tiny 10-12 watts. For comparison, the Pentium M's typical consumption is twice that at the same clock, requiring a step down to the 1.4 GHz 90nm part to achieve the same wattage. The 970 doesn't even come close, except possibly in low-power sleep.

Wait, be patient - yes, it's taken a while. These things do. But the fact that Apple has managed to get G5s into an Xserve and then an iMac shows progress is being made. And note: there's a new G5 on the horizon. 3.5 GHz, dual core, 65 nm - the 970MP is giving me wet dreams.

If they can't hit 3.0 GHz on the 90nm die, there's not much chance they're going to crack the 3.5 GHz point at 65nm, and that's assuming that the crosstalk and interference issues don't get even worse. Look at how badly the die shrink hit everyone in the industry, and you'll see how naive it is to claim that there's some magical leap coming just around the corner. The reason for dual-core designs is that manufacturers are realizing it's a way to make faster chips without traditional techniques, sidestepping the problems of shrinking the overall process.

There are other issues that belong to the dual-core philosophy, as well, like cache coherency, threading, and other things that will have to be sorted out rather than left alone. These all stand in the way, but they might just be more solvable than the laws of physics.

It's speedy as hell, yet at lower clocks it should be perfect for PowerBook operation. If IBM can deliver on its promises on this one, we may be in for a fun ride.

Could you provide a source, preferably other than ThinkSecret? Everything I'm showing on a search goes back to their rumor, rather than any kind of announcement or confirmation of a dual-core 970, let alone anything about yet another process shrink or other features.

Until that time arrives, these new Freescale chips should be enough for most PowerBook users' needs.

Unless the 970 magically starts performing better at lower clocks, the Freescale chip will slaughter it in the portable market. Let me reiterate that on-die memory control is part of what makes the Opteron so fast, and they'll have DDR2, along with numerous SoC enhancements that lower system latency. There will be no FSB to limit the G4 anymore, thanks to there being no northbridge, and the RapidIO framework is pretty nice.

Then there's the superior Freescale implementation of AltiVec, the use of more cache per processor than the 970 (less need to fetch from memory), the fact that AltiVec will come in dual units...
 