Firstly, qubex, let me be among the ones to welcome you. It's always nice to have well-spoken people about, especially when they seem to have some good thought behind their posts. I'm sure that we'll butt heads over something eventually, but I thought I'd at least be pleasant to you at first.
That being said, I'm going to reply to you the way I normally do, by dissecting and taking points out to highlight them.
[It will, at the very least, offer "perfectly decent" performance for the next 12 months or so - though I am equally aware that I'll experience that awful sinking feeling if the G5 PowerBooks are released before June 2005.]
Why?
Does the release of a new machine make your purchase any slower in a real sense, or is it just the admitted technolust that you spoke of? My guess is that the PowerBook is at least adequate for your needs, and that without a jump to a new processor, the PowerBooks will not be significantly faster in the near future. If you'd followed both the 970/970fx and the competition from Freescale (née Motorola), you'd know that even the 90nm version of the newer processor runs quite hot compared to the G4. The chip alone draws 25-30 W at 2.0 GHz, and it's still hotter than an equally clocked G4 when you ramp it down, without providing significant performance gains, all while drawing far more power off the battery. On top of that, there are other significant draws on the limited power supply - the FSB and RAM being two of them, and the need for faster drives being another.
A G5 PowerBook will be a heat and battery monster, one that will be laughed out of the professional purchasing arena because current-generation Centrinos are already higher clocked and proven performers, while also allowing a battery life that really does reach up to 6 hours. That's mostly because the chip is designed as a small-form-factor processor that scales far, far down when not in use. It's interesting to note that the two are roughly comparable at full load, right around the 30 W mark, but that the Centrino still has better power management and can thus be more conservation-minded when it needs to be.
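To put rough numbers on the battery side of it, here's a quick back-of-envelope sketch. The wattage figures are just the ballpark ones above; the 50 Wh battery capacity and the average-draw scenarios are my own assumptions for illustration, not measured values:

```python
# Back-of-envelope battery-life estimate. The wattage figures are the rough
# numbers quoted above; the 50 Wh capacity and the average-draw scenarios
# are illustrative assumptions, not measured values.

BATTERY_WH = 50.0  # assumed PowerBook-class battery capacity, in watt-hours

def runtime_hours(avg_system_draw_watts):
    """Estimated runtime = capacity / average draw."""
    return BATTERY_WH / avg_system_draw_watts

# A chip that can throttle far down spends most of its time near idle,
# so the average draw, not the peak, is what decides battery life.
scenarios = {
    "G5-class, weak throttling (avg ~25 W system draw)": 25.0,
    "Centrino-class, aggressive throttling (avg ~8 W system draw)": 8.0,
}

for label, watts in scenarios.items():
    print("%s: ~%.1f hours" % (label, runtime_hours(watts)))
```

The point being that it's the average draw under throttling, not the peak, that decides whether you get two hours or six.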
I'd also like to chirp in on a few technical issues. Firstly, the comparison between Xserves and PowerBooks is not worth the effort. The system architecture, manufacturing process, and intended objectives are totally different. As noted, the case volumes are wildly divergent (somebody calculated the Xserve has about 5 times the internal volume of the 17" PowerBook, and that sounds about right), plus the tight packing of notebook components (hard drive, battery, and optical drive all nearby) presents formidable problems that quite simply have no equal in the 1U server market. Saying it should be possible to engineer a G5 PowerBook simply because there exists a dual G5 Xserve is tantamount to stating that one should be able to easily transport a six-seat sofa in a Ferrari sports car: it simply violates all notions of common sense and geometry. Not so much because of any single constraint, but because of all the constraints operating simultaneously.
Amen and thank you, good sir. You put it better than I did, though I fear that it's too little and too late, even if those who cry out for the G5 PowerBook were open to listening to dissent.
Equally I do not consider a dual-G4 PowerBook to be realistic.
I'm curious why not, in this case. Two MPC7447A chips output less heat at peak than either one Centrino or one G5 (11 W each, for a total of 22 W) and don't require the 400 MHz or 800+ MHz FSB of the other systems. In addition, there's no need for PC3200 RAM to keep the pipe fed, nor an absolute requirement for 7200 RPM laptop drives to feed the bus. The heat budget is far more reasonable.
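For what it's worth, the arithmetic on that heat budget is simple enough to sketch out. The per-chip figures are the rough peak numbers quoted in this thread, not datasheet values:

```python
# Adding up the peak figures quoted above. Per-chip numbers are the thread's
# rough estimates, not datasheet values.
watts_per_7447a = 11        # MPC7447A peak, per the figure above
dual_g4_peak = 2 * watts_per_7447a
single_g5_peak = 30         # rough 90nm 970fx figure at 2.0 GHz, from earlier

print("Dual 7447A peak: ", dual_g4_peak, "W")    # 22 W
print("Single G5 peak:  ", single_g5_peak, "W")  # ~30 W
print("Headroom in the same thermal budget:", single_g5_peak - dual_g4_peak, "W")
```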
Secondly, I think most of the posters grossly overestimate the performance hike they could expect from a 1.6 GHz G5 as opposed to a 1.5 GHz G4. As long as most software remains 32-bit, the performance boost would be negligible.
This is borne out in the Final Cut Pro benchmarks that have been thrown around this board to demonstrate that exact point. Low-clocked G5s perform at rough clock-for-clock parity with the G4 on anything that doesn't lean explicitly on the AGP 8x graphics bus, the extra memory bandwidth, or 64-bit integer math. As such, most consumer tasks won't see much lift, if any at all.
Of course, short of finding a new higher-density RAM module format, no PowerBook G5 for the foreseeable future will be able to stow away 4 GBytes, much less 8 GBytes of RAM, making the issue rather moot.
This doesn't seem to matter to them. Someone on the iMac thread is hoping for four RAM slots, and it didn't seem to faze anyone on this thread when I pointed out that 64-bit addressing does nothing at the moment if you don't have more than 4 GB of RAM for it to address.
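If anyone wants the arithmetic behind that 4 GB figure, it's just the 32-bit address-space ceiling:

```python
# The 32-bit ceiling in one line: 2**32 bytes is exactly 4 GiB, so extra
# address bits buy you nothing until installed RAM actually exceeds that.
addressable_bytes_32bit = 2 ** 32
print(addressable_bytes_32bit / 2 ** 30, "GiB")  # -> 4.0 GiB
```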
Granted, the FSB in current PowerBook offerings is less than stellar, and it could be - nay, should be - improved; one does not necessarily require a G5 to do so.
The Freescale e600 G4 successor is a 2.0 GHz dual-core design with an on-die DDR memory controller, a 400 MHz FSB, a larger L2 cache, and (rumored) double-precision 128-bit AltiVec units. It puts out 22 watts and gives you two cores in the space of one processor.
I'd much rather see that in a PowerBook than a crippled G5.
Rather than hope that the next iteration of OS X is fully 64-bit, I'd much rather hope that they maintain the 32-bit/64-bit hybrid nature but begin compiling the performance-sensitive components (such as the XNU kernel, the Aqua interface, and the Quartz Extreme rendering engine) with a compiler that "sucks less" than GNU's much over-hyped GCC 3.3 series. Ideally, OS X 10.4 would be a 32-bit/64-bit hybrid compiled with IBM's excellent and highly optimised PowerPC compiler, XL C/C++. That alone would greatly enhance the end user's experience and the responsiveness of the system when under severe loads.
I'm fully in agreement on this as well. Reports have been coming in about the performance gains from using IBM's Fortran and C compilers, and some are claiming anywhere between a 40% and 200% increase from more efficient use of the processor. If you could gain 40% real-world performance merely by switching compilers, well... it seems like a no-brainer to me. Apple needs to shift over to XL C/C++ and make it available to their developers as part of the tools. It would make a decent use for that war chest they're sitting on, if they're not buying Macromedia, Adobe, or Sun anytime soon.
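To make that 40% figure concrete, here's what it would mean in wall-clock terms. The 60-minute baseline is an arbitrary illustrative number, not a benchmark result:

```python
# What a "40% faster" compiler would mean in wall-clock terms. The 60-minute
# baseline is an arbitrary illustrative figure, not a benchmark result.
baseline_minutes = 60.0
speedup = 1.40  # the low end of the claimed 40-200% range

print("%.0f minutes after recompiling" % (baseline_minutes / speedup))  # ~43
```

Shaving a render from an hour down to roughly 43 minutes, with no new hardware, is exactly the kind of free lunch Apple shouldn't pass up.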