
jettredmont

macrumors 68030
Jul 25, 2002
2,731
328
Re: Re: Re: Re: 25 GHz?

Originally posted by -hh
Well, you can put me down on the list of "lesser" PowerMacs too.


I don't need the pricetag of a DP 2GHz, so I'm debating between a 1.6 and a 1.8GHz.

In running some numbers, it costs $275 to bring the $1999 1.6GHz G5 up to be "equal" with the 1.8GHz G5 for the RAM & Hard Drive. This means that if you want those options anyway, the price premium for the extra 200MHz is effectively only $150 ($2,249 vs $2,399), which is probably worth buying (an extra +12% in clock speed for only +6% cost).

However, the caveat is the usual one ... this is paying Apple's RAM prices. The effective price differential jumps from $150 to $250 when 3rd-party RAM is used to expand to 512MB.

Are you sure about this?

Note that 256MB RAM from Apple comes as 2x128 and 512MB RAM from Apple comes as 2x256 (everything's 2x for memory on the new systems because of the interleaved design). Granted, you have four memory slots, so with 2x128 you can buy two more 128 sticks to bring it up to 512MB overall, but there is a significant advantage to having 2x256 instead of 4x128 ... And I SERIOUSLY doubt you'll find 512MB of PC2700 RAM for $25 ($125 - claimed $100 savings in going to third-party RAM) unless you know of some deal that I don't see out there (Crucial is good memory, and they have 2x128 at $48 and 2x256 at $82 ... maybe you can find someone to buy your excised 2x128 to offset the cost of the 2x256 ...)

Plus, as you said, this is PC2700 RAM on the 1.6GHz and PC3200 RAM on the 1.8GHz, which pretty much accounts for the cost differential you see between the two machines, even were the processors to be the same ... I think the 1.8GHz machine is, overall, a better deal than the 1.6GHz machine (and the 2x2GHz step-up is even more appealing).

Yes, if you don't need/want the power, save your money ... but in the overall scheme of things it is fairly rare for the high-end step-ups to be as attractively priced as these are ...
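For what it's worth, the comparison above is easy to sanity-check. A quick Python sketch using only the prices quoted in-thread (2003 figures, illustrative only):

```python
# Prices as quoted in the posts above (Apple store, 2003).
price_16_upgraded = 2249   # 1.6GHz G5 with RAM/HD options matched to the 1.8
price_18 = 2399            # stock 1.8GHz G5

premium = price_18 - price_16_upgraded          # extra cost of the 1.8
clock_gain = (1.8 - 1.6) / 1.6                  # fractional clock increase
cost_gain = premium / price_16_upgraded         # fractional price increase

print(f"effective premium: ${premium}")                      # $150
print(f"clock +{clock_gain:.0%} for cost +{cost_gain:.1%}")  # +12% for +6.7%
```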
 

eric_n_dfw

macrumors 68000
Jan 2, 2002
1,517
59
DFW, TX, USA
Sorry to rant, but...

...everyone keeps talking about the PM G5's 9 fans and how:
a) The Xserve's 1U box won't be able to keep a G5 cool
b) the PB would fry your lap off with a G5
c) The extra fans are because the G5 runs so hot.

Wrong - Wrong and Wrong.

From ArsTechnica: http://www.arstechnica.com/cpu/02q2/ppc970/ppc970-1.html
As you can see from the table, the 970 at 1.8 GHz is much closer to the G4e than to the P4 2.8 GHz in terms of power dissipation. This means that Apple will be able to use this chip in the kinds of innovative enclosure designs that make their hardware continually appealing, regardless of how it performs. Furthermore, a 1U, 970-based version of the XServe is not out of the question. And if you consider the fact that the 970's power consumption at 1.2GHz is a mere 19W, it's almost certain that we'll see a future notebook from Apple based on the new chip.

I predict dual 2GHz G5 XServes very soon, as well as 1.2 (maybe 1.6) GHz G5 PBs early next year.

Will the G5 XServes be loud? Heck yes - but who cares?
 

g3ski

macrumors member
Jun 18, 2002
89
0
I guess I have to wait again

I am always waiting for the next thing. Now that I know the 9900 will be 20+GHz, I will have to wait until 2010 to upgrade

:confused:
 

BaghdadBob

macrumors 6502a
Apr 13, 2003
810
0
Gorgeous, WA
You know, when we get to these speeds we're going to see amazing things going on in entertainment. Think "mass proliferation of real time 3D" and multiple times more realistic surface rendering in movie-grade 3D -- resulting in indistinguishably real animated characters (unlike the Phantom Menace).

And "Final Fantasy the Movie VIII: More Inscrutable Flashbacks and Indiscernible Personal Problems"
 

soggywulf

macrumors 6502
May 24, 2003
319
0
Re: Re: If you build it...

Originally posted by MacManDan
Just think .. 10 years ago, how many of you could have imagined dual 2.0ghz G5 processors that can render life-like 3d images, edit movies in professional quality, etc etc?

Hmm...10 years ago DOOM came out. I think just about everyone on the planet was imagining life-like 3D in the future. Not that we're there yet with the G5, by any means. I think a lot of these ideas that you consider old-hat are not developed anywhere near their potential, or not really developed at all. A PIII robot following a ball != Terminator. :)
 

jettredmont

macrumors 68030
Jul 25, 2002
2,731
328
Re: Sorry to rant, but...

Originally posted by eric_n_dfw
...everyone keeps talking about the PM G5's 9 fans and how:
a) The Xserve's 1U box won't be able to keep a G5 cool
b) the PB would fry your lap off with a G5
c) The extra fans are because the G5 runs so hot.

Wrong - Wrong and Wrong.

You are quoting an ArsTechnica article which was based off of preliminary specs put out by IBM. We now have the real specs, and the heat dissipation picture is far worse for the 970 at 1.8GHz than previously believed.

You will not find the 970 in laptops anytime soon according to some quite reliable Apple honchos ... 'course they could be aiming to deceive, but ...
 

daveL

macrumors 68020
Jun 18, 2003
2,425
0
Montana
Re: Re: Sorry to rant, but...

Originally posted by jettredmont
You are quoting an ArsTechnica article which was based off of preliminary specs put out by IBM. We now have the real specs, and the heat dissipation picture is far worse for the 970 at 1.8GHz than previously believed.

You will not find the 970 in laptops anytime soon according to some quite reliable Apple honchos ... 'course they could be aiming to deceive, but ...
Most of the posts here seem to equate 90 nm with the 980. That's so, but there's no reason why the 970 won't move to that process, and it could very well come before the 980. It will be a lot easier for IBM to move the existing 970 to the new process than to bring out the 980. With the 980, you have a new design *and* a new process, while the 970 will be a proven design. I guess my point is that we may get a mobile 970 out of the 90 nm process. Moving to 90 nm can boost the speed and/or reduce the power, take your pick in terms of the tradeoffs. So maybe we get a 3GHz+ PM 970 and a 2+GHz PB 970, or something like that, from the 90 nm process.

Just a thought.
 

Analog Kid

macrumors G3
Mar 4, 2003
8,911
11,465
Re: I guess I have to wait again

Originally posted by g3ski
I am always waiting for the next thing. Now that I know the 9900 will be 20+GHz, I will have to wait until 2010 to upgrade

:confused:

It's worse than that. By 2010 you'll be reading rumors of 200GHz SiGe machines just around the corner and you'll have to wait until 2015 when you'll hear about...

;)
 

wizard

macrumors 68040
May 29, 2003
3,854
571
Re: Re: Re: Re: Re: 25 GHz?

Hi guys;

Another way to look at this is that the 1.6 serves the needs of people who need backwards compatibility with PCI. Thinking this way places the 1.8 at the beginning of the model line, or entry level, as far as the new architecture goes. So there is more value in the 1.8 relative to the 1.6 than first appears. This is why I consider the 1.6 to be grossly overpriced: as far as I can see, it is a machine designed to provide backward compatibility, and it really should not be priced as close as it is to the 1.8.

Dave


Originally posted by jettredmont
Are you sure about this?

Note that 256MB RAM from Apple comes as 2x128 and 512MB RAM from Apple comes as 2x256 (everything's 2x for memory on the new systems because of the interleaved design). Granted, you have four memory slots, so with 2x128 you can buy two more 128 sticks to bring it up to 512MB overall, but there is a significant advantage to having 2x256 instead of 4x128 ... And I SERIOUSLY doubt you'll find 512MB of PC2700 RAM for $25 ($125 - claimed $100 savings in going to third-party RAM) unless you know of some deal that I don't see out there (Crucial is good memory, and they have 2x128 at $48 and 2x256 at $82 ... maybe you can find someone to buy your excised 2x128 to offset the cost of the 2x256 ...)

Plus, as you said, this is PC2700 RAM on the 1.6GHz and PC3200 RAM on the 1.8GHz, which pretty much accounts for the cost differential you see between the two machines, even were the processors to be the same ... I think the 1.8GHz machine is, overall, a better deal than the 1.6GHz machine (and the 2x2GHz step-up is even more appealing).

Yes, if you don't need/want the power, save your money ... but in the overall scheme of things it is fairly rare for the high-end step-ups to be as attractively priced as these are ...
 

soggywulf

macrumors 6502
May 24, 2003
319
0
Re: Re: Re: Re: Re: Re: 25 GHz?

Originally posted by wizard
Another way to look at this is that the 1.6 serves the needs of people who need backwards compatibility with PCI.

Don't think so, Dave. The PCI-X slots in the higher machines are already backwards compatible with current PCI cards, except maybe for some ancient ones. I don't think there are many of those 5V cards in use. Perhaps really high-end stuff like Media 100, etc.? But those folks would clearly be getting new cards with their 2 gig duals (or the other way around, 2 gig duals with their new cards).

However, I am also curious as to why they decided to design a different motherboard for the 1.6. It must have cost them significant money, just for a market differentiator. Seems to me that money could have been better used for other things, like maybe dropping the price points a bit. :D
 

cgc

macrumors 6502a
May 30, 2003
718
23
Utah
Originally posted by QCassidy352
good lord, makes me not even want a g5 anymore! ;) I hope it's all true. That sounds great.

One question though- can someone explain "power 4" and "power 5" as compared to 970 and 980? The 970 is a power 4 derivative, while the 980 is a single core power 5? Is the G4 also a power 4 derivative? Is the 990 also a power 5 derivative? Thanks for the help.

The G4 is a 604e derivative and the G3 is a 603 derivative (and lacking a decent FPU, if I remember).
 

MarkCollette

macrumors 68000
Mar 6, 2003
1,559
36
Toronto, Canada
Re: Re: Re: Re: Re: Re: Re: 25 GHz?

Originally posted by soggywulf
However, I am also curious as to why they decided to design a different motherboard for the 1.6. It must have cost them some significant money, just for a market differentiator. Seems to me that money could have been better used for other things, like maybe dropping the price points a bit. :D

Maybe the 1.6 motherboard was an initial attempt, like a prototyping effort, and the 1.8 and 2.0 motherboard was a later improvement, after they had better access to DDR400 memory?
 

XForge

macrumors member
Jul 25, 2002
99
0
South Florida
Re: Re: Agreed

Originally posted by soggywulf
Did anyone read "Snow Crash"? Screw google, I want that Librarian!

For anyone who thinks we don't need more computing power, sci-fi is a good place to start.

I wanna be able to Google from inside my head like Mona Lisa Overdrive.
 

Rincewind42

macrumors 6502a
Mar 3, 2003
620
0
Orlando, FL
Originally posted by jettredmont
I disagree. A distinctive app requires more than programming prowess. It requires intelligent design, from the UI to the inner workings.

Which is all in a programmer's toolbox. Programming is more than just writing code (or putting together already-created modules). There may be others dictating what goes into your application, but it is up to programmers to make it work.

Programming bottlenecks in assembler can often buy you 10-20% performance improvement overall. If you're crunching numbers, that's awesome. But then again, if you're crunching numbers and you put the same amount of effort into improving the overall algorithms of your application at a higher level, you might find yourself gaining 50%-100% performance.

I specifically stated that writing assembly is going the way of the dodo (or rather, Latin). That doesn't mean that you don't need to be able to read it. If you want to understand what the compiler is emitting and how to get it to do better in a higher-level language, you need to know as much as you can about your architecture.

And to optimize you must know that you are using the best algorithm. If you aren't already using the best algorithm, then you are wasting your time optimizing suboptimal code.

If your app isn't about raw number-crunching, then the user is far more likely to benefit from an intuitive interface than even a 20% performance gain.

Agreed. But a fast and intuitive interface is even better.

20% sounds like a huge performance gain, and it looks great on a bar chart, but the simple fact rules performance optimizations:

Users generally do not notice less than a doubling of performance (ie, 100% gain).

Users may not notice a 20% increase in short tasks, but they notice it in long tasks. And while 20% in one place may not be all that great, 20% in a lot of places can make quite a big difference.

If Word takes 20% less time to spell check my project report, then my compiler doing a make on the latest version of my software will go that much faster. So an optimization in a short (but often repeated) task will make a long process go faster. And while it isn't noticed by the user, it's appreciated.

So, yes, when you have the most perfect algorithms and you have the most perfect UI, then it is time to move on to bottleneck-busting. It is even time to bust bottlenecks when you can't think of any more fundamental improvements. But if you buy that 20% improvement by moving everything to hand-tuned and hard-to-maintain assembly, then you've traded a vast amount of fundamental future improvements for a single immediate improvement.

No one ever said to do everything in assembly for speed - and in fact my post said the opposite. And I never said that you optimize everything - optimizing too much is just as bad as doing no optimization work at all. And optimizing too early is the root of all evil.

And you will never be done if you are waiting for the perfect algorithm & interface because the perfect interface & algorithm have something in common - the work is done before you even ask :D.

Greater hardware buys the ability to take applications to the next level. It allows developers to think in larger terms, to avoid having to worry (as much) about the niggly details. It allows the use of garbage collection schemes that work instead of pedantic allocation-deallocation rules and constraints. It allows reuse of code in ways the original developer may well have never foreseen and certainly not optimized for.

This line has been repeated ad nauseam for over 10 years. So why hasn't anything changed? Because with faster processors, users expect faster applications that do more. They don't care if you, the programmer, have to deal with pedantic allocation rules & constraints or how much code you get to reuse - they want their work done fast, now, and if they shell out money for a faster computer they expect it to go faster.

And regardless, newer computers/CPUs don't always make software go dramatically faster. Just look at the P4 when it came out - many, many benchmarks showed that for a lot of tasks the P4 was slower than a P3. Who's to say that this rumored 9900 won't run code slower than the 990 that it replaces? If you want to take full advantage of the 9900, you have to optimize for it. Sure, if you wait long enough things will just be faster, but does your customer really want to spend more money for the same speed as before?

Which brings us to the next question - what about the people running on slower machines? They want to use your application too, and they don't want to feel as if they need to buy a new computer to use it. Unoptimized programs may run fast enough on your development machine, but your development machine is often also a high-end machine. Your users will often have weaker hardware than you. If speed is only acceptable on your development machine, then it will almost certainly be unacceptable on a low-end machine.

For the record, much of this same conversation took place on one of Apple's development lists last week. That discussion concluded that optimization is a necessary step in development, not just so that things work well on older hardware, but also so that they work well on newer hardware.
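A minimal sketch of the compounding point made above (numbers are hypothetical; the only claim is the arithmetic): three separate 20% optimizations to the same task multiply, approaching the 2x "doubling" that users supposedly notice.

```python
# Hypothetical task time; each of three independent optimizations shaves 20%.
time = 100.0
for _ in range(3):
    time *= 0.8            # a 20% reduction compounds multiplicatively

print(f"{time:.1f}")       # 51.2 -> roughly a 1.95x overall speedup
```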
 

Analog Kid

macrumors G3
Mar 4, 2003
8,911
11,465
Re: Sorry to rant, but...

Maybe I'll be surprised and we'll get a G5 Powerbook tomorrow, but people sound like they're going to riot without one...

The PBG4 is a beautiful machine, and it's one of the few areas where Apple has been competitive with Wintel in performance.

Originally posted by eric_n_dfw
...everyone keeps talking about the PM G5's 9 fans and how:
a) The Xserve's 1U box won't be able to keep a G5 cool
b) the PB would fry your lap off with a G5
c) The extra fans are because the G5 runs so hot.

Wrong - Wrong and Wrong.

I predict dual 2Ghz G5 XServe's very soon as well as 1.2 (maybe 1.6) Ghz G5 PB's early next year.

Everybody keeps pointing to the power draw of the 970 and then talking about how low-power the system is...

Nobody talks about the power of driving a 64bit bus at 1GHz.

Nobody talks about the power of the off chip memory controller and system ASIC ("one of the fastest ASICs in the world").

Nobody talks about the power of a 128bit wide, double pumped, 400MHz RAM system.

No way I'm putting one of these on my lap until the CPU, the bridge ASIC and quite possibly the memory go through process shrinks.

Given my 'druthers, I'd integrate the memory controller onto the CPU too to remove the I/O power of one bus.

The next PBs are going to be based on cooler G4 systems. If the dual processor PB turns out to be anything it'll be 2 G4 cores on a single die...

That's the only thing that makes sense to me. Putting a G5 into that enclosure now would certainly make waves in the industrial design world when the unit glows a really funky red color during operation, but I don't think it's practical.

2 CPU modules would take an absurd amount of power when you could use a dual core chip.

No, I don't know anything y'all don't; I'm just making sense of what I see...

Is the G4 memory controller on chip?
 

Rincewind42

macrumors 6502a
Mar 3, 2003
620
0
Orlando, FL
Re: Re: Sorry to rant, but...

Originally posted by jettredmont
You are quoting an ArsTechnica article which was based off of preliminary specs put out by IBM. We now have the real specs, and the heat dissipation picture is far worse for the 970 at 1.8GHz than previously believed.

You will not find the 970 in laptops anytime soon according to some quite reliable Apple honchos ... 'course they could be aiming to deceive, but ...

Even after seeing the real specs, I don't think the dissipation picture is far worse. Yes, the case has 9 fans. But first off, realize that the case was designed for cooling two 970s at 2GHz. That's nearly 100 watts by itself (assuming IBM did hit their estimated mark). It is also cooling up to 8 GB of RAM, which isn't exactly cool-running either. Then there is the DVD-R drive, the two SATA hard drives (up to 250 GB each), and 3 PCI-X slots (which can provide more power and run much faster than the PCI slots in the previous generation). And finally, there are the Mirrored Drive Door G4s. You know, the ones known colloquially as 'Wind Tunnels'. I'm willing to bet that on top of the actual cooling concerns was the concern of what these machines would sound like. There are 9 fans in the case so that they are quiet (and they are). I can easily imagine concerns of these new machines being nicknamed "Hurricane" or "Tornado" G5s.

Now, as for when they make it into a PowerBook, that is totally up to Apple. I still think that if they wanted to right now, they could put a 1.2 GHz G5 into a PowerBook. They'd probably fly off the shelves and not run much hotter than the 1GHz G4s (if at all). But they could have decided that the G4 has enough life left in it to go another generation in the PowerBooks. Or they could have discovered that the system controllers would consume too much power. Or any number of other things could have caused Apple to hold off on G5 PowerBooks. But I also think that most of these reasons will evaporate when the G5 goes to a 90nm process. Unfortunately, as far as the rumor scene knows, that won't happen until early next year or so.
 

-hh

macrumors 68030
Jul 17, 2001
2,550
336
NJ Highlands, Earth
Re: Re: Re: Re: Re: 25 GHz?

Originally posted by jettredmont
Are you sure about this?

Note that 256MB RAM from Apple comes as 2x128 and 512MB RAM from Apple comes as 2x256 (everything's 2x for memory on the new systems because of the interleaved design). Granted, you have four memory slots, so with 2x128 you can buy two more 128 sticks to bring it up to 512MB overall, but there is a significant advantage to having 2x256 instead of 4x128 ...

And I SERIOUSLY doubt you'll find 512MB of PC2700 RAM for $25 ($125 - claimed $100 savings in going to third-party RAM) unless you know of some deal that I don't see out there (Crucial is good memory and they have 2x128 at $48 and 2x256 at $82 ...

Oops - my math was slightly off - I misrecalled Apple's RAM markup as $150 ... you're right, it's only $125. In any event, while Mac compatibility needs to be verified, Pricewatch currently lists:

$59 - PC2700 DDR 512MB
$27 - PC2700 DDR 256MB
$21 - PC2700 DDR 128MB

So the net savings would be $125 - (2*$21) = $83 instead.
IMO, that's enough of a savings to be worth making the effort.


...maybe you can find someone to buy your excised 2x128 to offset the cost of the 2x256 ...)

So long as there are "empty" slots (meaning either literally empty, or full but with no need for a slot to add more RAM), the old RAM is not valuable enough to be worth pulling.

And based on the above prices, I'd not bother with the 128's...the 256 is the current sweet spot ("knee in the curve"), so I'd buy two of those instead. In conjunction with leaving the 2x128's in the machine, I'd end up with 768MB total.
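A quick check of the arithmetic above (Pricewatch prices as quoted; Mac compatibility unverified, as noted):

```python
# Third-party RAM arithmetic from the post above (2003 Pricewatch prices).
apple_premium = 125            # Apple's charge to go from 256MB to 512MB
stick_128, stick_256 = 21, 27  # per-stick PC2700 prices (sticks go in pairs)

# Option A: add 2x128 to the stock 2x128 -> 512MB total
savings_a = apple_premium - 2 * stick_128
# Option B: add 2x256 alongside the stock 2x128 -> 768MB total
cost_b = 2 * stick_256
total_mb_b = 2 * 128 + 2 * 256

print(savings_a)    # 83, matching the $83 figure above
print(total_mb_b)   # 768
```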


-hh
 

eric_n_dfw

macrumors 68000
Jan 2, 2002
1,517
59
DFW, TX, USA
Re: Re: Sorry to rant, but...

Originally posted by Analog Kid
Everybody keeps pointing to the power draw of the 970 and then talking about how low-power the system is...

Nobody talks about the power of driving a 64bit bus at 1GHz.

Nobody talks about the power of the off chip memory controller and system ASIC ("one of the fastest ASICs in the world").

Nobody talks about the power of a 128bit wide, double pumped, 400MHz RAM system.

No way I'm putting one of these on my lap until the CPU, the bridge ASIC and quite possibly the memory go through process shrinks.

Given my 'druthers, I'd integrate the memory controller onto the CPU too to remove the I/O power of one bus.

The next PBs are going to be based on cooler G4 systems. If the dual processor PB turns out to be anything it'll be 2 G4 cores on a single die...

That's the only thing that makes sense to me. Putting a G5 into that enclosure now would certainly make waves in the industrial design world when the unit glows a really funky red color during operation, but I don't think it's practical.

2 CPU modules would take an absurd amount of power when you could use a dual core chip.

No I don't know anything ya'll don't, I'm just making sense of what I see...

Is the G4 memory controller on chip?
Very good points - maybe the PB G5 is farther off than I initially predicted. As for the actual wattage of the G5s, can anyone point me to an official (Apple or IBM) page that lists them? All I can find is some article that states the 2GHz G5 is a whopping 97 watts. I find it a little hard to believe the numbers would have jumped that high from IBM's original predictions for the 1.6 - but I'm no electrical engineer, so what do I know?

I still think we'll be seeing G5 XServes - just with very high RPM fans.
 

Phinius

macrumors regular
Mar 15, 2003
196
0
Los Angeles
Originally posted by stingerman
Steve Jobs promised 3 GHz within 12 Months, that allows for less than 12 months. This will have to happen with the G5 (970).

If the G5 is moved to a smaller process within a year, IBM's transition to a smaller process will still effectively be at least 9 months behind Intel's move. I'd still expect to see the G5 at 3 GHz on a smaller process size, and not at that frequency on the same process size as is being used now. IBM has already admitted that they are working on the next generation of the G5, which makes it a strong possibility that the G5 could be moved to a smaller process size within a year. After all, IBM does have a new state-of-the-art fabrication facility, and the company's next-generation process size is ready to go.

Where did you get 3.2 GHz for the G5 on the same process size? It looks like you just pulled that out of thin air or is it because the Pentium 4 is now at 3.2 GHz and you feel that IBM/Apple has to match that?

The G5 could be bumped up to higher frequencies if IBM increases the voltage, which is what Motorola did to increase the frequency of the G4 for Apple. However, going up to 3.2 GHz from the 1.6-2.0 GHz that the G5 is at now is a big jump, and would match the increases that Intel got with the current process size for the Pentium 4. Also, IBM states that the company's upcoming 970-powered blade server will go up to 2.5 GHz. So you are effectively saying that Apple will get a 970 that is 700 MHz beyond the maximum that IBM will use in that blade computer. If a jump in voltage is what is needed to get to 3.2 GHz, then that would be a 28% increase in frequency over the top-end IBM blade server speed. That would also be highly unlikely.
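For context (a standard rule of thumb, not something from this thread): CMOS dynamic power scales roughly as voltage squared times frequency, which is why the voltage bump needed to reach higher clocks is so costly in watts. A hedged sketch with purely illustrative numbers:

```python
def dynamic_power(p0, v0, f0, v1, f1):
    """Scale a baseline power p0 by the rough CMOS rule P ~ V^2 * f."""
    return p0 * (v1 / v0) ** 2 * (f1 / f0)

# Hypothetical 50 W part pushed from 2.0 to 3.2 GHz with a 10% voltage bump:
# a 60% clock increase nearly doubles the power draw.
p = dynamic_power(50, 1.3, 2.0, 1.43, 3.2)
print(f"{p:.1f} W")   # 96.8 W
```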
 

jettredmont

macrumors 68030
Jul 25, 2002
2,731
328
Discussion thread recap ('cause it spans about five pages):

Analog Kid said:
Look at how quickly amateur coders are putting out Cocoa apps. If you have more cycles to burn, you can focus less on optimizing and more on putting apps together quickly with more reusable code.

to which Rincewind42 replied:

Sure, it's great to be able to put an app together without writing more than a handful of code - but if you can do it so can thousands of others and your app really isn't all that distinctive in that sense then is it? A distinctive app requires a programmer to take full advantage of all of the tools available to him or her - including optimizing.

to which I replied (specifically at the first sentence above, not the second; I agree that optimizing is one of many tools, but not that a lack of optimization means an undistinctive app):

I disagree. A distinctive app requires more than programming prowess. It requires intelligent design, from the UI to the inner workings.

And now, back to the game ...

Originally posted by Rincewind42
And to optimize you must know that you are using the best algorithm. If you aren't already using the best algorithm, then you are wasting your time optimizing suboptimal code.

In my field, there is never "the best algorithm". There are a hundred different theoretical approaches (and more proposed monthly), and some of them may in some cases be far better than others (and in other cases far worse). Most of these approaches have never been tried in code yet.

So, I admit, this skews my thinking a bit. Optimizing a particular implementation of a particular algorithm is good, but more often than not getting the job done fast is more a matter of being flexible and allowing experimentation than honing an implementation to as sharp an edge as possible.

In other words, back to my original post: building an app out of mostly-reused code does not mean that "anyone" else could do it just as well. In some fields, yes, it obviously does (but then again, "anyone" can optimize their code too). But in fields where fundamental innovation is still possible, where the race amongst competitors is one of approach and elegance of implementation, reusing code, even though it is not necessarily 100% optimal for your approach, is the difference between the leaders and the followers.

I mean, yes, I could be getting a good 20-30% gain by using C constructs (structs and arrays instead of classes and vectors) throughout. In fact, my project has a significant portion which remains straight C because we've never bothered to go in and change it. Is the remaining C code a performance advantage? No, far from it. It is the slowest section of our application, because it is poorly understood and thus never innovated. The rest of the application has a record of 100% performance gains year-over-year. That is not from low-level optimizations; that is from devising and implementing (quickly) new approaches to the problem at hand. While this section of code started off as the fastest bit in the project, the C code rarely gets faster, and so, year by year, becomes more and more the bottleneck of the system. I know the C vs C++ speed advantage because I'm finally rewriting this chunk and, yes, a direct rewrite with no approach change does slow that code down by a good 25%. Said rewrite, however, will allow this bit to improve in the same way the rest of the app has improved over the years (and in fact we have a few ideas to try here that look like they'll make the new code just as fast as the old code was ... ideas we'd never have dreamed of implementing on the old C code because it just plain would have taken too much time).

For the record, much of this same conversation took place on one of Apple's development lists last week. That discussion concluded that optimization is a necessary step in development, not just so that things work well on older hardware, but also so that they work well on newer hardware.

Of course one has to make sure your code works on the lowest target machine. That doesn't mean you have to develop on a 400MHz G3 clunker, but it does mean you should sanity-check and monitor ongoing performance on such a machine. And, yes, optimizations have to take place (to a certain extent) to achieve this. As I said before, however, performance is not only affected by assembly-level optimizations (by which I mean looking at the generated assembly and refining your C code to generate five instructions instead of eight). It is often (in my experience, at least) more affected by higher-level optimizations. AND such high-level optimizations are more likely to work across multiple platforms than any assembly-level optimization!

My application runs well on an old G3 (reasonably so; it scales pretty linearly with processor speed and runs faster than the competition on all hardware) as well as on half-decade-old P3s, because we have used off-the-shelf modules and high-level ingenuity to quickly implement fast algorithms, not because we have unrolled loops, used pre-increments instead of post-increments, put loop variables on the stack instead of the heap, etc.

I'm not against low-level optimizing. I just don't agree that it is the key differentiating factor between applications. It can differentiate, but the absolute lack of assembly-level optimizations does not mean your application is undifferentiated.

At some point, you have to review and clean up code, rethink your low-level approaches, and run it through Sampler to get an idea of exactly where you are spending time. That doesn't mean you need to be writing your own implementation of vectors and hashtable maps. Using "stock" code does not mean you get poor performance.
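A small illustration of the high-level-vs-low-level point (not from the thread; standard library only): a structural change like swapping data structures buys far more than tuning the old loop ever could.

```python
import timeit

data = list(range(50_000))
as_set = set(data)        # the "new approach": hashed membership tests
target = 49_999           # worst case for the linear scan

linear = timeit.timeit(lambda: target in data, number=200)
hashed = timeit.timeit(lambda: target in as_set, number=200)

# The structural change wins by orders of magnitude; no amount of
# micro-tuning the scan's inner loop closes a gap like that.
print(f"list scan: {linear:.4f}s  set lookup: {hashed:.4f}s")
```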
 

jettredmont

macrumors 68030
Jul 25, 2002
2,731
328
Re: Re: Re: Sorry to rant, but...

Originally posted by eric_n_dfw
very good points - maybe the PB G5 is farther off than I initially predicted. As for the actual wattage of the G5s, can anyone point me to an official (Apple or IBM) page that lists them? All I can find is some article that states the 2GHz G5 is a whopping 97 Watts.

Numbers are hard to come by, but EE Times is fairly respectable (ie, I don't imagine they made up the 97 Watts thing ...)

http://www.eetimes.com/sys/news/OEG20030623S0092
 

bcsimac

macrumors 6502
Nov 4, 2002
275
0
Bolivar, TN
I hope that Apple and IBM can continue working out

I hope that the Apple and IBM relationship continues to work out for the both of them. I think that IBM is a better innovator than Motorola. I also think IBM has many more resources to work with than Motorola has. The problem is making sure that IBM continues to think it is profiting from helping Apple with its processor needs. I think right now... it is all good. I just wonder how long it will stay that way. Apple and IBM have parted ways before, at least twice. I would hate to see that again. I just really hope that this time they will stay in it together and not part ways, because IBM is better than Motorola if you ask me.
 

rickag

macrumors regular
Apr 9, 2001
153
0
Re: Re: Re: Re: Sorry to rant, but...

Originally posted by jettredmont
Numbers are hard to come by, but EE Times is fairly respectable (ie, I don't imagine they made up the 97 Watts thing ...)

http://www.eetimes.com/sys/news/OEG20030623S0092

No, they were confused. IBM has published at least 2 documents that contradict this, by a very large margin. Go to IBM's website, look up the 970 documentation, and see for yourself. :D
 

Rincewind42

macrumors 6502a
Mar 3, 2003
620
0
Orlando, FL
Re: Re: Re: Re: Re: Sorry to rant, but...

Originally posted by rickag
No they were confused. IBM has published at least 2 documents that contradict this, by a very large margin. Go to IBM's website, look up the 970 documentation and see for yourself.:D

Could you please post a link to these documents? I think that since everyone saw the heatsinks they assumed that IBM completely blew their power specs =p.

BTW: if you're looking at the article & the product presentation, those were pre-release specs, so they don't really count anymore :(
 

Frohickey

macrumors 6502a
Feb 27, 2003
809
0
PRK
Manufacturers blow power specs all the time.

I remember a chip that, when it came out, drew TWICE the power listed on the datasheet. After that, you revise the datasheet... if you remember. Engineers are usually not that good with documentation, and documentation is one of the first things to go when you get busy.
 