Re: Re: Hello, MOTO??

Originally posted by nixd2001


Part of Apple's price disparity is entirely of their own making - they just charge very high prices. For a smidgen over what Apple would charge me for taking a new PMG4 from 256MB to 512MB (i.e. for roughly 256MB at Apple prices) I've just bought 1GB. That's nothing to do with Moto prices (although they may well be over the odds due to lack of competition).

[A total of 1.25GB may be overkill at present though - Dervish, as my machine is called, hasn't managed to swallow more than 750MB yet - so I've nearly 512MB sitting there not helping much at present. But I'm working on it!]

All PC manufacturers charge outrageous prices for upgrading RAM etc. Try outfitting a Dell or Compaq or any other major name brand similarly to a Mac and you will see that the price difference is simply because you get much more on a basic configuration Mac.
 
Re: Re: Re: Hello, MOTO??

Originally posted by MacBandit


All PC manufacturers charge outrageous prices for upgrading RAM etc. Try outfitting a Dell or Compaq or any other major name brand similarly to a Mac and you will see that the price difference is simply because you get much more on a basic configuration Mac.

OK - never tried it, I admit. I've never bought a big-name PC, other than the Vaio laptop, and I've never upgraded the memory in that.
 
Re: Re: Re: Hello, MOTO??

Originally posted by MacBandit


All PC manufacturers charge outrageous prices for upgrading RAM etc. Try outfitting a Dell or Compaq or any other major name brand similarly to a Mac and you will see that the price difference is simply because you get much more on a basic configuration Mac.


ummm. . .

I don't know where you get your prices from, but I can get an AMD AthlonXP 2200+ for a whopping $145 and 512MB of PC2100 DDR SDRAM for $95.

An upgrade to the Mac's fastest processor costs OVER 4 TIMES AS MUCH!!! Not to mention that most Macs still use slow PC100 SDRAM.

A processor upgrade for my G3 or G4 COSTS AS MUCH AS AN ENTIRE PC.

And there's no difference in hardware level between a Mac and a typical mid-range PC. Both have DVD-RWs, lots of RAM, big HDs, Ethernet, USB, etc. Apple simply beats the PC in execution, that's all. Nice, pretty acrylic cases, mice, keyboards and speakers. All of which add $ to the price, as well as that overpriced CPU.

If Apple wants to increase marketshare, they need to be offering a BETTER DEAL than what the PC manufacturers are offering, not just the same deal.
 
Originally posted by alex_ant

The reason is obvious: Optimizing for AltiVec costs time and money. Let's assume that Product X contains two million lines of code. And let's assume that a competent programmer who earns $50,000/year is faced with the task of optimizing 50% (a million lines) of this program for AltiVec. Working at 1,000 lines per 8-hour day (a liberal estimate), it will take him over three years. ...

This is $150,000 that Company X never would have had to spend (and doesn't have to spend on anyone ELSE's processor) if Apple's CPU was not so esoteric. It sounds to me like this is much more Apple's problem than third-party developers' problem.
...
As we said, our CPU roadmap is a trade secret, but if you port to our platform, we'd be happy to provide you with an AltiVec Technical Summary PDF for free download, as thanks for being a valued Macintosh developer."

I just don't understand how the blaming of developers can be justified. If Macs actually had strong general-purpose FPUs, or if Apple would put more effort into its compiler, we wouldn't be having this problem.

Alex
For a start, I struggle to think of any program of reasonable complexity where you would optimise anywhere near 50% of the code for AltiVec. It is a very specialised unit. I mean, do you seriously believe 50% of Office could be done with AltiVec?

Secondly, the troubles faced when programming for Apple's vector unit are no different from those you face when programming for Intel's.
At the end of the day for the best performance on the PC side you still have to get in there and do a lot of the coding by hand, which is just as slow as for the Velocity Engine. The main reason for greater adoption on the PC side is the increased competition, which largely doesn't exist on the Mac side. It's an easy way to differentiate performance and that is the first thing people look at.

The only other major difference is that the compilers on the PC side are better, but I'd expect Apple to keep working on their own. Apple has a research group that does nothing but focus on higher-level methods of programming for AltiVec, or reprogram libraries to take advantage of AltiVec. For instance, with Jaguar came a whole bunch of math libraries optimised for AltiVec. All you need to do is integrate them and the code is already optimised for certain functions.
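Using them really is a one-liner. Something like this (off the top of my head - the header path and framework name are from my memory of Jaguar's vecLib, so treat them as assumptions):

[code]
/* Dot product via the AltiVec-tuned BLAS shipped in vecLib.
   Build with: cc dot.c -framework vecLib (header path from memory). */
#include <vecLib/vBLAS.h>

float dot(const float *a, const float *b, int n)
{
    /* standard CBLAS call; on a G4 it runs a vectorised code path,
       but the caller never touches a vector type */
    return cblas_sdot(n, a, 1, b, 1);
}
[/code]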

Apple is more secretive with their roadmap than most, but they do release some details to certain people. I can say their future roadmaps have AltiVec compatibility built in, though. They aren't abandoning it. Of course, you always need to look at the fine print.

I agree Apple needs to improve their FPU and add a second one, and certainly the compilers can be improved, but that's coming.
 
Originally posted by Telomar
For a start, I struggle to think of any program of reasonable complexity where you would optimise anywhere near 50% of the code for AltiVec. It is a very specialised unit. I mean, do you seriously believe 50% of Office could be done with AltiVec?

That is a good point. Because AltiVec is so specialized, it does not lend itself well to many applications, which means that many programs are doomed to running on the G4's slow integer unit and very slow FPU. Which means that any objective performance advantages the G4 has over whatever else is out there deserve an asterisk next to them.
Secondly, the troubles faced when programming for Apple's vector unit are no different from those you face when programming for Intel's.
At the end of the day for the best performance on the PC side you still have to get in there and do a lot of the coding by hand, which is just as slow as for the Velocity Engine.

My point was that extensive hand-optimization is a requirement to achieve competitive performance on the Mac, whereas great performance is practically guaranteed out-of-the-box on Intel because Intel's compiler is so good - it's able to optimize somewhat well for SSE2 and MMX automatically (better than GCC for PPC optimizes for AltiVec), and it continues to improve. Not to mention the much faster integer and FP units on P4s and Athlons.
The main reason for greater adoption on the PC side is the increased competition, which largely doesn't exist on the Mac side. It's an easy way to differentiate performance and that is the first thing people look at.

The only other major difference is the compilers on the PC side are better but I'd expect Apple to keep working on their own. Apple does have a research group that does nothing but focus on higher level methods of programming for altivec or reprograms libraries to take advantage of altivec. For instance with Jaguar came a whole bunch of math libraries optimised for altivec. All you need to do is integrate them and the code is optimised for certain functions already.

This is good, and it's what Apple needs if it wants to stay on the AltiVec path, but they need to do a better job of making those research dollars translate into benchmark points. Sure, Apple is improving - its processors are getting faster, its math libraries are getting more efficient, etc. - but the problem is that x86 is improving faster.

Alex
 
Re: Hello, MOTO??

Originally posted by KingRocky
There are plenty of companies out there who'd love to build processors/chipsets for Apple...
I'm curious: who wants to make chips for Apple? Just asking.

.. but Apple has chosen MOTO as their sole vendor.
Motorola is not a sole vendor. IBM also makes chips for Apple.
 
Sun Sparc?

What is the deal with Apple using Sun SPARC chips? A friend of mine at Sun mentioned that there are mules in Cupertino running SPARC boxes. Can someone substantiate this?
 
Grrrr.

I'm normally passive and enjoy reading most of the discussions here. I wish that Alex would follow my lead and stay quiet unless he actually has something useful to say. He's always trying to sound knowledgeable when he obviously knows next to nothing about what he chooses to sound off on.

Here we go with AltiVec...

Very little of an average application can be vectorised, and the parts that can be are often not difficult to. You can spend time finding an optimal algorithm for a particular problem to get the best gain from the vector unit, but getting a general performance increase is not difficult.

Apple has some examples covering common optimisations here: http://developer.apple.com/samplecode/Sample_Code/Devices_and_Hardware/Velocity_Engine.htm

Coding for AltiVec is as simple as including a header file <AltiVec.h> and lining up data for its vec_ C instructions. I don't profess to doing any MMX coding, but that always used to be hand-coded in assembler, with support for auto-vectorising code in some recent compilers.
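For instance, something like this (off the top of my head and untested, so treat it as a sketch - GCC spells the header in lowercase and wants -maltivec):

[code]
#include <altivec.h>

/* dst = a*b + dst, four floats per iteration. Assumes n is a
   multiple of 4 and the buffers are 16-byte aligned. */
void vmac(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        vector float va = vec_ld(0, &a[i]);   /* aligned 16-byte loads */
        vector float vb = vec_ld(0, &b[i]);
        vector float vd = vec_ld(0, &dst[i]);
        vec_st(vec_madd(va, vb, vd), 0, &dst[i]);  /* fused multiply-add */
    }
}
[/code]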

The G4 processor has a weak floating-point unit but doesn't really need anything more. Most math-intensive code can be vectorised. When it can't be, the general floating-point unit is sufficient. (I believe that the G4 actually has 2 floating-point units, but that each only deals with certain types of operations.)

The main weakness of AltiVec is its lack of support for double-precision floats. However, for precision higher than double, which processors don't normally support natively, better performance can again be achieved with AltiVec: http://developer.apple.com/hardware/ve/pdf/oct3a.pdf

On to benchmarks. They mean next to nothing, as they are abused so often - especially the SPEC benchmarks that the Intel world is so fond of quoting. First, the SPEC benchmarks do not stress the overall system, which is important if you want to use benchmarks to set expectations for real-world results. Second, the SPEC benchmarks are run as-is: the only optimisations that can take place on the code must be done by the compiler, so unless you have an auto-vectorising compiler you will get no benefit from the vector unit in that benchmark. Thus a chip with an inferior vector unit (P4) and an auto-vectorising compiler can get a better SPEC result than a chip with a superior vector unit (G4) but no auto-vectorisation, because the superior vector unit is not being used.
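To illustrate, a loop like the one below is exactly what an auto-vectorising compiler goes after. Intel's compiler can (as I understand it) turn this into SSE2 code with no source changes, while GCC on PPC emits plain scalar FPU code - so in SPEC the P4's vector unit gets used and the G4's sits idle. A rough sketch:

[code]
/* Every iteration is independent, so a compiler that knows how to
   auto-vectorise is free to process four floats at a time. */
void saxpy(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
[/code]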

If you have any doubt about which is superior, Ars Technica is a fantastic site for info on such things: http://www.arstechnica.com/cpu/1q00/simd/simd-1.html
http://arstechnica.com/cpu/01q2/p4andg4e/p4andg4e-1.html
http://www.arstechnica.com/cpu/1q00/g4vsk7/g4vsk7-1.html

And especially relevant to this:

http://www.arstechnica.com/cpu/2q99/benchmarking-1.html

Although modern compilers are good they can't know the context of the problem you are trying to solve. Optimisation by hand will always have the potential for better results.

Apple needs to add auto-vectorising support to GCC; I do not believe it does so at all right now, even in GCC 3. Apple knows it needs to improve the compiler and has in fact been hard at work doing that. GCC 3 has many improvements rolled into it from Apple - many to do with vector support, others general optimisations.

Apple should continue what they are doing now: improving the compiler in a way that benefits us developers, and not aiming for high benchmark scores that only the uneducated seem to deem relevant.

For reference, I am a Java/C programmer working for Marconi plc. I get to play with Objective-C and Macs in my spare time.
 
Re: Re: Hello, MOTO??

Originally posted by i_b_joshua

I'm curious: who wants to make chips for Apple? Just asking.


Motorola is not a sole vendor. IBM also makes chips for Apple.

You are correct about IBM. However, IBM doesn't seem interested at the moment in sharing their shiny new Power4, which I'm sure spanks some G4 booty.


And what vendor wouldn't want to expand their business by manufacturing CPUs for the Macintosh?

Although I'm sure that the cost of licensing the PowerPC/AltiVec architecture would be prohibitive, any of the Taiwanese manufacturers could do it--VIA, SiS, UMC, etc. There's no special magic to processor manufacturing that only MOTO/IBM know.

I don't think that AMD would be interested--they've already invested heavily in the x86-64 architecture for their next-generation chip.
 
Originally posted by bertinman

I still feel that Intel-based OS X is not the best idea nonetheless, because it would ruin Apple's control of hardware, which is why Apple's systems are so great--they work together.

-- bert :cool:

Wow. There are really a LOT of misconceptions about the possibilities of Apple going with AMD or Intel processors.

It would not ruin Apple's control over the integration of their hardware at all. Apple would *never* enable Mac OS X to be installed on any AMD/Intel box; that would be a huge disaster. There is no question that if Apple went x86, Mac OS X for x86 would only run on Apple hardware. This is not a difficult task to accomplish, at all.
 
Originally posted by thegrayrace


It would not ruin Apple's control over the integration of their hardware at all. Apple would *never* enable Mac OS X to be installed on any AMD/Intel box; that would be a huge disaster. There is no question that if Apple went x86, Mac OS X for x86 would only run on Apple hardware. This is not a difficult task to accomplish, at all.

Where there's a will, there's a crack. . .

M$ said that you had to register XP--cracked
M$ said that XP SP1 would fix the problem--cracked

You underestimate the time and resources of the hacker community. No, I'm not one myself.

Mac OS on x86 hardware would be a step backwards unless it's on a 64-bit platform. But that's beside the point. There is NOTHING WRONG with the current PPC/AltiVec architecture that a little competition won't fix. With Apple being its sole customer for G4 chips, MOTO isn't interested in spending the time and $ to turn the G-series into a real world-class processor.
 
Re: BS

Originally posted by Kethoticus
But what I can say is this: every few months I hear some new, incredible rumor. It promises something near-unimaginable on the horizon, something that every Mac fanatic raves will bury the Wintel duopoly once and for all.

How long have I been hearing this stuff? For years. The original PowerPC chip was supposed to accomplish this overthrow 9 years ago. Then OS X was. Then the G4 chip, with its AltiVec instruction set, was. There have been other false promises in between, if memory serves. They've all consistently led to the following:

While I mostly agree with you, you have to admit that when it comes to raw power, the Macintosh platform has lost the lead it once had. The 680x0 chips blew away the x86 chips available at the time. The PPC 60x chips blew away the x86 chips available at the time. The G3, when it was first released, blew away the x86 chips available at the time. But in the last 4 years, Apple has fallen far behind in the raw power of their hardware (unless you believe 5 or 6 out of a few hundred Photoshop benchmarks equals raw power).

However, the POWER4-based desktop chip is not a pipe dream. It exists, and it could very well put Apple back on top in the hardware arena. Or, at the very least, allow them to catch up on the significant ground they have lost.

People who proclaim Apple is doing everything fine and hasn't fallen behind don't see what I see. I work at an Apple-only reseller/service center, and the graphics, video, and audio professionals aren't falling for Apple's benchmarks. We're constantly losing the high-end customers to the x86 world due to the speed issue. iBook, PowerBook, and iMac sales have remained pretty constant over the 2.5 years I've been here, but Power Macintosh sales have noticeably decreased. Those Power Macintosh towers we do sell go to the Mac faithful, not new customers. Apple is behind in this area, and they need to catch up if they want to remain strong in a market where speed is a MAJOR issue.
 
Originally posted by KingRocky
Where there's a will, there's a crack. . .

M$ said that you had to register XP--cracked
M$ said that XP SP1 would fix the problem--cracked

You underestimate the time and resources of the hacker community. No, I'm not one myself.

It isn't about simply cracking a method of software protection, although there would surely be software-hardware verification. The logic board (and ROM on the logic board) is just as significant as the processor.

There are other systems out there using the PPC chip.

IBM had servers which used the G3 chip. Mac OS X hasn't been hacked to run on those machines to the best of my knowledge. They won't run OS 8.x or 9.x.

The BeBox used 603e chips; it can't run OS 8.x or 9.x.

Correct me if I'm wrong.
 
I was speaking strictly from a PC user's perspective. I honestly don't know of ANY hacked or cracked Macintosh software/OS.

However. . .

Since OS X is UNIX-based, I can foresee a time when somebody out there gets it to run on their PC. Granted, it would take a BIG effort, but I have no doubt that it could be done.
 
Originally posted by KingRocky


Where there's a will, there's a crack. . .

M$ said that you had to register XP--cracked
M$ said that XP SP1 would fix the problem--cracked

You underestimate the time and resources of the hacker community. No, I'm not one myself.

Also, if Apple was to go x86, they'd most likely have AMD and/or Intel manufacturing them slightly modified chips. Again, not difficult to accomplish.

It would NOT be as easy as building your own x86 box and tossing in your hacked version of Mac OS X. There would likely be all sorts of hardware modifications that would need to be done. Drivers for all third-party hardware would need to be written or compiled for OS X (kind of hard to do when the system won't even boot off of your motherboard, use your video card, or recognize your keyboard). Nothing is impossible, but it is unlikely, and even if someone pulled it off, how does it hurt Apple if a few dozen people are running an unstable hacked version of Mac OS X on unsupported hardware?

They'd only be worried about the average Joe being able to install Mac OS X on his Dell or Compaq or the machine he built with parts at Fry's Electronics, which would not be happening. The hardcore hackers and hardware engineers aren't going to be a major concern anyway.
 
Originally posted by thegrayrace


It isn't about simply cracking a method of software protection, although there would surely be software-hardware verification. The logic board (and ROM on the logic board) is just as significant as the processor.

There are other systems out there using the PPC chip.

IBM had servers which used the G3 chip. Mac OS X hasn't been hacked to run on those machines to the best of my knowledge. They won't run OS 8.x or 9.x.

The BeBox used 603e chips; it can't run OS 8.x or 9.x.

Correct me if I'm wrong.

However, if you ran Mac OS X on Intel hardware, you'd suddenly attract the attention of all the Windows and Linux hackers/crackers, who would do it just to say they can.

IBM's PPC machines would never have been a target for a Mac OS mule because they're so damn expensive. No one wants to pay more than even Apple prices to run Mac OS. Pretty much the same story with BeBox. However, if you can buy a cheap, build-it-yourself PC and get a hacked version of Mac OS X to run, that might be interesting for the price (especially to that crowd).

If you need an example, just look at how the game hackers are beating Microsoft's hardware/software copy protection duo on the XBox.
 
Originally posted by kenohki


However, if you ran Mac OS X on Intel hardware, you'd suddenly attract the attention of all the Windows and Linux hackers/crackers, who would do it just to say they can.

IBM's PPC machines would never have been a target for a Mac OS mule because they're so damn expensive. No one wants to pay more than even Apple prices to run Mac OS. Pretty much the same story with BeBox. However, if you can buy a cheap, build-it-yourself PC and get a hacked version of Mac OS X to run, that might be interesting for the price (especially to that crowd).

If you need an example, just look at how the game hackers are beating Microsoft's hardware/software copy protection duo on the XBox.

True, it would attract more attention, and sure, it would be possible to hack, no question about that.

However, Apple, with relative ease, could put all sorts of hardware-software verification schemes throughout the operating system, not only in the installation. It would be tough work to break. Every "Security Update" one installs via Software Update could patch the system to perform hardware-software verification again. =)

Drivers for all the third-party hardware would also be a major issue. Someone could perhaps create a hacked build of Mac OS X to run on some specific hardware, but we'd never see a hacked version of Mac OS X that included even 5% of the drivers that are included in Windows XP, for example. All third-party hardware would need drivers which would have to be written and compiled specifically for Mac OS X, and these would have to be on the hacked install CD. To do that, one would have to hack a copy of Mac OS X already installed on Apple hardware to remove any hardware-software verification. Once that is done, they'd have to compile an install CD that includes OS X drivers for all the hardware they intend to use. Not simple tasks.

Again, it's all possible, it's just not anything I think Apple needs to worry about severely hurting their hardware sales if they went the x86 route. I certainly would prefer they went with the POWER4-based chip. But a lot of people have the misconception that if Apple went with x86, anybody would be able to grab a retail copy of Mac OS X for x86 and install it on their Dell.
 
Originally posted by senjaz
Very little of an average application can be vectorised, and the parts that can be are often not difficult to. You can spend time finding an optimal algorithm for a particular problem to get the best gain from the vector unit, but getting a general performance increase is not difficult.
...

The G4 processor has a weak floating-point unit but doesn't really need anything more. Most math-intensive code can be vectorised. When it can't be, the general floating-point unit is sufficient. (I believe that the G4 actually has 2 floating-point units, but that each only deals with certain types of operations.)

The main weakness of AltiVec is its lack of support for double-precision floats. However, for precision higher than double, which processors don't normally support natively, better performance can again be achieved with AltiVec: http://developer.apple.com/hardware/ve/pdf/oct3a.pdf

The point still remains that Apple is asking developers to re-write however much of their code for this oddball AltiVec thing that nobody else uses. If it were so easy, all Mac developers would have done it by now. Until all desktop processors have similarly functioning vector processors onboard, or until whichever OS X compiler is capable of useful auto-vectorization, developers - if they want to involve themselves with the Mac - are forced into doing something they shouldn't have to do.
On to benchmarks. They mean next to nothing, as they are abused so often - especially the SPEC benchmarks that the Intel world is so fond of quoting. First, the SPEC benchmarks do not stress the overall system, which is important if you want to use benchmarks to set expectations for real-world results. Second, the SPEC benchmarks are run as-is: the only optimisations that can take place on the code must be done by the compiler, so unless you have an auto-vectorising compiler you will get no benefit from the vector unit in that benchmark. Thus a chip with an inferior vector unit (P4) and an auto-vectorising compiler can get a better SPEC result than a chip with a superior vector unit (G4) but no auto-vectorisation, because the superior vector unit is not being used.

That's the whole point of SPEC, and it does serve a useful purpose. It's designed to approximate general performance of identical code across different platforms. If the test suite were allowed to be modified, then it wouldn't be a cross-platform benchmark anymore. There are two ways of looking at CPU-bound benchmarks like SPEC:

1) They're good, because they are decent approximators of general real-world CPU performance, and other factors of overall system performance (disk I/O, video, etc.) are likely similar enough across the Mac-PC divide that they won't account for too much disparity from this angle.

2) They're bad, because they do not have any bearing on the maximum potential of a processor. Well, Apple can go on and on about how the dual 1GHz is capable of 15 gigaflops and so on, but until the dual 1GHz is actually capable of regularly achieving 15 gigaflops on general floating-point tasks, that number is theoretical and meaningless to the end user. Maybe it applies to a couple of Photoshop filters.
Although modern compilers are good they can't know the context of the problem you are trying to solve. Optimisation by hand will always have the potential for better results.

But that means less when the x86 world is much faster by default than the Mac world. A developer today has a choice: 1) writing standard cross-platform code that will run pretty well by default on the platform that 95% of the desktop market uses, and 2) writing the same code and then combing through however much of it by hand to optimize it for what less than 5% of the desktop market uses, ultimately achieving performance that still might not be much better than x86. Of course it all boils down to what sells, but people have a tendency to be partial to what performs best.
Apple should continue what they are doing now: improving the compiler in a way that benefits us developers, and not aiming for high benchmark scores that only the uneducated seem to deem relevant.

If Apple continues what they're doing now, they will only fall further and further behind the rest of the world. Although improving the compiler is great, and it may be their only short-term option, they need faster chips in order to stay competitive, and those chips have to be able to achieve good performance without significant developer effort.

Alex
 
Originally posted by thegrayrace
However, Apple, with relative ease, could put all sorts of hardware-software verification schemes throughout the operating system, not only in the installation. It would be tough work to break. Every "Security Update" one installs via Software Update could patch the system to perform hardware-software verification again. =)
Wasn't it Steve Jobs who said (to paraphrase) "Every security scheme based on secrets eventually fails"? I believe this would be the case with an x86 Mac. You can put all the secret encryption chips or whatever you want onto the motherboard, but it would only be a matter of time before the proprietary x86 Mac was cracked wide open and low-level emulators became available for any PC under the sun. The Linux fanboys would be on this thing like flies on ****.

Not to mention the questionable probability of success of an x86 Mac...

All this talk of hacking just makes me uneasy. It's against what the Mac is all about: simplicity. People shouldn't have to understand that the OS is separate from the hardware, and that it needs special drivers, and that the manufacturers haven't released drivers for OS X-x86 yet, and that software publishers haven't recompiled their code yet, and blah blah blah. They want a computer that works, and that's all. A hardware branch would upset this.

Alex
 
Originally posted by alex_ant

Wasn't it Steve Jobs who said (to paraphrase) "Every security scheme based on secrets eventually fails"?

He may have said it, but I doubt he originated the idea.
 
A lot of the underlying ideas here I agree with, but not some of the actual arguments used. So, at the risk of being labeled a troll, here are some comments (and don't take these personally - they're comments thrown out to all the readers):

Originally posted by alex_ant

The point still remains that Apple is asking developers to re-write however much of their code

Much is the wrong word. The rewrite - i.e. away from C (or whatever) and to something that gets benefit from AltiVec - is for the purposes of performance. It is extremely rare for performance bottlenecks to be anything other than hotspots - the old 90%-of-the-time-in-10%-of-the-code adage. There's not much to do for a word processor, other than leaning heavily on class libraries that Apple has spent lots of time working on overall (one hopes). But anything that could be considered to be performing data processing (think anything with audio, video, graphics, FP data sets) is likely to be extremely amenable to optimisations in small, key locations.

So I agree with the sentiment that it's a shame that you have to recode some things, but think the balance isn't quite right here.


for this oddball AltiVec thing that nobody else uses.

It's probably as oddball as MMX, etc. In fact it's not at all oddball if you consider some of the processors outside the "general processor" category. Large registers, split carry, repeated operation... TI had a graphics co-processor doing this a good decade ago (the 340 series, I think). And it's not that odd if you venture into the world of DSPs, differing word sizes, true Harvard architecture, etc. I used an Analog Devices SHARC DSP a few years ago - data items in registers manipulated as 16, 32, or 48 bits, with data memory banked into units where an increment of 1 moved up by one of these units (i.e. there was simply no concept of byte addressing). Oh, and that was VLIW too.

If you've just poured months into a large program, and have some respect for what you're doing, then

1) you should know where the bottlenecks are.
2) you should already have tried to isolate the key routines to make later optimisations easy.
3) the chance to achieve a 10x speed gain is almost irresistible
4) unless you're a VB coder who doesn't know what an interrupt or scalable algorithm is, you'll find ways to optimise if they exist.
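Point 2 in practice might look something like this (a sketch with made-up names, but the shape is what matters): one entry point for the hotspot, so a vector path drops in later without touching any callers.

[code]
#ifdef __ALTIVEC__            /* GCC defines this when -maltivec is in effect */
#include <altivec.h>

/* hand-tuned vector path: assumes n % 4 == 0 and 16-byte-aligned buffers */
static void mix_samples(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4)
        vec_st(vec_add(vec_ld(0, &a[i]), vec_ld(0, &b[i])), 0, &dst[i]);
}
#else
/* portable scalar fallback - same interface, so callers never know */
static void mix_samples(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
#endif
[/code]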


If it were so easy, all Mac developers would have done it by now.

Here we start getting close to the real nub - is it really a case of easy versus difficult, or actually that AltiVec won't make much difference? For comparatively simple operations over large data sets, there are benefits from AltiVec (and the like). But more general computing (let's say a Java VM, just to choose a random example) is probably more data-bandwidth bound than anything else. In such situations there will be no AltiVec optimisations, not because they're too difficult, but because they're not going to contribute much.

Until all desktop processors have similarly functioning vector processors onboard, or until whichever OS X compiler is capable of useful auto-vectorization, developers - if they want to involve themselves with the Mac - are forced into doing something they shouldn't have to do.

So I don't quite agree with this.

...
But that means less when the x86 world is much faster by default than the Mac world. A developer today has a choice: 1) writing standard cross-platform code that will run pretty well by default on the platform that 95% of the desktop market uses,

This 95% figure is interesting. We've been reading it quite happily for a while now, but maybe it's worth considering what it means.

Does Intel have 95% market share? No - AMD ensures it doesn't.

Does Microsoft have 95% market share? Yes (let's assume this, at least for the sake of the argument).

Okay - what has Microsoft got 95% market share with? Their operating systems (note the plural). Even this 95% doesn't mean any one of 95, 98, 2000, XP, etc. has 95% market share.

Applications - let's guess IE is the most-used piece of code MS produces. This still probably has less than 95% share - not all MS users are necessarily using the 'net, and some (not many!) are using alternative browsers.

Office is probably MS's biggest selling application. This sure as hell hasn't got 100% of this 95% to itself.

In other words, this is classic lies, damned lies and statistics. The potential market share for a PC developer may be 95%, but this isn't a realistic figure for even MS, let alone more "lowly" developers.

It all comes down to market segmentation. Adobe may say that PCs represent about 20 times the market that Macs do for Photoshop, but I imagine that the Mac community is more focused, and I would expect a smaller sales ratio than 20:1 in reality. (Can anyone be bothered to trawl Adobe's SEC filings and pull out whatever figures they give?)

If we start talking "esoteric" applications, such as professional design (print, video, etc), then this 95% is probably prime MS FUD. Sales to Mac users might actually be higher!


and 2) writing the same code and then combing through however much of it by hand to optimize it

the "by hand" bit is probably wrong. If you don't already know where its slow (eg writing a video application - not difficult to guess!) there are tools that will make this a fairly trivial process.


If Apple continues what they're doing now, they will only fall further and further behind the rest of the world. Although improving the compiler is great, and it may be their only short-term option, they need faster chips in order to stay competitive, and those chips have to be able to achieve good performance without significant developer effort.

Alex

This is where I really get closest to agreeing. But, as that wouldn't be providing troll fodder (not my intention, honest), consider this.

Let's assume we've factored out where AltiVec is relevant (possibly including compilers, but let's not go there).

A lot of the messages here recently seem to have focused very much on clock rates. Some have cited the Power4 and derivatives, but almost always in an off-hand fashion. But ask the question: what actually needs to be faster?

Much as I find it strange to say this, I think the BareFeats numbers might be relevant here. Combine the apparent oddities of those figures with the white paper about G4 upgrades and cache performance, and I think there's something lurking here (not everyone is going to be surprised by this). I think the G4 at present is not MHz-bound but MB/s-bound - i.e. improving memory bandwidth might have a lot more effect than increasing clock rates or achieving higher IPC/superscalar performance.

Keeping everything about a G4 the same apart from doubling the speed of the memory bus and performing some ripple through on timings might well be a considerably cheaper way of getting better performance than things like Power4Lite.

But if this hypothesis is true, why hasn't it happened? I've no answer to that, other than it's far more in Motorola's (and maybe IBM's) hands than Apple's.
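If you want to test the hypothesis on your own machine, a crude copy loop over a buffer much larger than the L2 cache is enough to see how far sustained throughput falls below the bus's theoretical peak. A rough, untested sketch:

[code]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

int main(void)
{
    const size_t N = 32 * 1024 * 1024;     /* 32MB - well past any G4 cache */
    char *src = malloc(N), *dst = malloc(N);
    struct timeval t0, t1;

    memset(src, 1, N);
    gettimeofday(&t0, NULL);
    for (int rep = 0; rep < 8; rep++)      /* several passes to smooth the timing */
        memcpy(dst, src, N);
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    /* each pass reads N bytes and writes N bytes */
    printf("%.1f MB/s sustained\n", (8.0 * 2 * N / (1024.0 * 1024.0)) / secs);
    free(src);
    free(dst);
    return 0;
}
[/code]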

Phew - rant over.
 
Originally posted by nixd2001
Much is the wrong word. The rewrite - i.e. away from C (or whatever) and to something that gets benefit from AltiVec - is for the purposes of performance. It is extremely rare for performance bottlenecks to be anything other than hotspots - the old 90%-of-the-time-in-10%-of-the-code adage. There's not much to do for a word processor, other than leaning heavily on class libraries that Apple has spent lots of time working on overall (one hopes). But anything that could be considered to be performing data processing (think anything with audio, video, graphics, FP data sets) is likely to be extremely amenable to optimisations in small, key locations.

So I agree with the sentiment that it's a shame that you have to recode some things, but think the balance isn't quite right here.

These are good points. I didn't say "much" though, I said "however much" - in an ideal world, developers wouldn't have to do anything special to achieve maximum performance on the Mac. But I think we mostly agree here, and I admit my rewriting-50%-of-the-code figure or whatever it was was way high.
It's probably as oddball as MMX, etc.

Well, Intel's compiler has made enough strides that optimizing for MMX and SSE2 to some extent happens automatically. The auto-vectorization isn't perfect, but it's better than none at all (I would guess :)). So that kind of separates the oddballs from the not-quite-as-oddballs.
In fact it's not at all oddball if you consider some of the processors outside the "general processor" category. Large registers, split carry, repeated operation... TI had a graphics co-processor doing this a good decade ago (the 340 series, I think). And it's not that odd if you venture into the world of DSPs, differing word sizes, true Harvard architecture, etc. I used an Analog Devices SHARC DSP a few years ago - data items in registers manipulated as 16, 32, or 48 bits, with data memory banked into units where an increment of 1 moved up by one of these units (i.e. there was simply no concept of byte addressing). Oh, and that was VLIW too.

There are indeed processors much weirder than the G4 out there, but out of the current crop of desktop & low-end server processors, the G4 with its AltiVec is probably the most oddball of the bunch.
If you've just poured months into a large program, and have some respect for what you're doing, then

1) you should know where the bottlenecks are.
2) you should already have tried to isolate the key routines to make later optimisations easy.
3) the chance to achieve a 10x speed gain is almost irresistible
4) unless you're a VB coder who doesn't know what an interrupt or scalable algorithm is, you'll find ways to optimise if they exist.

...
(moved)
the "by hand" bit is probably wrong. If you don't already know where its slow (eg writing a video application - not difficult to guess!) there are tools that will make this a fairly trivial process.

All good points. Perhaps I overestimated how difficult AltiVec acceleration really is. But time and economics do play into this. Every week a programming team spends optimizing is a week the Mac version of Software X has to be delayed, and every dollar that team is paid is a dollar that will be appearing in red on Company X's balance sheets. Probably just about all Mac developers could go hog-wild with AltiVec if they really had to, and still stay in business, but no company likes to spend money it shouldn't have to spend.
Here we start getting close to the real nub - is it really a case of easy versus difficult, or actually that AltiVec won't make much difference? For comparatively simple operations over large data sets, there are benefits from AltiVec (and the like). But more general computing (let's say a Java VM, just to choose a random example) is probably more data-bandwidth bound than anything else. In such situations there will be no AltiVec optimisations, not because they're too difficult, but because they're not going to contribute much.

Yup - there are a lot of tasks AltiVec isn't suited for. And this reinforces what I said about Apple needing a CPU with better general-purpose performance.
This 95% figure is interesting. We've been reading it quite happily for a while now, but maybe it's worth considering what it means.

Does Intel have 95% market share? No - AMD ensures it doesn't.

Does Microsoft have 95% market share? Yes (let's assume this, at least for the sake of the argument).

Okay - what has Microsoft got 95% market share with? Their operating systems (note the plural). Even this 95% doesn't mean any one of 95, 98, 2000, XP, etc. has 95% market share.

Applications - let's guess IE is the most-used piece of code MS produces. This still probably has less than 95% share - not all MS users are necessarily using the 'net, and some (not many!) are using alternative browsers.

Office is probably MS's biggest selling application. This sure as hell hasn't got 100% of this 95% to itself.

In other words, this is classic lies, damned lies and statistics. The potential market share for a PC developer may be 95%, but this isn't a realistic figure for even MS, let alone more "lowly" developers.

It all comes down to market segmentation. Adobe may say that PCs represent about 20 times the market that Macs do for Photoshop, but I imagine that the Mac community is more focused, and I would expect a smaller sales ratio than 20:1 in reality. (Can anyone be bothered to trawl Adobe's SEC filings and pull out whatever figures they give?)

If we start talking "esoteric" applications, such as professional design (print, video, etc), then this 95% is probably prime MS FUD. Sales to Mac users might actually be higher!

I think this is my fault... instead of saying "the platform 95% of the market uses," I should have said "the platform comprising 95% of the market in current sales" or something like that. Sure, there are tons of legacy Windows PCs, but there are tons of legacy Macs as well.
This is where I really get closest to agreeing. But, as that wouldn't be providing troll fodder (not my intention, honest), consider this.

Let's assume we've factored out where AltiVec is relevant (possibly including compilers, but let's not go there).

A lot of the messages here recently seem to have focused very much on clock rates. Some have cited the Power4 and derivatives, but almost always in an off-hand fashion. But ask the question: what actually needs to be faster?

Much as I find it strange to say this, I think the BareFeats numbers might be relevant here. Combine the apparent oddities of those figures with the white paper about G4 upgrades and cache performance, and I think there's something lurking here (not everyone is going to be surprised by this). I think the G4 at present is not MHz-bound but MB/s-bound - i.e. improving memory bandwidth might have a lot more effect than increasing clock rates or achieving higher IPC/superscalar performance.

Keeping everything about a G4 the same apart from doubling the speed of the memory bus and performing some ripple through on timings might well be a considerably cheaper way of getting better performance than things like Power4Lite.

But if this hypothesis is true, why hasn't it happened? I've no answer to that, other than it's far more in Motorola's (and maybe IBM's) hands than Apple's.

Phew - rant over.
If the G4's memory bus were quadrupled tomorrow, that would be fantastic, but I would be happier if I knew that Apple had a strong and capable architecture with a bright, Moore's-Law-compliant future ahead. Increasing the bus speed on the G4 sounds like a nice short-term solution to the x86-PPC performance disparity, but Intel and AMD certainly aren't standing still. They're moving a lot faster than Motorola is, and that's the whole problem. :)

Alex
 
Originally posted by nixd2001
Borris
[Would the current claimant to the Borris ID care to relinquish it, perchance....]

Liked your post... hope you get the user ID you requested, Borris :p
 