Originally posted by jettredmont
1) The compiler doesn't support Obj-C or Obj-C++. So, obviously not.

2) Apple has been quite vocal that GCC is its internal compiler. Not Code Warrior; not IBM. GCC.

So, the answer is: "No, Apple has not been using this compiler."

Theoretically, the core libraries might be able to be compiled with the IBM compiler, assuming it emits libraries using the gcc 3.3 ABI (which, actually, I rather doubt), but the Obj-C stuff obviously is not (unless they had an Obj-C-to-C compile step, then a C compile step on IBM's compiler ...)

[edit:

Looks like the IBM compiler does adhere to the GCC 3.3 ABI, so it is possible that the low-level libraries of Panther could be compiled with it and the front-end Cocoa interfaces with gcc:



I stand corrected.

However, as I said before: Apple has been quite vocal about "using its own dogfood" with gcc. I don't remember anything coming from this year's WWDC specifically saying that as an overall statement as they had said last year, but I do have confirmation that many low-level systems for Panther are compiled using gcc, not any other compiler.
I find this rather silly, actually. The IBM compiler is *beta*. Nobody in their right mind would compile a production OS with a beta compiler and ship it to their customers. Why not just slit your wrists? Apple is using gcc now because it generates solid, if less than optimal, code. When the IBM compiler firms up, Apple will use it; they aren't married to gcc. From first glance, I wouldn't be surprised to find out that the IBM compiler used gcc as its initial code base.
 
Originally posted by daveL
From first glance, I wouldn't be surprised to find out that the IBM compiler used gcc as its initial code base.


Yes, IBM's "code borrowing" is getting quite a bit of attention from the press (and from the lawyers)....
 
Originally posted by AidenShaw
Yes, IBM's "code borrowing" is getting quite a bit of attention from the press (and from the lawyers)....

Aiden, I couldn't let this one pass - sorry for going OT - but the only bit of evidence that SCO has shown so far that "potentially" gives credence to their claims points to SGI as the culprits, not IBM.

Their claims against IBM are LIES.

ok - sorry about that ;)

jettredmont (or anyone!):

What's Apple's relationship with GCC? They don't have anyone on the steering committee (I think? - but I'm fairly sure IBM does...).

Do they have a vested interest in using it internally - long term? I can't seem to see any impediments to Apple adopting 'chip specific' IBM compilers, except maybe political ones.

[edit - sorry DaveL, you answered my question already!]
 
Originally posted by tiktokfx
VisualAge is at version 6, GCC at 3.3. I can't see at a glance why you'd think xlc would be built on appropriated gcc code.

IBM documentation mentions that they are making sure that their binaries are GCC compatible, so you can link to them from Obj-C. IBM's largest PPC customer is no longer IBM itself; it is now Apple. You can bet their moves to fit into the GCC world are motivated by a desire to bolster Apple's OS X, now IBM's premier OS.

It is a matter of pride to IBM how well OS X works on their 970 processor. It's all good for us.
 
I just wanted to point out that both IBM C/C++ and Fortran compilers run on the G4 (despite the Read Me files suggesting a G5 hardware requirement). Both are free betas at the moment. I got the news from the Apple SciTech mailing list, where some tests have been done showing big speed increases over other compilers, e.g. "speedups relative to all other FORTRAN compilers on OS X....anywhere from 1.4X to 3.5X increase in performance from xlf". It's good to see that working closely with IBM is giving the Apple community much more than the chance to buy decent hardware; it's also helping overall Mac performance.
 
finally

After Intel came out with a good compiler that makes the ol' Pentiums shine more than anything else, IBM is finally getting there with automatic use of AltiVec & co.

If only this had happened sooner... ;) Anyway, this will kick ass for open source apps, which can now be optimized without being rewritten for the Mac.
nice
very nice.
 
hmm... I don't understand all this compiling stuff. Is compiling when you take the code you write in C++ (or any other language), and turn it from being code into being a program?

All the speed improvements you guys are talking about seem like good news, soo...wohoo! :)
 
Originally posted by Raiden
Is compiling when you take the code you write in C++ (or any other language), and turn it from being code into being a program?

Exactly.

Good compilers can often find ways to do the work better than the way that the program was originally written - yet do exactly the same thing.

For a simple example, if some of the calculations in a loop have the same answer each time through the loop - the compiler will move the "loop invariant" computation to before the loop and save those results in a temporary variable. This avoids recomputing the number many times - it's done once and reused.
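Here's a rough sketch in C of what that looks like (the function and variable names are made up purely for illustration):

    /* As written: scale * offset has the same value every time through the
       loop, but it gets recomputed on each iteration. */
    void scale_add(double *out, const double *in, int n, double scale, double offset)
    {
        int i;
        for (i = 0; i < n; i++)
            out[i] = in[i] + scale * offset;
    }

    /* After loop-invariant code motion -- roughly what the optimizer produces: */
    void scale_add_hoisted(double *out, const double *in, int n, double scale, double offset)
    {
        int i;
        double tmp = scale * offset;   /* computed once, reused n times */
        for (i = 0; i < n; i++)
            out[i] = in[i] + tmp;
    }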

Smart compilers are getting very clever at analyzing program flow to find optimizations like this.

This is often called the "front end" of the compiler - it analyzes and improves the source code of the program.

----

On another level, once the compiler figures out the "improved program", it needs to generate the best sequence of machine instructions in order to actually build the program. This is called "code generation", or the "back end" of the compiler.

The back end is what needs to know about the machine architecture, AltiVec, number of registers, cache, etc. By understanding the underlying CPU and memory system, it can make the best use of the CPU, and get faster execution.

A compiler like "gcc" runs on many platforms - it will have a common front end for all platforms, and a specific back end for each. It will have an x86 back end (maybe even different back ends for 386/486/Pentium/P4), PPC (750,74xx,970), MIPS, IA64....

Compilers like Intel's or IBM's have an additional advantage in that the front end can do more to help the back end. For example, the front end can flag that certain sections of the code might be suitable for AltiVec or SSE2 parallel operations.

A general compiler like "gcc" has to handle many different types of systems, and might not have specific checks in the front end to save information like that for the back end.
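For a concrete (made-up) example, a plain element-by-element loop like this is exactly the kind of code a vector-aware compiler can turn into AltiVec or SSE2 instructions that process four floats at a time instead of one:

    /* Scalar C: one multiply-add per iteration. A back end that knows the
       target CPU has AltiVec (or SSE2) can rewrite this loop to step through
       the arrays four floats per instruction. */
    void saxpy(float *y, const float *x, float a, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }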
 
Originally posted by pdickins
I just wanted to point out that both IBM C/C++ and Fortran compilers run on the G4 (despite the Read Me files suggesting a G5 hardware requirement). Both are free betas at the moment. I got the news from the Apple SciTech mailing list, where some tests have been done showing big speed increases over other compilers, e.g. "speedups relative to all other FORTRAN compilers on OS X....anywhere from 1.4X to 3.5X increase in performance from xlf". It's good to see that working closely with IBM is giving the Apple community much more than the chance to buy decent hardware; it's also helping overall Mac performance.

IBM compilers for POWER and PowerPC have been around since before the age of the PPC 601. At last, they are being ported to the Mac platform. This beta port must mainly be testing (1) BSD 4.4 compatibility, (2) HFS+ file system compatibility, etc., i.e. purely library issues, not the compiler itself.
Thus it is safe to say that OS X itself is not built with this beta compiler, as you'd expect.

But all user programs should be tested with this beta. My experience with the performance and code compatibility of XL Fortran under AIX has been extremely good. We should expect the excellent IBM math/scientific libraries, as well as XL HPF, to be ported shortly.

Remember, Fortran was invented at IBM in the mid-1950s.
 
If I remember correctly, Apple maintains their own variation of GCC. If this is true and Apple/IBM are sharing development of the XL compiler, then portions of the XL compiler could be used to enhance Apple's GCC.

What is the possibility of Apple using XLC to optimize the BSD kernel? What about video drivers for ATi and nVidia? What about optimizing portions of networking including AppleShare?
 
Originally posted by daveL
OS X isn't written in Objective C. The code base for the OS is from BSD 4.4, which is all C/C++. Objective C shows up in Cocoa; it's mainly used in the UI frameworks. Since most modern compilers have a frontend parser and a separate backend code generator, it wouldn't be that big a push to add an Objective C parser to the IBM compiler's code generator. Given how closely Apple and IBM have been working on the 970, I would think the Objective C parser is already in the works.

Sure, there's lots of C++ code in OS X. I wasn't saying they couldn't recompile those sections. I was merely stating that you couldn't just recompile the entire Panther codebase and get the speed increases this compiler appears to provide. I should have made that clearer, though I think you might be underestimating how much Objective-C code there is sitting on top of that Darwin core. Certainly large C++ frameworks like KHTML might benefit, and then, by extension, those Cocoa frameworks that use them.

As far as WHETHER IBM COULD add Objective-C support to the compiler: well, sure. I'm not qualified to say whether that's possible for this particular compiler, but it seems feasible. The advantage of this compiler that everyone is cooing about, however, is the level of optimization it provides. I'm skeptical that IBM could just slap on another parser and achieve the same kinds of speed increases with a new language. Sure, unrolling a loop is unrolling a loop, and there are a ton of syntactical similarities between Obj-C and C++, but optimization is one of those ethereal things that can take five minutes or five years to get right.
 
Originally posted by daveL
Apple is using gcc now because it generates solid, if less than optimal, code. When the IBM compiler firms up, Apple will use it; they aren't married to gcc. From first glance, I wouldn't be surprised to find out that the IBM compiler used gcc as its initial code base.

True, to a point.

1) If the IBM compiler is distributed free, Apple might switch over to it (I know more than one developer at Apple who would be more than slightly affected by such a move, though), or (more likely) offer it as an option in their XCode toolchain. "gcc" still == "open source" to many developers; dropping support for it would be backpedaling.

2) If the IBM compiler is open-sourced (unlikely, I think), then it is very likely that Apple will use it, and possibly drop gcc support altogether.

3) If the IBM compiler is not distributed free, Apple might switch to it for some development, but only if they also put hooks into it in XCode et al (hooks that might only be visible if you install IBM's compiler, etc). There is too much political advantage in telling your developers that you use the same tools they do to just throw away.
 
Originally posted by mim
jettredmont (or anyone!):

What's Apple's relationship with GCC? They don't have anyone on the steering committee (I think? - but I'm fairly sure IBM does...).

Do they have a vested interest in using it internally - long term? I can't seem to see any impediments to Apple adopting 'chip specific' IBM compilers, except maybe political ones.


Apple does actively support gcc development. There are several developers on Apple's payroll who do nothing but work on the PPC gcc back end. How much of this is submitted back to gcc and how much is only in the Apple releases ... I'm not sure.

That having been said, Apple is a "hardware company" much more than they are a "gcc support company". If the terms of IBM's compiler are favorable, then they can and will switch to it.

However, yes, politically it is far better to be using the same open, free compiler you are asking your developers to use.

Point of reference: Code Warrior has produced better code than gcc for several versions now (gcc 3.3 is finally almost approaching CW's efficiency, but it still isn't quite there yet!). Nothing produced by Apple is compiled on Code Warrior, to the best of my knowledge. Currently all code is compiled on gcc.
 
Originally posted by andyduncan
As far as WHETHER IBM COULD add Objective-C support to the compiler: well, sure. I'm not qualified to say whether that's possible for this particular compiler, but it seems feasible. The advantage of this compiler that everyone is cooing about, however, is the level of optimization it provides. I'm skeptical that IBM could just slap on another parser and achieve the same kinds of speed increases with a new language. Sure, unrolling a loop is unrolling a loop, and there are a ton of syntactical similarities between Obj-C and C++, but optimization is one of those ethereal things that can take five minutes or five years to get right.

But, so long as IBM has an Objective-C parser (and, really, Objective-C can just be translated to C code, right? A first-cut ObjC support module might be nothing more than a two-pass compiler, parsing to C and then compiling the C as usual), you benefit from the C/C++ optimizer in the 98% of your code that is actually just plain C.
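Roughly speaking (this is just a sketch of the idea, not what any particular compiler actually emits, and the method name is made up), a message send like [obj doSomething:x] lowers to an ordinary C call into the Objective-C runtime's dispatcher, and from that point on it's all plain C for the optimizer to chew on:

    #include <objc/objc-runtime.h>

    /* Approximately what "return [obj doSomething:x];" becomes once the
       Objective-C constructs have been translated away: a plain C call into
       the runtime. */
    id send_example(id obj, id x)
    {
        return objc_msgSend(obj, sel_registerName("doSomething:"), x);
    }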

I mean, look at any Objective-C application. How much of the "real work" is done using ObjC constructs, and how much using plain vanilla C constructs? I suspect you'll find the vast majority of bottleneck code is actually just straight C. Heck, the same can be said of most C++ applications!
 
Originally posted by jettredmont
How much of the "real work" is done using ObjC constructs, and how much using plain vanilla C constructs?

Yeah, there's quite a bit of overlap between the languages; it is C, after all. I imagine most of the algorithmic optimizations for things like for loops would carry over pretty easily. Of course, how many for loops do you have that don't make any function calls (or send messages to objects)? Also, there are other types of optimization besides reordering/unwinding logic. I don't know much about writing Obj-C compilers, but I imagine true dynamic typing and linking has to have some effect on compiler design/optimization. The simple logic stuff is universal; it's the overall application architecture that would seem to be the bigger problem.
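For a toy example of what I mean (hypothetical code, shown at the C-runtime level, with a made-up "update" selector): a loop whose body is a message send boils down to a loop around the runtime's dispatch call, and unlike a plain C call the compiler can't inline it or hoist it, because which method actually runs isn't known until runtime.

    #include <objc/objc-runtime.h>

    /* Each iteration goes through dynamic dispatch; the optimizer has to treat
       the call as a black box, since the receiver's class (and therefore the
       code that runs) is only known at runtime. */
    void update_all(id *objects, unsigned count)
    {
        SEL sel = sel_registerName("update");
        unsigned i;
        for (i = 0; i < count; i++)
            objc_msgSend(objects[i], sel);
    }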

edit:
And we aren't talking about slapping some Obj-C capability on here; we're talking about a best-of-breed Obj-C compiler, correct? It wouldn't be very exciting if their Obj-C implementation was just as slow as gcc's.
 
new SPEC scores !!!

I can't believe that no one has noted them. I read this thread a while ago assuming that everyone had already seen the scores, but it seems no one knows, so I see it as my duty to bring the information forward. Look at the scores I found on this (German) site for a 1.8 GHz G5 with the new compiler. I made a graph showing a linear extrapolation from the single 1.6 and the single 2.0.
(use http://babel.altavista.com/tr to translate)

[Attached graph: SPEC_G5_18.jpg]


EDIT: ATTENTION: these scores are estimates: see post below
 
Re: new SPEC scores !!!

Originally posted by isgoed
I can't believe that no one has noted them. I read this thread a while ago assuming that everyone had already seen the scores, but it seems no one knows, so I see it as my duty to bring the information forward. Look at the scores I found on this (German) site for a 1.8 GHz G5 with the new compiler. I made a graph showing a linear extrapolation from the single 1.6 and the single 2.0.
(use http://babel.altavista.com/tr to translate)

[Attached graph: SPEC_G5_18.jpg]


ATTENTION! Those scores aren't real SPEC results! The G5 1800 scores are IBM's predictions from a year ago and the G5 2000 scores are the same scores linearly scaled.

see http://www-3.ibm.com/chips/techlib/techlib.nsf/techdocs/A1387A29AC1C2AE087256C5200611780
 
Why GCC ?

OK, it's nice to use an open-source compiler and all, but we are talking about possibly huge speed improvements. Doesn't CodeWarrior generate better (faster) code? I wouldn't use IBM's beta compiler to generate mission-critical binaries, but surely there are better alternatives to GCC.
Apple could still support GCC for Xcode and keep optimizing its performance until it reaches CW's quality. But until then, why not compile with CW? I think it's a small price to pay.
I don't know much about programming; it's just that I personally wouldn't mind a speed bump on OS X.
 