Sol said:
Mac developers will have some tough choices to make in the next two years. The PowerMacs will use new PPC CPUs but consumer Macs will use x86 processors, so what are they supposed to do? They could write Universal Binaries that would run unoptimized on both, or they could write and optimize for PPC only or x86 only. In the meantime, Windows developers have only x86 to write for, so their jobs are simpler.

Er, no. Very few applications need special tweaking for one CPU over another or contain any custom code. Most just rely on the compiler to do a good job of optimizing, or on the libraries Motorola, Intel and Apple provide for specific functions, e.g. vecLib for AltiVec-style instructions. Intel has just announced it will be providing optimized versions of those, and getting its compiler on board is a major help.
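For example, here's a sketch (using the vDSP vector-add routine from the Accelerate umbrella framework, which wraps vecLib) of the kind of call that picks up Apple's per-CPU tuning automatically; the app itself contains no CPU-specific code:

    #include <Accelerate/Accelerate.h>  /* umbrella for vecLib/vDSP on Mac OS X */

    /* Adds two float vectors with Apple's tuned vDSP routine. The same call
       uses AltiVec on PPC and SSE on x86. (A sketch; error handling and
       alignment concerns omitted.) */
    void add_vectors(const float *a, const float *b, float *result, unsigned long n)
    {
        vDSP_vadd(a, 1, b, 1, result, 1, n);
    }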

Most developers will still use GCC, because it's free and comes with the Xcode Tools. Only those writing highly performance-critical applications, like games and certain maths apps where the system libraries aren't good enough, will switch some of their code to Intel's compiler or IBM's xlc. Neither supports Objective-C, so it's as good as a given that all UI code is still going to be built with GCC.
 
ZLurker said:
I thought the UB contained two different sets of code in one file, that is, one set generated for PPC and one for Intel. This doesn't exclude one from optimizing each part of the code for each platform, or does it?
You're really compiling twice, so sure, you can optimize for each using #ifdef __ppc__ or __i386__ as needed.
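For illustration, a minimal sketch of per-architecture code in a single source file; Apple's GCC compiles each slice of the fat binary separately (e.g. with -arch ppc -arch i386), so the preprocessor picks the right branch for each:

    #include <stdio.h>

    /* Each slice of a Universal Binary is compiled separately, so the
       preprocessor selects the right branch for each architecture. */
    static const char *arch_notes(void)
    {
    #if defined(__ppc__)
        return "PPC slice: big-endian, AltiVec on G4/G5";
    #elif defined(__i386__)
        return "x86 slice: little-endian, SSE available";
    #else
        return "unknown architecture";
    #endif
    }

    int main(void)
    {
        printf("%s\n", arch_notes());
        return 0;
    }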
 
Analog Kid said:
Anybody know what percentage of apps out there are Cocoa? Seems to me that not supporting Objective-C is going to cause problems-- it's the Cocoa framework that Apple has been pushing as the easy route to OS X development.




From http://neowiki.sixthcrusade.com/index.php/NeoOffice/J_and_Aqua :
Isn't Cocoa better than Java?

Aqua is aqua.

This question is most often asked by developers and power users. End users, on the other hand, tend to judge by the results. As one tester put it:

"If it looks like a duck, walks like a duck, and quacks like a duck, then to the end user it's a duck, and end users have made it pretty clear they want a duck; whether the duck drinks hot chocolate or coffee is irrelevant."



From an Apple developer: http://lists.apple.com/archives/Carbon-dev/2005/May/msg01121.html

I think the important point to understand is that artificially dividing up the world of Mac OS X APIs into Carbon and Cocoa, and considering that these are mutually exclusive, is the wrong way to look at things. Mac OS X provides a huge variety of APIs at different levels for use by an application. Some have C interfaces, some have Objective-C interfaces. Someday we may even have C++ interfaces. A Mac OS X application is just that - a Mac OS X application, not really a Carbon application or a Cocoa application. A Mac OS X developer should be prepared to use any API from across the system that does the job, regardless of what framework it comes from. Our job at Apple is to make that possible, and to remove barriers that are in your way when trying to use CoreImage in an app that uses primarily HIView, or a Carbon event in an app that uses [NSApplication run], or using a POSIX API from an app that uses Swing, or whatever is blocking you from using the most appropriate API for the task.
 
iMeowbot said:
You're really compiling twice, so sure, you can optimize for each using #ifdef __ppc__ or __i386__ as needed.
Then that's every piece of the puzzle!

No need to panic at all.
You can get the best of both worlds!
It's only up to developers to use it...
 
Nermal said:
Are Intel's compilers usually free (like Xcode) or do you need to pay for them?

They are VERY EXPENSIVE and, in my experience, not all that good. Yes, the code they produce is fast in places, but there are bugs both in the compiler itself and in the generated code. I would not recommend them even for Windows developers.
 
qubex said:
The Intel compiler doesn't support Objective-C. Thus it cannot compile the whole OS, and many developers who use Objective-C/Cocoa won't be able to use it.

This is not true. Objective-C, as originally designed, was implemented as a pretranslator which outputs pure C, which is then compiled by a C compiler.
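To make that concrete, here's a rough hand-written sketch (not actual translator output) of what a message send turns into; every Objective-C message becomes an ordinary C call into the runtime:

    #include <objc/objc-runtime.h>  /* the Objective-C runtime's C interface */

    /* Approximately what a pretranslator emits for the Objective-C line
           [obj description];
       a plain C function call that any C compiler can handle. */
    void send_description(id obj)
    {
        objc_msgSend(obj, sel_registerName("description"));
    }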

Also, it's not called xlc; that's IBM's PowerPC compiler. Intel's compiler is icc.
 
cubist said:
[Intel's compilers] are VERY EXPENSIVE ...

If you think $400 is expensive for a compiler, shop around. Microsoft Visual Studio costs as much as $2,500. Sun Studio costs $3,000. Portland Group's suite costs as much as $3,500.

I'm not endorsing Intel C++, but GCC can make it easy to take for granted the availability of a cheap compiler. We have GCC, in large part, to thank for BSD, Linux, and OS X, as we know them today.
 
Bad news for High End Apps

Sol said:
Mac developers will have some tough choices to make in the next two years. The PowerMacs will use new PPC CPUs but consumer Macs will use x86 processors, so what are they supposed to do? They could write Universal Binaries that would run unoptimized on both, or they could write and optimize for PPC only or x86 only. In the meantime, Windows developers have only x86 to write for, so their jobs are simpler.

This is why I hate the Intel Macs. However much faster than PPC the x86 hardware is supposed to be, Windows applications will be faster, because Mac developers will have to write for two very different architectures in what is seen as a niche market. I suspect most applications will not have OS X native versions at all and will rely on something like WINE or Virtual PC to run on OS X.

I agree with this. As it is now, the processor-intensive apps I use (Maya, modo, Vue 5 Infinite, ZBrush) can barely keep up in benchmarks with their Windows counterparts. I fear we will see FAR worse performance on Intel Macs due to the lack of optimization and experience for cross-coders. Maya will be a total slug in rendering, and this may spell a wholesale abandonment of the Mac platform for 3D artists. Too bad...
 
Let's hope that you're optimizing for something newer than a 386!

iMeowbot said:
You're really compiling twice, so sure, you can optimize for each using #ifdef __ppc__ or __i386__ as needed.
386? Not at least a 686?
 
Universal binaries don't require much effort

Sol said:
Mac developers will have some tough choices to make in the next two years. The PowerMacs will use new PPC CPUs but consumer Macs will use x86 processors, so what are they supposed to do? They could write Universal Binaries that would run unoptimized on both, or they could write and optimize for PPC only or x86 only. In the meantime, Windows developers have only x86 to write for, so their jobs are simpler.
Just as an FYI, from a developer's perspective, it takes virtually no effort to create universal binaries. All that is usually required is clicking a few checkboxes when compiling (in Project Builder or Xcode). This was the case back in the NEXTSTEP days as well, and those "fat" binaries supported at least four different processors at the same time (if I remember correctly). Easy & very cool.

Now, if the developers aren't using Xcode yet... well, then they may have a bit of work. But really, they should have done that a long time ago. ;)
 
Nermal said:
Are Intel's compilers usually free (like Xcode) or do you need to pay for them?

You will be paying through the nose for them. The Intel compilers are very expensive, at least on Windows.
 
Fear not...

bernardb said:
I agree with this. As it is now, the processor-intensive apps I use (Maya, modo, Vue 5 Infinite, ZBrush) can barely keep up in benchmarks with their Windows counterparts. I fear we will see FAR worse performance on Intel Macs due to the lack of optimization and experience for cross-coders. Maya will be a total slug in rendering, and this may spell a wholesale abandonment of the Mac platform for 3D artists. Too bad...
You worry needlessly. Making universal binaries does not take away from the compiler's ability to do optimization. Developers do not need to do cross-platform development; that is the beauty of it. It is one of those things that sounds super complicated but that Apple has made easy for people to take advantage of. I hate to say it, but it just works.

The only minor downside to universal binaries is that the apps are slightly larger... however, in NEXTSTEP (and I assume OS X will be no different) it was a simple matter to "trim" out the additional platforms during or after install. Back then we didn't have broadband everywhere, HD space was pricier, and an extra MB or two was a big deal.
 
AidenShaw said:
386? Not at least a 686?

That's something you'd specify when compiling (e.g. with GCC's -march or -mcpu flags). It's not something you'd put in the code.

I'm no dev, though, so take that with a grain of... everything.
 
Sun Baked said:
That was sort of expected ... wonder whether Apple will stick with GCC after the transition is complete (probably will, since Apple will be doing fat apps for quite a while.)
Apple will probably stick with GCC permanently. Intel's compilers are not free; if Apple switched to them, it wouldn't be able to bundle a compiler with the OS.
 
Good News

Good to see that Apple is progressing well with the Intel switch and optimising the Mac OS X operating system. They must be very near to a perfect OS for Intel chipsets.
 
AidenShaw said:
386? Not at least a 686?
It's just a naming convention for 32-bit Intel. (The name was pinned down back when a '386 was the only variant.)
 
Intel Compiler Fees (not always Free)

maya said:
Yes, it's free. :)

In the past, Intel compilers have been free for non-commercial use; if you want to use them commercially, you have to buy them from Intel. We ran into this specifically with the Intel compiler for Linux: our research partners got the compiler for free, while we (as a commercial company) had to purchase it.

I don't know if Intel will continue this approach with the new Intel-powered Macs under OS X or if Apple has cut a deal with them.
 
panphage said:
What's different about OS X and Linux is that they both have free options for developers, while Windows does not (OK, I suppose there's Cygwin...)

You can get pretty much anything for Cygwin that you can get for Linux. I use X11 with Cygwin for Java development (since I also work on getting our code running on Linux and Solaris, and switching from them to the Windows environment is too jarring) and have GCC installed... although I can't say I've used it much, if at all.

Of course, if you're writing a Windows app, you would probably use MFC since you wouldn't get the same look and feel without a lot of hard work.
 
Abstract said:
You know, in the future, when more people in the general public find out that Macs now use Intel chips, I guarantee that some of us are going to overhear a conversation where someone calls it a "Windows Mac", since for many non-techie people, Intel is synonymous with computers in the PC world. And that's when I get the baseball bat...

Sounds like my parents. They were really into computers in the '80s and early '90s, when IBM "represented" the Windows platform. The other day they mentioned getting an IBM, and I knew what they meant, but I had to point out that IBM doesn't even make consumer computers anymore.
-Chase
 
cwoloszynski said:
In the past, Intel compilers have been free for non-commercial use; if you want to use them commercially, you have to buy them from Intel. We ran into this specifically with the Intel compiler for Linux: our research partners got the compiler for free, while we (as a commercial company) had to purchase it.

I don't know if Intel will continue this approach with the new Intel-powered Macs under OS X or if Apple has cut a deal with them.
If you can get hold of your Intel representative, it might be a good idea to twist their ear and tell them what you'll be needing in terms of support and features.
 
big deal

AidenShaw said:
Note that Visual Studio .NET does support SSE, and will do automatic [Alti-]vectorization using SSE instructions. No source code changes necessary....
Do you know how good compiler vectorisation actually is? Forget it.

If you write programs which don't need to be fast (FTP clients, calendar apps, word processors... most apps), you can use it. If you write a program which needs to do a lot of computation, you have to optimise your code yourself.
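To give a flavour of what "optimise it yourself" means on x86, a trivial hand-vectorised loop using SSE intrinsics might look like this (a sketch that assumes n is a multiple of 4 and 16-byte-aligned arrays):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Adds four floats per instruction instead of one. Assumes n is a
       multiple of 4 and that a, b and dst are 16-byte aligned. */
    void add_arrays(float *dst, const float *a, const float *b, int n)
    {
        int i;
        for (i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(a + i);
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(dst + i, _mm_add_ps(va, vb));
        }
    }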
 
gweedo said:
You worry needlessly. Making universal binaries does not take away from the compiler's ability to do optimization. Developers do not need to do cross-platform development; that is the beauty of it. It is one of those things that sounds super complicated but that Apple has made easy for people to take advantage of. I hate to say it, but it just works.

The only minor downside to universal binaries is that the apps are slightly larger... however, in NEXTSTEP (and I assume OS X will be no different) it was a simple matter to "trim" out the additional platforms during or after install. Back then we didn't have broadband everywhere, HD space was pricier, and an extra MB or two was a big deal.

After WWDC 2003 I watched a video about programming for the G5. They talked about how to deal with branch mispredictions, instruction latency and other issues. It sounded like a lot of work just to optimize for the G5, and I would think programmers would have to do much more when switching to a completely new architecture.
Is it really enough for commercial apps to just check the dual-binary box (without optimizing the whole app by hand)?
 
No....

tom_s said:
Is it really enough for commercial apps to just check the dual-binary box (without optimizing the whole app by hand)?
No, it's not.

As you saw in the G5 video, there are many subtle optimizations to get the best out of any system.

Sometimes these optimizations will benefit all architectures; sometimes what helps one chip will hurt another.

A big problem will be if your code or data has "endian" assumptions. PPC is big-endian and x86 is little-endian, i.e. they store multi-byte values in memory in opposite byte orders, so in some cases PPC code will corrupt x86 data (or vice versa). This is especially troublesome if your disk files or network messages contain endian-dependent data.
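For example, a sketch of endian-safe file I/O using the standard htonl/ntohl byte-order helpers; pick one on-disk byte order and convert at the boundaries:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>  /* htonl/ntohl: host <-> big-endian */

    /* Writing raw integers to disk bakes in the host byte order; converting
       to a fixed order (big-endian here) keeps the file readable from both
       the PPC and x86 slices of a fat binary. (Error checking omitted.) */
    void write_u32(FILE *f, uint32_t value)
    {
        uint32_t be = htonl(value);   /* always big-endian on disk */
        fwrite(&be, sizeof be, 1, f);
    }

    uint32_t read_u32(FILE *f)
    {
        uint32_t be = 0;
        fread(&be, sizeof be, 1, f);
        return ntohl(be);             /* back to host order */
    }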

And for a commercial product, regression testing and QA need to be done on every architecture. So even if fat binaries are easy for the programmer, they double or triple the amount of work for the QA teams ("triple" when 64-bit x86-64 is added alongside 32-bit x86 and PPC).

The "just check a box" line for the fat binaries is fantasy (or fallacy).
 