AidenShaw said:
As you saw in the G5 video, there are many subtle optimizations to get the best out of any system.
Although for most apps, the compiler will take care of this. Optimization beyond what the compiler does for you is usually only necessary in a few small timing-critical parts of an app.

Designing an app around a processor's optimization quirks is usually a bad idea. It results in code that's difficult to maintain. And there's no guarantee that a future version of the chip won't invalidate the assumptions your optimizations are based on. (Case in point - the fastest way of arranging code changed pretty dramatically between the 386, 486, Pentium, P-II, and P-4. Optimize for one and you usually hurt the others.)

Most of the time, it is better to come up with a generally efficient algorithm and code to that, paying most of your attention to how hard it will be to maintain (debug, scale up, etc.). If you find yourself having performance problems after that, run profilers and hand-tweak only those sections that need it most. And make certain you add copious amounts of comments to that code so future developers will realize what you did and why.
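
To illustrate that last point, a hand-tweaked section might end up looking like this (a made-up fragment, not from any real app; the comment is doing as much work as the code):

#include <stddef.h>

/*
 * HAND-OPTIMIZED after profiling: this loop dominated total run time.
 * The "obvious" version indexed the image as pixels[y][x] and thrashed
 * the cache on large buffers; walking the buffer linearly fixed that.
 * If the pixel layout ever changes, revisit this.
 */
void invert_pixels(unsigned char *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++)
        pixels[i] = 255 - pixels[i];
}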
AidenShaw said:
Sometimes these optimizations will benefit all architectures, sometimes what helps one chip will hurt another.
Even different model chips within a single architecture. That's why compilers often have switches for chip-specific optimizations (386/486/586/686/Athlon/A64, 601/603/604/G3/G4/G5).
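
For example, with GCC the target chip is picked at build time. These are typical GCC-style switches; the exact spellings vary by compiler and version:

/*
 * One source file, built several ways:
 *
 *   gcc -O2 -march=i486      -o app-486  app.c    (oldest x86 target)
 *   gcc -O2 -march=pentium4  -o app-p4   app.c
 *   gcc -O2 -mcpu=G3         -o app-g3   app.c    (PowerPC targets)
 *   gcc -O2 -mcpu=G5         -o app-g5   app.c
 *
 * Same source each time; the compiler just schedules instructions and
 * picks instruction sequences differently for each chip.
 */
#include <stdio.h>

int main(void)
{
    puts("same code, different scheduling");
    return 0;
}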

Most of the time (but not always) it's best to optimize for the oldest/slowest chip you support. A modern chip will usually be fast enough that users won't notice sub-optimal code (optimized for the older chip) much, but an older chip can be seriously crippled if it gets sub-optimal code (optimized for the newer chip).

But, as I said before, none of this eliminates the need for testing, profiling, and occasional hand-tweaking where it's absolutely necessary.
AidenShaw said:
And for a commercial product, regression testing and QA need to be done on every architecture. So even if fat binaries are easy for the programmer, they double or triple the amount of work for the QA teams.
Reminds me of the Java mantra - write once, test everywhere.
 
nit

shamino said:
Even different model chips within a single architecture. ... (386/486/586/686/Athlon/A64, 601/603/604/G3/G4/G5).
I'd call 386/486/586/686 at least four different architectures (while Netburst is mostly a P6 architecture, it has some visible differences). There are also visible changes between the original P6, the Pentium II, and the Pentium III.

By "visible" I mean that scheduling and cache optimizations can differ from chip to chip. (There are also clear differences such as MMX/SSE levels.)
 
AidenShaw said:
I'd call 386/486/586/686 at least four different architectures (while Netburst is mostly a P6 architecture, it has some visible differences). There are also visible changes between the original P6, the Pentium II, and the Pentium III.
You can definitely make that argument.

I perhaps used the word "architecture" too loosely. I meant it only in the sense of "compatible instruction set."

From a developer's point of view, you only have to compile your code for one of these if you want to target them all. But if you have a component that is timing-critical, you may have to have a different implementation of that component for each one. (DLLs help greatly here - you can put the problem code in separate files that are built and optimized differently and have your application's installer choose which one to install.)
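
A rough sketch of that pattern (names and build details are invented for illustration): keep the hot routine behind a stable interface in its own module, build the module more than once with different chip-specific switches, and let the installer decide which binary lands on disk.

/* blur.h -- interface shared by every build of the component */
#ifndef BLUR_H
#define BLUR_H
#include <stddef.h>

/* Timing-critical routine; it lives in its own DLL so each build
 * can be compiled with different chip-specific switches. */
void blur_image(unsigned char *pixels, size_t width, size_t height);

#endif

/* blur.c -- compiled twice, e.g. once tuned for the Pentium 4 and
 * once for a plain 486, producing blur-p4.dll and blur-486.dll.
 * The installer ships both and installs the right one as blur.dll,
 * so the main executable never has to change. */
#include "blur.h"

void blur_image(unsigned char *pixels, size_t width, size_t height)
{
    /* ...the hot loop goes here... */
    (void)pixels; (void)width; (void)height;
}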

These are different kinds of problems from the kinds you encounter when porting between incompatible processors (like x86, PPC, SPARC, PA-RISC, MIPS, etc.). There, you have issues beyond just optimization. You end up dealing with things like byte-ordering, memory alignment, etc.
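
The classic case is pulling a multi-byte value out of a file header or a network packet. A minimal sketch of the portable way to do it (the function name is mine, but the technique is standard):

#include <stdint.h>

/* Read a 32-bit big-endian value from an arbitrary byte position.
 * Assembling it byte-by-byte sidesteps both problems at once:
 * byte order (x86 is little-endian; PPC, SPARC, and PA-RISC are
 * big-endian) and alignment faults on chips that can't load a
 * 32-bit word from an unaligned address.  The tempting shortcut,
 * *(uint32_t *)p, can break on either count. */
uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
            (uint32_t)p[3];
}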
 
rand() said:
But that doesn't mean you won't see binary distributions of OS X compiled with Intel's compiler. Apple would be smart to use the best optimized compiler and get the absolute best performance out of X - especially if James' remarks about "optimizing the compiler for Apple" are true.
My hunch is Apple will do that with Intel's compiler, but I still wonder why IBM's XLC compiler isn't used to compile OS X for PPC.
 
cool

Good for them. It looks like the Mac might not be treated like a second-class citizen. If this means it will be easier to create drivers for all those PC-only add-ons, or even if it just speeds up porting games, I am all for it.
 
gio64 said:
I feel that the Intel/Apple connection will go far beyond a simple supplier/manufacturer relationship.
I am afraid that Apple will simply rely on Intel for the entire production of some of their models.
It's speculation, but it is a possibility, since the more Intel engineering goes into the product, the more reliability and component efficiency you can expect; and Apple certainly doesn't have a whole lot of experience in working with Intel hardware...

Why are you afraid of this? Aren't some of the current motherboards that Apple uses actually manufactured by Foxconn, who I believe also manufactures many of the Intel-branded motherboards? I know for sure that they manufacture the iMacs. So really, what is the change?
 
No, it's not magic...

AidenShaw said:
[...]
And for a commercial product, regression testing and QA need to be done on every architecture. So even if fat binaries are easy for the programmer, they double or triple the amount of work for the QA teams. ("triple" when x64 64-bit is added to the x86 32-bit and PPC binary.)

The "just check a box" line for the fat binaries is fantasy (or fallacy).
Having done significant commercial and enterprise development back in the NeXT days (involving fat binaries on at least 3 platforms), at least 95% of the time it was as simple as that. <gasp!> Since most of this is based on that work way back when, I'm making the assumption it will work about the same in OS X.

I guess I may be making the massive assumption that you are actually using the Apple frameworks, and not rolling your own libraries for everything under the sun. Endianness, etc. is all taken care of for you if you use them properly. If you are writing vanilla C code (vs. Objective-C using all the associated frameworks), then ya, you have to manage architecture differences yourself. Going that route, you will also have to manage your own localization, write your own XML parser, etc. If you insist on writing all your own bits and pieces to maximize every last CPU cycle, then yes, it is not just clicking on a check-box... you pay the price for that level of control. :)
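
Even when you do end up touching raw bytes, the Core Foundation byte-order helpers let the framework decide whether a swap is needed on the machine you happen to be running on. A minimal sketch, assuming a file format that stores its 32-bit fields big-endian:

#include <CoreFoundation/CoreFoundation.h>
#include <stdint.h>

/* CFSwapInt32BigToHost() is a no-op on PPC and a byte swap on x86,
 * so the same source does the right thing in both halves of a
 * universal binary. */
uint32_t field_from_file(uint32_t raw_big_endian)
{
    return CFSwapInt32BigToHost(raw_big_endian);
}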

After going through a release cycle or two, you will have to decide for yourself if QA'ing every architecture and combination of architectures is necessary.
 
I was thinking of cross-platform apps in particular

gweedo said:
I guess I may be making the massive assumption that you are actually using the Apple frameworks, and not rolling your own libraries for everything under the sun. Endianness, etc. is all taken care of for you if you use them properly. If you are writing vanilla C code (vs. Objective-C using all the associated frameworks), then ya, you have to manage architecture differences yourself.
Is Photoshop completely Objective-C? How about Office? Firefox? How about Fortran applications? ....
 
Ta da!

AidenShaw said:
Is Photoshop completely Objective-C? How about Office? Firefox? How about Fortran applications? ....

That would be akin to magic, yes. However, Mathematica is not a simple app and for sure not all ObjC... and it didn't sound like they had significant problems. But we all know what a mess apps can tend to be inside. Just as one can make a Java mess that can't work well across JVMs, one can make a mess in ObjC. If you have a well-structured mess that plays well with Xcode, then your pain might not be great. Your mileage will vary of course, but... I bet the apps from Omni worked without much trouble.

Firefox is already working on OS X for Intel, btw... as are Thunderbird and Camino...
http://josh.trancesoftware.com/firefox-1.0+.en-US.intelmac.dmg
http://josh.trancesoftware.com/mozilla/thunderbird-1.0+.en-US.mac.dmg
http://josh.trancesoftware.com/mozilla/Camino-x86.app.zip

Way to go guys!
 
pjkelnhofer said:
Why are you afraid of this? Aren't some of the current motherboards that Apple uses actually manufactured by Foxconn, who I believe also manufactures many of the Intel branded motherboards. I know for sure that they manufacture the iMacs. So really what is the change?
To date Apple have still been doing their own engineering and design work on the computers, even if others build them. They've continued to roll their own chipsets and so on. Not that it really matters in the end, but some are concerned that Apple may give up on doing their own engineering altogether and concentrate on making pretty cabinets.
 
Sol said:
Mac developers will have some tough choices to make in the next two years. The PowerMacs will use new PPC CPUs but consumer Macs will use x86 processors, so what are they supposed to do? They could write Universal Binaries that would run unoptimized on both, or they could write and optimize for PPC only or for x86 only. In the meantime, Windows developers have only x86 to write for, so their jobs are simpler.
Most developers do NOT optimize for the PPC right now--they optimize their general program flow. Also, Apple (since Panther) has provided optimization routines, so any developers taking advantage of those will automatically gain Intel optimizations with universal binaries. Additionally, all of the system calls ALL apps have to make are optimized by Apple. That said, there is a core of apps (and Apple's Pro and iLife apps are among them) that ARE invested in AltiVec optimizations; some of these apps will transition to the Apple optimization calls, thus gaining cross-platform optimization, with the others having to deal with optimizing on both platforms.
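
Those routines are presumably the Accelerate framework that shipped with Panther. A minimal sketch of what taking advantage of them looks like (the wrapper function is made up for the example; vDSP_vadd is the real Accelerate call):

#include <Accelerate/Accelerate.h>

/* c[i] = a[i] + b[i].  vDSP supplies an implementation tuned for the
 * CPU it runs on (AltiVec on PPC, SSE on Intel), so code written
 * against it picks up the other architecture's tuning for free in a
 * universal binary. */
void add_vectors(const float *a, const float *b, float *c, unsigned long n)
{
    vDSP_vadd(a, 1, b, 1, c, 1, n);
}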

Sol said:
This is why I hate the Intel Macs. However much faster than PPC the x86 hardware is supposed to be, Windows applications will be faster because Mac developers will have to write for two very different architectures in what is seen as a niche market.
For the reasons stated above, most developers will NOT have to write for both platforms--Apple's software hides most of the transition for them. A majority of what isn't hidden only needs to be written once (i.e., routines that have to byte-swap), so it's a one-time hit. Also, many cross-platform apps probably have Intel optimizations already--so they can just reuse those for the Mac. The most likely fallout is less AltiVec optimization being done. For example, I would very much doubt that Photoshop would be impacted--Adobe probably has a huge investment in Intel optimizations that the Mac side will be able to immediately take advantage of, or if not immediately, it would be a "one time" hit to make it available. And once done, no further work would be needed on the Mac side to stay optimized for Intel.

So, to reiterate, for the VAST majority of apps on the Mac, speed has NOTHING to do with optimizations directed at PPC (like AltiVec)--most of the speed has to do with how optimized Apple's underlying software is, as well as how optimized an app's general flow is. There WILL be apps that the transition will be VERY hard for, I'm sure, but those aren't in the majority. My guess is that the vast majority of these people will start working on Intel optimizations, and not improve their PPC optimizations any further.

Sol said:
I suspect most applications will not have OS X native versions at all and will rely on something like WINE or Virtual PC to run on OS X.
I think the biggest risk is that a Windows-only shop that might be looking into porting to the Mac market may decide to wait and see if they can get away with what you described. In theory, people are buying Macs for ease of use and productivity, so being forced to use crappy Windows apps would be considered a last resort, and they'd still buy a good Mac version.
 
cubist said:
They are VERY EXPENSIVE, and in my previous experience, not all that good. Yes, the code they produce is fast in places, but there are bugs both in the compiler itself and in the generated code. I would not recommend them even for Windows developers.

They are cheap.
Even as a student I bought a good Watcom C compiler for about US$500.
And for a company this is just peanuts.
 
AidenShaw said:
Is Photoshop completely Objective-C? How about Office? Firefox? How about Fortran applications? ....
Probably not.

But keep in mind that apps like Photoshop and Firefox are already built around portable code. They have to be, since they are distributed for multiple incompatible platforms already (Windows, Mac/PPC, and others). (I don't know about MS Office, since the Mac version is clearly not just a recompile of the Windows version.)

Although it would be a lot of work to port those apps to an incompatible platform, that work has already been done. They are already built for both x86 and PPC chips. They are already built for multiple operating systems. Assuming their portability is organized and not just haphazard #ifdef statements all over the place, they should be able to make a few minor edits to a portability file (probably a header or a makefile) to specify what the new platform is, and the rest should already be done.

Having developed cross-platform code in the past, I know firsthand that this can be done, and can even be easy to do if the build environment is architected in an intelligent fashion. One project I worked on was not only cross-platform, but cross-language. It was a development toolkit that could be compiled for C (using a proprietary object model) or C++, and could be easily compiled for a huge number of platforms including Mac OS (PPC Carbon), Windows (Win32), OS/2 (32-bit), VMS, and a lot of different varieties of UNIX (Linux, Solaris (x86 and SPARC), HP-UX, Irix, OSF-1, and AIX). Porting to a new platform involved copying/editing about 100 lines in a single header file, and creating a new "environment" file to define stuff used by the makefiles.
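
For the curious, that header was essentially a chain of platform tests feeding a common set of names. A made-up fragment in the same spirit (not the actual file):

/* port.h -- the one place that knows about platforms (fragment) */
#ifndef PORT_H
#define PORT_H

#if defined(__ppc__)
#  define PORT_BIG_ENDIAN  1
#elif defined(__i386__)
#  define PORT_BIG_ENDIAN  0
#else
#  error "port.h: add a section for this platform"
#endif

#if defined(_WIN32)
#  define PORT_PATH_SEP    '\\'
#  define PORT_EXPORT      __declspec(dllexport)
#else
#  define PORT_PATH_SEP    '/'
#  define PORT_EXPORT
#endif

#endif /* PORT_H */

The rest of the code only ever refers to the PORT_* names, so a new platform means touching this file and nothing else.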

Apple's Objective-C frameworks are very well-written cross-platform tools. Code that uses them generally needs very little modification to recompile for other platforms. But it isn't the only way. There have been many other cross-platform toolkits (one of which I helped develop). And even without a toolkit, writing portable code is not difficult as long as you keep portability in mind during the design and development phases of your project.
 
gweedo said:
That would be akin to magic, yes. However, Mathematica is not a simple app and for sure not all ObjC... and it didn't sound like they had significant problems.
Prior to the MacTel announcement, Mathematica was already ported to Windows, Mac OS and a wide variety of UNIX platforms. I'm sure they were able to quickly make a MacTel version because their engineers had already done all the hard work of making the code portable. So all they really need to do is change some definitions somewhere to turn on x86-specific code-paths and turn off PPC-specific code-paths, and recompile.

Now, if Mathematica had not already been ported to other platforms, if it was a Mac-only product, the porting work would have been much more than clicking a checkbox and editing a handful of lines.

How quickly you can port your project will depend greatly on what kind of platform-dependent code you have present. If you design your app for portability, or if you make extensive use of a portable toolkit (including the Apple/NeXT frameworks), then it won't be a lot of work. If not, then it will be.
 