I mean, why support two architectures when, based on Apple's customer database, they know they don't really have to? There's no new PPC hardware coming out of Apple, and whatever other PPC hardware there is still being made outside of Apple is probably so alien that Mac OS X wouldn't, in the normal course of events, support it anyhow. Besides, do you really want to have to test your code on more architectures than you really have to? If it were me, I wouldn't.

Coding on multiple platforms is a great way to improve the quality of your code. I was recently working on someone else's code, and it had clearly not been tested on anything other than their workstation. The code would compile on x86 but not on PPC, and on x86 only if optimisation was turned off. It turned out the code was full of buffer overruns, which only showed up on x86 when optimisation was turned on; PPC would just dump core. Once I found the buffers and fixed the problem, it compiled fine on both platforms with optimisation.
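To illustrate the class of bug, here is a minimal sketch (my own, not the actual code from that project) of the sort of overrun that behaves differently per platform and optimisation level:

/* A minimal off-by-one overrun: eight characters plus the
 * terminating '\0' is nine bytes written into an eight-byte buffer.
 * This is undefined behaviour, so the symptom depends on stack
 * layout: it may appear to "work" at -O0 on x86 and dump core
 * (or break at -O2) elsewhere. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[8];
    strcpy(buf, "12345678");   /* writes 9 bytes into buf[8] */
    printf("%s\n", buf);
    return 0;
}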

Of course, this is an extreme example, but it shows how bugs can lurk in code if you only test on one platform. I regularly compile my programs on Linux (32-bit and 64-bit), OS X (x86 and PPC) and even Windows, and it has helped me find some really sneaky bugs.
 
Makes sense. By the time Apple releases OS 10.6 in 2009-ish, PPC will be totally obsolete, especially with everything moving up to 4 and 8 cores... I mean, seriously, a sub-GHz single processor in two years??? That seems insane!

Technology soon becomes old news; as soon as you get it out of the box it's obsolete. But if the machine does the job it was intended to do, and does it well, then why change it!
 
GreatDrok:

While I understand the point you're trying to advance, you're simply mistaken in your concept of what factors specifically contribute to improving code. First off, different processor architectures work differently. Secondly, different processor architectures (or even branches of different architectures) have different optimization technologies. Thirdly, if you go and look at each branch of any architecture, you will note that the chips themselves are all different iterations, each with their own unique set of anomalies, quirks, and chip-designer-made assumptions.

Fortunately, they support fall-back architectures, which is why you see most software in the Intel world coded for some weird-ass mix of i386 + SSE2/3, or whatever. But to make the leap that learning how to code properly for a completely different architecture teaches you how to code better for your own, well...

I mean, if you studied German (to use a completely different kind of example), how does that teach you to speak better English? It doesn't.

Supporting multiple architectures (which in any kind of responsible sense means giving each one of them somewhat equal priority and status) means having to cater to lower and lower common denominators across all of them, until eventually the disparity between the top and bottom of the scale is so vast that you can only cater to the "lowest common denominator". Besides that, when you want to develop new software to take advantage of new technologies, why would you deliberately want to muff it by forcing it to run on old, by definition not-up-to-date hardware? That just doesn't make any sense.

Furthermore, it is NOT in Apple's (or anyone else's) business interest on the developer/manufacturer side to do this, since it dilutes the credibility and legitimacy of their efforts to sell current-tech products. And it really doesn't help the consumer very much, on the whole, since they're having to make do with either slower-and-slower running software, or software which then has to deliberately feature-strip itself (as sufficient time passes) in order to run in the first place.

Look, I'm not trying to say I like the notion that, every time Apple (or insert your favorite company here) wants to sell a new product, we have to give up what we've got and re-buy, especially since the perception is we're doing this largely for the good of the executive and the shareholders. And clearly, there's potentially a lot more development that could be done to make PPC run better than it ever did while it was still in the spotlight. However, that's simply not how things work.

And at this point, as I have stated several times before, I look forward to upgrading to the newest architecture, since I'm stuck on an 800MHz G4 myself.
 
GreatDrok:

While I understand the point you're trying to advance, you're simply mistaken in your concept of what factors specifically contribute to improving code. First off, different processor architectures work differently. Secondly,

Depends on just what you mean by programming. If you are simply writing a C program, there is little reason to concern yourself with the architecture beyond making sure that you don't allocate a huge array and leap around all over it, rendering the cache useless. GCC handles all the platform optimisation in the case of Xcode anyway, so writing C that will compile and run cleanly on multiple architectures isn't an issue as such. However, as different architectures can respond differently to mistakes, it is a good idea to compile and run on a number of platforms as a final sanity check.
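To make the cache point concrete, here is a small sketch (mine, and the array size is an arbitrary assumption): both functions compute the same sum, but the second strides across rows and defeats the cache on any architecture.

#include <stddef.h>

#define N 1024   /* arbitrary size, for illustration only */

/* Walks memory sequentially: each fetched cache line is used fully. */
double sum_row_major(double a[N][N])
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Leaps N * sizeof(double) bytes per step: touches a new cache line
 * on nearly every access and runs far slower on any modern CPU. */
double sum_col_major(double a[N][N])
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}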

On the other hand, if you are hand-optimising in assembler for some weird-ass platform, then you are investing a great deal of effort in a particular platform. Really, you should only be doing this if you are looking to get performance that wouldn't be achievable using standard programming languages. Back in the early '90s I was writing code on a 16384-processor supercomputer, and I would sit with a sheet of the timings and latencies for the CPU instructions so I could reorder the calls to memory, since the CPU didn't support on-the-fly instruction reordering like modern CPUs do. The performance benefit of doing this rather than writing simple C was on the order of an eight-fold increase, simply because there was no way in C to express much of what I was doing (which was far more than instruction reordering, by the way).

different processor architectures (or even branches of different architectures) have different optimization technologies. Thirdly, if you go and look at each branch of any architecture, you will note that the chips themselves are all different iterations, each with their own unique set of anomalies, quirks, and chip-designer-made assumptions.
For standard C code, this is all the domain of the compiler writers. Developers should be aware of this stuff, but they should be writing portable code and, where necessary, include platform-specific optimisations only if there is a real benefit. For example, I worked on some code a few years back which used a chunk of SSE assembly to speed up string comparisons. This was developed on an Intel P3, and for some reason the performance on AMD was much worse than expected. It turned out that, due to a peculiarity of the Alpha EV6 bus that Athlons used, a section of the code was horribly inefficient on AMD, so I replaced it with more suitable code, and the AMD implementation then went substantially quicker than the P3. This is one of the reasons I tend not to believe all these benchmarks that show AMD being so much slower than Intel for SSE stuff. As you say, they are very different architectures under the skin.
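For what it's worth, here is a sketch of that pattern (my own illustration, not the original string code): a portable baseline that every platform gets, with a hand-tuned path compiled in only where it has actually been measured to win. The SSE2 intrinsics are real; the 16-byte block handling and the dispatch policy are assumptions for illustration.

#include <string.h>   /* memcmp */

#ifdef __SSE2__
#include <emmintrin.h>

/* Compare 16 bytes at a time; fall back to memcmp to order a
 * mismatching block and to handle the tail. */
static int compare_sse2(const char *a, const char *b, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        /* movemask is 0xFFFF only when all 16 bytes matched */
        if (_mm_movemask_epi8(_mm_cmpeq_epi8(va, vb)) != 0xFFFF)
            return memcmp(a + i, b + i, 16);
    }
    return memcmp(a + i, b + i, n - i);
}
#endif

int block_compare(const char *a, const char *b, size_t n)
{
#ifdef __SSE2__
    return compare_sse2(a, b, n);   /* only where benchmarks justified it */
#else
    return memcmp(a, b, n);         /* portable baseline for everyone else */
#endif
}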

However, Apple's vector libraries wrap the SSE-type instructions, so it is up to Apple to implement them efficiently for each architecture, and once that is done, that is that. Developers shouldn't have to worry.
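As a small illustration of that point (mine, not from the post): the same Accelerate call works on both PPC and Intel, with Apple choosing AltiVec or SSE underneath. vDSP_vadd is a real Accelerate routine; the data here is made up.

/* Build with: cc vadd_demo.c -framework Accelerate */
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* c[i] = a[i] + b[i], stride 1 through each array; the library
     * picks the vector unit for the host architecture. */
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}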

Fortunately, they support fall-back architectures, which is why you see most software in the Intel world coded for some weird-ass mix of i386 + SSE2/3, or whatever. But to make the leap that learning how to code properly for a completely different architecture teaches you how to code better for your own, well...

Well what? You generally shouldn't need to know the architecture, and if you do, then focus on the bottlenecks and fix them. It depends on what you consider makes a better programmer, I guess. Personally, I think being able to work on many platforms makes for better code. Others think that being able to churn out VB quickly makes them better programmers. It depends on the domain. In my field, cross-platform code is a must.

I mean, if you studied German (to use a completely different kind of example), how does that teach you to speak better English? It doesn't.

Actually, multilingual people tend to have much better language skills (i.e. they are better able to express themselves) than monolinguals.

Supporting multiple architectures (which in any kind of responsible sense means giving each one of them somewhat equal priority and status) means having to cater to lower and lower common denominators across all of them, until eventually the disparity between the top and bottom of the scale is so vast that you can only cater to the "lowest common denominator". Besides that, when you want to develop new software to take advantage of new technologies, why would you deliberately want to muff it by forcing it to run on old, by definition not-up-to-date hardware? That just doesn't make any sense.
Do you remember project Marklar? Apple compiled all versions of OS X on Intel because it was a fallback position. OS X is very portable and it wouldn't make sense to drop that portability like MS did with NT. At some point in the future, they may decide to move from Intel CPUs to some other new killer platform. Never say never.
 
GreatDrok:

I'm not trying to dispute a lot of your points on a purely technical basis, and the point about Apple having a history of keeping its options open by covertly coding for x86, and probably planning to do so again for future CPU architectures, is well received by me. But I still don't see how any of this really supports the pro-PPC-continued-support portion of the argument in this thread.

While it would clearly be a lie to say that "PPC is dead", by all indications its time has passed, much as the Amiga's innovative approach was passed up by both PPC-based Macs and x86-based Macs and PCs through sheer brute force of more efficient CPUs, higher clock speeds, etc. Frankly, I don't see Apple returning to PPC, for political issues as well as technical ones. And no, such things as the PPC-based Cell processor do not fundamentally change my thinking on this matter. (Not that you brought up Cell, but I'm just sayin'...)

Obviously there are times when one has to get to that level of intimacy with the hardware, as your personal examples illustrate. And certainly there have been, probably are, and likely will be such situations in the future facing some number of Mac OS X developers. But that still doesn't help to back up the rationale for Apple specifically continuing to support the PPC architecture.

If I were Steve Jobs, there's no question my "orders to the troops" would include maintaining the portability of OS code for "TBD"-type hardware support scenarios. It's just that I feel looking back and, more to the point, burdening oneself with supporting an architecture that you left (one might be inclined to use the less gentle term "abandoned", but I digress...) accomplishes nothing positive, and is fundamentally a waste of resources.
 
GreatDrok:


I mean, if you studied German (to use a completely different kind of example), how does that teach you to speak better English? It doesn't.

Studying German, or any declined language is enormously helpful in understanding the case structure of language... which will empower one to speak better English!

Similar analogies hold for computer languages. Multilingual speakers and programmers often share a more in-depth understanding of language structure.
 
If I were Steve Jobs, there's no question my "orders to the troops" would include maintaining the portability of OS code for "TBD"-type hardware support scenarios. It's just that I feel looking back and, more to the point, burdening oneself with supporting an architecture that you left (one might be inclined to use the less gentle term "abandoned", but I digress...) accomplishes nothing positive, and is fundamentally a waste of resources.

The point isn't whether PPC has been abandoned as such; I agree it has. The point is that to maintain portability you need to make sure the code runs as specified on multiple platforms. At the moment, PPC and Intel both still exist. Sure, some other platform such as MIPS or even Alpha could also be maintained as a build platform, but they are deader than PPC, and Apple has plenty of the PPC hardware it built available. So, for the exercise of maintaining portability, it makes sense for them to continue PPC builds even as they move forward, simply because no obvious successor to x86 has presented itself.

If it weren't for the fact that you can build perfectly portable code with no extra effort in Xcode, I would grant you the resources point. But it isn't difficult to keep building portable code while still supporting PPC, using the available libraries to implement AltiVec/SSE operations transparently through the Accelerate framework.

The only platform that can realistically be used to maintain portability of the codebase at present is PPC. Drop PPC and what other platform would you use that does the job?

In the end, I think PPC will be dropped, but given the typical life span of Macs, if 10.6 comes out less than five years after the last PPC Macs left the production line, then Apple must include PPC support for those machines at the very least. A G5 Power Mac is still a pretty hefty beast. I don't expect any of my G4s to continue working beyond Leopard, and I don't have any G5s. My newest G4 is now three years old, so I will likely have retired it from daily service by the time 10.6 comes out anyway.

10.7 really is likely not to support PPC, but I wouldn't be surprised if Apple were still building 10.7 on a G5 or two even if that build is never released to retail. You would be simply amazed at some of the nasty bugs in C code that get caught when you build on another platform.
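One classic member of that bug family (my own illustration, not something from this thread) is byte-order dependence: code that reinterprets raw bytes as an integer works on little-endian x86 and silently returns garbage on big-endian PPC, so a PPC build box catches it immediately.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Four bytes of a little-endian file format, value 0x12345678 */
    const unsigned char header[4] = {0x78, 0x56, 0x34, 0x12};

    /* Broken: the result depends on the host's byte order, so this
     * "works" on x86 and gives 0x78563412 on PPC. */
    uint32_t naive = *(const uint32_t *)header;

    /* Portable: assemble the value explicitly, byte by byte. */
    uint32_t portable = (uint32_t)header[0]
                      | (uint32_t)header[1] << 8
                      | (uint32_t)header[2] << 16
                      | (uint32_t)header[3] << 24;

    printf("naive=0x%08x portable=0x%08x\n", naive, portable);
    return 0;
}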
 
Sure, some other platform such as MIPS or even Alpha could also be maintained as a build platform, but they are deader than PPC.
I'm sure Apple won't bother, but MIPS is still very popular in low-power applications. A MIPS-based cell phone would be really cool. You're right that MIPS is dead as a mainline CPU :mad: , though I still harbor fantasies of a 3GHz 4-core R10k. It would utterly smoke anything from Intel.
 
I'm sure this has already been brought up, but I hope they don't bother with PPC on 10.6.

I mean, Windows needs to do this too. Draw the damn line already and support hardware that's, for example, no older than two years old. So anything two years old or newer is supported. Stop supporting PIIIs and old-ass hardware that should be dead. The people who still run those systems are most likely grandmas anyway, who couldn't give a rip whether they have the next best OS (and most likely only play Chess or some crap anyway).

I know when they were showing off the Windows 7 kernel (or something along those lines), they were touting how small and lightweight it was.
 
I asked for some proof in my last post. To the best of my knowledge you at least need a PowerPC 74xx (G4) processor.

It won't run on any G3; the only G3 models that some people have had success with are G4-upgraded Pismos.

There's a guide on here made by suneohair that describes in detail how to make Leopard run on ANY G3 (although under 500 MHz is not recommended, heh heh).

I've got my eye on a 1.2 GHz iBook G4 for my main machine, and was planning to get 10.6 when it was released.

Don't get me wrong, Leopard will be OK eventually (at about 10.5.7 or so), but I was really hoping to skip it. Oh well, if this is true, I guess I'll be buying a copy of Leopard.

Although I would be surprised if they didn't at least support high-end G5s...
 
Studying German, or any declined language is enormously helpful in understanding the case structure of language... which will empower one to speak better English!

Similar analogies hold for computer languages. Multilingual speakers and programmers often share a more in-depth understanding of language structure.

^^ Agreed. From my studies in French and Linguistics, knowing more languages is always better, and there is no "better language." I think the comparison to actual spoken language was a little dangerous, but your point was understood in my corner nonetheless.

I think it might be more worthwhile to argue all this in a few years, as support will not be in the realm of a "drop-off" until then, and even if PowerPC support is dropped in the next OS, I'm sure Leopard will suffice with updates for diehard PPC users.

In two to three years I really do hope there are quite a few nice and technologically "wow" features in the next OS that would make using PPC kind of futile... it will be the next decade, and our needs will have moved on...
 
I think it is disappointing that they might be dropping the PPC processors, but it makes sense that they would, because by the release the PPCs will be four years old and most people would favor the older Intel Macs over really old PPCs.
 
I think it is disappointing that they might be dropping the PPC processors, but it makes sense that they would, because by the release the PPCs will be four years old and most people would favor the older Intel Macs over really old PPCs.

Disappointing? Hardly. 10.6 will be out in... what? Over a year (at least), I'm sure. PPC is dead, people (to Macs, anyway). I don't even understand how this can be argued. I wasn't complaining when my G4 iMac couldn't install Leopard. There's a statute of limitations on this kind of thing. It's like being sad that your CRT TV doesn't support HD.
 
Both of my CRTs are HDTVs...

Yes, I've seen this "HD" quality from a CRT TV (a high-end one, too). The center of the screen looked good, and the rest of it looked awful. It doesn't even come close to an actual flat-screen (plasma, etc.) HDTV.
 
I'm sure this has already been brought up, but I hope they don't bother with PPC on 10.6.

I mean, Windows needs to do this too. Draw the damn line already and support hardware that's, for example, no older than two years old. So anything two years old or newer is supported. Stop supporting PIIIs and old-ass hardware that should be dead. The people who still run those systems are most likely grandmas anyway, who couldn't give a rip whether they have the next best OS (and most likely only play Chess or some crap anyway).

I know when they were showing off the Windows 7 kernel (or something along those lines), they were touting how small and lightweight it was.
What they were showing off was MinWin, not the Windows 7 kernel. MinWin isn't going to be used in the next Windows OS, it's just a proof of concept at this point.
 
GreatDrok and gkroeger and philos:

With all due respect, gentlemen, you're looking at a side effect, not a cause, vis-à-vis improving your primary language by learning secondary languages. If you're a poor user of your primary language, you may or may not be a poor user of secondary ones as well. I frankly see no evidence to suggest that learning one improves the other, and I refuse to accept it, especially as experience tells me otherwise.

Quality of usage is a function of one's own ambitions to properly utilize any language, whether primary, secondary, tertiary, or what-have-you.

And as for Windows 7... wait 'til you see Windows 8! :p
 
Hey, for all we know, 10.6 may be Intel 64-bit only.

I am taking a guess here, but there may be more G4/G5 PPC processors out there than Intel 32-bit systems.

Yes, they (Intel 32-bit) won't be abandoned yet; I was just pulling your chain.

However, once that happens (no PPC and no Intel 32-bit), the binaries can become simple again and occupy a lot less disk space, and the OS can get more optimizations.
 