1. Drop support for Classic
2. Drop support for Rosetta
3. Drop support for Universal binaries?
 

If anything, I am sure that support in Xcode for creating Universal binaries has disappeared or will soon. I would also imagine that the OS's ability to handle Universal binaries will slowly fade after that point.
 

Why would it? Universal binaries don't actually use anything extra on the system and allow a graceful way to ship multi-architecture binaries.

You know, x86 and x86_64 versions of an application inside one binary. Universal binaries are here to stay.
 

Doesn't this post completely contradict your claims about the high cost of having PPC streams in the universal binaries?
 

Universal Binaries don't just mean PPC, though that's the stigma, and I think he should clarify. Because UBs carry the stigma of PPC cross-platform compatibility, the official "Universal Binary" package in that sense is logically going to get canned, though there may still be, at least in the short term, UBs in the sense of binaries that are both 32- and 64-bit compatible.
 
Well I can't wait to get Lion for my G5 Quad!!


/sarcasm. Intel is the way forward for Apple, and for the sake of making a faster, less bloated OS, dropping PPC support is by far the right move.

**HAS ANYONE EVER CONSIDERED**

Just keep a legacy machine around for any old PPC apps? I mean, G5s are becoming easier and easier to score fully loaded in the $300-400 ballpark.
 
Doesn't this post completely contradict your claims about the high cost of having PPC streams in the universal binaries?

Nope. They still support and ship Macs that can run both x86 and x86_64 natively. Why would it contradict anything?

The problem is maintaining codebases and builds for an architecture they haven't shipped a new system for in the last five to six years, not maintaining codebases and builds for architectures they still ship.

They could also use Universal Binaries to ship x86, x86_64 and ARM inside a single binary. Universal Binaries don't mean PPC at all, not even close. It's just a format for fat binaries that supports multiple architectures.
 

AMD64 is also much more similar (i.e. it's nearly identical) to x86 than PPC is to either of them. It's not a lot of incremental effort to support both (I did it with millions of lines of code for years, starting back in the day when the only AMD64 machines were in the offices where I worked).
 
ARM can do it both ways depending on how you implement the architecture in the processor. ;)

I'm not talking about endianness (I assume that's what you're referring to). The high-order bit in PPC is numbered 0, and the number of the low-order bit changes depending on how many bits are in the architecture. It's crazy.
 
AMD64 is also much more similar (i.e. it's nearly identical) to x86 than PPC is to either of them.

But x64 has instructions that are completely foreign and unintelligible to an x86 processor. You may find streams with a few common instructions, or a few tens, or a few hundreds/thousands/millions, but arbitrary x64 code won't run on an x86 processor.

Besides, most modern compilers separate the source language and the intermediate language from the code generators. While x64 bears many family resemblances to x86, at the bit level an x86 processor can't do anything with x64 machine code.
_____

So, what I'm hearing is that Apple will be dropping x86 streams from the fat binaries soon. It's too much disk space, coding and QA effort to support systems that Apple started to drop in summer 2006. You've had five years warning about the transition. Any company making 32-bit software should be drawn and quartered. Everyone should have moved to x64 code by now. Any company using x86 installers should be shot. Any people using x86 code for their workflows deserve failure and bankruptcy and deserve to live on the streets with all their belongings in a stolen shopping cart.
 

You're not making any sense. x86 code runs fine natively on x86_64 CPUs. PPC code does not run natively on x86 or x86_64 CPUs.

The situations are far from analogous, and pretending they are is either trying to bait everyone into a long, drawn-out "Blu-ray thread" like l2m likes to do, or simply not understanding the concepts (which I know you do).

Fat binaries will stay around. Even after all 32-bit software has naturally disappeared (and it will eventually, since going 64-bit is almost effortless compared to switching architectures), they won't hamper anything and might find uses in spreading future architectures (Macs with ARM processors?).
 
I'm not talking about endianness (I assume that's what you're referring to). The high-order bit in PPC is numbered 0, and the number of the low-order bit changes depending on how many bits are in the architecture. It's crazy.

...and now all your rage over PPC makes sense. Making an assembler must be hell. I've always wondered why my university uses a highly modified MIPS architecture instead of just PPC or ARM for its development boards.
 
You're not making any sense. x86 code runs fine natively on x86_64 CPUs. PPC code does not run natively on x86 or x86_64 CPUs.

The situations are far from analogous and to pretend they are...

Fat binaries will stay around. Even after all 32-bit software has naturally disappeared (and it will eventually, since going 64-bit is almost effortless compared to switching architectures), they won't hamper anything and might find uses in spreading future architectures (Macs with ARM processors?).

I'm surprised that you can't see the parallels between x86 streams in fat binaries and PPC streams.
  • x86 streams cause bloat and QA headaches, just like PPC streams
  • all 10.7 systems can run x64 code, so x86 isn't necessary
  • Apple moved the majority of their products to x64 about 5 years ago

Some of the Rosetta issue is due to the fact that Apple didn't put out a clear roadmap, but some people argue that developers should have "read the tea leaves" to know that Rosetta was going to disappear.

Why isn't 10.7's drop of support for x86-only hardware "writing on the wall" for dropping x86 support?

I agree with you that the fat binary architecture will stick around, but question how many streams will be present inside them. A fat binary with only x64 code is still a fat binary.
___________

I'm not arguing that x86 support is doomed in the short term; I'm pointing out that many of the arguments being used to justify dropping PPC also apply to dropping x86, and that the "signs" from Apple that x86 is being dropped are just as "clear" as the signs that Rosetta was going away.

I put "clear" in quotes because the PPC/Rosetta thing is clear in hindsight, but was not clear at the time.
 
...and now all your rage over PPC makes sense. Making an assembler must be hell. I've always wondered why my university uses a highly modified MIPS architecture instead of just PPC or ARM for its development boards.

I used to design PPCs (I was the floating-point guy at Exponential Technology at the time it went under). It was very, very confusing. Trying to implement binary math while the bits are numbered in reverse... Damned IBM.
 
I'm surprised that you can't see the parallels between x86 streams in fat binaries and PPC streams.

And I'm surprised you can't understand how unrelated the two situations are. x86, at a low level, is very much like x86_64: same endianness, same high-bit numbering, and the base instructions are the same.

That rules out a lot of potential bugs in frameworks that would be introduced by those architectural differences. Eventually x86 will go away, it will be stripped from the frameworks, and you will be unable to run 32-bit code on OS X. Will that be 10.9? 10.10? Who knows.

But that doesn't mean fat binaries are going away. At all. Why do away with a container format? It's just a container. So what if for a while it contains only one architecture's binary?
 


Given the rate Apple is going, I'd say in another 2-3 years Apple will no longer support running 32-bit code. I honestly call that a huge waste of resources: why bother recompiling something to be 64-bit if you gain nothing? Hell, you're more likely to slow the thing down than speed it up.

You know as well as I do that on large programs, recompiling for a different CPU can introduce an entirely new world of problems, big time going from RISC to CISC.
 

Compiling to 64-bit will speed up code by 10-20% on average (the higher number on more recent Intel chips and all AMD chips) even if all operands stay 32-bit and you don't need the extended address space, thanks to the increased number of registers, improved addressing modes, and other factors.
 
3. Drop support for Universal binaries?

Universal binaries are being used in iOS to support armv6 and armv7. I wouldn't be surprised to see Universal binaries used for ARM in some future Mac OS X, just not for PPC any more, since IBM (and Motorola) dropped the ball on Apple.
 
Proof right there you don't read my posts. I said "Lion is a 4GB download". Again. Lion is a 4GB download. Just to make sure you got it : Lion is a 4GB download.

No, it's proof that YOU don't read my posts, because I acknowledged that you said it was Lion. My point all along is that you pointing out something IRRELEVANT (i.e. Lion's size) is just an attention getter. It means nothing. If you don't want to download 4GB, complain to Apple. It has nothing to do with how much more space including Rosetta would take up relative to Lion. Personally, I think going download-only for Lion is STUPID AS HELL, but Steve just has to ditch the DVD drive ASAP so he can tell everyone that BD is a waste because discs are dead. :rolleyes:

Read my posts, then we'll talk. Until then, cry and scream until you're red in the face, Apple is making the proper choice in transitioning off a transitional emulation layer.

If Rosetta is so "easy" to make work on Lion, then let the community take care of it. You know, like those nice folks who provide DOSBox, qEmu, DOSemu, Virtual Box and other emulation/virtualization packages. Apple is done with PPC.

I'm pretty sure VMware or Parallels might have an interest in doing it (seeing all the work they have already done), but Apple won't let them virtualize OS X outside of Server for some stupid reason. Apple could license the job out to a 3rd party for profit as well, but I think Steve's ego won't let him do it. He wants to get away from DVD drives, PPC and basically everything from the past.

And once again, I'm not talking about Rosetta as-is, but a virtualized machine using Rosetta as the core engine. But since you don't read my posts, you don't seem to know that. You're too busy ranting about how your ideas are always right to even notice WTF anyone is talking about.

Compiling to 64 bit will speed up code by 10-20% on average (the higher number on more recent intel chips and all amd chips) even if all operands stay 32 bit and you don't need the extended address space, due to the increased number of registers, improved addressing modes, and other factors.

Doesn't it also use a bit more memory (relatively speaking) than the same program compiled for 32-bit, since various word lengths double even if they're not fully used? You can say memory is cheap, but programs and operating systems just keep getting more and more bloated over the years for various reasons (laziness probably being number 1), such that processor and RAM capability gains are largely wasted. For example, how much power does it take to run the latest version of Microsoft Word? How much did it take to run ProWrite on the Amiga by comparison (or Font Master II on the C64, even though it wasn't WYSIWYG)?

Faster computers are often just larger crutches for bad programming practices in general, and going higher and higher level with the languages means less and less optimization of code. Hell, does anyone optimize at all most of the time? I wrote Visual Basic Script code for a simulated/emulated pinball platform for years, and just by playing with different routines to do the same thing, I could often get 3-4x faster operation (which was good, since you wanted the emulated pinball game to run as fast as possible for as many people as possible, to reach a wider audience). No one cares today if their program could run 100x faster as long as it's "good enough" on an average machine. So what if it keeps you from doing more multitasking or your browser slows down?

Look at iOS. It's growing so bloated so fast that Apple has to keep dropping older models from support in newer versions because they run too slowly (they also like to do it to force upgrades in some cases, but it's obvious from at least one model that did get a newer version that there's some SERIOUS BLOAT going on). OS X itself is getting slower every version, when up until Tiger it was getting faster every version. They apparently think that by dropping older products from support you won't notice these things. I'm afraid Apple is getting sloppy just like Microsoft, and it's hurting their product. But like Microsoft, the average person doesn't even notice such things. Apple used to be better than that, and it's a shame.
 
And I'm surprised you can't understand how unrelated the two situations are. x86, at a low level, is very much like x86_64: same endianness, same high-bit numbering, and the base instructions are the same.

That rules out a lot of potential bugs in frameworks that would be introduced by those architectural differences.

And introduces bugs due to data-size differences (integer overflow, for example), data-structure differences (alignment changes due to different data type sizes), different instruction sets (MMX/SSE/SSE2/AVX/etc.), different numbers of registers, different sizes of registers, ...

How can you claim that bloat and QA issues with PPC streams are too expensive for Apple to deal with, but then turn around and claim that they're non-existent for x86 streams?


I agree with you that the fat binary architecture will stick around, but question how many streams will be present inside them. A fat binary with only x64 code is still a fat binary.

Eventually x86 will go away, it will be stripped from the frameworks, and you will be unable to run 32-bit code on OS X. Will that be 10.9? 10.10? Who knows.

But that doesn't mean fat binaries are going away. At all. Why do away with a container format? It's just a container. So what if for a while it contains only one architecture's binary?

It's hard to carry on a good argument with you when you repeat the same things that I've said. ;)


Doesn't it also use a bit more memory (relatively speaking) than the same program compiled for 32-bit since various word lengths double even if they're not fully used?

In general, only "a bit more" memory.

Arithmetic data (byte/short/int/int64/float/double/character and the vector types) is the same size on both x64 and x86. Depending on the platform and language, "long" data may remain 32-bit or grow to 64-bit.

Pointers and size_t variables double in size, so any memory growth due to them is proportional to the number of pointers and size_t variables in memory. Data structures that aren't thoughtfully laid out can grow a bit more, because the compiler will naturally align data: if a pointer isn't on a 64-bit (8-octet) boundary, padded (wasted) space is added to the structure to put the pointer on a natural boundary.

For most applications, memory growth is a minor drawback compared to the performance advantages of the improved architecture with more than twice as many available registers.
 
Doesn't it also use a bit more memory (relatively speaking) than the same program compiled for 32-bit since various word lengths double even if they're not fully used?

Generally not much. This sort of thing can happen when someone recompiles code intended for 32-bit as 64-bit without changing anything (unless the 32-bit code was written carefully). But if my 64-bit code specifically asks for a 32-bit value, then that's what I get. If I'm lazy and declare every variable to be the biggest possible size, then I get what I deserve. Most people who write 32-bit code without thinking about an eventual 64-bit port don't end up with code that makes everything 64 bits, though.

Pointers, on the other hand, will increase in size to 64 bits. Whether that increases the memory footprint of your code depends on things like whether you are storing memory pointers in containers (generally not a good idea anyway, and not something that happens a lot in Objective-C code), or whether you store one variable to hold a pointer and then store offsets in the container (much more common, I should think, at least in Objective-C code).

In practice, I think it's extremely rare that you see significant memory increases for programs that aren't actually storing real data in all that memory (a lot of the time you'll see 64-bit ports use more memory because they are really making use of it - rather than having to rely on virtual memory, they instead make use of the larger physical address space. This is a good thing.)
 
Pointers, on the other hand, will increase in size to 64-bits. Whether that increases the memory footprint of your code depends on things like whether you are storing memory pointers in containers (generally not a good idea, anyway, and not something that happens a lot with Objective-C code), or whether you store one variable to hold a pointer, and then store offsets in the container (much more common, I should think, at least in Objective-C code).

Not being familiar with Objective-C, could you help me understand this? Offsets within a structure (is a container a structure?) might be smaller than 64 bits depending on the size of the structure, but pointers to other structures (containers) would still have to be 64 bits, no?


In practice, I think it's extremely rare that you see significant memory increases for programs that aren't actually storing real data in all that memory

Twenty years ago, when I was helping customers do 64-bit ports (on DEC's Alpha emulators), we'd rarely see much memory expansion (at least after rearranging structures to reduce alignment padding as much as possible). (And note that in 1992 a full-blown 64-bit desktop computer had several times less memory than you'd accept on a graphics card today; 256 MiB was considered huge.)

We did run into a couple of applications that nearly doubled in memory requirements, but on inspection they were using structures for a binary tree with forward and back links, and up and down links, where the actual leaves in the tree were a pair of longwords. Nearly 90% of the data in the active program was pointer data.
 