poppe said:
OS X, since the G5 (I think), has been 64-bit, so it should be ready for 64-bit, yes? Or was OS X never made 64-bit?


Only some core components of OS X are/were 64-bit for the G5. For the most part, nothing UI-, Cocoa-, or Carbon-based was 64-bit. That covers about 98% of the apps out there, and essentially 100% of consumer apps.

Motion might have been the only application released for the general market that was a 64-bit app.

Max.
 
maxvamp said:
In order to further increase performance, should I be coding in 16-bit? What about 8-bit?

Max.

If you're using an OS that can natively run 16-bit or 8-bit data on a processor that has a 16-bit or 8-bit operating mode, sure! (For example, the AltiVec instruction set CAN run on 8-bit data. If you want to do vector manipulation on a set of data, and 8 bits is enough, you can run 16 operations at once, since the AltiVec unit is 128 bits wide, and can be split into smaller segments.)

Unfortunately, to run native 16-bit instructions without 'padding' on a modern Intel processor requires dropping it into 16-bit 'Real' mode. No current OS will do that. (But if you want to install Windows 3.1 on your iMac Core Duo; go for it!) The PowerPC has no native 16-bit instruction set, so sorry, you're out of luck there.
 
poppe said:
OS X, since the G5 (I think), has been 64-bit, so it should be ready for 64-bit, yes? Or was OS X never made 64-bit?

10.3 and 10.4 have partial 64-bit support.
 
maxvamp said:
In order to further increase performance, should I be coding in 16-bit? What about 8-bit?

Max.

IBM and Microsoft used to write OS/2 together (versions 1.x). When they moved to OS/2 2.0, they wanted to make it pure 32-bit but instead kept key components 16-bit. Why? They performed better. It was faster to keep core components split-personality, with a 16-bit half and a 32-bit half and thunking between them, than to write new pure 32-bit code. OS/2 3.0 became pure 32-bit... and also became Windows NT when IBM and Microsoft parted ways. It took many years for NT (2000/XP) to become acceptable in terms of performance and frugality, and one could argue that owed more to Moore's law increasing horsepower than to code optimization.

Microsoft did the same thing again with Windows 95. They could have gone completely 32-bit, and initially planned to. But they found that the pure 32-bit code was slower, and it was faster to keep the important components (kernel/gdi/user) 16-bit and give them a 32-bit thunking layer.

Now, how much of either case is the inefficiency of moving to a larger word size, and how much is that the 16-bit code was hand-tweaked assembly code that had been through years of refinement, who knows.

So, I know you're trying to be sarcastic, but increasing word size does not mean faster performance.
 
jhu said:
10.3 and 10.4 have partial 64-bit support.
Yeah, 64-bit libraries for large addressing, but they're not pure 64-bit operating systems. Like Windows XP x64... then again, we don't want to talk about THAT one, do we? :rolleyes:
 
janstett said:
IBM and Microsoft used to write OS/2 together (versions 1.x). When they moved to OS/2 2.0, they wanted to make it pure 32-bit but instead kept key components 16-bit. Why? They performed better. It was faster to keep core components split-personality, with a 16-bit half and a 32-bit half and thunking between them, than to write new pure 32-bit code. OS/2 3.0 became pure 32-bit... and also became Windows NT when IBM and Microsoft parted ways. It took many years for NT (2000/XP) to become acceptable in terms of performance and frugality, and one could argue that owed more to Moore's law increasing horsepower than to code optimization.

Microsoft did the same thing again with Windows 95. They could have gone completely 32-bit, and initially planned to. But they found that the pure 32-bit code was slower, and it was faster to keep the important components (kernel/gdi/user) 16-bit and give them a 32-bit thunking layer.

Now, how much of either case is the inefficiency of moving to a larger word size, and how much is that the 16-bit code was hand-tweaked assembly code that had been through years of refinement, who knows.

So, I know you're trying to be sarcastic, but increasing word size does not mean faster performance.

Although, contrary to my previous comment, running a current Intel chip in 16-bit mode WOULD make it slower. I had forgotten my history. (Which your post reminded me of.)

When Windows NT first came out, 486 and Pentium processors ran faster on it than on Windows 3.1, or even (upon its release) Windows 95. NT was 'pure' 32-bit, while even Windows 95 retained 16-bit code in parts of its core (requiring 'thunking').

Upon the release of the Pentium Pro, speed freaks were dismayed. It performed WORSE in Windows 3.1 and DOS than the previous Pentium, at faster clock speeds (a 200 MHz Pentium Pro was outrun by a 166 MHz Pentium). This defied logic! The Pentium Pro was a significantly better architecture, with its own on-package, full-speed L2 cache. (A first, at the time.) Why? Because Intel had abandoned 16-bit computing. The Pentium Pro architecture (a.k.a. 'P6') was designed from the ground up for 32-bit computing. While a 166 MHz Pentium may have beaten a 200 MHz Pentium Pro in 16-bit operations, a 166 MHz Pentium Pro beat even a 200 MHz Pentium MMX in 32-bit operations.

That helped propel sales of both the Pentium Pro, and Windows NT, to the corporate market.

The Pentium Pro begat the Pentium II, which begat the Celeron, Pentium II Xeon, Pentium III, Pentium III Xeon, and Pentium M. Then Intel abandoned this ground-breaking architecture for a new one, dubbed 'NetBurst'. Tuned for pure MHz, NetBurst became the Pentium 4, Xeon, (new) Celeron, and Pentium D. When Intel realized that tuning for pure MHz actually made a less efficient, more power-hungry processor, they went back to the drawing board, reviving the core ideas of the more-than-10-year-old P6 architecture and adding the best parts of NetBurst to come up with the 'Core' architecture at the heart of the latest, incredibly fast chips.
 
Eidorian said:
Yeah, 64-bit libraries for large addressing, but they're not pure 64-bit operating systems. Like Windows XP x64... then again, we don't want to talk about THAT one, do we? :rolleyes:
Let's see: at my school, the video card driver would drop out every other reboot, if not every reboot (yes, I'm talking about the x64 version). What else... software wouldn't work (does x64 have a 32-bit thunk or compatibility layer?). Anyway, x64 isn't the best place to go. Now FreeDOS has a second project trying to bump it up to 32-bit, because, as the posts above establish, 32-bit is now officially faster than 16-bit. What about 64-bit? The architecture and the OS must meet each other's requirements.
 
I run 64-bit Linux. It still doesn't have full support, which stinks. I guess Linux was the first to offer an OS when the first 64-bit processors came out. I think that was around 2002.
 
I don't know if this was said, but I would put a lot of money on 10.5 being 64-bit, or at least better at whatever it does, and making the 64-bit computers work faster.

Or am I wrong?
 
poppe said:
I don't know if this was said, but I would put a lot of money on 10.5 being 64-bit, or at least better at whatever it does, and making the 64-bit computers work faster.

Or am I wrong?

On PowerPC systems, merely running in 64-bit mode doesn't make anything faster. In fact, for some activities, it can be marginally slower (moving 64 bits of data instead of 32 means more traffic through memory and the front-side bus, and it effectively makes the caches half as big). If you actually work on 64-bit datasets, 64-bit mode is obviously faster, since an operation takes one clock cycle instead of two. (Yes, 32-bit processors can work on 64-bit data; they just chop the data in two.)

On Intel systems, merely running in 64-bit mode gives a slight increase in performance. The caveats regarding PowerPC above still apply; but when an x86 processor runs in 64-bit mode, it gains twice as many general-purpose registers (16 instead of 8), and THAT adds speed. (Maybe a 10-15% improvement over running the exact same processor-intensive task in 32-bit mode.) And again, working on genuinely 64-bit datasets is faster as well.

However, the BIG advantage of 64-bit mode (at least for now) is the ability to address more than 4 GB of memory. Intel x86 systems can do this in 32-bit mode through trickery (called 'Physical Address Extension') for the OS as a whole, but an individual process can still only use 4 GB. (So if you happened to work on truly humongous Photoshop files, they would work MUCH faster in 64-bit mode, since Photoshop could then use more than 4 GB of RAM.) On a 32-bit x86 system with 32 GB of RAM, you could have eight processes each using a full 4 GB (minus the small OS overhead), but no one process could access, say, 8 GB. That's actually one of the main reasons enterprise-quality database apps were split across multiple cooperating processes years ago. Not so much to take advantage of multiple processors (though it helps), but so the application as a whole could use more than the 32-bit cap of 4 GB, with each process getting its own 4 GB address space. (Threads alone don't help here; all the threads in one process share a single address space.)
 
ferretboy said:
I run 64-bit Linux. It still doesn't have full support, which stinks. I guess Linux was the first to offer an OS when the first 64-bit processors came out. I think that was around 2002.

linux has had 64-bit support since 1995 for the alpha and ultrasparc processors. btw, i run debian 3.1 on my amd64 machine without problems. the only issue i have is openoffice and doom3 being 32-bit only
 
jhu said:
linux has had 64-bit support since 1995 for the alpha and ultrasparc processors.

Yeah, nowadays people seem to forget that there were other 64-bit architectures before AMD. Heck, MICROSOFT supported 64-bit architectures as early as 1993. (The same two you list.)
 
ehurtley said:
Yeah, nowadays people seem to forget that there were other 64-bit architectures before AMD. Heck, MICROSOFT supported 64-bit architectures as early as 1993. (The same two you list.)

actually, i don't think ms really supported 64-bit computing until itanium since their non-x86 versions of windows prior to itanium still ran in 32-bit mode.
 
jhu said:
actually, i don't think ms really supported 64-bit computing until itanium since their non-x86 versions of windows prior to itanium still ran in 32-bit mode.

Yeah, I was being a little deceptive there. :p I did only say that they supported those 64-bit architectures. :-D
 
So from my standpoint (knowing little about 64-bit stuff): let's say Merom MBPs come out this August. How are they going to perform against an equivalent 32-bit MBP? Same, worse, better?

I'm getting Merom just because I don't need a MBP yet, and I was hoping that if I got 64-bit it would eventually benefit me. (I'll be doing Avid and FCP work.)
 
poppe said:
So from my standpoint (knowing little about 64-bit stuff): let's say Merom MBPs come out this August. How are they going to perform against an equivalent 32-bit MBP? Same, worse, better?

I'm getting Merom just because I don't need a MBP yet, and I was hoping that if I got 64-bit it would eventually benefit me. (I'll be doing Avid and FCP work.)

Well, if you compare a 2.16 GHz Core Duo (current chip, Yonah) to a 2.16 GHz Core 2 Duo (new chip, Merom) in the current Mac OS, you will probably see about a 5-10% improvement in performance. If Leopard supports 64-bit Intel processors, then running a 32-bit app in Leopard would see about a 10-15% improvement in speed. (Again, comparing a Core 2 Duo in 64-bit mode to a Core Duo in 32-bit mode.) Running a 64-bit app (one that only really NEEDS 32-bit data, so there isn't an inherent disadvantage) would see a 15-20% improvement; and running a 64-bit app that really needs 64-bit data would see an improvement somewhere in the 40-50% range.

This is based on similar comparisons on Windows.
 
janstett said:
IBM and Microsoft used to write OS/2 together (versions 1.x). When they moved to OS/2 2.0, they wanted to make it pure 32-bit but instead kept key components 16-bit. Why? They performed better. It was faster to keep core components split-personality, with a 16-bit half and a 32-bit half and thunking between them, than to write new pure 32-bit code. OS/2 3.0 became pure 32-bit... and also became Windows NT when IBM and Microsoft parted ways. It took many years for NT (2000/XP) to become acceptable in terms of performance and frugality, and one could argue that owed more to Moore's law increasing horsepower than to code optimization.

Microsoft did the same thing again with Windows 95. They could have gone completely 32-bit, and initially planned to. But they found that the pure 32-bit code was slower, and it was faster to keep the important components (kernel/gdi/user) 16-bit and give them a 32-bit thunking layer.

Now, how much of either case is the inefficiency of moving to a larger word size, and how much is that the 16-bit code was hand-tweaked assembly code that had been through years of refinement, who knows.

So, I know you're trying to be sarcastic, but increasing word size does not mean faster performance.


Ahhh OS/2...

I was an OS/2 programmer once... Just one note: NT (MS-OS/2 3.0) was pure 32-bit, but in the IBM lineage there was actually still 16-bit code, and thunking still happened.

Still, performance-wise, you were right. Nowhere was this more apparent than on the 386SX, especially the earlier ones...

Max.
 
ehurtley said:
Well, if you compare a 2.16 GHz Core Duo (current chip, Yonah) to a 2.16 GHz Core 2 Duo (new chip, Merom) in the current Mac OS, you will probably see about a 5-10% improvement in performance. If Leopard supports 64-bit Intel processors, then running a 32-bit app in Leopard would see about a 10-15% improvement in speed. (Again, comparing a Core 2 Duo in 64-bit mode to a Core Duo in 32-bit mode.) Running a 64-bit app (one that only really NEEDS 32-bit data, so there isn't an inherent disadvantage) would see a 15-20% improvement; and running a 64-bit app that really needs 64-bit data would see an improvement somewhere in the 40-50% range.

This is based on similar comparisons on Windows.

Ok, thanks. Good, then I feel confident about my future purchase.
 
poppe said:
I don't know if this was said, but I would put a lot of money on 10.5 being 64-bit, or at least better at whatever it does, and making the 64-bit computers work faster.

Or am I wrong?

First of all,

Most things I say are tongue in cheek. Enjoy.. BUT!!

I am not sure Apple will immediately make Leopard all 64-bit. This is almost a cart-before-the-horse situation... a 64-bit Intel chip needs to be thrown into 64-bit mode, and to run 32-bit code there then needs to be a compatibility layer, something like WOW64.

Since Apple has already released 32-bit Intel machines, the introduction would require OS X to be released in three flavors (PPC, IA-32, and x86-64). This also means really fat binaries.

This would mean Ultra Binaries.... hmmm sounds like a lot of testing to me....

Max.
 
Catfish_Man said:
There is very strong evidence that even the GUI frameworks in 10.5 will have 64-bit versions (as opposed to just the non-GUI ones in 10.4).

I am curious what 64-bit GUI controls would bring to the table. The only thing I could possibly see would be interoperability with 64-bit code; but as for being 64-bit themselves... NADA.

I might need a little help wrapping my brain around this one...

Max.
 
maxvamp said:
I am curious what 64-bit GUI controls would bring to the table? The only thing I could possibly see would be just interoperability with 64-bit code,

Exactly.

64-bit code that cannot access the GUI is a bit sad.
 
To elaborate on gnasher's answer: currently the only way to make a 64-bit app is to have a 32-bit GUI program that communicates with a separate 64-bit core. Mathematica is a good example of this: Mathematica.app is 32-bit, but MathCore is 64-bit. Unfortunately, this divided-process setup is poorly suited to apps that require realtime response, so things like Photoshop or Final Cut couldn't effectively be designed that way (and therefore can't use >4 GB of RAM).
 
ehurtley said:
Yeah, nowadays people seem to forget that there were other 64-bit architectures before AMD. Heck, MICROSOFT supported 64-bit architectures as early as 1993. (The same two you list.)

Not 64 bits, but 60. What's four bits? I was a systems programmer on a CDC 6600, which I think was the first 60-bit machine, introduced in the 1960s. It had some very advanced features even by today's standards. For example, almost 100% of the operating system and I/O processing was offloaded from the CPU(s) to a set of 10 or 20 smaller "peripheral processors". In modern terminology the machine had up to 24 "cores", but they were specialized: up to 20 for the OS and I/O, and up to 4 for number crunching.
 