Re: Re: Re: This just adds fuel to the fire

Originally posted by MacCoaster

Uh. The entire M68k family is 32-bit, even the original Motorola 68000.

I guess sorta... I found this:

"The 68000 has 32-bit registers but only a 16-bit ALU and external data bus. It has 24-bit addressing and a linear address space, with none of the evil segment registers of Intel's contemporary processors that make programming them unpleasant. That means that a single directly accessed array or structure can be larger than 64KB in size. Addresses are computed as 32 bit, but the top 8 bits are cut to fit the address bus into a 64-pin package (address and data share a bus in the 40 pin packages of the 8086 and Zilog Z8000). "

32-bit registers... 16-bit math and external data bus. Because the data bus and execution unit are 16-bit, I think it would qualify more as a 16-bit processor, although I'm sure people would disagree...

I *do* remember the this-n-that about System 7 being 32-bit clean, though, and that being a big deal for the transition to PPC.

Sooo... with the attention given to MacOS X and it being 64-bit clean, I think it all sounds good to me.

:)

Binky
 
Re: Generally....

Originally posted by Microsoft_Windows_Hater
Moving any BSD or 'nix in general simply requires a recompile. If Apple has been playing its cards correctly, then all kernel memory calls will already be in place for 64-bit and it would only require a recompile. As for Mac OS X itself, not Darwin, I don't think there would be any major problems. The only big problem would be the applications.

Today's apps are compiled in 32-bit. If you go and compile in 64-bit then it isn't backwards compatible. Would Apple have to have 64-bit apps compiled on install? I doubt it; really messy. I guess they will just introduce 64-bit across the entire line at once, to let everyone get the required speed boost without any nasty consequences.

Again, it's my understanding from having done a layman's read of the Book E white paper that all PPC chips adhering to Book E (the 64-bit implementation of PPC) are able to execute 32-bit instructions. I would imagine, though, that unless the apps are written 64-bit clean as well (I've found numerous articles about ftp servers that aren't, etc.) they might take a speed hit while the CPU pads the registers out to 64 bits... <shrug> However, the GPUL will apparently work in the same capacity as AMD's Clawhammer, where 32-bit and 64-bit instructions run in a mixed environment.

I think it will be pretty interesting to see how this affects memory and storage performance. Currently, 4GB is the physical limit for RAM in a machine due to 32-bit mapping. Some high-end server chipsets can address more than 4GB by doing a kind of "virtual RAM" mapping technique onto real RAM. This will definitely allow Macs to have a theoretical RAM limit of vast numbers (way over even terabytes... what's that, petabytes?). Also, this would directly affect storage as well. Remember drives having partitions limited to 4GB? 32-bit addressing again... now we do a mapping to maintain 64-bit addressing across the drives (IDE uses LBA addressing to make it happen)... with full 64-bit ops, the drives could be addressed directly with no translation.
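For scale, here's a quick back-of-the-envelope C sketch (my own illustration, nothing Apple-specific) of what flat 32-bit versus 64-bit addressing works out to; the 64-bit ceiling is actually up in the exabytes:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 32-bit addressing gives 2^32 distinct byte addresses: the 4GB ceiling. */
    uint64_t limit32 = (uint64_t)1 << 32;
    printf("32-bit flat address space: %llu bytes (%llu GB)\n",
           (unsigned long long)limit32,
           (unsigned long long)(limit32 >> 30));

    /* 64-bit addressing raises that to 2^64 bytes, roughly 16 exabytes. */
    printf("64-bit flat address space: about %llu GB\n",
           (unsigned long long)(UINT64_MAX >> 30) + 1ULL);
    return 0;
}
```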

This doesn't sound like a big deal, but... if the OS has to do a little lookup every time the drive is accessed to map the 64-bit address onto 32-bit storage... those lookups could add up significantly... removing them (and other little tricks that had to be put into 32-bit operations) could aggregate to a nice speed boost for OS X...

Binky
 
It's my understanding that the PPC spec was designed to permit a clean move to 64-bit from the beginning. Over the past month, several techs have stated repeatedly on these forums that today's 32-bit applications will run cleanly on the new PPC64 chip without modification.

Concerning the chip availability fiasco we have repeatedly suffered through because Moto hasn't been able to provide chips in the quantities needed, the new PPC64 GPUL chip will be manufactured by IBM, most likely at their new Fishkill, NY facility. IBM is a master at this. I would really doubt they would fail to supply contracted quantities of any chip, barring events beyond their control (e.g., the plant burning down, or whatever). They plan, and they execute...professionally.

Finally, it is extremely doubtful to me that IBM would have invested the time/money to build in the SIMD vector processing unit (Altivec-compatible) without a customer in mind to use it. While IBM will certainly use this chip themselves, it seems clear this modification was included to meet a key customer's requirements...Apple's.

I am disappointed to hear that this chip may not be in production for another 8-9 months. From prior posts, it appears this chip has been available (at least in small quantities) for testing for many months. I wonder what's been taking so long to get the final version right? I too am looking forward with great interest to what IBM has to say next week.
 
Re: Re: Re: This just adds fuel to the fire

Originally posted by MacCoaster

Uh. The entire M68k family is 32-bit, even the original Motorola 68000.

You're kind of right. Macs were 32-bit from the get-go BUT only used the first 24 bits for a long time. I think System 7 was the first real 32-bit usage. Lots of programs used the upper 8 bits for their own special purposes until full 32-bit use came into being.

Jim
 
Originally posted by Dave Marsh


I am disappointed to hear that this chip may not be in production for another 8-9 months. From prior posts, it appears this chip has been available (at least in small quantities) for testing for many months. I wonder what's been taking so long to get the final version right? I too am looking forward with great interest to what IBM has to say next week.

I completely agree with you. But in all fairness, these major changes in processors take YEARS to bring to market. P5 has been in development for at least a year. I read a really long article a year or so ago that was predicting the leapfrog AMD is about to do over Intel. It was way too technical for me, but I was able to nod my head and follow the major points. It was the author's view that Intel made a decision to dominate speed-wise in the short term without planning well enough for the longer term. 'Course the guy could be just a disgruntled engineer who wasn't getting enough at home, but it was interesting to see just how much planning is required.

A coworker sent me a PDF the other day showing that Moto is in the beginning stages of planning the G6 (VERY early...).

Anyway, my evil peece is begging for my attention. Sorry if I didn't really say anything.

Cheers!
-john
 
Re: Generally....

Originally posted by Microsoft_Windows_Hater
The only big problem would be the applications.

Today's apps are compiled in 32-bit. If you go and compile in 64-bit then it isn't backwards compatible. Would Apple have to have 64-bit apps compiled on install? I doubt it; really messy. I guess they will just introduce 64-bit across the entire line at once, to let everyone get the required speed boost without any nasty consequences.

I would assume that they'd like to have software shipped in the Mac OS X bundle system. Then they can have both the 32-bit and 64-bit versions of the software in one "icon". The OS knows which microarchitecture it's running on, so it just executes the proper binaries when the application is launched. That would make the user experience a lot better during the migration. Sort of like fat binaries during the 68k to PPC transition, or bundles between 68k, x86, SPARC, and PA-RISC with NeXTstep/OpenStep. 32-bit binaries should run just fine, just like SPARC binaries run on UltraSPARC or SPARC64 without a recompile. But they're going to have to maintain backward compatibility with the installed 32-bit base.

However, the majority of users won't need the extra memory or precision at this point.
 
Originally posted by j763
this all sounds great but the big question is -- when will we get our hands on it? Jan? July? or later?

If I know Apple correctly, here is a possible path they may take:
MWSF: Apple updates the iMacs, laptops, and Xserve
MWT: Apple bumps the Power Macs to 1.5GHz
MWNY: Laptops possibly; hints at 10.3 with some details (little do we know it's a 64-bit OS X)
Sometime after MWNY but before October, Apple releases Power Macs with a Power4 lite (64-bit) at maybe 1GHz to 1.5GHz, along with 10.3. This is my IBM 64-bit speculation. If I'm right, the possibilities are endless (we may be reading this post a year from now ;) ;)

One question: what happens to the G5 from Moto??? Does it go into the iMacs and laptops?
 
Re: Re: Re: Re: This just adds fuel to the fire

Originally posted by DharvaBinky

"The 68000 has 32-bit registers but only a 16-bit ALU and external data bus. It has 24-bit addressing and a linear address space, ... Addresses are computed as 32 bit, but the top 8 bits are cut to fit the address bus into a 64-pin package (address and data share a bus in the 40 pin packages of the 8086 and Zilog Z8000). "
...
I *do* remember the this-n-that about System 7 being 32-bit clean, though, and that being a big deal for the transition to PPC.

You're kind of right. Macs were 32-bit from the get-go BUT only used the first 24 bits for a long time. I think System 7 was the first real 32-bit usage. Lots of programs used the upper 8 bits for their own special purposes until full 32-bit use came into being.

These are both accurate statements. The 68000 was internally 32-bit, but was restricted to an effective 24 bits of external addressing because Motorola decided to cram it into a 64-pin package. This was resolved with the 68020.

As an old-school trick, some programmers took advantage of the extra 8 bits as a sort of "extra register" where they could cram in a little more information to gain performance. Of course, when System 7 started utilizing 32-bit addressing, this "extra space" would get clobbered, causing the programs to crash.
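For the curious, here's a rough C sketch of that kind of trick (the names and the flag are mine, not from any real Toolbox code): the flags get stuffed into the top byte that 24-bit addressing ignores, which is exactly what gets clobbered once all 32 bits matter:

```c
#include <stdio.h>
#include <stdint.h>

#define ADDR_MASK  0x00FFFFFFu   /* the 24 address bits the 68000 actually drove */
#define FLAG_SHIFT 24            /* the "free" top byte under 24-bit addressing   */

/* Pack application flags into the byte the hardware ignores. */
static uint32_t tag_handle(uint32_t addr, uint8_t flags) {
    return (addr & ADDR_MASK) | ((uint32_t)flags << FLAG_SHIFT);
}

static uint32_t untag_addr(uint32_t tagged)  { return tagged & ADDR_MASK; }
static uint8_t  untag_flags(uint32_t tagged) { return (uint8_t)(tagged >> FLAG_SHIFT); }

int main(void) {
    uint32_t tagged = tag_handle(0x00123456u, 0x80); /* e.g. a "locked" bit */
    printf("addr=0x%06X flags=0x%02X\n",
           (unsigned)untag_addr(tagged), (unsigned)untag_flags(tagged));
    /* Under 24-bit addressing, dereferencing 'tagged' directly still worked,
       because the top byte was ignored. Under full 32-bit addressing the same
       value points somewhere else entirely, hence the System 7 crashes. */
    return 0;
}
```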

"32-bit clean" refers to an application that doesn't attempt to take advantage of the extra space that's left open by the 24-bit System 6 running on a semi-32 bit processor (68030).

In fact, some of the early Macintosh ROMs also used these tricks. That made those machines not 32-bit clean, requiring either running in System 7's 24-bit compatibility mode or installing a piece of software made by Connectix (MODE32), later purchased by Apple (32-Bit Enabler), that "patched out" the dirty ROM routines.

Supporting 32-bit apps on 64-bit hardware should really not be a big deal at all.
 
btw...

...the big deal with this chip is NOT that it's 64-bit (although that makes a darn nice marketing number :)). It's that it's based on the POWER4 architecture, which has modern features the G4+ is lacking (out-of-order execution, multiple FPUs, a high-bandwidth memory bus), as well as being capable of an insane 8 instructions/cycle (both the G4+ and the Pentium 4 can do 3). I don't know how much of this the GPUL will include, but any of it will be a major improvement (dual FPUs will mean that a single FP divide no longer blocks floating-point-heavy apps for 25 cycles while it finishes; they'll just use the other FPU, and out-of-order execution will also help with that).
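As a toy, compiler-dependent illustration (my own example, not anything from IBM's docs): a long-latency divide only stalls work that depends on its result, so extra FP units and out-of-order execution can overlap independent divides like the second pair here:

```c
#include <stdio.h>

int main(void) {
    double a = 1.0, b = 3.0, c = 5.0, d = 7.0;

    /* Dependent chain: each divide needs the previous result,
       so the full divide latency is paid three times in a row. */
    double chain = ((a / b) / c) / d;

    /* Independent divides: with two FPUs and out-of-order execution,
       these can, in principle, be in flight at the same time. */
    double x = a / b;
    double y = c / d;

    printf("%f %f %f\n", chain, x, y);
    return 0;
}
```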
 
Where's Moto in the equation?

Originally posted by Macmaniac


If I know Apple correctly, here is a possible path they may take:
MWSF: Apple updates the iMacs, laptops, and Xserve
MWT: Apple bumps the Power Macs to 1.5GHz
MWNY: Laptops possibly; hints at 10.3 with some details (little do we know it's a 64-bit OS X)
Sometime after MWNY but before October, Apple releases Power Macs with a Power4 lite (64-bit) at maybe 1GHz to 1.5GHz, along with 10.3. This is my IBM 64-bit speculation. If I'm right, the possibilities are endless (we may be reading this post a year from now ;) ;)

One question: what happens to the G5 from Moto??? Does it go into the iMacs and laptops?

I think Apple will bring out Moto's G5 in March as the 1.5GHz part. Reports have stated that Moto would be delivering them in volume in early '03. Then possibly IBM units in January '04. Apple could then replace the Moto with the IBM and move the Moto to the iMac and PowerBook. And maybe the iBook will get a G4 sometime in '04.
 
Re: Re: btw...

Originally posted by Scottgfx


Mmmmmmm, Atari Jaguar. :)

Very good point indeed. However, I feel that this entire board has fallen into a lapse of wanting big numbers right now. I thought that a board of Mac heads would surely know that overall system performance is what matters. For example, we have an elephant at the circus that can jump through hoops (Bear with me here...). He can jump through an amazing 1,500 hoops each minute. Wow! I gotta see this in action! But when I go to the show, it turns out that the elephant is terribly trained and won't do what the ringmaster says. My point is that GHz is a theoretical value. Just like this elephant CAN jump through 1,500 hoops a minute, but won't.

Sorry guys, but I think that was the worst analogy that has ever been used on these boards. But if you dug how confusing it was, then the movie Mulholland Drive is for you.

P-Worm
 
Re: This just adds fuel to the fire

Originally posted by MacManiac1224
This just adds fuel to the fire. I hope that Apple will use this chip, and that it will be out by MWNY. That is my hope; if it is, they can count on me to buy it. The G4 is approaching 2 years old.

My question is: if Apple wanted to change OS X into a 64-bit operating system, how long would that take? And when they release the IBM chip, do they have to have a 64-bit operating system, or will they just wait? Personally, I think it is perfect: the IBM chip comes out at MWNY with a shipping date in August, and at the same time, 10.3 comes out with that same shipping date, therefore a 64-bit operating system.

What do you guys think?

Methinks your math is a little off. The G4 IS over 3 years old. Sept. '99.
 
Re: Re: This just adds fuel to the fire

Originally posted by DharvaBinky


As I understand it... MacOS X is already what Apple refers to as "64-bit clean". Meaning, that...

The basic rule of "64-bit clean" code is that it never assumes a pointer is 32 bits. Such an assumption can show up in data structures (such as ones that leave 32 bits of "user data" intended to hold a pointer) or in code that compares the size of a void* to an int. I don't know how gcc or Metrowerks handles it, but some of the old 64-bit Alpha compilers had "int" be 32 bits, void* be 64 bits, and "long long" be 64 bits. Badly written code could store a void* in an int or vice versa, or perform bit- or byte-wise operations on the pointer as if it were an int.
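Here's a minimal sketch of the difference, assuming an LP64-style compiler (int is 32 bits, pointers are 64) like the Alpha example above; the variable names are mine:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int value = 42;
    void *p = &value;

    /* NOT 64-bit clean: on a 32-bit system this "worked", but when int is
       32 bits and pointers are 64, the cast silently drops the top half:
         int bad = (int)p;                                                  */

    /* 64-bit clean: uintptr_t is defined to be wide enough for a pointer,
       so round-tripping through it is safe on either architecture. */
    uintptr_t stored = (uintptr_t)p;
    void *q = (void *)stored;

    printf("sizeof(int)=%zu  sizeof(void*)=%zu  roundtrip=%p\n",
           sizeof(int), sizeof(void *), q);
    return 0;
}
```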

In answer to another question, currently all home computers use 32-bit ABIs, with "Wintel" machines using the IA32 (aka x86) instruction set and Macintoshes using the 32-bit PowerPC instruction set. Since 32-bit chips can only address a flat 4GB of space, that becomes the hard upper limit for RAM in most of these machines. Due to this limitation, the remaining lifetime in years of any 32-bit ABI can be counted on one hand. The replacements on the other side of the fence are x86-64 (AMD) and IA64 (Intel/HP). If IBM comes in with a viable 64-bit offering, since everyone else must soon change ABIs anyway, who knows...
 
Re: Re: Re: btw...

Originally posted by P-Worm


Very good point indeed. However, I feel that this entire board has fallen into a lapse of wanting big numbers right now.


That's natural; we always want what the other guy doesn't have (yet). I'm just glad that someone got the reference. :)

I think I was rooting for the Amiga CD32 at the time. 68020, I believe, right? Man, if Commodore had just added up all the registers and used that number instead... Well, by that time Mehdi Ali had already mucked things up too much. :)
 
Re: Re: Re: Re: Re: This just adds fuel to the fire

Originally posted by oldMac




As an old-school trick, some programmers took advantage of the extra 8 bits as a sort of "extra register" where they could cram in a little more information to gain performance. Of course, when System 7 started utilizing 32-bit addressing, this "extra space" would get clobbered, causing the programs to crash.

Applause! Applause! Applause!

And this was the reason why Tempus was the fastest text editor I ever worked with! And it ran on my 8MHz Atari...



Supporting 32-bit apps on 64-bit hardware should really not be a big deal at all.

Two thoughts:

If a program writes out a memory location to a file, the 32-bit version will write out 4 bytes, and the 64-bit version will write out 8 bytes. The data files will not be compatible without extra effort.
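A small, hedged illustration of that hazard (the struct and field names are made up): a record holding a long or a pointer changes size between 32-bit and 64-bit builds, while explicitly sized fields keep the file format stable:

```c
#include <stdio.h>
#include <stdint.h>

struct record_fragile {
    long offset;      /* 4 bytes on a 32-bit build, 8 bytes on an LP64 build */
};

struct record_portable {
    uint32_t offset;  /* explicitly 4 bytes on every build */
};

int main(void) {
    printf("fragile record: %zu bytes, portable record: %zu bytes\n",
           sizeof(struct record_fragile), sizeof(struct record_portable));
    /* fwrite(&rec, sizeof rec, 1, fp) on the fragile struct produces files
       that a build with a different long/pointer width cannot read back. */
    return 0;
}
```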

Early PPC systems provided an emulation mode for 680x0 code. If a software company didn't have the manpower to port their whole product to the new processor, they could at least port the performance-intensive parts first. I remember that a good portion of System 7.5 was still 680x0 code running in emulation even on the PPC. The same goes for AltiVec optimization: it only pays off for, say, 10% of the code, so why bother with the rest, since the G4 can behave as a perfect G3?

This helps migration a lot. (Similar to introducing the Carbon lib very early and providing Classic mode in OS X.) I guess Apple would do the same thing with a 64-bit processor: create an environment to run your 32-bit stuff, but make it possible to hand-tailor specific subroutines to run natively on 64-bit for performance reasons...
 
Re: Re: Re: Re: Re: Re: This just adds fuel to the fire

Originally posted by pianojoe

If a program writes out a memory location to a file, the 32-bit version will write out 4 bytes, and the 64-bit version will write out 8 bytes. The data files will not be compatible without extra effort.

Well, if you wrote memory locations from a 64-bit program to a file and tried to read them on another (32-bit) computer, you couldn't do anything with them, since they're locations in another (physical) memory. If you tried to read the file from a 32-bit application on the same computer, you still wouldn't be able to make any use of the memory addresses, since they're in different processes and therefore protected by the operating system...

Early PPC systems provided an emulation mode for 680x0 code. If a software company didn't have the manpower to port their whole product to the new processor, they could at least port the performance-intensive parts first. I remember that a good portion of System 7.5 was still 680x0 code running in emulation even on the PPC. The same goes for AltiVec optimization: it only pays off for, say, 10% of the code, so why bother with the rest, since the G4 can behave as a perfect G3?

This helps migration a lot. (Similar to introducing the Carbon lib very early and providing Classic mode in OS X.) I guess Apple would do the same thing with a 64-bit processor: create an environment to run your 32-bit stuff, but make it possible to hand-tailor specific subroutines to run natively on 64-bit for performance reasons...

Yes, but I don't think the operating system can (or should) cope with a process changing from 32-bit to 64-bit mode. This would mean a lot of work for the scheduler. Besides, I for one think that the biggest benefit of 64-bit processing isn't necessarily performance but the ability to address more than 4GB of memory. I am a developer and I have a lot of memory-hungry applications running (not to mention the system I am developing), so even if I currently can't have more than 1GB of physical memory in my PowerBook, I would certainly like more virtual memory instead of getting "out of memory" errors...

/ Nerm
 
Re: Re: Re: Re: Re: Re: Re: This just adds fuel to the fire

Originally posted by nerm


Well, if you wrote memory locations from a 64-bit program to a file and tried to read them on another (32-bit) computer, you couldn't do anything with them, since they're locations in another (physical) memory. If you tried to read the file from a 32-bit application on the same computer, you still wouldn't be able to make any use of the memory addresses, since they're in different processes and therefore protected by the operating system...



Yes, but I don't think the operating system can (or should) cope with a process changing from 32-bit to 64-bit mode. This would mean a lot of work for the scheduler. Besides, I for one think that the biggest benefit of 64-bit processing isn't necessarily performance but the ability to address more than 4GB of memory. I am a developer and I have a lot of memory-hungry applications running (not to mention the system I am developing), so even if I currently can't have more than 1GB of physical memory in my PowerBook, I would certainly like more virtual memory instead of getting "out of memory" errors...

/ Nerm

I completely agree with you. Thank you for proving my point, that not all software written for 32-bit can be used on a 64-bit machine, no matter how clever the "emulation handler" is.

(Again, this is what people want: Buy the latest Mac, copy over your applications, documents and prefs folder, and continue working with minimal downtime.)
 
Originally posted by Dave Marsh

I am disappointed to hear that this chip may not be in production for another 8-9 months. From prior posts, it appears this chip has been available (at least in small quantities) for testing for many months. I wonder what's been taking so long to get the final version right? I too am looking forward with great interest to what IBM has to say next week.

Steve's constant search for the next great thing, or a processor that is "insanely great," coupled with Apple's need for a large stock of CPUs at the moment of product release, combine to create this situation.

Steve wants a million 2.0GHz G5s, and IBM can supply a range of CPUs from 1.0 to 1.4GHz now. To get closer to insanely great, Steevie-poo has to wait, and thus makes us wait too.

Maybe by then HyperTransport, FireWire 2, and other enabling technologies will also be ready.

Ah, the problems of MASS MARKET supercomputers.

Rocketman
 
Re: Re: This just adds fuel to the fire

Originally posted by bretm


Methinks your math is a little off. The G4 IS over 3 years old. Sept. '99.

When I was a kid, one of the advertisers for the Friday Night Fights was General Electric. One of their lines was something about how, 10 years earlier, 90% of their products didn't exist. In the PC world, that product life cycle has shortened, for most vendors, to less than 3 years. Apple got caught with its proverbial equipment in a wringer by relying on Motorola as its primary vendor, and has not kept up with the industry standard.

Apple has another problem. They have few second sources for their critical parts. Further, Apple's customers do not have a second source for their computer needs. When hot new products come out, Apple often lacks the capacity to meet customer demand. Imagine what would happen if Apple were able to double its market penetration and bring new products to market at an industry-standard rate. Apple would be gridlocked with its present business model.

I think Apple's business model will prevent them from ever being a sustained industry leader in performance. Apple has been behind for over a year, and this thread talks about another year for this new processor to come online. The rest of the PC industry does not have Apple's loyal customer base. They have to innovate or suffer the wrath of fickle customers who have the dollars to buy innovation someplace else.

Around here, folks buy PCs because their neighbor did. They are just doing email, surfing the web, and balancing their checkbook. Nothing special. Yet Apple cannot crack that market with a $1,000...$2,000 iMac because Apple is an unknown commodity that nobody can help them with when it breaks. In my opinion, Apple is not paying attention to its market in either the performance or the common-use arena. G4s for 3 years....... come on..... it's getting so marriages don't last that long.
 
Re: Re: Re: btw...

Originally posted by P-Worm

For example, we have an elephant at the circus that can jump through hoops (Bear with me here...)
The real problem is, elephants can't jump.
;)
 
Re: Re: Re: Re: Re: Re: Re: Re: This just adds fuel to the fire

Originally posted by pianojoe


I completely agree with you. Thank you for proving my point, that not all software written for 32-bit can be used on a 64-bit machine, no matter how clever the "emulation handler" is.

(Again, this is what people want: Buy the latest Mac, copy over your applications, documents and prefs folder, and continue working with minimal downtime.)

Actually, what I meant was that when you save your documents, preferences, or whatever, the application doesn't write memory addresses to disk even now on your 32-bit computer, where the size of pointers wouldn't matter. It just doesn't make sense for an application to do that (unless it's a debugger or something like that).
When it comes to "emulating" 32-bit mode on a 64-bit processor, that doesn't happen either. If the scheduler is modified to handle 32- and 64-bit processes, they all run directly on the hardware, the only difference being that the 32-bit processes cannot address more memory than they can today. All user processes currently run in a virtual memory area, so when that area is extended to 64 bits, it just means the OS can fit more memory-hungry 32-bit processes, as well as 64-bit processes, into it.

Hope I made it a bit clearer...

/nerm
 
Re: This just adds fuel to the fire

Originally posted by pianojoe


I completely agree with you. Thank you for proving my point, that not all software written for 32-bit can be used on a 64-bit machine, no matter how clever the "emulation handler" is.

(Again, this is what people want: Buy the latest Mac, copy over your applications, documents and prefs folder, and continue working with minimal downtime.)

UGH! This is not how it works! The architects of PowerPC had enough foresight to see 64-bit computing coming. The PowerPC ISA (Instruction Set Architecture) is 64-bit. All PowerPC processors follow this ISA. The difference between a 32-bit number and a 64-bit number is the precision with which it can be "described". The more ones and zeros you have to work with, the more combinations of ones and zeros you can have. A 64-bit number is a combination of ones and zeros that is 64 bits long. Obviously you can have exponentially more combinations with 64 bits to work with than with 32 bits, so you can be more precise about describing your number. But a number is still a number and a command is still a command. They're just different data types. The ISA was created so that the processor "knows" this.

Thus, 32-bit applications run NATIVELY on a 64-bit PowerPC. There is no "emulation handler" required. You may have heard about Intel moving to a 64-bit processor that requires emulation, but that's because Intel is moving to a completely different ISA. Opteron (AMD's 64-bit chip), on the other hand, is seriously being looked at because it is a 64-bit extension of the x86 ISA. Meaning it allows 32-bit x86 applications to run natively and also allows you to take advantage of 64-bit addressing with new or rewritten applications. Thus, your investment in software is preserved.

Apple doesn't need to worry about moving to a 64-bit PPC as much as they need to worry about backward compatibility. Once developers start to take advantage of 64-bit addressing and precision, your installed base of 32-bit processors is headed for the dustbin because of software support. So most software, unless it explicitly needs the advantage of working with huge numbers, will probably still be "32-bit". For apps that will benefit from "big math," you'll probably see developers compile both a 32-bit and a 64-bit version and package them inside one Mac OS X bundle. Then you maintain backward compatibility with your installed base for new or enhanced apps, and still get the Mac OS simplicity of a drag-and-drop install.
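As a rough sketch of the "two builds in one bundle" idea (the bundle mechanics are Apple's; this width check is just my own illustration), the same C source can be compiled once per target, and each binary knows which world it's in:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
#if UINTPTR_MAX > 0xFFFFFFFFu
    /* 64-bit slice: full 64-bit addressing and "big math" available. */
    printf("64-bit build: void* is %zu bytes\n", sizeof(void *));
#else
    /* 32-bit slice: same source, limited to a 4GB address space. */
    printf("32-bit build: void* is %zu bytes\n", sizeof(void *));
#endif
    return 0;
}
```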

I have a Sun/SPARC/Solaris (as well as Apple ;) ) background. Sun made the transition to a 64-bit implementation of the SPARC ISA when it introduced UltraSPARC I. All the software that ran on the 32-bit SuperSPARC, TurboSPARC, HyperSPARC etc. ran just fine on the 64-bit UltraSPARC. Besides some different core files in Solaris, all you had to do was "buy the latest Sun, copy over your applications, documents and prefs folder, and continue working with minimal downtime." In fact, most SPARC applications are still 32 bit because it's easier to maintain one binary since they run full throttle on either the 64-bit or 32-bit implementations. Most other companies' transitions to a 64-bit processor were relatively painless. I'll bet that Apple's will be even smoother. No worries here.
 
Re: Re: This just adds fuel to the fire

... Most other companies' transitions to a 64-bit processor were relatively painless. I'll bet that Apple's will be even smoother. No worries here.
Yes, THANK YOU... Finally, someone who knows what's up! I read over the dev docs at the ADC regarding the G4 FPU addressing protocol: 32-bit FPU calls are referred to as "single" and 64-bit calls as "double," indicating that the groundwork has already been laid for the advent of 64-bit register space.
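For what it's worth, "single" and "double" there are just the 32-bit and 64-bit IEEE 754 floating-point formats; a trivial sketch of the size difference:

```c
#include <stdio.h>

int main(void) {
    /* "single" = 32-bit float, "double" = 64-bit double (IEEE 754) */
    printf("float (single): %zu bytes, double: %zu bytes\n",
           sizeof(float), sizeof(double));
    return 0;
}
```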
 