Originally posted by york2600
A 64bit processor would require a complete recompile of all code. Simple as that.
You speak with such authority on the subject that I fear many readers will take what you say as fact. It is not, and the statement above is the most glaring error. The beauty of this line of processors is that a 64 bit processor will run old 32 bit applications as they are. No modification. It is not emulation. Read about it on the IBM web site, where they discuss the merits of the Power4 series running old code and "protecting the customer's investment in software." Apple's G5 will do the same.
 
Sounds like we're in for a huge change. Finally, it seems Apple is switching to IBM for its chip supply. That coupled with Apple's recent partnering with Nvidia sounds like a pretty good combination. Maybe we'll see prices on systems drop? Just wishful thinking though. I do wish Apple would get the upper hand in tech, at least on the Pmac side. I do understand the philosophy behind the iMac, a low cost consumer PC that uses current tech, but there is no excuse for the Pmacs. Come on, this is the proverbial "flagship of the fleet"! Onward and upward I say!
I think since Jobs returned the simplification of the computer line was needed but now there should be more options to choose from like so:
iMac: Consumer model
Power Mac: For home and small business power users that want power for a good price
ProMac: For the pros, the latest and greatest: multi-processors, top of the line video, better system boards than the other lines, more room for expansion: HDs, PCI cards, memory and the like

Just my opinion. ;)
 
You are SOOO wrong

I love reading comments on this board. You people say things that surely come right out of your ass. If you say something learn to back it up with facts. If you read something always question it. Some of the things said in response to this post are absurd.

" A 64bit PPC would require quite the massive motherboard redesign, and right there, without proper maintenance of OS 9 code, would end OS 9's compatibility with new Mac hardware. That's just the motherboard though. A 64bit processor would require a complete recompile of all code. Simple as that. Notice MS has different versions of XP and .Net server for Itanium."

Wrong. You can run an old version of Windows 2000 on an Itanium...it runs under legacy mode using IA32 instructions. Simple as that. Just as the PPC had the 68k legacy instruction set to run Mac applications from before the PPC introduction. The reason for a separate version of .NET server is to add 64-bit API calls for memory and other services that can take advantage of the 64 bits. You DON'T NEED to rebuild it. Not to mention, if Apple doesn't take advantage of the 64-bit processor off the bat, they don't have to do a thing to make it work.

"Nforce with a G4
This one is just crazy. Yes, they both use HyperTransport, and yes, Apple's current motherboard goals are similar to those of Nforce, but Nforce is a PC chipset built around a PC BIOS. You would have to fundamentally redesign the architecture to make it work with a Mac. Some of you make it sound like they just change the pinout on the ZIF slot to allow a PowerPC and they're done. It would be a huge task to create a Mac compatible Nforce chipset."


Wow...you don't know how computers work, do you. PCI and AGP devices follow strict protocols, and can be used on both machines as long as the software is written for the device. That's where the BIOS and drivers come in. These chips don't fit in freaking ZIF sockets...chipsets are soldered onto the motherboard, and the motherboard is designed around the functionality of the chipsets. The GeForce and NForce chips are the same on PC and Mac...and can handle big and little endian data formats with a simple register setting on the chip. And to people who don't like the embedded video of the NForce, remember it can be disabled and a high-end AGP card can be added. Buying a machine with the NForce disabled, you would barely notice a difference in price...why? The cost of the NForce significantly lowers the cost of the machines...look at the ethernet, USB, FireWire, ATA, video, and Dolby sound controllers that are built in. NVidia charges very little for this chip compared to what the components would cost separately. I would prefer Apple to use NForce, as it would reduce their custom ASIC costs and costs for the end user, even if the video isn't used on all machines.
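The endian point is worth making concrete. A 32-bit value is the same number either way; big-endian (PowerPC's default) and little-endian (x86) just store the bytes in opposite order, so all a byte-lane-swapping chipset has to do is reverse them. A rough sketch in Python:

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # big-endian byte order, PowerPC's default
little = struct.pack("<I", value)  # little-endian byte order, x86 style

assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"

# Converting between the two is a pure byte reversal -- cheap in hardware
assert big[::-1] == little
assert struct.unpack("<I", big[::-1])[0] == value
```

Which is exactly why a single register setting on the chip is enough to flip the byte lanes.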

In addition, let's not forget NForce already has multiple CPU bus interfaces. While they only sell for the Athlon EV-6 bus on the open market, Microsoft/NVidia licensed the Intel PIII GTL+ bus for the XBox northbridge version of the NForce chipset. Adding support for even the current line of PPC processors would be relatively easy.

"Think about what you read."

Think about what you say.
 
Apple developing chips

Sorry if someone has already said this but...

I think I remember reading about a year ago that Apple was hiring people to work on microprocessor designs. What were the results of these hirings? Maybe working on the next gen chips with IBM?

Just a thought, and forgive a newbie if I'm mistaken.
 
New Power4 might not be intended for Apple

According to this eWeek article, the new IBM Power4 might not be intended for Apple:

IBM's decision to tout the chip may indicate that Apple has so far balked at embracing the chip, one analyst said.

"What I find is interesting is the fact that IBM can talk about it. If there was committed iMac design, you know (Apple CEO) Steve Jobs would have his hands around IBM's neck not to talk about this chip," said Kevin Krewell, a senior analyst at In-Stat/MDR. "The fact that IBM is talking about it indicates to me that it's not a mainstream Apple product at this time."
P.
 
Originally posted by IndyGopher

What kind/size monitor are YOU using? Every 19, 20, 21, and larger CRT I have seen has had a fan, with the exception of the NEC 29", but those are really just televisions, no matter what NEC says.
Samsung 900p, 19", ca. 1999, no fan.
Sony GDM-20E21, 20", ca. 1994, no fan.

I've personally never seen a modern computer CRT with a fan in it...
 
IndyGopher:

I've never seen a CRT with a fan, and I have seen and used a lot of them.

snoopy:

"6.4 GB/s bandwidth" says nothing about how they got it. So it happens to equal the max spec of HyperTransport, so what? It also equals dual channels of DDR400, or quad channels of PC800 (RDRAM). It also equals an 800 MHz FSB P4-style...that is to say 200 MHz, quad-pumped. It could be a 128-bit wide 100 MHz quad-pumped bus, or a 128-bit wide 400 MHz single-pumped bus, or a 256-bit wide 100 MHz double-pumped bus, or any number of other options.
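For anyone checking the arithmetic, peak bus bandwidth is just width times effective clock: bytes per transfer times transfers per second. All the configurations above really do land on the same 6.4 GB/s figure:

```python
def bandwidth_gbs(width_bits, clock_mhz, pumps):
    """Peak bandwidth in GB/s: (width in bytes) * clock * transfers per clock."""
    return width_bits / 8 * clock_mhz * pumps / 1000

# Each of these works out to 6.4 GB/s:
assert bandwidth_gbs(64, 200, 4) == 6.4        # "800 MHz" P4 FSB: 200 MHz quad-pumped
assert 2 * bandwidth_gbs(64, 200, 2) == 6.4    # dual-channel DDR400 (200 MHz, double-pumped)
assert 4 * bandwidth_gbs(16, 400, 2) == 6.4    # quad-channel PC800 RDRAM (400 MHz, double-pumped)
assert bandwidth_gbs(128, 100, 4) == 6.4       # 128-bit, 100 MHz, quad-pumped
assert bandwidth_gbs(128, 400, 1) == 6.4       # 128-bit, 400 MHz, single-pumped
assert bandwidth_gbs(256, 100, 2) == 6.4       # 256-bit, 100 MHz, double-pumped
```

So the headline number alone really can't distinguish between any of these designs.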

york2600:

Of course the G4 does not currently use any sort of HyperTransport.

GPTurismo:

Huh? You seem to be equating OS's with certain operations like floating point math and running databases well, which doesn't reflect reality. The OS can influence overall performance somewhat, and filesystem can influence performance, but at the core of it you need competent hardware. (Oh, and Irix is the worst OS I have ever interacted with.)

dguisinger:

Amusing that you chose to start your post the same as york2600...but in the end I'm not sure that you are more correct than he is. I too am highly skeptical of the nForce for Mac rumor. Among the advantages of the nForce you list integrated components, which is exactly what we already have from Apple, except the nForce lacks gigabit ethernet. Among the nForce disadvantages is the huge pin count due to dual-channel DDR, which sure hasn't shown any performance advantages on the Athlon platform (and note we've already discussed the issue of the Athlon FSB being too slow for it). You are also in error about the nForce being used in the XBox; that's a GeForce4ti-like chip, not GeForce4mx-like.

pfrencken:

Uh, I think the situation is that IBM is not intimidated by Apple and intends to drum up the processor for its own good. IBM will be using it in their own machines at least; I'm sure Apple is just an extra market.
 
A German article states that the vector unit in the IBM chip will NOT(!!!!!) be Altivec-compatible, but can emulate Altivec instructions.

So the new processor appears in a new light then; it's questionable whether it will be used in the given design in a Mac. It would be weird if Altivec were just emulated and not run native anymore. A lot of the speed advantages would be burned by the emulation then...

Here is the link:

http://www.heise.de/newsticker/data/as-09.08.02-000/

groovebuster:(
 
Originally posted by IndyGopher

What kind/size monitor are YOU using? Every 19, 20, 21, and larger CRT I have seen has had a fan, with the exception of the NEC 29", but those are really just televisions, no matter what NEC says.

I'm sitting at a 21" Sony CRT, to my left is a 17" Iiyama VM Pro naturalflat CRT screen, downstairs is a 21" Mitsubishi CRT.

None of them have fans, I haven't seen a big screen with a fan for.. oooh.. 4-5 years?, and they weren't new at the time.
 
Originally posted by ddtlm
You are also in error about the nForce being used in the XBox, that's a GeForce4ti-like chip, not GeForce4mx-like.

He's sorta right; the Xbox chipset and Nforce are fairly closely related. The audio/networking hardware is the same, the video hardware on Nforce is scaled down, and the processor bus is GTL+ instead of EV6. Other than that they are pretty similar.
 
Re: You are SOOO wrong

Originally posted by dguisinger
I love reading comments on this board. You people say things that surely come right out of your ass. If you say something learn to back it up with facts. If you read something always question it. Some of the things said in response to this post are absurd.

" A 64bit PPC would require quite the massive motherboard redesign, and right there, without proper maintenance of OS 9 code, would end OS 9's compatibility with new Mac hardware. That's just the motherboard though. A 64bit processor would require a complete recompile of all code. Simple as that. Notice MS has different versions of XP and .Net server for Itanium."

Wrong. You can run an old version of Windows 2000 on an Itanium...it runs under legacy mode using IA32 instructions. Simple as that. Just as the PPC had the 68k legacy instruction set to run Mac applications from before the PPC introduction. The reason for a separate version of .NET server is to add 64-bit API calls for memory and other services that can take advantage of the 64 bits. You DON'T NEED to rebuild it. Not to mention, if Apple doesn't take advantage of the 64-bit processor off the bat, they don't have to do a thing to make it work.

Think about what you say.

Not entirely correct. The PPC ISA is a 64 bit ISA with 32 bit implementations, i.e. it was designed from the start to be used on 64 bit processors. In other words, the instructions used in 32 bit PPCs are also natively present in 64 bit PPCs. No emulation necessary. IBM states that the POWER4 will run both 32 and 64 bit AIX 5.1 kernels. Furthermore, it also states that 64 bit applications can run on the 32 bit kernel and vice versa, with very little performance degradation. The only caveat is that all kernel drivers for the 32 bit kernel must be 32 bit and all kernel drivers for the 64 bit kernel must be 64 bit. AIX 5.1 also breaks binary compatibility with AIX 4 64 bit binaries, but that seems to be an OS issue. What this means is that most likely this machine won't boot OS 9, though it may, depending on how much effort Apple spends on an enabler, but it will run Classic, because all Classic is to the OS is another application. Apple can even do the M68K emulation still.

P.S. The M68K emulation is not part of the PPC, it is an Apple addition.
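To make the "no recompile" point concrete: in 32 bit mode on a 64 bit PPC, the same instructions simply operate on the low 32 bits of the registers, so a 32 bit binary computes exactly the results it always did. Here is a rough sketch of the 32-bit wraparound semantics a 32 bit binary relies on (illustrative only, not PPC-specific code):

```python
MASK32 = 0xFFFFFFFF

def add32(a, b):
    """32-bit modular add: the result a 32-bit binary sees on any width of CPU."""
    return (a + b) & MASK32

# Wraparound is identical whether the registers are 32 or 64 bits wide,
# because 32-bit mode only ever exposes the low 32 bits of the result.
assert add32(0xFFFFFFFF, 1) == 0
assert add32(0x7FFFFFFF, 1) == 0x80000000

# What 64-bit mode adds is address space, not new instruction semantics:
assert 2**32 == 4 * 1024**3  # 32 bits: 4 GB addressable
```

That's why a 64 bit OS mostly needs new API calls for big memory, not a recompiled world.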
 
Re: Re: You are SOOO wrong

Originally posted by peterh


Not entirely correct. The PPC ISA is a 64 bit ISA with 32 bit implementations, i.e. it was designed from the start to be used on 64 bit processors. In other words, the instructions used in 32 bit PPCs are also natively present in 64 bit PPCs. No emulation necessary. IBM states that the POWER4 will run both 32 and 64 bit AIX 5.1 kernels. Furthermore, it also states that 64 bit applications can run on the 32 bit kernel and vice versa, with very little performance degradation. The only caveat is that all kernel drivers for the 32 bit kernel must be 32 bit and all kernel drivers for the 64 bit kernel must be 64 bit. AIX 5.1 also breaks binary compatibility with AIX 4 64 bit binaries, but that seems to be an OS issue. What this means is that most likely this machine won't boot OS 9, though it may, depending on how much effort Apple spends on an enabler, but it will run Classic, because all Classic is to the OS is another application. Apple can even do the M68K emulation still.

P.S. The M68K emulation is not part of the PPC, it is an Apple addition.

Which is exactly why I'm wondering why the hell people in this thread are talking about having to recompile the OS or applications for binary compatibility. C'mon guys...Sun has been doing this for years. The UltraSPARC is a 64 bit implementation of SPARC. You can run a SPARC32 binary on a SPARC64 processor with no problems. Granted, you won't get all the features of the 64bit environment, but your binaries will run unmodified, which means your investment in software is protected. Apple just needs to add 64bit support to certain core parts of the OS in order to enable large memory addressing, etc. It's really not that hard considering OS X's roots.

PowerPC was designed as a 64bit ISA from the ground up. I'm sure that in the ten or so years of the PowerPC's existence, Apple's had a chance to play with the 64bit ISA (I'm sure they have a few POWER4 machines kicking around too.)
 
Originally posted by ddtlm
Huh? You seem to be equating OS's with certain operations like floating point math and running databases well, which doesn't reflect reality. The OS can influence overall performance somewhat, and filesystem can influence performance, but at the core of it you need competent hardware. (Oh, and Irix is the worst OS I have ever interacted with.)

Well, AIX is designed to run on processors with strong floating point. It's designed for mass calculation on RISC chips. Solaris, which runs on SPARC (also RISC), is a very stable, slow OS. Here is the thing: if you were building a standard Oracle database, I would say Sun, because for simple placement of files and data management you want a system that is stable and has less chance of error.

If you were doing some higher end searching, a lot of calculations, and a lot of higher end processing, I would go AIX on Power4 or another IBM RISC chip.

It does reflect the real world.

Also, IRIX is a pain. Powerful, but just a pain. As for worst, I don't know; it really depends on what your classification for bad is.

IRIX is very clean, but very, very complicated. Yet powerful.

And they also changed a lot of the standard apps >_<
 
Chip Not for Apple???

What is difficult to believe is that this very G5-like processor was developed for an application other than a Macintosh computer. Maybe we are to believe that Steve Jobs knew nothing about it? Unlikely, in my opinion. Considering the cost of doing this chip, there is a specific, big application for it, like the next PowerMac.

So IBM wouldn't talk about it before Apple? True. That is why I think Apple will announce the G5 before October 15, so IBM is free to discuss the chip in detail. There is no rule about what stage of development a chip must be at for IBM to talk about it at this forum. It could be in pilot production. They would likely not talk about it in the early stages of development, which would tip their hand too soon.

Someone looked up patent information on AltiVec, and he claims that all three parties are on the original patents, Motorola, Apple and IBM. He further claims that later patents have only IBM on them. If true, it suggests that IBM has rights to it, and has continued to develop this vector processor on its own. The vector processor in the G5 may be second generation.
 
CNET ARTICLE...

Maybe this article will clear things up a bit.

http://news.com.com/2117-1001-949030.html

"Network equipment and other communications gear is the most likely destination for the new PowerPC, as the bulk of existing PowerPCs are used there. However, IBM is also wooing Apple Computer, sources familiar with the chip said. The company is in a constant tug of war with Motorola, which makes most of the PowerPC chips slotted into Macs, for Apple's business.

The new chip offers significantly higher performance than IBM's current desktop PowerPC 750 and so could provide Apple with a performance boost if used in future desktop computers, the source said. "
 
Re: Re: Re: New thought....

Originally posted by kenohki


Well, the journaling file system will not happen until Apple either extends HFS+ or gets application developers to stop using metadata features of HFS+ to the point where they could move to an existing journaling filesystem.

Isn't it fairly trivial to support yet another file system? They already support CD and DVD formats, HFS, HFS+, Wintel, and Unix by being able to mount, read and write those formats. Why is a journaling file system any more difficult to believe can happen? In fact, isn't it an element of the BSD distribution on which OS X is based anyway?

And as for IBM vertical market applications for unix/linux, it seems to me unix is unix to a large degree and the porting issues are fairly minimal.

Rocketman
 
interesting link to Cnet regarding "Book E" from 1999:-

http://news.com.com/2100-1040-225442.html?legacy=cnet

IBM and Motorola noted that one of the main advantages of the new "Book E" architecture is that it will make it easier for customers to migrate from 32-bit to 64-bit PowerPC designs, executives said. Applications designed for the current PowerPC architecture will run on 64-bit versions of the chip, although no specific products have been announced yet.

Another advantage, said Tom Sartorius, senior engineer for PowerPC products at IBM, is that the new design allows for the easy addition of what he calls "application-specific processing units." These can act as specialized co-processors such as a digital signal processor or multimedia playback accelerator, but are on the same piece of silicon.

If IBM did indeed continue with their co-implemented "Book E" standards, including AltiVec, then wouldn't it be interesting to see new DSP and multimedia acceleration 'on the same piece of silicon'? :rolleyes:
 
Re: CNET ARTICLE...

Originally posted by chubakka
Maybe this article will clear things up a bit.

http://news.com.com/2117-1001-949030.html

"Network equipment and other communications gear is the most likely destination for the new PowerPC, as the bulk of existing PowerPCs are used there...."

The new IBM chip is NOT an embedded chip. Just because a chip carries the label "PowerPC" doesn't mean anything. The PowerPCs used in Macs are NOT used for embedded applications, and the PowerPCs used for embedded applications are NOT used in Macs. This doesn't mean they can't be, just that there are different requirements for each application. The design of the new IBM processor has Mac written all over it.

That author is a dolt.
 
embedded?

He mentions network and communications in one sentence.


Also... his source STATES that IBM is wooing Apple to use the chip.
 
Re: Re: Re: Re: New thought....

Originally posted by Rocketman


Isn't it fairly trivial to support yet another file system? They already support CD and DVD format, HFS, HFS+, Wintel, and Unix by being able to mount, read and write those formats. Why is a journaling file syatem any more difficult to believe can happen. In fact isn't it an element of BSD distribution on which OSX is based anyway?

Yes, it is easy for the OS to add support, but it can break Macintosh applications. HFS and HFS+ have extra metadata in the filesystem that is used to set type/creator codes and provide the resource fork. Traditional Mac applications have a data fork and a resource fork (used to store resource data and accessible through ResEdit). The reason encoding schemes like BinHex and MacBinary were created is that other filesystems can't handle the resource fork of Macintosh applications like HFS and HFS+ can. They strip out the resource fork, leave the data fork, and thus you are left with an unusable application.

Apple's new guidelines for developers are to stop using the resource fork and to move resources into .rsrc files which should typically be hidden within application bundles. Apple would also like everyone to stop using the four character type/creator codes embedded in the HFS/HFS+ metadata and use a dot notation file extension like Windows. However, there are developers who are not following these guidelines to a T. Also, the Classic environment as well as most Classic applications require the resource fork and usually at least like to have type/creator codes.

So in short, yes, you could add support for whatever you want, FAT32, NTFS, HPFS, EXT2...whatever. But it would break your apps so that's why we haven't migrated yet.
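For the curious, type/creator codes are nothing exotic: each is just four ASCII bytes stored in the HFS/HFS+ catalog, which the Mac Toolbox treats as a big-endian 32-bit integer (an OSType). A quick illustration, using the well-known SimpleText codes as the example:

```python
# A type/creator code is four ASCII bytes, conventionally shown as text
type_code = b"TEXT"     # file type: plain text
creator_code = b"ttxt"  # creator: SimpleText, the classic Mac text editor

def ostype(code: bytes) -> int:
    """Pack a four-character code into the big-endian 32-bit OSType form."""
    assert len(code) == 4
    return int.from_bytes(code, "big")

# 'T'=0x54, 'E'=0x45, 'X'=0x58, 'T'=0x54
assert ostype(type_code) == 0x54455854
```

Since the code lives in filesystem metadata rather than in the file's bytes, copying the file to a foreign filesystem silently drops it, which is exactly the migration problem described above.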
 
Originally posted by ddtlm
alex_ant:

When Intel needed a Celeron, they took cache from a Pentium. When AMD needed a Duron, they took cache from an Athlon. Messing with the execution core is just too difficult.

Not even the things Intel did to the Celerons ever crippled them as much as you suggest IBM will do to the Power4.



Wouldn't IBM have to mess with the core to include a vector processor? Or maybe they could include a vector coprocessor core? (Like in the old days when we had a separate FPU coprocessor.) Is that possible? The Power4 is designed to schedule instructions to more than one core. They could keep one core, replace the other with a vector processor, and reduce the cache. :D
 
Re: Re: CNET ARTICLE...

Originally posted by Faeylyn
The PowerPCs used in Macs are NOT used for embedded applications

I was under the impression that some high end routers had 7455's in them?
 
Re: Re: Re: Re: Re: New thought....

Originally posted by kenohki


Apple's new guidelines for developers are to stop using the resource fork and to move resources into .rsrc files which should typically be hidden within application bundles. Apple would also like everyone to stop using the four character type/creator codes embedded in the HFS/HFS+ metadata and use a dot notation file extension like Windows. However, there are developers who are not following these guidelines to a T. Also, the Classic environment as well as most Classic applications require the resource fork and usually at least like to have type/creator codes.

So in short, yes, you could add support for whatever you want, FAT32, NTFS, HPFS, EXT2...whatever. But it would break your apps so that's why we haven't migrated yet.

With all computers being on a network at one time or another now, couldn't there be a central database on an Apple server that provides "equivalent type and creator codes" for files and applications as they arise? There could be a whole hackerbase making and updating those codes as a back channel operation, sort of against the will of the original Unix/Windows authors.

Likewise, Apple could assign .3 and .4 codes to common Mac applications and files (as it has already largely done).

In short, it would be like a super version of the older DOS compatibility built into some older Macs.

By using a central server for maintaining the codes, any new discovery updates it for all users, and all users would be using the SAME .3 and .4 codes, UNLIKE either Wintel or Unix now. Making it yet another Mac specific, seamless, user friendly aspect, yet FULLY COMPATIBLE with you know who.

Rocketman

Hi Steve Jobs!
Send me a share of Pixar.
 
Re: Re: Re: Re: Re: Re: New thought....

Originally posted by Rocketman


With all computers being on a network at one time or another now, couldn't there be a central database on an Apple server that provides "equivalent type and creator codes" for files and applications as they arise? There could be a whole hackerbase making and updating those codes as a back channel operation, sort of against the will of the original Unix/Windows authors.

Likewise, Apple could assign .3 and .4 codes to common Mac applications and files (as it has already largely done).

In short, it would be like a super version of the older DOS compatibility built into some older Macs.

By using a central server for maintaining the codes, any new discovery updates it for all users, and all users would be using the SAME .3 and .4 codes, UNLIKE either Wintel or Unix now. Making it yet another Mac specific, seamless, user friendly aspect, yet FULLY COMPATIBLE with you know who.

Rocketman

Hi Steve Jobs!
Send me a share of Pixar.

Whoa, the resource fork is not just type/creator codes. Bundle bits, icons, menus, dialogs, and code snippets are all stored in the resource fork. A big giant "hackerbase" on a central server would constitute software piracy, I'm sure, by Adobe's or Quark's standards.
 
Ok, let me see if I'm right... A 32 bit OS or App could run on a 64 bit chip, but it would only process 32 bits at a time, meaning the processor is running at a 50% performance level.

Now, to have a 32 bit app take advantage of a 64 bit processor, would it be possible to have the processor put two 32 bit commands in each clock cycle? Isn't this how Apple was able to say that the Velocity Engine could process data in 128 bit chunks, because in reality, it just does four 32 bit chunks per cycle?

So, 32 bit apps will work with a 64 bit chip, right? Well, if an app was to take true advantage of that chip, rather than just work with it, it would have to be changed to a 64 bit app, right? But this change would be only to optimize it, because it would run ok without it... This is how I see it, based on some of the stuff I've been reading here...
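The Velocity Engine intuition above is roughly right: a 128-bit AltiVec register is treated as lanes, for example four 32-bit words, and one instruction (like vadduwm, "vector add unsigned word modulo") operates on all four lanes at once. A rough lane-wise sketch in Python, purely to illustrate the idea:

```python
import struct

def vadduwm(x: bytes, y: bytes) -> bytes:
    """Lane-wise modular add of four 32-bit words, AltiVec vadduwm style."""
    lanes_x = struct.unpack(">4I", x)  # split 128 bits into four 32-bit lanes
    lanes_y = struct.unpack(">4I", y)
    return struct.pack(">4I", *((a + b) & 0xFFFFFFFF
                                for a, b in zip(lanes_x, lanes_y)))

a = struct.pack(">4I", 1, 2, 3, 4)
b = struct.pack(">4I", 10, 20, 30, 40)

# One 128-bit operation performs four independent 32-bit adds
assert struct.unpack(">4I", vadduwm(a, b)) == (11, 22, 33, 44)
```

Note the difference from general 64-bit execution, though: SIMD lanes are independent data elements processed by one instruction, not two separate 32-bit instructions issued per cycle.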
 