Originally posted by 123
What hardware? And what's the problem there?
Oh no, this is a bit of a can of worms if I don't know your background, so here's some quick info about big-endian vs little-endian (stolen from another website; the scary thing is Microsoft actually has a really good explanation, but I fear it might be a bit heretical to post a link to a Microsoft site).
"Bi-endian
There are two possible byte ordering conventions for representing scalar (that is, non-string) quantities in system memory. Big-endian byte ordering places the most significant byte of the scalar at the lowest memory address, whereas little-endian byte ordering places the least significant byte of the scalar at the lowest memory address. Virtually all CPUs support either one convention or the other, but a CPU that can be configured to handle both schemes is called bi-endian. The PowerPC is bi-endian in nature, which allows it to execute both big-endian 680x0 binaries and little-endian x86 binaries under emulation, without running afoul of byte-ordering conflicts.
Big-endian
A term specifying a byte-ordering convention for scalar data items. In the big-endian scheme, the most significant byte of a scalar is stored at the lowest memory address, the next most significant byte of the scalar is stored at the next higher address, and so on. The Motorola 680x0 line of CPUs is big-endian in nature. The PowerPC, while defaulting on power-up to big-endian mode, is in fact bi-endian and can operate either in big-endian or little-endian mode as required.
Little-endian
A term specifying a byte-ordering convention for scalar data items. In the little-endian scheme, the least significant byte of a scalar is stored at the lowest memory address, the next most significant byte of the scalar is stored at the next higher address, and so on. The Intel x86 line of CPUs is little-endian in nature."
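If it helps to see that concretely, here's a tiny C sketch I'm adding (just an illustration, nothing platform-specific) that dumps the bytes of one 32-bit value starting from the lowest memory address. On a big-endian machine it prints 0A 0B 0C 0D; on a little-endian machine it prints 0D 0C 0B 0A.
[code]
#include <stdio.h>

int main(void)
{
    unsigned int value = 0x0A0B0C0D;              /* one 32-bit scalar */
    unsigned char *bytes = (unsigned char *)&value;

    /* Walk memory from the lowest address upward and print each byte.
     * Big-endian (680x0, PPC in its default mode): 0A 0B 0C 0D
     * Little-endian (x86):                         0D 0C 0B 0A        */
    for (int i = 0; i < (int)sizeof value; i++)
        printf("byte at offset %d: %02X\n", i, (unsigned)bytes[i]);

    return 0;
}
[/code]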
[gross simplification]
So the deal is that when things were moved to the PPC, Apple had the choice to go little-endian, but since it had history with the Motorola 680x0 line, it chose to stay big-endian. On PCI and AGP buses, this affects how information is transported between the system and the card. Say you want to describe a circle to the monitor through a graphics card: on an x86 system, the system sends
0 1 2 3 4 5 6 to the AGP slot, and that goes straight to the GPU for rendering. On a PPC (big-endian) system, the info going to the AGP slot needs to be "flipped," so the system sends
6 5 4 3 2 1 0 to the AGP, and the GPU flips it back to something it can understand before using it.
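To put that "flipping" in code terms, the translation layer has to reverse the byte order of each word, roughly like this little C sketch (swap32 is just a name I made up for the illustration; it's not any vendor's actual routine).
[code]
#include <stdio.h>

/* Reverse the byte order of one 32-bit word -- conceptually what has to
 * happen to each scalar when it crosses between a big-endian host and
 * little-endian-ordered hardware (a gross simplification, like the post). */
unsigned int swap32(unsigned int x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}

int main(void)
{
    unsigned int word = 0x01234567;
    printf("%08X -> %08X\n", word, swap32(word));  /* 01234567 -> 67452301 */
    return 0;
}
[/code]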
What card makers do is have a ROM (or some other embedded system, depending on how complicated the info is) translate how the card receives the info. This is why Apple requires different hardware from the same manufacturers that make PC hardware (and why flashing ROMs sometimes works). Performance hits can also occur if the hardware manufacturer is lazy, which is when you'll notice the exact same GPU running slower on a PPC than on x86. If Apple went little-endian, all the x86 hardware would become available to Macs (as long as it's physically compatible, of course). All that would really need to be done is a software driver for OS X, and since hardware manufacturing costs are the big reason specialty cards aren't developed for smaller niche platforms like the Mac, software development would be a negligible cost to open up another market.
This may seem of little concern for consumer products, but it is one of the bigger concerns for the scientific and graphics/video/audio markets if they want to move to Macs.
[/gross simplification]
Hope that makes sense. My DEC Alpha (a 64-bit RISC processor) is also bi-endian like the PPC. If I start it in VAX/VMS it runs big-endian; if I start it in WinNT it runs little-endian so that it can transparently use x86 code. The system board chipset manages what gets sent to add-on boards on the buses, so the hardware doesn't have to care which system is running at the moment.