Originally posted by KingArthur
I understand everyone's concern about the lack of an AltiVec 128-bit subprocessor, but you have to realise, if the G5 is a true 64-bit processor, then we are talking twice the data in one cycle for everything BUT vector processing (which would be halved).
Um, that's not how it works. It is true that a 64-bit processor can handle a 64-bit word, which is obviously twice as long as a 32-bit word. However, each integer unit can still only process one word at a time. The only difference is that the words are larger, so you get extra precision, which most people don't need (in fact, the main advantage of a 64-bit processor is that 64-bit addressing allows you to have much more memory than a 32-bit chip, which IIRC is limited to 4 GB). It does not mean that the words are processed any faster. Let me repeat that: there is no inherent reason why a 64-bit chip should be much faster than a 32-bit chip (you can't just merge two 32-bit words into one 64-bit word and pretend that's like processing them separately...that's like saying you can solve the two expressions (1 + 4) and (6 + 7) by solving the single expression (16 + 47). You get 63, but the real answers are 5 and 13; the carry from 6 + 7 spills over and corrupts the other sum. It doesn't work that way!). The 64-bit chip is just able to access a lot more memory.
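If you want to see why the "just pack two values into one wider word" trick breaks down, here's a rough C sketch (purely illustrative - the 16-bit packing and the values are just my own example, not how any real chip schedules its work). As soon as the low half carries, the high half gets corrupted:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two independent 16-bit additions we'd like to do "at once". */
    uint16_t a_lo = 0xFFFF, a_hi = 1;
    uint16_t b_lo = 1,      b_hi = 2;

    /* Pack each pair into a single 32-bit word: high half | low half. */
    uint32_t a = ((uint32_t)a_hi << 16) | a_lo;
    uint32_t b = ((uint32_t)b_hi << 16) | b_lo;

    uint32_t sum = a + b;                  /* one 32-bit addition */

    uint16_t lo = (uint16_t)(sum & 0xFFFF);   /* 0: the low add wrapped around */
    uint16_t hi = (uint16_t)(sum >> 16);      /* 4: WRONG, 1 + 2 should be 3,
                                                 but the low half's carry leaked in */
    printf("hi = %u (expected 3), lo = %u\n", (unsigned)hi, (unsigned)lo);
    return 0;
}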
AltiVec, in contrast, is a 128-bit wide vector unit. The key difference with AltiVec is that it doesn't process a single 128-bit wide word. Instead, it processes either four 32-bit words, eight 16-bit words, or sixteen 8-bit words. This is why Apple can claim it is "up to 16 times faster" for some instructions. Specifically, if you're applying the same instruction to a bunch of 8-bit words (and it must be the same instruction - that's why it's called "Single Instruction, Multiple Data"), then it will take you at least 16 cycles on the 32-bit integer unit (note that we'll only be using 8 bits of each 32-bit word - the rest will essentially just be unused space). In contrast, AltiVec can take the sixteen 8-bit words and process all of them in one cycle: hence it can be up to 16 times faster. Note that with a 64-bit processor, it would still take 16 cycles - not any faster than the 32-bit processor (the only difference is that we'd have 56 bits of unused space for each word that was processed rather than 24 bits of unused space).
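To make that concrete, here's a rough sketch using the AltiVec C intrinsics (assuming a compiler with AltiVec support, e.g. GCC with -maltivec on a G4; the data values are just made up for illustration). The scalar loop needs sixteen separate additions no matter how wide the integer unit is, while the vector add does all sixteen in one instruction:

#include <altivec.h>   /* AltiVec C intrinsics */
#include <stdio.h>

int main(void) {
    unsigned char in[16]  __attribute__((aligned(16))) =   /* GCC-style alignment */
        {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
    unsigned char out[16] __attribute__((aligned(16)));

    /* Scalar version: sixteen separate add operations, one per element.
     * A 64-bit integer unit would need exactly as many. */
    for (int i = 0; i < 16; i++)
        out[i] = in[i] + 1;

    /* AltiVec version: all sixteen 8-bit adds happen in ONE instruction. */
    vector unsigned char v    = vec_ld(0, in);        /* load 128 bits */
    vector unsigned char ones = vec_splat_u8(1);      /* 1 in every 8-bit element */
    vector unsigned char sum  = vec_add(v, ones);     /* sixteen adds at once */
    vec_st(sum, 0, out);                              /* store 128 bits */

    for (int i = 0; i < 16; i++)
        printf("%u ", (unsigned)out[i]);
    printf("\n");
    return 0;
}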
Perhaps a concrete example would help. Say that you have eight numbers: 500, 501, 502, 503, 504, 505, 506, and 507, and that you decided to use a 16-bit word to store each of them. Now assume that the operation you want to perform is to add 1 to each of these numbers. With a 32-bit chip, this would be eight separate operations: first you'd compute 1 + 500, then 1 + 501, then 1 + 502, and so on until 1 + 507. The 64-bit chip would do the same thing - the fact that it can handle 64-bit words (which are unnecessary for this problem) doesn't help you at all (what are you going to do, try to merge the first two operations by computing 1 + 500501 or something?). But AltiVec does help you, because in a single operation it can take the vector [500 501 502 503 504 505 506 507] and add one to each element, spitting out [501 502 503 504 505 506 507 508]. That's exactly the output you wanted, and it only took a single operation!
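Here's that same example as a rough AltiVec sketch (again assuming a compiler with AltiVec support; this is just an illustration of the idea, not tuned code). The eight 16-bit values fit exactly into one 128-bit vector register, so a single vec_add does all eight additions:

#include <altivec.h>
#include <stdio.h>

int main(void) {
    unsigned short nums[8] __attribute__((aligned(16))) =
        {500, 501, 502, 503, 504, 505, 506, 507};
    unsigned short out[8]  __attribute__((aligned(16)));

    vector unsigned short v    = vec_ld(0, nums);     /* all eight values in one register */
    vector unsigned short ones = vec_splat_u16(1);    /* 1 in every 16-bit element */
    vector unsigned short sum  = vec_add(v, ones);    /* eight adds in a single operation */
    vec_st(sum, 0, out);

    for (int i = 0; i < 8; i++)
        printf("%u ", (unsigned)out[i]);              /* 501 502 503 504 505 506 507 508 */
    printf("\n");
    return 0;
}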
Now, where the 64-bit chip would help you is if you were working with really, really, really big numbers. Say you had the number 2^42 and you wanted to add 1 to it. You can't represent 2^42 in a single 32-bit word (the largest unsigned 32-bit value is 2^32 - 1, a bit over 4 billion), so the 32-bit integer unit obviously would not be able to carry out that operation in a single cycle. But the 64-bit processor would have no problems with such a large number, because it can handle 64-bit words (also note that AltiVec probably would have problems, since I believe the largest integer word it takes is 32 bits...it's just that it takes up to four of them at a time). However, most personal computer users have no need for such large numbers, so a 64-bit processor really doesn't help them very much (and at any rate, all of the floating point units already handle 64-bit "double precision" floats).
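For completeness, here's a small C sketch of that case (the 2^42 value comes from the example above; the rest is just illustration). A 64-bit integer type holds the number in one word, while a 32-bit unit has to split it across two words and propagate a carry between them:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* 2^42 is about 4.4 trillion - far too big for a 32-bit word. */
    uint64_t big    = (uint64_t)1 << 42;
    uint64_t result = big + 1;             /* one add on a 64-bit integer unit */

    /* On a 32-bit unit the same value has to be split into two words,
     * and the addition has to propagate a carry between them. */
    uint32_t lo = (uint32_t)(big & 0xFFFFFFFFu);
    uint32_t hi = (uint32_t)(big >> 32);
    uint32_t new_lo = lo + 1;
    uint32_t new_hi = hi + (new_lo < lo);  /* add the carry out of the low word, if any */

    printf("64-bit:      %" PRIu64 "\n", result);
    printf("32-bit pair: hi=%u lo=%u\n", new_hi, new_lo);
    return 0;
}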
As for all of the people who think the G5 is just around the corner, I wouldn't hold your breath. I don't have any inside information, but a little common sense seems in order. Remember, most of the G5 rumors came from a single source, quoted repeatedly on both The Register and Mac OS Rumors. But that source clearly lacks credibility, since among other things he/she claimed that the G5 would definitely debut at MWSF (which it did not, unless I really missed something in the Keynote). Furthermore, do you really think it's likely that the G5 would be released *before* the Apollo G4?? I think there's a good chance we will see the G5 before the year is out, but I would be really surprised if it happened anytime in the next few months. Hopefully we will see the Apollo G4 very soon, however. It's clear that Apple needs to bolster the G4 Pros vis-a-vis the G4 iMac - the only question is whether the Apollo will be as fast as rumored (up to 1.4 GHz). If it is that fast, I certainly wouldn't advise people to wait around for the G5, which probably wouldn't be released until at least the fall, and at any rate will have unknown capabilities (remember, the only performance data we have on it came from the now-discredited Register source).