Direct from Motorola's web site... check out http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=02M0ylfVS0lM0ypLRtk6

All of their MPC74xx processors (aka G4) have a 64 bit bus interface.

Continue to insist that they are 32 bit if you want... Even Motorola states that the Altivec technology makes the processor 128 bit.

I intend to call Apple directly tomorrow and put the question to them. I am not calling sales, but the real techs. We have real resources available to us at work, not the normal public ones.
 
What the freakin' heck

You people don't know anything. The G4 is 32-bit entirely, just like a P4 or K7. It has a 128-bit AltiVec unit that ONLY WORKS ON OPTIMIZED CODE, which most isn't. Also, a 64-bit processor is not automatically any faster than a 32-bit processor. What, do you think the processor cuts your word size in half and distributes it over the entire core? NO. How can you be so misled? I can't believe it. The Macintosh is an amazing machine, people, but you have no IDEA about the technology you are using. Read ARS TECHNICA before you flame yourself!
 
For the record, when I refer to IA-32, that has nothing to do with Motorola's processors. Please don't confuse my posts with that, I'm merely trying to enlighten PC folks as to the truth about their processors.

And for the last proof. The processor does understand math of the sort I am saying. In fact, you should, too. It's called scientific notation. Say you had the numbers 1.57 e 9 (that's 1.57 times 10 to the power of 9) and 3.62 e 7.

Try multiplying them. As anyone who knows math will tell you, it's simple. Sure, to us it may seem difficult (1570000000 x 36200000), but it's not. Here's the answer: 5.6834 e 16.

How do you come up with that? Multiply 1.57 and 3.62. Then add their powers (9 + 7). Ta-da! That's how floating-point math works inside the IA-32 processors. It doesn't look at the huge 64-bit number or 80-bit number as one giant value BECAUSE IT'S A 32-BIT PROCESSOR. It takes the significant digits (about 15 of them) and works on those, and works with the powers separately.
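The multiply-the-mantissas, add-the-exponents trick described above is easy to sketch in code. This is a toy model of the idea, not of how an FPU actually stores bits:

```python
def sci_mul(m1, e1, m2, e2):
    """Toy scientific-notation multiply: multiply the mantissas,
    add the exponents, then renormalize the mantissa into [1, 10)."""
    m = m1 * m2
    e = e1 + e2
    while m >= 10:
        m /= 10
        e += 1
    return m, e

m, e = sci_mul(1.57, 9, 3.62, 7)
print(m, e)  # approximately 5.6834 and 16, matching the worked example
```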

There is a lot more to scientific notation (like dividing and such, and making sure that the numbers are within the range of 0 to 1), but this isn't math class.

So just because you don't know scientific notation doesn't mean the processor doesn't.

(You see, a 64-bit number can support numbers with up to 307 zeroes, but by the common misconception it could only show NINETEEN zeroes. That's why the processor uses scientific notation. In fact, the 80-bit format can support up to around 4000 zeroes, but according to the misconception method, only TWENTY-FOUR. See, 2^64 only has 19 zeroes, and 2^80 only has 24 zeroes.)

(If you are referring to the "wide" integer, that is what sees the 64-bit as a straight number, and the 80-bit as a straight number, but that is 17 digits... WITHOUT A DECIMAL PLACE, because it is an integer.)
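The huge-exponent-versus-straight-integer contrast above is easy to check (assuming IEEE 754 doubles, which is what a 64-bit float is on these machines):

```python
import sys

# 2^64 as a straight integer is only about 20 digits long...
print(len(str(2**64)))     # 20 (2^64 is about 1.8 e 19)

# ...but a 64-bit float spends some of its bits on an exponent,
# so it reaches all the way out to about 1.8 e 308.
print(sys.float_info.max)  # about 1.8 e 308

# The price: only ~15 decimal digits of the number are actually kept.
print(sys.float_info.dig)  # 15
```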
 
Actually.. the G4 is a 74xx processor... 7400 being used in the G4 AGP graphics and 7410 being used in the TiBook (at least the Rev A).

This is 100% accurate, confirmed on both computers under two minutes ago, for all you naysayers. Unless you have one of those in front of you and can prove 100% that they are not those... you know what you can do.
 
Also, support for 64-bit data bus (which the G4 has) does not make it a 64-bit processor. It is still a 32-bit processor internally with the Altivec engine being 128-bit.
 
"uhh....so if one can give a number to a higher number of decimal places then that is not an increase in precision??! "

No, it's not an increase in PRECISION. It's an increase in ACCURACY. An increase in PRECISION is where you have more DIGITS (5, 3, etc). To have an increase in decimal places (where after 15 decimal places they all are zero) does not increase PRECISION.

The 64-bit is more precise (btw, "double precision" means 64-BIT, not 80-bit) than the 32-bit, but the 80-bit is not more precise than the 64-bit. It's just more accurate.
 
Oops, that last post was by me (but on a different machine where I wasn't logged in). Sorry 'bout that!

If you have the numbers:

3.456765345654123 e 306
3.456765345654123 e 2978

they are the same for precision. The larger one (80-bit versus 64-bit) is more ACCURATE, but it is not more PRECISE. Precision deals with how many significant digits there are (the rules for that are a bit long-winded, but basically any zeroes after the last non-zero digit after a decimal place are NOT considered to make a number more precise). Accuracy deals with whether the number is close or not. Precision deals with whether the number is EXACT or not.

If you have the numbers:

3.456765345654123 e 306
3.4567653456541234 e 2978

then the second one would be more precise. Do you see what I mean?
 
Originally posted by Unregistered
"uhh....so if one can give a number to a higher number of decimal places then that is not an increase in precision??! "

No, it's not an increase in PRECISION. It's an increase in ACCURACY. An increase in PRECISION is where you have more DIGITS (5, 3, etc). To have an increase in decimal places (where after 15 decimal places they all are zero) does not increase PRECISION.

The 64-bit is more precise (btw, "double precision" means 64-BIT, not 80-bit) than the 32-bit, but the 80-bit is not more precise than the 64-bit. It's just more accurate.

Do you have it backwards? Or are you knocking the G4? I'm confused.

more Accuracy is more correct
more Precision is more specific (not necessarily correct)

Example: I am 18 years old and my birthday is during the summer, so the statement
"I am 18.5 years old"
is very accurate but not very precise, whereas the statement
"I am 18.7746532 years old"
is very precise yet extremely inaccurate. The statement
"I am 18.55583 years old"
is both accurate and precise.
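The age example can be put in numbers. The "true" age here, 18.5558, is a hypothetical value picked to match the post:

```python
# Hypothetical true age, chosen so 18.55583 is both close and detailed.
true_age = 18.5558

for claim in (18.5, 18.7746532, 18.55583):
    digits = len(str(claim).replace('.', ''))  # crude precision: digit count
    error = abs(claim - true_age)              # accuracy: distance from truth
    print(f"{claim}: {digits} digits, off by {error:.5f}")
```

18.7746532 carries the most digits but lands farthest from the truth, which is exactly the precise-but-inaccurate case above.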


So you're saying that the G4 is not very correct but is very specific? That's a bad thing for the G4.

You also may have confused readers: you seemed to say that a larger number is more accurate (3.blabla e 2xxx compared to 3.blabla e 3xx), whereas if the number is supposed to be a smaller number, like 3.4 e 456, then the first one would be more accurate and just as precise. In my example with my age, the larger number 1.87746532 e 1 is inaccurate compared to the smaller number 1.85 e 1.

Also, doesn't the P4 just convert the numbers to 80-bit and then back into 64-bit after it's done messing with them?

Anyway, I don't imagine either of them is too imprecise or too inaccurate.
 
For the record:
My understanding is that both the P's and the G's are all 32-bit processors.
They each have separate sizes of ins and outs, and the G4's AltiVec is 128-bit.
I figure that a 64-bit processor does have many advantages over the 32-bit architecture, but I'm not too concerned, since right now all the chips I'm worried about are 32. I'll go do some research on the differences between 32- and 64-bit processors.
See you all later
:)
 
Actually, let's clear this all up right now:

I did not mean the larger number in all cases. I meant the larger format, as in 80-bit numbers (not larger as in a bigger value), is more accurate in measuring huge numbers. Sorry for any confusion.

Now, on the question of accuracy versus precision: it depends on what you used to measure those numbers. The precise one, if done with improper measurements, is not correct. BUT if the precise one is done with perfect measurements, it IS correct. The idea that accuracy is better is only due to the fact that we can only measure to a certain point with perfect precision, and after that it is just a good guess. In the world of math, more precision is better. In the real world, a balance of precision and accuracy is better.


These are not all the rules of precision, only a portion that has been summarized:

"Any zero preceding non-zero digits before a decimal point does not count towards precision. (ie 003 == 3)

Any zero following all non-zero digits after the decimal point does not count towards precision. (ie .300 == .3)

Any zero after the non-zero digits in a number, without a decimal, is not counted towards precision. (ie 300 has the same precision as 3).

Any zero after the decimal place but before non-zero digits is not counted towards precision. (ie .003)

Any zero after the non-zero digits in a number WITH a decimal IS counted towards precision. (ie 300. is more precise than 300).

Any zero between non-zero digits IS counted. (ie 303 or .303)"
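The quoted rules can be turned into a little counter. This sketch follows the rules exactly as stated above; note that the second rule (.300 == .3) differs from the usual significant-figures convention, where trailing zeroes after a decimal point do count:

```python
def sig_figs(s):
    """Count 'precision' digits of a decimal string, per the rules
    quoted above (ignores edge cases like '0' or exponents)."""
    s = s.lstrip('0')                      # leading zeroes never count
    if '.' not in s:
        return len(s.rstrip('0'))          # 300 counts the same as 3
    int_part, frac = s.split('.')
    frac = frac.rstrip('0')                # the quoted .300 == .3 rule
    return len((int_part + frac).lstrip('0'))  # .003 counts as 3

print(sig_figs('003'))    # 1
print(sig_figs('300'))    # 1
print(sig_figs('300.'))   # 3 (the decimal point makes the zeroes count)
print(sig_figs('.300'))   # 1, per the quoted rule
print(sig_figs('303'))    # 3 (zeroes between non-zero digits count)
```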


Sorry, I didn't mean to confuse you. I'm knocking the idea that the 80-bit number is more "precise" than the 64-bit number. (An idea from before in the thread.) Both have the same number of digits (15), the only difference being the number of decimal places the 80-bit number can say there are (approx 4000, which is 3985 zeroes, which, as you can see from above, does not add to the "precision" of the number).
 
This thread is TEH FUNNY!

"g4s are 64 bit!! yah!! 64 bit math is no better than 32 bit math!! "
"I knoW AsSeMblAr! and YUO only No PaScal! " "I rite for ArseTechnica And No Stuff! "


hey arent G4s really 128 bit? ;)

I give up. No really, I do. :)
 
Sorry, that was a tad harsh, I know.

anyway - some of us actually have a use for all those numbers after a decimal place. "64-bit computation: Having larger registers for holding integer and floating point data allows for an increased dynamic range.   The dynamic range of a number format is just the range of values, from the lowest to the highest, that it can hold.  Not too many mainstream programs use integers or floating-point values that are outside of the dynamic range available in a 32-bit system (we're talking really large numbers here), but it does happen. " An example being the maths my bank does on my overdraft.....

But the real deal with a 64 bit computer is more RAM than you can shake a stick at, and much much bigger file sizes.....
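The bank example above is really about integer range: if money is kept in pennies, a signed 32-bit integer runs out surprisingly early. A hypothetical illustration, not how any particular bank stores balances:

```python
# Balances kept as whole pennies in a signed integer:
INT32_MAX = 2**31 - 1
INT64_MAX = 2**63 - 1

# A 32-bit penny counter caps out near $21.5 million...
print(INT32_MAX // 100)   # 21474836 (dollars)

# ...while a 64-bit one reaches about 9.2 e 16 dollars.
print(INT64_MAX // 100)
```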

anyway - here's a light touch on the 128 vs. 64 vs. 32-bit posturing of machine owners.

http://www.actsofgord.com/Chronicles/chapter18.html
"The Dreamcast is 128-bit."

"No it's not.  It's 64-bit."

"It's 128-bit!"

"Really?  And why is that?"

"It just is."

"I see.  Ok, it's got a 64-bit CPU.  64-bit GPU.  64-bit databus.  In fact the entire machine is 64-bit or less except the geometry sub-processor on the GPU.  Even then, it's only 128 bit for internal math.  It still talks to the rest of the machine 64-bits at a time."

"So it's 128-bit!"

"Not by any measuring stick that the world is using.  Unless you feel the Genesis also had 'blast processing.'  Also, the Nintendo 64 was the exact same way.  64-bit system with a 128-bit geometry sub-processor."

"The Nintendo 64 is 64-bit, the Dreamcast is 128-bit!"

"Just because you say it is doesn't make it so."

"Sega wouldn't lie."

"That's right.  They would never do that.  Do you live in a cave?  Sega is Japanese for compulsive liar."

"Then why does the Dreamcast look so much better than the 64?"

"Because the Nintendo 64 sucks ass.  And since the DC came out 3 years after it, it had damn well better be a lot better."

"Since part of the machine is 128-bit, it's 128-bit."

"So by your argument, the PS2 is a 2,560-bit machine as the data bus from the GPU to the ram is 2,560 bits across?"

"No, it's a 128-bit machine."

"So what you're saying here is you make things up as you go along to justify your position?"

"You just don't like the Dreamcast!"

"Actually, I like the machine and it's got some good games like StarLancer.  Shame the controller sucks, but we'll discuss that another time.  However, this doesn't change the machine being a 64-bit machine."

"It's 128-bit."

"So what colour is the sky in your world?"



sounds like this thread! :)
 
;) Touché.

I got more than a little long-winded in my frustration. I guess my basic point boils down to something along the lines of:

"X: It's 80-bit!"

"Me: It's 32-bit pretending to be 80-bit"

"X: It's 80-bit!"

"Me: Here's a bunch of long-winded posts on why 64-bit and 80-bit numbers are not really that to a 32-bit processor."

And, of course:

"X: G4 is 64-bit!"

"Me: It's 32-bit. Here's some posts about that."
 
Ye gods!

This is enough to give a casual Geek like me a nosebleed.........Ow.

Synopsis: the data path of any given machine is only as wide as its narrowest common point.

Individual components may have ridiculously wide bit paths.

Processor bit ratings should be measured at the chip/mobo interface, and as such are 99.999% of the time 32 bit.

Until a 64 bit mobo and 64 bit chip I/O are available, there are no 64 bit machines.

Even at 64 bit, the only benefit is larger file and memory size recognition, as 99.999% of software is only 32 bit.

Some very special firmware and kernel mojo will be necessary to take full advantage of a completely 64 bit machine.

A "true" 64 bit machine is overkill unless you want to make NSA nervous.

Does that about cover it for the non-math-geeks? Or am I missing the point.
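The "narrowest common point" rule from the synopsis above, as a one-liner (the component widths here are illustrative, not measured from any real machine):

```python
# Effective machine width = the minimum over everything the data crosses.
widths = {'cpu core': 32, 'altivec unit': 128, 'data bus': 64}
print(min(widths.values()))  # 32
```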
 
Originally posted by anshelm
Oops. You're correct, the G4 is 74xx. I was a little hurried when I posted.

However, my post is still correct about the G4 being 32-bit.

(http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=03M943030450467M98653

Click on any of the 74xx links and it will begin by stating that it is a 32-bit implementation of the PowerPC.)

Which part of that looks like 32-bit???? See the column that has Bus Interface (Bits) and then lists all of them as 64??? I am not seeing things, maybe you are.

The level 1 cache is listed as 32 Kbytes, not bits.

I can admit to being wrong about the 128-bit (the Altivec implementation is 128 and maybe not the entire chip). But right there in black and white is 64-bit.
 

A joke

0010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101001001010010111001001001011110010100100100101010101101010100101100100010111100101010010010100101110010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101001001010010111001010101001010010?

0010010010000100101010010101001001010010100101010100101010101010011111001101001000111101001001001011100110010100110100101111010101000101010010101010010!

If you were a 64bit G4, you'd get it......
 
" Which part of that looks like 32-bit???? See the column that has Bus Interface (Bits) and then lists all of them as 64??? I am not seeing things, maybe you are. "

None of those have to do with what bit-type the processor is. What determines the bit-type is the size of the chunks it moves around inside the processor. Ever wonder WHY the AltiVec engine can only move four 32-bit, eight 16-bit, or sixteen 8-bit values at a time? Because the G4 is 32-bit!

Besides, please read the article. It will clarify something for you. Motorola starts the article by stating that it is a 32-bit processor.

Data bus bit size does not say anything about what the processor does on the inside, only how it interfaces with the world. Read Ars Technica, geek.com, or at least something that explains processor architecture before posting things like this. Learn what a 32-bit processor is and what a 64-bit processor is. It would make this much easier. *shakes head*

Here, let's quote that line I mentioned:

"The MPC7400 Host Processor is a high-performance, low-power, 32-bit implementation of the PowerPC Reduced Instruction Set Computer (RISC) architecture combined with a full 128-bit implementation of Motorola's AltiVec[tm] technology instruction set..."

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7400&nodeId=03M943030450467M98653

"Motorola's MPC7451 host processor is a high-performance, low-power, 32-bit implementation of the PowerPC architecture with a full 128-bit implementation of Motorola's AltiVec(tm) technology."

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7451&nodeId=03M943030450467M98653

(Emphasis mine)

Even Motorola states that it is 32-bit. That means it operates on 32-bit chunks of data. The data bus bit size has nothing to do with what bit-type a processor is!
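The four/eight/sixteen split mentioned above falls straight out of dividing one 128-bit register into equal-width lanes. Here's a toy model of that lane arithmetic in plain Python (just to show the idea, not actual AltiVec code):

```python
REGISTER_BITS = 128

# 128 bits divides into exactly these lane counts:
for lane_bits in (32, 16, 8):
    print(f"{REGISTER_BITS // lane_bits} x {lane_bits}-bit")

def simd_add(a, b, lane_bits=32):
    """Element-wise add where each lane wraps at its own width,
    with no carry between lanes -- the defining SIMD behavior."""
    mask = (1 << lane_bits) - 1
    return [(x + y) & mask for x, y in zip(a, b)]

# Four 32-bit lanes; the last one overflows and wraps within its lane only.
print(simd_add([1, 2, 3, 0xFFFFFFFF], [1, 1, 1, 1]))  # [2, 3, 4, 0]
```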
 
Re: A joke

Originally posted by maiku
0010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101001001010010111001001001011110010100100100101010101101010100101100100010111100101010010010100101110010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101001001010010111001010101001010010?

0010010010000100101010010101001001010010100101010100101010101010011111001101001000111101001001001011100110010100110100101111010101000101010010101010010!

If you were a 64bit G4, you'd get it......


that's the weed, man!
 
Someone just took a few bytes out of my bits...

Can't we all just get along??? We all know that Macs are superior no matter what bit level they are. Where they truly rule is in the applications that have been written to take advantage of the G4/AltiVec engine. See the Photoshop comparison above. I could probably attempt the same thing between my G4 500 and the peecee that I built with a 1.4 GHz AMD T-bird processor. The G4 has 2x the memory, though (1.5 GB PC100 vs. 768 MB DDR PC2100).

The real test will be when the next generation comes out from Apple. I know we are all hoping for it by, or soon after, MWNY. I just hope there is an architecture change that allows the Mac processors to surpass the peecees for at least a year or two. Something like true 128-bit, or advanced 64-bit processors, would be sweet.
 
Where do MIPS processors fit into all this? SGIs have used 'em for years, so they must be worth discussing. I think they are 64-bit, as are SPARCs...

Also, isn't Unix itself only 32-bit?
 