FLOP rating


mischief
Jan 8, 2002, 01:32 PM
I've been using Mflops where I meant Gflops. Go check out Apple's G4 page. To clarify earlier tech errors from myself and others: G4 dual 800 = 11.8 Gflops. That's still 6-10 times more data crunching per second than ANY OTHER PROCESSOR (card) currently available outside of refrigerator-size models.

The G4 is 128 bits wide, not 64 bits. To the best of my knowledge, most other machines run 32 or 64 bits wide - yuck.

Total Impact is making expansion chassis with up to 10 4-processor cards running G4 500s. If each G4 is worth a theoretical 5.3 Gflops, then the array (40 processors x 5.3 Gflops) would be around 212 Gflops! The Linux core they use to run it all can handle more processors than that, too!

spikey
Jan 8, 2002, 01:45 PM
G4 128 bits wide?

I thought it was 32-bit, with AltiVec doing 4 x 32 bits at a time?

Am I wrong?

mischief
Jan 8, 2002, 01:56 PM
The Velocity Engine governs I/O for the chip, and it runs 128 bits wide. QED.

sturm375
Jan 8, 2002, 02:02 PM
First: You are right, the information pathway to the Velocity Engine is 128 bits wide; however, the processor only takes 32 bits. By the way, both Intel and AMD use similar technology under different names to do the same thing. The only processor in the mainstream, kind of, that is wider than 32-bit is the Itanium.

FLOP rating: I don't know where you get your numbers, but if this is true, I guess my NVidia video card whoops the G4 at a whopping 35 GFlops? The number of Flops a computer can do has a lot to do with how much it is processing. For instance, if you read the white papers on the processors you will find that the G4 and all the past Motorola processors only go as high as 64-bit floating point, whereas both AMD and Intel have been supporting 80-bit floating point numbers. This is why you will not find too many G4s in CAD offices, especially when they have to submit drawings to a DOT (Department of Transportation) or other government entity. At 64-bit floating point, many lines that line up on the screen don't line up on the printout. With 80-bit FP numbers they do.

What does this mean?
Apple is "fuzzy" which is great for artists.
Intel/AMD is percise (in comparison to Apple) which is great for science and industry.
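
If you want to see the digit counts of the two formats yourself, here's a minimal C++ sketch (one assumption: the compiler maps long double to the x87 80-bit format, as GCC on x86 does):

#include <iostream>
#include <limits>

int main() {
    // decimal digits each floating-point format holds reliably
    std::cout << "double (64-bit):      "
              << std::numeric_limits<double>::digits10 << " digits\n";      // 15
    std::cout << "long double (80-bit): "
              << std::numeric_limits<long double>::digits10 << " digits\n"; // 18 on x87
}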

mischief
Jan 8, 2002, 02:28 PM
So what you're saying is that the BUS is 32 bits wide universally. That doesn't surprise me. I think the cache handling and on-chip bit ratings would make more of a difference. The number of steps in the pipeline matters too. All of this will get more interesting when Intel goes RISC on more than just Itanium: Apple will be close to or past the 1 GHz mark and Intel will be just stepping in at around 800 MHz. It's interesting to note too that 486s are the top of what x86s can currently turn out hardened for space, whereas Apple has launched a satellite (see other posts) with a G4.

I use G4s for CAD all day and I have noticed that stuff changes between the active model and the "flattened" drawings. I'm a bit surprised that 16 bits would make that much of a difference. Although I DO NOT use AutoCAD. It sux for architecture.

Yes, your nVidia probably has a SICK flop rating, but it only does one thing. The G4 does those frighteningly high numbers for many things. It does raise an interesting point though: what is the AVERAGE flop score between chips over several types of calculations?

Here's where I'm ignorant: why don't CAD packages dump more processor duty to graphics cards? You'd think it'd make sense.

sturm375
Jan 8, 2002, 02:46 PM
Try this:
on a calculator, if you can find one this will work on.

enter: 2^64 (in English, that is 2 to the 64th power)
now compare that number to: 2^80

this number is equal to (2^64)*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2 - in other words, (2^64) * 2^16
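
No calculator handy? A couple of lines of C++ will print both (just a sketch):

#include <cmath>
#include <cstdio>

int main() {
    // the ratio between the two is 2^16 = 65536
    std::printf("2^64 = %.7Le\n", std::pow(2.0L, 64));  // ~1.8446744e+19
    std::printf("2^80 = %.7Le\n", std::pow(2.0L, 80));  // ~1.2089258e+24
}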

Since most people would judge the "power" of a processor by how many operations can be done in a certain amount of time, the G4 would smash the other processors in this. However, if you weight the work in terms of precision, hands down Intel & AMD win.

By the way I own:
1: AMD K6-2
1: AMD Athlon
2: AMD Athlon MP 1600+ (dual processor)
1: G4 Titanium (Writing on it now)

mischief
Jan 8, 2002, 03:03 PM
I've never doubted the non-Mac's superiority for "basic" math. The issue is whether, over many types of operations, there's ANY difference in performance between chips when a mean average is drawn.

anshelm
Jan 21, 2002, 12:13 PM
Your computer-number bit math is WRONG (actual bit math is correct, but NOT IN THIS CASE!!). As a C++ programmer I can tell you this: 64-bit numbers AND 80-bit numbers are no different in real precision!

That may seem strange, but they don't use the 64 bits and 80 bits for precision. They only have 15 digits of actual numbers in them. BOTH OF THEM. Why?

Simple. They are FLOATING POINT NUMBERS. They are not used to calculate precision, they are used to calculate huge numbers.

The numbers are stored to define where the decimal place is. An example: 1500000000 is stored as 1.5e9 to the computer. The only difference between the 64-bit and the 80-bit numbers is how many decimal places they can tell you about (the 64-bit number maxes out at 308 decimal places, the 80-bit around 4000-ish).
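
You can actually look at that sign/exponent/mantissa layout yourself - a minimal C++ sketch, assuming IEEE 754 doubles (which is what both camps' chips store in memory):

#include <cstdio>
#include <cstdint>
#include <cstring>

int main() {
    double d = 1500000000.0;  // 1.5e9
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // reinterpret the 64 bits safely
    // IEEE 754 double: 1 sign bit, 11 exponent bits, 52 mantissa bits
    std::printf("sign=%llu exponent=%llu mantissa=%llu\n",
                (unsigned long long)(bits >> 63),
                (unsigned long long)((bits >> 52) & 0x7FF),
                (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
}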

To summarize: There is NO DIFFERENCE in precision!

mischief
Jan 21, 2002, 12:17 PM
:p

anshelm
Jan 21, 2002, 12:19 PM
Also, Motorola's processors are more precise than Intel's and AMD's. Simply put, the IA-32 architecture is math-impaired.

Why do I say this? Simple. To an Intel/AMD chip, the range of numbers from 0.9999991 to 1.0000001 is the same number! (And working with CAD, I assume you want numbers more precise than that.) 32-bit floating point and 32-bit integer are the ONLY precise numbers on the Intel side. Anything beyond that and you have to deal with the fact that the Intel/AMD design doesn't understand numbers.

"What does this mean?
Apple is "fuzzy" which is great for artists.
Intel/AMD is percise (in comparison to Apple) which is great for science and industry."

Intel's processors are "fuzzy", and this has been known EVER SINCE THE PENTIUM. You do remember the stink about how it can't add, right? Well, they never changed that. It just considers a range of numbers to be the same number. It is FAR "fuzzier" than Motorola's processors.

That's just my 2 cents.

(Edit: For those Intel/AMD users who don't believe me, browse to my up-and-coming website (right now it's just text, but it will be more) at http://jeni-lee.com/anshelm/downloads/ to get simple source code and an exe to demonstrate that the IA-32 architecture is math-impaired.)

AlphaTech
Jan 21, 2002, 12:54 PM
According to Apple's spec database, all G4 towers have a 128-bit data path for the processor.

http://www.info.apple.com/info.apple.com/applespec/applespec.taf?ql=ri&name=g4

Select any of the G4 towers and they all show 128-bit data path, the only difference being the MHz (newer ones are 133, older are 100).

peace

Pants
Jan 21, 2002, 01:37 PM
Originally posted by mischief
So what you're saying is that the BUS is 32 bits wide universally. That doesn't surprise me. I think the cache handling and on-chip bit ratings would make more of a difference. The number of steps in the pipeline matters too. All of this will get more interesting when Intel goes RISC on more than just Itanium: Apple will be close to or past the 1 GHz mark and Intel will be just stepping in at around 800 MHz. It's interesting to note too that 486s are the top of what x86s can currently turn out hardened for space, whereas Apple has launched a satellite (see other posts) with a G4.


sounds like someone's been to Ars Technica... :/

"intel goes more RISC" sheesh!

the pros and cons of intel/amd/motorola are all very fine, but come on, define RISC and CISC for me in relation to a modern processor. A P4 is no more 'RISC' than a G4 is 'CISC'. They are terms left in the past, and do not strictly apply to modern processors.
I'm also really not sure what the point is with the 'space' comment - 'hardened for space'? hmmm....... fighter aircraft require greater 'hardening' of chips due to the expected 'worst case' scenario - I suspect that power consumption is more of a deal in satellites, that and the ever-present fear of having your transcontinental phone chat 'BSOD'ed' by an MS-based OS...

"look! g4 128 bit! p4 only 32 it!! ") the g4 a 128 bit machine? as if! :)

Pants
Jan 21, 2002, 01:47 PM
Originally posted by anshelm



The numbers are stored to define where the decimal place is. An example: 1500000000 is stored as 1.5e9 to the computer. The only difference between the 64-bit and the 80-bit numbers is how many decimal places they can tell you about (the 64-bit number maxes out at 308 decimal places, the 80-bit around 4000-ish).

To summarize: There is NO DIFFERENCE in precision!


uhh....so if one can give a number to a higher number of decimal places then that is not an increase in precision??!

and there was me wondering why we bother with double precision .....

mischief
Jan 21, 2002, 01:59 PM
To shield microelectronics from EM noise, as incurred in a "hostile" environment like orbit. Orbital EM noise compared to the average desktop is like comparing a string quartet (desktop) to a Space Shuttle launch (orbit).

AND: G4 = RISC, x86 = CISC.

Don't even try to say that isn't relevant anymore.

Pants
Jan 21, 2002, 03:10 PM
Originally posted by mischief
To shield microelectronics from EM noise, as incurred in a "hostile" environment like orbit. Orbital EM noise compared to the average desktop is like comparing a string quartet (desktop) to a Space Shuttle launch (orbit).

AND: G4 = RISC, x86 = CISC.

Don't even try to say that isn't relevant anymore.

It isn't. Simple.

blah blah blah... "To be specific, chips that implement the x86 CISC ISA have come to look a lot like chips that implement various RISC ISAs; the instruction set architecture is the same, but under the hood it's a whole different ball game. But this hasn't been a one-way trend. Rather, the same goes for today's so-called RISC CPUs. ..... Thus the "RISC vs. CISC" debate really exists only in the minds of marketing departments and platform advocates whose purpose in creating and perpetuating this fictitious conflict is to promote their pet product by means of name-calling and sloganeering."

It really gets up my nose when people bash things just because of a misunderstood buzzword. That's straight from Ars Technica - go look for it.

As for satellites, I am well aware that the environment is 'hostile'. However, the reasoning behind using a PPC over an Intel chip is more likely to do with power consumption than shielding - it's the same for the embedded market.

Catfish_Man
Jan 21, 2002, 04:27 PM
...the G4 is not 128-bit. It's 32-bit with 128-bit AltiVec (Motorola's implementation of SIMD). It uses AltiVec to apply the same calculation to 4 32-bit variables. The G5 (probably) is 64-bit, with 128-bit AltiVec. The P3, P4, and Athlon are all 32-bit, with 64-bit SSE/3DNow!. Performance-wise, the P4 and Athlon XP beat the crud out of the G4 at non-AltiVec stuff, and lose equally badly at AltiVec. To give you an example of how much difference it can potentially make, my friend's G4 450MHz got 300 MFlops on the Altivec Fractal Carbon demo with AltiVec turned off. It got 1500 MFlops with it turned on. That's 5x the performance.
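
For anyone curious what "the same calculation on 4 variables at once" looks like in code, here's a minimal AltiVec sketch (an assumption: GCC's -maltivec vector syntax; it only compiles for a PowerPC with AltiVec):

#include <altivec.h>
#include <cstdio>

int main() {
    // one vector instruction adds four 32-bit floats at once
    vector float a = (vector float){1.0f, 2.0f, 3.0f, 4.0f};
    vector float b = (vector float){10.0f, 20.0f, 30.0f, 40.0f};
    vector float c = vec_add(a, b);             // {11, 22, 33, 44}
    float out[4] __attribute__((aligned(16)));  // AltiVec stores want 16-byte alignment
    vec_st(c, 0, out);                          // store the 128-bit register to memory
    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
}

That single vec_add is where the big speedups in AltiVec-optimized code come from.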

sturm375
Jan 21, 2002, 04:54 PM
First, the G4 is 32-bit. The path from the memory to the engine is 128 bits. The only thing the engine does is funnel the data into 32-bit wide chunks. Inside the processor, where work is actually done, it is 32 bits wide. This is the widest part of the system. The bus to the add-on cards gets to 64 bits wide on some motherboards. If you are lucky and have onboard SCSI, that can get pretty wide too. Some high-end video cards contain paths up to 256 bits wide, but only on the card itself.

Yes, the Apple website does say that it is 128 bits, but keep in mind, they are trying to sell you a computer. And technically it does go into the CPU chip at 128. But that is not the processor. Where the processing is done, it is 32 bits. Check out the Motorola information; they are not trying to sell you a computer.

Bit Math:

Yes, I bet you are right, C++ probably only recognizes math out to 15 digits. But this is the compiler, not the processor. The processor does not see 1.5e9; the processor sees something like 101110101001011101010100101010101010 only. (The previous was an exaggeration - I have no idea what those 1s and 0s say.) The point being that when you pass a number to the processor, the type assignment (real/integer) determines how many 1s and 0s go through. If you assign a single integer and do math on it, both the G4 and the Px/AMD will see 1 = 00000000000000000000000000000001 (I think). That is a 32-bit number. Now on the other extreme, if you pass the widest number, a real, to a processor, double the number of characters for the G4. For the Px and AMD processors, you pass 80 1s and 0s. It doesn't matter to the processor whether you passed 3 or 3x10^456; a real number has the same number of 1s and 0s. Now math gets even more complicated, as the processor cannot make judgements. It can't shorten the number, and it doesn't see the whole number all at once. It has to spin its wheels looking at each bit. That means that a real on a G4 will take 2 cycles, and on a Px/AMD will take 3 cycles.

Bottom line: the G4 is faster because it takes shortcuts.
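
For what it's worth, the storage widths being argued about are easy to print (a C++ sketch; the comments assume a typical x86 compiler, where long double is stored padded even though the x87 computes with 80 bits):

#include <cstdio>

int main() {
    std::printf("int:         %d bits\n", (int)(sizeof(int) * 8));          // 32
    std::printf("double:      %d bits\n", (int)(sizeof(double) * 8));       // 64
    std::printf("long double: %d bits\n", (int)(sizeof(long double) * 8));  // 96 or 128 as stored
}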

mischief
Jan 21, 2002, 05:00 PM
Originally posted by sturm375


Bit Math:

Yes, I bet you are right, C++ probably only recognizes math out to 15 digits. But this is the compiler, not the processor. The processor does not see 1.5e9; the processor sees something like 101110101001011101010100101010101010 only. (The previous was an exaggeration - I have no idea what those 1s and 0s say.) The point being that when you pass a number to the processor, the type assignment (real/integer) determines how many 1s and 0s go through. If you assign a single integer and do math on it, both the G4 and the Px/AMD will see 1 = 00000000000000000000000000000001 (I think). That is a 32-bit number. Now on the other extreme, if you pass the widest number, a real, to a processor, double the number of characters for the G4. For the Px and AMD processors, you pass 80 1s and 0s. It doesn't matter to the processor whether you passed 3 or 3x10^456; a real number has the same number of 1s and 0s. Now math gets even more complicated, as the processor cannot make judgements. It can't shorten the number, and it doesn't see the whole number all at once. It has to spin its wheels looking at each bit. That means that a real on a G4 will take 2 cycles, and on a Px/AMD will take 3 cycles.

Bottom line: the G4 is faster because it takes shortcuts.

Swing that Hammer!!!:D :D :D

Matthé
Jan 21, 2002, 05:29 PM
finally, we're actually talking facts here,
not the usual 'G4 kicks Pentium's ass' crap

Ensign Paris
Jan 21, 2002, 05:38 PM
I will try to get some figures here soon, but these are my observations of an AMD 1900+.

My friend is the same on the Windows machine as I am on my Mac.

The machine spec (Windows) is:

Processor: AMD 1900+
RAM: 1GB DDR-RAM
GC: GeForce3 Ti500 64MB summut

It seems to be not as fast in things like Photoshop and stuff like that. And I was hoping not to be impressed with MS Flight Simulator 2002, and you know what, I wasn't! The GRAPHICS were not as good as I had dreaded, and for once I KNEW that the Mac was VERY superior!

C ya for the night, I am off to bed (Exams in the morn!)

AlphaTech
Jan 21, 2002, 08:18 PM
For everyone claiming that the G4 is 32-bit... how do you explain Gauge Pro showing the PowerBook G4 (Titanium) as having 64 bits??? Just as Apple claims in the specifications database - not where they sell systems, and not a place many people look. The main reason I have been looking there was to check specifications for older systems before attempting upgrades.

I even attached a screen shot taken off of my TiBook just a few moments ago to prove that I am not blowing smoke (or inhaling).

I will be checking the G4 733 (pre-QuickSilver) at work come tomorrow and can guarantee that it will be at least 64-bit. I am not 100% on the 128, since I do not remember when I last ran the utility on it. Either way, the G4 has been 64-bit for a very long time (since at least the AGP video model).

anshelm
Jan 21, 2002, 08:50 PM
"Yes I, bet you are right, C++ probably only recoginizes math out to 15 digits. This is a compiler, not the processor."

Um, dude, do you even know what you are talking about? No, it's not the compiler. Every C++ compiler acts identical. Why? Because C++ turns it into Assembly language. And guess what....

I know some Assembly as well. Not enough to turn out a program, but enough to know that I am right. (A good online book on 32-BIT Assembly for the IA-32 can be found at "http://webster.cs.ucr.edu/Page_AoA/0_AoAHLA.html") The chip DOES understand NUMeNUM (scientific notation). You should give them credit, they really are trying to make it smarter. It's just so stupid that anything beyond 32-bit is inaccurate.

anshelm
Jan 21, 2002, 08:58 PM
You see, the reason why it is called 32-bit architecture is because that's all it really understands. If the chip could handle 64-bit numbers that way (the way you THINK they do), IT WOULD BE A 64-BIT CHIP. But it's NOT. Why do you think the memory addressing of the IA-32 is 32-bit? If it can understand these wonderful 64-bit numbers, why not just give you that memory range? BECAUSE IT ONLY REALLY UNDERSTANDS 32-BIT NUMBERS. Everything else - its 64-bit and 80-bit numbers - is NUMeNUM!!

Intel's Itanium works the way you think all processors do because it's 64-BIT.
Learn about your subject before making unfounded assumptions.

Plus, since you say "Integer/Real", I would venture a guess that you know Pascal, right? Either that, or you claim to understand Assembly and don't have a clue. Yes, they are stored as 80 bits, but in the manner I am saying.

Unregistered
Jan 21, 2002, 09:00 PM
Yes, you are right. Your little picture is showing 64 bits. Unfortunately, all that is showing you is the width of the memory bus (how large the path between the CPU and the RAM is). It has nothing to do with how many bits the processor can actually work with at a time.

Quark
Jan 21, 2002, 09:21 PM
Of course the G4 is not 64-bit!

If the processor were 64-bit, Steve would boast about that in a BIG way!

Anytime he did any comparisons between Pentium and PPC, he would have clearly made the distinction that the Pentium is 32-bit and the G4 (if it were) 64-bit.

This is very significant in processor ability/performance.

G4 is not 64-bit.

AlphaTech
Jan 21, 2002, 09:48 PM
Direct from Motorola's web site... check out http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=02M0ylfVS0lM0ypLRtk6

All of their MPC74xx processors (aka G4) have a 64 bit bus interface.

Continue to insist that they are 32 bit if you want... Even Motorola states that the Altivec technology makes the processor 128 bit.

I intend to call Apple directly tomorrow and put the question directly to them. I am not calling sales, but the real techs. We have real resources available to us at work, not the normal public ones.

Unregistered
Jan 21, 2002, 10:12 PM
You people don't know anything. The G4 is 32 bits entirely, just like a P4 or K7. It has a 128-bit AltiVec unit that ONLY WORKS ON OPTIMIZED CODE, which most isn't. Also, a 64-bit processor is exactly no times faster than a 32-bit processor. What, do you think the processor cuts your word size in half and distributes it over the entire core? NO. How can you be so misled? I can't believe it. The Macintosh is an amazing machine, people, but you have no IDEA about the technology you are using. Read ARS TECHNICA before you flame yourself!!!!!!!!!!!!!!!!!!

Unregistered
Jan 21, 2002, 10:18 PM
Okay, so that last post was kinda harsh. Sorry.

anshelm
Jan 21, 2002, 10:19 PM
For the record, when I refer to IA-32, that has nothing to do with Motorola's processors. Please don't confuse my posts with that, I'm merely trying to enlighten PC folks as to the truth about their processors.

And for the last proof. The processor does understand math of the sort I am saying. In fact, you should, too. It's called scientific notation. Say you had the numbers 1.57 e 9 (that's times 10 to the power of 9) and 3.62 e 7.

Try multiplying them. As any person who knows math will say, it's simple. Sure, to us it may seem difficult (1570000000 x 36200000). But it's not. Here's the answer: 5.6834 e 16.

How do you come up with that? Multiply 1.57 and 3.62. Then add their powers (9 + 7). Ta-da! That's how floating point math works inside the IA-32 processors. It doesn't look at the huge 64-bit number or 80-bit number BECAUSE IT'S A 32-BIT PROCESSOR. It takes the numbers, works on them (15 digits of real numbers), and works with the powers.

There is a lot more to scientific notation (like dividing and such, and making sure that the numbers are within the range of 0 to 1), but this isn't math class.
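
Here's that multiply-the-mantissas, add-the-exponents trick in a few lines of C++ (a sketch using frexp/ldexp from <cmath>, which split a double into mantissa and exponent - the FPU does the same thing in base 2):

#include <cmath>
#include <cstdio>

int main() {
    double a = 1.57e9, b = 3.62e7;
    int ea, eb;
    double ma = std::frexp(a, &ea);  // a == ma * 2^ea, with 0.5 <= ma < 1
    double mb = std::frexp(b, &eb);
    double product = std::ldexp(ma * mb, ea + eb);  // multiply mantissas, add exponents
    std::printf("%g vs %g\n", product, a * b);      // both print 5.6834e+16
}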

So just because you don't know scientific notation doesn't mean the processor doesn't.

(You see, a 64-bit number can support numbers with up to 307 zeroes, but under the common misconception interpretation it can only show NINETEEN zeroes. That's why the processor uses scientific notation. In fact, the 80-bit can support up to around 4000 zeroes, but according to the misconception method, only TWENTY-FOUR. See, 2^64 only has 19 zeroes, and 2^80 only has 24 zeroes.)

(If you are referring to the "wide" integer, that is what sees 64-bit as a straight number, and 80-bit as a straight number, but that is 17 digits... WITHOUT A DECIMAL PLACE, because it is an integer.)

AlphaTech
Jan 21, 2002, 10:55 PM
I gotcha flames right here...

anshelm
Jan 21, 2002, 11:06 PM
Sorry to disappoint, but the G4 IS 32-bit.

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7451&nodeId=02M0ylfVS0lM943030450467M98653

You see, the page you posted a link to is the page for the 8xxx series. That has nothing to do with the 754x, which is the G4.

Also, if you notice, on Motorola's site, the link to get to info on the PowerPC is listed as: "32-bit Embedded Processors: PowerPC ISA"....

http://e-www.motorola.com/

AlphaTech
Jan 21, 2002, 11:26 PM
Actually... the G4 is a 74xx processor... the 7400 being used in the G4 AGP Graphics and the 7410 being used in the TiBook (at least the Rev A).

This is 100% accurate, confirmed on both computers less than 2 minutes ago, for all you naysayers. Unless you have one of those in front of you and can prove 100% that they are not those... you know what you can do.

anshelm
Jan 21, 2002, 11:29 PM
Oops. You're correct, the G4 is 74xx. I was a little hurried when I posted.

However, my post is still correct about the G4 being 32-bit.

(http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=03M943030450467M98653

Click on any of the 74xx links and it will begin by stating that it is a 32-bit implementation of the PowerPC)

anshelm
Jan 21, 2002, 11:33 PM
Also, support for a 64-bit data bus (which the G4 has) does not make it a 64-bit processor. It is still a 32-bit processor internally, with the AltiVec engine being 128-bit.

Unregistered
Jan 22, 2002, 12:04 AM
"uhh....so if one can give a number to a higher number of decimal places then that is not an increase in precision??! "

No, it's not an increase in PRECISION. It's an increase in ACCURACY. An increase in PRECISION is where you have more DIGITS (5, 3, etc.). To have an increase in decimal places (where after 15 decimal places they are all zero) does not increase PRECISION.

The 64-bit is more precise (btw, "double precision" means 64-BIT, not 80-bit) than the 32-bit, but the 80-bit is not more precise than the 64-bit. It's just more accurate.

anshelm
Jan 22, 2002, 12:57 AM
Oops, that last post was by me (but on a different machine where I wasn't logged in). Sorry 'bout that!

If you have the numbers:

3.456765345654123 e 306
3.456765345654123 e 2978

they are the same for precision. The larger one (80-bit versus 64-bit) is more ACCURATE, but it is not more PRECISE. Precision deals with how many significant digits there are (the rules for that are a bit long-winded, but basically any zeroes after the last non-zero digits after a decimal place are NOT considered to make a number more precise). Accuracy deals with whether the number is close or not. Precision deals with whether the number is EXACT or not.

If you have the numbers:

3.456765345654123 e 306
3.4567653456541234 e 2978

then the second one would be more precise. Do you see what I mean?

lera
Jan 22, 2002, 01:33 AM
Originally posted by Unregistered
"uhh....so if one can give a number to a higher number of decimal places then that is not an increase in precision??! "

No, it's not an increase in PRECISION. It's an increase in ACCURACY. An increase in PRECISION is where you have more DIGITS (5, 3, etc.). To have an increase in decimal places (where after 15 decimal places they are all zero) does not increase PRECISION.

The 64-bit is more precise (btw, "double precision" means 64-BIT, not 80-bit) than the 32-bit, but the 80-bit is not more precise than the 64-bit. It's just more accurate.

Do you have it backwards? Or are you knocking the G4? I'm confused.

more Accuracy means more correct
more Precision means more specific (not necessarily correct)

Example: I am 18 years old and my birthday is during the summer, so the statement
"I am 18.5 years old"
is very accurate but not very precise, whereas the statement
"I am 18.7746532 years old"
is very precise yet extremely inaccurate. The statement
"I am 18.55583 years old"
is both accurate and precise.


So you're saying that the G4 is not very correct but is very specific? That's a bad thing for the G4.

You also may have confused readers: you seemed to say that a larger number is more accurate (3.blablae2xxx compared to 3.blablae3xx), whereas if the number is supposed to be a smaller number like 3.4e456 then the first one would be more accurate and just as precise. In my example with my age, the larger number 1.87746532e1 is inaccurate compared to the smaller number 1.85e1.

Also, doesn't the P4 just convert the numbers to 80-bit and then back into 64-bit after it's done messing with them?

Anyway, I don't imagine either of them is too imprecise or too inaccurate.

lera
Jan 22, 2002, 01:34 AM
for the record:
my understanding is that both the P's and the G's are all 32-bit processors.
They each have separate sizes of ins and outs, and the G4's AltiVec is 128-bit.
I figure that a 64-bit processor does have many advantages over 32-bit architecture, but I'm not too concerned, since right now all the chips I'm worried about are 32. I'll go do some research on the differences between 32- and 64-bit processors.
see you all later
:)

anshelm
Jan 22, 2002, 01:53 AM
Actually, let's clear this all up right now:

I did not mean larger in all cases; I meant that the larger format, as in 80-bit numbers (not larger as in a bigger number), is more accurate in measuring huge numbers. Sorry for any confusion.

Now on the question of accuracy and precision: it depends on what you used to measure those numbers. The precise one, if done with improper measurements, is not correct. BUT if the precise one is done with perfect measurements, it IS correct. The idea that accuracy is better is only due to the fact that we can only measure to a certain point with perfect precision, and after that it is just a good guess. In the world of math, more precision is better. In the real world, more of a balance of precision and accuracy is better.


This is not all of the rules of precision, only a portion of them, summarized:

"Any zero preceeding non-zero digits before a decimal point does not count towards precision. (ie 003 == 3)

Any zero following all non-zero digits after the decimal point does not count towards precision. (ie .300 == .3)

Any zero after the non-zero digits in a number, without a decimal, is not counted towards precision. (ie 300 has the same precision as 3).

Any zero after the decimal place but before non-zero digits is not counted towards precision. (ie .003)

Any zero after the non-zero digits in a number WITH a decimal IS counted towards precision. (ie 300. is more precise then 300).

Any zero between non-zero digits IS counted. (ie 303 or .303)"


Sorry, I didn't mean to confuse you. I'm knocking the idea that the 80-bit number is more "precise" than the 64-bit number. (An idea from earlier in the thread.) Both have the same number of digits (15); the only difference is the number of decimal places the 80-bit number can say there are (approx 4000, which is 3985 zeroes, which, as you can see from above, does not add to the "precision" of the number).

Pants
Jan 22, 2002, 04:35 AM
This thread is TEH FUNNY!

"g4s are 64 bit!! yah!! 64 bit math is no better than 32 bit math!! "
"I knoW AsSeMblAr! and YUO only No PaScal! " "I rite for ArseTechnica And No Stuff! "


hey, aren't G4s really 128-bit? ;)

I give up. No really, I do. :)

Pants
Jan 22, 2002, 05:25 AM
sorry - that was a tad harsh, I know.

anyway - some of us actually have a use for all those numbers after a decimal place. "64-bit computation: Having larger registers for holding integer and floating point data allows for an increased dynamic range. The dynamic range of a number format is just the range of values, from the lowest to the highest, that it can hold. Not too many mainstream programs use integers or floating-point values that are outside of the dynamic range available in a 32-bit system (we're talking really large numbers here), but it does happen." An example being the maths my bank does on my overdraft.....

But the real deal with a 64-bit computer is more RAM than you can shake a stick at, and much, much bigger file sizes.....
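
To put numbers on the RAM point, a quick C++ sketch of the two address-space ceilings:

#include <cstdio>

int main() {
    // 32-bit addressing tops out at 4 GB; 64-bit at roughly 16 billion GB
    unsigned long long max32 = 1ULL << 32;
    std::printf("2^32 = %llu bytes (4 GB)\n", max32);
    std::printf("2^64 = %.4e bytes (~16 exabytes)\n", (double)max32 * (double)max32);
}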

anyway - here's a light touch on the 128 v 64 v 32 bit posturing of machine owners.

http://www.actsofgord.com/Chronicles/chapter18.html
"The Dreamcast is 128-bit."

"No it's not.* It's 64-bit."

"It's 128-bit!"

"Really?* And why is that?"

"It just is."

"I see.* Ok, it's got a 64-bit CPU.* 64-bit GPU.* 64-bit databus.* In fact the entire machine is 64-bit or less except the geometry sub-processor on the GPU.* Even then, it's only 128 bit for internal math.* It still talks to the rest of the machine 64-bits at a time."

"So it's 128-bit!"

"Not by any measuring stick that the world is using.* Unless you feel the Genesis also had 'blast processing.'* Also, the Nintendo 64 was the exact same way.* 64-bit system with a 128-bit geometry sub-processor."

"The Nintendo 64 is 64-bit, the Dreamcast is 128-bit!"

"Just because you say it is doesn't make it so."

"Sega wouldn't lie."

"That's right.* They would never do that.* Do you live in a cave?* Sega is Japanese for compulsive liar."

"Then why does the Dreamcast look so much better than the 64?"

"Because the Nintendo 64 sucks ass.* And since the DC came out 3 years after it, it had damn well better be a lot better."

"Since part of the machine is 128-bit, it's 128-bit."

"So by your argument, the PS2 is a 2,560-bit machine as the data bus from the GPU to the ram is 2,560 bits across?"

"No, it's a 128-bit machine."

"So what you're saying here is you make things up as you go along to justify your position?"

"You just don't like the Dreamcast!"

"Actually, I like the machine and it's got some good games like StarLancer.* Shame the controller sucks, but we'll discuss that another time.* However, this doesn't change the machine being a 64-bit machine."

"It's 128-bit."

"So what colour is the sky in your world?"



sounds like this thread! :)

anshelm
Jan 22, 2002, 05:44 AM
;) Touché.

I got more than a little long-winded in my frustration. I guess my basic point does boil down to something along the lines of:

"X: It's 80-bit!"

"Me: It's 32-bit pretending to be 80-bit"

"X: It's 80-bit!"

"Me: Here's a bunch of long-winded posts on why 64-bit and 80-bit numbers are not really that to a 32-bit processor."

And, of course:

"X: G4 is 64-bit!"

"Me: It's 32-bit. Here's some posts about that."

mischief
Jan 22, 2002, 05:14 PM
This is enough to give a casual geek like me a nosebleed......... Ow.

Synopsis: the data path of any given machine is only as wide as its narrowest common point.

Individual components may have ridiculously wide bit paths.

Processor bit ratings should be measured at the chip/mobo interface, and as such are 99.999% of the time 32-bit.

Until a 64-bit mobo and 64-bit chip I/O are available, there are no 64-bit machines.

Even at 64-bit, the only benefit is larger file and memory size recognition, as 99.999% of software is only 32-bit.

Some very special firmware and kernel mojo will be necessary to take full advantage of a completely 64-bit machine.

A "true" 64-bit machine is overkill unless you want to make the NSA nervous.

Does that about cover it for the non-math geeks? Or am I missing the point?

AlphaTech
Jan 22, 2002, 06:22 PM
Originally posted by anshelm
Oops. You're correct, the G4 is 74xx. I was a little hurried when I posted.

However, my post is still correct about the G4 being 32-bit.

(http://e-www.motorola.com/webapp/sps/site/taxonomy.jsp?nodeId=03M943030450467M98653

Click on any of the 74xx links and it will begin by stating that it is a 32-bit implementation of the PowerPC)

Which part of that looks like 32-bit???? See the column that has Bus Interface (Bits) and then lists all of them as 64??? I am not seeing things; maybe you are.

The level 1 cache is listed as 32 Kbytes, not bits.

I can admit to being wrong about the 128-bit (the AltiVec implementation is 128, and maybe not the entire chip). But right there in black and white is 64-bit.

maiku
Jan 22, 2002, 06:58 PM
0010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001 0100100100101010101101010100101100100010111100101010010010100101110010010010111100101001001001010101 0110101010010110010001011110010101001001010010111001001001011110010100100100101010101101010100101100 1000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101 001001010010111001010101001010010?

0010010010000100101010010101001001010010100101010100101010101010011111001101001000111101001001001011 100110010100110100101111010101000101010010101010010!

If you were a 64bit G4, you'd get it......

anshelm
Jan 22, 2002, 09:59 PM
" Which part of that looks like 32-bit???? See the column that has Bus Interface (Bits) and then lists all of them as 64??? I am not seeing things, maybe you are. "

None of those has to do with what bit-type the processor is. What determines the bit-type is the size of the bit chunks it moves around inside the processor. Ever wonder WHY the AltiVec engine can only move four 32-bit, eight 16-bit, or sixteen 8-bit values? Because the G4 is 32-bit!

Besides, please read the article. It will clarify something for you. Motorola starts the article by stating that it is a 32-bit processor.

Data bus bit size does not say anything about what the processor does on the inside, only how it interfaces with the world. Read Ars Technica, geek.com, or at least something that explains processor architecture before posting things like this. Learn what a 32-bit processor is and what a 64-bit processor is. It would make this much easier. *shakes head*

Here, let's quote that line I mentioned:

"The MPC7400 Host Processor is a high-performance, low-power, 32-bit implementation of the PowerPC Reduced Instruction Set Computer (RISC) architecture combined with a full 128-bit implementation of Motorola's AltiVec[tm] technology instruction set..."

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7400&nodeId=03M943030450467M98653

"Motorola's MPC7451 host processor is a high-performance, low-power, 32-bit implementation of the PowerPC architecture with a full 128-bit implementation of Motorola's AltiVec(tm) technology."

http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MPC7451&nodeId=03M943030450467M98653

(Emphasis mine)

Even Motorola states that it is 32-bit. That means it operates on 32-bit chunks of data. The data bus bit size has nothing to do with what bit-type a processor is!

krossfyter
Feb 21, 2002, 07:29 PM
Originally posted by maiku
0010010010111100101001001001010101011010101001011001000101111001010100100101001011100100100101111001 0100100100101010101101010100101100100010111100101010010010100101110010010010111100101001001001010101 0110101010010110010001011110010101001001010010111001001001011110010100100100101010101101010100101100 1000101111001010100100101001011100100100101111001010010010010101010110101010010110010001011110010101 001001010010111001010101001010010?

0010010010000100101010010101001001010010100101010100101010101010011111001101001000111101001001001011 100110010100110100101111010101000101010010101010010!

If you were a 64bit G4, you'd get it......


that's the weed man!

AlphaTech
Feb 21, 2002, 08:18 PM
Someone just took a few bytes out of my bits...

Can't we all just get along??? We all know that Macs are superior no matter what bit level they are. Where they truly rule is in the applications that have been written to take advantage of the G4/AltiVec engine. See the Photoshop comparison above. I could probably attempt the same thing between my G4 500 and the peecee that I built with a 1.4GHz AMD T-bird processor. The G4 has 2x the memory of the peecee though (1.5GB PC100 vs. 768MB DDR PC2100).

The real test will be when the next generation comes out from Apple. I know we are all hoping for it by, or soon after, MWNY. I just hope that there is an architecture change to allow the Mac processors to surpass the peecees for at least a year or two. Something like true 128-bit, or advanced 64-bit, processors would be sweet.

krossfyter
Feb 21, 2002, 09:01 PM
i hear ya man!! you rule alpha tech... you rule man!!!

dig it!

graydecember
Feb 21, 2002, 09:16 PM
Where do MIPS processors fit into all this? SGIs have used 'em for years, so they must be worth discussing. I think they are 64-bit, as are SPARCs...

Also, isn't Unix itself only 32-bit?