FLOP rating

Discussion in 'General Mac Discussion' started by mischief, Jan 8, 2002.

  1. macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #1
    I've been using Mflop instead of Gflop. Go check out Apple's G4 page. To clarify earlier tech errors from myself and others: the dual-800 G4 = 11.8 Gflops. That's still 6-10 times more data crunching per second than ANY OTHER PROCESSOR (card) currently available outside of refrigerator-size models.

    The G4 is 128 bits wide, not 64. To the best of my knowledge, most other machines run 32 or 64 bits wide. Yuck.

    Total Impact is making expansion chassis with up to 10 four-processor cards running G4 500s. If each G4 is worth a theoretical 5.3 Gflops, then the array would be around 212 Gflops! The Linux core they use to run it all can handle even more processors than that, too!
     
  2. macrumors 6502a

    Joined:
    Apr 26, 2001
    #2
    G4 128bit wide?

    I thought it was 32-bit, with AltiVec handling 4 32-bit values at a time?

    Am I wrong?
     
  3. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #3
    Nope

    The Velocity Engine governs I/O for the chip, and it runs 128 bits wide. QED.
     
  4. macrumors 6502

    Joined:
    Jan 8, 2002
    Location:
    Bakersfield, CA
    #4
    FLOP & 128-bit

    First: You are right, the information pathway to the Velocity Engine is 128 bits wide; however, the processor only takes 32 bits. By the way, both Intel and AMD use similar technology under different names to do the same thing. The only processor in the mainstream, kind of, that is wider than 32-bit is the Itanium.

    FLOP rating: I don't know where you get your numbers, but if this is true, I guess my NVidia video card whoops the G4 at a whopping 35 GFlops? The number of FLOPs a computer can do has a lot to do with how much it is processing per operation. For instance, if you read the white papers on the processors, you will find that the G4 and all the past Motorola processors only go as high as 64-bit floating point, whereas both AMD and Intel have been supporting 80-bit floating-point numbers. This is why you will not find too many G4s in CAD offices, especially when they have to submit drawings to a DOT (Department of Transportation) or other government entity. At 64-bit floating point, many lines that line up on the screen don't on the print-out; 80-bit FP numbers do.

    What does this mean?
    Apple is "fuzzy" which is great for artists.
    Intel/AMD is precise (in comparison to Apple), which is great for science and industry.
     
  5. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #5
    Okay......

    So what you're saying is that the BUS is 32 bits wide universally. That doesn't surprise me. I think the cache handling and on-chip bit ratings would make more of a difference. The number of steps in the pipeline matters too. All of this will get more interesting when Intel goes RISC on more than just Itanium: Apple will be close to or past the 1 GHz mark, and Intel will be just stepping in at around 800 MHz. It's interesting to note, too, that 486s are the top of what x86s can currently turn out hardened for space, whereas Apple has launched a satellite (see other posts) with a G4.

    I use G4s for CAD all day, and I have noticed that stuff changes between the active model and the "flattened" drawings. I'm a bit surprised that 16 bits would make that much of a difference. Although I DO NOT use AutoCAD. It sux for architecture.

    Yes, your nVidia probably has a SICK FLOP rating, but it only does one thing. The G4 does those frighteningly high numbers for many things. It does raise an interesting point, though: what is the AVERAGE FLOP score between chips over several types of calculations?

    Here's where I'm ignorant: why don't CAD packages dump more processor duty to graphics cards? You'd think it'd make sense.
     
  6. macrumors 6502

    Joined:
    Jan 8, 2002
    Location:
    Bakersfield, CA
    #6
    Bit Math

    Try this:
    on a calculator, if you can find one this will work on.

    enter: 2^64 (in English, that is 2 to the 64th power)
    now compare that number to: 2^80

    this number is equal to (2^64)*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2

    Since most people would judge the "power" of a processor by how many operations can be done in a certain amount of time, the G4 would smash the other processors in this. However, if you weight the work in terms of precision, hands down Intel & AMD win.
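    For what it's worth, the gap between those two ranges works out exactly; a minimal C++ sketch (using ldexp so the powers of two stay exact in a double):

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    int main() {
        // Both powers fit exactly within a double's exponent range.
        double two64 = std::ldexp(1.0, 64);   // 2 to the 64th power
        double two80 = std::ldexp(1.0, 80);   // 2 to the 80th power

        std::printf("2^64  = %.0f\n", two64);
        std::printf("2^80  = %.0f\n", two80);

        // The ratio is 2^16 = 65536: sixteen extra doublings.
        std::printf("ratio = %.0f\n", two80 / two64);
        assert(two80 / two64 == 65536.0);
        return 0;
    }
    ```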

    By the way I own:
    1: AMD K6-2
    1: AMD Athlon
    2: AMD Athlon MP 1600+ (dual processor)
    1: G4 Titanium (Writing on it now)
     
  7. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #7
    the word is: "precision"

    I've never doubted non-Mac superiority for "basic" math. The issue is whether, over many types of operations, there's ANY difference in performance between chips when a mean average is drawn.
     
  8. macrumors member

    Joined:
    Jan 17, 2002
    #8
    Your computer-numbers bit math is WRONG (actual bit math is correct, but NOT IN THIS CASE!!). As a C++ programmer I can tell you this: 64-bit numbers AND 80-bit numbers are no different in real precision!

    That may seem strange, but they don't use the 64 bits and 80 bits for precision. They only have 15 digits of actual number in them. BOTH OF THEM. Why?

    Simple. They are FLOATING-POINT NUMBERS. They are not there to add precision; they are there to handle huge numbers.

    The extra bits are stored to define where the decimal place is. An example: 1500000000 is stored as 1.5e9 to the computer. The only difference between the 64-bit and the 80-bit numbers is how big that exponent can get (the 64-bit number maxes out around 10^308, the 80-bit somewhere around 10^4900).

    To summarize: There is NO DIFFERENCE in precision!
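    The limits being argued about can be read straight out of C++'s std::numeric_limits; a minimal sketch for the 64-bit double (the 80-bit long double's figures vary by platform, so they are only printed, not assumed):

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <limits>

    int main() {
        // A 64-bit IEEE double carries about 15 reliable decimal digits...
        std::printf("double digits10       = %d\n",
                    std::numeric_limits<double>::digits10);
        // ...while its exponent reaches about 10^308 — that's where the
        // "maxes out at 308" figure comes from: range, not digits.
        std::printf("double max_exponent10 = %d\n",
                    std::numeric_limits<double>::max_exponent10);
        assert(std::numeric_limits<double>::digits10 == 15);
        assert(std::numeric_limits<double>::max_exponent10 == 308);

        // long double is the 80-bit extended format on x86 but differs
        // elsewhere, so just print whatever this machine provides.
        std::printf("long double digits10       = %d\n",
                    std::numeric_limits<long double>::digits10);
        std::printf("long double max_exponent10 = %d\n",
                    std::numeric_limits<long double>::max_exponent10);
        return 0;
    }
    ```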
     
  9. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #9
    Thanx, I was beginning to hear the "dark side" calling.

    :p
     
  10. macrumors member

    Joined:
    Jan 17, 2002
    #10
    Also, Motorola's processors are more precise than Intel's and AMD's. Simply put, the IA-32 architecture is math-impaired.

    Why do I say this? Simple. To an Intel/AMD chip, the range of numbers from 0.9999991 to 1.0000001 is the same number! (And working with CAD, I assume you want numbers more precise than that.) 32-bit floating point and 32-bit integer are the ONLY precise numbers on the Intel side. Anything beyond that, and you have to deal with the fact that the Intel/AMD design doesn't understand numbers.

    "What does this mean?
    Apple is "fuzzy" which is great for artists.
    Intel/AMD is percise (in comparison to Apple) which is great for science and industry."

    Intel's processors are "fuzzy", and this has been known EVER SINCE THE PENTIUM. You do remember the stink about how it couldn't add, right? Well, they never changed that. It just considers a range of numbers to be the same number. It is FAR "fuzzier" than Motorola's processors.

    That's just my 2 cents.

    (Edit: For those Intel/AMD users who don't believe me, browse to my up-and-coming website (right now it's just text, but it will be more) at http://jeni-lee.com/anshelm/downloads/ to get simple source code and an exe to demonstrate that the IA-32 architecture is math-impaired.)
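    As an aside, the "same number" effect is easy to reproduce in a few lines of C++; a quick sketch showing that values closer together than the format's resolution collapse to one value. (This is a property of binary floating point in general, not of any one chip; float is used here because its rounding is coarse enough to see immediately.)

    ```cpp
    #include <cassert>
    #include <cstdio>

    int main() {
        // Near 1.0, floats are spaced about 1.2e-7 apart. An increment
        // smaller than half that spacing simply disappears on addition.
        float a = 1.0f;
        float b = a + 1e-8f;  // 1e-8 is below float's resolution at 1.0
        std::printf("1.0f + 1e-8f == 1.0f ? %s\n", (a == b) ? "yes" : "no");
        assert(a == b);

        // The same collapsing happens in 64-bit doubles, just at a finer
        // grain: the classic 0.1 + 0.2 != 0.3 result.
        assert(0.1 + 0.2 != 0.3);
        return 0;
    }
    ```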
     
  11. macrumors 601

    AlphaTech

    Joined:
    Oct 4, 2001
    Location:
    Natick, MA
    #11
  12. macrumors regular

    Joined:
    Aug 21, 2001
    #12

    sounds like someone's been to Ars Technica... :/

    "intel goes more RISC" sheesh!

    the pros and cons of Intel/AMD/Motorola are all very fine, but come on, define RISC and CISC for me in relation to a modern processor. A P4 is no more 'RISC' than a G4 is 'CISC'. They are terms left in the past, and do not strictly apply to modern processors.
    I'm also really not sure what the point is with the 'space' comment - 'hardened for space'? Hmmm... fighter aircraft require greater 'hardening' of chips due to the expected 'worst case' scenario - I suspect that power consumption is more of a deal in satellites, that and the ever-present fear of having your transcontinental phone chat 'BSOD'ed' by an MS-based OS...

    "look! g4 128 bit! p4 only 32 bit!!" ;) the g4 a 128-bit machine? as if! :)
     
  13. macrumors regular

    Joined:
    Aug 21, 2001
    #13

    uhh... so if one can give a number to a higher number of decimal places, that is not an increase in precision??!

    and there was me wondering why we bother with double precision .....
     
  14. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #14
    Hardening:

    To shield microelectronics from EM noise incurred in a "hostile" environment such as orbit. Orbital EM noise compared to the average desktop is like comparing a string quartet (desktop) to a Space Shuttle launch (orbit).

    AND: G4 = RISC, x86 = CISC

    Don't even try to say that isn't relevant anymore.
     
  15. macrumors regular

    Joined:
    Aug 21, 2001
    #15
    Re: Hardening:

    It isn't. Simple.

    blah blah blah..."To be specific, chips that implement the x86 CISC ISA have come to look a lot like chips that implement various RISC ISA’s; the instruction set architecture is the same, but under the hood it’s a whole different ball game. But this hasn't been a one-way trend.  Rather, the same goes for today’s so-called RISC CPUs. .....Thus the "RISC vs. CISC" debate really exists only in the minds of marketing departments and platform advocates whose purpose in creating and perpetuating this fictitious conflict is to promote their pet product by means of name-calling and sloganeering.  "

    It really gets up my nose when people bash things just because of a misunderstood buzzword, as well. That's straight from Ars Technica. Go look for it.

    As for satellites, I am well aware that the environment is 'hostile'. However, the reasoning behind using a PPC over an Intel chip is more likely to do with power consumption than shielding - it's the same for the embedded market.
     
  16. macrumors 68030

    Catfish_Man

    Joined:
    Sep 13, 2001
    Location:
    Portland, OR
    #16
    First of all...

    ...the G4 is not 128-bit. It's 32-bit with 128-bit AltiVec (Motorola's implementation of SIMD). It uses AltiVec to apply the same calculation to four 32-bit variables. The G5 (probably) is 64-bit, with 128-bit AltiVec. The P3, P4, and Athlon are all 32-bit, with 64-bit SSE/3DNow!. Performance-wise, the P4 and Athlon XP beat the crud out of the G4 at non-AltiVec stuff, and lose equally badly at AltiVec. To give you an example of how much difference it can potentially make, my friend's 450MHz G4 got 300 MFlops on the Altivec Fractal Carbon demo with AltiVec turned off. It got 1500 MFlops with it turned on. That's 5X the performance.
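    The "same calculation on four 32-bit values at once" idea can be sketched in portable C++ - plain loops standing in for the actual AltiVec or SSE instructions, which require compiler intrinsics (the Vec4 type and add function here are illustrative, not any real vendor API):

    ```cpp
    #include <cassert>
    #include <cstdio>

    // Stand-in for a 128-bit SIMD register: four 32-bit floats side by side.
    struct Vec4 {
        float lane[4];
    };

    // One SIMD "instruction" applies the same operation to all four lanes;
    // AltiVec's vaddfp does this in a single instruction rather than a loop.
    Vec4 add(const Vec4& a, const Vec4& b) {
        Vec4 r;
        for (int i = 0; i < 4; ++i) r.lane[i] = a.lane[i] + b.lane[i];
        return r;
    }

    int main() {
        Vec4 x = {{1.0f, 2.0f, 3.0f, 4.0f}};
        Vec4 y = {{10.0f, 20.0f, 30.0f, 40.0f}};
        Vec4 z = add(x, y);   // four additions expressed as one operation
        for (int i = 0; i < 4; ++i) std::printf("%g ", z.lane[i]);
        std::printf("\n");
        assert(z.lane[0] == 11.0f && z.lane[3] == 44.0f);
        return 0;
    }
    ```

    The speedup reported above comes from exactly this: when the work is four-wide like this, one vector instruction replaces four scalar ones.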
     
  17. macrumors 6502

    Joined:
    Jan 8, 2002
    Location:
    Bakersfield, CA
    #17
    32-bit\processor math

    First, the G4 is 32-bit. The path from memory to the engine is 128 bits wide. The only thing the engine does is funnel the data into 32-bit-wide chunks. Inside the processor, where work is actually done, it is 32 bits wide. This is the widest part of the system. The BUS to the add-on cards gets to 64 bits wide on some motherboards. If you are lucky and have onboard SCSI, that can get pretty wide too. Some video cards, high-end, and only on the card itself, contain paths up to 256 bits wide.

    Yes, the Apple website does say that it is 128 bits, but keep in mind, they are trying to sell you a computer. And technically, it does go into the CPU chip at 128. This is not the processor. Where the processing is done, it is 32 bits. Check out the Motorola information; they are not trying to sell you a computer.

    Bit Math:

    Yes, I bet you are right, C++ probably only recognizes math out to 15 digits. This is the compiler, not the processor. The processor does not see 15e9; the processor sees something like 101110101001011101010100101010101010 only. (The previous was an exaggeration; I have no idea what those 1s and 0s say.) The point being that when you pass a number to the processor, the type assignment (real/integer) determines how many 1s and 0s go through. If you assign a single integer and do math on it, both the G4 and the Px/AMD will see 1 = 00000000000000000000000000000001 (I think). That is a 32-bit number. Now, on the other extreme, if you pass the widest number, a real, to a processor, double the number of characters for the G4. For the Px and AMD processors, you pass 80 1s and 0s. It doesn't matter to the processor whether you passed 3 or 3x10^456; a real number has the same number of 1s and 0s. Now, math gets even more complicated, as the processor cannot make judgements. It can't shorten the number, and it doesn't see the whole number all at once. It has to spin its wheels looking at each bit. That means that a real on a G4 will take 2 cycles, and on a Px/AMD will take 3 cycles.

    Bottom line: the G4 is faster because it takes short-cuts.
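    The widths being traded back and forth here can be inspected directly; a short C++ sketch (only double's 8 bytes is taken for granted - int and long double sizes genuinely vary across the platforms in this thread, so they are just printed):

    ```cpp
    #include <cassert>
    #include <cstdio>

    int main() {
        // How wide each type actually is on this machine, in bits.
        std::printf("int:         %zu bits\n", sizeof(int) * 8);
        std::printf("double:      %zu bits\n", sizeof(double) * 8);
        // On x86, long double is the 80-bit extended format, though it is
        // usually padded to 96 or 128 bits in memory; PowerPC differs again.
        std::printf("long double: %zu bits\n", sizeof(long double) * 8);

        // Only the 64-bit IEEE double is the same width everywhere here.
        assert(sizeof(double) * 8 == 64);
        return 0;
    }
    ```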
     
  18. thread starter macrumors 68030

    mischief

    Joined:
    Aug 1, 2001
    Location:
    Santa Cruz Ca
    #18
    Re: 32-bitprocessor math

    Swing that Hammer!!!:D :D :D
     
  19. macrumors member

    Joined:
    Oct 25, 2001
    #19
    unbiased

    finally, we're actually talking facts here
    not the usual 'G4 kicks Pentium's ass' crap
     
  20. macrumors 68000

    Ensign Paris

    Joined:
    Nov 4, 2001
    Location:
    Europe
    #20
    Facts

    I will try to get some figures here soon, but these are my observations of an AMD 1900+.

    My friend is the same on the Windows machine as I am on Mac.

    The machine spec (Windows) is:

    Processor: AMD 1900+
    RAM: 1GB DDR RAM
    GC: GeForce3 Ti500, 64MB or summat

    It seems to be not as fast in things like Photoshop and stuff like that. And I was hoping not to be impressed with MS Flight Simulator 2002, and you know what, I wasn't! The GRAPHICS were not as good as I had dreaded, and for once I KNEW that the Mac was VERY superior!

    C ya for the night, I am off to bed (Exams in the morn!)
     
  21. macrumors 601

    AlphaTech

    Joined:
    Oct 4, 2001
    Location:
    Natick, MA
    #21
    G4 TiBook has a 64-bit chip...

    For everyone claiming that the G4 is 32-bit... how do you explain Gauge Pro showing the PowerBook G4 (Titanium) as 64-bit??? Just as Apple claims in the specifications database - not where they sell systems, and not a place many people look. The main reason I have been looking there was to check specifications for older systems before attempting upgrades.

    I even attached a screen shot taken off of my TiBook just a few moments ago to prove that I am not blowing smoke (or inhaling).

    I will be checking the G4 733 (pre-QuickSilver) at work tomorrow and can guarantee that it will be at least 64-bit. I am not 100% sure on the 128, since I do not remember when I last ran the utility on it. Either way, the G4 has been 64-bit for a very long time (since at least the AGP video model).
     

    Attached Files:

  22. macrumors member

    Joined:
    Jan 17, 2002
    #22
    "Yes, I bet you are right, C++ probably only recognizes math out to 15 digits. This is the compiler, not the processor."

    Um, dude, do you even know what you are talking about? No, it's not the compiler. Every C++ compiler acts identically. Why? Because C++ turns it into assembly language. And guess what...

    I know some assembly as well. Not enough to turn out a program, but enough to know that I am right. (A good online book on 32-bit assembly for the IA-32 can be found at "http://webster.cs.ucr.edu/Page_AoA/0_AoAHLA.html".) The chip DOES understand NUMeNUM. You should give them credit, they really are trying to make it smarter. It's just so stupid that anything beyond 32-bit is inaccurate.
     
  23. macrumors member

    Joined:
    Jan 17, 2002
    #23
    You see, the reason why it is called 32-bit architecture is because that's all it really understands. If the chip could handle 64-bit numbers that way (the way you THINK they do), IT WOULD BE A 64-BIT CHIP. But it's NOT. Why do you think the memory addressing of the IA-32 is 32-bit? If it can understand these wonderful 64-bit numbers, why not just give you that memory range? BECAUSE IT ONLY REALLY UNDERSTANDS 32-BIT NUMBERS. Everything else - its 64-bit and 80-bit numbers - is NUMeNUM!!

    Intel's Itanium works the way you think all processors do because it's 64-BIT.
    Learn about your subject before making unfounded assumptions.

    Plus, since you say "Integer/Real", I would venture a guess that you know Pascal, right? Either that, or you claim to understand assembly and don't have a clue. Yes, they are stored as 80 bits, but in the manner I am saying.
     
  24. Guest

    #24
    Yes, you are right, your little picture is showing 64 bits. Unfortunately, all that is showing you is the width of the memory bus (how large the path between the CPU and the RAM is). It has nothing to do with how many bits the processor can actually work with at a time.
     
  25. macrumors regular

    Joined:
    Jan 9, 2002
    #25
    64-bit?

    Of course the G4 is not 64-bit!

    If the processor were 64-bit, Steve would boast about that in a BIG way!

    Anytime he did any comparisons between the Pentium and PPC, he would have clearly made the distinction that the Pentium is 32-bit and the G4 (if it were) 64-bit.

    This is very significant in processor ability/performance.

    G4 is not 64-bit.
     
