
nyc2pdx (original poster):
I'm seeing all this discussion about 500 million teraflops this and 250 million teraflops that....what is the difference, and why would the average user who plays an occasional game on their mac be interested in this? Sorry for my ignorance...but I am actually curious.
 
Everything I'm going to post here is an oversimplification of complex realities... but for the sake of understanding:

FLOPS is just a measure of how much processing a system can do. It stands for "floating point operations per second." You can think of a single floating point operation as one addition, subtraction, multiplication, or division of a pair of numbers. A 1 FLOPS computer could do one of these operations every second. A 1 teraFLOPS computer could do 1,000,000,000,000 (1x10^12) of them every second.
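To make the unit concrete, here's a rough, illustrative way to measure it yourself: time a big batch of floating point additions and divide by the elapsed time. (Pure Python is slow, so expect megaFLOPS here, not teraFLOPS; the point is just what the number means.)

```python
import time

n = 1_000_000
x = 0.0
start = time.perf_counter()
for _ in range(n):
    x = x + 1.5  # one floating point addition
elapsed = time.perf_counter() - start

flops = n / elapsed  # operations per second
print(f"~{flops:,.0f} FLOPS ({flops / 1e6:.1f} megaFLOPS)")
```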

At the dawn of computing, computers usually had a single processor called the Central Processing Unit (or CPU), so it was easy to compare performance between computers. Now that computers have many processing units (multiple CPUs with multiple processing cores, and multiple graphics processing units (GPUs) with many, many, many cores) it is sometimes easier to compare performance by adding up all of the FLOPS these processing cores can calculate.
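For a sense of where those big totals come from, here's a back-of-the-envelope peak FLOPS calculation for a made-up chip (every number below is hypothetical, just to show how the pieces multiply up):

```python
# Peak FLOPS = cores x SIMD lanes x ops per lane per cycle x clock speed.
cores = 8                 # hypothetical core count
simd_lanes = 8            # e.g. 8 single-precision lanes per core
ops_per_cycle = 2         # a fused multiply-add counts as 2 operations
clock_hz = 3.0e9          # 3 GHz

peak_flops = cores * simd_lanes * ops_per_cycle * clock_hz
print(f"{peak_flops / 1e9:.0f} gigaFLOPS peak")  # 384 gigaFLOPS
```

Real chips rarely sustain their theoretical peak, which is one reason FLOPS comparisons are only a rough guide.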

That's why you might see Microsoft and Sony comparing FLOPS on their latest consoles. The end user doesn't really care how many cores the CPUs and GPUs in these systems have, or at what speed (hertz) they're running. All the user cares about is performance.

In case you're interested, the term "supercomputer" originally described a computing system that could execute 1 gigaFLOPS (1x10^9 operations per second). In 1999, Apple advertised the Power Mac G4 as the first "personal supercomputer" because it could process 4 gigaFLOPS. The G4 chip had a vector processing unit (which Apple called the "Velocity Engine," also known as AltiVec), which gave it the boost over the 1 gigaFLOPS barrier. Applications had to be written specifically to take advantage of it.

The fastest supercomputer cluster these days is capable of over 93,000,000,000,000,000 (93x10^15) FLOPS, or roughly 93 petaFLOPS.
 
I'm seeing all this discussion about 500 million teraflops this and 250 million teraflops that....what is the difference, and why would the average user who plays an occasional game on their mac be interested in this? Sorry for my ignorance...but I am actually curious.

OK, so no, you've never actually seen several million teraflops: nothing that fast exists. "Tera" by itself means a trillion. FLOPS is short for "floating point operations per second," and is basically how fast the graphics processor can, well, do maths on floating point numbers (i.e. numbers with decimals). When you move something in a 3D environment, you're adding or subtracting values from its coordinates, so the more FLOPS, the faster you can do it.
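To make that concrete, here's what that coordinate maths looks like in plain Python (illustration only; a GPU does billions of these per second, in parallel):

```python
# Moving a point in 3D is just three floating point additions,
# one per coordinate. This is the kind of work FLOPS counts.
position = [1.0, 2.0, 3.0]   # x, y, z
velocity = [0.5, 0.0, -1.0]  # move right and down a bit

new_position = [p + v for p, v in zip(position, velocity)]
print(new_position)  # [1.5, 2.0, 2.0]
```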
But this is just one aspect of performance, as this is only the shader/stream processing we're talking about. The ROPs (render output units), TMUs (texture mapping units) and other logic in a GPU also contribute to its performance in different circumstances.
PS. A teraflop doesn't exist. The 's' is required. Otherwise it'd be "floating point operation per"
 
I'm seeing all this discussion about 500 million teraflops this and 250 million teraflops that....what is the difference, and why would the average user who plays an occasional game on their mac be interested in this? Sorry for my ignorance...but I am actually curious.

You know of a 500 million teraflop machine? WHERE?

This is going to be AWESOME!
 
iMac 2018 man....it is coming!

Straight into the TOP500.

(The fastest supercomputer in the world is rated at 93,015 teraflops.)
By the way, you've probably seen references to single, double and half precision.

Let us take the example of pi.

Under double precision, pi is equal to
3.1415926535897931
Under single precision, pi is equal to
3.1415927

Under half precision (the new kid on the block), pi is equal to
3.1406

That last one isn't very precise. However, it's useful in calculating the lighting associated with HDR graphics, among other things.

Code:
>>> import numpy as np
>>> import math
>>> np.float64(math.pi)
3.1415926535897931
>>> np.float32(math.pi)
3.1415927
>>> np.float16(math.pi)
3.1406
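You can see the same rounding with just the standard library: struct lets you round-trip a value through each storage size (the 'e' format is IEEE 754 half precision, available since Python 3.6), and the gap from the "true" value shows how much precision each format throws away. (math.pi is itself a double, so the double-precision error shows as zero.)

```python
import math
import struct

def round_to(fmt, value):
    """Round value to a given precision by packing and unpacking it."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

double = round_to('d', math.pi)   # 64-bit
single = round_to('f', math.pi)   # 32-bit
half = round_to('e', math.pi)     # 16-bit

print(abs(double - math.pi))  # 0.0
print(abs(single - math.pi))  # ~9e-08
print(abs(half - math.pi))    # ~1e-03
```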
 