
obelix (macrumors member, Original poster) — Oct 20, 2004
"High performance computing is often measured in terms of floating point operations per second, and is used in an environment where every second counts." - Some HPC manual I'm reading

I am wondering what exactly is meant by floating point operations? Is this some large number that the computer is just crunching all the time in a calculation?
 
A floating-point number is a digital representation for a number in a certain subset of the rational numbers, and is often used to approximate an arbitrary real number on a computer. [...]

A floating-point calculation is an arithmetic calculation done with floating-point numbers and often involves some approximation or rounding because the result of an operation may not be exactly representable.
from Wikipedia - a source of more and more knowledge... ;)
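To make the "rounding" part of that definition concrete, here's a quick Python sketch showing that even a simple sum isn't exactly representable in binary floating point, which is why results often carry tiny approximation errors:

```python
# 0.1 and 0.2 have no exact binary floating-point representation,
# so their sum picks up a tiny rounding error.
a = 0.1 + 0.2
print(a)             # prints 0.30000000000000004, not 0.3
print(a == 0.3)      # prints False

# Comparisons should therefore use a small tolerance instead of ==:
print(abs(a - 0.3) < 1e-9)   # prints True
```

This is exactly the "result of an operation may not be exactly representable" situation the Wikipedia definition mentions.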
 
As a more simplified answer: floating-point operations are when you take two numbers that might not be integers (e.g. 5.4 or -1.1) and add / subtract / multiply / divide them.

The reason it comes up in assessing computing performance is that some kinds of computing problems, like working through permutations and combinations, involve mostly integer math, while others, like calculating distances, angles, and orientations in a 3D environment, involve mostly floating-point math. So over time, computers came to handle integer and floating-point math in separate hardware units. Especially in high-performance scientific computing, the floating-point work is the really compute-intensive part, so the number of floating-point operations per second (FLOPS) became the benchmark of speed.
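As a very rough illustration of what "operations per second" means, here's a hypothetical Python sketch (the function name `estimate_flops` is made up for this example) that times a loop doing a known number of floating-point operations. A pure-Python loop wildly underestimates what the hardware can do; real benchmarks like LINPACK are far more careful about compilers, caches, and vector units:

```python
import time

def estimate_flops(n=1_000_000):
    # Each loop iteration does 2 floating-point operations:
    # one multiply and one add.
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc + x * x
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"~{estimate_flops():.2e} FLOPS (pure Python, so a big underestimate)")
```

Dividing the operation count by the elapsed time gives operations per second; supercomputers are ranked by essentially this measure, just at teraflop and petaflop scale.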
 