Okay, this is how floating point numbers work
on just about every computer you'll ever touch. This is the IEEE 754 standard for binary floating point numbers. The format is thus:
Single Format (standard float)
Code:
|s| exp (8) | mantissa (23) |
The 's' is a single bit indicating whether the number is positive or negative. The 'exp' is an 8-bit biased exponent. By biased, I mean that whatever numeric representation is in 'exp', the value used in calculations is going to be exp-127 (for a normalized mantissa). The mantissa is the fractional part of the significand, with an implied leading '1' for normalized floats.
So, the number is:
(-1)^s * (1.mantissa * 2^(exp-127))
It's pretty much like scientific notation.
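It's easy to pull those fields apart and check the formula yourself. Here's a quick sketch in Python, using the standard struct module to get at the raw bits (the function name decompose is just mine, for illustration):

```python
import struct

def decompose(x):
    """Unpack a single-precision float into (sign, biased exponent, mantissa bits)."""
    # Reinterpret the 4 bytes of the float as a 32-bit unsigned integer
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31               # 1 bit
    exp = (bits >> 23) & 0xFF       # 8 bits, still biased by 127
    mantissa = bits & 0x7FFFFF      # low 23 bits; the leading '1.' is not stored
    return sign, exp, mantissa

sign, exp, mantissa = decompose(92.79)
# Rebuild the value: (-1)^s * 1.mantissa * 2^(exp-127)
value = (-1) ** sign * (1 + mantissa / 2 ** 23) * 2 ** (exp - 127)
```

Note that the pack/unpack round trip is what forces the number down from Python's native double precision into the 32-bit single format we're talking about here.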
Floating point, however, has a real hard time dealing with numbers like 92.79. Sure, I can easily represent 92 in binary. It's just 1011100. Nice and exact. The true problem lies in the question: "How do you represent 0.79 in binary?" This is more difficult. The solution in the IEEE specification for floating point numbers is to approximate the decimal fraction with a binary fraction. What's a binary fraction? It's a binary number where each place after the point represents another negative power of two. So, the binary fraction:
0.11001
Is the same as
(1/2) + (1/4) + (1/32)
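A binary fraction is easy to evaluate in code, too. A minimal sketch in Python (the helper name binary_fraction is mine):

```python
def binary_fraction(bits):
    """Evaluate the digits after the binary point: the digit at position i
    (counting from 1) is worth 2**-i."""
    return sum(2.0 ** -(i + 1) for i, bit in enumerate(bits) if bit == '1')

# 0.11001 -> (1/2) + (1/4) + (1/32)
print(binary_fraction('11001'))  # 0.78125
```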
Trouble is that decimal values like 0.79 require an endless binary fraction to represent, and we only have 23 bits in the mantissa to work with. Less, if you consider that the mantissa is normalized and contains the representation of 92 as well. So, the best that can be done is to find a close approximation. In the case of 92.79, its floating point representation is:
01000010101110011001010001111011
Where:
0 is the sign bit (positive)
10000101 is the exponent (remember, this is biased by 127)
(1.)01110011001010001111011 is the mantissa. The (1.) is implied here.
When we de-bias the exponent, we are left with a value of 6, so we have to shift the binary point six places to the right to get the represented value:
1.01110011001010001111011 -> 1011100.11001010001111011
So what decimal value does the fractional part .11001010001111011 represent? Do the math:
(1/2)+(1/4)+(1/32)+(1/128)+(1/2048)+(1/4096)+(1/8192)+(1/16384)+(1/65536)+(1/131072) = 0.7900009155273438
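You can confirm that this is exactly what the machine stores by rounding 92.79 to single precision and reading the result back. A sketch using Python's standard struct module; the decimal digits in the assertion are the ones just computed by hand:

```python
import struct

# Round 92.79 to the nearest single-precision float, then read it back
stored = struct.unpack('>f', struct.pack('>f', 92.79))[0]

# The stored value is exactly 92 + 0.79000091552734375
assert stored == 92.79000091552734375
```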
That's not bad, and good enough if all you need is 5 decimal places of precision. But errors like this can compound over time, so if you are working on real mission-critical calculations you can either use longer floating point representations, or use an arbitrary precision software solution. These libraries are pretty easy to find, and writing one isn't terribly hard. We've all learned the basic algorithms for adding, subtracting, multiplying, and dividing arbitrarily sized decimal numbers: all you have to do is translate that into C or your favorite language.
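In Python, for example, the standard-library decimal module is exactly this kind of library (a quick sketch; a C programmer would reach for something like GMP or MPFR instead):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # significant decimal digits; raise as needed

# Binary floats can only approximate 0.1 and 0.2, so error shows up in the sum...
binary_sum = 0.1 + 0.2                         # 0.30000000000000004
# ...while decimal arithmetic keeps these values exact.
decimal_sum = Decimal('0.1') + Decimal('0.2')  # Decimal('0.3')
```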
Calculator isn't advertised as a mission-critical arbitrary-precision decimal computation tool, nor do I really expect it to implement arbitrary precision libraries.