floats for better precision?

iphoneGuy
Mar 30, 2008, 05:34 PM
I am working on a new program that uses a lot of float values. However, I notice that if I set a value it will not be precise.

e.g.

float val = 10.3 ;

may show up in the debugger as 10.300002 or 10.299999993

How can I make these numbers always return 10.3 exactly?

Thanks,

toddburch
Mar 30, 2008, 05:47 PM
Don't use float if you can't tolerate rounding. 10.3 can't be represented in floating point without rounding.

CaptainZap
Mar 30, 2008, 05:48 PM
Don't use float if you can't tolerate rounding. 10.3 can't be represented in floating point without rounding.

Why's that?

antibact1
Mar 30, 2008, 05:56 PM
Don't use float if you can't tolerate rounding. 10.3 can't be represented in floating point without rounding.

There are some numbers that just can't be represented exactly in binary on a computer, no matter how much precision you use.

http://www.ima.umn.edu/~arnold/disasters/patriot.html

toddburch
Mar 30, 2008, 06:00 PM
Why's that?

What was I thinking? 3/16 = 0.1875. No big deal for a float in this case, eh?!

It's a problem further down in the precision department. Here's a C++ example.

#include <iostream>
#include <iomanip>
using namespace std ;

int main(void)
{
float f ;
f = 10.000003 ;
cout.precision(12) ;
cout << "f=" << f ;
return 0 ;
}

Output:

[Session started at 2008-03-30 18:00:48 -0500.]
f=10.000002861
exercise35 has exited with status 0.

CaptainZap
Mar 30, 2008, 06:21 PM

So is there something else to use like double, or do programmers just get used to it?

toddburch
Mar 30, 2008, 06:47 PM
So is there something else to use like double, or do programmers just get used to it?

Well, there are different ways to get around floating point rounding behavior.

Using this example, if you knew you needed 6 digits of non-rounded precision, you could multiply all your floats by 1,000,000, do all your calculations, and then divide by 1,000,000 for the final output.

#include <iostream>
#include <iomanip>
using namespace std ;

int main(void)
{
float f ;
f = 10000003.0 ;
cout.precision(12) ;
cout << "f=" << f/1000000.0 ;
return 0 ;
}

Output:

[Session started at 2008-03-30 18:43:23 -0500.]
f=10.000003
exercise35 has exited with status 0.

Following that logic, you could even use integers.

Working with 3D point data, it's very common to have slight inaccuracies. In those cases, a lot of programs take an epsilon into account. http://en.wikipedia.org/wiki/Machine_epsilon

Todd

toddburch
Mar 30, 2008, 06:49 PM
Also, doubles have greater precision, but the rounding dilemma still exists.

The same program as above, using a double instead of float, does not round in this case.

#include <iostream>
#include <iomanip>
using namespace std ;

int main(void)
{
double d ;
d = 10.000003 ;
cout.precision(12) ;
cout << "d=" << d ;
return 0 ;
}

Output:

[Session started at 2008-03-30 18:48:07 -0500.]
d=10.000003
exercise35 has exited with status 0.

iphoneGuy
Mar 30, 2008, 07:10 PM

So is there something else to use like double, or do programmers just get used to it?

No problem.. I just wish there was a clean way to represent 10.3 as exactly 10.3....

lee1210
Mar 30, 2008, 08:15 PM
No problem.. I just wish there was a clean way to represent 10.3 as exactly 10.3....

You will need to invent a computer that uses base-10 instead of base-2 for numerical representations.

There are discussions about this, but base-2 computers are so much easier to architect that this will probably never come to be.
http://homepages.transy.edu/~jmiller/web706/chapt3.htm

You may also wish to take this up with the IEEE (http://www.ieee.org), as IEEE-754 defines this standard for storing floating point numbers in base-2.

As was mentioned earlier, if you are doing fixed-precision arithmetic, using an int, long int, etc. and a fixed scale factor for your numbers might be a good way to go. We can exactly represent base-10 integers in base-2, so it's no problem (unless you need REALLY big integers...).

-Lee

pensfan
Mar 31, 2008, 12:12 AM
What you want is a "decimal" data type, if available. .NET languages have decimal, Java has BigDecimal, and I believe Python has a decimal module as well. Just as toddburch and lee1210 suggested, the idea is that you multiply your floating point value by whatever scaling factor is necessary to represent it as an integer value, do your work on that integer value, and then divide by the scale to get your resulting floating point value. Those decimal types implement that behavior so you don't have to do it manually.

lazydog
Mar 31, 2008, 08:46 AM
I am working on a new program that uses a lot of float values. However, I notice that if I set a value it will not be precise.

e.g.

float val = 10.3 ;

may show up in the debugger as 10.300002 or 10.299999993

How can I make these numbers always return 10.3 exactly?

Thanks,

Hi,

I know it's annoying, but to be honest it's not a problem that most applications have to worry about too much. Just be aware of doing things like this:

float value ;

if ( value == 10.3f )
{

}

which would produce different results depending on how 'value' was calculated.

b e n

EDIT: Here's an example of what I mean by 'not worrying'. Calculators suffer from exactly the same problem but most people use them in blissful ignorance and trust their results.

iphoneGuy
Mar 31, 2008, 10:14 AM
Hi,

I know it's annoying, but to be honest it's not a problem that most applications have to worry about too much. Just be aware of doing things like this:

float value ;

if ( value == 10.3f )
{

}

which would produce different results depending on how 'value' was calculated.

b e n

EDIT: Here's an example of what I mean by 'not worrying'. Calculators suffer from exactly the same problem but most people use them in blissful ignorance and trust their results.

thanks for the responses... I ended up switching to ints; it took a couple of minutes and all is good...

gnasher729
Mar 31, 2008, 11:11 AM
I am working on a new program that uses a lot of float values. However, I notice that if I set a value it will not be precise.

e.g.

float val = 10.3 ;

may show up in the debugger as 10.300002 or 10.299999993

How can I make these numbers always return 10.3 exactly?

Thanks,

Google for "what every programmer should know about floating point arithmetic" and you should find a few links to Goldberg's article "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which on one hand contains much more than you want to know, and on the other hand just barely covers what you need to know.