# Long Double Precision

Discussion in 'Mac Programming' started by empeeu, Dec 13, 2010.

1. ### empeeu macrumors newbie

Joined:
Dec 13, 2010
#1
Hi all,

I have a strange application where I need the precision of long doubles. However, when testing division, I ran into a snag:

Code:
```
long double tmp;
tmp = 1.0L / 6.0L;
```
And from GDB I get:
tmp = 0.1666666666666666666711841757186896

which, admittedly is better than just a plain double, but I was expecting something like:
0.1666666666666666666666666666666667

So, then I tried:
Code:
`tmp = 0.1666666666666666666666666666666667;`
Which GDB tells me is:
tmp = 0.1666666666666666574148081281236955

Strange? Any ideas? I'm compiling in debug mode with GCC 4.0 with no optimization.

Also, I'm very new to Xcode/Mac, and I'm a little rusty in C++.

Any help much appreciated.

2. ### lee1210 macrumors 68040

Last edited: Dec 13, 2010

Joined:
Jan 10, 2005
Location:
Dallas, TX
#2
No, not at all. There are numbers that can't be represented exactly in binary, and often even very good approximations will still be off if you look at enough significant digits. In your second example you didn't use the L suffix, so the literal was evaluated as a double and then converted to a long double. In your first example your precision was better because the arithmetic was done in long double, but you're still going to be a slave to approximating decimal in binary. Check out IEEE 754 for an explanation of how this approximation takes place. Note that every significant decimal digit needs ~3.3 bits to represent. Even if you throw enough bits at the problem, you're still approximating.
There are libraries for other languages (Python and Java spring to mind, surely there are others) that do decimal math closer to exact decimal precision. It's much slower, because CPUs don't work that way, but it's meant for cases when binary approximation "won't do it".
Good news! This problem has nothing to do with Xcode or the mac. It plagues practically every platform and language and is one of the basic tradeoffs we get for the speed increase for calculations that computers provide.

-Lee

3. ### empeeu thread starter macrumors newbie

Joined:
Dec 13, 2010
#3
Thanks Lee! I'm aware of the finite-precision problem, but I usually see it happen in the last two digits of precision, so this caught me off-guard.

Thanks for the reply! You've convinced me to check out sympy!

4. ### naples98 macrumors member

Joined:
Sep 9, 2008
Location:
Houston
#4
I had a friend (Math major) in grad school who took a class where they studied how wildly imprecise numbers could become in binary. I wish I had the notes, because it was pretty crazy how fast it could happen. Luckily I haven't had to deal with it yet.

5. ### empeeu thread starter macrumors newbie

Joined:
Dec 13, 2010
#5
I'm a MechE major myself, but I do a significant amount of coding, so I've had to learn these things on the fly. I have a somewhat badly conditioned algorithm (with no hope of fixing it), so yeah, in my case finite-precision numbers are really killing me. I was hoping long doubles would be enough for me to get by, but alas!

6. ### autorelease macrumors regular

Joined:
Oct 13, 2008
Location:
Achewood, CA
#6
If you need high-precision decimal arithmetic in Objective-C, you can try using NSDecimalNumber. It can handle up to 38 significant decimal digits, but supports only a limited set of operations.

7. ### gnasher729 macrumors P6

Joined:
Nov 25, 2005
#7
Tell us about the algorithm. Why is there "no hope of fixing" it? There is always hope until we say otherwise. BTW, the simplest way to get about 33 digits of precision is to use long double and compile as PowerPC code. The only disadvantage is that gdb won't work unless you run it on a real PowerPC Macintosh, and you have to go back to the Mac OS X 10.5 SDK.

8. ### Mac_Max

Joined:
Mar 8, 2004
#8
9. ### lazydog macrumors 6502a

Joined:
Sep 3, 2005
Location:
Cramlington, UK
#9
Hi

Well, I think you would get the extra accuracy you expected if you were running on a system that implements long doubles in 128 bits, e.g. PowerPC. But it sounds as though you're running on an Intel Mac, in which case long doubles are implemented in 80 bits, so not that much more precision over 64-bit doubles.

b e n

10. ### gnasher729 macrumors P6

Joined:
Nov 25, 2005
#10
Remember that PowerPC code runs just fine on an Intel Macintosh. Just switch the compiler to generate only PowerPC code.

11. ### lee1210 macrumors 68040

Joined:
Jan 10, 2005
Location:
Dallas, TX
#11
As long as Rosetta is installed on your target. It's not by default in 10.6.

-Lee

12. ### lazydog macrumors 6502a

Joined:
Sep 3, 2005
Location:
Cramlington, UK
#12
Ah yes, good point (sorry, didn't see your earlier post on PowerPC). Though I guess running under emulation will slow things down in general. So perhaps if speed is important, switching to GCC 4.3 and using the __float128 type might be the thing to do, but I'm out of my depth here as I wouldn't even know how to go about installing 4.3!

b e n

13. ### jared_kipe macrumors 68030

Joined:
Dec 8, 2003
Location:
Seattle
#13
Just change your type from long double to __float128.

14. ### empeeu thread starter macrumors newbie

Joined:
Dec 13, 2010
#14
Wow, I love this place! So many helpful replies (very contrary to other programming forums I have tried in the past...). This looks like a nice community.

@gnasher729
It's a Gram-Schmidt-like orthonormalization procedure. Ill-conditioned because of lots of divisions, square roots, and adding small and large numbers together. I've already messed around with the algorithm to the point where I've squeezed about 3-4 more sig figs out of the "stabilized" implementation (using iteration and various other tricks). I've also tried the Householder and Cholesky decomposition approaches.

@Multiple
I will try the PowerPC route for interest's sake. Thanks to all who suggested it. Also, the 80-bit vs. 128-bit implementation of long double is good to know...

I did try __float128, but I'm not that familiar with Xcode and the Mac programming environment, so I got a compile-time error. While it would probably be "good enough," I'm having fun at this point and plan to muck around with Python.

@Mac_Max and autorelease
I did a search for libraries and came across gmplib, but not NSDecimalNumber. Both look pretty useful; I'll file those away for future use. However, I think I'm going to muck around with sympy instead, because it seems like it might serve my purposes quite well. Also, apparently (I recently found a paper) symbolic math is the preferred way to go about this process anyway... (and BONUS: learning Python is something I want to do anyway, and this is a nice little project).

@lazydog
Speed is NOT important. I'm trying to create an orthonormal polynomial basis for a finite element application. I just need to create this basis once; I don't care if I have to leave my machine running for a week to get it done... I'll tabulate it, and that will be it. What I have right now is probably 'good enough,' but I'm having fun with it, and I figure I might as well do it right.

Thanks a lot for all the help/comments.

15. ### empeeu thread starter macrumors newbie

Joined:
Dec 13, 2010
#15

I assumed I was getting 128-bit doubles, and perhaps gdb thought the same thing. So gdb might display more digits, but after my 80 bits of precision I am essentially getting junk. I'd be happy with that explanation, since it still seems odd to me that my last ~15 digits are wrong just from finite precision.

Anyway, just curious. Feel free to ignore.

16. ### chown33 macrumors 604

Joined:
Aug 9, 2009
Location:
Sailing beyond the sunset
#16
You're not getting "junk" in the sense of random digits. What you're getting is the binary equivalent to infinitely many 0-bits for the less significant bits of the number, but it's converted to decimal. Remember, the actual base of the numbers is BINARY. It's being converted to DECIMAL (base 10) for display. What you perceive as "junk" is in fact a precise number in binary.

I suggest this exercise. Take a float variable (32-bit floating point), assign it the value 1.0/3.0, and print it to 20 decimal places. Then print the hex bits of that IEEE-754 representation. Next, assign that float variable to a double variable and do the same things: print it to 20 decimal places, and print its hex bits. What you should find is that in the conversion from float to double, the less significant bits are simply zeroed. The binary value of this is precisely what is being shown. You can do the same thing again with long double.

If that doesn't make sense, consider a number in base-3. Consider the fractional values 0.1 and 0.2. What do these values represent? 1/3 and 2/3, and they represent these values EXACTLY. That is, in base-3, 1/3 is exactly 0.1 and all less significant digits are 0. There are no infinitely repeating digits as there are when representing 1/3 in base-10 or base-2. Work out how you'd represent 1/2, 1/6, and 1/9 in base-3.

17. ### lee1210 macrumors 68040

Joined:
Jan 10, 2005
Location:
Dallas, TX
#17
http://forums.macrumors.com/showpost.php?p=11244423&postcount=10

It's only for float, but I have code in that post that tears apart an IEEE-754 number, shows the bits, etc. It may be instructive in this case. What it really comes down to is that your decimal number must be composed of a sum of 1/2^n terms. If n is very large you can end up with a lot of zeros after the decimal point and then a long string of digits.

1/2^11 = 0.00048828125
Not too pretty. That's with a pretty small n.

-Lee