 MacRumors Forums printf Precision

Nov 1, 2012, 10:14 AM   #1
Senor Cuete
macrumors member

Join Date: Nov 2011

printf Precision

In the Xcode debugger, floating-point variables are rounded to a few significant digits. Versions of Xcode earlier than 4.5.1 showed them with more significant digits. I needed to look at some of these with greater precision and couldn't figure out how to do it with the Xcode preferences, so I decided to print them to the console using printf(). As a test I printed M_PI, which is defined in math.h as a literal.

Code:
```long double pi = M_PI;
printf("%.48Lf", pi); // zeros after 48 places```
printf logs a different value than M_PI. In math.h:
3.14159265358979323846264338327950288
and with printf:
3.141592653589793115997963468544185161590576171875

Only the first 16 significant digits are the same. Yes, I know that this is plenty of precision and that I shouldn't worry about it. Why would this be?
0
Nov 1, 2012, 11:03 AM   #2
gnuguy
macrumors newbie

Join Date: Nov 2006

http://en.wikipedia.org/wiki/Double_precision

Basically you hit the limit of what a double can store.
0
Nov 1, 2012, 03:36 PM   #3
Senor Cuete
macrumors member

Join Date: Nov 2011
Quote:
 Originally Posted by gnuguy Basically you hit the limit of what a double can store.
That could be correct if my variable weren't a 128-bit (16-byte) long double. If it were a double, then printf wouldn't have displayed that long a mantissa. In the header math.h there is a comment that M_PI would be more useful as a long double, but it's a double literal because POSIX requires it.
0
Nov 1, 2012, 06:01 PM   #4
Persifleur
macrumors member

Join Date: Jun 2005
Location: London, UK

Your variable is a long double, but the constant itself is a double literal (i.e. it doesn't have the L suffix). The comment in the header file is merely explaining why they're double constants. You're doing an implicit cast to long double through the variable assignment, but not actually gaining any precision. If you use the constant M_PI directly in the printf statement, you'll get a warning indicating a mismatch between the format type (long double) and the argument type (double).

Code:
```long double pi = M_PI;
long double more_pi = 3.14159265358979323846264338327950288L;
printf("%.48Lf\n", pi);
printf("%.48Lf\n", (double) more_pi); // Same, with warning
printf("%.48Lf\n", more_pi); // Different```
0
Nov 1, 2012, 07:32 PM   #5
Senor Cuete
macrumors member

Join Date: Nov 2011
Quote:
 Originally Posted by Persifleur printf("%.48Lf\n", (double) more_pi);
Why did you cast more_pi to a double? Is there a printf specifier for a long double? Also, doesn't the L at the end of the value of pi cast it to a long integer, not a 128-bit long double?
0
Nov 1, 2012, 07:44 PM   #6
gnasher729
macrumors G4

Join Date: Nov 2005
Quote:
 Originally Posted by Senor Cuete Why did you cast more_pi to a double? Is there a printf specifier for a long double? Also doesn't the L at the end of the value of pi cast it to a long integer, not a 128 bit long double?
Just wondering... Are you using a PowerPC? If you are, then you have 128 bit long double (implemented by using two doubles). If you are using any x86 based computer, then your long doubles are actually 80 bit.

The cast was there to demonstrate that you get a warning, because the Lf _is_ the printf format specifier for long double, but the value passed is just double.
0
Nov 2, 2012, 09:59 AM   #7
Senor Cuete
macrumors member

Join Date: Nov 2011
Quote:
 Originally Posted by gnasher729 Just wondering... Are you using a PowerPC? If you are, then you have 128 bit long double (implemented by using two doubles). If you are using any x86 based computer, then your long doubles are actually 80 bit.
x86, and:
sizeof(float) = 4
sizeof(double) = 8
sizeof(long double) = 16 (128 bits)

Is the specifier Lf for both doubles and long doubles?
0
Nov 2, 2012, 07:50 PM   #8
Persifleur
macrumors member

Join Date: Jun 2005
Location: London, UK
Quote:
 Originally Posted by Senor Cuete Why did you cast more_pi to a double?
To show that the output of printf was the same when you have the value of the M_PI literal defined as a long double but cast to a double (and still formatted as a long double, which is a bit naughty, and you get warned accordingly). That is, to show that what you thought was a long double actually had only the precision of a plain double, because the referenced constant was a double. Perhaps I confused things by intentionally using the wrong format string for (double) more_pi.

Quote:
 Originally Posted by Senor Cuete Is there a printf specifier for a long double?
That would be %Lf. Double is %f.

Quote:
 Originally Posted by Senor Cuete Also doesn't the L at the end of the value of pi cast it to a long integer, not a 128 bit long double?
If there's a decimal point in the literal, you get a long double. Otherwise, you get a long integer.

Quote:
 Originally Posted by Senor Cuete X86 and sizeof(float) = 4 sizeof(double) = 8 sizeof(long double) = 16 - 128 bits
sizeof() tells you how much space is taken by the variable, but nothing about how much of that space is used to increase precision. If you want the actual precision:

Code:
```#include <float.h>
printf("double: %i, long double: %i\n", DBL_DIG, LDBL_DIG);

// output on x86: double: 15, long double: 18```
Ultimately, how many bits are used in memory is irrelevant. What you're concerned with is precision, and on x86 you get 18 significant figures for a long double.
0
