printf Precision

Discussion in 'Mac Programming' started by Senor Cuete, Nov 1, 2012.

  1. macrumors regular

    Joined:
    Nov 9, 2011
    #1
    In the Xcode debugger, floating point variables are rounded to a few significant digits. Versions of Xcode earlier than 4.5.1 showed them with more significant digits. I had to look at some of these with greater precision and I couldn't figure out how to do it with the Xcode preferences, so I decided to print them to the console using printf(). As a test I printed M_PI, which is defined in math.h as a literal.

    long double pi = M_PI;
    printf("%.48Lf", pi); //zeros after 48 places

    printf logs a different value than the M_PI literal.
    In math.h:
    3.14159265358979323846264338327950288 and with printf:
    3.141592653589793115997963468544185161590576171875

    Only the first 16 significant digits are the same. Yes I know that this is plenty of precision and that I shouldn't worry about that. Why would this be?
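
    (A quick sketch to check this, not from the original post: printing M_PI with the plain double specifier, no long double involved, reproduces the same 48 digits. That suggests the "extra" digits are simply the exact decimal expansion of the binary double nearest to pi, not something printf invents.)

    ```c
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* M_PI is a plain double; printing it with %f (the double
           specifier) yields the exact decimal expansion of the
           nearest double to pi, matching the long double printout. */
        printf("%.48f\n", M_PI);
        return 0;
    }
    ```
    
    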
     
  2. macrumors newbie

    Joined:
    Nov 25, 2006
    #2
  3. thread starter macrumors regular

    Joined:
    Nov 9, 2011
    #3
    That could be correct if my variable wasn't a 128 bit (16 byte) long double. If it were a double, then printf wouldn't have displayed that long a mantissa. In the header math.h there is a comment that M_PI would be more useful as a long double, but it's a double literal because POSIX requires it.
     
  4. macrumors member

    Joined:
    Jun 1, 2005
    Location:
    London, UK
    #4
    Your variable is a long double, but the constant itself is a double literal (i.e. it doesn't have the L suffix). The comment in the header file is merely explaining why they're double constants. You're doing an implicit cast to long double through the variable assignment, but not actually gaining any precision. If you use the constant M_PI directly in the printf statement, you'll get a warning indicating a mismatch between the format specifier type (long double) and the argument type (double).

    Code:
    long double pi = M_PI;
    long double more_pi = 3.14159265358979323846264338327950288L;
    printf("%.48Lf\n", pi);
    printf("%.48Lf\n", (double) more_pi);  // Same, with warning
    printf("%.48Lf\n", more_pi);  // Different
     
  5. thread starter macrumors regular

    Joined:
    Nov 9, 2011
    #5
    Why did you cast more_pi to a double? Is there a printf specifier for a long double? Also doesn't the L at the end of the value of pi cast it to a long integer, not a 128 bit long double?
     
  6. macrumors G5

    gnasher729

    Joined:
    Nov 25, 2005
    #6
    Just wondering... Are you using a PowerPC? If you are, then you have a 128-bit long double (implemented using two doubles). If you are using any x86-based computer, then your long doubles are actually 80 bit.

    The cast was there to demonstrate that you get a warning, because the Lf _is_ the printf format specifier for long double, but the value passed is just double.
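
    (A sketch to illustrate the 80-bit point, my addition: the mantissa widths from float.h show that on x86 the 16 bytes reported by sizeof are mostly alignment padding around an 80-bit extended value.)

    ```c
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* On x86, long double is the 80-bit extended format with a
           64-bit mantissa, even though sizeof(long double) reports
           16 bytes of storage because of alignment padding. */
        printf("double mantissa bits:      %d\n", DBL_MANT_DIG);   /* 53 on IEEE systems */
        printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);  /* 64 on x86 */
        return 0;
    }
    ```
    
    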
     
  7. thread starter macrumors regular

    Joined:
    Nov 9, 2011
    #7
    x86, and:
    sizeof(float) = 4
    sizeof(double) = 8
    sizeof(long double) = 16 (128 bits)

    Is the specifier Lf for both doubles and long doubles?
     
  8. macrumors member

    Joined:
    Jun 1, 2005
    Location:
    London, UK
    #8
    To show that the output of printf was the same when you take the value of the M_PI literal, store it in a long double, but cast it back to a double (and still format it as a long double, which is a bit naughty, and you get warned accordingly). I.e. to show that what you thought was a long double actually had the precision of a plain double (because the referenced constant was a double). Perhaps I confused things by using the (intentionally) wrong format string for (double) more_pi.

    That would be %Lf. Double is %f.

    If there's a decimal point in the literal, you get a long double. Otherwise, you get a long integer.
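
    (A small sketch of that rule, my addition: sizeof distinguishes the types the L suffix produces.)

    ```c
    #include <stdio.h>

    int main(void) {
        /* The L suffix means "long" of whatever kind of literal it is:
           with a decimal point it makes a long double, without one a
           long int. */
        printf("sizeof(3.14L) = %zu (long double)\n", sizeof(3.14L));
        printf("sizeof(314L)  = %zu (long int)\n",    sizeof(314L));
        printf("sizeof(3.14)  = %zu (double)\n",      sizeof(3.14));
        return 0;
    }
    ```
    
    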

    sizeof() tells you how much space is taken by the variable, but nothing about how much of that space is used to increase precision. If you want the actual precision:

    Code:
    #include <float.h>
    printf("double: %i, long double: %i\n", DBL_DIG, LDBL_DIG);
    
    // output on x86: double: 15, long double: 18
    Ultimately, how many bits are used in memory is irrelevant. What you're concerned with is precision, and on x86 you get 18 significant figures for a long double.
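
    (One more sketch, my addition: LDBL_DIG is the number of decimal digits guaranteed to survive a text-to-long-double-to-text round trip, so on x86 an 18-digit value comes back intact.)

    ```c
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* LDBL_DIG decimal digits are guaranteed to round-trip through
           a long double; on x86 that is 18. */
        long double x = 0.123456789012345678L;  /* 18 significant digits */
        printf("%.*Lg\n", LDBL_DIG, x);
        return 0;
    }
    ```
    
    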
     
