Float calculations just aren't quite right...

Discussion in 'Mac Programming' started by jakee.stoltz, Nov 2, 2010.

  1. macrumors newbie

    Joined:
    Sep 17, 2010
    #1
    I already realize the inherent dangers of working with float values, but I've never really run into this problem until now.

    Here's the code for this particular problem.

    Code:
    -(IBAction) calculatePresAlt: (id) sender
    {
    	float pressureAltAmount;
    	calculator = [[TakeoffLandingCalc alloc] init];
    	[calculator setAltimeterSetting: [altimeterField floatValue]];
    	[calculator setFieldElevation:   [elevationField floatValue]];
    	
    	pressureAltAmount = [calculator calcPressureAltitude];
    	
    	[pressureField setFloatValue: pressureAltAmount];
    }
    
    // Here is the calcPressureAltitude method
    -(float) calcPressureAltitude
    {
    	pressureAltitude = (29.92 - altimeterSetting) * 1000.0 + fieldElevation;
    	return pressureAltitude;
    }
    
    Basically I get two float values from the interface and run them through the equation above. I have the same program as a command-line program coded in C, and there my calculations come out just fine, but whenever I do it in my Cocoa program, the pressureAltitude result comes out something like .000000001 off.

    Is there a way to fix this? I've fixed it indirectly by adding a number formatter to the text field it displays in and just rounding up, but I'd like to know if I can fix the number itself.
     
  2. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #2
    Use something like BigDecimal (a Java class) that allows you to control the precision of the calculations? Whilst I don't know of one, I would expect something similar to exist for Objective-C (or even plain C).
     
  3. macrumors 6502a

    GorillaPaws

    Joined:
    Oct 26, 2003
    Location:
    Richmond, VA
    #3
    I've been told that you should always use doubles instead of floats unless there is a performance critical reason for using the type with fewer bits.
     
  4. macrumors 603

    Joined:
    Aug 9, 2009
    #4
    Show the C code for the command-line program. If it uses double instead of float, then you're not doing the same thing in your Obj-C code, so it's almost guaranteed there will be differences, starting at around the 6th or 7th significant decimal digit.

    Show some actual input and output values, for both the command-line program and the Obj-C program.

    Saying it's "like .000000001 off" is meaningless without a context. If the numbers are on the order of 1e-6 or smaller magnitude, then that's a significant difference. If the numbers are on the order of 1e-1 or greater magnitude, then .000000001 is beyond the significant digits of a float, so you shouldn't be surprised.

    Is there a specific reason you're using float instead of double? I don't see why you can't use double everywhere you're using float now, and simply avoid the precision problems inherent to float.

    I think you're saying you realize the dangers of using float, but you don't really understand everything that using float actually means.
     
  5. thread starter macrumors newbie

    Joined:
    Sep 17, 2010
    #5
    Alright you guys nailed it. I basically just changed the word float to double in my program and it works fine now.

    I think where I got confused converting C code to Objective-C is the %f specifier. That's what I was using in my C program, and I forgot that %f is used for both floats and doubles. So when I converted to Objective-C, I assumed %f meant float and used float values instead of doubles, when technically I had been using doubles the whole time in my C program. A little bit of confusion in the conversion, I guess.

    I really have no reason for using float other than that's how I was taught. I took a class on C programming, and the instructor said to use float because most of the time 6 digits was accurate enough.
     
  6. macrumors 68040

    lee1210

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #6
    He or she should be brutally beaten about the head and neck with an object of similar heft and malleability of a large summer sausage.

    If this class were being taught in the early 1980s, there might have been an argument that float is sometimes alright due to the savings in computation and memory, but otherwise the summer sausage it is.

    -Lee
     
  7. Moderator emeritus

    robbieduncan

    Joined:
    Jul 24, 2002
    Location:
    London
    #7
    Will this damage the sausage? Or can we grill and eat it afterwards? Because I'm quite partial to sausage with a bit of mustard.
     
  8. macrumors 68040

    lee1210

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #8
    Generally with a sausage of the size I am thinking the casing will be rather thick so it should prevent damage to the meat product when used as a bludgeoning tool.

    -Lee
     
  9. macrumors 6502

    Joined:
    Apr 24, 2008
    #9
    Also, %f will default to six digits. I'm not sure what the default is for text fields in Cocoa, but perhaps the default rounding in %f hid the small error in your "command line" version.
     
  10. macrumors 6502a

    Joined:
    Jun 27, 2010
    #10
    Float vs Double

    I just looked up Floats vs Doubles. Seems to be an interesting subject with no clear answer.

    I am going to do some performance benchmarks to see which is faster.
     
  11. macrumors 68030

    jared_kipe

    Joined:
    Dec 8, 2003
    Location:
    Seattle
    #11
    Benchmarking will depend heavily on the architecture.

    To the OP: I do almost everything in doubles (calculation-wise), and one thing I've learned is that when you want to display them, put them in NSNumbers and use NSNumberFormatter to generate the string output for you. This takes away a lot of the headaches, such as a value displaying as 4.9999999999 instead of 4.0, or rather just 4.

    EDIT: Also remember that if you're hard-coding constants in a method like that, you should probably write them with an f suffix, like:
    pressureAltitude = (29.92f - altimeterSetting) * 1000.0f + fieldElevation;
     
  12. macrumors 6502a

    Joined:
    Jun 27, 2010
    #12
    Did some benchies on a windows PC, using MSVC++ 2010 compiler.
    Performance wise, floats and double are about the same.

    Someone with access to Macs want to volunteer to do benchies? I have one, but it's at home.

    GPU computation, I am not sure, but I am guessing floats are faster.

    Unless you have huge datasets, are using GPU algorithms, or are developing for embedded, you should stick to double for everything.
     
  13. macrumors 68020

    Krevnik

    Joined:
    Sep 8, 2003
    #13
    I'm not even sure you can do double-precision on a GPU right now. IIRC, most GPUs use 32-bit registers for floats so a buffer can be swapped between a 32-bit int and float easily. If you have to do it in software, yes, it will definitely be slower.

    On your standard Intel chip, the FPU registers are 80 bits, and are converted to single or double precision when stored to RAM. So again, you are right that using double on the desktop is pretty much the way to go.
     
  14. gnasher729, Nov 3, 2010
    Last edited: Nov 3, 2010

    macrumors G5

    gnasher729

    Joined:
    Nov 25, 2005
    #14
    The inherent dangers are reduced a lot if you use double instead of float. Unless you have a very, very good reason to use float, use double or better yet long double.


    QFT. Note "my instructor told me to use float" does not fall under the category "very, very good reason to use float", but under the category "very, very good reason to pick another instructor".

    To the original poster: Feel free to show this to your instructor. If he wants to discuss it, I am happy to do so.


    You'll only have a difference if you have hand-written SSE vector code (because one vector op can do four floats but only two doubles), or when you use massive amounts of data that don't fit into caches. And then only if you have brutally optimised code where the amount of time spent doing floating point is actually significant. On the other hand, there are things like solving differential equations where the lack of precision directly causes you to perform more operations, so float would actually end up a lot slower. Most people will never in their life write code where there is a measurable speed difference.
     
  15. macrumors 68000

    Sydde

    Joined:
    Aug 17, 2009
    #15
    There really is no reason to imagine a performance advantage with float - quite the opposite, in fact. If you implement FP in the hardware, you almost invariably have to build for the largest supported precision and downsample as needed, so if there is going to be a performance penalty, it would be with the smaller format, though typically I think it comes at no cost.
     
  16. thread starter macrumors newbie

    Joined:
    Sep 17, 2010
    #16
    I took his class almost a year ago. It was actually part of the mechanical engineering curriculum, and the professor himself is more an electrical engineer than a programmer.
     
  17. macrumors 68040

    lee1210

    Joined:
    Jan 10, 2005
    Location:
    Dallas, TX
    #17
    Even worse! I hope he doesn't build anything if he's not that concerned with precision.

    -Lee
     
  18. macrumors 68020

    Krevnik

    Joined:
    Sep 8, 2003
    #18
    These days, you have to downsample to either single or double precision. x86/x64 use extended precision in hardware. That's why the measured performance is the same: you are downsampling on every store, not just for one format or the other. So in reality, you get a bit more precision over a series of operations than double precision normally gives you, as long as you can keep the values in registers throughout.
     
  19. holmesf, Nov 8, 2010
    Last edited: Nov 8, 2010

    macrumors 6502a

    Joined:
    Sep 30, 2001
    #19
    You can do double precision arithmetic on current generation GPUs, and some last gen GPUs as well.

    On the Nvidia side of things support started with the Geforce 200 series GPUs, where it was 8x slower than single precision arithmetic. This was because each stream multiprocessor included 8 single precision floating point units, but only 1 double precision unit. In the Geforce 400 series it appears that each floating point unit can actually perform double precision calculations, but they take 2 clock cycles instead of 1, which makes them 2x slower (instead of 8x slower as with the last gen).

    On the ATI side of things it looks like they began double precision support with the 5000 series of GPUs. From what I can find, double precision arithmetic is about 5x slower than single precision there.

    With GPU programming it's a bad idea to use double precision not just because the actual operations are slower, but because you typically have thousands of resident threads at once, which results in registers being a precious commodity. The register files on GPUs end up being as huge as the caches!
     
  20. macrumors 65816

    Joined:
    Sep 19, 2009
    #20

    And there is your answer. ;) My C instructor is real hard-nosed about only a few things; among them: using floats instead of doubles, and using goto statements. :D Doing either one == fail the assignment.

    Unless there is some pressing need as others have stated, use doubles. :)
     
