Using float to calculate sin, cos and tan

Discussion in 'Mac Programming' started by abcdefg12345, Aug 26, 2013.

  1. abcdefg12345, Aug 26, 2013
    Last edited by a moderator: Aug 26, 2013

    macrumors regular

    I'm trying to calculate sin, cos, and tan, but I'm getting the wrong results.

    What am I doing wrong?

    - (IBAction)sin:(id)sender {
        float result = sin([conv_display floatValue]);
        [conv_display setFloatValue:result];
    }

    - (IBAction)cos:(id)sender {
        float result = cos([conv_display floatValue]);
        [conv_display setFloatValue:result];
    }

    - (IBAction)tan:(id)sender {
        float result = tan([conv_display floatValue]);
        [conv_display setFloatValue:result];
    }
    Also, does anyone know how to do inverse sin, cos, and tan?
  2. macrumors G5


    1. Could you explain why you are using float and not double, restricting yourself to about 7 digits of precision for no gain whatsoever?

    2. What results do you get? I'd first want to see what results you get before giving any other advice.
  3. Senor Cuete, Aug 26, 2013
    Last edited: Aug 26, 2013

    macrumors regular

    1. Use doubles instead of floats.

    2. What is conv_display and what is [conv_display floatValue] returning? Is it an angle?

    3. Angles used by computers are in radians. If [conv_display floatValue] is an angle in degrees, you need to convert it from degrees to radians before passing it to the trigonometric functions.

    4. Why do you take the cosine of [conv_display floatValue] and then set [conv_display floatValue] to its cosine?

    Here is some useful code:

    #include <math.h> //M_PI defined in this header

    double degreesToRadians(double x)
    {
        return((x / 360.0) * (2.0 * M_PI));
    }/*double degreesToRadians(double x)*/

    double radiansToDegrees(double x)
    {
        return((x / (2.0 * M_PI)) * 360.0);
    }/*double radiansToDegrees(double x)*/
  4. macrumors 601


    double asin(double x) calculates inverse sine and returns a value in the range -pi/2 to pi/2
    double acos(double x) calculates inverse cosine and returns a value in the range 0 to pi
    double atan(double x) calculates inverse tangent and returns a value strictly between -pi/2 and pi/2. There is also double atan2(double y, double x), which calculates the inverse tangent of y/x and works in all four quadrants.
  5. macrumors regular

    In math.h M_PI is a literal: 3.14159265358979323846264338327950288
  6. macrumors G5


    It's more likely "trigonometric functions in maths express angles in radians". Otherwise even simple formulas like sin' = cos, cos' = -sin, would become very, very complicated.
  7. macrumors newbie

    I have used sin and cos many times and I never mix math with Objective-C. I feel like it is easier to use the C scalar types like float, double, and int and then convert the answers I need into Objective-C objects later.
  8. macrumors 6502a

    Since nobody has mentioned it, if you are doing trig with float, use the float versions of the trig functions: sinf, cosf, tanf, etc.
  9. macrumors G5


    More importantly, if you use a few million of these operations, for example within 3d graphics, there is usually a way to avoid them altogether. But unless you have a very good reason, you should avoid float altogether and use double instead.
  10. macrumors 601


    Yes. C (and by inference, Objective-C) performs all calculations in double (or larger), never in float, so these float functions can actually end up being less efficient to use once you figure in the conversion times. The only reason to use float is to save memory storing large arrays of values, but with the large RAM sizes these days I wouldn't even bother unless I had a hundred million values to store.
  11. Qaanol, Sep 5, 2013
    Last edited: Sep 5, 2013

    macrumors 6502a

    If you are performing the same sequence of operations on a large array of values, and you do not need the precision of doubles and prefer a smaller RAM footprint, there are float-optimized vector functions (as well as double-optimized ones) available. For example:

    void vvsinf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);
    void vvcosf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);
    void vvtanf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);

    And of course, for extra speed, you can compute sin and cos in one pass with

    void vvsincosf(float *outputSinArray, float *outputCosArray, const float *inputArray, const int *pointerToArrayLength);

    There are inverse trig functions, exp and log functions, and a bunch of others, as well as all of the above in double precision if that floats your boat. For more basic operations such as arithmetic on arrays there is vDSP, which even lets you do things like X[j] = (A[j] + B[j]) * (C[j] - D[j]) all in one pass. That is also the library with the FFT functions.

    From my experience, if you are performing multiple operations on large arrays, it is fastest to process the arrays in chunks that fit in the processor's cache. Operating on 1024 floats at a time with vectorized functions has worked well for me.
  12. macrumors 603

    Interesting... LLVM in Xcode 4.6.3 seems to spit out only a single ARM instruction when multiplying two floats into a float result variable, e.g.

    float y = ...
    float z = ...
    float x = y * z; 
    Using doubles is kinda wasteful, and potentially slower on current iOS devices, for most real data (audio, pixels, measurements, etc.), which is rarely accurate to more than a few decimal places. With such data, the illusion of extra precision from doubles is more likely to cause problems/bugs than the lesser numerical accuracy of floats.

    When do you ever know or can even measure an angle to more than 6 decimal places accuracy?
  13. macrumors 601


    Well, to quote my copy of K&R, "Notice that all float's in an expression are converted to double; all floating point arithmetic in C is done in double precision." That said, I do a lot of embedded programming and the compilers inevitably have an option for arithmetic as floats for performance and size reasons.

    I just did some checking and by default (at least) LLVM generates OS X code like you are seeing. However when I tried gcc on a Linux system and Visual C++ under Windows, it worked like I described.
  14. macrumors G5


    K&R (Kernighan and Ritchie for the younger readers) is old now, and things change. It is "implementation defined" whether all floating-point arithmetic is done in long double, or at least in double, or only in float if both operands are float, or some other way. In other words, it's up to the compiler. The compiler should define the macro FLT_EVAL_METHOD according to the method.
  15. macrumors G5


    Unless you do a pretty good analysis of the maths that you are using, you never know how errors add up. Maybe you don't need results with more than 6 decimals of accuracy. Doesn't mean doing your calculations with only 6 digits is right. Using double precision numbers doesn't give you "the illusion of extra precision". It gives you extra precision which makes sure that the results will be a lot closer to the mathematically correct result.

    BTW. Apple's libraries represent time as seconds since some base date using floating point. Using "float" would give you a resolution of 16 or 32 seconds.
  16. talmy, Sep 5, 2013
    Last edited: Sep 5, 2013

    macrumors 601


    So I just spent some time looking at the C99 standard and, IMHO, it's scary stuff. Indeed, C99 allows FLOAT + FLOAT to be done either as FLOAT or as something bigger. I must say that I mostly use GCC and have never had this come up as a portability problem; however, I did have a mysterious line of code, decades old, that stopped working when I first used Clang on the Mac. I finally got it working (compatibly with GCC, etc.) but never really understood why.
  17. macrumors G5


    You haven't seen scary yet.

    A compiler is allowed to do operations at a higher precision than necessary. And it is allowed to do _some_ operations at a higher precision than necessary, and not others. So if a and x are the same, and b and y are the same, you'd think that a+b and x+y are the same, right? Not if a+b is calculated in long double precision, and x+y in double precision only.

    It's not a problem today, because on current Intel processors double precision is faster than long double, so long double is only used when you tell the compiler, but a few years ago (before the release of the first Intel Mac) there were eight "long double" floating-point registers and nothing else, so that kind of thing would happen. In an extreme case, "if (a + b != a + b) printf ("Weird"); " would actually print "Weird".
  18. macrumors 603

    If you don't do an analysis of numerical stability, and your calculation is likely to go bad in single precision, it's very often also a microscopic distance away from going bad in double precision as well. Thus, using double is a false/fake security blanket and even a trap for those unsophisticated in numerical analysis.

    Even expecting two numbers to be equal is a delusion when using FP math. See above. That's normal with FP. And should be taught as such.
  19. macrumors 601


    I'm an Electrical Engineer, not a Mathematician. Back in the early days of electronic computers, they were designed by Electrical Engineers and both the circuits and even the floating point formats were not designed well from a mathematical standpoint. One could argue that you wanted double precision just to keep the calculation errors in the noise. And you certainly got different results if you used an IBM computer (and even different models of IBM computers) versus a Control Data computer, both big in the sciences. However IEEE Floating Point, contrary to the name of the sponsoring organization, was designed by Mathematicians and single precision can be safely used.
  20. macrumors 68040


    Ugh. Working with single precision is a nightmare. Even doing the calculations in double precision then storing the result in single precision sucks. Then it's time to compare! Hooray, let's get some machine delta going on, etc. FP is generally awful, and we make the same mistakes with FP over and over. If you're doing math whose result is important, there are numerical libraries in many languages that can shield you from the garbage. I feel like the burden is on the programmer to prove the tiny precision float (and double, in many cases) provides is guaranteed to work for the use case.

  21. gnasher729, Sep 6, 2013
    Last edited: Sep 6, 2013

    macrumors G5


    If you get killed in a car accident not wearing a seat belt, you would be close to getting killed without a seat belt. So don't wear seat belts.

    If something heavy falls on your head, it might kill you even wearing a helmet. So safety helmets shouldn't be worn because they give you a false sense of security.

    You can get lost in the woods with a map. So throw away your map before you enter any forest; it only gives you a false sense of security.

    The main proponents of the IEEE 754 standard (Apple and Intel; Apple created the first software implementation, SANE (Standard Apple Numeric Environment), which was available even for the Apple II computer, and Intel created the first hardware implementation with the 8087 co-processor) insisted on adding "extended precision", which gives 3.3 more decimal digits of precision and a much larger range than double. Why do you think they did that? Just for fun? No, because extended precision gives you a better chance of getting correct results.

    Sure, nothing will go mysteriously wrong if you use float. Things will go wrong in a completely well-defined way, IEEE 754 makes sure of that.
  22. macrumors 603

    The difference between a seatbelt with its width specified in float and one specified in double is less than a hair. If you drive badly enough to get killed wearing one, you will almost certainly also be dead wearing the other. They will both go wrong in that precisely defined manner.
  23. macrumors regular

    "math" is a collective noun so there is no such thing as "maths".

    All the time:

    A typical theodolite measures angles to one arc second, or 1/60th of 1/60th of a degree, or 0.00027777... degrees. Precise survey techniques like turning angles left and right, or winding up an instrument in high-order surveys, yield sub-arc-second accuracy.

    Astronomical algorithms give constants to long precision because it's necessary to get correct results. The position of any planet is calculated using theories like the VSOP 87 theory, which use many hundreds of terms. The smallest corrections change the angle by far less than six decimal places but are needed to calculate the result correctly.

    High accuracy astronomical calculations are expected to calculate the positions of objects to less than one arc second and to do this you have to calculate the intermediate results to much greater accuracy to avoid rounding errors.

    I recommend Astronomical Algorithms by Jean Meeus.
  24. macrumors 6502a

    I was going to make a crack about how this quote indicates you must not have studied maths, but then the rest of your post pretty well indicates that you probably have. So I'll just say that yes, maths are indeed referred to as maths by many people who study maths.

    Wikipedia (last sentence of the section)
  25. macrumors 601


    Hey, floating point is just a crutch for lazy programmers anyway. :)


    I think it depends on which side of the Atlantic you are on.
