
View Full Version : using float to calculate sin cos and tan

abcdefg12345
Aug 26, 2013, 04:32 AM
I'm trying to calculate sin, cosine and tan, but I'm getting the wrong results.

What am I doing wrong?

- (IBAction)sin:(id)sender
{
float result = sin([conv_display floatValue]);
[conv_display setFloatValue:result];
}

- (IBAction)cos:(id)sender
{
float result = cos([conv_display floatValue]);
[conv_display setFloatValue:result];
}

- (IBAction)tan:(id)sender
{
float result = tan([conv_display floatValue]);
[conv_display setFloatValue:result];
}

And also, does anyone know how to do inverse sin, cos and tan?

gnasher729
Aug 26, 2013, 08:29 AM
I'm trying to calculate sin, cosine and tan, but I'm getting the wrong results.

What am I doing wrong?

- (IBAction)sin:(id)sender
{
float result = sin([conv_display floatValue]);
[conv_display setFloatValue:result];
}

- (IBAction)cos:(id)sender
{
float result = cos([conv_display floatValue]);
[conv_display setFloatValue:result];
}

- (IBAction)tan:(id)sender
{
float result = tan([conv_display floatValue]);
[conv_display setFloatValue:result];
}

And also, does anyone know how to do inverse sin, cos and tan?

1. Could you explain why you are using float and not double, restricting yourself to about seven digits of precision for no gain whatsoever?

2. What results do you get? I'd first want to see what results you get before giving any other advice.

Senor Cuete
Aug 26, 2013, 08:45 AM
1. Use doubles instead of floats.

2. What is conv_display and what is [conv_display floatValue] returning? Is it an angle?

3. Angles used by computers are in radians. If [conv_display floatValue] is an angle in degrees you need to convert it from degrees to radians to pass them to trigonometric functions.

4. Why do you take the cosine of [conv_display floatValue] and then set [conv_display floatValue] to its cosine?

Here is some useful code:

#include <math.h> /* M_PI is defined in this header */

double degreesToRadians(double x)
{
    return (x / 360.0) * (2.0 * M_PI);
}

double radiansToDegrees(double x)
{
    return (x / (2.0 * M_PI)) * 360.0;
}

talmy
Aug 26, 2013, 08:57 AM
and also does anyone know how to do inverse sin cos and tan

double asin(double x) calculates inverse sine and returns a value in the range -pi/2 to pi/2
double acos(double x) calculates inverse cosine and returns a value in the range 0 to pi
double atan(double x) calculates inverse tangent and returns a value in the range >-pi/2 to <pi/2. There is also double atan2(double y, double x) which calculates inverse tangent of y/x, which works for all four quadrants.

Senor Cuete
Aug 26, 2013, 09:07 AM
double asin(double x) calculates inverse sine and returns a value in the range -pi/2 to pi/2
double acos(double x) calculates inverse cosine and returns a value in the range 0 to pi
double atan(double x) calculates inverse tangent and returns a value in the range >-pi/2 to <pi/2. There is also double atan2(double y, double x) which calculates inverse tangent of y/x, which works for all four quadrants.

In math.h, M_PI is defined as the literal 3.14159265358979323846264338327950288.

gnasher729
Aug 26, 2013, 04:43 PM
3. Angles used by computers are in radians. If [conv_display floatValue] is an angle in degrees you need to convert it from degrees to radians to pass them to trigonometric functions.

It's more like "trigonometric functions in maths express angles in radians". Otherwise even simple formulas like sin' = cos and cos' = -sin would become very, very complicated.

blackenedheart
Sep 4, 2013, 07:47 AM
I have used sin and cos many times and I never mix math with Objective-C. I feel like it is easier to use the C scalar types like float, double, and int and then convert the answers I need into Objective-C objects later.

Qaanol
Sep 4, 2013, 05:17 PM
Since nobody has mentioned it, if you are doing trig with float, use the float versions of the trig functions: sinf, cosf, tanf, etc.

gnasher729
Sep 5, 2013, 05:55 AM
Since nobody has mentioned it, if you are doing trig with float, use the float versions of the trig functions: sinf, cosf, tanf, etc.

More importantly, if you use a few million of these operations, for example within 3d graphics, there is usually a way to avoid them altogether. But unless you have a very good reason, you should avoid float altogether and use double instead.

talmy
Sep 5, 2013, 09:41 AM
More importantly, if you use a few million of these operations, for example within 3d graphics, there is usually a way to avoid them altogether. But unless you have a very good reason, you should avoid float altogether and use double instead.

Yes. C (and by inference, Objective-C) performs all calculations in double (or larger), never in float, so these float functions can actually end up being less efficient to use once you figure in the conversion times. The only reason to use float is to save memory storing large arrays of values, but with the large RAM sizes these days I wouldn't even bother unless I had a hundred million values to store.

Qaanol
Sep 5, 2013, 10:15 AM
More importantly, if you use a few million of these operations, for example within 3d graphics, there is usually a way to avoid them altogether. But unless you have a very good reason, you should avoid float altogether and use double instead.

Yes. C (and by inference, Objective-C) performs all calculations in double (or larger), never in float, so these float functions can actually end up being less efficient to use once you figure in the conversion times. The only reason to use float is to save memory storing large arrays of values, but with the large RAM sizes these days I wouldn't even bother unless I had a hundred million values to store.

If you are performing the same sequence of operations on a large array of values, and you do not need the precision of doubles and prefer a smaller RAM footprint, there are float-optimized vector functions (https://developer.apple.com/library/mac/documentation/Performance/Conceptual/vecLib/Reference/reference.html) (as well as double-optimized ones) available. For example:

void vvsinf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);
void vvcosf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);
void vvtanf(float *outputArray, const float *inputArray, const int *pointerToArrayLength);

And of course, for extra speed, you can compute sin and cos in one pass with

void vvsincosf(float *outputSinArray, float *outputCosArray, const float *inputArray, const int *pointerToArrayLength);

There are inverse trig functions, exp and log functions, and a bunch of others, as well as all of the above in double precision if that floats your boat. For more basic operations such as arithmetic on arrays there is vDSP (https://developer.apple.com/library/mac/documentation/Accelerate/Reference/vDSPRef/Reference/reference.html), which even lets you do things like Xj = (Aj + Bj) * (Cj - Dj) all in one pass. That is also the library with FFT functions.

From my experience, if you are performing multiple operations on large arrays, it is fastest to process the arrays in chunks that fit in the processor's cache. Operating on 1024 floats at a time with vectorized functions has worked well for me.

firewood
Sep 5, 2013, 12:25 PM
Yes. C (and by inference, Objective-C) performs all calculations in double (or larger), never in float, ...

Interesting... LLVM in Xcode 4.6.3 seems to spit out only a single:

vmul.f32

ARM instruction when multiplying two floats into a float result variable, e.g.

float y = ...
float z = ...
float x = y * z;

Using doubles is kinda wasteful, and potentially slower on current iOS devices, for most real data (audio, pixels, measurements, etc.), which is rarely accurate to more than a few decimal places. With such data, the illusion of extra precision from doubles is more likely to cause problems/bugs than the lower numerical accuracy of floats is.

When do you ever know or can even measure an angle to more than 6 decimal places accuracy?

talmy
Sep 5, 2013, 12:59 PM
Interesting... LLVM in Xcode 4.6.3 seems to spit out only a single:

vmul.f32

ARM instruction when multiplying two floats into a float result variable

Well, to quote my copy of K&R, "Notice that all float's in an expression are converted to double; all floating point arithmetic in C is done in double precision." That said, I do a lot of embedded programming and the compilers inevitably have an option for arithmetic as floats for performance and size reasons.

I just did some checking and by default (at least) LLVM generates OS X code like you are seeing. However when I tried gcc on a Linux system and Visual C++ under Windows, it worked like I described.

gnasher729
Sep 5, 2013, 02:21 PM
Well, to quote my copy of K&R, "Notice that all float's in an expression are converted to double; all floating point arithmetic in C is done in double precision." That said, I do a lot of embedded programming and the compilers inevitably have an option for arithmetic as floats for performance and size reasons.

I just did some checking and by default (at least) LLVM generates OS X code like you are seeing. However when I tried gcc on a Linux system and Visual C++ under Windows, it worked like I described.

K&R (Kernighan and Ritchie for the younger readers) is old now, and things change. It is "implementation defined" whether all floating-point arithmetic is done in long double, or at least in double, or only in float if both operands are float, or some other way. In other words, it's up to the compiler. The compiler should define the macro FLT_EVAL_METHOD according to the method.

gnasher729
Sep 5, 2013, 02:35 PM
Using doubles is kinda wasteful, and potentially slower on current iOS devices, for most real data (audio, pixels, measurements, etc.), which is rarely accurate to more than a few decimal places. With such data, the illusion of extra precision from doubles is more likely to cause problems/bugs than the lower numerical accuracy of floats is.

When do you ever know or can even measure an angle to more than 6 decimal places accuracy?

Unless you do a pretty good analysis of the maths that you are using, you never know how errors add up. Maybe you don't need results with more than 6 decimals of accuracy. Doesn't mean doing your calculations with only 6 digits is right. Using double precision numbers doesn't give you "the illusion of extra precision". It gives you extra precision which makes sure that the results will be a lot closer to the mathematically correct result.

BTW. Apple's libraries represent time as seconds since some base date using floating point. Using "float" would give you a resolution of 16 or 32 seconds.

talmy
Sep 5, 2013, 03:47 PM
K&R (Kernighan and Ritchie for the younger readers) is old now, and things change. It is "implementation defined" whether all floating-point arithmetic is done in long double, or at least in double, or only in float if both operands are float, or some other way. In other words, it's up to the compiler. The compiler should define the macro FLT_EVAL_METHOD according to the method.

So I just spent some time looking at the C99 standard and, IMHO, it's scary stuff. Indeed, C99 implies that FLOAT + FLOAT can be done as either FLOAT or something bigger. I must say that I mostly use GCC and have never seen this come up as a portability problem; however, I did have a mysterious line of code, decades old, that decided not to work when I first used CLANG on the Mac. I finally got it working (and compatibly with GCC, etc.) but never really understood why.

gnasher729
Sep 5, 2013, 04:41 PM
So I just spent some time looking at the C99 standard and, IMHO, it's scary stuff. Indeed, C99 implies that FLOAT + FLOAT can be done as either FLOAT or something bigger. I must say that I mostly use GCC and have never seen this come up as a portability problem; however, I did have a mysterious line of code, decades old, that decided not to work when I first used CLANG on the Mac. I finally got it working (and compatibly with GCC, etc.) but never really understood why.

You haven't seen scary yet.

A compiler is allowed to do operations at a higher precision than necessary. And it is allowed to do _some_ operations at a higher precision than necessary, and not others. So if a and x are the same, and b and y are the same, you'd think that a+b and x+y are the same, right? Not if a+b is calculated in long double precision, and x+y in double precision only.

It's not a problem today, because on current Intel processors double precision is faster than long double, so long double is only used when you tell the compiler, but a few years ago (before the release of the first Intel Mac) there were eight "long double" floating-point registers and nothing else, so that kind of thing would happen. In an extreme case, "if (a + b != a + b) printf ("Weird"); " would actually print "Weird".

firewood
Sep 5, 2013, 10:43 PM
Unless you do a pretty good analysis of the maths that you are using, you never know how errors add up. Maybe you don't need results with more than 6 decimals of accuracy. Doesn't mean doing your calculations with only 6 digits is right. Using double precision numbers doesn't give you "the illusion of extra precision". It gives you extra precision which makes sure that the results will be a lot closer to the mathematically correct result.

If you don't do an analysis of numerical stability, and your calculation is likely to go bad in single precision, it's very often also a microscopic distance away from going bad in double precision as well. Thus, using double is a false/fake security blanket and even a trap for those unsophisticated in numerical analysis.

Even expecting two numbers to be equal is a delusion when using FP math. See above. That's normal with FP. And should be taught as such.

talmy
Sep 5, 2013, 11:02 PM
If you don't do an analysis of numerical stability, and your calculation is likely to go bad in single precision, it's very often also a microscopic distance away from going bad in double precision as well. Thus, using double is a false/fake security blanket and even a trap for those unsophisticated in numerical analysis.

I'm an Electrical Engineer, not a Mathematician. Back in the early days of electronic computers, they were designed by Electrical Engineers and both the circuits and even the floating point formats were not designed well from a mathematical standpoint. One could argue that you wanted double precision just to keep the calculation errors in the noise. And you certainly got different results if you used an IBM computer (and even different models of IBM computers) versus a Control Data computer, both big in the sciences. However IEEE Floating Point, contrary to the name of the sponsoring organization, was designed by Mathematicians and single precision can be safely used.

lee1210
Sep 6, 2013, 12:35 AM
Ugh. Working with single precision is a nightmare. Even doing the calculations in double precision then storing the result in single precision sucks. Then it's time to compare! Hooray, let's get some machine delta going on, etc. FP is generally awful, and we make the same mistakes with FP over and over. If you're doing math whose result is important, there are numerical libraries in many languages that can shield you from the garbage. I feel like the burden is on the programmer to prove the tiny precision float (and double, in many cases) provides is guaranteed to work for the use case.

-Lee

gnasher729
Sep 6, 2013, 02:44 AM
If you don't do an analysis of numerical stability, and your calculation is likely to go bad in single precision, it's very often also a microscopic distance away from going bad in double precision as well. Thus, using double is a false/fake security blanket and even a trap for those unsophisticated in numerical analysis.

Even expecting two numbers to be equal is a delusion when using FP math. See above. That's normal with FP. And should be taught as such.

If you get killed in a car accident not wearing a seat belt, you would be close to getting killed without a seat belt. So don't wear seat belts.

If something heavy falls on your head, it might kill you even wearing a helmet. So safety helmets shouldn't be worn because they give you a false sense of security.

You can get lost in the woods with a map. So throw away your map before you enter any forest; it only gives you a false sense of security.

However IEEE Floating Point, contrary to the name of the sponsoring organization, was designed by Mathematicians and single precision can be safely used.

The main proponents of the IEEE 754 standard (Apple and Intel; Apple created the first software implementation, SANE (Standard Apple Numeric Environment), which was available even on the Apple II; Intel created the first hardware implementation with the 8087 co-processor) insisted on adding "extended precision", which gives 3.3 decimal digits more precision and a much larger range than double. Why do you think they did that? Just for fun? No, because extended precision gives you a better chance of getting correct results.

Sure, nothing will go mysteriously wrong if you use float. Things will go wrong in a completely well-defined way, IEEE 754 makes sure of that.

firewood
Sep 6, 2013, 09:54 AM
If you get killed in a car accident not wearing a seat belt, you would be close to getting killed without a seat belt. So don't wear seat belts. ...

Sure, nothing will go mysteriously wrong if you use float. Things will go wrong in a completely well-defined way, IEEE 754 makes sure of that.

The difference between a seatbelt with its width specified in float and one specified in double is less than a hair. If you drive badly enough to get killed wearing one, you will also almost certainly be dead wearing the other. They will both go wrong in that precisely defined manner.

Senor Cuete
Sep 6, 2013, 09:59 AM
It's more likely "trigonometric functions in maths express angles...

"math" is a collective noun so there is no such thing as "maths".

When do you ever know or can even measure an angle to more than 6 decimal places accuracy?

All the time:

A typical theodolite measures angles to one arc second, or 1/60th of 1/60th of a degree, or 0.00027777777777777... degrees. Precise survey techniques like turning angles left and right or winding up an instrument in high-order surveys result in sub-second accuracy.

Astronomical algorithms give constants to long precision because it's necessary to get correct results. The position of any planet is calculated using theories like the VSOP 87 theory which use many hundreds of terms. see: http://www.phpsciencelabs.us/vsop87/. The smallest corrections change the angle by way less than six decimal places but are needed to calculate the problem correctly.

High accuracy astronomical calculations are expected to calculate the positions of objects to less than one arc second and to do this you have to calculate the intermediate results to much greater accuracy to avoid rounding errors.

I recommend Astronomical Algorithms by Jean Meeus:

http://www.willbell.com/math/mc1.htm

Qaanol
Sep 6, 2013, 10:54 AM
"math" is a collective noun so there is no such thing as "maths".

I was going to make a crack about how this quote indicates you must not have studied maths, but then the rest of your post pretty well indicates that you probably have. So I'll just say that yes, maths are indeed referred to as maths by many people who study maths.

Merriam-Webster (http://www.merriam-webster.com/dictionary/maths)
Dictionary.com (http://dictionary.reference.com/browse/maths)
Wikipedia (http://en.wikipedia.org/wiki/Mathematics#Etymology) (last sentence of the section)

talmy
Sep 6, 2013, 10:57 AM
Ugh. Working with single precision is a nightmare. Even doing the calculations in double precision then storing the result in single precision sucks. Then it's time to compare! Hooray, let's get some machine delta going on, etc. FP is generally awful, and we make the same mistakes with FP over and over. If you're doing math whose result is important, there are numerical libraries in many languages that can shield you from the garbage. I feel like the burden is on the programmer to prove the tiny precision float (and double, in many cases) provides is guaranteed to work for the use case.

-Lee

Hey, floating point is just a crutch for lazy programmers anyway. :)

----------

I was going to make a crack about how this quote indicates you must not have studied maths, but then the rest of your post pretty well indicates that you probably have. So I'll just say that yes, maths are indeed referred to as maths by many people who study maths.

I think it depends on which side of the Atlantic you are on.

gnasher729
Sep 6, 2013, 02:14 PM
"math" is a collective noun so there is no such thing as "maths".

There's also the subject of geography.

You see, there is this little island in the North Sea, called Great Britain. And the people living on that little island speak proper English aka British English. And when they do maths, they do maths. Not math.

Senor Cuete
Sep 6, 2013, 04:14 PM
There's also the subject of geography.

You see, there is this little island in the North Sea, called Great Britain. And the people living on that little island speak proper English aka British English. And when they do maths, they do maths. Not math.

According to the dictionary "math" is a mid 19th century abbreviation of the word "mathematics" a plural noun usually treated as singular. So yes, I studied math and also English, but not British English slang.

lloyddean
Sep 6, 2013, 10:20 PM
If the British speak English it is certainly not proper English.

Qaanol
Sep 7, 2013, 09:01 AM
Back on topic, I did a quick test of vvsincos and vvsincosf on my mid-2007 MBP, where I found the float version is about 3-4 times faster than the double version when processing the same number of values.

Does anyone with a more recent machine want to test the same thing?

mrichmon
Sep 7, 2013, 10:14 AM
Back on topic, I did a quick test of vvsincos and vvsincosf on my mid-2007 MBP, where I found the float version is about 3-4 times faster than the double version when processing the same number of values.

Does anyone with a more recent machine want to test the same thing?

If you want to post your code I can run on a 2012 and 2013 rMBP.

Qaanol
Sep 7, 2013, 03:09 PM
Here’s my test code, written as a command-line utility (be sure to compile for “release”, not “debug”, so it will run at full-speed in Terminal.) Be sure to link the Accelerate framework as well. There is one optional command-line argument to specify the size of the arrays. If omitted, the default is 10,000.

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <vecLib/vecLib.h>

int main(int argc, const char * argv[]) {
    int n = 10000;
    if (argc > 1) {
        n = atoi(argv[1]);
        if (n < 1) {
            printf("First argument must be positive\n");
            return 1;
        }
    }
    clock_t t[3];    /* clock() returns clock_t, not time_t */
    float *f1 = malloc(n * sizeof(float));
    float *f2 = malloc(n * sizeof(float));
    double *d1 = malloc(n * sizeof(double));
    double *d2 = malloc(n * sizeof(double));
    srandomdev();
    for (int ii = 0; ii < n; ii = ii + 1) {
        d1[ii] = (double)random();
        f1[ii] = (float)random();
    }
    t[0] = clock();
    vvsincosf(f1, f2, f1, &n);    /* in-place: sines overwrite f1, cosines go to f2 */
    t[1] = clock();
    vvsincos(d1, d2, d1, &n);
    t[2] = clock();
    double t1 = (double)(t[1] - t[0]) / CLOCKS_PER_SEC;
    double t2 = (double)(t[2] - t[1]) / CLOCKS_PER_SEC;
    printf("%f seconds for %i floats\n", t1, n);
    printf("%f seconds for %i doubles\n", t2, n);
    printf("float is %f times the speed of double\n", t2 / t1);
    return 0;
}

Here is a typical result on my machine (though to be fair I ran this while streaming video online and the computer was running hot, so who knows what that did to my performance.)

0.577504 seconds for 16777216 floats
2.182337 seconds for 16777216 doubles
float is 3.778912 times the speed of double

gnasher729
Sep 7, 2013, 05:31 PM
Here’s my test code, written as a command-line utility (be sure to compile for “release”, not “debug”, so it will run at full-speed in Terminal.) Be sure to link the Accelerate framework as well. There is one optional command-line argument to specify the size of the arrays. If omitted, the default is 10,000.

There's a problem with the code that makes the results practically meaningless.

You store the result of random () to be used as the argument of the sine and cosine functions. The values returned by random are in the range from about 0 to 2 billion. That's absolutely non-typical for the arguments of sine and cosine. Worse, when you store a value between 1 and 2 billion into a float, the resolution is 128. That means, the lowest bit has a value of 128, and the difference between two consecutive float numbers is 128. The period of sine and cosine is 2pi. 128 is more than 20 times that period, so for float the actual arguments are totally meaningless. vvsincosf might as well just return 0 and 1 for the sine and cosine for these large values. For double, that's not the case; the arguments even in that huge range still have a resolution of 1/8million radians.

You'd get much more meaningful results if you scaled the values, let's say into the interval [-2, +2].

----------

According to the dictionary "math" is a mid 19th century abbreviation of the word "mathematics" a plural noun usually treated as singular. So yes, I studied math and also English, but not British English slang.

"Maths" hasn't been slang for the last 100 years.

Qaanol
Sep 7, 2013, 06:48 PM
There's a problem with the code that makes the results practically meaningless.

You store the result of random () to be used as the argument of the sine and cosine functions. The values returned by random are in the range from about 0 to 2 billion. That's absolutely non-typical for the arguments of sine and cosine. Worse, when you store a value between 1 and 2 billion into a float, the resolution is 128. That means, the lowest bit has a value of 128, and the difference between two consecutive float numbers is 128. The period of sine and cosine is 2pi. 128 is more than 20 times that period, so for float the actual arguments are totally meaningless. vvsincosf might as well just return 0 and 1 for the sine and cosine for these large values. For double, that's not the case; the arguments even in that huge range still have a resolution of 1/8million radians.

You'd get much more meaningful results if you scaled the values, let's say into the interval [-2, +2].
Okay, here is a typical run of my original code without a lot of other programs running:
0.210407 seconds for 16777216 floats
0.773957 seconds for 16777216 doubles
float is 3.678380 times the speed of double

And here is after updating the code:
0.214049 seconds for 16777216 floats
0.782355 seconds for 16777216 doubles
float is 3.655028 times the speed of double

The updated code is identical except the assignments inside the for loop are now:
d1[ii] = 6.2832 * ((double)random() / RAND_MAX);
f1[ii] = (float)(6.2832 * ((double)random() / RAND_MAX));

The times are usually within a couple milliseconds over many runs, so I’d say the result of the test doesn’t change.

"Maths" hasn't been slang for the last 100 years.
This is false. If you go to a major university, even in the USA, and spend some time in the math department graduate lounge, you’ll hear plenty of people refer to their subjects of study as ‘maths’.

Mac_Max
Sep 10, 2013, 03:31 AM
This is false. If you go to a major university, even in the USA, and spend some time in the math department graduate lounge, you’ll hear plenty of people refer to their subjects of study as ‘maths’.

I think Gnasher's point was that it's not slang because it is a correct word...

Blimey, colour me queer, it's as if the United States are not speaking the same language. (I apologize to all the Brits and Aussies in the room.)

For what it's worth:

http://en.wikipedia.org/wiki/Comparison_of_American_and_British_English
http://www.dailywritingtips.com/math-or-maths/

You're all correct.

One thing that always bothers me when I hear British English is the treatment of collective noun - verb agreements. Jeremy Clarkson in particular likes to strongly inflect "have" for dramatic effect.

I.e.

(British) "Germany have won the competition."

v.s.

(US) "Germany has won the competition."

Damn Brits, can't they speak their own language? :) Back to Math/Maths... It kinda makes sense when you look at the difference in subject-verb agreement between British and US English.

(British) "Maths are difficult."

(US) "Math is difficult."

Both sound reasonable. Swapping our nation's respective agreements...

"Math are difficult."

"Maths is difficult."

Not particularly poetic.

gnasher729
Sep 10, 2013, 04:17 AM
This is false. If you go to a major university, even in the USA, and spend some time in the math department graduate lounge, you’ll hear plenty of people refer to their subjects of study as ‘maths’.

Missing the point. In the UK, "maths" isn't slang. It is the proper abbreviation for mathematics.

The times are usually within a couple milliseconds over many runs, so I’d say the result of the test doesn’t change.

The point was that you started with totally non-typical values, so you couldn't know whether your results applied to real-world use or not. It seems they do (which is disappointing, since I would have expected shortcuts to calculate sine / cosine faster for typical values), but it still needed verification.

Qaanol
Sep 11, 2013, 09:13 AM
I think Gnasher's point was that it's not slang because it is a correct word...

Missing the point. In the UK, "maths" isn't slang. It is the proper abbreviation for mathematics.

Yep, I missed that the first time.

The point was that you started with totally non-typical value, so you couldn't know whether your results applied to real-world use or not. Seems they do (which is disappointing, since I would have expected shortcuts to calculate sine / cosine faster for typical values), but it still needed verification.
I'm actually quite glad that the vectorized functions do not make special optimizations for atypical values. That would require testing every single entry to see if the atypical conditions were met, which the vast majority of the time would not be because they are atypical, so the effect would just be to slow down the processing of typical values.

gnasher729
Sep 11, 2013, 03:43 PM
Yep, I missed that the first time.

I'm actually quite glad that the vectorized functions do not make special optimizations for atypical values. That would require testing every single entry to see if the atypical conditions were met, which the vast majority of the time would not be because they are atypical, so the effect would just be to slow down the processing of typical values.

Excuse me, but that's not how it works. You don't make optimizations for atypical values, you make optimizations for typical values (which possibly isn't happening).

Every implementation of sine / cosine transforms the argument into the range from -pi/4 to +pi/4. In the general case, you multiply the argument by 2/pi, round to the nearest integer, multiply that integer by pi/2 _very_ carefully to avoid rounding errors, and subtract the product from the argument. That's expensive. In the typical case, say -1.25 pi < x < 1.25 pi, all you do is add or subtract pi or pi/2 or leave the argument unchanged. A lot faster. But if you want to do this you _must_ check for the atypical case or get nonsense results.

BTW. Lots of MacOS X / iOS graphics code uses the type CGFloat. On the 64-bit iPhone 5s, CGFloat is now double.