So that's why I don't get a decimal, even though it's supposed to be stored in a float?
Thanks. It works. And I got the right answer after some editing. I'm removing the code now to prevent cheating.
Is it an assignment you're supposed to do for school or equivalent, and you used this forum and thread to get it done? Wouldn't that constitute a far worse incident of cheating than anyone else with the same assignment simply stumbling upon this thread?
My guess was that the original code was something like this:
Code:float x = 3/4;
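If that guess is right, the symptom is easy to reproduce. Here is a minimal sketch of my own (not the original code): 3/4 is done as integer division and is already 0 before it ever reaches the float.
Code:
#include <stdio.h>

int main(void)
{
    float a = 3 / 4;        /* integer division: 3/4 is 0, then converted to 0.0f */
    float b = 3.0f / 4.0f;  /* floating-point division: both operands are float   */

    printf("3/4       stored in a float: %f\n", a);  /* prints 0.000000 */
    printf("3.0f/4.0f stored in a float: %f\n", b);  /* prints 0.750000 */
    return 0;
}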
It has nothing to do with school; it's in my free time. It was a problem on Project Euler. Besides, my problem had to do with not getting a decimal, not with how to solve the problem, as I already had an idea of how to solve it.
One really should specify floating point constants as -
Code:float x = 3f/4f;
although I prefer -
Code:float x = 3.0f/4.0f;
float x = (float)3/4;
// or
float y = 3.0/4.0;
Aah, the inner workings of C.
Now let's see.
3 is an integer constant.
3.0 is a floating-point constant of type double (an unsuffixed floating constant is always double).
3.0f is a floating-point constant of type float.
3f is, as far as I know, not legal: the f suffix may only be attached to a floating-point constant (one with a decimal point or an exponent), so 3 can not be turned into a float that way. Some compilers might actually let it pass as an extension.
(float)3 is an integer constant cast to float.
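Just to illustrate those types, a small sketch of my own (it needs C11, since _Generic is what lets you ask the compiler which type a constant has):
Code:
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), \
    int:     "int", \
    float:   "float", \
    double:  "double", \
    default: "something else")

int main(void)
{
    printf("3    has type %s\n", TYPE_NAME(3));    /* int    */
    printf("3.0  has type %s\n", TYPE_NAME(3.0));  /* double */
    printf("3.0f has type %s\n", TYPE_NAME(3.0f)); /* float  */
    /* 3f would not even compile here: the f suffix is only allowed on a
       floating constant, i.e. one with a '.' or an exponent. */
    return 0;
}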
float x = (float) 3 / 4;
I would read this as:
1) convert the integer constant 3 into a float
2) since the division will now be a floating-point division, the integer 4 also has to be converted; the usual arithmetic conversions turn it into a float (not a double), because the other operand is a float
3) now do the floating-point division
4) assign the value to x.
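A quick check of that reading (my own sketch; the _Generic line is C11 and only reports the type the whole expression ends up with):
Code:
#include <stdio.h>

int main(void)
{
    float x = (float) 3 / 4;   /* cast, convert, divide, assign: the four steps above */

    printf("x = %f\n", x);     /* 0.750000, so the division really was floating point */
    printf("(float)3 / 4 has type %s\n",
           _Generic((float) 3 / 4, float: "float", double: "double", default: "other"));
    return 0;
}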
In contrast
float x = 3.0f / 4.0f;
is totally clear and very simple. Two constants of type float are divided and the division yields a float. No conversions, no ambiguities.
Gunnar
For IEEE 754 floating-point numbers the approximation will be inexact when the denominator of the fraction is not a power of 2.

All floating-point numbers may turn out to be approximations: sometimes exactly the value and sometimes just really close.
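For example, a small sketch of my own; it prints more digits than a double really holds, so the stored approximation becomes visible:
Code:
#include <stdio.h>

int main(void)
{
    /* Denominator is a power of 2: these fractions are stored exactly. */
    printf("3.0/4.0  = %.20f\n", 3.0 / 4.0);   /* exactly 0.75  */
    printf("1.0/8.0  = %.20f\n", 1.0 / 8.0);   /* exactly 0.125 */

    /* Denominator is not a power of 2: only the nearest double is stored. */
    printf("1.0/3.0  = %.20f\n", 1.0 / 3.0);   /* 0.3333... with a tail of "wrong" digits */
    printf("1.0/10.0 = %.20f\n", 1.0 / 10.0);  /* 0.1000...0555...                        */
    return 0;
}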
I'd file Project Euler under "or equivalent", but whatever...
Isolate the problem, re-create it in a separate dummy program, and post that instead. Seriously, if you aren't going to let the fundamental posts of a thread remain intact, don't make them at all!
You've piqued my curiosity. Why is this better than:
Code:float x = (float)3/4; // or float y = 3.0/4.0;
Is it a compiler-optimization thing? A readability thing? Or something else? I'm not second-guessing you, just trying to understand the benefit.
...
Say I have two double numbers x and y. To calculate the average, I can write (x+y)/2 or (x+y)/2.0 or (x+y)*0.5 and each must give the exact same result and most likely produces exactly the same code. Which one you prefer is up to you.
Casting is a necessity in C, and is usually very fast, nothing to worry about in most cases. Of course, if you do a thing a billion times in order to calculate next week's weather, it might matter.

Aren't all of these conversions optimized by the compiler, so you should never see a difference at runtime? Also, isn't casting an incredibly fast operation?
Sorry, this is actually wrong in general for C, but it will probably be true for just about any computer you will meet. I will elaborate.
(x+y)/2.0 happens to give the same result as (x+y)*0.5,
but we know that
(x+y)/3.0 is probably not the same as (x+y)*0.3333333333 ... (add any amount of decimals).
This can easily be seen from the fact that 1/3 can not be written exactly as a finite decimal (or binary) fraction, so the constant you multiply by is already an approximation.
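Roughly what that means in practice; a sketch of my own that just counts how often dividing by 3.0 and multiplying by the stored approximation of 1/3 disagree:
Code:
#include <stdio.h>

int main(void)
{
    const double third = 1.0 / 3.0;   /* the nearest double to 1/3, not 1/3 itself */
    int mismatches = 0;

    for (int i = 1; i <= 1000000; i++) {
        double x = (double) i;
        /* Both results are correctly rounded, but of two different exact values,
           so they do not always land on the same double. */
        if (x / 3.0 != x * third)
            mismatches++;
    }
    printf("x/3.0 and x*(1.0/3.0) differed for %d of 1000000 values of x\n", mismatches);
    return 0;
}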
The first one is bad, because it depends on knowing the exact details of the C language: It is not obvious whether (float)3/4 casts 3 to float and divides by 4 (giving (float) 0.75) or whether it is an integer division 3/4 giving 0, cast to float giving (float) 0.0. Would you bet which one without consulting a C book or trying with the compiler?
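No need to bet, of course; a two-line sketch settles it (the cast binds tighter than the division, so only the 3 gets converted):
Code:
#include <stdio.h>

int main(void)
{
    printf("(float)3/4   = %f\n", (float) 3 / 4);    /* 0.750000: cast first, then float division      */
    printf("(float)(3/4) = %f\n", (float) (3 / 4));  /* 0.000000: integer division first, then the cast */
    return 0;
}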
The second one: That is just someone's personal preference. Say I have two double numbers x and y. To calculate the average, I can write (x+y)/2 or (x+y)/2.0 or (x+y)*0.5 and each must give the exact same result and most likely produces exactly the same code. Which one you prefer is up to you.
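For the average example, a tiny sketch; all three spellings divide by an exact power of two, so they produce bit-identical doubles:
Code:
#include <stdio.h>

int main(void)
{
    double x = 1.25, y = 2.75;

    printf("(x+y)/2   = %.17g\n", (x + y) / 2);    /* the int 2 is converted to 2.0     */
    printf("(x+y)/2.0 = %.17g\n", (x + y) / 2.0);
    printf("(x+y)*0.5 = %.17g\n", (x + y) * 0.5);  /* multiplying by 0.5 is exact, too  */
    return 0;
}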
Casting is a necessity in C, and is usually very fast, nothing to worry about in most cases. Of course, if you do a thing a billion times in order to calculate next week's weather, it might matter.
But one point is that if you write an ambiguous statement in C, the compiler might choose different ways to implement it. So depending on which compiler you use, or even on which optimization setting you have, the compiler might select any one of the two (or more) ambiguous interpretations of the statement. This is one situation where it is very difficult to actually test your program with certainty.