
jsmwoolf

macrumors regular
Original poster
Aug 17, 2011
I'm working on another problem, and I can't figure out why the float comes out as zero. I'm dividing two ints, and since the numerator isn't zero, shouldn't the result have a decimal part?
Code:
//Removed
 
I've not bothered to read your code; I'm just going on your subject line and first sentence.

3 / 4 != 0.75; 3 / 4 == 0;
3 / 4.0 == 0.75; 3.0 / 4 == 0.75; 3.0 / 4.0 == 0.75;

If both operands of the division operator are ints, then integer division is performed. At least one of the operands needs to be a float or double to get float or double division.
 
So it comes out zero even though it's supposed to be stored in a float?

But the evaluation has already occurred. The conversion to float happens afterwards, as the last step immediately before the actual assignment.

Add a typecast to one of the operands to promote it to float before the division.

BTW, if you're used to the likes of Perl: C doesn't have the concept of evaluation context. An expression in C will always evaluate the same way, regardless of what is then done with the result.
 
Thanks. It works. And I got the right answer after some editing. I'm removing the code now to prevent cheating.
 
Thanks. It works. And I got the right answer after some editing. I'm removing the code now to prevent cheating.

Really really bad form to mangle a post that way. :mad:

Just curious what you mean by "to prevent cheating"...

Is it an assignment you're supposed to do for school or equivalent, and you used this forum and thread to get it done? Wouldn't that constitute a far worse instance of cheating than anyone else with the same assignment just stumbling upon this thread would?
 
Is it an assignment you're supposed to do for school or equivalent, and you used this forum and thread to get it done? Wouldn't that constitute a far worse instance of cheating than anyone else with the same assignment just stumbling upon this thread would?

It has nothing to do with school; it's in my free time. It was a problem on Project Euler. Besides, my problem had to do with not getting a decimal, not with how to solve the problem, as I already had an idea of how to solve it.

Besides, some Project Euler members argue that people just copy answers or code in order to rank up. I removed the code to prevent people from quickly finding a basis and then editing it to get the answer a lot faster than someone who takes the time and struggles with the problem in order to write an algorithm.
 
My guess was that the original code was something like this:

Code:
float x = 3/4;

and x turned out to be 0.

so now the new code looks like:

Code:
float x = (float)3/4;
 
It has nothing to do with school, it's in my free time. It was a problem on Project Euler. Besides, my problem had to do with not getting a decimal, not on how to solve the problem as I already had the idea on how to solve it.

I'd file Project Euler under "or equivalent", but whatever...

Isolate the problem and re-create it in a separate dummy program and post that instead. Seriously, if you aren't going to let the fundamental posts of a thread remain intact, don't make them at all!
 
One really should specify floating point constants as -

Code:
float x = 3f/4f;

although I prefer -

Code:
float x = 3.0f/4.0f;

You've piqued my curiosity. Why is this better than:
Code:
float x = (float)3/4;

// or

float y = 3.0/4.0;

Is it a compiler optimization thing? a readability thing? or something else? I'm not second-guessing you, just trying to understand the benefit.
 
Aah, the inner workings of C.
Now let's see.

3 is an integer constant.
3.0 is a floating-point constant, generally assumed to be of type double.
3.0f is a floating-point constant of type float.
3f is, as far as I know, more or less illegal: 3 on its own is not a floating-point constant and hence cannot take the f suffix. Compilers might actually let this pass.
(float)3 is an integer constant cast to float.

float x = (float) 3 / 4;
I would read this as:
1) convert the integer constant 3 into a float
2) since the division will be a floating-point division, the integer 4 also has to be converted into some floating-point type. Here is where my knowledge is a bit flaky; could it be converted into a double here?
3) now do the floating-point division
4) assign the value to x.

In contrast
float x = 3.0f / 4.0f;
is totally clear and very simple. Two constants of type float are divided, yielding a float. No conversions, no ambiguities.

Gunnar
 
Next, I am not certain about the difference between
float x = 3.0f / 4.0f;
and
double x = 3.0 / 4.0;

It depends on the exact number.

All floating-point numbers may turn out to be approximations, sometimes exactly the value and sometimes really close. In C it might be slightly different between compilers and especially on different hardware. Most compilers nowadays follow pretty much the same standard, but it can differ. And there are generally two floating-point types, float and double, where double can hold larger values and has 'more precision', or, put differently, generally comes closer in approximation.

For all floating-point numbers there will be a limit where

float x = some very large value;
float y = x+1.0f;

past that very large value, x and y will not be different anymore.
 
Aah, the inner workings of C.
Now let's see.

3 is an integer constant.
3.0 is a floating-point constant, generally assumed to be of type double.
3.0f is a floating-point constant of type float.
3f is, as far as I know, more or less illegal: 3 on its own is not a floating-point constant and hence cannot take the f suffix. Compilers might actually let this pass.
(float)3 is an integer constant cast to float.

float x = (float) 3 / 4;
I would read this as:
1) convert the integer constant 3 into a float
2) since the division will be a floating-point division, the integer 4 also has to be converted into some floating-point type. Here is where my knowledge is a bit flaky; could it be converted into a double here?
3) now do the floating-point division
4) assign the value to x.

In contrast
float x = 3.0f / 4.0f;
is totally clear and very simple. Two constants of type float are divided, yielding a float. No conversions, no ambiguities.

Gunnar

Aren't all of these conversions optimized away by the compiler, so you should never see a difference at runtime? Also, isn't casting an incredibly fast operation?
 
All floating-point numbers may turn out to be approximations, sometimes exactly the value and sometimes really close.
For IEEE 754 floating-point numbers the approximation will be inexact when the denominator is not a power of 2, and exact when the denominator is a power of 2 (and the numerator is not too large to represent exactly).
 
I'd file Project Euler under "or equivalent", but whatever...

Isolate the problem and re-create it in a separate dummy program and post that instead. Seriously, if you aren't going to let the fundamental posts of a thread remain intact, don't make them at all!

Good idea. I'll remember that the next time I post code similar to Project Euler.
 
You've piqued my curiosity. Why is this better than:
Code:
float x = (float)3/4;

// or

float y = 3.0/4.0;

Is it a compiler optimization thing? a readability thing? or something else? I'm not second-guessing you, just trying to understand the benefit.

The first one is bad, because it depends on knowing the exact details of the C language: it is not obvious whether (float)3/4 casts 3 to float and divides by 4 (giving (float) 0.75) or whether it is an integer division 3/4 giving 0, cast to float giving (float) 0.0. Would you bet on which one without consulting a C book or trying it with the compiler?

The second one: That is just someone's personal preference. Say I have two double numbers x and y. To calculate the average, I can write (x+y)/2 or (x+y)/2.0 or (x+y)*0.5 and each must give the exact same result and most likely produces exactly the same code. Which one you prefer is up to you.
 
Casting can be useful if these were not constants but integer variables, and the result of a division needs to be stored in a float.
 
...
Say I have two double numbers x and y. To calculate the average, I can write (x+y)/2 or (x+y)/2.0 or (x+y)*0.5 and each must give the exact same result and most likely produces exactly the same code. Which one you prefer is up to you.

Sorry, this is actually wrong in general for C, though it will probably be true on just about any computer you will meet. I will elaborate.

(x+y)/2.0 happens to give the same result as (x+y)*0.5
but we know that
(x+y)/3.0 is probably not the same as (x+y)*0.3333333333... (add any number of decimals)
This can easily be seen from the fact that 1/3 cannot be accurately written as a decimal fraction.

One interesting point is that many other numbers cannot be accurately described in the binary format used for floating point. Why not try it for yourself and see if (x+y)/5 is exactly equal to (x+y)*0.2?

Actually, C as a language does not describe the exact meaning of floating-point operations; it allows any number of different representations of floating point. This can lead to (x+y)/2 not being exactly the same as (x+y)*0.5.

But a compiler and runtime system have to define how it works. This means that if you change compiler or target hardware, you need to read the specs describing the exact situation.

This is a rather readable description, to my mind:
http://en.wikipedia.org/wiki/Floating_point

You will find that common mathematical rules, say a*(b+c) = a*b + a*c, do not necessarily give exactly the same result when done in floating point in C.

// gunnar
 
Aren't all of these conversions optimized away by the compiler, so you should never see a difference at runtime? Also, isn't casting an incredibly fast operation?
Casting is a necessity in C and is usually very fast; nothing to worry about in most cases. Of course, if you do a thing a billion times in order to calculate next week's weather, it might matter.

But one point is that if you write an ambiguous statement in C, the compiler might choose different ways to implement it. So depending on which compiler you use, or even on which optimization setting you have, the compiler might select any one of the two (or more) ambiguous interpretations of the C statement. This is one situation where it is very difficult to actually test your program with certainty.

Another point is that unless you delve into a lot of implementation-specific details, floating-point numbers should always be considered approximations. The expression
float x = 3.0f;
will give x a value which is as close to 3.0 as the representation is capable of. But it might be slightly off the target value 3.0. Exactly how far off depends on a lot of details.

Yet another point is that floating point always has limited precision. It is a bit like looking at a digital picture: zoom in enough and you see the pixels, and there is no "intermediate" information between the pixels. But the analogy is not perfect; the larger the floating-point number is, the larger the pixels are (so to say). So once you have a large enough number, x + 1.0 might not actually be able to count up, as the "pixels" are larger than 1. This is the reason why we should avoid storing monetary values, say the amount in a bank account, in a floating-point number.

Gunnar
 
Sorry, this is actually wrong in general for C, though it will probably be true on just about any computer you will meet. I will elaborate.

(x+y)/2.0 happens to give the same result as (x+y)*0.5
but we know that
(x+y)/3.0 is probably not the same as (x+y)*0.3333333333... (add any number of decimals)
This can easily be seen from the fact that 1/3 cannot be accurately written as a decimal fraction.

Excuse me, but I wrote what I wrote, and I didn't write what I didn't write. I wrote that (x + y) * 0.5 is the same as (x + y) / 2 and the same as (x + y) / 2.0, and that is absolutely one hundred percent true. There's no point claiming that things I didn't write are wrong.
 
The first one is bad, because it depends on knowing the exact details of the C language: it is not obvious whether (float)3/4 casts 3 to float and divides by 4 (giving (float) 0.75) or whether it is an integer division 3/4 giving 0, cast to float giving (float) 0.0. Would you bet on which one without consulting a C book or trying it with the compiler?

The second one: That is just someone's personal preference. Say I have two double numbers x and y. To calculate the average, I can write (x+y)/2 or (x+y)/2.0 or (x+y)*0.5 and each must give the exact same result and most likely produces exactly the same code. Which one you prefer is up to you.

Casting is a necessity in C and is usually very fast; nothing to worry about in most cases. Of course, if you do a thing a billion times in order to calculate next week's weather, it might matter.

But one point is that if you write an ambiguous statement in C, the compiler might choose different ways to implement it. So depending on which compiler you use, or even on which optimization setting you have, the compiler might select any one of the two (or more) ambiguous interpretations of the C statement. This is one situation where it is very difficult to actually test your program with certainty.

Thanks for taking the time to post your helpful answers. You both make excellent points in this regard. I hadn't ever considered that the order of operations in C isn't always immediately obvious, and that code which saves someone else from having to double-check what happens first is friendlier and more readable (especially for people less experienced with C).
 