
portreathbeach

macrumors member
Original poster
Feb 19, 2011
Hi, I'm new to Objective C and just trying a few things out with it. Please can someone tell me why this gives the wrong answer:


int diam;
float pi;
float circ;


pi=3.14;
diam=15;
circ=pi*diam;


circ should equal 47.1, but when running the program, circ actually equals 47.1000023!!!

If I change circ and pi to doubles, the answer is 47.100000000000001!!!

Why is this happening?
 
Why is this happening?

Besides what robbieduncan said, you shouldn't be looking that far "out there" in the first place.

You started with three significant digits for pi, so you should really only trust your result to three (or maybe four) significant digits, and both of those results are 47.10 to four significant digits.

That's one way to deal with the fact that floating point math is inherently "fuzzy": just don't display the fuzziness.
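Something like this (just a minimal sketch of that idea, not code from anyone's project) asks printf for only the significant digits you actually trust, and the fuzz never makes it to the screen:

Code:
#include <stdio.h>

int main(void)
{
    float pi   = 3.14f;       /* three significant digits to start with */
    float circ = pi * 15;

    /* Ask for at most 4 significant digits; %g drops trailing zeros. */
    printf("%.4g\n", circ);   /* prints 47.1 */
    return 0;
}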

Oh, and BTW, please wrap any code you post in the forum in a CODE block (The # symbol in the edit window).

B
 
Thanks very much for your answers. I've only ever programmed in VB and RealBasic, and have never come across this before. Even a programmer friend of mine who programs C# couldn't understand what was going on.

So, basically I have to round any answer to 'so many decimal places' after every multiplication or division I do in Objective C then?
 
Thanks very much for your answers. I've only ever programmed in VB and RealBasic, and have never come across this before. Even a programmer friend of mine who programs C# couldn't understand what was going on.

So, basically I have to round any answer to 'so many decimal places' after every multiplication or division I do in Objective C then?

The same thing happens in VB and C#; they may just default the display not to show it. As robbie's Wikipedia link shows, this happens at the level of the CPU (or FPU).

EDIT: Here's what Microsoft has to say about this topic: http://support.microsoft.com/kb/42980

B
 
Thanks for the reply,

It just seems so complicated in Objective C to do calculations! For example:

Code:
int diam;
float pi;
float circ;

pi=3.14;
diam=15;
circ=pi*1;

I am multiplying a number by 1, so the answer should be the same, but it isn't: it has several 0s and then a 1 at the end. Surely I don't have to worry about decimal places every time I do any calculation in my code, do I?
 
I am multiplying a number by 1, so the answer should be the same, but it isn't: it has several 0s and then a 1 at the end. Surely I don't have to worry about decimal places every time I do any calculation in my code, do I?

Your code does not show how you are displaying the number. A single-precision float only has about 7 significant digits, maximum. If you are looking beyond that, you are fooling yourself on any platform.
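If you want the compiler's own word on that limit, float.h spells it out (quick sketch, nothing more):

Code:
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_DIG / DBL_DIG: decimal digits each type can round-trip reliably. */
    printf("float:  %d digits\n", FLT_DIG);   /* 6 on IEEE 754 systems  */
    printf("double: %d digits\n", DBL_DIG);   /* 15 on IEEE 754 systems */
    return 0;
}

Anything past that 6th or 7th digit of a float is noise.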

B
 
Thanks for the reply,

It just seems so complicated in Objective C to do calculations! For example:

I am multiplying a number by 1, so the answer should be the same, but it isn't: it has several 0s and then a 1 at the end. Surely I don't have to worry about decimal places every time I do any calculation in my code, do I?

This is not an ObjC thing. This is a computers thing. This happens with most computer languages on every platform, and is intrinsic to how floating point math works.

You are getting a perfectly accurate answer for floating point math; it just does not fit your aesthetic expectations. You are just used to systems that do a lot of extra work to make things look pretty for you. Others have already provided you with the answers on this. Accept it and move on.
 
Hi,

I'm not actually displaying anything yet, I'm just trying to get my head around this.

Code:
int diam;
float pi;
float circ;

pi=3.14;
diam=15;
circ=pi*diam;

circ=circ/diam;

OK, so here I am multiplying 'pi' with 'diam' and storing it in 'circ'. I am then taking circ and dividing it by 'diam'. The answer should surely be 'pi', what we started out with in the first place, but it ends up being 3.1400001.

I understand that when I come to display this somewhere, like on the screen, I can round off to 'so many decimal places', but in VB and C# you could simply set a label to display circ and it would show '3.14'; I wouldn't have to do any rounding or decimal placing etc. Why does Objective C not do this for you?
 
OK, so here I am multiplying 'pi' with 'diam' and storing it in 'circ'. I am then taking circ and dividing it by 'diam'. The answer should surely be 'pi', what we started out with in the first place, but it ends up being 3.1400001.
Where are you seeing the 3.1400001? That's what I mean by displaying.

The value is stored in binary. Something is converting that to decimal for display. What are you doing to do that?

None of your code is Objective-C; this is all straight, boring C.

EDIT: Maybe this will help. http://www.h-schmidt.net/FloatApplet/IEEE754.html
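Or, if you'd rather see it without the applet, here's a rough sketch (plain C, nothing Mac-specific) that dumps the raw bit pattern the float actually stores:

Code:
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    float pi = 3.14f;
    uint32_t bits;

    /* Copy out the raw IEEE 754 bit pattern stored in the float. */
    memcpy(&bits, &pi, sizeof bits);

    printf("bits:  0x%08" PRIX32 "\n", bits);   /* 0x4048F5C3 */
    printf("value: %.8g\n", pi);                /* 3.1400001  */
    return 0;
}

That bit pattern is the closest 24-bit-significand value to 3.14, and converting it back to decimal is where the 3.1400001 comes from.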

B
 
Hi balamw,

I am 'seeing' this 3.1400001 when I put the cursor over the variable while debugging.
 
I am 'seeing' this 3.1400001 when I put the cursor over the variable while debugging.

So it's what I said before.

The same thing happens in VB and C#; they may just default the display not to show it.

The display in Xcode, which has nothing to do with Objective-C or Macs, chooses to display more digits than are really significant, probably as a reminder that floating point on a computer always includes these inaccuracies.
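When you do finally display it yourself (a label, NSLog, printf, whatever), you pick the precision, which is all a VB label was quietly doing for you. A rough sketch:

Code:
#include <stdio.h>

int main(void)
{
    float pi   = 3.14f;
    float circ = pi * 15;

    circ = circ / 15;         /* stored internally as 3.1400001... */

    /* You choose the display precision; two decimals gives the
       calculator-style answer. */
    printf("%.2f\n", circ);   /* prints 3.14 */
    return 0;
}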

B
 
Deep Thoughts

Even a programmer friend of mine who programs C# couldn't understand what was going on.

So, basically I have to round any answer to 'so many decimal places' after every multiplication or division I do in Objective C then?

I weep for the future.

On a more helpful note, just remember that there's more going on here than may first appear. You can either ignore the man behind the curtain and just get by with your programs, or you can spend some time and effort to understand things.

There are a lot of issues here:
1. binary vs. decimal representation of numbers, particularly fractions;
2. displayed precision vs. calculated precision;
3. digits of accuracy.

Each of those topics is worthy of serious study. For example, "digits of accuracy" was a prominent topic in a Numerical Analysis course that I took in college (30 years ago). It was offered in the Math department; there was a corresponding one in the Computer Science department that covered the issues from a different perspective.

As an aside, the whole "just-get-by vs. understand-things" conflict is not isolated to your programming life. This is an issue about critical thinking and how you want to live your life. At some point you'll need to make decisions about your finances, insurance, where you live, politics, and many other things. When those issues come up, will you just make a decision or will you think about who's doing what to whom and why they're doing it before making your choice?
 
I weep for the future.
I was close to posting something along those lines.

I do find it a bit odd that Xcode displays differently than printf("%g").

Code:
#include <stdio.h>

int main (int argc, const char * argv[]) {
    int diam;
    float pi;
    float circ;

    pi=3.14;
    printf("%g\n",pi);
    diam=15;
    printf("%d\n",diam);
    circ=pi*diam;
    printf("%g\n",circ);

    circ=circ/diam;
    printf("%g\n",circ);

    return 0;
}

gives:

Code:
3.14
15
47.1
3.14

A simple experiment is to replace %g with %.8g, which shows that you only get about 7 digits of precision with a single-precision float.

You then get:
Code:
3.1400001
15
47.100002
3.1400001
B
 
Hi PatricCocoa,

Thanks for your response.

I suppose it's all down to me struggling with C and understanding floats and doubles. I can program PIC microprocessors in assembly language, and I coded a complete in-van entertainment system in VB for my van. I can also program in RealBasic and have written backup software for the Mac and a guitar tablature piece of software.

I have the Sams "iPhone Application Development" book, but it doesn't really teach you much about C and Objective C. Can you point me in the right direction for a good book to learn Objective C? I know all the fundamentals of OOP and classes etc. from VB and RealBasic, but C seems to be structured very differently with the curly bracket arrangement going on. {}
 
I suppose it's all down to me struggling with C and understanding floats and doubles.

Read and understand the Wiki article robbieduncan pointed you to. Regardless of programming language and OS, floating point numbers are inherently inaccurate when seen as decimal numbers because they are represented in binary internally. You cannot escape this unless you use another way of representing the number, but when you do, you lose performance because the FPU is no longer doing the heavy lifting (e.g. http://gmplib.org/ is a library that gives arbitrary precision).
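Just to illustrate "another way of representing the number" (a rough sketch, not something you'd normally do for a circumference): you can keep values as scaled integers, where the arithmetic is exact.

Code:
#include <stdio.h>

int main(void)
{
    /* Keep values as integer hundredths: 3.14 -> 314, 15.00 -> 1500. */
    long pi_hundredths   = 314;
    long diam_hundredths = 1500;

    /* 314 * 1500 = 471000 ten-thousandths, i.e. 47.1000 exactly. */
    long circ_ten_thousandths = pi_hundredths * diam_hundredths;

    printf("%ld.%04ld\n",
           circ_ten_thousandths / 10000,
           circ_ten_thousandths % 10000);   /* prints 47.1000 */
    return 0;
}

The price is that you are now doing the bookkeeping the FPU would otherwise do for you.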

As per the Wikipedia article, the 24 binary bits of data give you a maximum of about 7.2 decimal digits of precision (7.2 = log10(2^24)).

So do you choose to show only 7 digits and ignore the 0.2, or do you show 8 and reveal the inherent limitations? That's what the experiment I did in post #14 was all about.

B
 
Just for a cheap thrill, the only kind of thrill I ever get :), I wrote this program in C, compiled it, and ran it to get 47.123890.

By the way, I used double, not float, because I think float variables always get converted to double precision.

Code:
#include <stdio.h>
#include <math.h>

static double pi(void)
{
  return 4.0 * atan(1.0);
}

static double circumference (const double diameter)
{
   return pi() * diameter;
}

int main(void)
{
  double diam = 15;

  printf("%lf\n", circumference(diam));
  return 0;
}
 
Just for a cheap thrill, the only kind of thrill I ever get :), I wrote this program in C, compiled it, and ran it to get 47.123890.

By the way, I used double, not float, because I think float variables always get converted to double precision.

Code:
#include <stdio.h>
#include <math.h>

static double pi(void)
{
  return 4.0 * atan(1.0);
}


int main(void)
{
  double circ, diam = 15;

  circ = pi() * diam;
  printf("%lf\n", circ);
  return 0;
}

Yes. If an FPU can do double math, why would you bother implementing single (float) internally? The PPC lfsx loads the 32-bit float and immediately converts it to a 64-bit double for use. The value and/or its derivatives will continue to be doubles internally until written to memory with stfsx. In fact, I think the math unit adds 3 extra bits of precision for computation, which get rounded off on register write-back. I believe most FPUs work this way, because to support floats internally would be a pointless waste of gates.

The only time you should ever use float is when memory space is a serious concern (almost never, except for extremely large arrays).

balamw said:
I wonder if that's still the case on the 2006 Core Duo Macs which were the last 32 bit CPU devices.
32-bit x86 always used a separate internal FPU. I seem to recall the x87 internal architecture was 80-bit.
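A quick way to see the storage difference from C (the float is widened to double on its way into printf, much like the FPU widens it on load); just a sketch:

Code:
#include <stdio.h>

int main(void)
{
    float  f = 3.14f;   /* 24-bit significand */
    double d = 3.14;    /* 53-bit significand */

    /* %.17g shows exactly what each type kept after storage. */
    printf("float:  %.17g\n", f);   /* 3.1400001049041748 */
    printf("double: %.17g\n", d);   /* 3.1400000000000001 */
    return 0;
}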
 
What Every Programmer Should Know About Floating-Point Arithmetic:
http://floating-point-gui.de/

[image: e_to_the_pi_minus_pi.png]


ROFL.

B
 
I think I have Standard Apple Numerics Environment in hardbound around here somewhere, from about 18 or so years ago. The introduction is pretty amusing, at least to the kind of person who knows how to chat up a CAFEBABE.
 
How are you so sure that the same thing doesn't happen in C#?

This is a fundamental hardware level phenomenon. You would have to implement a software layer that filters the math to eliminate the floating point inaccuracies.

I have even seen these errors in math computation packages, like MATLAB.

I am pretty sure C# via .Net uses the hardware FPU directly. Hence it will also show these errors.

Your initial assumption that C# does not have these inaccuracies is wrong. The reason why you may not have seen them in other languages is probably because you may not have had enough experience with them to come across this.
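For what it's worth, that "software layer" usually amounts to nothing more than rounding to the precision you actually care about before comparing or displaying. A rough sketch (round_to_places is a made-up helper, not a library call):

Code:
#include <stdio.h>
#include <math.h>

/* Hypothetical helper: round a value to a given number of decimal
   places. The result is still a binary double, so it is only as
   close to "exact" as doubles allow. */
static double round_to_places(double x, int places)
{
    double scale = pow(10.0, places);
    return round(x * scale) / scale;
}

int main(void)
{
    float circ = 3.14f * 15;                    /* 47.100002... */
    printf("%.9g\n", round_to_places(circ, 2)); /* prints 47.1  */
    return 0;
}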
 
You may be right about .NET and C# having these errors, but all I know is that after programming .NET for years, I've never seen this before. If I were to multiply 2 numbers the same way as I described earlier and then display the result, I wouldn't have to do any formatting of the string; it would just show the correct answer, the same way a standard calculator would.
 
Well, I just tried your code in C# .NET 4.0.

I get 47.1000023 too. It should work the same in VB too, because they are all .NET.

I do a lot of work in numerical simulation. So this was one of the first things I noticed.

However, in more "general" programming, you will probably never notice these small errors. And it only happens with floating point, so if you are working with ints, you will never encounter this problem.

Besides, if you print out a floating point number, the print function may be smart enough to correct for the error, so you will never see it unless you spend time in the debugger monitoring each variable.
 