Well, I just tried your code in C# .NET 4.0.

I get 47.100023 too. It should work the same in VB, because they are all .NET.

I do a lot of work in numerical simulation. So this was one of the first things I noticed.

However, in more "general" programming you will probably never notice these small errors, and they only happen with floating-point types. So if you are working with ints, you will never encounter this problem.

Besides, if you print out a floating-point number, the print function may be smart enough to correct for the error. So you will never see it unless you spend time in the debugger monitoring each variable.
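For example, in C, printf's %f rounds to six decimal places by default, which often hides the error until you ask for more digits (a small illustration in C rather than C#):

#include <stdio.h>

int main(void)
{
    float f = 0.1f;          /* the float nearest to 0.1, not 0.1 itself */

    printf("%f\n", f);       /* default 6 digits: 0.100000, error hidden */
    printf("%.10f\n", f);    /* more digits: roughly 0.1000000015, error visible */
    return 0;
}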
I rarely praise COBOL, but I wish other programming languages would let me compute with binary coded decimal numbers, too. In BCD, you can calculate with dollar amounts, say, without rounding errors.
 
I rarely praise COBOL, but I wish other programming languages would let me compute with binary coded decimal numbers, too. In BCD, you can calculate with dollar amounts, say, without rounding errors.

I've never used COBOL, but in the Objective-C Foundation framework you have NSDecimalNumber:

NSDecimalNumber, an immutable subclass of NSNumber, provides an object-oriented wrapper for doing base-10 arithmetic. An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.

In Java you have java.math.BigDecimal:

BigDecimal
Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore (unscaledValue × 10^(-scale)).


Both of these would reduce performance as you're no longer using hardware floating point maths.
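
For example, here's a small Objective-C sketch of my own (not from the docs above) showing the difference: adding 0.1 ten times with double usually misses 1.0 slightly, while NSDecimalNumber gets it exact because it works in base 10.

#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        // Binary double: 0.1 is not exactly representable, so the sum drifts.
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        NSLog(@"double sum : %.17f", d);   // typically 0.99999999999999989

        // NSDecimalNumber: base-10 arithmetic, so the same sum is exactly 1.
        NSDecimalNumber *tenth = [NSDecimalNumber decimalNumberWithString:@"0.1"];
        NSDecimalNumber *sum = [NSDecimalNumber zero];
        for (int i = 0; i < 10; i++) {
            sum = [sum decimalNumberByAdding:tenth];
        }
        NSLog(@"decimal sum: %@", sum);    // prints 1
    }
    return 0;
}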
 
I rarely praise COBOL, but I wish other programming languages would let me compute with binary coded decimal numbers, too. In BCD, you can calculate with dollar amounts, say, without rounding errors.

Nothing about BCD makes it more useful than plain binary. For dollars, the simplest approach is to handle the internal variables as cents, or mills, converting them to dollar amounts with division and/or modulus. Integers do not introduce rounding errors.
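
Something like this, for example (just a sketch; the price and variable names are made up):

#include <stdio.h>

int main(void)
{
    // Keep money as an integer count of cents; integer math is exact.
    long long price_cents = 1999;                     // $19.99
    long long quantity    = 3;
    long long total_cents = price_cents * quantity;   // 5997, no rounding

    // Convert to dollars only when formatting for display.
    printf("Total: $%lld.%02lld\n", total_cents / 100, total_cents % 100);
    return 0;
}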
 
Nothing about BCD makes it more useful than plain binary. For dollars, the simplest approach is to handle the internal variables as cents, or mills, converting them to dollar amounts with division and/or modulus. Integers do not introduce rounding errors.
Thanks. You're right, Sydde.
 
I wonder if that's still the case on the 2006 Core Duo Macs, which were the last Macs with 32-bit CPUs.

The C, C++, and Objective-C compilers support three floating-point types: float, double, and long double. When you build for Intel processors, whether 32-bit or 64-bit, float uses 32 bits, double uses 64 bits, and long double uses 80 bits. You get exactly the same results whether you build 32-bit or 64-bit code, and which Intel processor you use makes no difference (other than that the Core Duo can't run 64-bit code at all). When you build for PowerPC processors, float and double are 32 and 64 bits, and long double is 128 bits.

As a rule, you should never use float; use double instead, unless you have a good reason you can justify for using float.
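
If you want to check the sizes and precision on your own machine, something like this works (just a quick sketch; the exact numbers depend on the target):

#include <stdio.h>
#include <float.h>

int main(void)
{
    // Storage size and decimal digits of precision for each type.
    // On Intel Macs this typically reports 4/6, 8/15 and 16/18
    // (long double is the 80-bit x87 format stored in 16 bytes).
    printf("float       : %zu bytes, %d digits\n", sizeof(float), FLT_DIG);
    printf("double      : %zu bytes, %d digits\n", sizeof(double), DBL_DIG);
    printf("long double : %zu bytes, %d digits\n", sizeof(long double), LDBL_DIG);
    return 0;
}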


Hi, I'm new to Objective-C and just trying a few things out with it. Please can someone tell me why this gives the wrong answer:


int diam;
float pi;
float circ;

pi = 3.14;
diam = 15;
circ = pi * diam;

The type "float" represents a binary floating-point number with a 24 bit mantissa. According to the rules of the C / C++ / Objective-C languages, 3.14 is the "double" number closest to 3.14, which means it is slightly larger or smaller than 3.14. Assigning to pi of type float rounds this number to the nearest number of type float, so you get a number somewhere within six or seven decimal digits of 3.14, but most definitely not the number 3.14 itself. This number pi is then multiplied by 15. The result will most likely not fit into the type "float", so it is rounded again. So it is quite guaranteed that the result will _not_ be 47.1, but some number within about 6 decimal digits of it.
 
So basically I should use doubles and not worry about the rounding errors? When it comes to displaying any results, I should simply format the string accordingly.

I asked in a previous post about any books I should read to help me learn C and Obj-C, as I only know VB.net, RealBasic and assembly language. I have seen this book on Amazon:

Programming in Objective-C 2.0 (Developer's Library) by Stephen G. Kochan

http://www.amazon.co.uk/Programming-Objective-C-2-0-Developers-Library/dp/0321711394/ref=sr_1_4?s=books&ie=UTF8&qid=1298241728&sr=1-4#productPromotions

Does anyone know if this will be able to teach me the syntax I need in C as well as Obj-C to be able to program the iPhone?

Thanks in advance
 
So basically I should use doubles and not worry about the rounding errors?
You can't escape them. So what is there to worry about? You still need to understand the limitations of the data types you choose.
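
For example, one of those limitations is that comparing floating-point results with == is fragile; compare with a tolerance instead (a generic illustration, not code from this thread):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.1 + 0.2;   // not exactly 0.3 in binary floating point
    double b = 0.3;

    printf("a == b      : %s\n", (a == b) ? "yes" : "no");             // usually "no"
    printf("|a-b| < 1e-9: %s\n", (fabs(a - b) < 1e-9) ? "yes" : "no"); // "yes"
    return 0;
}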

Does anyone know if this will be able to teach me the syntax I need in C as well as Obj-C to be able to program the iPhone?

Its primary focus is on Objective-C syntax and use of the Foundation framework. It does teach some C along the way. The last part of the book introduces Cocoa and Cocoa Touch, which give you an introduction to iOS programming, but you will probably need at least one more book on Cocoa Touch and may also want to learn a bit more C to get where you want to go.

B
 
Thanks again balamw,

I have the 'Sams Teach Yourself - iPhone Application Development' book, which gives a fairly good view of how to use Interface Builder and Xcode and how they 'link' together. So would this book and 'Programming in Objective-C 2.0' be enough to get me going programming iPhone apps?

As I said in a previous post, I have good programming experience in VB.net, so I understand how to code and use classes and methods etc. I just need to know the syntax of C and Obj-C and when to use each one when coding in Xcode.
 
I have the 'Sams Teach Yourself - iPhone Application Development' book, which gives a fairly good view of how to use Interface Builder and Xcode and how they 'link' together. So would this book and 'Programming in Objective-C 2.0' be enough to get me going programming iPhone apps?
Probably so, but everyone learns things differently. Some forum users tried Kochan and didn't like its approach, but for most people it's a great place to start.

B
 
That's strange. Amazon.co.uk says 28th Feb, Amazon.com says 6th June.

Perhaps I'll get the 2nd edition :confused:
 
Just like to mention that there are perfectly valid uses for floats. :D If you know that your range of numbers fits entirely in the mantissa, it can be a great way to scale a number without losing precision.
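
For instance, a float's 24-bit mantissa holds any integer up to 2^24 exactly, and scaling by powers of two is also exact, so within that range nothing is lost (a quick sketch):

#include <stdio.h>

int main(void)
{
    // Integers up to 2^24 = 16777216 are exactly representable in a float.
    float big    = 16777216.0f;
    float scaled = big / 256.0f;          // dividing by a power of two is exact
    printf("%.1f %.1f\n", big, scaled);   // 16777216.0 65536.0

    // One past the end of the mantissa's range can no longer be represented.
    float too_big = 16777217.0f;          // rounds to 16777216
    printf("%.1f\n", too_big);
    return 0;
}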
 
I've never used COBOL, but in the Objective-C Foundation framework you have NSDecimalNumber:

In Java you have java.math.BigDecimal:

Both of these would reduce performance as you're no longer using hardware floating point maths.
Thanks, McGordon. I'm sure BCD would slow a program. But maybe it saves memory, because COBOL programmers call it "packed decimal."

Be glad you haven't used COBOL. Its inventor should have given it a name alluding to writer's cramp, because it's the wordiest programming language I've ever used.
 
Compared to what? I always thought BCD was just "packed" compared to ASCII or just storing one decimal digit per byte in the lower nibble.

On the x86 architecture, BCD is stored either one digit per byte in the lower nibble, or packed two digits per byte using both nibbles.

I don't know if it's still true in x64, but the x86 ISA had BCD math instructions. So I wouldn't think it would be as slow as first impressions would suggest. But only measuring a real-world app would show for sure.
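
For reference, packed BCD just stores one decimal digit per nibble, so 1234 is encoded as 0x1234. A quick sketch of the packing (the helper name is made up):

#include <stdio.h>

// Pack an unsigned value into packed BCD: one decimal digit per nibble,
// lowest digit in the lowest nibble. Illustration only; handles up to 16 digits.
static unsigned long long to_packed_bcd(unsigned long long n)
{
    unsigned long long bcd = 0;
    int shift = 0;
    while (n > 0) {
        bcd |= (n % 10) << shift;   // one decimal digit per nibble
        n /= 10;
        shift += 4;
    }
    return bcd;
}

int main(void)
{
    printf("0x%llx\n", to_packed_bcd(1234));   // prints 0x1234
    return 0;
}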
 
Compared to what? I always thought BCD was just "packed" compared to ASCII or just storing one decimal digit per byte in the lower nibble.

B
I haven't heard them compare it to anything, and I used to be one of them.

Nobody ever told me what Pascal's packed arrays or packed records were being compared to, either. Some Pascal compilers simply ignore the word "packed."
 