
rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
In my application under GCC 3.2 on a Pentium machine, I do something like
Code:
  unsigned int i;
  i = static_cast< unsigned int >(mu / dmu);
to get an index from the doubles mu and dmu. When mu = 1.0 and dmu = 0.005, you get i = 199 on the Pentium box (you expect 200). Presumably this is because the internal representation of mu/dmu on the Pentium is 199.999999999999.

On PPC under GCC 4, you get 200, and everything works fine.

This has caused me no end of grief; an entire day sorting out this weird error, which turned out to be quite deep in the code. At least it wasn't an endianness issue or something more sinister. Since I now have to go through an enormous amount of code and take out all occurrences of this trick to obtain the integer part of a double, what is the best way of doing what I'm trying to do that avoids this 0.999999999999 issue?
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
nearbyint / rint

Just as a follow-up, are the functions nearbyint() or rint() in <cmath> the way to do this properly?
 

savar

macrumors 68000
Jun 6, 2003
1,950
0
District of Columbia
rinseout said:
Just as a follow-up, are the functions nearbyint() or rint() in <cmath> the way to do this properly?

I don't know the answer to this, but I'm guessing it's too complicated for MR. Did you try comp.sys.mac.programmer?
 

slooksterPSV

macrumors 68040
Apr 17, 2004
3,543
305
Nowheresville
As long as you use rint() and nearbyint() with a float or double it should be:
double rint(double x);
double nearbyint(double x);
- since float and double are a lot like each other (one's double, one's more floaty :p)
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
slooksterPSV said:
Don't do it like that, do it like this:
i = (unsigned int)(mu / dmu);
that may clear things up.
That's just the C-style cast, and it does the same thing (I've checked).
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
Weird

I wrote a small program whose content is pretty guessable just from its output. Apparently GNU's "floor" is not equivalent to a cast to integer... First, from the Pentium machine:
Code:
Input mu (-99 to quit): 1.0
Input dmu: 0.005
mu / dmu = 200
static_cast< int >( mu / dmu ) = 199
(int)(mu / dmu) = 199
static_cast< int >( nearbyint( mu / dmu ) ) = 200
static_cast< int >( floor( mu / dmu ) ) = 200
And now from the Mac (G4):
Code:
Input mu (-99 to quit): 1.0
Input dmu: 0.005
mu / dmu = 200
static_cast< int >( mu / dmu ) = 200
(int)(mu / dmu) = 200
static_cast< int >( nearbyint( mu / dmu ) ) = 200
static_cast< int >( floor( mu / dmu ) ) = 200
Since "floor" is what I was originally trying to do with the cast, that's what I'll be going with. Please, if anybody with experience with this kind of thing recommends against "floor" for conversion to integer parts, let me know.
 

Mechcozmo

macrumors 603
Jul 17, 2004
5,215
2
rinseout said:
Since I now have to go through an enormous amount of code and take out all occurences of this trick to obtain the integer part of a double, what is the best way of doing what I'm trying to do that avoids this 0.999999999999 issue?

Use a function next time?
 

gekko513

macrumors 603
Oct 16, 2003
6,301
1
I find it very strange that the Pentium and the PowerPC give different results here. I would be interested in knowing if the actual bits of the double are different after the division is performed.

Code:
#include <stdio.h>

void printDoubleBits(char *msg, double d) {
  int *p = (int*) &d;
  if (msg)
    printf("%s: %08x%08x\n", msg, p[0], p[1]);
  else
    printf("%08x%08x\n", p[0], p[1]);
}

int main(int argc, char *argv[]) {
  double mu = 1.0;
  double dmu = 0.005;
  double result = mu / dmu;
  printDoubleBits("mu    ", mu);
  printDoubleBits("dmu   ", dmu);
  printDoubleBits("mu/dmu", result);
  printf("%f / %f = %u\n", mu, dmu, (unsigned int)(mu/dmu));
  return 0;
}
Running this on a G5 gives:
Code:
mu    : 3ff0000000000000
dmu   : 3f747ae147ae147b
mu/dmu: 4069000000000000
1.000000 / 0.005000 = 200
The different endianness of the Pentium and the G5 will probably make the bits different, but it should be possible to rearrange them and see if the result is the same. Can you try it on the Pentium?
 

superbovine

macrumors 68030
Nov 7, 2003
2,872
0
You should try checking the guard bits of the constants and the result before and after you divide, so you can see where the rounding problem is.
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
gekko513 said:
Can you try it on the Pentium?
Here we are:
Code:
mu    : 000000003ff00000
dmu   : 47ae147b3f747ae1
mu/dmu: 0000000040690000
1.000000 / 0.005000 = 199
So, aside from endianness the double representations are the same, but still the cast is different. Could this have to do with the compiler version, then? (The above code was produced using gcc 3.2, and I think Macs are now gcc 4 by default, with gcc 3.3 installed too.)
 

superbovine

macrumors 68030
Nov 7, 2003
2,872
0
rinseout said:
Here we are:
Code:
mu    : 000000003ff00000
dmu   : 47ae147b3f747ae1
mu/dmu: 0000000040690000
1.000000 / 0.005000 = 199
So, aside from endianness the double representations are the same, but still the cast is different. Could this have to do with the compiler version, then? (The above code was produced using gcc 3.2, and I think Macs are now gcc 4 by default, with gcc 3.3 installed too.)

gcc_select is your friend.
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
I've checked that the output under gcc 3.3 and gcc 4 on the G4 is the same. I'm still interested to know if this is a compiler thing (i.e., a change in gcc implementation between 3.2 and 3.3) or if it is a side-effect of the different platforms. I would have expected that the cast is a language feature that should behave the same way on bit-identical values.

Why is gcc 3.2 on the Pentium giving 199 and gcc 3.3 on the Mac giving 200? These kinds of things are supposed to be deterministic, aren't they?
 

makeme

macrumors member
Jul 16, 2005
48
0
Don't cast when you want to round

You are casting from floating point to integer, which just throws away the decimal portion. Technically, 1 / .005 is NOT 200 and IS 199.999999999999. If you cast that to an integer you SHOULD get 199. If you want 200 you need to round it to the nearest integer. In C / C++ you can use floor to do this. This program gives the output 200 on my Mac and PC.

Code:
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
        cout << floor(1 / .005);
        return 0;
}

I don't know why you are getting different results with different compiler versions on different platforms, but it does not surprise me. Different compilers and even different versions of the same compiler generate different machine code for the same program. The above code SHOULD work on both, and for what it's worth casting 1 / .005 to an integer should result in 199 and not 200.
 

makeme

macrumors member
Jul 16, 2005
48
0
Explicit Type Conversion or Casting

The following wisdom on Explicit Type Conversion or Casting is from The C++ Programming Language by Bjarne Stroustrup, emphasis mine.

Sometimes, we have to deal with "raw memory;" that is, memory that holds or will hold objects of a type not known to the compiler. For example, a memory allocator may return a void* pointing to newly allocated memory or we might want to state that a given integer value is to be treated as the address of an I/O device:

Code:
void* malloc(size_t);

void f()
{
    int* p = static_cast<int*>(malloc(100));                 // new allocation used as ints
    IO_device* d1 = reinterpret_cast<IO_device*>(0xff00);    // device at 0xff00
    // ...
}

A compiler does not know the type of the object pointed to by the void*. Nor can it know whether the integer 0xff00 is a valid address. Consequently, the correctness of the conversions is completely in the hands of the programmer. Explicit type conversion, often called casting, is occasionally essential. However, traditionally it is seriously overused and a major source of errors.

If you feel tempted to use an explicit type conversion, take the time to consider if it is really necessary. In C++, explicit type conversion is unnecessary in most cases when C needs it and also in many cases in which earlier versions of C++ needed it. In many programs, explicit type conversion can be completely avoided; in others, its use can be localized to a few routines.

The problem the original poster was having is a wonderful example of why casting is "a major source of errors", often "unnecessary" and should be avoided if at all possible. There is almost no reason to cast unless you are working with "raw memory", which you shouldn't be doing unless you are programming on a very low level and writing device drivers, operating systems, etc. The exceptions are casting away constness and casting down a class hierarchy. Even these situations can sometimes be avoided with better design. Lots of casts are a sure sign of a design flaw on the part of the programmer.
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
makeme said:
You are casting from floating point to integer, which just throws away the decimal portion. Technically, 1 / .005 is NOT 200 and IS 199.999999999999. If you cast that to an integer you SHOULD get 199. If you want 200 you need to round it to the nearest integer. In C / C++ you can use floor to do this. This program gives the output 200 on my Mac and PC.
This is curious because the floor of 199.9999999999999 (to a finite number of decimal places) is 199, not 200. So I guess somehow floor() in GNU libc is smart enough to know that it really means 200.0. We've established that on Intel and PPC the result of the division is (after re-ordering the bits) the same, so surely the cast should have some kind of deterministic result.

I understand that the naked cast here was probably bad style; that choice came only from my lack of familiarity with the libc rounding functions when I wrote the offending code, and an observation that the cast seemed to have the effect I was looking for. So that's how I implemented this (I was using the cast to do basically what trunc() does -- rounding towards 0).

I disagree with makeme that explicit type conversion has almost no justifiable use --- I agree that it's something that should be used sparingly, but I have often used casts to eliminate compilation warnings that creep up when one relies on implicit type conversion. Say you're linking against a library function that returns an int which is known to be non-negative, but you're comparing it to an unsigned int in your code... without explicitly casting one or the other of these values, a comparison will result in a compiler warning ('comparison between signed and unsigned'). As it is, the libc rounding functions do not return integer types, but integer-valued doubles, so even in the present case some kind of type conversion is required (whether explicit or implicit).

Style matters aside, I would still like to understand what actually went on here; unlike makeme, it does surprise me that this particular operation has two different results on different platforms; or I guess it would surprise me if that's the intended behaviour. I just have the feeling that the behaviour of this is probably specified in the standard, and if so, there's a bug in one or the other of the compilers.

So I have fixed the offending code and I understand what I did wrong, but now I'm trying to understand why this is platform-dependent. It doesn't seem like it ought to be (to me).
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
Mechcozmo said:
This isn't a Pentium 100 or Pentium 75?
It's not a first-generation Pentium, if that's what you're asking. It's either a Pentium III or Pentium IV (not sure which).
 

makeme

macrumors member
Jul 16, 2005
48
0
rinseout said:
I disagree with makeme that explicit type conversion has almost no justifiable use --- I agree that it's something that should be used sparingly, but I have often used casts to eliminate compilation warnings that creep up when one relies on implicit type conversion. Say you're linking against a library function that returns an int which is known to be non-negative, but you're comparing it to an unsigned int in your code... without explicitly casting one or the other of these values, a comparison will result in a compiler warning ('comparison between signed and unsigned').

Those warnings are to remind you that you are making a conversion or comparison that COULD be problematic. If this is NOT true, you should ignore them or turn them off, NOT cast to make them go away. Until compilers become smart enough to know the difference, they will and SHOULD warn you if you do this.

rinseout said:
As it is, the libc rounding functions do not return integer types, but integer-valued doubles, so even in the present case some kind of type conversion is required (whether explicit or implicit).

Implicit type conversion is your friend.

Code:
#include <iostream>
using namespace std;

int main()
{
        int test[201];
        for (int n = 0; n <= 200; n++)
                test[n] = n;

        int index = 1 / .005;
        cout << test[index];
        return 0;
}

This produces 200, which is the value of test[1 / .005] or test[200]. Yes, there is a warning and in this case it should be ignored or selectively turned off. I don't see what the problem is and why you should need to cast.

For what it's worth, I also don't understand why you find it necessary to stick the result of such a mathematical calculation into an unsigned int in the first place.

I also don't understand why this would not always produce 200 no matter what:

Code:
#include <iostream>
using namespace std;

int main()
{
        cout << 1 / .005;
        return 0;
}

Does it not on some compiler and or platform? If so, I suggest you file a bug report with the compiler developer.
 

makeme

macrumors member
Jul 16, 2005
48
0
rinseout said:
I disagree with makeme that explicit type conversion has almost no justifiable use --- I agree that it's something that should be used sparingly, but I have often used casts to eliminate compilation warnings that creep up when one relies on implicit type conversion. Say you're linking against a library function that returns an int which is known to be non-negative, but you're comparing it to an unsigned int in your code... without explicitly casting one or the other of these values, a comparison will result in a compiler warning ('comparison between signed and unsigned'). As it is, the libc rounding functions do not return integer types, but integer-valued doubles, so even in the present case some kind of type conversion is required (whether explicit or implicit).

The following code produces no warnings and there is no explicit type conversion.

Code:
#include <iostream>
using namespace std;

int main()
{
        cout << 1 / .005;
        return 0;
}

The reason is that operator<< is overloaded to take a double. This kind of type safety is what C++ is all about. There is no need to forcefully convert a double into a char* just to output it. You seem to be trying to do the latter by unnecessarily stuffing a double into an unsigned int. Ask yourself: why am I stuffing a double into an unsigned int?

Perhaps you can design and create a class or collection of classes to do what you need. Have them BE the types that you need. They don't have to have these limitations that the built in types have. Do you need an integral type that is just a number and any number? You can do that. Make a number class and overload the constructor, copy constructor and assignment operator to take signed and unsigned ints, long ints, short ints, doubles and everything.

This is how you program in C++. C++ is class design, not procedure design. If you do the latter, write in C and cast your heart out.
 

rinseout

macrumors regular
Original poster
Jan 20, 2004
160
0
makeme, I don't know why you're trying to lecture me on object-oriented programming; believe me, I know all about OO vs. procedural philosophy. I understand that you hate casts, and I have already admitted that the cast was not the correct solution for my problem. The only reason this thread continued is because clearly there was something strange going on that still nobody understands --- i.e., casting 1.0/0.005 as a double to an unsigned int yields 199 on one platform, but 200 on another. Your refrain is just "don't cast because casting is bad and that's not how to best use C++"; fair enough, but since casting is part of the language some of us were trying to understand this.

We're going to have to disagree on casting to avoid warnings, too. My philosophy on warnings is that you want to make them go away without turning them off, so that when you do encounter warnings you make a conscious decision about whether you mean to do what you're doing; if you do, then put in a cast so that you acknowledge it, and people who read the code know you've acknowledged it. If you have a page of warnings for a large project, how are you ever going to notice the one that counts if you turn them off?
 

makeme

macrumors member
Jul 16, 2005
48
0
rinseout said:
My philosophy on warnings is that you want to make them go away without turning them off, so that when you do encounter warnings you make a conscious decision about whether you mean to do what you're doing

I agree and I apologize for stating otherwise.


rinseout said:
unlike makeme, it does surprise me that this particular operation has two different results on different platforms

It does surprise me and I apologize for sounding as if it did not.
 

makeme

macrumors member
Jul 16, 2005
48
0
Understand you must, these three things.

1. If cout << 1 / .005 does not produce 200, then that is a compiler bug that needs to be reported and fixed. End of story.

2. Bjarne Stroustrup, the creator of C++ says that explicit type conversion has almost no justifiable use in C++, a language that he created. I agree with him. I merely quoted that passage from his book.

3. If you are just doing simple integer math, then there is no need for casting. Just do this: int i = int(1 / .005); This constructs a new integer from a double. Notice there is no cast and no warning. The built-in types are classes with overloaded constructors taking everything you could possibly want. This is one of the many, many ways that C++ minimizes and in some cases eliminates the need for explicit type conversion.
 