
JonnyFrond

macrumors newbie
Original poster
Jan 15, 2011
Hi there,

Can anyone out there help me with this? I have just written a small program in C++ to add two numbers together and find the square root.

The problem is that I have declared my two variables as int, yet Xcode accepts this. In Visual Studio it comes out, quite correctly, with an error, as the variables should be declared as doubles.

????

Any input on this would be greatly appreciated.

Kind regards

Jonny
 
Here's the code. As you can see it is very simple, yet it is not what I would expect. This was given to us as a demo of an error, yet on my Mac I didn't get one.

It's going to make it hard to learn if Xcode has its own version of C++ that is not cross-platform. I am expecting to be using UNIX when I finish my degree, not OS X.

#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

int main()
{
    int a, b;
    cout << "Enter value a and b " << endl;
    cin >> a >> b;                    // read user input
    cout << fixed << setprecision(2); // set the format before printing the result
    cout << "The total is " << sqrt(a + b) << endl;

    cin.ignore();
    return 0;
}
 
It's going to make it hard to learn if Xcode has its own version of C++ that is not cross-platform.
Using Visual Studio as the cross-platform baseline is more likely to cause problems.
I am expecting to be using UNIX when I finish my degree, not OS X.
OS X is UNIX.

Code:
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

int main()
{
    int a, b;
    cout << "Enter value a and b " << endl;
    cin >> a >> b;                    // read user input
    cout << fixed << setprecision(2); // set the format before printing the result
    cout << "The total is " << sqrt(a + b) << endl;

    cin.ignore();
    return 0;
}

Are you hoping to get a complaint about the coercion of a+b to match one of the candidates of sqrt (float, double, or long double)?

-Lee
 
It's going to make it hard to learn if Xcode has its own version of C++ that is not cross-platform. I am expecting to be using UNIX when I finish my degree, not OS X.

Your assumption is wrong; errors or warnings at compile time come down to the compiler, not the language. OS X does use standard C++, and OS X is UNIX: http://www.opengroup.org/openbrand/register/xy.htm The compiler used for C++ is the GNU Compiler Collection, or GCC for short.
 
Hi there,

Can anyone out there help me with this? I have just written a small program in C++ to add two numbers together and find the square root.

The problem is that I have declared my two variables as int, yet Xcode accepts this. In Visual Studio it comes out, quite correctly, with an error, as the variables should be declared as doubles.

????

Any input on this would be greatly appreciated.

Kind regards

Jonny

Visual Studio is actually wrong to throw the error. If your professor told you it's an error, then he doesn't know the language and isn't worth his weight in carbon. Moreover, the conversion won't cause any problems, because doubles can represent ints without loss of precision.

http://www.cplusplus.com/doc/tutorial/typecasting/

Implicit conversions do not require any operator. They are automatically performed when a value is copied to a compatible type ... Standard conversions affect fundamental data types, and allow conversions such as the conversions between numerical types (short to int, int to float, double to int...)
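
To illustrate what that passage describes, here's a small sketch of my own (not code from the tutorial); note that the int-to-double direction is exact, while double-to-int silently truncates:

Code:
#include <iostream>

int main()
{
    int    i = 42;
    double d = i;   // implicit int -> double: exact, no cast needed
    int    j = 3.7; // implicit double -> int: compiles, but truncates to 3

    std::cout << d << " " << j << std::endl; // prints "42 3"
    return 0;
}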
 
If your end goal is UNIX compatibility, don't trust Visual Studio. Trust OS X's version of GCC. My code from OS X always compiles on my university's Linux/UNIX servers. Some of my classmates' VS code? Not so much.

And as others have said, there's nothing wrong with that code. Standard C/C++ will convert the int to a double internally, since sqrt() doesn't take ints.
 
Thank you guys, all your answers have helped me understand this. I am at uni, and they use Visual Studio. I have a Mac and have decided to use Xcode, for a few reasons, some of which you have outlined here.

As a mature student, I have come to learning from a slightly different place than others, and I am trying to set myself up for industry as opposed to just passing my degree.

In my mind, using Visual Studio will lead to bad programming practices and Windows-only libraries. Using Xcode may lead to writing programs that require a lot of fiddling on the uni computers in order to submit something that is markable.

Thanks for your input.

Jonny
 
The error is probably not about the type coercion; it's about ambiguity. Visual Studio is right to complain. If you try this code in Comeau (regarded as the most standards-compliant compiler there is), you'll get the same error. Note that you can try this online.

I don't have GCC at hand here, but I'm surprised it would accept this.
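
To make the ambiguity concrete, here's a minimal sketch of my own (not code from the thread). With only the three floating-point overloads in scope, an int argument converts equally well to each of them, so the call is ambiguous; an explicit cast resolves it:

Code:
// f stands in for sqrt with its three standard overloads.
float       f(float x)       { return x; }
double      f(double x)      { return x; }
long double f(long double x) { return x; }

int main()
{
    int n = 9;
    // f(n);                   // error: ambiguous - int converts equally
    //                         // well to float, double, and long double
    f(static_cast<double>(n)); // OK: exact match for the double overload
    return 0;
}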
 
Hmm... I was able to access GCC in the meantime, and the reason it accepts this is that in its cmath header file you'll find this:

Code:
  using ::sqrt;

  inline float
  sqrt(float __x)
  { return __builtin_sqrtf(__x); }

  inline long double
  sqrt(long double __x)
  { return __builtin_sqrtl(__x); }

  template<typename _Tp>
    inline typename __gnu_cxx::__enable_if<__is_integer<_Tp>::__value,
                                           double>::__type
    sqrt(_Tp __x)
    { return __builtin_sqrt(__x); }

Your integer sqrt-call triggers the template version.

Apparently the other compilers use libraries without this trick.

Regarding standards-compliance: C++ adds float and long double versions of the functions in math.h (in C, these functions have different names since you can't overload functions in C; in this case they would be sqrtf() and sqrtl()).

An integer-taking sqrt() function is not part of the standard library.

In short, GCC provides more than is strictly required.
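
If you want to see how the trick works without GCC's internal __gnu_cxx helpers, here's a sketch of my own using C++11's <type_traits> (my_sqrt is a made-up name; this is an illustration, not the actual libstdc++ source):

Code:
#include <cmath>
#include <iostream>
#include <type_traits>

// An extra overload enabled just for integer types, which forwards to the
// double version - mirroring what libstdc++ does with __enable_if.
template <typename T>
typename std::enable_if<std::is_integral<T>::value, double>::type
my_sqrt(T x)
{
    return std::sqrt(static_cast<double>(x));
}

int main()
{
    std::cout << my_sqrt(16) << std::endl; // int argument selects the template; prints 4
    return 0;
}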
 
As a mature student, I have come to learning from a slightly different place from others and I am trying to set myself up more for industry as opposed to just passing my degree.

Ah, you're going to be completely useless anyway when you get out of school and we have to retrain you :D

In my mind, using Visual studio will lead to bad programming practices and windows only libraries.

You'll get plenty of sources for bad coding practices; I doubt VS, which is a fairly good IDE (one of my favorites, actually), will be a significant contributor to that :)
 
I doubt VS, which is a fairly good IDE (one of my favorites, actually), will be a significant contributor to that :)

It could be if they are teaching managed code (C++/CLI) instead of "regular" C++. Then of course you'd have to use Visual Studio. (NOTE: I'm not saying it's "bad", just non-standard and non-portable.)


B
 
About the error: it's always best to use explicit conversion. Not just in C, but always. It never does any harm except to use up a few more bytes of disk space, and it lets the reader, maybe years later, know that you thought about types. It documents your intent.
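
Here's a quick before/after sketch of what I mean, using the OP's variables (my own example, not the original assignment):

Code:
#include <cmath>
#include <iostream>

int main()
{
    int a = 3, b = 6;

    // Implicit: leaves the compiler/library to pick a conversion; this is
    // the very call some compilers reject as ambiguous.
    // std::cout << std::sqrt(a + b) << std::endl;

    // Explicit: the cast documents that double arithmetic was intended,
    // and it compiles the same way everywhere.
    std::cout << std::sqrt(static_cast<double>(a + b)) << std::endl;
    return 0;
}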

Others have already pointed out that if the goal is to move to UNIX, then (1) the compiler inside Xcode is the most common compiler used on UNIX/Linux systems, and (2) Mac OS X is UNIX (plus a bit more added on top).

Why is Visual Studio different? Microsoft has a vested business interest in getting people locked into their software. It's not that they hire stupid engineers who can't write a compiler; the differences are intentional.
 
It's not necessarily a marketing conspiracy that dictates differences in compiler warnings. One compiler might make some assumptions ("ah, I'm sure he really meant to cast these to doubles") while another insists that you do it yourself, throwing a warning or error. The one that makes assumptions is more convenient for the programmer, but this convenience comes at a cost: there is added ambiguity. That can lead to subtle errors where you assumed the compiler "should" be doing one thing while it in fact assumed something entirely different. For example, you might write code where you expect it to convert your ints to doubles and give you a decimal result, while it assumes you intended to keep everything in the integer domain and throws away the decimal part. I've run into this many times.
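
The classic instance of that pitfall is integer division silently discarding the fraction; a small sketch of my own (not from the thread):

Code:
#include <iostream>

int main()
{
    int hits = 3, tries = 4;

    double wrong = hits / tries;                      // int / int yields 0, THEN becomes 0.0
    double right = static_cast<double>(hits) / tries; // double / int yields 0.75

    std::cout << wrong << " vs " << right << std::endl; // prints "0 vs 0.75"
    return 0;
}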

Forcing you to cast explicitly can be a pain, but it helps you remember which operators require which types, and it avoids these sorts of errors. The cost is a bit of extra work for the programmer, but that may not be a bad thing. Some languages, such as Ada, take it to an extreme, forcing you to be aware of ALL the different types you use in your program, and when it is and isn't OK to convert between them. A huge pain, but it makes for fewer bugs in the code!
 