int to binary and vice versa

Discussion in 'Mac Programming' started by Soulstorm, Nov 1, 2006.

  1. Soulstorm macrumors 68000

    Joined:
    Feb 1, 2005
    #1
    I want to create some functions that can take a number and convert it to binary, and vice versa.

    The function for the 'long to binary' conversion is this:
    Consider that I have a char *_intBits[32] declared in the class that contains this function, among others.

    Code:
    void binary32::updateVector(){
    	int j=0,i = sizeof(long)*8-1;
    	long mask;
    	for (; i>=0; i--) {
    		mask = _intNum & (1<<i);
    		if(mask == 0)
    			_intBits[j] = 0;
    		else
    			_intBits[j] = 1;
    		j++;
    	}
    }
    and the function for the binary-to-int conversion is this:
    Code:
    int binaryToInt(string s){
    	int i,j=0;
    	int sum;
    	for (i=s.size(); i>=0; i--) {
    		if(s[i] == '1')
    			sum = sum + pow(2.0,j);
    		j++;
    	}
    	return sum;
    }
    But I am experiencing some problems. When I feed the result of the first function into the second one, it does not give me the proper result. I cannot find the mistake... Can anyone help?
     
  2. iMeowbot macrumors G3

    Joined:
    Aug 30, 2003
    #2
    binaryToInt() is doubling your answer, right? That's a good old-fashioned off-by-one error.

    In this line:
    Code:
    for (i=s.size(); i>=0; i--)
    you are setting the variable i to one higher than you want by using s.size(). Remember, indexes start at 0, not 1.
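
    For illustration, here is a corrected sketch along those lines. One extra assumption beyond the fix above: sum is initialized to zero, which the original snippet leaves unstated.
    Code:
    #include <string>
    #include <cmath>
    using std::string;
    
    int binaryToInt(string s){
    	int j = 0;
    	int sum = 0;                              // assumed: start the running total at zero
    	for (int i = s.size() - 1; i >= 0; i--) { // last valid index is size()-1
    		if (s[i] == '1')
    			sum = sum + (int)pow(2.0, j);     // same power-of-two accumulation as before
    		j++;
    	}
    	return sum;
    }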

    [edit: also, there isn't enough of your binary32 class to really tell what is going on there, but are you sure you want to store 0 or 1 into those chars, rather than '0' and '1' ? ]
     
  3. gnasher729 macrumors P6

    Joined:
    Nov 25, 2005
    #3
    1. Give us a very good reason why _intBits starts with an underscore character. Then read up on identifiers reserved for the implementation.

    2. Give us a very good explanation of why this compiles at all. Any strange warnings coming from your compiler? You said you declared "char* _intBits[32];". In that case _intBits[j] = 1; should not compile, as it tries to assign an int to a char*. Then read up on null pointer constants to understand why _intBits[j] = 0; does compile.
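
    To illustrate that last point with a hypothetical snippet (not taken from the posted code): with an array of char pointers, assigning the literal 0 compiles because 0 is a null pointer constant, while any other integer literal does not.
    Code:
    char* bits[32];          // hypothetical stand-in for the declaration as posted
    
    void demo(int j) {
    	bits[j] = 0;         // OK: the literal 0 is a null pointer constant
    	// bits[j] = 1;      // error: invalid conversion from 'int' to 'char*'
    }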
     
  4. kainjow Moderator emeritus

    Joined:
    Jun 15, 2000
    #4
    I prefix my class variables with underscores all the time and haven't had any issues with it... yet :confused:

    How else do you differentiate class variables from local variables? Hungarian notation? :eek:
     
  5. AlmostThere macrumors 6502a

    #5
    Er, stop reinventing the wheel, learn about the standard library and use a std::bitset :)

    (Sorry, not very helpful if this is a class project or something, though...)
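
    For reference, a minimal std::bitset sketch of both conversions (an illustration only, not code from this thread):
    Code:
    #include <bitset>
    #include <iostream>
    #include <string>
    
    int main() {
    	std::bitset<32> bits(37);                        // number -> binary digits
    	std::cout << bits << '\n';                       // prints 37 as a 32-digit binary string
    
    	std::bitset<32> parsed(std::string("100101"));   // binary string -> number
    	std::cout << parsed.to_ulong() << '\n';          // prints 37
    	return 0;
    }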
     
  6. iMeowbot macrumors G3

    Joined:
    Aug 30, 2003
    #6
    An underscore followed by a capital letter, or a name containing a pair of adjacent underscores, is reserved everywhere. An underscore followed by a lowercase letter is fine, unless the name is at global scope. You can usually get away with that last case, except in comp.lang.c++, where no one cares whether programs can actually do anything, as long as they are compliant.
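
    A few concrete cases of that rule, for illustration (the names here are hypothetical):
    Code:
    // _Foo      -> reserved everywhere (underscore followed by a capital)
    // foo__bar  -> reserved everywhere (contains a double underscore)
    // _foo      -> reserved only as a name at global scope
    
    class example {
    	int _count;                        // underscore + lowercase as a class member: fine
    public:
    	int count() { return _count; }
    };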
     
  7. GeeYouEye macrumors 68000

    Joined:
    Dec 9, 2001
    Location:
    State of Denial
    #7


    Give the guy a break. For #1, underscore-lowercase is valid for any application, especially private data. For #2, it's pretty obvious he meant "char * _intBits of length 32" or "char _intBits[32]", since he didn't actually copy/paste the declaration into the post.
     
  8. Soulstorm thread starter macrumors 68000

    Joined:
    Feb 1, 2005
    #8


    Wow, relax man! :)

    1) Why shouldn't it? Well, a very good reason, which you probably haven't thought of, is that it's private data of a class. I have named it '_intBits' because that's the way I'm used to doing it, as are many other programmers. That way, the accessor function for the variable can be a simple 'intBits()' without the underscore. See how nice that is?

    2) It compiles perfectly, and it should compile. But the mistake here is mine, because I typed some things wrong in the thread. _intBits is an array of int, not char. I declared 'int* _intBits'; I made the mistake of not mentioning that at first. I'm sorry, but I wasn't at my computer at the time, and... sh*t happens :) . The _intBits array of integers is allocated in the constructors.

    Here are my 2 files:
    binary32.h
    Code:
    #include <iostream>
    #include <string>
    #include <vector>
    #include "definitions.h"
    
    class binary32{
    	stdVector<int>		_intBits;
    	unsigned short	_bitNumber;
    	long			_intNum;
    	char*			_hexCode;
    public:
    	binary32(long);
    	binary32(stdString);
    	void reset();
    	~binary32();
    	void updateVector();
    	void show();
    	
    	void import(stdString);
    	void import(long);
    	void setHex(long);
    	
    	stdVector<int>		intBits(){return _intBits;}
    	unsigned short	bitNumber(){return _bitNumber;}
    	long			intNum() {return _intNum;}
    	char*			hexCode(){return _hexCode;}
    };
    
    binary32.cpp
    Code:
    #include "binary32.h"
    
    binary32::binary32(long a){
    	_hexCode = 0;
    	_intNum = a;				// store the value this object represents
    	for (int i=0; i<32; i++) {
    		_intBits.push_back(0);
    	}
    	reset();
    }
    
    binary32::binary32(stdString s){
    	_hexCode = 0;
    	for (int i=0; i<32; i++) {
    		_intBits.push_back(0);
    	}
    	import(s);
    }
    
    
    void binary32::reset(){
    	if(_hexCode){
    		delete [] _hexCode;
    		_hexCode = 0;
    	}
    	_bitNumber = sizeof(long)*8;
    	updateVector();
    }
    
    binary32::~binary32(){
    	//delete [] _intBits;
    	delete [] _hexCode;
    }
    
    void binary32::updateVector(){
    	int j=0,i = sizeof(long)*8-1;
    	long mask;
    	for (; i>=0; i--) {
    		mask = _intNum & (1<<i);
    		if(mask == 0)
    			_intBits[j] = 0;
    		else
    			_intBits[j] = 1;
    		j++;
    	}
    	setHex(_intNum);
    }
    
    void binary32::show(){
    	int i;
    	stdCout << _hexCode << '\n';
    	for (i=0; i<_bitNumber; i++) {
    		stdCout << _intBits[i];
    	}
    	stdCout << '\n' << _intNum << '\n';
    }
    
    void binary32::import(stdString s){
    	if (s.size() < 32) {
    		for (int k=s.size(); k<32; k++) {
    			s.insert(0,"0");
    		}
    	}
    	else if (s.size()>32) {
    		stdCout << "string to be processed is larger than 32: error\n";
    		return;
    	}
    	int i,j=0;
    	long sum=0;
    	for (i=s.size()-1; i>=0; i--) {
    		if(s[i] == '1'){
    			sum = sum + pow(2.0,j);
    		}
    		j++;
    	}
    	_intNum = sum;
    	reset();
    }
    
    void binary32::import(long s){
    	_intNum = s;
    	reset();
    }
    
    void binary32::setHex(long _num)
    {
    	char *hdigits;						//char array to store the result
    	unsigned char size = sizeof(long)*2;//2 hex digits for each byte (8 bits)
    	hdigits = new char[size+2];			//first character for sign, last for null
    	const char *hlookup = "0123456789abcdef";	//lookup table stores the hex digits at their
    												//corresponding index
    
    	long temp = _num;					//temporary placeholder of the same type as _num
    	if(_num<0)
    	{
    		hdigits[0]='-';					//if _num is negative, make the sign negative
    		_num *= -1;						//and make _num positive to clear (zero) the sign bit
    	}
    	else
    		hdigits[0]=' ';					//if _num is positive, make the sign blank (space)
    	char mask = 0x000f;					//mask clears (zeroes) all bits except the lowest 4,
    										//which represent a single hex digit
    	for(char x=0;x<size;x++)
    	{
    		temp = _num;					//temp is assigned _num each time
    		temp >>= (4*x);					//shift it in multiples of 4 bits each time
    		temp &= mask;					//mask the value so only the lowest 4 bits remain
    		hdigits[size-x]=hlookup[temp];	//temp now holds a numeric value that is used directly
    										//as an index into the lookup table
    	}
    	hdigits[size+1]= '\0';				//the last element stores the null terminator
    	delete [] _hexCode;
    	_hexCode = hdigits;
    }
    and some definitions:
    Code:
    #ifndef DEFINITIONS_H
    #define DEFINITIONS_H
    
    /*	
    	Definitions of standard and widely used expressions.
    	This is in order to avoid conflicts with namespaces
    */
    
    #define stdString	std::string
    #define stdCout		std::cout
    #define stdVector	std::vector
    #define stdCin		std::cin
    #define stdIfstream	std::ifstream
    #define stdOfstream	std::ofstream
    #define stdOstream	std::ostream
    #define stdIstream	std::istream
    #define stdMap		std::map
    #define stdCerr		std::cerr
    
    #define stdIstringstream std::istringstream
    #define stdOstringstream std::ostringstream
    #define stdStringStream	 std::stringstream
    
    /*	
    	Mathematical definitions
    	These are definitions of constants
    	for mathematical operations
    */
    
    #define PI			3.1415
    
    
    #endif
    Why am I using the definitions and declaring a binary32 class instead of just writing functions? Because this is part of a larger framework that will help me build a program I want in Cocoa. Note that this is a work in progress and I know there are many things that need cleaning up. Why am I using the "definitions.h" file? Because I don't want to mess things up with 'using namespace std', but on the other hand I find it tiring to write 'std::something' every time.
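
    For comparison (a sketch of an alternative, not the approach used above), targeted using-declarations would avoid both the macro layer and a blanket 'using namespace std':
    Code:
    #include <iostream>
    #include <string>
    #include <vector>
    
    // Pull in only the names that are actually needed.
    using std::cout;
    using std::string;
    using std::vector;
    
    // cout, string and vector can now be written without the std:: prefix,
    // while the rest of namespace std stays out of scope.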

    I should thank iMeowbot for his answer. The problem is fixed.

    Notes:
    1) The integer-to-hex function was found on the internet, as you may have noticed (a standard-library sketch follows after these notes).
    2) I have abandoned the int* _intBits idea, since a vector will help me with some other things I have in mind...
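
    On note 1, a standard-library alternative for the long-to-hex step would be std::ostringstream with the std::hex manipulator; a minimal sketch (an assumption of this note, not the function used above):
    Code:
    #include <sstream>
    #include <string>
    
    std::string toHex(long n) {
    	std::ostringstream out;
    	if (n < 0) {
    		out << '-';          // print a leading minus for negative values
    		n = -n;
    	}
    	out << std::hex << n;    // hex digits of the magnitude
    	return out.str();
    }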
     
