# Bitwise shift operators

Discussion in 'Mac Programming' started by CarlisleUnited, Jan 21, 2010.

1. ### CarlisleUnited macrumors 6502

Joined:
Jan 30, 2007
Location:
Nederland
#1
Why are they useful? I know how they work, but in the end don't you just multiply or divide a number by 2?

2. ### LtRammstein macrumors 6502a

Joined:
Jun 20, 2006
Location:
Denver, CO
#2
What a silly question.

Bitwise operations are pretty cool beyond just multiplying or dividing by two. In certain algorithms you can speed up execution because of them: bitwise operators map directly onto operations the CPU's ALU performs in hardware. Depending on how you use them, you can get some awesome results in less time. On a normal Mac/PC you usually won't notice the difference, but if you write parallel code for supercomputers you can.

Also, under certain frameworks in OS X and Windows .NET, there are APIs that expect bitwise operations. For example, in .NET, if you want one button to handle multiple behaviors, you combine option flags with a bitwise OR. That way you don't have to write a new handler for each potential combination.

In microcontroller/microprocessor programming you end up doing bitwise operations all the time comparing registers to values to achieve a result.

Hopefully this helps. If I said something wrong or left out detail, please reply with a correction.

3. ### lee1210 macrumors 68040

Joined:
Jan 10, 2005
Location:
Dallas, TX
#3
What if your datatype isn't an int?

While a compiler may optimize it, a bitshift on a CPU is much cheaper than an actual multiplication or divide.

If the right-hand operand is a variable, then you would have to do a lookup, calculation, etc. to see what to multiply or divide by. If you know how many bits you want to shift, why use a different operation? Multiplying or dividing by powers of 2 is not more fun than a bit shift.

Sometimes your intentions in code will be more clear/easier to read if you use a bit shift. For example, if you want to test that bit 14 is high, (x & (1 << 14)) != 0 is easier for me to decipher than (x & 16384) != 0. (Note the parentheses: in C, != binds tighter than &, so without them the expression doesn't test the bit you think it does.)

Nowadays it's not that common to pack statuses into an int or short; you just use a BOOL or some other typedef, accept the wasted bits, and be done with it. That has not always been so, though: if you are storing 16 discrete flags in one short, it's pretty critical that you know how to shift, mask, and test bits.

Again, not common, but if you are storing 4 8-bit values in a 32-bit int, shifting/masking to pull out the values would be pretty commonplace.

Rarely is something there that is wholly useless. Uses might not readily present themselves to you, and they may be outdated, but they certainly exist.

-Lee

4. ### Sander macrumors 6502

Joined:
Apr 24, 2008
#4
It may also be worth pointing out that on older CPUs, multiply/divide instructions weren't available (and on many microcontrollers, they still aren't). Compilers nowadays are probably smart enough to optimize away a division when they "see" it can be fully represented in a bit shift, but with the first compilers this was not the case.

So the remaining reason for using them in modern code is, as lee1210 said, to make it clear to a reader of the code what it is you're trying to do. If I'm extracting the "green" value out of an RGBA pixel, code like (rgba >> 16) & 0xff is an idiom everyone used to dealing with bitmaps will recognize, whereas rgba/65536 - 256*(rgba/16777216) looks very strange.

5. ### CarlisleUnited thread starter macrumors 6502

Joined:
Jan 30, 2007
Location:
Nederland
#5
Thanks for the explanation guys, just came across bit shifts and really wasn't sure what they were for.

6. ### chown33 macrumors 604

Joined:
Aug 9, 2009
Location:
Sailing beyond the sunset
#6
In general, multiplication and division are implemented with algorithms in which shifting is one of the basic operations. This is true whether the implementation is in software or in hardware.