Hey all,
I'm almost finished reading a book on Objective-C.
Book: Big Nerd Ranch - Objective-C Programming
Chapter: 33
It's going into bitwise operations, and it starts out by giving us base 10 examples, then goes into binary (base 2), and finally hex (base 16).
So I am having a hard time getting my head around this chapter. It seems my lack of math skills is haunting me. However, I want to understand this perfectly; otherwise I am failing myself. I am generally good with numbers once I understand the basics.
So, let's start off at the beginning: Base 10 numbers.
I seem to understand this 100% - for example, a number like 123456 can be represented in decimal like so:
1 * 10^5 + 2 * 10^4 + 3 * 10^3 + 4 * 10^2 + 5 * 10^1 + 6 * 10^0
I think this gets me back to 123456?
Now we go down to base 2 numbers:
What I am trying to figure out is this:
in binary I can work out these two numbers:
A. 10100010 = 162
B. 11001110 = 206
Now, how I got this was by using the diagram presented in this chapter, which looked something like this:
| 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
|   0 |  0 |  1 |  1 | 1 | 1 | 0 | 0 |
What I am trying to understand is: why does the top row run (from right to left) from 1 up to 128?
The pattern shows that we multiply the number to the right by the power of 2 (base 2 numbers) - but if I try to follow that rule it goes wrong quickly...
Eg:
1^2 = 1
2^2 = 4 ...so far so good...
4^2 = 16...
So where does 8 come into it?
If I look at it from right to left, doubling up each time:
1 x 2 = 2
2 x 2 = 4
4 x 4 = 16
Where is 8?
I may have this completely wrong, but I need to understand this...
This is before I even get to representing numbers in hex (base 16), so I need to get the binary part down first.
Can anyone shed some light on this for me?
Thanks all