Here's an example, which assumes (as is always the case in Mac programming) that we're using the
two's complement method for storing signed numbers. We'll pretend we have only 4 bits in our integers:
How the 16 possible unsigned bit patterns are interpreted:
0000 means 0
0001 means 1 (which is the same as +1)
0010 means 2 (which is the same as +2)
0011 means 3 (which is the same as +3)
0100 means 4 (which is the same as +4)
0101 means 5 (which is the same as +5)
0110 means 6 (which is the same as +6)
0111 means 7 (which is the same as +7)
1000 means 8 (which is the same as +8)
1001 means 9 (which is the same as +9)
1010 means 10 (which is the same as +10)
1011 means 11 (which is the same as +11)
1100 means 12 (which is the same as +12)
1101 means 13 (which is the same as +13)
1110 means 14 (which is the same as +14)
1111 means 15 (which is the same as +15)
Conclusion: We can represent 16 different integers, but none of them are negative.
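
To make this concrete, here's a small C sketch (our illustration, not part of the example above) that walks through all 16 patterns and prints each one alongside its unsigned interpretation. Under the unsigned rule, the pattern simply is the value:

    #include <stdio.h>

    /* Print a 4-bit pattern as binary digits, e.g. 1010. */
    static void print_bits(unsigned int pattern)
    {
        int bit;
        for (bit = 3; bit >= 0; bit--)
            putchar(((pattern >> bit) & 1) ? '1' : '0');
    }

    int main(void)
    {
        unsigned int pattern;

        /* Unsigned interpretation: the 4-bit pattern is the value itself. */
        for (pattern = 0; pattern < 16; pattern++) {
            print_bits(pattern);
            printf(" means %u\n", pattern);
        }
        return 0;
    }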
How the 16 possible signed bit patterns are interpreted:
0000 means 0
0001 means 1 (which is the same as +1)
0010 means 2 (which is the same as +2)
0011 means 3 (which is the same as +3)
0100 means 4 (which is the same as +4)
0101 means 5 (which is the same as +5)
0110 means 6 (which is the same as +6)
0111 means 7 (which is the same as +7)
1000 means -8
1001 means -7
1010 means -6
1011 means -5
1100 means -4
1101 means -3
1110 means -2
1111 means -1
Conclusion: We can represent 8 different non-negative integers and 8 different negative integers.
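
The two's complement rule behind the signed table is easy to state in code: if the high bit (the 8's place) is set, the pattern stands for its unsigned value minus 16 (that is, minus 2 to the 4th); otherwise it stands for the unsigned value itself. Here's a small C sketch of that rule (again our illustration, not from the example):

    #include <stdio.h>

    /* Print a 4-bit pattern as binary digits, e.g. 1010. */
    static void print_bits(unsigned int pattern)
    {
        int bit;
        for (bit = 3; bit >= 0; bit--)
            putchar(((pattern >> bit) & 1) ? '1' : '0');
    }

    int main(void)
    {
        unsigned int pattern;

        /* Two's complement: a set high bit (0x8) means the pattern
           represents (pattern - 16); otherwise it represents itself. */
        for (pattern = 0; pattern < 16; pattern++) {
            int value = (pattern & 0x8) ? (int)pattern - 16 : (int)pattern;
            print_bits(pattern);
            printf(" means %d\n", value);
        }
        return 0;
    }

Run it and you get exactly the signed table above, from 0000 means 0 down to 1111 means -1.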
Notice that the unsigned bit patterns don't really represent unsigned numbers. Each bit pattern in both sets represents an integer, and integers by nature always have a sign; "unsigned" really just means the sign is never negative. People tend to gloss over the distinction.