A little OT

Discussion in 'Mac Programming' started by mdeh, Feb 26, 2010.

  1. mdeh macrumors 6502

    Jan 3, 2009
    Hi all,
    I am reading Piper and Murphy's Cryptography. There is one sentence that does not quite make sense. The text is talking about decimal and binary numbers.

    They show an example of converting a decimal number to its binary equivalent, and show how 53 and 86 are converted. "53" is a 6-bit number and "86" is an example of a 7-bit number.

    Now the confusing part. The book then states that, in general, a number with d decimal digits corresponds to a binary number of roughly "3.32d" bits.
    Can anyone elucidate the "3.32d", for example for 1 digit, or 20, etc.? It's bugging me, even though in the big picture it really does not affect anything. :D

    Thanks in advance
  2. kpua macrumors 6502

    Jul 25, 2006
    Just a guess, but is it just multiplication? 3.32 is roughly log base 2 of 10, i.e. the number of bits it takes to carry one decimal digit's worth of information. So it sounds like it's suggesting that, for example, to represent a 4-digit decimal number (1000-9999) in binary you would need about 3.32 * 4 = 13.28 bits on average.
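
    Here's a minimal C sketch (my own, not from the book) to check that rule of thumb:

        /* Where 3.32 comes from: log2(10), the bits of information in one
           decimal digit. Compile with: cc -std=c99 demo.c -lm */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double bits_per_digit = log2(10.0);   /* ~3.3219 */
            printf("log2(10) = %.4f\n", bits_per_digit);

            /* Approximate bits for a d-digit decimal number */
            for (int d = 1; d <= 4; d++)
                printf("%d digit(s) -> about %.2f bits\n", d, d * bits_per_digit);
            return 0;
        }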
  3. mdeh thread starter macrumors 6502

    Jan 3, 2009

    That's what I thought initially too. Thanks
  4. lee1210 macrumors 68040


    Jan 10, 2005
    Dallas, TX
    A better way would be:
    the ceiling of log base 2 of (value + 1)

    so for 86:
    log base 2 of 87 is between 6 and 7 (about 6.44), so the ceiling is 7. It's a bit more involved, but you can tell for sure how many bits you need.
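
    In C you can get the same answer by shift-counting, which avoids any floating-point rounding worries. A rough sketch (my own), assuming unsigned values:

        #include <stdio.h>

        /* Bits needed to represent n; equals ceil(log2(n + 1)) for n >= 1 */
        static unsigned bits_needed(unsigned n)
        {
            unsigned bits = 0;
            while (n > 0) {
                bits++;
                n >>= 1;
            }
            return bits;
        }

        int main(void)
        {
            printf("53   -> %u bits\n", bits_needed(53));    /* 6 */
            printf("86   -> %u bits\n", bits_needed(86));    /* 7 */
            printf("9999 -> %u bits\n", bits_needed(9999));  /* 14 */
            return 0;
        }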


    Edit: I didn't see that this was estimating the number of bits needed to represent some number of decimal digits...
    Using the method above you get 10 for 1000, and 14 for 9999
    The average is nice, I guess? Everything over 8191 will take 14 bits, and everything over 4095 will take 13, so that's roughly 2/5 of the space at 13 bits and 1/5 at 14. There isn't a lot of the space that can use only 10 bits; about 1/10 of it can use 11, and the rest need 12. I guess 13.28 is nice to know? I'm not sure what you'd do with it.
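
    A quick brute-force tally (again my own sketch) backs those fractions up:

        #include <stdio.h>

        int main(void)
        {
            int count[15] = { 0 };

            /* Tally the bit length of every 4-digit decimal number */
            for (int n = 1000; n <= 9999; n++) {
                int bits = 0;
                for (int m = n; m > 0; m >>= 1)
                    bits++;              /* bits = ceil(log2(n + 1)) */
                count[bits]++;
            }

            for (int b = 10; b <= 14; b++)
                printf("%2d bits: %4d numbers (%5.1f%%)\n",
                       b, count[b], 100.0 * count[b] / 9000.0);
            return 0;
        }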
  5. mdeh thread starter macrumors 6502

    Jan 3, 2009

    Thanks Lee
