If it is given that the range of a certain string is from 0x0000 to 0xFFFF, does this imply that each character in the string is 16 bits long?

I'm trying to write a subroutine that converts an ASCII decimal string into binary and vice versa.

The hints I'm given say that for conversion from ASCII dec to binary I must:

Subtract 0x30 from every digit character - I think this is so you get each digit's actual value in base 10 as we know it.

Then I am to multiply each digit by its weight and sum all of these products to get the binary value.

This last step makes no sense to me, because the way I learnt it, converting from decimal to binary requires repeated division by 2 and recording the remainders. Is there another method I'm unaware of? I've sketched what I think the hint means below.
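To show where I am, here is a rough C sketch of what I understand the hint to mean (the function name and the loop structure are just my own way of writing it; I'm not sure this is what's intended):

    #include <stdio.h>

    /* My reading of the hint: subtract 0x30 ('0') from each character
     * to get the digit's value, then multiply each digit by its decimal
     * weight (1, 10, 100, ...) and add everything up.  Multiplying the
     * running total by 10 each time folds the weights in and gives the
     * same result. */
    unsigned int ascii_dec_to_bin(const char *s)
    {
        unsigned int value = 0;

        while (*s >= '0' && *s <= '9') {
            value = value * 10 + (*s - 0x30);   /* 0x30 is ASCII '0' */
            s++;
        }
        return value;
    }

    int main(void)
    {
        printf("%u\n", ascii_dec_to_bin("1234"));   /* prints 1234 */
        return 0;
    }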

Similarly, to convert from binary back to decimal I'm asked to use a division method, but the only way I know of converting from binary to decimal is to multiply by weights. My guess at what the division method might look like is sketched below.
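Here is that guess for the reverse direction, again just a C sketch of what the hint might mean (the buffer size assumes values fit in 16 bits, i.e. at most 5 decimal digits):

    #include <stdio.h>

    /* Guess at the "division method": repeatedly divide the binary value
     * by 10; each remainder is one decimal digit (least significant
     * first), and adding 0x30 turns it back into an ASCII character. */
    void bin_to_ascii_dec(unsigned int value, char *buf)
    {
        char tmp[6];                /* 0xFFFF is at most 5 decimal digits */
        int i = 0, j = 0;

        do {
            tmp[i++] = (char)(value % 10 + 0x30);   /* remainder -> ASCII digit */
            value /= 10;
        } while (value != 0);

        while (i > 0)               /* digits come out backwards, so reverse */
            buf[j++] = tmp[--i];
        buf[j] = '\0';
    }

    int main(void)
    {
        char buf[6];

        bin_to_ascii_dec(1234, buf);
        printf("%s\n", buf);        /* prints 1234 */
        return 0;
    }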

Any advice?

Thanks
