If it is given that the range of a certain string is from 0x0000 to 0xFFFF, does this imply that each character in the string is 16 bits long?
I'm trying to write a subroutine that converts an ASCII decimal string into binary and vice versa.
The hints I'm given say that for conversion from ASCII dec to binary I must:
subtract 0x30 from every digit - I think this is so you get each digit's numeric value (0 to 9) rather than its ASCII code.
Then I am to multiply each digit by its weight and then sum up all of this to get the binary value.
This last step makes no sense to me: the way I learnt it, converting from decimal to binary requires dividing by 2 and recording the remainders. Is there another way I'm unaware of?
Similarly, to convert from binary to decimal I'm asked to use a division method, but the only way I know of converting from binary to decimal is to multiply by weights.
Probably yes, but I'd want something else to corroborate that before deciding for certain. If the values are 32-bit, then the range could instead mean 0x00000000 to 0x0000ffff.
I think what is meant by "weight" is the relevant power of 10. So 1234 means 1*1000 + 2*100 + 3*10 + 4, and that's what "multiply and add" means. The register holding the resulting value 1234 already contains it in binary; you can then display it with the method you've outlined.
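A minimal C sketch of that multiply-and-add idea, assuming the input is a plain NUL-terminated string of digits (the function name `ascii_to_bin` is mine, not from the assignment). Multiplying the running total by 10 before adding each digit is equivalent to multiplying each digit by its weight and summing:

```c
/* Sketch: ASCII decimal string -> binary value (weighted-sum method).
   value = ((d0*10 + d1)*10 + d2)... is the same as d0*1000 + d1*100 + ... */
unsigned int ascii_to_bin(const char *s)
{
    unsigned int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (unsigned int)(*s - 0x30); /* strip ASCII bias */
        s++;
    }
    return value;
}
```

The same structure translates directly to assembly: one multiply-by-10, one subtract, one add per digit.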
I'm not clear what is meant by a division method; maybe something like:
store digit (binvalue % 10) + 0x30, then binvalue = binvalue / 10, and repeat until binvalue is 0 (the digits come out least-significant first).