I'll make some general comments. All values stored in your computer (a run-of-the-mill computer of today, anyway) are binary: they are stored as ones and zeroes. The interpretation of that value may vary, so you always need to consider what the pattern represents, as well as the actual pattern of ones and zeroes. A byte with the value 0011 0001 may represent the character '1'. The value 0000 0001 may represent the integer 1. A floating-point 1 would be a different pattern entirely, a 2-of-5 code for 1 yet another, and the code sent by a keyboard when you press 1 may be none of the above. It may seem confusing at first, but it's something you must get your head around.

Octal, hexadecimal, and decimal representations are simply different ways of writing out a binary pattern. Again, the meaning of the pattern may change, strictly because you want it to represent something specific that has no one-to-one correspondence with the actual pattern in the storage element.

Further, just to make things worse, if more than one byte is used to store a pattern, the order of the bytes may vary from platform to platform (little endian versus big endian, plus various mixed forms of the two). Fortunately, your language and various standards try to insulate you from the endian issue as much as possible.
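To make the first point concrete, here is a minimal C++ sketch (my own illustration, not from the question) that prints the raw bytes behind three different "ones":

    #include <cstdio>
    #include <cstring>

    int main() {
        char  c = '1';   // the character '1': bit pattern 0011 0001 (0x31 in ASCII)
        int   i = 1;     // the integer 1: low byte is 0000 0001 (0x01)
        float f = 1.0f;  // IEEE-754 float 1.0: a different pattern again (0x3f800000)

        std::printf("'1' as a byte:      0x%02x\n", static_cast<unsigned char>(c));
        std::printf("int 1, low byte:    0x%02x\n", static_cast<unsigned>(i) & 0xffu);

        // Copy out the float's raw bytes; the order they appear in memory
        // depends on the platform's endianness.
        unsigned char bytes[sizeof f];
        std::memcpy(bytes, &f, sizeof f);
        for (unsigned k = 0; k < sizeof f; k++)
            std::printf("float 1.0, byte %u: 0x%02x\n", k, bytes[k]);
        return 0;
    }

On a little-endian machine the float's bytes print as 00 00 80 3f; a big-endian machine reverses that order, which is exactly the endian issue mentioned above.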
First of all, the answer is no. The only way I found is bitwise operations. Here is how I implemented it (the original snippet had a few typos: escape sequences use a backslash, not a forward slash, and the test must look at myString[i], not the whole string):

    #include <string>

    char soh = '\a';                   // I didn't find a short escape for SOH; '\a' is BEL (7)
    soh = soh >> 2;                    // 7 >> 2 == 1, which is the ASCII code of SOH
    std::string myString = "10011100";
    char myChar = '\0';
    for (int i = 0; i < 8; i++) {
        myChar = myChar << 1;                                // shift previous bits up
        if (myString[i] == '1') { myChar = myChar | soh; }   // OR in the new bit
    }
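For what it's worth, if the standard library is available, std::bitset can do the parsing in one line; a minimal sketch, assuming the input is exactly eight '0'/'1' characters:

    #include <bitset>
    #include <string>

    std::string myString = "10011100";
    // bitset's string constructor reads '0'/'1' characters, most significant bit first
    char myChar = static_cast<char>(std::bitset<8>(myString).to_ulong());

This also throws std::invalid_argument on characters other than '0' and '1', which the hand-rolled loop silently ignores.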