Can anyone help me figure out the logic in this pseudo C code? It is part of a calculation for a message check value. I am racking my brain but still can't understand it. I have also attached a jpg of a sample packet as an example of a working packet stream.

Header Check Value is a 2-byte high/low binary number computed using the following algorithm (in pseudo C programming language notation):

unsigned check_value, temp1, temp2, temp3;
check_value = Oxa55a;
temp1 = temp2 = 0
for every byte in the order of their appearance in the packet,
not including the Header Check Value itself
{
    temp3 = (byte 11 temp1 ++) & OxFF;
    temp2 = (temp2 + temp3) & OxFFFF;
    check_value = (check_value + temp2) & OxFFFF;
    check_value = ((check_value&1) << 15) I ((check_value >> 1) & Ox?FFF);
}
Don't worry, it's not you being thick, it's whoever wrote this crap not understanding the concept of "clear communication". Ask whoever gave you this code:

(1) What does "byte 11 temp1 ++" mean?

(2) What does Ox?FFF mean? (I presume the leading O should be a zero, but what does the question mark represent?)

(3) What is the size in bytes of the unsigned type? (If they're being unnecessarily pedantic, make sure they answer in terms of 8-bit bytes. Some people like to think of bytes as meaning some other number of bits, and in doing so incorrectly think of themselves as extremely clever, especially when they don't state this.)
(1) What does "byte 11 temp1 ++" mean?
This is supposed to be "byte ^ temp1++".

(2) What does Ox?FFF mean?
This is supposed to be "0xFFFF".

(3) What is the size in bytes of the unsigned type?
8-bit bytes.
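Putting those corrections together, the algorithm can be written out as real C. This is only a sketch based on the clarified pseudocode, assuming the unsigned type is 16 bits wide (the function name header_check is mine, not from the spec):

```c
#include <stdint.h>
#include <stddef.h>

/* Check value per the corrected pseudocode: seed 0xA55A, XOR a
   running byte counter into each byte, keep two running sums,
   then rotate the 16-bit check value right by one bit per byte. */
uint16_t header_check(const uint8_t *data, size_t len)
{
    uint16_t check_value = 0xA55A;
    unsigned temp1 = 0;          /* byte counter, post-incremented */
    uint16_t temp2 = 0;

    for (size_t i = 0; i < len; i++) {
        uint8_t temp3 = (uint8_t)(data[i] ^ temp1++); /* the & 0xFF */
        temp2 = (uint16_t)(temp2 + temp3);            /* & 0xFFFF via type */
        check_value = (uint16_t)(check_value + temp2);
        /* rotate right one bit: low bit becomes the new high bit */
        check_value = (uint16_t)(((check_value & 1) << 15) | (check_value >> 1));
    }
    return check_value;
}
```

You would feed it every byte of the packet except the two check-value bytes themselves.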
OK, well in that case any bitwise AND with 0xFFFF is completely redundant: all 65536 2-byte unsigned values are exactly the same after ANDing them with 0xFFFF. There isn't a vast amount of logic. It's just doing some number crunching on the byte values to compute a checksum.

First byte is 0x00. This is XORed with temp1, which is zero and then post-incremented. The & 0xFF does nothing for this byte, though it is not redundant in general (temp1 can grow past 0xFF on a long packet). temp3 is now 0x00. temp2 is also unchanged; it is at its initial value 0 and has 0 added to it. check_value is unchanged by the next line as temp2 is zero. The 4th line changes check_value from 0xA55A, which is 1010 0101 0101 1010 in binary, by shifting the whole thing right and moving the rightmost bit round to the left end:

1010 0101 0101 1010
0101 0010 1010 1101

Second byte is 0x38; this is XORed with temp1, which is 1 and post-incremented to 2. 0x38 ^ 0x01 = 0x39. The & 0xFF does nothing in this case because the result is less than 256, so temp3 = 0x39. temp2 just has temp3 added to zero, so is 0x39. check_value has temp2 (0x39) added to it:

0101 0010 1010 1101
0000 0000 0011 1001 +
===================
0101 0010 1110 0110

and is then rotated (the low bit is 0, so the new top bit is 0):

0101 0010 1110 0110
0010 1001 0111 0011

I recommend you work out the next few bytes; it's not difficult to plug the values into the above template. At the end of all this, check_value will have a particular value, which should match the value in bytes 48 and 49, i.e. BAF5.
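If you want to check the hand trace above mechanically, the add-and-rotate step can be isolated into a tiny helper (check_step is just an illustrative name; temp2 is assumed to be computed separately, as in the walkthrough):

```c
#include <stdint.h>

/* One iteration of the check, given the already-computed temp2:
   add temp2 into check_value, then rotate right one bit. */
uint16_t check_step(uint16_t check_value, uint16_t temp2)
{
    check_value = (uint16_t)(check_value + temp2);
    return (uint16_t)(((check_value & 1) << 15) | (check_value >> 1));
}
```

Plugging in the trace: check_step(0xA55A, 0x0000) gives 0x52AD after the first byte, and check_step(0x52AD, 0x0039) gives 0x2973 after the second, matching the binary worked above.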