Hey friends, I just need some help with the program below:

Code:
#include <stdio.h>
void main() {
    float a = 10.5;
    float x = 20.5;
    printf("a=%d", a);
    printf("x=%d", x);
    getch();
}

I am getting a=0 and some garbage value for x. Can someone please explain the reason for this?
You have used the wrong format specifier: %d is for int, %f is for float (a float argument is promoted to double in a variadic call, which is what %f expects). Make these modifications and you should get the expected result:

Code:
printf("a=%f", a);
printf("x=%f", x);
Yeah, I know that, but can you explain why I am getting a=0 but a garbage value for x? Why not x=0 as well as a=0? Is there any particular reason?
It's undefined behaviour, so you could get anything. In Visual Studio 2010, for example, I get zero for both. The results come from printf trying to interpret the 10.5 and 20.5 float bit patterns as ints. Let's expand the program a bit to look at the exact memory, do some casting, and see what we get:

Code:
void test47() {
    float a = 10.5;
    float x = 20.5;

    printf("a=%d\n", a);
    printf("x=%d\n", x);

    /* dump the raw bytes of each float */
    unsigned char *p = (unsigned char*)&a;
    for (int i = 0; i < 4; i++)
        printf("0x%02x ", p[i]);
    printf("\n");

    p = (unsigned char*)&x;
    for (int i = 0; i < 4; i++)
        printf("0x%02x ", p[i]);
    printf("\n");

    /* reinterpret the float bits as an int */
    int *r = (int*)&a;
    printf("%d\n", *r);
    r = (int*)&x;
    printf("%d\n", *r);
}

Results:

Code:
a=0
x=0
0x00 0x00 0x28 0x41
0x00 0x00 0xa4 0x41
1093140480
1101266944

So the float representation of 10.5 is 00 00 28 41, and of 20.5 it is 00 00 a4 41. Bearing in mind this is x86, which is little-endian, those bytes are stored least-significant first. Let's plug 41 28 00 00 into a hex calculator and convert to decimal: 1093140480. So that's the reason for the odd values.

I suggest you try this with your compiler and see what happens. Post the results if it still puzzles you. If on your compiler floats are 4 bytes and ints are 2 bytes, that could explain the zero result for both: a[0..1] and x[0..1] are both 0x00 0x00, which would read as zero where sizeof(int) == 2.