Following along with an exercise, I was instructed to make changes to an original piece of code for purposes of optimization. I found an optimization they did not list. Being quite new to programming, I believe I'm much more likely to be at fault than the book. I have provided the detail of the change below. I'm just looking for some confirmation as to whether or not I should be using it.

The example in the book shows:

Code:
i = 2;
while (i <= sqrt(static_cast<double>(n))) {
    if (n % i == 0) {
        is_prime = false;
        break;
    }
    i++;
}

The change I made on my own was moving "i++" into the equality/divisibility test in line 3.

Code:
if (n % i++ == 0) {

After testing the program, it works just fine. Is this a silly, insignificant change that doesn't matter for optimization? Is this a bad habit to get into for reasons unknown to my inexperience? Is it only chance that doing such a thing works here but will not work in similar cases?
Simply moving i++ from one place to another isn't likely to make any difference; you'd need to run it in a profiler to determine (a) whether the i++ is a problem in the first place and (b) whether the change made any positive difference that would at least make up for the now less readable code.

Optimisation's an odd task to give to a beginner. It needs doing properly, using a code profiler to determine the true bottlenecks. Presenting it as an exercise in a beginner's book gives the impression that you can optimise code just by looking at it and shifting it around, which isn't the case.

Assuming the compiler doesn't do it for you (some will, hence the need for profiling; the Visual Studio optimiser, for example, looks for constant expressions within loops and replaces them with constant values), the main optimisation in that loop is going to be not recalculating sqrt() every time round the loop:

Code:
i = 2;
max = sqrt(static_cast<double>(n));
while (i <= max) ...

Depending on the profiling results before and after that change, another optimisation could be to remove sqrt() entirely:

Code:
i = 2;
while (i * i <= n) ...

(Note the <=, which matches the original i <= sqrt(n).) This calculates the square of i each time round the loop, and only profiling will show whether it's faster or slower than the sqrt() version; it depends on the speed of sqrt() and the average value of i. Intuition may suggest that calculating sqrt once before the loop would be better, but if sqrt takes as long as calculating, say, 10 integer squares, then using i*i works out faster whenever the loop averages fewer than 10 iterations for the numbers you're testing.

Another optimisation is to throw out almost half the possible values of i: perform a one-off test for divisibility by 2 before the main loop, then set i=3 for the loop and increment it by two each time round.

The penultimate optimisation in terms of speed (though there's a memory tradeoff) is a prime-number lookup table. A binary search into that table will find out whether n is prime very quickly - 4 comparisons if the table is 16 entries long, for instance:

Code:
 2,  3,  5,  7,
11, 13, 17, 19,
23, 29, 31, 37,
41, 43, 47, 53

Let's say we're looking for 35:
- The midpoint is 23, so it's in the second half.
- The midpoint of 29, 31, 37, 41, 43, 47, 53 is 41, so it's in the first half of that.
- The midpoint of 29, 31, 37 is 31, so it's in the second half.
- There's only one item left (37) and it's not 35, so 35 isn't prime.

(This only works for n within the table's range, of course - the table above covers nothing beyond 53.)

The ULTIMATE optimisation will simply be to use a yes/no lookup table for each possible value of n. Very wasteful in terms of memory, but if you've got oodles of memory and need the calculation done in next to no time, this will do the trick:

Code:
int lookup_table[] = {0,0,1,1, 0,1,0,1, 0,0,0,1, 0,1,0,0, ...};
is_prime = lookup_table[n];

A slight slowdown in exchange for a 32x improvement in memory is to use bits, where n >> 3 picks the byte and n & 7 picks the bit within it:

Code:
unsigned char lookup_table[] = {0xac, 0x28, ...};
unsigned char bitmasks[] = {0x01, 0x02, 0x04, 0x08,
                            0x10, 0x20, 0x40, 0x80};
is_prime = lookup_table[n >> 3] & bitmasks[n & 7];

noting that is_prime could be zero or any value in bitmasks, so subsequent tests should only treat it as zero/non-zero.
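For concreteness, here's how the loop-level suggestions above (the one-off test for 2, odd divisors only, and i * i <= n in place of sqrt) might combine. This is a minimal sketch, not the book's code; the function name is_prime_trial is made up for the example:

Code:
#include <initializer_list>
#include <iostream>

// Trial division with the tweaks discussed above: a one-off test for
// even numbers, odd divisors only, and i * i <= n instead of calling
// sqrt() every time round the loop.
bool is_prime_trial(int n)
{
    if (n < 2)      return false;  // 0, 1 and negatives aren't prime
    if (n == 2)     return true;   // the only even prime
    if (n % 2 == 0) return false;  // throw out the other evens

    for (int i = 3; i * i <= n; i += 2)  // odd candidates only
        if (n % i == 0)
            return false;

    return true;
}

int main()
{
    for (int n : {2, 9, 35, 37, 53})
        std::cout << n << (is_prime_trial(n) ? " is prime\n"
                                             : " is not prime\n");
}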
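The binary-search table can lean on the standard library rather than hand-rolling the halving. A sketch using std::binary_search over the 16-entry table above; the table is an assumption of the example and only covers n up to 53:

Code:
#include <algorithm>
#include <iostream>
#include <iterator>

// The 16-entry prime table from above; a real program would size it
// to cover the whole range of n it expects to be asked about.
const int primes[] = {  2,  3,  5,  7, 11, 13, 17, 19,
                       23, 29, 31, 37, 41, 43, 47, 53 };

int main()
{
    int n = 35;  // the value walked through above

    // std::binary_search does the halving for us: at most
    // log2(16) = 4 comparisons against this table.
    bool is_prime = std::binary_search(std::begin(primes),
                                       std::end(primes), n);

    std::cout << n << (is_prime ? " is prime\n" : " is not prime\n");
}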
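Finally, nobody wants to work out bytes like 0xac by hand, so in practice the bit-packed table would be generated. A sketch that builds it at startup with a Sieve of Eratosthenes (my choice of generator - the post itself doesn't say how the table gets built) and then performs the bit-masked lookup:

Code:
#include <iostream>
#include <vector>

int main()
{
    const int LIMIT = 1000;  // arbitrary range for this sketch

    // One bit per candidate number, eight candidates per byte.
    std::vector<unsigned char> table(LIMIT / 8 + 1, 0);

    // Start with every n >= 2 marked prime...
    for (int n = 2; n <= LIMIT; ++n)
        table[n >> 3] |= 1 << (n & 7);

    // ...then sieve out the composites.
    for (int p = 2; p * p <= LIMIT; ++p)
        if (table[p >> 3] & (1 << (p & 7)))
            for (int m = p * p; m <= LIMIT; m += p)
                table[m >> 3] &= ~(1 << (m & 7));

    // The lookup itself, as in the post: n >> 3 picks the byte,
    // n & 7 picks the bit. The result is zero/non-zero, not 0/1.
    int n = 997;
    bool is_prime = table[n >> 3] & (1 << (n & 7));
    std::cout << n << (is_prime ? " is prime\n" : " is not prime\n");
}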