For a given problem, there may be many solutions, but we always look for a ‘better’ solution. So, how do we determine which solution is ‘better’? In the context of algorithms and programs, we consider a solution ‘better’ when it uses fewer resources. Hence, an efficient program uses the least resources to implement its logic. Now, what are these resources? Is there a way to measure this efficiency? How do we do it? Let's take a plunge into these concepts in detail in this article.


### Complexity

Every line of code one writes comes at a cost to the system. This cost is the resources it uses, which could be:

- CPU Cycles
- Memory
- Network

Talking more about time complexity: every operation in the logic takes some time. These operations could be anything: a read, a mathematical operation, an assignment, a conditional, etc. So, certainly, the more such operations there are, the more time it takes. But is it fair to compare and measure an algorithm by its raw number of operations? Take an example: finding the minimum element in an array of three elements is faster, in terms of time and space, than finding the minimum element in an array of 10 elements. Both apply the same logic; it is just that the larger the array, the greater the cost in time and space. Hence, the raw number of operations, or the total time/space required by the algorithm, does not by itself define the entity called complexity.

However, what if we ask: as the size of the problem increases, how does the time/space taken by these operations vary? To be precise, we are interested in the relation by which the time taken by the algorithm grows with the problem size. This relation is termed the time complexity. Similarly, the relation by which the memory used by the algorithm varies with the size of the problem is termed the space complexity.

By all means, the lower the complexity, the more efficient the algorithm/program. However, there are cases where particular characteristics of the input, unrelated in any way to the problem size, make the algorithm run more efficiently. We consider this scenario and call it the best-case complexity. Along the same lines, input characteristics can make it least efficient, hence the worst-case complexity. For all inputs in between, we have the average-case complexity.

There is a definite way to measure complexity and a notation to represent it. The next sections discuss the notation and its computation in detail.

### Big-O Notation

The time/space complexity is expressed and represented using the Big-O notation. It is a function that describes how much time/space an algorithm will need, taking the size of the input as its argument. It is represented as

Code:

```
O(f(n))
```

Hence the name big-O notation, read as ‘in the order of’.

Mathematically,

Code:

```
T(n) = O(f(n)) as n -> infinity,
if there exist constants C and N such that |T(n)| <= C|f(n)| for all n > N
```

This means the function T(n) grows at the same rate as, or a lesser rate than, f(n). Most books describe the mathematical analysis of the big-O notation; let's understand the same in our very own programming terminology.

In the above explanation, we talked about the size of the problem. I am sure you understood, but let's confirm our understanding. The size of the problem generally refers to the number or size of the inputs given to the algorithm. For example, in sorting and searching algorithms it is the size of the array, in lookups the total number of elements, in linked lists the number of nodes, and so on.

Here are a few examples of what complexities look like:

Code:

```
O(1)       // constant
O(n)
O(n^2)     // order of n square
O(log n)
```

### Computing complexity

Given an algorithm, there are a few guidelines, or should I say conventional ways, to compute the complexity; they are listed here.

**1. Statement:** For each statement, the time taken is constant with respect to the size of the input. Hence, all the straight-line statements together contribute a constant complexity; this constant grows as the number of statements grows, but it is still a constant.

For example, an algorithm with k statements has time complexity O(k), where k is a constant; this is the same as O(1).
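A minimal sketch of such straight-line code (the function name `constantWork` is hypothetical): three statements, no loops, so the cost does not depend on the input at all:

```c
/* Three simple statements: O(3), i.e. a constant -> O(1),
 * no matter how large n is. */
int constantWork(int n) {
    int a = n + 1;    /* statement 1: O(1) */
    int b = a * 2;    /* statement 2: O(1) */
    return b - n;     /* statement 3: O(1) */
}
```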

**2. Conditional statement:** In a conditional statement like

Code:

```c
if (condition) {
    /* logic1 of some complexity1 */
} else {
    /* logic2 of some complexity2 */
}
```

Based on the condition, either logic1 (with complexity1) or logic2 (with complexity2) is executed. This is where the best- and worst-case scenarios come into the picture. If complexity1 is better (i.e. more efficient) than complexity2, then:

- Best case = complexity1
- Worst case = complexity2

**3. Loop:** The complexity of the loop body is multiplied by the number of times the loop iterates. Hence, in the case of

Code:

```c
for (i = 0; i < n; i++) {
    /* statement 1 of complexity O(1) */
    /* statement 2 of complexity O(1) */
    /* statement 3 of complexity O(1) */
}
```

the total complexity of the logic within the loop body is O(1+1+1) = O(3); note that O(3) is still a constant. Unfolding the loop, it runs n times, so as n increases, the number of executed statements increases proportionally. Since the cost is directly proportional to n, and each iteration has complexity O(3), the total complexity becomes

Code:

```
O(3*n)
```

We generally ignore constants, because as n grows to a really large value, the factor 3 becomes negligible and can be stripped off. Therefore, for a single loop, the complexity comes out as

Code:

```
O(n)
```

What if we have a nested loop like

Code:

```c
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        /* statement 1 of complexity O(1) */
        /* statement 2 of complexity O(1) */
        /* statement 3 of complexity O(1) */
    }
}
```

Doing a similar analysis, the total number of iterations of the statements is n*n = n^2, i.e. n square. That is, the complexity grows quadratically with n. How? Take an example and count the iterations: if n = 2, statements 1 through 3 are executed 4 times in total; for n = 3, 9 times. Hence, the time complexity for a nested loop is

Code:

```
O(n^2)   /* i.e. order of n square */
```

Therefore, as the number of nested loops increases to k, the time complexity increases to n raised to the power k, i.e. O(n^k).

It is worth mentioning that in case the complexity looks like

Code:

```
O(n^3 + n)   /* i.e. n cube plus n */
```

the complexity is taken as order of n cube, because as the value of n increases, n becomes negligible in front of n^3.

### Some Examples

Now we know what complexities are and how we measure them. Let's work through some examples and determine their complexities.

**1. O(1) Constant Complexity:** Consider the following piece of code.

Code:

```c
int addFirstTwo(int array[]) {
    int sum = array[0] + array[1];
    return sum;
}
```

Here, no matter what the size of the array is, the function always reads the first two elements, adds them, and returns, leading to a time complexity of O(1).

**2. O(n) Linear Complexity:** Example code:

Code:

```c
int search(int array[], int n, int elem) {
    int i;
    for (i = 0; i < n; i++) {
        if (array[i] == elem)
            return i;
    }
    return -1;
}
```

Here, in the worst case the whole loop has to be iterated until the needed element is found. Hence, the time taken varies linearly with the size of the array, and the time complexity is O(n).

**3. Order of n square:** Code:

```c
void traverseMatrix(int row, int col, int mat[row][col]) {
    int i, j;
    for (i = 0; i < row; i++) {
        for (j = 0; j < col; j++) {
            printf("%d\t", mat[i][j]);
        }
        printf("\n");
    }
}
```

Each element is displayed, and the total number of elements is row*col. Therefore we need a nested loop, which makes the time complexity order of row*col, i.e. O(n*m). When the two dimensions are comparable and both grow large, this is effectively O(n^2).

**4. O(log n) Logarithmic Complexity:** There are certain powerful algorithms which make the complexity as efficient as O(log n). One example is binary search, which searches for an element in a sorted array. The way it works: divide the array in half, and compare the element to be searched with the middle element of the array. If they are equal, you have found it. If the middle element is greater, repeat the search on the left sub-array; otherwise repeat the search on the right sub-array.

Code:

```c
int binarySearch(int array[], int n, int elem) {
    int lLimit = 0;
    int uLimit = n - 1;
    while (lLimit <= uLimit) {
        int i = (lLimit + uLimit) / 2;
        if (array[i] == elem)
            return i;           /* found */
        if (elem > array[i])
            lLimit = i + 1;     /* continue in the right half */
        else
            uLimit = i - 1;     /* continue in the left half */
    }
    printf("Not Found\n");
    return -1;
}
```

Here, each comparison discards half of the remaining elements, so at every step the search only considers at most half of what was left before. This makes the time complexity O(log n) (log to the base 2).