Hi everyone,

I am trying to write a program that improves quicksort by using insertion sort for small input sizes.

Here is the actual question:

The running time of quicksort can be improved in practice by taking advantage of the fast running time of insertion sort when its input is “nearly” sorted. When quicksort is called on a subarray with fewer than k elements, let it simply return without sorting the subarray. After the top-level call to quicksort returns, run insertion sort on the entire array to finish the sorting process. How should k be picked? (Hint: This algorithm runs in O(nk + n lg(n/k)) expected time).

i) Implement a quicksort algorithm with the above modifications. The implementation only needs to work for integers (no need to handle general objects). Use #define parameters for k.

#define n_value 500 //arbitrarily chosen

ii) Generate test integer data, measure the program run times for various k values, and plot them. Find the approximate k value for which the run times are shortest. Present your findings along with your programs. Run the same experiment for several large n values. The test results should be independent of the data chosen.

iii) Conclude how k can be chosen (for optimum performance of the algorithm) from a practical point of view.

If anyone can provide the code I would really appreciate it. Thanks a lot.
