Insertion sort has a runtime that is Ω(n) (when the input is sorted) and O(n²) (when the input is reverse-sorted). On average, it runs in Θ(n²) time. Why is that the case?
To answer this question, let's first determine how we can evaluate the runtime of insertion sort. If we can find a nice mathematical expression for the runtime, we can then manipulate that expression to determine the average runtime.
The key observation we need to have is that the runtime of insertion sort is closely related to the number of inversions in the input array. An inversion in an array is a pair of elements A[i] and A[j] that are in the wrong relative order - that is, i < j, but A[j] < A[i]. For example, in this array:
0 1 3 2 4 5
There is one inversion: the 3 and 2 should be switched. In this array:
4 1 0 3 2
There are 6 inversions: (4, 1), (4, 0), (4, 3), (4, 2), (1, 0), and (3, 2).
One important property of inversions is that a sorted array has no inversions, since every element is no larger than everything coming after it and no smaller than everything coming before it.
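As a quick sanity check, here's a small brute-force inversion counter in Python (my own sketch, not part of the original answer); it confirms the counts in the examples above:

```python
def count_inversions(A):
    """Count pairs (i, j) with i < j but A[j] < A[i] by brute force, in O(n^2)."""
    n = len(A)
    return sum(1 for i in range(n) for j in range(i + 1, n) if A[j] < A[i])

print(count_inversions([0, 1, 3, 2, 4, 5]))  # 1: just (3, 2)
print(count_inversions([4, 1, 0, 3, 2]))     # 6
print(count_inversions([0, 1, 2, 3, 4]))     # 0: sorted arrays have no inversions
```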
The reason this is significant is that there is a direct link between the amount of work done in insertion sort and the number of inversions in the original array. To see this, let's review some quick pseudocode for insertion sort:
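Here is a swap-based version written as Python rather than pseudocode; the exact rendering doesn't matter, as long as the inner loop works by repeatedly swapping adjacent elements:

```python
def insertion_sort(A):
    """Sort A in place by repeatedly swapping adjacent out-of-order elements."""
    for i in range(1, len(A)):            # outer loop: runs n - 1 times
        j = i
        while j > 0 and A[j - 1] > A[j]:  # inner loop: one swap per iteration
            A[j - 1], A[j] = A[j], A[j - 1]
            j -= 1
    return A
```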
Normally, when determining the total amount of work done by a function like this, we could determine the maximum amount of work done by the inner loop, then multiply it by the number of iterations of the outer loop. This will give an upper bound, but not necessarily a tight bound. A better way to account for the total work done is to recognize that there are two different sources of work:
The outer loop always does Θ(n) work. The inner loop, however, does an amount of work proportional to the total number of swaps made across the entire run of the algorithm. To see how much work that loop does, we need to determine how many total swaps are made across all iterations.
This is where inversions come in. Notice that when insertion sort runs, it always swaps adjacent elements in the array, and it only swaps the two elements if they form an inversion. So what happens to the total number of inversions in the array after we perform a swap? Well, graphically, we have this:
[---- X ----] A[j] A[j+1] [---- Y ----]
Here, X is the part of the array coming before the swapped pair and Y is the part of the array coming after the swapped pair.
Let's suppose that we swap A[j] and A[j+1]. What happens to the number of inversions? Well, consider an arbitrary inversion between two elements. There are 6 possibilities:

- Both elements are in X. The swap doesn't touch them, so the inversion is unchanged.
- Both elements are in Y. Same as above: the inversion is unchanged.
- One element is in X and the other is in Y. The swap doesn't change their relative order, so the inversion is unchanged.
- One element is in X and the other is A[j] or A[j+1]. Both A[j] and A[j+1] stay to the right of everything in X, so the inversion is unchanged.
- One element is in Y and the other is A[j] or A[j+1]. Both A[j] and A[j+1] stay to the left of everything in Y, so the inversion is unchanged.
- The inversion is between A[j] and A[j+1] themselves. The swap puts them into the correct order, removing this inversion.
This means that after performing a swap, we decrease the number of inversions by exactly one, because only the inversion of the adjacent pair has disappeared. This is hugely important for the following reason: If we start off with I inversions, each swap will decrease the number by exactly one. Once no inversions are left, no more swaps are performed. Therefore, the number of swaps equals the number of inversions!
Given this, we can accurately express the runtime of insertion sort as Θ(n + I), where I is the number of inversions in the original array. This matches our original runtime bounds - in a sorted array there are 0 inversions, and the runtime is Θ(n + 0) = Θ(n), while in a reverse-sorted array there are n(n - 1)/2 inversions, and the runtime is Θ(n + n(n - 1)/2) = Θ(n²). Nifty!
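To check the swaps-equal-inversions claim empirically, here's a short Python sketch (my own, assuming the swap-based implementation described above) that compares the swap count against a brute-force inversion count on random arrays:

```python
import random

def count_inversions(A):
    """Brute-force count of pairs (i, j) with i < j but A[j] < A[i]."""
    n = len(A)
    return sum(1 for i in range(n) for j in range(i + 1, n) if A[j] < A[i])

def insertion_sort_swaps(A):
    """Run swap-based insertion sort on a copy of A; return the number of swaps."""
    A, swaps = list(A), 0
    for i in range(1, len(A)):
        j = i
        while j > 0 and A[j - 1] > A[j]:
            A[j - 1], A[j] = A[j], A[j - 1]
            j -= 1
            swaps += 1
    return swaps

random.seed(1)
for _ in range(100):
    A = random.sample(range(50), 10)
    assert insertion_sort_swaps(A) == count_inversions(A)
print("swaps == inversions on every trial")
```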
So now we have a super precise way of analyzing the runtime of insertion sort on a particular array. Let's see how we can analyze its average runtime. To do this, we'll need to make an assumption about the distribution of the inputs. Since insertion sort is a comparison-based sorting algorithm, the actual values of the input array don't matter; only their relative ordering does. In what follows, I'm going to assume that all the array elements are distinct, though if this isn't the case the analysis doesn't change all that much. I'll point out where things go off-script when we get there.
To solve this problem, we're going to introduce a bunch of indicator variables of the form Xij, where Xij is a random variable that is 1 if A[i] and A[j] form an inversion and 0 otherwise. There will be n(n - 1)/2 of these variables, one for each distinct pair of elements. Note that these variables account for each possible inversion in the array.
Given these X's, we can define a new random variable I that is equal to the total number of inversions in the array. This will be given by the sum of the X's:
I = Σ Xij
We're interested in E[I], the expected number of inversions in the array. Using linearity of expectation, this is
E[I] = E[Σ Xij] = Σ E[Xij]
So now if we can get the value of E[Xij], we can determine the expected number of inversions and, therefore, the expected runtime!
Fortunately, since all the Xij's are binary indicator variables, we have that
E[Xij] = Pr[Xij = 1] = Pr[A[i] and A[j] are an inversion]
So what's the probability, given a random input array with no duplicates, that A[i] and A[j] are an inversion? Well, half the time A[i] will be less than A[j], and the other half of the time A[i] will be greater than A[j]. (If duplicates are allowed, there's a sneaky extra term to account for them, but we'll ignore that for now.) Consequently, the probability that there's an inversion between A[i] and A[j] is 1 / 2. Therefore:
E[I] = ΣE[Xij] = Σ (1 / 2)
Since there are n(n - 1)/2 terms in the sum, this works out to
E[I] = n(n - 1) / 4 = Θ(n²)
And so, on expectation, there will be Θ(n²) inversions, so on expectation the runtime will be Θ(n² + n) = Θ(n²). This explains why the average-case behavior of insertion sort is Θ(n²).
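As a sanity check on this result, here's a quick Monte Carlo sketch (mine, not from the original answer) comparing the empirical average number of inversions in a uniformly random permutation against n(n - 1)/4:

```python
import random

def count_inversions(A):
    """Brute-force count of pairs (i, j) with i < j but A[j] < A[i]."""
    n = len(A)
    return sum(1 for i in range(n) for j in range(i + 1, n) if A[j] < A[i])

random.seed(0)
n, trials = 10, 10000
# random.sample(range(n), n) draws a uniformly random permutation of 0..n-1
avg = sum(count_inversions(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg, n * (n - 1) / 4)  # the empirical average should land close to 22.5
```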
Hope this helps!
For fun I wrote a program which ran through all orderings of a vector of size n, counting comparisons, and found that the best case is n - 1 (already sorted) and the worst is n(n - 1)/2 (reverse sorted).
Some results for different n:
n    min   ave      max   ave/(min+max)
2 1 1 1 0.5000
3 2 2.667 3 0.5334
4 3 4.917 6 0.5463
5 4 7.717 10 0.5512
6 5 11.050 15 0.5525
7 6 14.907 21 0.5521
8 7 19.282 28 0.5509
9 8 24.171 36 0.5493
10 9 29.571 45 0.5476
11 10 35.480 55 0.5458
12 11 41.897 66 0.5441
It seems the average value follows min more closely than it does max.
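The enumeration described above can be sketched in a few lines of Python (my reconstruction, assuming comparisons are counted as one per inner-loop test that fires plus the final failed test, which gives n - 1 for sorted input and n(n - 1)/2 for reverse-sorted input). For n = 4 it reproduces the table row above:

```python
from itertools import permutations

def comparisons(A):
    """Count comparisons made by swap-based insertion sort on a copy of A."""
    A, comps = list(A), 0
    for i in range(1, len(A)):
        j = i
        while j > 0 and A[j - 1] > A[j]:
            comps += 1                       # successful comparison, then swap
            A[j - 1], A[j] = A[j], A[j - 1]
            j -= 1
        if j > 0:
            comps += 1                       # the final, failed comparison
    return comps

n = 4
counts = [comparisons(p) for p in permutations(range(n))]
print(min(counts), sum(counts) / len(counts), max(counts))  # min=3, ave≈4.917, max=6
```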
EDIT: some additional values
13 12 48.820 78 0.5424
14 13 56.248 91 0.5408
EDIT: value for 15
15 14 64.182 105 0.5393
EDIT: selected higher values

n     min   ave        max    ave/max
16    15    72.619     120    0.6052
32    31    275.942    496    0.5563
64    63    1034.772   1953   0.5294
128   127   4186.567   8128   0.5151
256   255   16569.876  32640  0.5077
I recently wrote a program to compute the average number of comparisons for insertion sort for higher values of n. From these I have drawn the conclusion that as n approaches infinity the average case approaches the worst case divided by two.
Most algorithms have an average case that is the same as the worst case. To see why, let W be the worst-case running time and B the best-case running time, so that W >= B as n goes to infinity. For most distributions, the average case will be close to the average of the best and worst cases - that is, (W + B)/2 = W/2 + B/2. Since we don't care about constant factors, and W >= B, this is the same order as W.
Obviously, this is an oversimplification. There are running time distributions that are skewed such that the assumption of the average-case being the average of the worst-case and the best-case is not valid*. But this should give you a decent intuition as to why this is.
*As mentioned by templatetypedef in the comments, some examples are quicksort/quickselect, BST lookup (unless you balance the tree), hash table lookup, and the simplex method.