At school we are currently learning sorting algorithms in Java, and I got Heap Sort for my homework. I did my reading and tried to find out as much as I could, but it seems I just can't grasp the concept.
I'm not asking you to write me a Java program; could you just explain to me, as simply as you can, how Heap Sort works?
Right, so basically you take a heap and pull out the first node in the heap - as the first node is guaranteed to be the largest / smallest depending on the direction of sort. The tricky thing is re-balancing / creating the heap in the first place.
Two steps were required for me to understand the heap process: first, thinking of it as a tree to get my head around it, then turning that tree into an array so it could be useful.
The second part is essentially to traverse the tree breadth-first, left to right, adding each element to the array. So the following tree:
73
7 12
2 4 9 10
1
Would be {73,7,12,2,4,9,10,1}
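As an aside (my own illustration, not part of the original walkthrough): in that array form the parent/child links can be recovered with simple index arithmetic, which is what makes the flat array useful. A minimal Java sketch with hypothetical helper names:

class HeapIndex {
    // Level-order layout: the node stored at index i has these relatives.
    static int left(int i)   { return 2 * i + 1; }
    static int right(int i)  { return 2 * i + 2; }
    static int parent(int i) { return (i - 1) / 2; }

    public static void main(String[] args) {
        int[] heap = {73, 7, 12, 2, 4, 9, 10, 1};
        // 12 sits at index 2, so its children are at indices 5 and 6:
        System.out.println(heap[left(2)] + " " + heap[right(2)]); // prints "9 10"
    }
}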
The first part requires two steps:
- Make sure each node has two children (unless you don't have enough nodes to do that, as in the tree above).
- Make sure each node is bigger (or smaller, if sorting min first) than its children.
So to heapify a list of numbers, you add each one to the heap and then follow those two steps in order.
To create my heap above I will add 10 first - it's the only node, so there's nothing to do. Add 12 as its child on the left:
10
12
This satisfies 1 but not 2, so I will swap them round:
12
10
Add 7 - nothing to do
12
10 7
Add 73
12
10 7
73
10 < 73 so need to swap those:
12
73 7
10
12 < 73 so need to swap those:
73
12 7
10
Add 2 - nothing to do
73
12 7
10 2
Add 4 - nothing to do
73
12 7
10 2 4
Add 9
73
12 7
10 2 4 9
7 < 9 - swap
73
12 9
10 2 4 7
Add 1 - nothing to do
73
12 9
10 2 4 7
1
We have our heap :D
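In code, that "add at the next free spot, then swap upwards while the parent is smaller" step is usually called sift-up (or bubble-up). Here is a minimal Java sketch (my own code and names, not part of the walkthrough) that rebuilds the exact heap above:

class SiftUp {
    // After placing a new element at index i (the end of the heap),
    // swap it with its parent until the parent is no longer smaller.
    static void siftUp(int[] heap, int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (heap[parent] >= heap[i]) break;  // heap property holds, stop
            int tmp = heap[parent];
            heap[parent] = heap[i];
            heap[i] = tmp;
            i = parent;                          // continue from the parent
        }
    }

    public static void main(String[] args) {
        // Rebuild the heap from the walkthrough by inserting 10,12,7,73,2,4,9,1.
        int[] input = {10, 12, 7, 73, 2, 4, 9, 1};
        int[] heap = new int[input.length];
        for (int size = 0; size < input.length; size++) {
            heap[size] = input[size];
            siftUp(heap, size);
        }
        System.out.println(java.util.Arrays.toString(heap));
        // prints [73, 12, 9, 10, 2, 4, 7, 1] - the same heap as above
    }
}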
Now you just remove each element from the top, swapping the last element of the tree into its place each time, then re-balancing the tree:
Take 73 off - putting 1 in its place
1
12 9
10 2 4 7
1 < 12 - so swap them
12
1 9
10 2 4 7
1 < 10 - so swap them
12
10 9
1 2 4 7
Take 12 off - replace with 7
7
10 9
1 2 4
7 < 10 - swap them
10
7 9
1 2 4
Take 10 off - replace with 4
4
7 9
1 2
4 < 7 - swap
7
4 9
1 2
7 < 9 - swap
9
4 7
1 2
Take 9 off - replace with 2
2
4 7
1
2 < 4 - swap them
4
2 7
1
4 < 7 - swap them
7
2 4
1
Take 7 off - replace with 1
1
2 4
1 < 4 - swap them
4
2 1
Take 4 - replace with 1
1
2
1 < 2 - swap them
2
1
Take 2 - replace with 1
1
Take 1
Sorted list, voilà!
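The "swap the last element to the top, then re-balance" step is usually called sift-down. A minimal Java sketch of delete-max follows (my own code and names, not from the walkthrough); one small difference from the hand-worked steps above is that it always swaps the node with the larger of its two children, which guarantees a valid heap after a single pass down:

class DeleteMax {
    // Removes and returns the largest element of a max-heap stored in
    // heap[0 .. size-1] in level-order layout.
    static int deleteMax(int[] heap, int size) {
        int max = heap[0];
        heap[0] = heap[size - 1];   // move the last element to the root
        size--;                     // the heap is now one element shorter
        int i = 0;
        while (2 * i + 1 < size) {  // while the node at i has at least one child
            int child = 2 * i + 1;
            if (child + 1 < size && heap[child + 1] > heap[child]) {
                child++;            // pick the larger of the two children
            }
            if (heap[i] >= heap[child]) break;
            int tmp = heap[i];      // swap the node down...
            heap[i] = heap[child];
            heap[child] = tmp;
            i = child;              // ...and keep sifting from there
        }
        return max;
    }

    public static void main(String[] args) {
        int[] heap = {73, 12, 9, 10, 2, 4, 7, 1};   // the heap built above
        for (int size = heap.length; size > 0; size--) {
            System.out.print(deleteMax(heap, size) + " ");  // 73 12 10 9 7 4 2 1
        }
    }
}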
One way to think of heap sort is as a cleverly optimized version of selection sort. In selection sort, the sort works by repeatedly finding the largest element not yet placed correctly, then putting it into the next correct spot in the array. However, selection sort runs in time O(n²) because it has to do n rounds of finding the largest element out of a bunch (and there can be up to n different elements to look at) and putting it into place.
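For reference, here is a bare-bones selection sort in Java (my sketch, not the answer's). Heap sort keeps exactly this outer structure but replaces the linear scan for the maximum with a heap:

class SelectionSort {
    static void sort(int[] a) {
        // Each round: find the largest not-yet-placed element by a linear
        // scan, then swap it into the last unfilled slot. n rounds of up to
        // n comparisons gives the O(n^2) running time.
        for (int end = a.length - 1; end > 0; end--) {
            int maxIdx = 0;
            for (int j = 1; j <= end; j++) {
                if (a[j] > a[maxIdx]) maxIdx = j;
            }
            int tmp = a[maxIdx];
            a[maxIdx] = a[end];
            a[end] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 1, 4, 1, 5, 9, 2, 6};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 1, 2, 3, 4, 5, 6, 9]
    }
}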
Intuitively, heap sort works by building up a special data structure called a binary heap that speeds up finding the largest element out of the unplaced array elements. Binary heaps support the following operations:
- Insert, which inserts an element into the heap, and
- Delete-Max, which removes and returns the largest element of the heap.
At a very high level, the algorithm works as follows:
- Insert each element of the array into a new binary heap.
- For i = n down to 1:
  - Call Delete-Max on the heap to get the largest element of the heap back.
  - Write this element to position i.
This sorts the array because the elements returned by Delete-Max are in descending order. Once all the elements have been removed, the array is then sorted.
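Not part of the original answer, but to make this concrete in Java: you can get exactly this high-level version by using java.util.PriorityQueue as the binary heap (a reversed comparator makes poll() behave like Delete-Max):

import java.util.Collections;
import java.util.PriorityQueue;

class NaiveHeapSort {
    static void sort(int[] a) {
        // Insert each element into the heap: n inserts at O(log n) each.
        PriorityQueue<Integer> heap = new PriorityQueue<>(Collections.reverseOrder());
        for (int x : a) heap.add(x);
        // Delete-Max n times, writing from the last position down to the first.
        for (int i = a.length - 1; i >= 0; i--) a[i] = heap.poll();
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}

This version uses O(n) extra space for the separate heap; the in-place trick described next is what brings that down to O(1).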
Heap sort is efficient because the Insert and Delete-Max operations on a heap both run in O(log n) time, meaning that n inserts and deletions can be done on the heap in O(n log n) time. A more precise analysis can be used to show that, in fact, it takes Θ(n log n) time regardless of the input array.
Typically, heap sort employs two major optimizations. First, the heap is usually built up in-place inside the array by treating the array itself as a compressed representation of the heap. If you look at a heapsort implementation, you will usually see unusual uses of array indices based on multiplying and dividing by two; these accesses work because they are treating the array as a condensed data structure. As a result, the algorithm requires only O(1) auxiliary storage space.
Second, rather than building up the heap one element at a time, the heap is usually built using a specialized algorithm that runs in time Θ(n) to build the heap in-place. Interestingly, in some cases this ends up making the code easier to read because code can be reused, but the algorithm itself becomes a bit trickier to understand and analyze.
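Here is a sketch of that bottom-up construction (my own illustration of the standard approach, often credited to Floyd, not code from the answer): sift down every internal node, starting with the last one. The code reuse mentioned above comes from the fact that the same sift-down routine also drives the extraction phase of heapsort:

class BuildHeap {
    // Turn an arbitrary array into a max-heap in place in Θ(n) time.
    static void buildMaxHeap(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) {
            siftDown(a, i, a.length);
        }
    }

    // Standard sift-down: swap a[i] with its larger child until it fits.
    static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;
            if (a[i] >= a[child]) break;
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 9, 7, 6, 5, 8};
        buildMaxHeap(a);
        System.out.println(java.util.Arrays.toString(a)); // [9, 6, 8, 2, 5, 7]
    }
}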
You will sometimes see heapsort done with a ternary heap. This has the advantage of being slightly faster on average, but if you find a heapsort implementation using this without knowing what you're looking at it can be fairly tricky to read. Other algorithms also use the same general structure but a more complex heap structure. Smoothsort uses a much more complicated heap to get O(n) best-case behavior while maintaining O(1) space usage and O(n log n) worst-case behavior. Poplar sort is similar to smoothsort, but with O(log n) space usage and slightly better performance. One can even think of classic sorting algorithms like insertion sort and selection sort as heap sort variants.
Once you have a better grasp of heapsort, you may want to look into the introsort algorithm, which combines quicksort, heapsort, and insertion sort to produce an extremely fast sorting algorithm that combines the strengths of quicksort (fast sorting on average), heapsort (excellent worst-case behavior), and insertion sort (fast sorting for small arrays). Introsort is what's used in many implementations of C++'s std::sort function, and it is not very hard to implement yourself once you have a working heapsort.
Hope this helps!
Suppose you have a special data structure (called a heap) which allows you to store a list of numbers and lets you retrieve and remove the smallest one in O(lg n) time.
Do you see how this leads to a very simple sorting algorithm?
The hard part (it's actually not that hard) is implementing the heap.
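For what it's worth, here is that "very simple sorting algorithm" spelled out in Java (my sketch, not the answer's), leaning on java.util.PriorityQueue, which is a binary min-heap, as the ready-made heap:

import java.util.PriorityQueue;

class SortWithAHeap {
    public static void main(String[] args) {
        int[] a = {4, 2, 5, 1, 3};
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int x : a) heap.add(x);                 // put everything in the heap
        for (int i = 0; i < a.length; i++) {
            a[i] = heap.poll();                      // repeatedly remove the smallest
        }
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}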
Maybe interactive tracing will help you understand the algorithm better. Here is a demo.
I'll see how I go in answering this, because my explanation for heap sort and what a heap is will be a little...
...uh, terrible.
Anyway, firstly, we'd better check what a Heap is:
As taken from Wikipedia, a heap is:
In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max-heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min-heap.)
Pretty much, a (max-)heap is a binary tree in which every node is at least as large as each of its children.
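To make that property concrete, here is a tiny check of it on the array (level-order) representation; this is my own illustration, not part of the answer:

class HeapCheck {
    // A max-heap in level-order layout: every parent is >= both children.
    static boolean isMaxHeap(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int left = 2 * i + 1, right = 2 * i + 2;
            if (left < a.length && a[left] > a[i]) return false;
            if (right < a.length && a[right] > a[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isMaxHeap(new int[]{73, 12, 9, 10, 2, 4, 7, 1})); // true
        System.out.println(isMaxHeap(new int[]{1, 2, 3}));                   // false
    }
}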
Now, heap sort is an O(n lg(n)) sorting algorithm. You can read a little about it here and here. It pretty much works by putting all the elements of whatever you're trying to sort into a heap, then building the sorted array from the largest element to the smallest. You'll keep on restructuring the heap, and since the largest element is at the top (root) of the heap at all times, you can just keep taking that element and putting it at the back of the sorted array. (That is, you'll build the sorted array in reverse)
Why is this algorithm O(n lg(n))? Because all the operations on a heap are O(lg(n)), and you'll do n of them, resulting in a total running time of O(n lg(n)).
I hope my terrible rant helped you! It's a little bit wordy; sorry about that...
I remember my algorithm analysis professor telling us that the heap sort algorithm works like a heap of gravel:
Imagine you have a sack filled with gravel, and you empty it onto the floor: bigger stones will likely roll to the bottom, and smaller stones (or sand) will stay on top.
You now take the very top of the heap and save it as the smallest value of your heap. Put the rest of the heap back in your sack, and repeat. (Or you can take the opposite approach and grab the biggest stone you see rolling on the floor; the example is still valid.)
That's more or less the simplest way I know to explain how heap sort works.
Heap sort uses fairly simple logic, with O(n log n) time complexity and O(1) extra space:
import java.util.Arrays;

public class HeapSort {
    public static void main(String[] args) {
        Integer[] a = {12, 32, 33, 8, 54, 34, 35, 26, 43, 88, 45};
        heapSort(a);
        System.out.println(Arrays.asList(a));
    }

    private static void heapSort(Integer[] a) {
        int n = a.length;
        // Build a max-heap in place (bottom-up heapify).
        for (int i = n / 2 - 1; i >= 0; i--) {
            siftDown(a, i, n);
        }
        // Repeatedly move the root (the largest remaining element) to the end
        // of the unsorted region, then restore the heap property.
        for (int end = n - 1; end > 0; end--) {
            swap(a, 0, end);
            siftDown(a, 0, end);
        }
    }

    // Restore the heap property for the subtree rooted at i, looking only at
    // the first 'size' elements of the array.
    private static void siftDown(Integer[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++; // larger child
            if (a[i] >= a[child]) break;  // heap property already holds
            swap(a, i, child);
            i = child;
        }
    }

    private static void swap(Integer[] a, int i, int j) {
        Integer temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }
}
Source: https://stackoverflow.com/questions/8938375/an-intuitive-understanding-of-heapsort