Question
I was looking at the Wikipedia entry for Prim's algorithm and I noticed that its time complexity with an adjacency matrix is O(V^2) and its time complexity with a heap and adjacency list is O(E lg(V)) where E is the number of edges and V is the number of vertices in the graph.
Since Prim's algorithm is typically used on dense graphs, E can approach V^2; but when it does, the time complexity with a heap becomes O(V^2 lg(V)), which is worse than O(V^2). Obviously a heap should improve performance over simply scanning an array, yet the stated time complexity says otherwise.
How does the algorithm actually slow down with an improvement?
Answer 1:
Even though the heap saves you from searching through the array, it slows down the "update" part of the algorithm: array updates are O(1), while heap updates are O(log(N)).
In essence, you trade speed in one part of the algorithm for speed in another.
Either way, you must extract the minimum V times. In a dense graph, however, you also perform many updates (on the order of E ≈ V^2 decrease-key operations), while in a sparse graph you don't.
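To make the trade-off concrete, here is a minimal sketch of the array-based version, where extract-min is a linear scan and an update is a single assignment; it assumes a connected graph given as an adjacency matrix with math.inf for missing edges (the function name and representation are illustrative, not from the thread):

```python
import math

def prim_adjacency_matrix(graph):
    """Prim's algorithm with a plain array of keys: O(V^2).

    graph[u][v] is the edge weight, or math.inf if there is no edge.
    Returns the total weight of a minimum spanning tree.
    """
    n = len(graph)
    key = [math.inf] * n    # cheapest known edge into the tree
    in_tree = [False] * n
    key[0] = 0
    total = 0
    for _ in range(n):
        # "Extract-min" by linear scan: O(V) per iteration.
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: key[v])
        in_tree[u] = True
        total += key[u]
        # "Update" by direct assignment: O(1) per neighbor.
        for v in range(n):
            if not in_tree[v] and graph[u][v] < key[v]:
                key[v] = graph[u][v]
    return total
```

With a heap, the scan inside the loop becomes O(log V), but every one of those O(1) assignments becomes an O(log V) heap operation, which is exactly the trade described above.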
Another example off the top of my head is searching for elements in an array. If you're only doing it once, linear search is the best - but if you do lots of queries, it's better to sort it and use binary search every time.
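A minimal illustration of that search trade-off, using Python's bisect for the sorted case (the data and queries are made up):

```python
import bisect

data = [7, 3, 9, 1, 5]

# One query: a linear scan is O(N) and needs no preprocessing.
print(4 in data)    # False

# Many queries: pay O(N log N) once to sort, then O(log N) per lookup.
data.sort()
for q in (1, 4, 9):
    i = bisect.bisect_left(data, q)
    print(q, i < len(data) and data[i] == q)
```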
Answer 2:
From Introduction to Algorithms (Cormen et al.):
Time = Θ(V)·T(EXTRACT-MIN) + Θ(E)·T(DECREASE-KEY)

                     T(EXTRACT-MIN)   T(DECREASE-KEY)   Total
1. array             O(V)             O(1)              O(V^2)
2. binary heap       O(lg V)          O(lg V)           O(E lg V)
3. Fibonacci heap    O(lg V)          O(1)              O(E + V lg V)
Using different data structures causes different time complexities. Plugging the per-operation costs into the formula gives each total: for a binary heap, for instance, Θ(V)·O(lg V) + Θ(E)·O(lg V) = O((V + E) lg V) = O(E lg V) for a connected graph (where E ≥ V − 1).
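As a sketch of the binary-heap row: Python's heapq has no DECREASE-KEY, so a common workaround is to push duplicate entries and skip stale ones on extraction, which preserves the O(E lg V) bound; the adjacency-list representation and names here are illustrative assumptions:

```python
import heapq

def prim_binary_heap(adj):
    """Prim's algorithm with a binary heap: O(E lg V).

    adj[u] is a list of (weight, v) pairs for an undirected,
    connected graph with vertices 0..len(adj)-1.
    Returns the total weight of a minimum spanning tree.
    """
    n = len(adj)
    in_tree = [False] * n
    heap = [(0, 0)]    # (key, vertex); start from vertex 0
    total = 0
    while heap:
        # EXTRACT-MIN costs O(lg V); stale entries are skipped,
        # since heapq offers no DECREASE-KEY to update them in place.
        w, u = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        total += w
        for weight, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (weight, v))
    return total
```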
Answer 3:
I think you misread it somewhat. For dense graphs, the article recommends a Fibonacci heap, whose time complexity of O(E + V log V) is significantly better.
Source: https://stackoverflow.com/questions/716341/prims-algorithm-time-complexity