What's Up with O(1)?

既然无缘 2020-12-22 17:40

I have been noticing some very strange usage of O(1) in discussion of algorithms involving hashing and types of search, often in the context of using a dictionary type provided by the language.

13 Answers
  • 2020-12-22 17:55

    I think when many people throw around the term "O(1)" they implicitly have in mind a "small" constant, whatever "small" means in their context.

    You have to take all this big-O analysis with context and common sense. It can be an extremely useful tool or it can be ridiculous, depending on how you use it.

  • 2020-12-22 17:59

    My understanding is that O(1) is not necessarily constant; rather, it is not dependent on the variables under consideration. Thus a hash lookup can be said to be O(1) with respect to the number of elements in the hash, but not with respect to the length of the data being hashed or ratio of elements to buckets in the hash.
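
    As a rough illustration, here is a minimal timing sketch in Python (the sizes are arbitrary, and it assumes CPython, which caches string hashes, so tuples are used to expose the per-key hashing cost):

        import timeit

        # Lookup time stays roughly flat as the dict grows: O(1) with respect
        # to the number of elements.
        for n in (1_000, 100_000, 1_000_000):
            d = {i: i for i in range(n)}
            t = timeit.timeit(lambda: d[n - 1], number=500_000)
            print(f"{n:>9} entries: {t:.3f}s for 500k lookups")

        # Hashing the key itself is linear in its length. Tuples are used
        # because CPython does not cache tuple hashes, so each call really
        # does O(K) work.
        for k in (1_000, 100_000, 1_000_000):
            key = (0,) * k
            t = timeit.timeit(lambda: hash(key), number=1_000)
            print(f"key length {k:>9}: {t:.3f}s for 1000 hashes")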

    The other element of confusion is that big O notation describes limiting behavior. Thus, a function f(N) for small values of N may indeed show great variation, but you would still be correct to say it is O(1) if the limit as N approaches infinity is constant with respect to N.

  • 2020-12-22 18:00

    Yes, garbage collection does affect the asymptotic complexity of algorithms running in the garbage collected arena. It is not without cost, but it is very hard to analyze without empirical methods, because the interaction costs are not compositional.

    The time spent garbage collecting depends on the algorithm being used. Typically, modern garbage collectors toggle modes as memory fills up to keep these costs under control. For instance, a common approach is to use a Cheney-style copying collector when memory pressure is low, because it pays a cost proportional to the size of the live set in exchange for using more space, and to switch to a mark-and-sweep collector when memory pressure becomes greater, because even though it pays a cost proportional to the live set for marking and to the whole heap or dead set for sweeping, it avoids the copying collector's extra space overhead. By the time you add card-marking and other optimizations, the worst-case costs for a practical garbage collector may actually be a fair bit worse, picking up an extra logarithmic factor for some usage patterns.
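
    To make that cost model concrete, here is a toy (deliberately naive) mark-and-sweep sketch in Python; the only point is that marking touches each live object once, while sweeping scans every heap cell:

        class Obj:
            """A toy heap object with outgoing references."""
            def __init__(self):
                self.refs = []
                self.marked = False

        heap = []                      # every allocated cell, live or dead

        def alloc():
            o = Obj()
            heap.append(o)
            return o

        def mark(roots):
            # Marking is proportional to the live set: each reachable
            # object is visited exactly once.
            stack = list(roots)
            while stack:
                o = stack.pop()
                if not o.marked:
                    o.marked = True
                    stack.extend(o.refs)

        def sweep():
            # Sweeping is proportional to the whole heap: every cell is
            # inspected, whether it survived or not.
            global heap
            survivors = [o for o in heap if o.marked]
            for o in survivors:
                o.marked = False
            heap = survivors

        roots = [alloc() for _ in range(3)]
        garbage = [alloc() for _ in range(1_000)]
        mark(roots)
        sweep()
        print(len(heap))               # 3: only objects reachable from the roots survive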

    So, if you allocate a big hash table, even if you access it with O(1) searches for its entire lifetime, if you do so in a garbage collected environment, the garbage collector will occasionally traverse the entire array, because it is of size O(n), and you will pay that cost periodically during collection.
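
    You can watch this happen in CPython (assuming its cycle collector): dictionary look-ups stay O(1), but a full collection traverses every tracked object the table keeps alive, so its cost grows with the table's size:

        import gc
        import time

        def full_collection_seconds():
            start = time.perf_counter()
            gc.collect()        # a full collection traverses all tracked containers
            return time.perf_counter() - start

        gc.disable()            # only collect when we say so
        table = {}
        for target in (100_000, 500_000, 2_000_000):
            while len(table) < target:
                i = len(table)
                table[i] = [i]  # list values are GC-tracked, so they join the traversal
            print(f"{target:>9} live entries: gc.collect() took {full_collection_seconds():.3f}s")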

    The reason we usually leave it off of the complexity analysis of algorithms is that garbage collection interacts with your algorithm in non-trivial ways. How bad of a cost it is depends a lot on what else you are doing in the same process, so the analysis is not compositional.

    Moreover, above and beyond the copy vs. compact vs. mark and sweep issue, the implementation details can drastically affect the resulting complexities:

    1. Incremental garbage collectors that track dirty bits, etc. can all but make those larger re-traversals disappear.
    2. Whether your GC runs periodically based on wall-clock time or proportionally to the number of allocations.
    3. Whether a mark and sweep style algorithm is concurrent or stop-the-world.
    4. Whether it marks fresh allocations black, or leaves them white until it drops them into a black container.
    5. Whether your language admits modification of pointers, which can let some garbage collectors work in a single pass.

    Finally, when discussing an algorithm, we are discussing a straw man. The asymptotics will never fully incorporate all of the variables of your environment. Rarely do you ever implement every detail of a data structure as designed. You borrow a feature here and there, you drop a hash table in because you need fast unordered key access, you use a union-find over disjoint sets with path compression and union by rank to merge memory-regions over there because you can't afford to pay a cost proportional to the size of the regions when you merge them or what have you. These structures are thought primitives and the asymptotics help you when planning overall performance characteristics for the structure 'in-the-large' but knowledge of what the constants are matters too.
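
    For reference, the union-find structure mentioned above fits in a few lines; this is only a generic Python sketch with path compression and union by rank, not tied to any particular memory-region scheme:

        class DisjointSets:
            """Union-find with path compression and union by rank: the
            amortized cost per operation is inverse-Ackermann, effectively
            a small constant."""
            def __init__(self, n):
                self.parent = list(range(n))
                self.rank = [0] * n

            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path compression (halving)
                    x = self.parent[x]
                return x

            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra == rb:
                    return
                if self.rank[ra] < self.rank[rb]:
                    ra, rb = rb, ra
                self.parent[rb] = ra          # attach the shorter tree under the taller one
                if self.rank[ra] == self.rank[rb]:
                    self.rank[ra] += 1

        # merging "regions" costs near-constant time per union,
        # independent of how many cells each region contains
        ds = DisjointSets(10)
        ds.union(2, 3)
        ds.union(3, 7)
        print(ds.find(2) == ds.find(7))   # True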

    You can implement that hash table with perfectly O(1) asymptotic characteristics, just don't use garbage collection; map it into memory from a file and manage it yourself. You probably won't like the constants involved though.
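
    As a sketch of what "map it into memory and manage it yourself" can look like, here is a toy open-addressing table over a memory-mapped file in Python. The file name, slot count, and the shortcut of storing only a 64-bit key hash (no full-key comparison) are arbitrary choices for the example; the data lives in the mapped file, so look-ups allocate nothing for a collector to scan:

        import hashlib
        import mmap
        import os
        import struct

        SLOTS = 1 << 16                 # fixed table size, chosen up front
        SLOT_FMT = "<QQ"                # 64-bit key hash, 64-bit value
        SLOT_SIZE = struct.calcsize(SLOT_FMT)

        class MmapTable:
            """Toy open-addressing hash table backed by a memory-mapped file."""

            def __init__(self, path):
                size = SLOTS * SLOT_SIZE
                if not os.path.exists(path):
                    with open(path, "wb") as f:
                        f.truncate(size)        # zero-filled file; 0 means "empty slot"
                self.f = open(path, "r+b")
                self.mm = mmap.mmap(self.f.fileno(), size)

            def _hash(self, key: bytes) -> int:
                h = int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "little")
                return h or 1                   # 0 is reserved for empty slots

            def put(self, key: bytes, value: int) -> None:
                h = self._hash(key)
                for i in range(SLOTS):          # linear probing
                    off = ((h + i) % SLOTS) * SLOT_SIZE
                    stored, _ = struct.unpack_from(SLOT_FMT, self.mm, off)
                    if stored in (0, h):        # empty slot, or same key hash
                        struct.pack_into(SLOT_FMT, self.mm, off, h, value)
                        return
                raise RuntimeError("table full")

            def get(self, key: bytes):
                h = self._hash(key)
                for i in range(SLOTS):
                    off = ((h + i) % SLOTS) * SLOT_SIZE
                    stored, value = struct.unpack_from(SLOT_FMT, self.mm, off)
                    if stored == h:
                        return value
                    if stored == 0:
                        return None
                return None

        t = MmapTable("table.bin")
        t.put(b"answer", 42)
        print(t.get(b"answer"))                 # 42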

  • 2020-12-22 18:09

    I can't speak to the other discussions you've seen, but there is at least one hashing algorithm that is guaranteed to be O(1).

    Cuckoo hashing maintains an invariant so that there is no chaining in the hash table. Insertion is amortized O(1); retrieval is always O(1). I've never seen an implementation of it; it was newly discovered when I was in college. For relatively static data sets it should be a very good O(1), since it calculates two hash functions, performs two lookups, and immediately knows the answer.
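
    A toy version is short enough to sketch; this is just a didactic Python sketch with an arbitrary capacity, kick limit, and seeded-hash trick, not a tuned implementation:

        import random

        class CuckooHash:
            """Two tables, two hash functions: look-ups probe at most one
            slot per table, so retrieval is O(1) in the worst case.
            Insertion evicts and relocates, rebuilding with fresh seeds
            and more space if it cycles (amortized O(1) in expectation)."""

            MAX_KICKS = 32

            def __init__(self, capacity=8):
                self.capacity = capacity
                self._reseed()
                self.t1 = [None] * capacity
                self.t2 = [None] * capacity

            def _reseed(self):
                self.s1 = random.getrandbits(64)
                self.s2 = random.getrandbits(64)

            def _h1(self, key):
                return hash((self.s1, key)) % self.capacity

            def _h2(self, key):
                return hash((self.s2, key)) % self.capacity

            def get(self, key):
                for slot in (self.t1[self._h1(key)], self.t2[self._h2(key)]):
                    if slot is not None and slot[0] == key:
                        return slot[1]
                raise KeyError(key)

            def put(self, key, value):
                i1, i2 = self._h1(key), self._h2(key)
                if self.t1[i1] is not None and self.t1[i1][0] == key:
                    self.t1[i1] = (key, value)
                    return
                if self.t2[i2] is not None and self.t2[i2][0] == key:
                    self.t2[i2] = (key, value)
                    return
                entry = (key, value)
                for _ in range(self.MAX_KICKS):
                    i = self._h1(entry[0])
                    entry, self.t1[i] = self.t1[i], entry   # kick out whatever was there
                    if entry is None:
                        return
                    i = self._h2(entry[0])
                    entry, self.t2[i] = self.t2[i], entry
                    if entry is None:
                        return
                self._rehash(entry)                         # cycle detected: grow and reinsert

            def _rehash(self, pending):
                items = [s for s in self.t1 + self.t2 if s is not None] + [pending]
                self.capacity *= 2
                self._reseed()
                self.t1 = [None] * self.capacity
                self.t2 = [None] * self.capacity
                for k, v in items:
                    self.put(k, v)

        c = CuckooHash()
        for i in range(100):
            c.put(i, i * i)
        print(c.get(7))   # 49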

    Mind you, this is assuming the hash calculation is O(1) as well. You could argue that for length-K strings, any hash is minimally O(K). In reality, you can bound K pretty easily, say K < 1000. O(K) ~= O(1) for K < 1000.
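
    For instance, a simple FNV-1a hash does one xor and one multiply per byte of the key, so the "O(1) hash" really costs O(K) in the key length before any bucket is looked at; a quick Python sketch:

        def fnv1a_64(s: str) -> int:
            """64-bit FNV-1a: a single pass over the key, so hashing a
            length-K string is O(K) work."""
            h = 0xcbf29ce484222325                              # FNV offset basis
            for b in s.encode("utf-8"):
                h ^= b
                h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF    # FNV prime, mod 2**64
            return h

        print(fnv1a_64("hello") % 1024)     # bucket index in a 1024-slot table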

  • 2020-12-22 18:10

    I can see what you're saying, but I think there are a couple of basic assumptions underlying the claim that look-ups in a hash table have a complexity of O(1).

    • The hash function is reasonably designed to avoid a large number of collisions.
    • The set of keys is pretty much randomly distributed, or at least not purposely designed to make the hash function perform poorly.

    The worst-case complexity of a hash table look-up is O(n), but that's extremely unlikely given the above two assumptions.
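
    You can force that worst case deliberately; in this Python sketch the class names are made up, and the constant __hash__ is exactly the kind of key that is "purposely designed to make the hash function perform poorly":

        import timeit

        class BadKey:
            """Every instance hashes to the same value, so every key collides."""
            def __init__(self, n):
                self.n = n
            def __hash__(self):
                return 42                       # constant hash: one giant collision chain
            def __eq__(self, other):
                return isinstance(other, BadKey) and self.n == other.n

        class GoodKey(BadKey):
            def __hash__(self):
                return hash(self.n)             # well-distributed hash

        for cls in (GoodKey, BadKey):
            d = {cls(i): i for i in range(1_000)}
            probe = cls(999)
            t = timeit.timeit(lambda: d[probe], number=500)
            print(f"{cls.__name__}: {t:.4f}s")  # BadKey look-ups degrade toward O(n)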

  • 2020-12-22 18:11

    In general, I think people use them comparatively, without regard to exactness. For example, hash-based data structures are O(1) (average) look-up if designed well and you have a good hash. If everything hashes to a single bucket, then it's O(n). Generally, though, one uses a good algorithm and the keys are reasonably distributed, so it's convenient to talk about it as O(1) without all the qualifications. Likewise with lists, trees, etc. We have in mind certain implementations, and it's simply more convenient to talk about them, when discussing generalities, without the qualifications. If, on the other hand, we're discussing specific implementations, then it probably pays to be more precise.
