How do Python dictionary lookup algorithms work internally?
mydi['foo']
If the dictionary has 1,000,000 terms, is a tree search executed? Does the length of the key string matter?
Here's a good explanation: http://wiki.python.org/moin/DictionaryKeys
Pseudocode from above link:
    def lookup(d, key):
        '''dictionary lookup is done in three steps:

           1. A hash value of the key is computed using a hash function.

           2. The hash value addresses a location in d.data which is
              supposed to be an array of "buckets" or "collision lists"
              which contain the (key, value) pairs.

           3. The collision list addressed by the hash value is searched
              sequentially until a pair is found with pair[0] == key. The
              return value of the lookup is then pair[1].
        '''
        h = hash(key)          # step 1
        cl = d.data[h]         # step 2
        for pair in cl:        # step 3
            if key == pair[0]:
                return pair[1]
        else:
            raise KeyError("Key %s not found." % key)
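To make the three steps concrete, here is a runnable sketch of the same idea. The ToyDict class and its bucket layout are my own invention for illustration (the pseudocode above elides how a hash value is mapped onto a small array; I use a modulo for that):

```python
class ToyDict:
    """Hypothetical bucket layout: data is a list of collision lists."""
    def __init__(self, nbuckets=8):
        self.nbuckets = nbuckets
        self.data = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        # Step 1 and 2: hash the key and pick a bucket.
        self.data[hash(key) % self.nbuckets].append((key, value))

def lookup(d, key):
    cl = d.data[hash(key) % d.nbuckets]   # steps 1 and 2
    for pair in cl:                       # step 3: scan the collision list
        if key == pair[0]:
            return pair[1]
    raise KeyError("Key %s not found." % key)

d = ToyDict()
d.insert("foo", 42)
lookup(d, "foo")  # -> 42
```

Only the (usually short) collision list is scanned sequentially, which is why the total number of entries barely matters.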
Hash lookups don't use trees. They use a hash table, and lookups take constant time on average. A hash table will use more space than a tree (on average, roughly twice as much, I believe), but the constant-time lookups and inserts are the win.
To oversimplify: take an MD5 of your key, mod it by the number of slots you have, and that's where you store or look up the key. It doesn't matter how big the table is; lookup always takes the same amount of time as long as you don't have significant collisions, which a good hash function avoids.
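That oversimplification can be written out directly. This sketch uses MD5 as the paragraph does (a real hash table would use a faster, non-cryptographic hash; the function name is mine):

```python
import hashlib

def bucket_index(key, table_size):
    # Hash the key (MD5, as in the oversimplification above) and map
    # the digest onto one of table_size slots with a modulo.
    digest = hashlib.md5(key.encode()).digest()
    h = int.from_bytes(digest, "big")
    return h % table_size

# The slot depends only on the key and the table size, not on how
# many other keys are stored -- hence the constant lookup time.
bucket_index("foo", 8)
```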
Here's some pseudo-code closer to what actually happens. Imagine the dictionary has a data attribute containing the (key, value) pairs and a size attribute giving the number of cells allocated.
    def lookup(d, key):
        perturb = j = hash(key)
        while True:
            cell = d.data[j % d.size]
            if cell.key is EMPTY:
                raise KeyError(key)
            if cell.key is not DELETED and (cell.key is key or cell.key == key):
                return cell.value
            j = (5 * j) + 1 + perturb
            perturb >>= PERTURB_SHIFT
The perturb value ensures that all bits of the hash code are eventually used when resolving hash clashes, and once it has degraded to 0 the (5*j)+1 recurrence will eventually touch all cells in the table.
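That "touches all cells" claim is easy to check. Once perturb has shifted down to zero, the probe reduces to j = (5*j + 1) % size, and for a power-of-two size this recurrence has full period, i.e. it visits every slot exactly once before repeating (the helper below is just a demonstration, not CPython code):

```python
def probe_sequence(start, size):
    # Follow the degenerate probe j = (5*j + 1) % size for `size` steps.
    j = start
    seen = []
    for _ in range(size):
        seen.append(j % size)
        j = (5 * j) + 1
    return seen

# Starting anywhere in an 8-slot table, all 8 slots get visited,
# so the probe is guaranteed to reach an empty cell if one exists.
probe_sequence(7, 8)
```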
size is always kept larger than the number of cells actually in use (CPython resizes the table when it gets roughly two-thirds full), so the probe is guaranteed to eventually hit an empty cell when the key doesn't exist (and normally hits one quickly). There's also a DELETED marker for a cell whose key has been removed: it shouldn't terminate the search, but it isn't currently in use.
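Here is a minimal open-addressing table (my own toy, not CPython's implementation) that makes the role of the EMPTY and DELETED markers visible; it uses the simplified probe without the perturb term and assumes the table stays mostly empty:

```python
EMPTY = object()    # cell never used: safe to stop searching here
DELETED = object()  # cell freed by a deletion: must keep probing past it

class TinyTable:
    def __init__(self, size=8):
        self.size = size                     # power of two, mostly empty
        self.cells = [(EMPTY, None)] * size

    def _probe(self, key):
        # Simplified probe: the (5*j)+1 recurrence without perturb.
        j = hash(key)
        while True:
            yield j % self.size
            j = (5 * j) + 1

    def put(self, key, value):
        reuse = None
        for i in self._probe(key):
            k = self.cells[i][0]
            if k is EMPTY:
                # Key not present; reuse an earlier DELETED slot if any.
                self.cells[reuse if reuse is not None else i] = (key, value)
                return
            if k is DELETED:
                if reuse is None:
                    reuse = i
            elif k == key:
                self.cells[i] = (key, value)  # overwrite existing entry
                return

    def get(self, key):
        for i in self._probe(key):
            k = self.cells[i][0]
            if k is EMPTY:
                raise KeyError(key)           # only EMPTY ends the search
            if k is not DELETED and k == key:
                return self.cells[i][1]

    def delete(self, key):
        for i in self._probe(key):
            k = self.cells[i][0]
            if k is EMPTY:
                raise KeyError(key)
            if k is not DELETED and k == key:
                self.cells[i] = (DELETED, None)
                return
```

If delete simply marked the cell EMPTY, any other key whose probe sequence had passed through that cell would become unreachable; DELETED keeps the probe chain intact.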
As for your question about the length of the key string: hashing a string looks at every character in the string, but a string object also has a field that caches the computed hash. So if you use a different string object for every lookup, the string length may have a bearing; but if you have a fixed set of keys and re-use the same string objects, the hash won't be recalculated after the first time it is used. Python benefits from this: most name lookups go through dictionaries, and a single copy of each variable or attribute name is stored internally, so every time you access an attribute x.y there is a dictionary lookup but not a call to a hash function.
As you mentioned in your title, dicts are hash tables. No tree searching is used. Looking up a key is a nearly constant time operation, regardless of the size of the dict.
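A quick (informal) way to see the constant-time behaviour is to time lookups in dictionaries of very different sizes; the exact numbers vary by machine, so no specific figures are claimed here:

```python
from timeit import timeit

# A dict with a million entries answers lookups in roughly the same
# time as one with a thousand -- no tree walk over the keys happens.
small = {i: i for i in range(1_000)}
big = {i: i for i in range(1_000_000)}

t_small = timeit(lambda: small[500], number=100_000)
t_big = timeit(lambda: big[500_000], number=100_000)
# t_small and t_big come out within the same order of magnitude
# despite the thousand-fold difference in dictionary size.
```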
You might find the answers to this question helpful: How are Python's Built In Dictionaries Implemented
Answer 1: The internal workings are explained in this video
Answer 2: No, a tree search is not done if you have a million records in a dictionary.
Answer 3: Because keys can collide, performance can depend on the size of the dictionary, not on the length of the key string.
Answer 4: Think of the dictionary as an array (contiguous memory locations) in which some blocks go unused. Dictionaries therefore tend to waste memory compared to trees, but for run-time performance they are usually better. Key collisions can sometimes degrade performance. You should read about Consistent Hashing.