I want to know how I could perform some kind of indexed lookup on the keys of a Python dictionary, so that I can find the keys containing a given substring. The dictionary holds approx. 400,000 items, so I am trying to avoid a linear search.
No. The only way to search for a substring in dictionary keys is to look at each key in turn; something like what you've suggested is as good as it gets with a plain dictionary.
However, if you have 400,000 records and you want to speed up your search, I'd suggest using an SQLite database. Then you can just say SELECT * FROM TABLE_NAME WHERE COLUMN_NAME LIKE '%userinput%'; have a look at the documentation for Python's sqlite3 module.
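Something along these lines is what I mean (a minimal sketch; the file name, table name, and the tiny stand-in dictionary are just placeholders for your real 400,000-key dict):

import sqlite3

# Placeholder data standing in for the real 400,000-key dictionary.
myDict = {'logabc': 0, 'qwerty': 1, 'loguvw': 2}

conn = sqlite3.connect('keys.db')
conn.execute('CREATE TABLE IF NOT EXISTS items (key TEXT PRIMARY KEY, value TEXT)')
conn.executemany('INSERT OR REPLACE INTO items VALUES (?, ?)', myDict.items())
conn.commit()

userInput = 'log'
# Bind the pattern as a parameter instead of pasting user input into the SQL string.
rows = conn.execute('SELECT key FROM items WHERE key LIKE ?', ('%' + userInput + '%',))
print([row[0] for row in rows])

conn.close()

Note that LIKE '%...%' still has to scan the whole table; the win, if any, comes from SQLite doing that scan in C rather than in Python bytecode.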
Another option is to use a generator expression; it still scans every key, but it avoids building an intermediate list and lets you start processing matches straight away.
filteredKeys = (key for key in myDict.keys() if userInput in key)
for key in filteredKeys:
    doSomething()
EDIT: If, as you say, you don't care about one-time costs, use a database. SQLite should do what you want damn near perfectly.
I did some benchmarks, and to my surprise the naive loop is actually the fastest option: roughly 1.5 times as fast as a version using a list comprehension and about 4.5 times as fast as a SQLite-driven version. In light of these results, I'd have to go with @Mark Byers and recommend a trie. I've posted the benchmark below, in case someone wants to give it a go.
import random, string, os
import time
import sqlite3

def buildDict(numElements):
    # Random 6-letter keys, plus 10 keys that are guaranteed to contain 'log'.
    aDict = {}
    for i in xrange(numElements - 10):
        aDict[''.join(random.sample(string.letters, 6))] = 0
    for i in xrange(10):
        aDict['log' + ''.join(random.sample(string.letters, 3))] = 0
    return aDict

def naiveLCSearch(aDict, searchString):
    # Linear scan of the keys using a list comprehension.
    filteredKeys = [key for key in aDict.keys() if searchString in key]
    return filteredKeys

def naiveSearch(aDict, searchString):
    # Linear scan of the keys using a plain for loop.
    filteredKeys = []
    for key in aDict:
        if searchString in key:
            filteredKeys.append(key)
    return filteredKeys

def insertIntoDB(aDict):
    conn = sqlite3.connect('/tmp/dictdb')
    c = conn.cursor()
    c.execute('DROP TABLE IF EXISTS BLAH')
    c.execute('CREATE TABLE BLAH (KEY TEXT PRIMARY KEY, VALUE TEXT)')
    for key in aDict:
        c.execute('INSERT INTO BLAH VALUES (?, ?)', (key, aDict[key]))
    conn.commit()
    return conn

def dbSearch(conn):
    cursor = conn.cursor()
    cursor.execute("SELECT KEY FROM BLAH WHERE KEY GLOB '*log*'")
    return [record[0] for record in cursor]

if __name__ == '__main__':
    aDict = buildDict(400000)
    conn = insertIntoDB(aDict)

    startTimeNaive = time.time()
    for i in xrange(3):
        naiveResults = naiveSearch(aDict, 'log')
    endTimeNaive = time.time()
    print 'Time taken for 3 iterations of naive search was', (endTimeNaive-startTimeNaive), 'and the average time per run was', (endTimeNaive-startTimeNaive)/3.0

    startTimeNaiveLC = time.time()
    for i in xrange(3):
        naiveLCResults = naiveLCSearch(aDict, 'log')
    endTimeNaiveLC = time.time()
    print 'Time taken for 3 iterations of naive search with list comprehensions was', (endTimeNaiveLC-startTimeNaiveLC), 'and the average time per run was', (endTimeNaiveLC-startTimeNaiveLC)/3.0

    startTimeDB = time.time()
    for i in xrange(3):
        dbResults = dbSearch(conn)
    endTimeDB = time.time()
    print 'Time taken for 3 iterations of DB search was', (endTimeDB-startTimeDB), 'and the average time per run was', (endTimeDB-startTimeDB)/3.0

    conn.close()
    os.remove('/tmp/dictdb')
For the record, my results were:
Time taken for 3 iterations of naive search was 0.264658927917 and the average time per run was 0.0882196426392
Time taken for 3 iterations of naive search with list comprehensions was 0.403481960297 and the average time per run was 0.134493986766
Time taken for 3 iterations of DB search was 1.19464492798 and the average time per run was 0.398214975993
All times are in seconds.
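In case anyone wants to pursue the trie idea, here's a rough sketch of how I'd approach it (my own illustration, not an existing library; the class names are made up). A plain trie only handles prefix matches, so to find keys that merely contain the substring you'd insert every suffix of every key:

class TrieNode(object):
    def __init__(self):
        self.children = {}
        self.keys = set()   # original dict keys reachable through this node

class SuffixTrie(object):
    # Index every suffix of every key so substring queries avoid a full scan.
    def __init__(self, keys):
        self.root = TrieNode()
        for key in keys:
            for start in range(len(key)):
                node = self.root
                for ch in key[start:]:
                    node = node.children.setdefault(ch, TrieNode())
                    node.keys.add(key)

    def search(self, substring):
        # Walk the trie along the substring; the node reached knows every key
        # that has this substring somewhere inside it.
        node = self.root
        for ch in substring:
            if ch not in node.children:
                return set()
            node = node.children[ch]
        return node.keys

# Placeholder data; the real dictionary would have ~400,000 keys.
index = SuffixTrie(['logabc', 'qwerty', 'loguvw'])
print(index.search('log'))   # prints the two keys containing 'log'

For 400,000 short keys the memory overhead of indexing all suffixes is substantial, so whether it actually beats the ~0.09 s linear scan is something you'd have to measure.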