Question
I'm writing a P2P application in Python and am using the hashlib module to identify files that have the same contents but different names within the network.
The thing is, I tested the code that hashes the files on Windows (Vista) with Python 2.7, and it's very fast (less than a second for a couple of gigabytes). On Linux (Fedora 12, with Python 2.6.2, and with Python 2.7.1 that I compiled myself because I couldn't find an RPM with yum), it is much slower: almost a minute for files under 1 GB.
The question is: why? And can I do something to improve the performance on Linux?
The code that computes the hash is:
import hashlib
import os
...
def crear_lista(directorio):
    lista = open(archivo, "w")
    for (root, dirs, files) in os.walk(directorio):
        for f in files:
            # file to hash
            h = open(os.path.join(root, f), "r")
            # compute the file's hash
            md5 = hashlib.md5()
            while True:
                trozo = h.read(md5.block_size)
                if not trozo: break
                md5.update(trozo)
            # each entry is the file name and its hash
            size = str(os.path.getsize(os.path.join(root, f)) / 1024)
            digest = md5.hexdigest()
            # first line: file name
            # second line: size in KB
            # third line: hash
            lines = f + "\n" + size + "\n" + digest + "\n"
            lista.write(lines)
            del md5
            h.close()
    lista.close()
I changed the "r" to "rb" and to "rU", but the results are the same.
Answer 1:
You're reading the file in 64-byte blocks (hashlib.md5().block_size is 64) and hashing each one, which means an enormous number of read() and update() calls for large files.
You should use a much larger read size, in the range of 256 KB (262144 bytes) to 4 MB (4194304 bytes), and hash that; the digup program, for instance, reads in 1 MB blocks:
block_size = 1048576  # 1 MB
while True:
    trozo = h.read(block_size)
    if not trozo:
        break
    md5.update(trozo)
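Putting that advice together with the loop from the question, a complete helper might look like the sketch below. The function name md5_de_archivo and the 1 MB default are illustrative choices, not part of the original code; the key points are opening the file in binary mode ("rb") and reading in large chunks rather than md5.block_size.

```python
import hashlib

def md5_de_archivo(ruta, block_size=1048576):
    """Return the hex MD5 digest of a file, read in block_size chunks (1 MB by default)."""
    md5 = hashlib.md5()
    # "rb" avoids any newline translation and is required for correct hashes on Windows
    with open(ruta, "rb") as h:
        while True:
            trozo = h.read(block_size)
            if not trozo:
                break
            md5.update(trozo)
    return md5.hexdigest()
```

The digest is identical regardless of the chunk size; only the number of Python-level read()/update() calls changes, which is where the time goes on large files.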
Source: https://stackoverflow.com/questions/4418042/hashlib-in-windows-and-linux