I saw someone who hashes users' passwords multiple times with MD5 to improve security. I'm not sure if this works, but it doesn't look good to me. So, does it make sense?
Let's assume the hash function you use is a perfect one-way function. Then you can view its output like that of a "random oracle": its output values lie in a finite range (2^128 possible values for MD5).
Now what happens if you apply the hash multiple times? The output still stays in the same range (2^128). It's like saying "Guess my random number!" twenty times, each time thinking of a new number - that doesn't make it harder or easier to guess. There isn't anything "more random" than random. That's not a perfect analogy, but I think it helps to illustrate the problem.
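For illustration, here is a minimal Python sketch (my own, not part of the original answer; the example password is made up) showing that repeated application never leaves the 128-bit output space:

```python
import hashlib

digest = b"p@ssw0rd"  # hypothetical example password
for _ in range(20):
    digest = hashlib.md5(digest).digest()
    # Every round still yields a 16-byte (2^128) value: the space an
    # attacker has to cover does not grow with the number of rounds.
print(len(digest))  # 16
```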
When it comes to brute-forcing a password, your scheme doesn't add any security at all. Even worse, the only thing you could "accomplish" is to weaken security by introducing some possibility of exploiting the repeated application of the hash function. It's unlikely, but at least it's guaranteed that you won't gain anything.
So why is not all lost with this approach? It's because of the point others have made about using thousands of iterations instead of just twenty. Why is slowing the algorithm down a good thing? It's because most attackers will try to gain access using a dictionary (or a rainbow table built from often-used passwords), hoping that one of your users was negligent enough to use one of those (I'm guilty; at least Ubuntu told me so upon installation). On the other hand, it's inhumane to require your users to remember, say, 30 random characters.
That's why we need some form of trade-off: passwords that are easy to remember, while at the same time as hard as possible for attackers to guess. There are two common practices: salts, and slowing the process down by applying lots of iterations of some function instead of a single iteration. PKCS#5 is a good example to look into.
In your case, applying MD5 20000 times instead of 20 times would slow down attackers using a dictionary significantly, because each of their candidate passwords would have to go through the ordinary procedure of being hashed 20000 times in order to still be useful as an attack. Note that this procedure does not affect brute-forcing, as illustrated above.
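As a rough illustration (a Python sketch under the assumption that the attacker has to hash each dictionary word the same way the application does), the cost per guess scales linearly with the round count:

```python
import hashlib
import time

def md5_chain(password: bytes, rounds: int) -> bytes:
    """Apply MD5 'rounds' times in a chain (illustration only, not a recommendation)."""
    digest = password
    for _ in range(rounds):
        digest = hashlib.md5(digest).digest()
    return digest

# A dictionary attacker pays this cost for every candidate password.
for rounds in (20, 20000):
    start = time.perf_counter()
    md5_chain(b"letmein", rounds)
    print(f"{rounds:>6} rounds: {time.perf_counter() - start:.5f}s per guess")
```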
But why is using a salt still better? Because even if you apply the hash 20000 times, a resourceful attacker could pre-compute a large database of common passwords, hashing each of them 20000 times, effectively generating a customized rainbow table specifically targeted at your application. Having done this, they could quite easily attack your application or any other application using your scheme. That's why you also need a unique salt per password: it makes such pre-computed tables useless, and the high cost per password then makes building a fresh table for every salt impractical.
If you want to be on the really safe side, use something like PBKDF2, as specified in PKCS#5.
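A minimal sketch of that approach using Python's standard library (`hashlib.pbkdf2_hmac`); the function names, iteration count, and salt length are illustrative choices of mine, not part of the original answer:

```python
import hashlib
import hmac
import os

ITERATIONS = 200000  # illustrative; tune so one derivation is noticeably slow

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a password hash with PBKDF2-HMAC-SHA256 and a random per-user salt."""
    salt = os.urandom(16)  # unique salt defeats precomputed tables
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, dk        # store both; the salt does not need to be secret

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(dk, expected)  # constant-time comparison
```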
Hashing a password is not encryption. It is a one-way process.
Check out security.stackexchange.com and its password-related questions. They are so popular that we put together this blog post specifically to help people find useful questions and answers.
This question specifically discusses using MD5 20 times in a row - check out Thomas Pornin's answer. Key points in his answer:
- 20 is too low, it should be 20000 or more - password processing is still too fast
- There is no salt: an attacker may attack passwords with very low per-password cost, e.g. rainbow tables - which can be created for any number of MD5 cycles
- Since there is no sure test for knowing whether a given algorithm is secure or not, inventing your own cryptography is often a recipe for disaster. Don't do it
There is such a question on crypto.SE but it is NOT public now. The answer by Paŭlo Ebermann is:
For password-hashing, you should not use a normal cryptographic hash, but something made specially to protect passwords, like bcrypt.
See How to safely store a password for details.
The important point is that password crackers don't have to brute-force the hash output space (2^160 for SHA-1), but only the password space, which is much, much smaller (depending on your password rules - and often dictionaries help). Thus we don't want a fast hash function, but a slow one. Bcrypt and friends are designed for this.
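For example, with the third-party `bcrypt` package for Python (my own choice of library for illustration; the answer above only names bcrypt in general), the salt and the work factor are baked into the stored hash:

```python
import bcrypt  # third-party package: pip install bcrypt

password = b"correct horse battery staple"

# gensalt() embeds a random salt and a tunable work factor into the hash itself.
stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-derives the hash using the stored salt/cost and compares.
print(bcrypt.checkpw(password, stored))        # True
print(bcrypt.checkpw(b"wrong guess", stored))  # False
```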
A similar question, "Guarding against cryptanalytic breakthroughs: combining multiple hash functions", has these answers. Answer by Thomas Pornin:
Combining is what SSL/TLS does with MD5 and SHA-1, in its definition of its internal "PRF" (which is actually a Key Derivation Function). For a given hash function, TLS defines a KDF which relies on HMAC which relies on the hash function. Then the KDF is invoked twice, once with MD5 and once with SHA-1, and the results are XORed together. The idea was to resist cryptanalytic breaks in either MD5 or SHA-1. Note that XORing the outputs of two hash functions relies on subtle assumptions. For instance, if I define SHB-256(m) = SHA-256(m) XOR C, for a fixed constant C, then SHB-256 is as good a hash function as SHA-256; but the XOR of both always yields C, which is not good at all for hashing purposes. Hence, the construction in TLS is not really sanctioned by the authority of science (it just happens not to have been broken). TLS 1.2 does not use that combination anymore; it relies on the KDF with a single, configurable hash function, often SHA-256 (which is, in 2011, a smart choice).
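The SHB-256 counterexample is easy to reproduce; here is a small Python sketch (my own illustration of the point above), where the XOR of the two hashes collapses to the constant C for every input:

```python
import hashlib

C = bytes(range(32))  # any fixed 32-byte constant

def sha256(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

def shb256(m: bytes) -> bytes:
    """SHB-256(m) = SHA-256(m) XOR C: on its own, as strong as SHA-256."""
    return bytes(a ^ b for a, b in zip(sha256(m), C))

def xor_combined(m: bytes) -> bytes:
    """XORing both hashes always yields C, regardless of the input."""
    return bytes(a ^ b for a, b in zip(sha256(m), shb256(m)))

print(xor_combined(b"hello") == C)           # True
print(xor_combined(b"something else") == C)  # True: useless as a hash
```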
As @PulpSpy points out, concatenation is not a good generic way of building hash functions. This was published by Joux in 2004 and then generalized by Hoch and Shamir in 2006, for a large class of constructions involving iterations and concatenations. But mind the fine print: this is not really about surviving weaknesses in hash functions, but about getting your money's worth. Namely, if you take a hash function with a 128-bit output and another with a 160-bit output, and concatenate the results, then collision resistance will be no worse than that of the strongest of the two; what Joux showed is that it will not be much better either. With 128+160 = 288 bits of output, you could aim at 2^144 resistance, but Joux's result implies that you will not go beyond about 2^87.
So the question becomes: is there a way, preferably an efficient one, to combine two hash functions such that the result is as collision-resistant as the strongest of the two, but without incurring the output enlargement of concatenation? In 2006, Boneh and Boyen published a result which simply states that the answer is no, subject to the condition of evaluating each hash function only once. Edit: Pietrzak lifted the latter condition in 2007 (i.e. invoking each hash function several times does not help).
And by PulpSpy:
I'm sure @Thomas will give a thorough answer. In the interim, I'll just point out that the collision resistance of your first construction, H1(M)||H2(M), is surprisingly not that much better than just H1(M). See section 4 of this paper:
http://web.cecs.pdx.edu/~teshrim/spring06/papers/general-attacks/multi-joux.pdf
No, it's not good practice. You must use a salt when hashing, because otherwise the password can be cracked with those rainbow tables.
Source: https://stackoverflow.com/questions/6869129/encryptingmd5-multiple-times-can-improve-security