hash-code-uniqueness

How to generate a hash code from three longs

社会主义新天地 Submitted on 2020-01-22 14:45:48
Question: I have a HashMap with coordinates as keys. A Coordinate has three longs holding the x, y and z coordinates. (Coordinate is, and needs to be, a custom class, and the coordinates need to be longs.) Now I want to be able to access e.g. the field [5, 10, 4] by doing hashMap.get(new Coordinate(5, 10, 4)). I have implemented the equals method, but that is not enough, since apparently I also need to provide an implementation of hashCode. So my question is: how do I generate a unique hashCode from three longs?
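A minimal sketch of the standard approach (not from the original thread): a truly unique hashCode is impossible here, since three 64-bit longs are folded into one 32-bit int, but the hashCode contract only requires that equal coordinates produce equal hashes. The JDK's own Objects.hash combiner does exactly that:

```java
import java.util.Objects;

// Sketch of the Coordinate class described in the question:
// equals and hashCode overridden together so HashMap lookups work.
final class Coordinate {
    final long x, y, z;

    Coordinate(long x, long y, long z) { this.x = x; this.y = y; this.z = z; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Coordinate)) return false;
        Coordinate c = (Coordinate) o;
        return x == c.x && y == c.y && z == c.z;
    }

    @Override
    public int hashCode() {
        // Not unique (192 bits folded into 32), but equal coordinates
        // always agree, which is all HashMap needs.
        return Objects.hash(x, y, z);
    }
}
```

With both methods in place, hashMap.get(new Coordinate(5, 10, 4)) finds the entry stored under a different but equal Coordinate instance.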

Java hashcodes collide in one case and not the other for the same objects, why? (Code Below)

本秂侑毒 Submitted on 2020-01-02 12:26:12
Question: I tried to write a small program to demonstrate hash collisions in Java when only equals is overridden and not the hashCode() method. This was to prove the theory that two unequal objects can have the same hashCode. It came from an interview question where this behaviour was asked about. I created 200,000 objects, stored them in an array and then compared them to see which are duplicates. (For this I am using a nested for loop iterating over the array of objects after the object creation phase.) For around 200,000 objects I get 9 collisions, the first one being the objects at index 196 and 121949.
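A stripped-down version of the experiment (a hypothetical Key class, not the poster's code): with only equals overridden, two equal objects almost always get different identity hash codes from Object's default, so hash-based collections treat them as distinct:

```java
// Hypothetical Key class: equals overridden, hashCode deliberately
// left as Object's identity-based default.
final class Key {
    final int id;

    Key(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Key && ((Key) o).id == this.id;
    }
    // No hashCode() override: two equal Keys almost always land in
    // different buckets, so a HashSet keeps both copies.
}
```

Conversely, the poster's observed collisions are the reverse effect: among 200,000 distinct objects, some identity hash codes happen to coincide even though the objects are unequal.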

Tinyurl-style unique code: potential algorithm to prevent collisions

狂风中的少年 Submitted on 2019-12-18 03:38:23
Question: I have a system that requires a unique 6-character code to represent an object, and I'm trying to think of a good algorithm for generating them. Here are the prerequisites: I'm using a base-20 system (no caps, numbers, vowels, or l, to prevent confusion and naughty words). The base-20 alphabet allows 64 million combinations. I'll be inserting potentially 5-10 thousand entries at once, so in theory I'd use bulk inserts, which means using a unique key probably won't be efficient or pretty (especially if there
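One collision-free scheme that fits these constraints (a sketch; the class name and multiplier are assumptions, not from the thread): permute a sequential database id with a constant coprime to 20^6, then encode the result in the 20-consonant alphabet. The permutation is a bijection, so distinct ids below 64 million always yield distinct codes, with no collision check needed at insert time:

```java
// Sketch: map sequential ids to scrambled-looking, guaranteed-unique
// 6-character codes over a 20-letter alphabet (consonants minus 'l').
final class ShortCode {
    private static final char[] ALPHABET = "bcdfghjkmnpqrstvwxyz".toCharArray();
    private static final long SPACE = 64_000_000L;   // 20^6
    // Any constant coprime to 2^12 * 5^6 works; 48271 is an arbitrary choice.
    private static final long MULTIPLIER = 48_271L;

    static String encode(long id) {
        long n = (id * MULTIPLIER) % SPACE;          // bijective for 0 <= id < SPACE
        char[] out = new char[6];
        for (int i = 5; i >= 0; i--) {
            out[i] = ALPHABET[(int) (n % 20)];
            n /= 20;
        }
        return new String(out);
    }
}
```

Because uniqueness follows from the math rather than from a database constraint, bulk inserts of 5-10 thousand rows need no unique-key check on the code column.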

Are hash collisions with different file sizes just as likely as same file size?

会有一股神秘感。 Submitted on 2019-12-12 08:28:31
Question: I'm hashing a large number of files, and to avoid hash collisions I'm also storing each file's original size; that way, even if there's a hash collision, it's extremely unlikely that the file sizes will also be identical. Is this sound (a hash collision is equally likely to be of any size), or do I need another piece of information (if a collision is more likely to also be the same length as the original)? Or, more generally: is every file just as likely to produce a particular hash, regardless of original file size?
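Whatever the answer on collision/size correlation, the scheme itself is cheap to implement. A minimal sketch of the composite key the question describes (assuming Java and SHA-256; the class name is illustrative):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Composite dedup key: content digest plus content length, as in the
// question's idea of storing the original size alongside the hash.
final class DedupKey {
    static String of(byte[] content) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256").digest(content);
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b));
            return hex + ":" + content.length;   // e.g. "ba7816…:3"
        } catch (NoSuchAlgorithmException e) {   // SHA-256 is always present
            throw new AssertionError(e);
        }
    }
}
```

Two files only count as duplicates if both the digest and the length match, so even a digest collision between different-sized files is caught.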

Are there circumstances where a hash algorithm can be guaranteed unique?

非 Y 不嫁゛ Submitted on 2019-12-08 20:43:07
Question: If I'm hashing size-constrained, similar data (social security numbers, for example) using a hash algorithm with a larger output size than the data (SHA-256, for example), will the hash guarantee the same level of uniqueness as the original data? Answer 1: You can always create a customized hash that guarantees uniqueness. For data in a known domain (like SSNs), the exercise is relatively simple. If your target hash value actually has more bits available than what you're hashing, the hash simply
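Following the answer's point, a sketch (class name assumed) of a domain-specific collision-free "hash" for SSNs: nine decimal digits fit in well under 64 bits (10^9 < 2^30), so the identity mapping into a long is itself injective and therefore guarantees uniqueness:

```java
// For a fixed small domain like 9-digit SSNs, the identity mapping
// into a 64-bit value is a collision-free hash by construction.
final class SsnHash {
    static long hash(String ssn) {
        String digits = ssn.replace("-", "");
        if (digits.length() != 9) {
            throw new IllegalArgumentException("not a 9-digit SSN");
        }
        return Long.parseLong(digits);   // injective: distinct SSNs -> distinct longs
    }
}
```

Note this is the opposite of a cryptographic hash: it is trivially reversible, so it gives uniqueness but no privacy.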

Is a hash result ever the same as the source value?

不问归期 Submitted on 2019-12-07 08:47:34
Question: This is more of a cryptography theory question, but is it possible that the result of a hash algorithm will ever be the same value as the source? For example, say I have the string: baf34551fecb48acc3da868eb85e1b6dac9de356 If I take the SHA-1 hash of it, the result is: 4d2f72adbafddfe49a726990a1bcb8d34d3da162 In theory, is there ever a case where these two values would match? I'm not asking about SHA-1 specifically here; it's just my example. I'm just wondering if hashing algorithms are built in such a way as to prevent this.
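A brute-force way to probe the question (an illustrative sketch, not from the thread): search for an input whose hex digest equals the input itself. For a 64-hex-character input under SHA-256 the chance per try is about 2^-256, so the search is expected to find nothing; nothing in the algorithm's design forbids such a fixed point, it is simply astronomically unlikely:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Random;

// Look for a SHA-256 "fixed point": a 64-hex-char string whose own
// hex digest equals itself. Expected result: none found.
final class FixedPointSearch {
    static String hex(byte[] d) {
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static String search(int tries, long seed) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            Random rnd = new Random(seed);
            for (int i = 0; i < tries; i++) {
                byte[] candidate = new byte[32];
                rnd.nextBytes(candidate);
                String in = hex(candidate);                  // 64 hex chars
                String out = hex(md.digest(in.getBytes()));  // also 64 hex chars
                if (in.equals(out)) return in;               // a fixed point
            }
            return null;                                     // none found
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e);
        }
    }
}
```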

Java hashcodes collide in one case and not the other for the same objects, why? (Code Below)

纵然是瞬间 Submitted on 2019-12-06 07:29:20
I tried to write a small program to demonstrate hash collisions in java when only equals is overridden and not hashcode() method. This was to prove the theory that two unequal objects can have the same hashcode. This was for an Interview question where the behaviour was asked. I created 200,000 objects, stored them in an array and then compared them to see which are duplicates. (For this I am using a nested for loop iterating over an array of objects after the object creation phase.) For around 200,000 objects I get 9 collisions. First one being object at index 196 and 121949. I then go on to

Is a hash result ever the same as the source value?

走远了吗. Submitted on 2019-12-05 17:53:36
This is more of a cryptography theory question, but is it possible that the result of a hash algorithm will ever be the same value as the source? For example, say I have a string: baf34551fecb48acc3da868eb85e1b6dac9de356 If I get the SHA1 hash on it, the result is: 4d2f72adbafddfe49a726990a1bcb8d34d3da162 In theory, is there ever a case where these two values would match? I'm not asking about SHA1 specifically here - it's just my example. I'm just wondering if hashing algorithms are built in such a way as to prevent this. Well, it would depend on the hashing algorithm - but I'd be surprised to

Fast HashCode of a Complex Object Graph

一个人想着一个人 Submitted on 2019-12-04 03:46:22
Question: I have a pretty complex object and I need to ensure uniqueness of these objects. One solution is overriding GetHashCode(). I have implemented the code noted below:

public override int GetHashCode()
{
    return this._complexObject1.GetHashCode() ^
           this._complexObject2.GetHashCode() ^
           this._complexObject3.GetHashCode() ^
           this._complexObject4.GetHashCode() ^
           this._complexObject5.GetHashCode() ^
           this._complexObject6.GetHashCode() ^
           this._complexObject7.GetHashCode() ^
           this._complexObject8
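One weakness of the XOR combiner above, shown in a Java sketch of the same idea (class and fields are illustrative): XOR is commutative, so objects whose field values are merely swapped collide, and an object whose fields are equal to each other hashes to 0. An order-sensitive multiply-and-add combiner, the same scheme used by Objects.hash and IDE-generated code, avoids both problems:

```java
// Two-field object demonstrating XOR's weakness as a hash combiner
// versus the standard 31-multiplier scheme.
final class Pair {
    final String a, b;

    Pair(String a, String b) { this.a = a; this.b = b; }

    // XOR combiner: symmetric, so ("x","y") and ("y","x") collide,
    // and ("x","x") hashes to 0.
    int xorHash() {
        return a.hashCode() ^ b.hashCode();
    }

    // Order-sensitive combiner: field order affects the result.
    int mixedHash() {
        int h = 17;
        h = 31 * h + a.hashCode();
        h = 31 * h + b.hashCode();
        return h;
    }
}
```

The same fix applies directly to the C# code in the question: replace the chain of `^` with `HashCode.Combine(...)` or an equivalent multiply-and-add loop.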

Are hash collisions with different file sizes just as likely as same file size?

限于喜欢 Submitted on 2019-12-04 02:49:26
I'm hashing a large number of files, and to avoid hash collisions, I'm also storing a file's original size - that way, even if there's a hash collision, it's extremely unlikely that the file sizes will also be identical. Is this sound (a hash collision is equally likely to be of any size), or do I need another piece of information (if a collision is more likely to also be the same length as the original). Or, more generally: Is every file just as likely to produce a particular hash, regardless of original file size? Depends on your hash function, but in general, files that are of the same size