Background:
You can use the following method to aggregate hash codes: http://docs.oracle.com/javase/7/docs/api/java/util/Objects.html#hash(java.lang.Object...)
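As a minimal sketch of what that call looks like (the example strings are placeholders of my own):

import java.util.Objects;

// Combines the hash codes of the arguments in order; equivalent to
// Arrays.hashCode(new Object[] { "foo", "bar", "baz" }).
int combined = Objects.hash("foo", "bar", "baz");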
So, as I understand it, you effectively have some fixed set of strings that you need to identify by hash code, and that set will never change?
If that's the case, it doesn't particularly matter what scheme you use, so long as it gives you unique numbers for the different strings/combinations of strings. I would start by just concatenating the strings, calling String.hashCode() on the result, and seeing whether you end up with unique numbers. If you don't, then you could try something like the following.
A possible scheme for a 64-bit hash code is to pre-generate a table of 256 random 64-bit values and then, for each byte of each character, multiply the running hash by a large constant and XOR in the corresponding table entry. An implementation based on values suggested in Numerical Recipes would be:
private static final long[] byteTable;
private static final long HSTART = 0xBB40E64DA205B064L;
private static final long HMULT = 7664345821815920749L;

static {
    byteTable = new long[256];
    long h = 0x544B2FBACAAF1684L;
    for (int i = 0; i < 256; i++) {
        for (int j = 0; j < 31; j++) {
            h = (h >>> 7) ^ h;
            h = (h << 11) ^ h;
            h = (h >>> 10) ^ h;
        }
        byteTable[i] = h;
    }
}
The above initialises our array of random numbers. We use an XORShift generator, but really any fairly good-quality random number generator would do (creating a SecureRandom with a particular seed and then calling nextLong() would be fine). Then, to generate a hash code:
public static long hashCode(String cs) {
    if (cs == null) return 1L;
    long h = HSTART;
    final long hmult = HMULT;
    final long[] ht = byteTable;
    for (int i = cs.length() - 1; i >= 0; i--) {
        char ch = cs.charAt(i);
        // Mix in each 16-bit char one byte at a time: multiply the hash, then XOR with a table entry.
        h = (h * hmult) ^ ht[ch & 0xff];
        h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
    }
    return h;
}
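If you'd rather take the seeded SecureRandom route mentioned above, a rough sketch of the alternative table setup could look like this (the seed bytes and method name are arbitrary choices of mine):

import java.security.SecureRandom;

// Fill the 256-entry table from a seeded SecureRandom instead of the XORShift loop.
// A fixed seed keeps the table reproducible between runs.
private static long[] buildByteTable() {
    SecureRandom rnd = new SecureRandom(new byte[] { 0x13, 0x37 });
    long[] table = new long[256];
    for (int i = 0; i < 256; i++) {
        table[i] = rnd.nextLong();
    }
    return table;
}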
A guide to consider is that, given a hash code of n bits, on average you'd expect to have to hash on the order of 2^(n/2) strings before you get a collision. Put another way, with a 64-bit hash you'd expect a collision after around 4 billion strings, so if you're dealing with up to, say, a couple of million strings, the chances of a collision are pretty negligible.
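To put a rough number on that, here's a quick back-of-the-envelope check using the standard birthday approximation (the two-million figure is just an illustrative input of mine):

// P(collision) ≈ 1 - exp(-k^2 / (2 * 2^64)) for k uniformly distributed 64-bit hashes.
double k = 2_000_000.0;                         // a couple of million strings
double space = Math.pow(2, 64);                 // number of possible 64-bit hash values
double p = 1 - Math.exp(-(k * k) / (2 * space));
System.out.printf("P(collision) ≈ %.1e%n", p);  // roughly 1.1e-07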
Another option would be MD5, which is a very strong hash (practically secure), but it is a 128-bit hash, so you have the slight disadvantage of having to deal with 128-bit values. I would say MD5 is overkill for these purposes-- as I say, with a 64-bit hash, you can deal fairly safely with in the order of a few million strings.
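If you did decide to go the MD5 route, the standard MessageDigest API gives you the 128-bit value as 16 bytes; a minimal sketch (the method name is mine):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Returns the 16-byte (128-bit) MD5 digest of the given string.
static byte[] md5Of(String s) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    return md.digest(s.getBytes(StandardCharsets.UTF_8));
}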
(Sorry, I should clarify -- MD5 was designed as a secure hash; it's just that it has since been found not to be secure. A "secure" hash is one where, given a particular hash, it isn't feasible to deliberately construct input that would lead to that hash. In some circumstances -- but not, as I understand it, in yours -- you would need this property. You might need it, on the other hand, if the strings you're dealing with are user-input data -- i.e. a malicious user could deliberately try to confuse your system.) You might also be interested in the following, which I wrote in the past:
Standard Java practice is to simply write:
final int prime = 31;
int result = 1;
for( String s : strings )
{
    result = result * prime + s.hashCode();
}
// result is the hashcode.
If you happen to use Java, you can create an array of strings (or convert a collection to an array) and then use Arrays.hashCode(), as documented here.
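For example (the strings here are arbitrary placeholders of my own):

import java.util.Arrays;
import java.util.List;

List<String> strings = Arrays.asList("foo", "bar", "baz");
// Arrays.hashCode(Object[]) gives the same value as Arrays.asList(array).hashCode().
int hash = Arrays.hashCode(strings.toArray(new String[0]));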
I see no reason not to concatenate the strings and compute the hashcode for the concatenation.
As an analogy, if I wanted to compute an MD5 checksum for a memory block, I wouldn't split the block into smaller pieces, compute individual MD5 checksums for them, and then combine those with some ad hoc method.
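A minimal sketch of that approach (the separator character is my own addition, so that different groupings such as ("ab", "c") and ("a", "bc") don't concatenate to the same key):

// Join the strings with a separator and hash the result with the built-in String.hashCode().
String key = String.join("\u0000", "ab", "c");
int hash = key.hashCode();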
Let's solve your root problem.
Don't use a hashcode. Just add an integer primary key for each String.
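One way to read that suggestion in plain Java, as an in-memory analogue of a database primary key (the mapping table and sample values are my own illustration):

import java.util.HashMap;
import java.util.Map;

Map<String, Integer> idByString = new HashMap<>();
for (String s : new String[] { "foo", "bar", "foo" }) {
    // Each distinct string gets the next sequential id; repeats reuse the existing one.
    int id = idByString.computeIfAbsent(s, k -> idByString.size() + 1);
    System.out.println(s + " -> " + id);
}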