Fast hash for strings

Question:

I have a set of ASCII strings; let’s say they are file paths. They could be both short and quite long.

I’m looking for an algorithm that can calculate a hash of such strings. The hash should also be a string, but with a fixed length, like youtube video ids:

https://www.youtube.com/watch?v=-F-3E8pyjFo
                                ^^^^^^^^^^^

MD5 seems to be what I need, but it is critical for me to have short hash strings.

Is there a shell command or python library which can do that?

Asked By: Anthony


Answers:

You could use the sum program (assuming you’re on Linux), but keep in mind that the shorter the hash, the more collisions you might have. You can always truncate MD5/SHA hashes as well.
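As a quick sketch of the truncation idea with Python’s hashlib (the helper name and the 11-character length are arbitrary choices here, not anything standard):

```python
import hashlib

def short_md5(s: str, length: int = 11) -> str:
    """Return the first `length` hex characters of the MD5 digest."""
    return hashlib.md5(s.encode("ascii")).hexdigest()[:length]

# Truncating keeps the fixed length but reduces collision resistance
# roughly in proportion to the bits you throw away.
print(short_md5("/usr/local/bin/python"))
```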

EDIT: Here’s a list of hash functions: List of hash functions

Answered By: eugecm

Something to keep in mind is that hash codes are one-way functions – you cannot use them for “video ids”, as you cannot go back from the hash to the original path. Quite apart from anything else, hash collisions are quite likely, and then two different paths produce the same hash, so two “ids” point to the same video instead of different ones.

To create an id like the youtube one, the easiest way is to create a unique id however you normally do (for example, an auto-increment key column in a database) and then map it to a unique string in a reversible way.

For example, you could take an integer id and map it to 0-9a-z in base 36, or even 0-9a-zA-Z in base 62, padding the generated string out to the desired length if the id on its own does not give enough characters.
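A minimal sketch of such a base-62 mapping (the alphabet order and the 11-character padding are arbitrary choices for illustration):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n: int, min_length: int = 11) -> str:
    """Map a non-negative integer to a base-62 string, left-padded with '0'."""
    if n == 0:
        s = "0"
    else:
        digits = []
        while n:
            n, r = divmod(n, 62)
            digits.append(ALPHABET[r])
        s = "".join(reversed(digits))
    return s.rjust(min_length, ALPHABET[0])

def decode_base62(s: str) -> int:
    """Reverse the mapping (leading pad characters decode as zero)."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Because the mapping is reversible, no collision handling is needed: the id, not the string, is the source of truth.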

Answered By: Tim B

I guess this question is off-topic because it is opinion-based, but at least one hint for you: I know the FNV hash because it is used by The Sims 3 to find resources based on their names across the different content packages. They use the 64-bit version, so I guess it is enough to avoid collisions in a relatively large set of reference strings. The hash is easy to implement, if no module satisfies you (pyfasthash has an implementation of it, for example).

To get a short string out of it, I would suggest you use base64 encoding. For example, this is a base64-encoded 64-bit hash: nsTYVQUag88= (and you can get rid of the padding =).
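For illustration, a stdlib-only sketch of 64-bit FNV-1a followed by urlsafe base64 (the offset basis and prime are the published FNV-1a parameters; the helper names are made up here, and a tested module such as pyfasthash may be preferable in practice):

```python
import base64

FNV64_OFFSET = 0xcbf29ce484222325  # published FNV-1a 64-bit offset basis
FNV64_PRIME = 0x100000001b3        # published FNV-1a 64-bit prime

def fnv1a_64(data: bytes) -> int:
    """Plain FNV-1a over bytes, kept within 64 bits."""
    h = FNV64_OFFSET
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

def short_id(path: str) -> str:
    """8-byte FNV-1a digest, urlsafe-base64, padding stripped: 11 chars."""
    digest = fnv1a_64(path.encode("ascii")).to_bytes(8, "big")
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
```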

Edit: I had finally the same problem as you, so I implemented the above idea: https://gist.github.com/Cilyan/9424144

Answered By: Cilyan

Another option: hashids is designed to solve exactly this problem and has been ported to many languages, including Python. It’s not really a hash in the sense of MD5 or SHA1, which are one-way; hashids “hashes” are reversible.

You are responsible for seeding the library with a secret value and selecting a minimum hash length.

Once that is done, the library can do two-way mapping between integers (single integers, like a simple primary key, or lists of integers, to support things like composite keys and sharding) and strings of the configured length (or slightly more). The alphabet used for generating “hashes” is fully configurable.

I have provided more details in this other answer.
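hashids itself is a third-party package; as a stdlib-only toy illustrating the same idea (a secret-keyed, reversible mapping from an integer id to a fixed-length string), something like the following could work. The XOR-with-derived-key construction and all names here are illustrative, not hashids’ actual algorithm:

```python
import hashlib

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def _key(secret: str) -> int:
    # Derive a fixed 64-bit key from the secret (toy construction).
    return int.from_bytes(hashlib.sha256(secret.encode()).digest()[:8], "big")

def obfuscate(n: int, secret: str, length: int = 11) -> str:
    """Map an integer id (< 2**64) to a fixed-length base-62 string."""
    x = n ^ _key(secret)
    chars = []
    for _ in range(length):  # 62**11 > 2**64, so 11 digits always suffice
        x, r = divmod(x, 62)
        chars.append(ALPHABET[r])
    return "".join(chars)

def deobfuscate(s: str, secret: str) -> int:
    """Reverse the mapping, given the same secret."""
    x = 0
    for ch in reversed(s):
        x = x * 62 + ALPHABET.index(ch)
    return x ^ _key(secret)
```

Unlike this toy, the real library also shuffles its alphabet per secret and supports lists of integers, so prefer it for anything serious.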

Answered By: Chris

Note: as of Python 3.3, this method does not give stable results across interpreter runs, because hash() is randomized (see the caveats below):

Python has a built-in hash() function that’s very fast and perfect for most uses:

>>> hash("dfds")
3591916071403198536

You can then make it unsigned:

>>> import ctypes
>>> hashu=lambda word: ctypes.c_uint64(hash(word)).value

You can then turn it into a 16-character hex string (8 bytes):

>>> hashu("dfds").to_bytes(8,"big").hex()

Or an N*2-character string, where N <= 8:

>>> hashn=lambda word, N  : (hashu(word)%(2**(N*8))).to_bytes(N,"big").hex()

..etc. And if you want N to be larger than 8 bytes, you can just hash twice. Python’s built-in is so vastly faster, it’s never worth using hashlib for anything unless you need security… not just collision resistance.

>>> hashnbig=lambda word, N  : ((hashu(word)+2**64*hashu(word+"2"))%(2**(N*8))).to_bytes(N,"big").hex()

And finally, use the urlsafe base64 encoding to make a much better string than "hex" gives you:

>>> from base64 import urlsafe_b64encode
>>> hashnbigu=lambda word, N  : urlsafe_b64encode(((hashu(word)+2**64*hashu(word+"2"))%(2**(N*8))).to_bytes(N,"big")).decode("utf8").rstrip("=")
>>> hashnbigu("foo",16)
'ZblnvrRqHwAy2lnvrR4HrA'

Caveats:

  • Be warned that in Python 3.3 and up, this function is
    randomized between runs and won’t work for some use cases. You can disable this with PYTHONHASHSEED=0

  • See https://github.com/flier/pyfasthash for fast, stable hashes
    that similarly won’t overload your CPU for non-cryptographic applications.

  • Don’t use this lambda style in real code… write it out! And
    stuffing things like 2**32 into your code, instead of making them
    constants, is bad form.

  • In the end, 8 bytes of collision resistance is OK for smaller
    applications… with a million entries, the birthday bound puts your
    collision odds around 0.000003%. That’s a 12-character b64-encoded
    string. But it might not be enough for larger apps.

  • 16 bytes is enough for a UUID/OID in a cache, etc.
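Following the answer’s own advice to write the lambdas out, the whole recipe might look like this as plain functions (names are illustrative):

```python
import ctypes
from base64 import urlsafe_b64encode

BYTES_PER_HASH = 8  # Python's hash() yields a 64-bit value on 64-bit builds

def unsigned_hash(word: str) -> int:
    # Reinterpret the signed result of hash() as an unsigned 64-bit int.
    return ctypes.c_uint64(hash(word)).value

def short_hash(word: str, n_bytes: int = 16) -> str:
    """N-byte hash as an urlsafe-base64 string with padding stripped.

    For n_bytes > 8, concatenate a second independent hash, as the
    answer does. Note: results vary between interpreter runs unless
    PYTHONHASHSEED is fixed.
    """
    value = unsigned_hash(word)
    if n_bytes > BYTES_PER_HASH:
        value += 2**64 * unsigned_hash(word + "2")
    value %= 2 ** (n_bytes * 8)
    return urlsafe_b64encode(value.to_bytes(n_bytes, "big")).decode("ascii").rstrip("=")
```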

Speed comparison for producing 300k 16-byte hashes from bytes input:

builtin: 0.188
md5: 0.359
fnvhash_c: 0.113

For a complex input (a tuple of 3 integers, for example), you have to convert to bytes to use the non-builtin hashes; this adds a lot of conversion overhead, making the builtin shine.

builtin: 0.197
md5: 0.603
fnvhash_c: 0.284
Answered By: Erik Aronesty