I have a code pattern that translates one integer to another, like this:
int t(int value) {
    switch (value) {
        case 1: return const_1;
        case 2: return const_2;
        // ... more cases ...
        default: return default_value;
    }
}
A switch construct is faster (or at least not slower).
That's mostly because a switch construct gives static data to the compiler, while a runtime structure like a hash map doesn't.
When possible, compilers compile switch constructs into an array of code pointers: each item of the array (indexed by your case values) points to the code for that case. At runtime this takes O(1), while a hash map is O(1) on average but O(n) in the worst case, and in any event needs a larger constant number of memory accesses.
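The jump-table idea above can be sketched by hand when the case values are small and dense. The table contents and the -1 "no mapping" marker here are illustrative, not from the question:

```cpp
#include <cassert>

// Hand-rolled equivalent of a compiled jump table: the case value
// indexes directly into a constant array, so a lookup is one bounds
// check plus one memory access -- O(1), no hashing, no comparisons.
int translate(int value) {
    static const int table[] = { -1, 10, 20, 30 };  // index 0 unused
    if (value < 1 || value > 3) return -1;          // out-of-range key
    return table[value];
}
```

This is essentially what the compiler emits for a dense switch, except the "code pointers" become plain values because every case merely returns a constant.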
I think it is not obvious which is going to be faster. You might need to profile both approaches.
The hash map should have complexity of O(1).
The switch (with non-contiguous keys like yours) may be optimized into a binary search (at least with GCC), which has complexity of O(log n).
On the other hand, any operation done on a hash map will be much more expensive than an operation done in a switch.
Hash table time complexity is generally O(1) when collisions are not considered. The C++ standard doesn't specify how switch is implemented, but it can be compiled as a jump table, whose time complexity is also O(1), or as a binary search, whose time complexity is O(log n), or as a combination, depending on the number of case statements and so on.
In a word: at a small scale like yours, switch is faster, but a hash table might win at a large scale.
The speed of a hash map will depend on two things: the speed of the hash function, and the number of collisions. When all of the values are known ahead of time, it's possible to create a perfect hash function that has no collisions. If you can generate a perfect hash function that only consists of a couple of arithmetic operations, it will potentially be faster than the switch.
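As a minimal sketch of that idea: assume a hypothetical key set {1, 5, 9, 13}. Because these keys happen to form an arithmetic progression, (key - 1) / 4 is a perfect hash using only two arithmetic operations (the keys and values are invented for illustration):

```cpp
#include <cassert>

// Perfect hash for the assumed key set {1, 5, 9, 13}:
// (key - 1) / 4 maps each key to a distinct slot 0..3, no collisions.
int translate(int key) {
    static const int values[] = { 100, 200, 300, 400 };
    if (key < 1 || key > 13 || (key - 1) % 4 != 0)
        return -1;  // not one of the known keys
    return values[(key - 1) / 4];
}
```

Real key sets rarely cooperate this nicely, but tools like gperf can search for a comparably cheap perfect hash for an arbitrary fixed key set.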
An array will have the fastest access time, by definition.
The switch statement compares values, then uses a jump table (an array of code addresses).
The hash map computes a hash value from the key, then either searches a tree in memory or uses the hash value as an index into an array. This is slower because the hash value must be computed first.
On most modern platforms, 64k is not a large amount of data and can be statically allocated as a constant array.
One problem with the array technique is accounting for keys you have not provided mappings for. One solution is to use a unique sentinel value: when the sentinel is returned, you know the key was unknown.
I suggest using a static const array of values.
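A sketch of that suggestion, assuming 16-bit keys (the specific mappings and the -1 sentinel are made up for the example):

```cpp
#include <cassert>
#include <cstdint>

static const int UNKNOWN = -1;  // sentinel: no real mapping uses -1

// A full 64k table initialized to the sentinel, so unknown keys
// are detectable without any extra bookkeeping.
struct Table {
    int entries[65536];
    Table() {
        for (int i = 0; i < 65536; ++i) entries[i] = UNKNOWN;
        entries[1]   = 42;  // illustrative mappings
        entries[500] = 7;
    }
};

int translate(uint16_t key) {
    static const Table t;   // built once, on first use
    return t.entries[key];  // O(1), one memory access
}
```

The lookup itself is a single indexed load; the cost is 256 KB of static data, which is usually acceptable on desktop and server platforms.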
I will add my 5 cents:
For around 50 entries, std::unordered_map (hash based, O(1)) is typically slower than std::map (tree based, O(log N)), and both are slower than boost::flat_map (a sorted vector, O(log N)), which I tend to use in such cases. It is not always the case that a switch can be compiled to a jump table, and when it can, you can simply put your values (or functions) in a vector yourself and access them by index. Otherwise the switch is marginally faster than boost::flat_map.
Please note the word "typically" at the beginning; if you do care about the performance of this piece of code, do the profiling (and share the results with us :)).