[FEATURE]: Add `perfect_hashing` probing scheme
**Is your feature request related to a problem? Please describe.**
Perfect hash functions describe an injective mapping from the input key domain into the hash map's slot index domain. In other words, each distinct key hashes to a distinct slot in the map.
This setup allows for a set of optimizations (contrasted with a conventional probing lookup in the sketch after this list):
- Perfect hash functions don't require any probing logic, since a slot can never hold a colliding key that differs from the probe key.
- The injectivity constraint guarantees that every index produced by the hash function is smaller than the map's capacity, so we can get rid of the remainder computation.
- Key equality comparison can be simplified, i.e., we only have to check whether the key in the slot equals a sentinel or not.
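To make the contrast concrete, here is a minimal host-side sketch (not cuco code; `probe_find`, `perfect_find`, and the identity hash are purely illustrative) comparing a conventional open-addressing lookup with a perfect-hashing lookup:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::int32_t empty_sentinel = -1;

// Conventional open addressing: remainder computation, probing loop, key comparison.
template <class Hash>
std::int32_t const* probe_find(std::vector<std::int32_t> const& slots, std::int32_t key, Hash hash)
{
  auto idx = hash(key) % slots.size();
  while (true) {
    if (slots[idx] == empty_sentinel) { return nullptr; }  // key not present
    if (slots[idx] == key) { return &slots[idx]; }         // found after 0..n probing steps
    idx = (idx + 1) % slots.size();                        // linear probing (assumes a free slot exists)
  }
}

// Perfect hashing: direct index, no remainder, only a sentinel check.
template <class Hash>
std::int32_t const* perfect_find(std::vector<std::int32_t> const& slots, std::int32_t key, Hash hash)
{
  auto const idx = hash(key);  // guaranteed < slots.size() by the perfect-hash precondition
  return slots[idx] == empty_sentinel ? nullptr : &slots[idx];
}

int main()
{
  std::vector<std::int32_t> slots(8, empty_sentinel);
  auto hash = [](std::int32_t k) { return static_cast<std::size_t>(k); };  // identity is perfect on [0, 8)
  slots[hash(3)] = 3;
  return (perfect_find(slots, 3, hash) != nullptr && probe_find(slots, 3, hash) != nullptr) ? 0 : 1;
}
```

The perfect-hashing path has no loop, no `%`, and only a single comparison against the sentinel.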
**Describe the solution you'd like**
Add a new class `cuco::perfect_hashing<class Hash>` to our probing scheme zoo which behaves as follows:
When the dereferencing operator of the probing iterator is called for the first time (at the initial probing position), return `slots + hash(key)`. After incrementing the iterator, always return `end()`, meaning that there is at most one probing step.
A user must ensure that the `Hash` function in combination with the input key set actually forms a perfect hash function and that the maximum hash value is smaller than the map's capacity. Otherwise, behavior is undefined.
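A minimal sketch of how such a probing scheme and its member iterator could look, assuming a pointer-like slot iterator; the member names (`make_iterator`, `end`) are invented for illustration and do not match cuco's actual probing-scheme interface:

```cpp
#include <cstddef>

// Illustrative stand-in for the proposed cuco::perfect_hashing probing scheme.
template <class Hash>
class perfect_hashing {
 public:
  explicit constexpr perfect_hashing(Hash hash = {}) : hash_{hash} {}

  // Each probing scheme defines its own iterator as a member class (see the notes below).
  template <class SlotIterator>
  class probing_iterator {
   public:
    constexpr probing_iterator(SlotIterator slot, bool at_end) : slot_{slot}, at_end_{at_end} {}

    // First dereference yields the initial (and only) probing position.
    constexpr SlotIterator operator*() const { return slot_; }

    // Any increment transitions to the end state: at most one probing step.
    constexpr probing_iterator& operator++()
    {
      at_end_ = true;
      return *this;
    }

    // Equality only distinguishes "end" from "not end", which is all the probing loop needs.
    friend constexpr bool operator==(probing_iterator const& lhs, probing_iterator const& rhs)
    {
      return lhs.at_end_ == rhs.at_end_;
    }
    friend constexpr bool operator!=(probing_iterator const& lhs, probing_iterator const& rhs)
    {
      return !(lhs == rhs);
    }

   private:
    SlotIterator slot_;
    bool at_end_;
  };

  // Initial probing position: slots + hash(key); no remainder computation.
  template <class Key, class SlotIterator>
  constexpr auto make_iterator(Key const& key, SlotIterator slots) const
  {
    return probing_iterator<SlotIterator>{slots + static_cast<std::ptrdiff_t>(hash_(key)), false};
  }

  template <class SlotIterator>
  constexpr auto end(SlotIterator slots) const
  {
    return probing_iterator<SlotIterator>{slots, true};
  }

 private:
  Hash hash_;
};
```

The iterator only carries the initial slot position plus an end flag, so callers perform at most one probing step before hitting `end()`.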
Notes on the implementation:
- Currently each of our probing schemes uses the same `probing_iterator` class. This new probing scheme doesn't fit into the logic of the existing iterator. Thus, I propose to let each probing scheme define its own `probing_iterator` as a member class.
- Since perfect hashing only requires bitwise comparison against the sentinel, we ignore any user-specified `KeyEqual` operator (see the sketch below).
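To illustrate the second bullet: with perfect hashing, a lookup never needs to call a user-provided `KeyEqual`; checking the slot against the empty sentinel is sufficient. A standalone sketch, with `bitwise_equal` standing in for the bitwise comparison done internally:

```cpp
#include <cstring>

// Stand-in for the bitwise sentinel comparison performed internally.
template <class Key>
bool bitwise_equal(Key const& lhs, Key const& rhs)
{
  return std::memcmp(&lhs, &rhs, sizeof(Key)) == 0;
}

// With perfect hashing, "the slot is not empty" already implies
// "the slot holds the probe key", so no KeyEqual call is needed.
template <class Key>
bool contains(Key const& slot_key, Key const& empty_key_sentinel)
{
  return !bitwise_equal(slot_key, empty_key_sentinel);
}
```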
**Describe alternatives you've considered**
There is one more optimization we could apply, but I would vote against it for technical reasons:
Perfect hashing guarantees that there are no collisions. Thus, we could insert keys using non-atomic STG instructions, which have proven to be significantly faster than atomic CAS operations.
This, however, leads to some undesirable side effects due to the relaxed memory ordering of the GPU, which ultimately leads to implausible return values from some of our APIs (`insert_and_find` and also bulk `insert`; see the example in the bottom paragraph of https://github.com/NVIDIA/cuCollections/issues/475#issuecomment-2113437463).
If this optimization is desired, it can still be enabled by specifying `cuda::thread_scope_thread` when instantiating the map type. This is a bit hacky, but I think it's better than breaking the existing logic and introducing spurious errors into the aforementioned return values.
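For reference, the two insert flavors discussed above boil down to something like the following standalone device-code sketch (not cuco's actual insert path; the sentinel is passed in explicitly here):

```cpp
// Current behavior: claim the slot with an atomic CAS, whose return value reliably
// tells the caller whether this thread was the one that inserted the key.
__device__ bool insert_cas(int* slot, int key, int empty_sentinel)
{
  return atomicCAS(slot, empty_sentinel, key) == empty_sentinel;
}

// Possible optimization under perfect hashing: a plain, non-atomic store (an STG
// instruction). Faster, but with the GPU's relaxed memory model other threads may
// not observe the write, which is what breaks the return values of insert_and_find
// and bulk insert mentioned above.
__device__ void insert_store(int* slot, int key)
{
  *slot = key;
}
```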
**Additional context**
See discussion #475
### Tasks
A few more implementation details come to mind:
- When doing an `insert` with perfect hashing, we don't need to check the content of the slot first. Instead, we can directly issue the store instruction. This is handled outside the probing iterator. I think we have to add a `constexpr` switch to the `insert` device function to trigger this specialized code path (see the sketch after this list).
- Key equality comparison is also handled outside of the probing iterator. This probably also needs a dedicated code path.
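A rough idea of the `constexpr` switch inside the `insert` device function; the trait `is_perfect_hashing_v` and the overall shape are hypothetical, purely to illustrate the specialization:

```cpp
#include <cuda/atomic>

// Hypothetical trait; cuco would need to provide something equivalent and
// specialize it to true for the perfect_hashing probing scheme.
template <class ProbingScheme>
inline constexpr bool is_perfect_hashing_v = false;

template <class ProbingScheme, class Key>
__device__ bool insert_impl(Key* slot_key, Key key, Key empty_sentinel)
{
  if constexpr (is_perfect_hashing_v<ProbingScheme>) {
    // Perfect hashing: skip the slot inspection and directly claim the slot.
    // (Kept atomic here; the non-atomic store is the alternative discussed earlier.)
    cuda::atomic_ref<Key, cuda::thread_scope_device> ref{*slot_key};
    return ref.exchange(key, cuda::std::memory_order_relaxed) == empty_sentinel;
  } else {
    // Existing path: probing loop, slot inspection, user-provided KeyEqual, CAS, ...
    // (omitted in this sketch)
    return false;
  }
}
```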
As mentioned in Slack, I would like to work on this issue.