
The sorter should avoid sorting and use a smarter inserting algorithm

Open · Kerollmops opened this issue · 1 comment

We use grenad in Meilisearch, and when we benchmark and flamegraph the indexing process, which uses this library intensively, we can see that 12-15% of the time is spent in the sorter's sort_unstable_by_key. This sort is executed to ensure that the entries are ordered by key when we need to dump all the in-memory entries into a writer.

According to the documentation, sorting the entries by using sort_unstable_by_key is O(m * n * log(n)), where O(m) is the cost of the key function. As our key function, the function that extracts the key from an entry, can be considered O(1), our current sorting solution is O(n * log(n)).

Inserting an entry into our current data structure is O(1), but we pay the full O(n * log(n)) sort every time we need to dump into the writer.

https://github.com/meilisearch/grenad/blob/dd79a338b4be073913cb538b173769042188558d/src/sorter.rs#L246-L253
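For context, here is a simplified sketch of what that dump path does. The `Entry` type and the inlined concatenation merge are stand-ins for illustration, not the real grenad types or the user-provided merge function:

```rust
// Simplified sketch of the current dump path (hypothetical types).
struct Entry {
    key: Vec<u8>,
    val: Vec<u8>,
}

fn dump(mut entries: Vec<Entry>, mut write: impl FnMut(&[u8], &[u8])) {
    // This is the O(n * log(n)) step that the flamegraph points at.
    entries.sort_unstable_by(|a, b| a.key.cmp(&b.key));

    // Walk the sorted entries, merging runs that share the same key.
    let mut iter = entries.into_iter().peekable();
    while let Some(first) = iter.next() {
        let mut merged = first.val;
        while iter.peek().map_or(false, |e| e.key == first.key) {
            // Stand-in for the user merge function: concatenate the values.
            merged.extend_from_slice(&iter.next().unwrap().val);
        }
        write(&first.key, &merged);
    }
}
```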

The sorter's constraints

Some of the rules that apply to the sorter are:

  1. We never read entries before we need to dump them into a writer.
  2. We only need to read the values once, in order, to merge them before dumping them into a writer.
  3. We need to support entries with the same key and merge them only once, at the end.

Those constraints made me think of a more straightforward implementation of a binary heap, where we do not need to support removing arbitrary values but only reading them all in order, i.e. the binary heap's into_iter_sorted.
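A minimal sketch of the idea using the standard BinaryHeap, with Reverse to get ascending key order since into_iter_sorted is nightly-only at the time of writing:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Insert entries as they arrive; the heap only maintains a partial order.
    let mut heap: BinaryHeap<Reverse<(Vec<u8>, Vec<u8>)>> = BinaryHeap::new();
    for (key, val) in [(b"b".to_vec(), b"2".to_vec()), (b"a".to_vec(), b"1".to_vec())] {
        heap.push(Reverse((key, val)));
    }

    // At dump time, pop the entries in ascending key order and write them out.
    while let Some(Reverse((key, val))) = heap.pop() {
        println!("{:?} => {:?}", key, val);
    }
}
```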

Using a binary heap

The only downside that I see about using a heap is supporting entries with duplicate keys. One solution that keeps the algorithm simple on both sides, the binary heap insertion and the current merging system, is to suffix every key with an incrementing big-endian number. This way, we keep both the order and the uniqueness of the entries. The downside of this solution is that it can increase memory usage; the impact depends on the workload and the size of the inserted keys.
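A sketch of that suffixing trick, assuming a u64 counter appended to each key (the 8-byte width is an assumption; the real choice would depend on the expected number of entries):

```rust
// Hypothetical sketch: disambiguate duplicate keys with a big-endian counter
// so that the heap ordering stays total while preserving insertion order per key.
fn suffixed_key(key: &[u8], counter: u64) -> Vec<u8> {
    let mut out = Vec::with_capacity(key.len() + 8);
    out.extend_from_slice(key);
    // Big-endian, so that byte-wise comparison matches numeric order.
    out.extend_from_slice(&counter.to_be_bytes());
    out
}

// When popping, strip the 8-byte suffix to recover the original key,
// then merge consecutive entries whose original keys are equal.
fn original_key(suffixed: &[u8]) -> &[u8] {
    &suffixed[..suffixed.len() - 8]
}
```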

According to the documentation of the standard binary heap, pushing a value is an expected O(1) operation, and popping values is O(log(n)).

Adding benchmarks

We need to introduce more benchmarks to measure our progress on this subject and avoid regressions. To do so, we need to find suitable real-world datasets or find a way to generate them properly. Measuring is complex.
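For example, a minimal Criterion sketch could look like the following; the dataset generator and the benchmarked sort are hypothetical placeholders for whatever dump path we settle on:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical dataset generator: keys with a controlled amount of duplicates.
fn generate_entries(n: usize) -> Vec<(Vec<u8>, Vec<u8>)> {
    (0..n)
        .map(|i| ((i % (n / 4)).to_be_bytes().to_vec(), i.to_be_bytes().to_vec()))
        .collect()
}

fn bench_dump(c: &mut Criterion) {
    let entries = generate_entries(100_000);
    c.bench_function("sort_then_dump 100k", |b| {
        b.iter(|| {
            let mut entries = entries.clone();
            // Placeholder for the current sorter dump path.
            entries.sort_unstable_by(|a, b| a.0.cmp(&b.0));
        })
    });
}

criterion_group!(benches, bench_dump);
criterion_main!(benches);
```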

Kerollmops · Mar 03 '22 14:03