
Support range as an option for async reindex

Open kapso opened this issue 2 years ago • 10 comments

I am curious why this change was made: "Changed async reindex to fetch ids instead of using ranges for numeric primary keys with Active Record"

...and if it's possible (or perhaps makes sense) to also support ranges as an option for async reindex.

Why? Enqueuing ids is really slow, especially when writing hundreds of ids per job to Sidekiq/Redis. We use a batch_size of 1000, and creating the jobs takes a very long time when reindexing 40m documents.

UPDATE: it took ~16 hours to enqueue ids for 40m documents with a batch size of 1000.
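
For scale, a back-of-envelope on those numbers (illustrative Ruby, not Searchkick code):

docs       = 40_000_000
batch_size = 1_000
jobs       = docs / batch_size   # => 40_000 enqueue round trips
# ~16 hours / 40_000 jobs is well over a second per job, and each job
# carries a ~1,000-element id array that must first be read from the
# database; a range-based job would carry just two integers.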

kapso avatar Mar 07 '22 01:03 kapso

Or maybe use Sidekiq::Worker.perform_bulk, which I think could speed up Sidekiq/Redis writes, in cases where Sidekiq is being used.
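
A minimal sketch of that idea, assuming Sidekiq >= 6.3 (where perform_bulk was added); ReindexBatchJob is a hypothetical job and Product a placeholder model, not part of Searchkick:

require "sidekiq"

class ReindexBatchJob
  include Sidekiq::Job

  def perform(class_name, ids)
    # look up the records for these ids and index them; body elided
  end
end

# perform_bulk pushes jobs to Redis 1,000 at a time instead of making
# one network round trip per job:
args = Product.ids.each_slice(1_000).map { |ids| ["Product", ids] }
ReindexBatchJob.perform_bulk(args)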

kapso avatar Mar 07 '22 02:03 kapso

@ankane any thoughts? thanks.

kapso avatar Apr 09 '22 20:04 kapso

Hey @kapso, I was thinking it'd be good to handle gaps in primary keys at the expense of longer enqueuing times, but may be good to support the previous pattern as well. How long did it previously take to enqueue, and how does the total reindexing time compare before and after?

ankane avatar May 18 '22 21:05 ankane

@ankane We have 140m rows in the table, of which 40m (filtered via the search_import scope) get indexed. With a batch_size of 500, it takes 20+ hours just to enqueue all the ids.

The previous pattern used to take ~20 minutes.

We use a Heroku worker to enqueue these ids, and sometimes Heroku workers get recycled, which restarts the enqueue process.

kapso avatar May 31 '22 16:05 kapso

Since so many datasets have few, if any, gaps in their primary keys, I think the option should exist to enqueue faster rather than more accurately. Could this be a simple config option in an initializer? Something like config.queue_method = :fast # or :precise
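
For example, in an initializer (queue_method is the proposed, hypothetical setting, not an existing Searchkick option; shown here in Searchkick's module-attribute config style):

# config/initializers/searchkick.rb
Searchkick.queue_method = :fast      # range-based: fast, assumes dense primary keys
# Searchkick.queue_method = :precise # id-based: slower, tolerates gaps (current behavior)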

ramaboo avatar May 31 '22 19:05 ramaboo

@ankane another interesting thing we have seen a few times is that the enqueuing job (Heroku worker) crashes after a couple of hours with no error reported in our error tracker (Sentry). And since it crashed and restarted, it restarted the enqueuing process from scratch - creating a fresh index and writing to this new index.

The enqueuing (Heroku) worker had 2.5GB of memory, so memory definitely wasn't an issue.

This is the enqueuing job/process - Model.reindex(mode: :async)

kapso avatar May 31 '22 23:05 kapso

@ankane curious if there are any plans to also support the range option?

kapso avatar Jun 18 '22 00:06 kapso

@ankane I'm thinking of submitting a PR for this. Is this the place to start? - https://github.com/ankane/searchkick/commit/88f52da847c1a8c7d034ffe3caec66098cf431db

Or is there any other code we should also be looking at?

kapso avatar Jun 28 '22 01:06 kapso

if relation.respond_to?(:find_in_batches)
  relation.find_in_batches(batch_size: batch_size) do |items|
    batch_job(class_name, batch_id, items.map(&:id))
  end
end

I believe, and correct me if I'm wrong, that find_in_batches here will convert each batch to an array of AR objects, whereas in_batches.each would leave it as an ActiveRecord::Relation. If we used the latter, we could call items.pluck(:id) instead of items.map(&:id) and save the overhead of loading all the objects into memory.

irb(main):010:0> Product.find_in_batches do |batch|
irb(main):011:1*   puts batch.is_a?(ActiveRecord::Relation)
irb(main):012:1> end
false
irb(main):007:0> Product.in_batches.each do |batch|
irb(main):008:1*   puts batch.is_a?(ActiveRecord::Relation)
irb(main):009:1> end
true
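
The quoted loop could then become something like this (untested sketch, reusing the batch_job helper, class_name, batch_id, and batch_size from the snippet above):

relation.in_batches(of: batch_size) do |batch|
  # batch is an ActiveRecord::Relation, so this issues a single SELECT
  # for the ids instead of instantiating every record:
  batch_job(class_name, batch_id, batch.pluck(:id))
end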

BobbyMcWho avatar Jun 29 '22 14:06 BobbyMcWho

I ran into the same problem: 16 million records took nearly 2 hours just to get enqueued. My first reaction was to increase the batch size, but working with ranges sounds like a more reasonable approach.

I can work something out if you're willing to support ranges again, @ankane. I see that BulkReindexJob still accepts range options here, so it should be a straightforward PR.
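
A rough sketch of how a range-based enqueue could feed those options, assuming a dense numeric primary key; the min_id/max_id keywords mirror the BulkReindexJob options mentioned above, while relation, class_name, index_name, batch_id, and batch_size are borrowed from the surrounding reindex code, and the wiring is illustrative, not an actual PR:

min_id = relation.minimum(:id)
max_id = relation.maximum(:id)
min_id.step(max_id, batch_size) do |start_id|
  # Each job carries two integers instead of an array of ids:
  Searchkick::BulkReindexJob.perform_later(
    class_name: class_name,
    index_name: index_name,
    batch_id: batch_id,
    min_id: start_id,
    max_id: [start_id + batch_size - 1, max_id].min
  )
end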

igorbelo avatar Oct 04 '23 10:10 igorbelo