elasticsearch-rails

Add max size limit to requests for bulk import

Open · jpr5 opened this issue 4 years ago · 3 comments

This commit adds a new parameter, max_size (in bytes), which enforces an upper limit on the overall size of each bulk HTTP POST. This is useful when trying to maximize bulk import speed by reducing the number of round trips needed to retrieve and send data.
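For context, here's a rough sketch of how this would be used from an elasticsearch-rails model. The model name (Article) is just an example, and max_size is the new option proposed in this PR rather than an existing released option:

```ruby
# Hypothetical usage of the proposed max_size option (in bytes).
# batch_size still bounds how many records are loaded per round trip;
# max_size additionally caps the serialized size of each bulk request.
Article.import(
  batch_size: 1_000,       # records fetched per batch
  max_size:   10_000_000,  # stay under a 10 MiB request payload limit
  force:      true         # recreate the index before importing
)
```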

This is needed in scenarios where there is no control over Elasticsearch's maximum HTTP request payload size. For example, AWS's Elasticsearch offering has either a 10 MiB or a 100 MiB HTTP request payload limit.

batch_size is good for bounding local runtime memory usage, but when indexing large sets of big objects it's entirely possible to hit a service provider's underlying request size limit and fail the import mid-run. This is even worse when force is true: the index is then left in an incomplete state, with no obvious value to lower batch_size to in order to sneak under the limit.

The max_size defaults to 10_000_000 bytes, to catch the worst-case (10 MiB) scenario on AWS.
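As a rough illustration of the idea (not the actual patch), enforcing the limit amounts to splitting each batch's serialized bulk lines into chunks whose combined byte size stays under max_size, and sending each chunk as its own bulk request. The helper name and variables below are illustrative only:

```ruby
# Sketch only: group serialized bulk lines into chunks whose combined
# byte size stays under max_size, so no single POST exceeds the limit.
def each_chunk_under(lines, max_size)
  chunk = []
  bytes = 0
  lines.each do |line|
    if bytes + line.bytesize > max_size && chunk.any?
      yield chunk
      chunk = []
      bytes = 0
    end
    chunk << line
    bytes += line.bytesize
  end
  yield chunk if chunk.any?
end

# Each chunk then becomes its own bulk request, e.g.:
#   each_chunk_under(bulk_lines, 10_000_000) { |c| client.bulk(body: c.join("\n") + "\n") }
```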

jpr5 · Jul 19 '21 21:07

💚 CLA has been signed

Signed the agreement.

jpr5 · Jul 19 '21 21:07

Well, I'm willing to look at and fix the failures, but I can't see the test failure details anymore...

jpr5 · Apr 21 '23 01:04