BulkInsert
**Context**
Option to insert bulk data in batches
**Proposed solution**
In Knex, the `insert` function (ref) takes either a hash of properties to be inserted into the row, or an array of inserts. Could that be implemented in trilogy as well?
**Alternatives**
Trilogy can't insert bulk data at once, but as a workaround, knex can be used, e.g.:
```ts
await db.knex<UserDocType>("users").insert(userArry);
```
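For large arrays, knex's built-in `batchInsert` helper may also work here; a minimal sketch, assuming `db.knex` exposes the full knex instance (the chunk size of 100 is arbitrary):

```ts
// Assumes `db` is the trilogy instance and `userArry` the same array as above.
// batchInsert splits the rows into chunks of 100, issuing one INSERT per
// chunk, which keeps each statement under sqlite's bound-variable limit.
await db.knex.batchInsert("users", userArry, 100);
```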
Sounds good. This would be nice to add as `createMany`, with a few considerations:
- Batching bulk inserts to configurable chunks (default = 100?)
- sqlite doesn't support `returning` or `output` clauses. I worked around that in the design of `create`, but I'm not sure that would translate to `createMany`, so the return value would be the number of rows, a boolean, or nothing.
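A rough sketch of what `createMany` could look like, with the caveat that the name, chunk size, and return value are all up for discussion (nothing here is existing trilogy API):

```ts
// Hypothetical createMany: insert rows in fixed-size chunks and, since
// sqlite has no `returning` clause, report only the number of rows inserted.
const CHUNK_SIZE = 100;

async function createMany<T>(table: string, rows: T[]): Promise<number> {
  for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
    // one multi-row INSERT statement per chunk
    await db.knex(table).insert(rows.slice(i, i + CHUNK_SIZE));
  }
  return rows.length;
}
```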
> Batching bulk inserts to configurable chunks (default = 100?)
I think there are some limitations on sqlite bulk inserts (e.g. `SQLITE_ERROR: too many SQL variables`) when you insert many rows at a time. Is there any basis for choosing 100 as the default chunk value?
> sqlite doesn't support `returning` or `output` clauses. I worked around that in the design of `create`, but I'm not sure that would translate to `createMany`.
Yeah, but I don't think that's required for bulk inserts.
> Is there any basis for choosing 100 as the default chunk value?
None at all :) Probably 500 or even 1,000 is doable; I just don't know what the typical limit is and will have to look into it.
> Yeah, but I don't think that's required for bulk inserts.
Agreed, I just wanted to point out that the return value would differ between `create` and `createMany`, but that'll be documented anyway.
I just tried with a table with 4 columns, and it fails at a chunk of 240 with `await db.knex<UserDocType>("users").insert(userArry)` (`SQLITE_ERROR: too many SQL variables`).
When the chunks are smaller, the total time taken for the query is higher than with larger chunks. Is it possible to optimize the chunk size dynamically? 🤔
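One way to pick the chunk size dynamically would be to derive it from the column count and sqlite's bound-variable limit. A sketch, assuming the default `SQLITE_MAX_VARIABLE_NUMBER` of 999 (the real limit depends on how sqlite was compiled):

```ts
// Each inserted row binds one variable per column, so the largest safe
// chunk is roughly (variable limit / column count).
const SQLITE_MAX_VARIABLES = 999; // default in most sqlite builds

function maxChunkSize(columnCount: number): number {
  return Math.floor(SQLITE_MAX_VARIABLES / columnCount);
}

maxChunkSize(4); // => 249 rows per statement for a 4-column table
```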