Provide a way to control the backfill rate
A way to control the rate at which backfilling occurs would help avoid overloading the Postgres instance, given the large amount of I/O that backfilling a large table can incur.
This, or some other way to solve the problem of expensive migration starts, is likely required to use pgroll on very high-traffic databases.
It is not only about overwhelming the main PostgreSQL server you are running the migration on, and not only about 'very high-traffic databases'. If you have a standby server on the other side of a questionable network, any mass change on a large enough table will outpace the rate at which WAL can be shipped. That risks the standby falling too far behind, the primary not keeping a WAL file long enough, and the standby becoming useless until it is set up again. People might need to restrict the backfill to some fairly slow speed that the network can keep up with (or even slower, so as not to overwhelm or delay other uses of that network).
Having some sort of 'delay' parameter between batches, plus a way to tweak the batch size (which may already exist; I haven't read all of the docs or code yet), would let people tune the backfill for their environments. In a perfect world these settings could also be modified via environment configuration, not only within the migration code, so that people who run the same database in many different environments can use a single migration file for all of them, with reasonable delay/size modifiers for each. And if you re-checked the setting between every batch, you could slow down an in-progress migration to let systems/networks recover without needing to kill it mid-migration upon discovering an environment isn't keeping up (a rough sketch of this is below).
(Having a batch-size parameter that could be set to 1 would also prevent possible deadlock scenarios when dealing with tables that the application likes to update, assuming that a batch size of 1 means one record per transaction.)
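To make the suggestion concrete, here is a minimal sketch, not pgroll's actual implementation, of a backfill loop driven by a batch size and an inter-batch delay. The environment variable names are made up for illustration. Note that a process's environment is fixed at startup, so truly live tuning of a running migration would need the settings to come from a source that can change while the process runs (a config file, a settings table, etc.); the loop below just shows where such a check would slot in.

```go
package main

import (
	"log"
	"os"
	"strconv"
	"time"
)

// backfillSettings holds the two knobs discussed above. The environment
// variable names used here are illustrative only, not real pgroll settings.
type backfillSettings struct {
	batchSize int
	delay     time.Duration
}

func settingsFromEnv() backfillSettings {
	s := backfillSettings{batchSize: 1000} // defaults: 1000 rows, no delay
	if v, err := strconv.Atoi(os.Getenv("PGROLL_BACKFILL_BATCH_SIZE")); err == nil && v > 0 {
		s.batchSize = v
	}
	if d, err := time.ParseDuration(os.Getenv("PGROLL_BACKFILL_DELAY")); err == nil && d >= 0 {
		s.delay = d
	}
	return s
}

// backfill calls processBatch until it reports zero rows, re-reading the
// settings before each batch and sleeping between batches.
func backfill(getSettings func() backfillSettings, processBatch func(limit int) (int, error)) error {
	for {
		s := getSettings()
		n, err := processBatch(s.batchSize)
		if err != nil {
			return err
		}
		if n == 0 {
			return nil // nothing left to backfill
		}
		time.Sleep(s.delay)
	}
}

func main() {
	remaining := 2500 // stand-in for rows still needing the backfill
	err := backfill(settingsFromEnv, func(limit int) (int, error) {
		// A real implementation would run one UPDATE batch here.
		n := limit
		if remaining < n {
			n = remaining
		}
		remaining -= n
		return n, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Setting the batch size to 1 in a sketch like this also gives the one-record-per-transaction behaviour mentioned above, at the cost of a much slower backfill.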
I think making the batch size configurable via an environment variable is a good starting point.
Additionally, I think we can either introduce a "delay" variable that adds a pause between batches, or perhaps a desired "batches per second" variable. I don't have a strong preference between the two, except that adding a delay would be a simpler change and would not require picking a rate-limiting library.
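For concreteness, a rough sketch of the two options in Go (the function names and parameters are illustrative, not existing pgroll code); the rate-limited variant uses golang.org/x/time/rate as one possible library, not a decided choice:

```go
package pacing

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// Option A: a fixed delay after each batch. Simple, no extra dependency.
func runWithDelay(batches []func() error, delay time.Duration) error {
	for _, runBatch := range batches {
		if err := runBatch(); err != nil {
			return err
		}
		time.Sleep(delay)
	}
	return nil
}

// Option B: a target "batches per second" enforced with a token bucket.
// Wait blocks until the limiter allows the next batch (or ctx is cancelled).
func runWithRate(ctx context.Context, batches []func() error, batchesPerSecond float64) error {
	limiter := rate.NewLimiter(rate.Limit(batchesPerSecond), 1)
	for _, runBatch := range batches {
		if err := limiter.Wait(ctx); err != nil {
			return err
		}
		if err := runBatch(); err != nil {
			return err
		}
	}
	return nil
}
```

A fixed delay is easier to reason about and document, while a batches-per-second target stays closer to a desired throughput when batch durations vary.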
@andrew-farries WDYT?
A delay is a good first step IMHO! In the long run I wonder if we would want something dynamic, i.e. allowing users to inject a function (hook?) to dynamically decide on the pace. This hook could base its decisions on current system performance (IOPS, CPU, etc.) or other metrics (e.g. the current replication slot size).
Maybe the best option would be to have a hook for configuring this, with the default implementation being a static delay configured via an environment variable?
These are the hooks we have today, as an example: https://github.com/xataio/pgroll/blob/aeb11fd65d3a59905e6a5c8205045e5b834bde8b/pkg/roll/options.go#L33. It would probably make sense to define these in a different struct.
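For illustration only, and with the caveat that none of these names exist in pgroll today, a pacing hook defined alongside the existing options might look roughly like this:

```go
package roll

import "time"

// BackfillPacer is a hypothetical hook, called after each backfill batch.
// It returns how long to wait before starting the next batch, and could
// look at IOPS, CPU, replication slot size, etc. to decide.
type BackfillPacer func(table string, batchesDone int64) time.Duration

// StaticPacer is the simple default: a fixed delay, e.g. parsed from an
// environment variable at startup.
func StaticPacer(delay time.Duration) BackfillPacer {
	return func(string, int64) time.Duration { return delay }
}

// options / Option mirror the functional-options style already used in
// pkg/roll/options.go; the field below is illustrative only.
type options struct {
	backfillPacer BackfillPacer
}

type Option func(*options)

// WithBackfillPacer installs a custom pacing hook.
func WithBackfillPacer(p BackfillPacer) Option {
	return func(o *options) { o.backfillPacer = p }
}
```

The static-delay default discussed above would then just be StaticPacer(delay) with the delay read from an environment variable.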
thoughts @andrew-farries?
I like the approach of using a hook to configure the backfill rate based on user-defined parameters as @exekias suggests.
As a first step though, allowing for a simple delay between batches is OK too.
To keep things simple for now I'm going to allow a simple delay after each batch. I think a combination of this and batch size should be enough for now and we can wait for customer feedback to decide if we need to make it more complicated.
Doing it via a hook is nice when running from code, but it will start to get complicated if we want to enable it via the CLI.