
WIP Idea: Add checkpointing to continue requests after failures


When something fails, e.g. an internet outage, it's super annoying to have to restart the entire set of requests. It would be useful if there was instead some form of checkpointing.

Thinking of something like:

runHttp {
  call("...") {
    data = Checkpoint(FileDataSupplier(...))
  }
}

The checkpoint will work by using a file stored under a {cwd}/.fetchDSL folder. Each invocation can be named; otherwise a 'global' checkpoint will be used, in which case it's unsafe to run multiple checkpointed runs at the same time.
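
As a minimal sketch of how that file location could be resolved: the Checkpoint idea is from above, but checkpointFile and the file layout are purely hypothetical at this point, nothing here exists in the library yet.

import java.io.File

// Hypothetical helper: resolves the checkpoint file under {cwd}/.fetchDSL.
// A named invocation gets its own file; an unnamed one falls back to the
// shared 'global' file, which is why concurrent unnamed runs are unsafe.
fun checkpointFile(name: String? = null): File {
    val dir = File(System.getProperty("user.dir"), ".fetchDSL")
    dir.mkdirs()
    return File(dir, "${name ?: "global"}.checkpoint")
}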

The easiest way to do checkpointing will be to add metadata to an envelope that encapsulates the request. This context object can then be read back later to identify when a request has made it through the pipeline and can be marked as successful.
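
A rough sketch of that envelope idea (all names here are hypothetical, not existing fetchDSL API), where the checkpoint simply records the input index once the wrapped request has completed the pipeline:

import java.io.File

// Hypothetical envelope: pairs the request data with the metadata needed
// to identify it once it has made it through the pipeline.
data class CheckpointEnvelope<T>(
    val index: Int,   // position of this item in the (deterministic) input
    val payload: T    // the original request data
)

class CheckpointLog(private val file: File) {
    // Append the index of a request that completed successfully.
    fun markSuccessful(envelope: CheckpointEnvelope<*>) {
        file.appendText("${envelope.index}\n")
    }

    // Indices already completed in a previous run.
    fun completedIndices(): Set<Int> =
        if (file.exists()) file.readLines().mapNotNull { it.toIntOrNull() }.toSet()
        else emptySet()
}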

Another requirement is deterministic ordering of the input if checkpointing needs index-based skips. Given that the concurrent nature of the produceHttp section is largely there to handle slow before sections, this should be fine, though it would potentially lose the ability to use a slow (but thread-safe) data provider. A sketch of the skip follows below.
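
To illustrate the index-based skip under that deterministic-ordering assumption, a checkpointing supplier could wrap a plain list of inputs and drop anything the log already recorded (again, hypothetical names building on the sketch above):

// Hypothetical: wraps a deterministic list of inputs and skips any index
// that the checkpoint log recorded as successful in a previous run.
class CheckpointedSupplier<T>(
    private val inputs: List<T>,
    private val log: CheckpointLog
) {
    fun remaining(): Sequence<CheckpointEnvelope<T>> {
        val done = log.completedIndices()
        return inputs.asSequence()
            .mapIndexed { i, item -> CheckpointEnvelope(i, item) }
            .filter { it.index !in done }
    }
}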

devslash-paul · Jul 03 '19 17:07