terraform-provider-iterative
Allow tasks to destroy themselves
With the current implementation, instances can't destroy all of their supporting resources because of interdependencies. For example, after deleting its security group, an instance can no longer issue API calls because it has no network connection.
Possible solutions include:

- Using cloud-native templates like AWS CloudFormation, Google Cloud Deployment Manager and Azure Resource Manager templates to let the cloud providers destroy everything (sketched below).
- Leaving cheap or free resources in the cloud, and running a garbage collector on every invocation to delete resources left over from past tasks.
- Requiring users to explicitly delete resources after each task. This approach is convenient with the launch/harvest lifecycle, but not for the CML runner.
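To illustrate the first option: if all supporting resources live in one CloudFormation stack, the instance's final action can be a single `delete-stack` call, and the CloudFormation service finishes tearing everything down (the instance and its security group included) even after the caller is gone. A minimal sketch, where the AMI, instance type and permissions are placeholder assumptions (the instance would also need an instance profile allowing `cloudformation:DeleteStack`, omitted here):

```yaml
# task-stack.yml -- illustrative sketch, not a tested template.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  TaskSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: SSH access for the task instance
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  TaskInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-00000000000000000  # placeholder AMI
      InstanceType: t2.micro
      SecurityGroupIds:
        - !GetAtt TaskSecurityGroup.GroupId
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # ... run the task here ...
          # Final action: ask CloudFormation to delete the whole stack.
          # The service completes the deletion even after this instance
          # is terminated, sidestepping the interdependency problem.
          aws cloudformation delete-stack --stack-name ${AWS::StackName}
```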
Do you have a good list of the particular resources that cause this issue?
My two cents: allow those resources to be predefined by the user (VPC, security group rules, etc.) so TPI doesn't need to clean them up and users keep that extra control.
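For instance, a workflow could pass pre-existing resources to the runner; the `--cloud-aws-*` flags below are hypothetical, purely to illustrate the idea:

```yaml
# Hypothetical flags: point TPI at pre-existing network resources so
# it never has to create or delete them itself.
steps:
  - uses: iterative/setup-cml@v1
  - run: >
      cml runner create
      --cloud=aws
      --cloud-aws-security-group=my-existing-sg
      --cloud-aws-subnet=my-existing-subnet
      ${{ github.run_id }}
```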
Alternatively, why not add an explicit cleanup step at the end of workflows?
```yaml
on: workflow_dispatch
jobs:
  create:
    runs-on: ubuntu-latest
    steps:
      - uses: iterative/setup-cml@v1
      - run: cml runner create ${{ github.run_id }}
  reproduce:
    needs: create
    runs-on: self-hosted
    steps:
      - uses: iterative/setup-dvc@v1
      - run: dvc repro
  delete:
    if: always()
    needs: reproduce
    runs-on: ubuntu-latest
    steps:
      - uses: iterative/setup-cml@v1
      - run: cml runner delete ${{ github.run_id }}
```
On GitHub Actions, this can even be automated by using the `post` functionality.
> On GitHub Actions, this can even be automated by using the `post` functionality.

Not quite, since `post` is defined at the setup (action) level and runs on the same host that was defined for the job. So it would either delete the instance right after it was created, or would run on the instance and hit the same problem.
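(For context: `post` is declared in an action's metadata, and GitHub Actions always executes it on the same runner that ran the main step. A minimal sketch, with illustrative file paths:)

```yaml
# action.yml -- sketch of a JavaScript action using the `post` hook.
# Both entry points execute on the job's runner, which is exactly the
# problem described above. File paths are illustrative.
name: cml-runner
description: Provision a self-hosted runner for the next job
runs:
  using: node16
  main: dist/index.js   # provisions the cloud instance
  post: dist/cleanup.js # runs after the job, on the SAME host
```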
The `create`, `run`, `delete`/clean-up as separate jobs could work, but that starts to feel pretty opinionated (a forced convention) on your CI/CD scripts. I'm not super opposed, but I feel more insight/opinions on it should be gathered.
> Not quite, since `post` is defined at the setup (action) level and runs on the same host that was defined for the job. So it would either delete the instance right after it was created, or would run on the instance and hit the same problem.

Oh, my! 🤦🏼‍♂️ Yes, you're absolutely right.
> The `create`, `run`, `delete`/clean-up as separate jobs could work, but that starts to feel pretty opinionated (a forced convention) on your CI/CD scripts.
As opinionated as requiring separate deploy and train jobs as we already do, but slightly more bulky? 😅
> I'm not super opposed, but I feel more insight/opinions on it should be gathered.
Definitely, and we should also explore webhook-based scaling solutions like the ones proposed at https://docs.github.com/es/actions/hosting-your-own-runners/autoscaling-with-self-hosted-runners
- is `if: always()` or equivalent supported in all CIs?
- the extra bulkiness sounds fine, provided it's optional-to-fix-edge-cases rather than default-required
- in any case, a `cml runner gc [--all]` (or equivalent) standalone command sounds nice to have as a prerequisite
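Such a command would pair nicely with a scheduled sweep; a minimal sketch, assuming the proposed (not yet existing) `cml runner gc` subcommand and cloud credentials in repository secrets:

```yaml
# Sketch only: `cml runner gc` is the command proposed above.
on:
  schedule:
    - cron: '0 * * * *'  # hourly sweep of resources leaked by past tasks
jobs:
  gc:
    runs-on: ubuntu-latest
    steps:
      - uses: iterative/setup-cml@v1
      - run: cml runner gc --all
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```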