State download conflict with multiple terraspace executions running simultaneously
Checklist
- [X] Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
- [X] Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on https://community.boltops.com
- [ ] Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.
Difficult to reproduce, but it comes down to the fact that the environment is not getting specified.
My Environment
Software | Version |
---|---|
Operating System | Ubuntu 20.04 |
Terraform | 1.0.0 |
Terraspace | 0.6.11 |
Ruby | 2.7.0 |
Expected Behaviour
Multiple invocations on the same stack with different environments should be able to execute simultaneously on the same machine.
I've got a complex stack of 8 stages that is deployed to multiple environments (dev, test, staging, prod). Running an apply for multiple environments on the same machine simultaneously causes the executions to get corrupted.
Current Behavior
Depending on your timing, the code may work, or you'll get ERB errors trying to resolve values from the state.
The issue is caused by how Terraspace stores its material in /tmp for each invocation:
Errno::ENOENT: No such file or directory @ rb_sysopen - /tmp/terraspace/remote_state/stacks/stackname/state.json
The generated plans are unique enough since they have their filename appended with a random hex string as per:
lib/terraspace/cli/up.rb: "#{Terraspace.tmp_root}/plans/#{@mod.name}-#{@@random}.plan"
Step-by-step reproduction instructions
Difficult to reproduce since it relies somewhat on race conditions, but take a complex stack and run
terraspace all init --exit-on-fail -y && terraspace all plan -y --exit-on-fail
for multiple environments in different shells on the same machine.
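For example, with background jobs standing in for the separate shells (illustrative commands only; `TS_ENV` is how Terraspace selects the environment):

```shell
# Two environments racing on the shared /tmp/terraspace directory:
TS_ENV=dev     terraspace all init --exit-on-fail -y && TS_ENV=dev     terraspace all plan -y --exit-on-fail &
TS_ENV=staging terraspace all init --exit-on-fail -y && TS_ENV=staging terraspace all plan -y --exit-on-fail &
wait
```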
Code Sample
N/A
Solution Suggestion
The downloaded state files, plans, and other material stored in /tmp/terraspace should be separated by environment as well. If there are files common to all Terraspace invocations, they can still be placed in the root (you want to be able to run subsequent plan/refresh/validate/up after running an init, so those files need a predictable path). Splitting by environment alone should be sufficient to allow multiple simultaneous invocations.
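A minimal sketch of what environment-scoped tmp paths could look like (the `TmpPaths` module and its method names are hypothetical illustrations, not Terraspace's actual API):

```ruby
# Hypothetical helper: scope all tmp paths by environment so concurrent
# runs for different environments never share state files or plans.
module TmpPaths
  ROOT = ENV["TS_TMP_ROOT"] || "/tmp/terraspace"

  # e.g. <root>/staging/remote_state/stacks/stackname/state.json
  def self.remote_state(stack, env:)
    File.join(ROOT, env, "remote_state", "stacks", stack, "state.json")
  end

  # Plans already carry a random token; adding the env segment keeps
  # them grouped per environment too.
  def self.plan(mod, env:, token:)
    File.join(ROOT, env, "plans", "#{mod}-#{token}.plan")
  end
end

puts TmpPaths.remote_state("stackname", env: "staging")
puts TmpPaths.plan("stackname", env: "dev", token: "abc123")
```

With paths built this way, a dev and a staging run on the same machine would read and write disjoint directory trees, so the `Errno::ENOENT` race above could not occur.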
For now, forcing TS_TMP_ROOT to a different value prior to each invocation can work around this issue.
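Sketch of that workaround (the per-environment directory names are arbitrary; only `TS_ENV` and `TS_TMP_ROOT` are real Terraspace settings):

```shell
# Give each environment its own tmp root before invoking terraspace,
# so concurrent runs never touch the same files under /tmp.
TS_ENV=dev     TS_TMP_ROOT=/tmp/terraspace-dev     terraspace all plan -y --exit-on-fail &
TS_ENV=staging TS_TMP_ROOT=/tmp/terraspace-staging terraspace all plan -y --exit-on-fail &
wait
```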
Hi, any idea when this will be solved?
Would like to address. Unsure when though. Will also review and consider PRs. No sweat either way of course 👍
We have 2 scenarios here; maybe this was already covered above, but in any case:
a) Two or more developers working in their /home/$users/devel/terraspace_project/stacks can't run apply/plan, because the permissions on /tmp/terraspace are applied only to the first one; the others get permission denied. Our workaround is chmod 777 /tmp/terraspace -R.
b) Using this workaround, we noticed the other issue: for example, if developer1 uses Terraspace_project-1/stackname_alpha and developer2 uses Terraspace_project-5/stackname_alpha (same name), there is still a conflict, because /tmp/terraspace does not identify which project a stack belongs to, so critical state could be overwritten.
For the second scenario, we don't know how to solve it with a workaround.
I didn't see the TS_TMP_ROOT environment variable in the reference documentation.
Our version is the latest: 2.2.14.
Sorry about the PR, I don't know the Ruby language.
Would it be possible to use a tmp directory inside .terraspace-cache (which is inside the project) when doing plan/up, instead of /tmp/terraspace?
I'm trying to avoid this issue by setting TS_TMP_ROOT to different paths for different projects. I am also setting TS_CACHE_ROOT to different paths for each project.
example project A:
export TS_TMP_ROOT="$HOME/.myproject_A/terraspace"
export TS_CACHE_ROOT="$HOME/.myproject_A/terraspace-cache"
example project B:
export TS_TMP_ROOT="$HOME/.myproject_B/terraspace"
export TS_CACHE_ROOT="$HOME/.myproject_B/terraspace-cache"
Doing so seems to be a step toward a feasible solution, except that some files are still being created under /tmp/terraspace/ due to the following line in rewrite.rb not properly using Terraspace.tmp_root:
https://github.com/boltops-tools/terraspace/blob/20f733b26f08f6dd959ece62765e1d864cea51c2/lib/terraspace/compiler/erb/rewrite.rb#L11-L12
Suggestion: It seems that line 11 above should use Terraspace.tmp_root, as it is used here:
https://github.com/boltops-tools/terraspace/blob/20f733b26f08f6dd959ece62765e1d864cea51c2/lib/terraspace/terraform/args/thor.rb#L148
Suggestion: Terraspace.tmp_root should probably also be used here:
https://github.com/boltops-tools/terraspace/blob/20f733b26f08f6dd959ece62765e1d864cea51c2/lib/terraspace/terraform/api/vars/json.rb#L19
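The shape of the suggested change, in sketch form (a stand-in `Terraspace.tmp_root` is defined here since the real gem isn't loaded, and the `rewrite/` path layout is hypothetical): build every tmp path from the configurable root rather than the literal `/tmp/terraspace`.

```ruby
# Stand-in for the real Terraspace.tmp_root, which honors TS_TMP_ROOT.
module Terraspace
  def self.tmp_root
    ENV["TS_TMP_ROOT"] || "/tmp/terraspace"
  end
end

# Before (hard-coded, ignores TS_TMP_ROOT):
#   path = "/tmp/terraspace/rewrite/#{filename}"
# After (suggested, respects TS_TMP_ROOT):
def rewrite_path(filename)
  "#{Terraspace.tmp_root}/rewrite/#{filename}"
end

ENV["TS_TMP_ROOT"] = "/home/me/.myproject_A/terraspace"
puts rewrite_path("stack.rb")  # /home/me/.myproject_A/terraspace/rewrite/stack.rb
```

With that change, setting TS_TMP_ROOT per project (as in the exports above) would redirect these files too, instead of leaving some behind under /tmp/terraspace/.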