terraform-provider-octopusdeploy
Terraform plan crashes for Octopus resources: Error: copy destination is invalid
Describe the bug
We have many Octopus accounts and deployment targets on our Octopus Server. After upgrading Octopus Server from 2020.6.4974 to 2021.1.7316, `terraform plan` crashes with the message: `Error: copy destination is invalid`.
Steps to reproduce: To reproduce this, I created a GitHub repo.
Expected behavior: `terraform plan` completes without errors.
Logs and other supporting information: See the GitHub Actions runs.
Environment and versions:
- OS: Octopus Server runs on Windows
- Octopus Server Version: 2021.1.7316
- Terraform Version: 0.13.2
- Octopus Terraform Provider Version: 0.7.37
Additional context: I would be very grateful for any information on this matter. :)
Hey @osipovdaniil! 👋 Thank you for submitting this issue. I am conducting an investigation.
The error, `Error: copy destination is invalid`, comes from a dependent library, copier. This library is used in go-octopusdeploy to convert accounts, endpoints, and worker pools. I am investigating why this error is being generated.
Update: the fact that you upgraded to a newer version of Octopus Server hints that the bug lies somewhere in go-octopusdeploy; the API client library must be updated since our API changed.
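For context on where that message originates, here is a minimal sketch of how jinzhu/copier is typically called, along with one condition under which it returns this exact error. The struct names below are invented for illustration and are not the actual go-octopusdeploy types, and the failing call assumes copier's behaviour that a non-addressable destination is rejected:

```go
package main

import (
	"fmt"

	"github.com/jinzhu/copier"
)

// Illustrative types only; the real account/resource structs live in
// go-octopusdeploy and carry many more fields.
type accountResource struct {
	ID   string
	Name string
}

type account struct {
	ID   string
	Name string
}

func main() {
	src := accountResource{ID: "Accounts-1", Name: "azure-sp"}

	// Typical usage: the destination is a pointer, so copier can address
	// and populate its fields.
	var dst account
	if err := copier.Copy(&dst, &src); err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Printf("copied: %+v\n", dst)

	// Passing a non-addressable destination (a plain value rather than a
	// pointer) is one way copier reports "copy destination is invalid".
	if err := copier.Copy(dst, &src); err != nil {
		fmt.Println("expected failure:", err)
	}
}
```

In other words, the message points at the destination value handed to copier rather than at your Terraform configuration, which is why the investigation is focused on go-octopusdeploy.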
Hi @jbristowe, how can I update the copier library in order to use it with the octopusdeploy provider?
Hi @jbristowe. Any news on this issue? I still have the problem.
Update: tracing HTTP requests has revealed behaviour that may explain why this issue is occurring. In short, response bodies from `/api/accounts/{id}` can differ between identical requests. This is invalidating the state that's persisted by Terraform. Subsequent comparisons of state are what's causing the copier library to blow up. I am working to discover the case(s) where this behaviour is occurring.
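If you want to check this behaviour against your own server, a rough sketch along these lines issues the same GET several times in parallel, mimicking Terraform's default concurrency, and flags any response bodies that differ. The server URL, account ID, and API key below are placeholders to substitute with your own values:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	// Placeholder values; substitute your own server URL, account ID, and API key.
	const (
		serverURL = "https://octopus.example.com"
		accountID = "Accounts-123"
		apiKey    = "API-XXXXXXXXXXXXXXXX"
	)

	// fetch performs a single GET of /api/accounts/{id} and returns the raw body.
	fetch := func() (string, error) {
		req, err := http.NewRequest(http.MethodGet, serverURL+"/api/accounts/"+accountID, nil)
		if err != nil {
			return "", err
		}
		req.Header.Set("X-Octopus-ApiKey", apiKey)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		return string(body), err
	}

	// Issue several identical requests concurrently and record each body.
	const n = 10
	bodies := make([]string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			body, err := fetch()
			if err != nil {
				body = "error: " + err.Error()
			}
			bodies[i] = body
		}(i)
	}
	wg.Wait()

	// With deterministic responses, every body matches the first one; any
	// mismatch is the behaviour described above.
	for i, b := range bodies {
		if b != bodies[0] {
			fmt.Printf("response %d differs from response 0\n", i)
		}
	}
}
```

This is only a diagnostic aid; it does not change anything on the server.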
Workaround: reducing the number of accounts created at a time (e.g. creating them in batches of 10) should avoid this behaviour.
Another workaround: use `--refresh=false` if you're sure you have no remote changes.
I set `-parallelism=1` on `terraform plan` and `terraform apply`. That helped for me.
Thanks for the help, guys!
I don't think we should close this, as it's still an issue. Setting `-parallelism=1` is a good workaround, but now my planning/applying takes forever.
I agree; keeping this issue open will allow us to track it. However, I do not have a solution to this problem at the moment. I will continue to investigate it.
Any update on this? I could try to submit a PR if you nudge me in the right direction.
@stalmok Unfortunately, no. The issue exists in the Octopus REST API; there's a concurrency issue that results in the behaviour described by OP. The workaround (above) resolves it.
Yeah, I've been using that workaround for a while now, but with 1000 resources it's painfully slow. Is there an issue you could point me to in the REST API repo?
I'm also hitting this issue. The `-parallelism=1` workaround works, but it's tedious when there are many resources to create. In my case, the only Octopus resource I'm trying to manage is multiple `octopusdeploy_azure_service_principal` resources.
I was not able to reproduce this issue using v0.10.3.
I will close this issue. If you can still reproduce it in the latest version, please let us know.