Serge Smertin
we may be able to add retries in the terraform provider for this, but we're still figuring out what exactly to retry.
@neinkeinkaffee I assume you could already build the provider for local deployment. Could you experiment with [`resource.RetryContext`](https://github.com/databricks/terraform-provider-databricks/blob/68bb39eb84d179ce4c43aad97c70627d90bbaa4f/mws/resource_mws_workspaces.go#L220-L243) in the [Create method here](https://github.com/databricks/terraform-provider-databricks/blob/master/mws/resource_mws_credentials.go#L26-L38)? Probably `strings.Contains` on `valid cross account IAM role`...
@neinkeinkaffee yes, looks like something that could stop the bleeding. What are the results of running that in your environment? How many retries do you usually get?
@stevenwhayes yes, sometimes you have to update the state before proceeding. Please try it out and confirm the steps; I'll then update the error troubleshooting guide.
@stack72 of course, that change will be there soon
@janaekj the `required_providers` block has nothing to do with the `provider` block.
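For context, a minimal sketch of the two blocks side by side; the source address, version constraint, and host below are illustrative values, not a recommendation:

```hcl
terraform {
  # required_providers only tells Terraform WHICH plugin to install:
  # its registry source address and an acceptable version range.
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = ">= 1.0.0"
    }
  }
}

# provider configures that installed plugin at runtime
# (endpoints, credentials, and so on).
provider "databricks" {
  host = "https://accounts.cloud.databricks.com"
}
```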
@barywhyte it's just a change of the provider coordinates; the binaries didn't change.
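If existing state still references the old coordinates, Terraform ships a command to rewrite them in place. A sketch, assuming the old and new registry namespaces below (adjust to the coordinates in your own state):

```shell
# Point resources recorded under the old source address at the new one.
terraform state replace-provider databrickslabs/databricks databricks/databricks
```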
@RKSelvi make sure you have `.terraform.lock.hcl` in source control. That resolves most issues of this kind.
@RKSelvi I think you need to add the provider hashes of both the _currently used_ and _the most recent_ versions to `.terraform.lock.hcl`.
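One way to populate those hashes is `terraform providers lock`, recording checksums for every platform your team and CI run on; the platform list below is only an example:

```shell
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```

Re-run it after changing the version constraint so the lock file carries hashes for the newly selected version as well.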
@barywhyte the share of provider adopters on Terraform v0.13, v0.12, and v0.11 is extremely low, so I strongly recommend updating to the latest possible version. Approximately 80% of adopters are...