terraform-provider-circleci
Error error creating context: context deadline exceeded
I am getting an error when trying to create a context. The error is:
╷
│ Error: error creating context: Post "https://circleci.com/api/v2/context": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
│
│ with circleci_context.application,
│ on circleci.tf line 5, in resource "circleci_context" "application":
│ 5: resource "circleci_context" "application" {
│
╵
When I try to apply again, the resource fails because it already exists. To work around this I import the resource, and then apply succeeds again.
I found a reference to this error at https://github.com/hashicorp/terraform/issues/3536 which suggests it might have been a macOS thing. But that should have been fixed by now.
Terraform and provider:
❯ t version
Terraform v1.1.4
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v3.72.0
+ provider registry.terraform.io/mrolla/circleci v0.6.1
OS:
OS: macOS 12.2 21D49 x86_64
Host: MacBookPro16,1
Kernel: 21.3.0
Uptime: 10 days, 18 hours, 14 mins
Packages: 174 (brew)
Shell: zsh 5.8
Resolution: 1792x1120
DE: Aqua
WM: Rectangle
Terminal: vscode
CPU: Intel i7-9750H (12) @ 2.60GHz
GPU: Intel UHD Graphics 630, AMD Radeon Pro 5300M
Memory: 14847MiB / 32768MiB
Context deadline exceeded is a generic Go timeout error. I take it from your comment that this is repeatable but it's not reproducible from the acceptance tests, which FWIW I run from macOS all the time. Without a reproduction we can't do anything and even then this is not likely to be a provider bug.
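For what it's worth, the exact error text can be reproduced with a few lines of Go against a deliberately slow server (a standalone sketch, nothing provider-specific):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

// isTimeoutErr reports whether err is a network timeout, which is what
// surfaces in Terraform as "context deadline exceeded (Client.Timeout
// exceeded while awaiting headers)".
func isTimeoutErr(err error) bool {
	var ne net.Error
	return errors.As(err, &ne) && ne.Timeout()
}

// slowGet issues a GET whose response headers arrive later than the client
// is willing to wait, reproducing the generic Go client timeout.
func slowGet() error {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(200 * time.Millisecond) // stall past the client timeout
	}))
	defer srv.Close()

	client := &http.Client{Timeout: 50 * time.Millisecond}
	_, err := client.Get(srv.URL)
	return err
}

func main() {
	err := slowGet()
	fmt.Println(isTimeoutErr(err)) // true
	fmt.Println(err)               // "... (Client.Timeout exceeded while awaiting headers)"
}
```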
It seems to happen maybe 50% of the time. I haven't noticed it from other providers. I'll try to get some more information from Terraform debug logging (TF_LOG) next time I add some contexts to see if I can find anything. I agree it could just be a network issue between me and CircleCI.
Is it easy enough for me to run acceptance tests to try and see if I can reproduce there as well?
Weird.
Is it easy enough for me to run acceptance tests to try and see if I can reproduce there as well?
Yeah! There are a few required env vars that IIRC aren't externally documented, but running acceptance tests in your own personal GH/Circle user is simple enough.
@andyshinn / @bendrucker, we're seeing this same thing...all the time and we're running this from inside CCI.
Whipped up a quick PR to try and address it. I'm not a native of Go land, so happy to try and make any changes requested 😊
https://github.com/mrolla/terraform-provider-circleci/pull/72
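For discussion's sake, a retry-on-timeout approach could look something like the sketch below (hypothetical helper names, not the PR's actual code): retry only when the failure is a network timeout, with a capped number of attempts and simple backoff.

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
	"time"
)

// doWithRetry retries a request function when it fails with a network
// timeout, up to maxAttempts. Non-timeout errors are returned immediately.
func doWithRetry(do func() (*http.Response, error), maxAttempts int) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := do()
		if err == nil {
			return resp, nil
		}
		var ne net.Error
		if !errors.As(err, &ne) || !ne.Timeout() {
			return nil, err // not a timeout: fail fast
		}
		lastErr = err
		time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // linear backoff
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

// demo exercises doWithRetry against a server whose first response stalls
// past the client timeout and whose second responds immediately.
func demo() (int, error) {
	var calls atomic.Int32
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if calls.Add(1) == 1 {
			time.Sleep(300 * time.Millisecond) // force a client timeout on attempt 1
		}
	}))
	defer srv.Close()

	client := &http.Client{Timeout: 100 * time.Millisecond}
	resp, err := doWithRetry(func() (*http.Response, error) {
		return client.Get(srv.URL)
	}, 3)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := demo()
	fmt.Println(code, err) // 200 <nil>
}
```

Note that this is only safe as-is for idempotent requests; a timed-out POST that actually succeeded server-side would need conflict handling on retry.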
Every time I remember to add TF_LOG it doesn't happen, and every time I forget... it happens. So I haven't been able to get any additional information on it yet. But thanks for opening a PR!
I haven't looked into pulling down and building a TF provider from a specific branch. But I will try and do that soon to test out the fix.
I have logs!
╷
│ Error: Get "https://circleci.com/api/v2/context/<REDACTED>": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
│
│ with circleci_context.service_context,
│ on main.tf line 88, in resource "circleci_context" "service_context":
│ 88: resource "circleci_context" "service_context" {
╵
We can toss a verbose error logging in there if you all have recommendations etc.
Ok nice, an error on GET is always going to be retryable. A timeout error on POST is potentially retryable but if the underlying request actually succeeded the retry will end up failing too.
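A minimal sketch of that retry policy, keyed off HTTP method idempotency (the method classification is from RFC 9110; the function is illustrative, not the provider's code):

```go
package main

import "fmt"

// isRetryable captures the distinction above: a timed-out request to an
// idempotent method is always safe to retry, while a timed-out POST may have
// succeeded server-side, so a blind retry can fail (e.g. "already exists").
func isRetryable(method string, timedOut bool) bool {
	if !timedOut {
		return false
	}
	switch method {
	case "GET", "HEAD", "PUT", "DELETE": // idempotent per RFC 9110
		return true
	default: // POST and friends: caller must handle the resource already existing
		return false
	}
}

func main() {
	fmt.Println(isRetryable("GET", true))  // true
	fmt.Println(isRetryable("POST", true)) // false
}
```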
My original one was a POST which is why I was thinking to get debug logs to see if there was actually a response or something bad happening in the POST. The resource does get created. It just doesn't seem to get any response within whatever timeout is expected (not sure if the timeout can be increased).
If a response isn't received more or less immediately, it's unlikely to ever come. The response timeout case is pretty tough because it means a connection was successfully opened and a request transmitted. Connections that fail to open at all or receive an immediate error response and close would be more typical symptoms of excessive concurrency.