[BUG] kubectl - memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
Affected builder image
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'kubectl'
  args:
    - "rollout"
    - "restart"
    - "deployment"
    - "-l"
    - "app=myLabel"
Expected Behavior
Presumably it works and runs the rollout restart.
Actual Behavior
I get an error:
Already have image (with digest): gcr.io/google.com/cloudsdktool/cloud-sdk
E0404 20:29:08.702878 1 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 20:29:08.704245 1 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 20:29:08.705268 1 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 20:29:08.707955 1 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0404 20:29:08.709899 1 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Steps to Reproduce the Problem
- create a cloudbuild.yaml file
- set the substitution variables (a sketch of these follows the step below)
- run a build with a step like the following
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: 'kubectl'
  args:
    - "rollout"
    - "restart"
    - "deployment"
    - "-l"
    - "app=mylabel"
  env:
    - "CLOUDSDK_COMPUTE_ZONE=${_ZONE}"
    - "CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}"
    - "CLOUDSDK_CORE_PROJECT=${_PROJECT}"
For me, this was due to a bad environment setting: the kubeconfig path pointed to .kube/config instead of ~/.kube/config in my .bashrc.
It is my understanding that the kubectl entrypoint runs the kubectl binary directly, so you need to provide a valid kubeconfig to connect to your cluster; without one, kubectl falls back to localhost:8080, which is exactly the connection refused in the log above. I can reproduce this behavior if I don't have a step that gets credentials first. I added a step like
- id: 'get context'
name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: 'gcloud'
args: [ 'container', 'clusters', 'get-credentials', 'my-cluster', '--region', 'northamerica-northeast1' ]
and my subsequent apply using the kubectl entrypoint worked just fine.
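Putting it together, a minimal two-step sketch based on the snippets above (the cluster name, region, and the 'restart deployment' id are illustrative, not from the original report):

steps:
  # Fetch cluster credentials first; this writes a kubeconfig that later
  # steps can reuse, since the builder home directory persists across steps.
  - id: 'get context'
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args: ['container', 'clusters', 'get-credentials', 'my-cluster', '--region', 'northamerica-northeast1']
  # kubectl now finds a valid kubeconfig and can reach the cluster.
  - id: 'restart deployment'
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'kubectl'
    args: ['rollout', 'restart', 'deployment', '-l', 'app=myLabel']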
Thanks @caddac for sharing a detailed example.