
DEBU[0018] marking resource failed due to error code STATUSCHECK_IMAGE_PULL_ERR subtask=-1 task=Deploy

Open bestofman opened this issue 2 years ago • 9 comments

I am trying to deploy my NodeJS application on a local Kubernetes cluster using Skaffold, but I get the following result:

INFO[0001] Render completed in 15.042699ms               subtask=-1 task=DevLoop
Tags used in deployment:
 - learnertester/auth -> learnertester/auth:4d1688c1ebbcde95b85b8505b9734837743f007a9a0232aec4c24c6ee6044f07
 - learnertester/ticketing-client -> learnertester/ticketing-client:7e330225fded6bae522511538d55dfcd0f4dc2477166cef0ba13d60393c34edc
 - learnertester/tickets -> learnertester/tickets:20a0b006e4318b47725c03d25fd7adc47d781d096335749b5dc05bbdb8048c80
 - learnertester/orders -> learnertester/orders:98335ecdbaa89f389da83910376e75e0d3caa85474eb49bc1e4d4d5c729b73d6
 - learnertester/expiration -> learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e
 - learnertester/payments -> learnertester/payments:ba1c398b9c2d3572b57a7edae8cded20e662dbfb52862d0c2f5fb10e6a0f584a
DEBU[0001] Local images can't be referenced by digest.
They are tagged and referenced by a unique, local only, tag instead.
See https://skaffold.dev/docs/pipeline-stages/taggers/#how-tagging-works  subtask=-1 task=Deploy
Starting deploy...
DEBU[0001] getting client config for kubeContext: `kubernetes-admin@kubernetes`  subtask=-1 task=DevLoop
DEBU[0001] Running command: [kubectl --context kubernetes-admin@kubernetes get -f - --ignore-not-found -ojson]  subtask=0 task=Deploy
DEBU[0001] Command output: []                            subtask=0 task=Deploy
DEBU[0001] 24 manifests to deploy. 24 are updated or new  subtask=0 task=Deploy
DEBU[0001] Running command: [kubectl --context kubernetes-admin@kubernetes apply -f -]  subtask=0 task=Deploy
 - deployment.apps/auth-depl created
 - service/auth-srv created
 - deployment.apps/auth-mongo-depl created
 - service/auth-mongo-srv created
 - deployment.apps/client-depl created
 - service/client-srv created
 - deployment.apps/expiration-depl created
 - deployment.apps/expiration-redis-depl created
 - service/expiration-redis-srv created
 - ingress.networking.k8s.io/ingress-service created
 - deployment.apps/nats-depl created
 - service/nats-srv created
 - deployment.apps/orders-depl created
 - service/orders-srv created
 - deployment.apps/orders-mongo-depl created
 - service/orders-mongo-srv created
 - deployment.apps/payments-depl created
 - service/payments-srv created
 - deployment.apps/payments-mongo-depl created
 - service/payments-mongo-srv created
 - deployment.apps/tickets-depl created
 - service/tickets-srv created
 - deployment.apps/tickets-mongo-depl created
 - service/tickets-mongo-srv created
INFO[0016] Deploy completed in 14.814 seconds            subtask=-1 task=Deploy
Waiting for deployments to stabilize...
DEBU[0016] getting client config for kubeContext: `kubernetes-admin@kubernetes`  subtask=-1 task=DevLoop
DEBU[0016] getting client config for kubeContext: `kubernetes-admin@kubernetes`  subtask=-1 task=DevLoop
DEBU[0016] checking status deployment/orders-depl        subtask=-1 task=Deploy
DEBU[0016] checking status deployment/auth-mongo-depl    subtask=-1 task=Deploy
DEBU[0016] checking status deployment/nats-depl          subtask=-1 task=Deploy
DEBU[0016] checking status deployment/expiration-redis-depl  subtask=-1 task=Deploy
DEBU[0016] checking status deployment/tickets-depl       subtask=-1 task=Deploy
DEBU[0016] checking status deployment/expiration-depl    subtask=-1 task=Deploy
DEBU[0016] checking status deployment/tickets-mongo-depl  subtask=-1 task=Deploy
DEBU[0016] checking status deployment/auth-depl          subtask=-1 task=Deploy
DEBU[0016] checking status deployment/orders-mongo-depl  subtask=-1 task=Deploy
DEBU[0016] checking status deployment/payments-depl      subtask=-1 task=Deploy
DEBU[0016] checking status deployment/payments-mongo-depl  subtask=-1 task=Deploy
DEBU[0016] checking status deployment/client-depl        subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment orders-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment expiration-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment tickets-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment expiration-redis-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment payments-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment client-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment nats-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment tickets-mongo-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment auth-mongo-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment payments-mongo-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment auth-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment orders-mongo-depl --namespace default --watch=false]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "orders-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "orders-mongo-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "tickets-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "client-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "nats-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "expiration-redis-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "payments-mongo-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "payments-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "expiration-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "auth-mongo-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "auth-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Command output: [Waiting for deployment "tickets-mongo-depl" rollout to finish: 0 of 1 updated replicas are available...
]  subtask=-1 task=Deploy
DEBU[0018] Pod "orders-mongo-depl-d5d848ddf-bbqgv" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "tickets-depl-849c8d456b-vs88q" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "client-depl-775ccc9965-2k9gm" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "expiration-redis-depl-54b5cdbd58-8bzmj" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "orders-depl-586c4b7894-z59cl" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "auth-mongo-depl-5f6657ff85-hm45t" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "payments-depl-6485786b64-t8t8d" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "payments-mongo-depl-7877bc7dc7-4xhtr" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "nats-depl-7699f6bf9c-bfv9n" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] Pod "expiration-depl-7989dc5ff4-lkpvw" scheduled but not ready: checking container statuses  subtask=-1 task=DevLoop
DEBU[0018] marking resource failed due to error code STATUSCHECK_IMAGE_PULL_ERR  subtask=-1 task=Deploy
 - deployment/expiration-depl: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
    - pod/expiration-depl-7989dc5ff4-lkpvw: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled
 - deployment/expiration-depl failed. Error: container expiration is waiting to start: learnertester/expiration:8c6b05f89e0abe8e6a33da266355cf79713e6bd22d1abda0da5541f24d5d8d9e can't be pulled.
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] pod statuses could not be fetched this time due to following errors occurred context canceled  subtask=-1 task=Deploy
DEBU[0018] marking resource status check cancelledSTATUSCHECK_USER_CANCELLED  subtask=-1 task=Deploy
DEBU[0018] setting skaffold deploy status to STATUSCHECK_IMAGE_PULL_ERR.  subtask=-1 task=Deploy
Cleaning up...
DEBU[0018] Running command: [kubectl --context kubernetes-admin@kubernetes delete --ignore-not-found=true --wait=false -f -]  subtask=-1 task=DevLoop
 - deployment.apps "auth-depl" deleted
 - service "auth-srv" deleted
 - deployment.apps "auth-mongo-depl" deleted
 - service "auth-mongo-srv" deleted
 - deployment.apps "client-depl" deleted
 - service "client-srv" deleted
 - deployment.apps "expiration-depl" deleted
 - deployment.apps "expiration-redis-depl" deleted
 - service "expiration-redis-srv" deleted
 - ingress.networking.k8s.io "ingress-service" deleted
 - deployment.apps "nats-depl" deleted
 - service "nats-srv" deleted
 - deployment.apps "orders-depl" deleted
 - service "orders-srv" deleted
 - deployment.apps "orders-mongo-depl" deleted
 - service "orders-mongo-srv" deleted
 - deployment.apps "payments-depl" deleted
 - service "payments-srv" deleted
 - deployment.apps "payments-mongo-depl" deleted
 - service "payments-mongo-srv" deleted
 - deployment.apps "tickets-depl" deleted
 - service "tickets-srv" deleted
 - deployment.apps "tickets-mongo-depl" deleted
 - service "tickets-mongo-srv" deleted
INFO[0054] Cleanup completed in 35.7 seconds             subtask=-1 task=DevLoop
DEBU[0054] Running command: [tput colors]                subtask=-1 task=DevLoop
DEBU[0054] Command output: [256
]                        subtask=-1 task=DevLoop
1/12 deployment(s) failed

This is the expiration-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration
  template:
    metadata:
      labels:
        app: expiration
    spec:
      containers:
        - name: expiration
          image: learnertester/expiration
          env:
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NATS_URL
              value: 'http://nats-srv:4222'
            - name: NATS_CLUSTER_ID
              value: ticketing
            - name: REDIS_HOST
              value: expiration-redis-srv

And this is the expiration-redis-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expiration-redis-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expiration-redis
  template:
    metadata:
      labels:
        app: expiration-redis
    spec:
      containers:
        - name: expiration-redis
          image: redis
---
apiVersion: v1
kind: Service
metadata:
  name: expiration-redis-srv
spec:
  selector:
    app: expiration-redis
  ports:
    - name: db
      protocol: TCP
      port: 6379
      targetPort: 6379

Information

  • Skaffold version: v2.0.3
  • Operating system: Ubuntu 22.04 LTS
  • Installed via: Snap
  • Contents of skaffold.yaml:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl: 
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: learnertester/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/ticketing-client
      context: client
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: '**/*.js'
            dest: .
    - image: learnertester/tickets
      context: tickets
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/orders
      context: orders
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/expiration
      context: expiration
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: learnertester/payments
      context: payments
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .

bestofman · Dec 15 '22 06:12

Does nobody know about this problem?

bestofman · Dec 16 '22 16:12

Apologies for no response on this.

From the log message:

DEBU[0017] Running command: [kubectl --context kubernetes-admin@kubernetes rollout status deployment orders-depl --namespace default --watch=false] subtask=-1 task=Deploy

It seems that the value of the Kubernetes context is not getting set correctly. Can you verify that the Kubernetes context is set correctly? (See https://skaffold.dev/docs/environment/kube-context/.)
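For example, you could compare what kubectl reports with what Skaffold is using, and explicitly pin the context if they differ (the context name below is the one from your log):

$ kubectl config current-context
$ kubectl config get-contexts
$ skaffold dev --kube-context kubernetes-admin@kubernetes

The context can also be pinned in skaffold.yaml under deploy.kubeContext, as described on the page linked above.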

gsquared94 · Feb 06 '23 19:02

I'm also having the same issue with kubeadm, running with the correct context.

skylight74 · Feb 26 '23 12:02

I ran into the same issue on Ubuntu 22. The correct context is set both via the yaml and via the CLI; I tried both ways, but it shows the same issue.

filipRisteski · Mar 06 '23 15:03

Here are my thoughts. Upon further investigation, it might have something to do with Docker contexts. If you want Kubernetes on Linux, you have to additionally install Docker Desktop, which brings another context (and daemon) with it. The Docker documentation explains this, but I'm not sure whether that is what creates the issue with Skaffold. In my organization, setting the context to docker-desktop and having push: false works on Macs. The exact same skaffold.yaml fails on Linux and throws this error. I believe it's because of the two daemons that exist on Linux.

To see if I could get around the error, I tried manually changing the Docker context, but that didn't get me anywhere.
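For reference, this is roughly what I tried (the context names depend on what is installed on the machine):

$ docker context ls
$ docker context use desktop-linux   # context created by Docker Desktop, if present
$ docker context use default         # the plain Docker Engine daemon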

filipRisteski · Mar 06 '23 15:03

I also faced the same problem. My issue was that I was manually specifying imagePullPolicy in the K8s Deployment config. After removing it, the error was gone.
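For anyone else hitting this, the problematic shape looks roughly like the following (illustrative snippet, using the expiration deployment from above): with locally built images and push: false, imagePullPolicy: Always makes the kubelet try to pull from the registry, which then fails.

spec:
  containers:
    - name: expiration
      image: learnertester/expiration
      imagePullPolicy: Always   # remove this line, or use IfNotPresent, so the locally built image is used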

btsb78 · Apr 17 '23 11:04

Deleting the cache file under ~/.skaffold/cache fixed the problem for me on v2.5.0.
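That is, something along these lines:

$ rm ~/.skaffold/cache
$ skaffold dev   # or whichever skaffold command you normally run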

kurczynski · Jun 09 '23 18:06

Unfortunately, doing that causes all images to be rebuilt even if they were unchanged (and it didn't fix the issue for me).

akpro-io · Jun 15 '23 07:06

Just putting this here in the hopes that it helps a future visitor. I had this issue and I was rather baffled until I read the previous comments about context. Many thanks for the clues!

Turns out I had made a naming mistake in my .envrc file. (I'm using direnv to set my environment variables.)

$ cat .envrc
kubectl config set-context etmc --cluster=etmc
kubectl config use-context etmc
export KUBECONFIG="$(k3d kubeconfig write etmc)"
export DOCKER_HOST=unix:///var/run/docker.sock

I was using kubectl to set the context to etmc. This was wrong because k3d creates its context names prefixed with k3d-.

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO         NAMESPACE
          etmc                                   
*         k3d-etmc   k3d-etmc   admin@k3d-etmc

I looked in the file generated by k3d kubeconfig write etmc, and it looks like the context is set in there. Both kubectl lines in my .envrc were therefore redundant. I changed it to the following, and now I'm able to use Skaffold without the deployment errors.

$ cat .envrc
export KUBECONFIG="$(k3d kubeconfig write etmc)"
export DOCKER_HOST=unix:///var/run/docker.sock
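
To double-check which context the generated kubeconfig points at, something like this can be used (here it should print k3d-etmc):

$ kubectl --kubeconfig "$(k3d kubeconfig write etmc)" config current-context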

EDIT: I think I spoke too soon. This didn't solve my STATUSCHECK_IMAGE_PULL_ERR issue.

insanity54 · Feb 13 '24 03:02