Remote resolution for kaniko catalog examples
The compatible remote resolution task referred to in the catalog repo needs to be updated in https://github.com/tektoncd/pipeline/blob/main/examples/v1beta1/pipelineruns/pipelinerun.yaml#L39-L40.
The YAML file to reproduce (commands to apply it follow the manifests below):
# This demo modifies the cluster (deploys to it), so you must use a service
# account with permission to admin the cluster (or make your default user an admin
# of the `default` namespace with default-cluster-admin).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  generateName: default-cluster-admin-
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: "unit.tests"
spec:
  workspaces:
    - name: source
      mountPath: /workspace/source/go/src/github.com/GoogleContainerTools/skaffold
  steps:
    - name: run-tests
      image: golang
      env:
        - name: GOPATH
          value: /workspace/go
      workingDir: $(workspaces.source.path)
      script: |
        # The intention behind this example Task is to run unit tests, however we
        # currently do nothing to ensure that a unit test issue doesn't cause this example
        # to fail unnecessarily. In the future we could re-introduce the unit tests (since
        # we are now pinning the version of Skaffold we pull) or use Tekton Pipelines unit tests.
        echo "pass"
---
# This task deploys with kubectl apply -f <filename>
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: demo-deploy-kubectl
spec:
  params:
    - name: path
      description: Path to the manifest to apply
    - name: yqArg
      description: Okay this is a hack, but I didn't feel right hard-coding `-d1` down below
    - name: yamlPathToImage
      description: The path to the image to replace in the yaml manifest (arg to yq)
    - name: imageURL
      description: The URL of the image to deploy
  workspaces:
    - name: source
  steps:
    - name: replace-image
      image: mikefarah/yq:3
      command: ['yq']
      args:
        - "w"
        - "-i"
        - "$(params.yqArg)"
        - "$(params.path)"
        - "$(params.yamlPathToImage)"
        - "$(params.imageURL)"
    - name: run-kubectl
      image: lachlanevenson/k8s-kubectl
      command: ['kubectl']
      args:
        - 'apply'
        - '-f'
        - '$(params.path)'
---
# This Pipeline builds two microservice images (https://github.com/GoogleContainerTools/skaffold/tree/master/examples/microservices)
# from the Skaffold repo (https://github.com/GoogleContainerTools/skaffold) and deploys them to the cluster currently running Tekton Pipelines.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: "demo.pipeline"
spec:
  params:
    - name: image-registry
      default: gcr.io/christiewilson-catfactory
  workspaces:
    - name: git-source
  tasks:
    - name: fetch-from-git
      taskRef:
        resolver: hub
        params:
          - name: type # optional
            value: artifact
          - name: kind # optional
            value: task
          - name: name
            value: git-clone
          - name: version
            value: "0.4"
      params:
        - name: url
          value: https://github.com/GoogleContainerTools/skaffold
        - name: revision
          value: v1.32.0
      workspaces:
        - name: output
          workspace: git-source
    - name: skaffold-unit-tests
      runAfter: [fetch-from-git]
      taskRef:
        name: "unit.tests"
      workspaces:
        - name: source
          workspace: git-source
    - name: build-skaffold-web
      runAfter: [skaffold-unit-tests]
      taskRef:
        resolver: hub
        params:
          - name: type # optional
            value: artifact
          - name: kind # optional
            value: task
          - name: name
            value: kaniko
          - name: version
            value: "0.6"
      params:
        - name: IMAGE
          value: $(params.image-registry)/leeroy-web
        - name: CONTEXT
          value: examples/microservices/leeroy-web
        - name: DOCKERFILE
          value: $(workspaces.source.path)/examples/microservices/leeroy-web/Dockerfile
      workspaces:
        - name: source
          workspace: git-source
    - name: build-skaffold-app
      runAfter: [skaffold-unit-tests]
      taskRef:
        resolver: hub
        params:
          - name: type # optional
            value: artifact
          - name: kind # optional
            value: task
          - name: name
            value: kaniko
          - name: version
            value: "0.6"
      params:
        - name: IMAGE
          value: $(params.image-registry)/leeroy-app
        - name: CONTEXT
          value: examples/microservices/leeroy-app
        - name: DOCKERFILE
          value: $(workspaces.source.path)/examples/microservices/leeroy-app/Dockerfile
      workspaces:
        - name: source
          workspace: git-source
    - name: deploy-app
      taskRef:
        name: demo-deploy-kubectl
      params:
        - name: imageURL
          value: $(params.image-registry)/leeroy-app@$(tasks.build-skaffold-app.results.IMAGE_DIGEST)
        - name: path
          value: $(workspaces.source.path)/examples/microservices/leeroy-app/kubernetes/deployment.yaml
        - name: yqArg
          value: "-d1"
        - name: yamlPathToImage
          value: "spec.template.spec.containers[0].image"
      workspaces:
        - name: source
          workspace: git-source
    - name: deploy-web
      taskRef:
        name: demo-deploy-kubectl
      params:
        - name: imageURL
          value: $(params.image-registry)/leeroy-web@$(tasks.build-skaffold-web.results.IMAGE_DIGEST)
        - name: path
          value: $(workspaces.source.path)/examples/microservices/leeroy-web/kubernetes/deployment.yaml
        - name: yqArg
          value: "-d0"
        - name: yamlPathToImage
          value: "spec.template.spec.containers[0].image"
      workspaces:
        - name: source
          workspace: git-source
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-pipeline-run-1
spec:
  pipelineRef:
    name: "demo.pipeline"
  serviceAccountName: 'default'
  workspaces:
    - name: git-source
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
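To reproduce, the manifests above can be applied as-is and the run followed with the tkn CLI. A rough sketch, assuming Tekton Pipelines with the hub resolver enabled and that the file above is saved as pipelinerun.yaml (the file name is just a placeholder):

# kubectl create (not apply) because the ClusterRoleBinding uses generateName
kubectl create -f pipelinerun.yaml
# Follow the PipelineRun logs until the kaniko build step fails
tkn pipelinerun logs demo-pipeline-run-1 -f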
Error: https://prow.tekton.dev/view/gs/tekton-prow/pr-logs/pull/tektoncd_pipeline/5712/pull-tekton-pipeline-alpha-integration-tests/1588181464132882432
Would it make sense to file an issue for the kaniko catalog remote resolution? That doesn't seem like an issue we'd want to ignore.
Originally posted by @lbernick in https://github.com/tektoncd/pipeline/issues/5712#issuecomment-1305703094
cc @abayer
Thanks Jerome! Do either you or andrew have an example reproducer and the error that occurred?
Thanks for the reminder. Updated in the issue comment.
@JeromeJu The failure you linked to is a red herring - that particular failure was me screwing up some copy-paste. The actual error I kept hitting is seen in https://prow.tekton.dev/view/gs/tekton-prow/pr-logs/pull/tektoncd_pipeline/5712/pull-tekton-pipeline-integration-tests/1588168599833415680:
build_logs.go:37: build logs
>>> Container step-build-and-push:
INFO[0000] Resolved base name golang:1.15 to builder
INFO[0000] Retrieving image manifest golang:1.15
INFO[0000] Retrieving image golang:1.15 from registry index.docker.io
panic: runtime error: index out of range [1] with length 1
goroutine 1 [running]:
github.com/GoogleContainerTools/kaniko/pkg/executor.CalculateDependencies({0xc00015c840, 0x2, 0x7fff60796c9f}, 0x2578940, 0xc0000a3400)
github.com/GoogleContainerTools/kaniko/pkg/executor.DoBuild(0x2578940)
github.com/GoogleContainerTools/kaniko/cmd/executor/cmd.glob..func2(0x2566980, {0x16ac185, 0x5, 0x5})
github.com/spf13/cobra.(*Command).execute(0x2566980, {0xc000116130, 0x5, 0x5})
github.com/spf13/cobra.(*Command).ExecuteC(0x2566980)
github.com/spf13/cobra.(*Command).Execute(...)
main.main()
This was using the same kaniko/executor image as the copied-in task does - v1.8.1, gcr.io/kaniko-project/executor@sha256:b44b0744b450e731b5a5213058792cd8d3a6a14c119cf6b1f143704f22a7c650. I reproduced it locally as well.
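For reference, a sketch of one way to attempt roughly the same build locally (assuming Docker is available; the paths mirror the CONTEXT/DOCKERFILE params from the pipeline above, and --context, --dockerfile and --no-push are standard kaniko executor flags):

# Clone the pinned Skaffold sources
git clone --depth 1 --branch v1.32.0 https://github.com/GoogleContainerTools/skaffold
# Run the same kaniko executor image (v1.8.1) against the leeroy-web Dockerfile, without pushing
docker run --rm -v "$PWD/skaffold:/workspace" \
  gcr.io/kaniko-project/executor:v1.8.1 \
  --context=dir:///workspace/examples/microservices/leeroy-web \
  --dockerfile=/workspace/examples/microservices/leeroy-web/Dockerfile \
  --no-push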
This looks like an issue with the kaniko build task; should we transfer this to the catalog repo?
cc @QuanZhang-William thanks for looking into this
So we have found that the previous error persists whether we use the copied-in 0.6 kaniko task or remote resolution.
Decided to start debugging from the catalog/kaniko task itself.
With some investigation, it looks like the root cause is that the example's DOCKERFILE cannot be accepted by kaniko (see this related open issue).
I have tried the code example used in the kaniko catalog test (https://github.com/tektoncd/catalog/blob/main/task/kaniko/0.6/tests/run.yaml#L21) and successfully built & pushed the image to GAR. So to unblock @JeromeJu, changing the source code and Dockerfile should solve the issue.
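As a sanity check on that conclusion, a minimal sketch of the kind of single-stage build the same kaniko executor image does accept (the Dockerfile below is illustrative only, not the one used by the catalog test; assumes Docker is available locally):

mkdir kaniko-simple && cd kaniko-simple
cat > Dockerfile <<'EOF'
FROM alpine:3.16
RUN echo "hello from a single-stage build"
EOF
# Same executor image as above; builds and exits cleanly without pushing
docker run --rm -v "$PWD:/workspace" \
  gcr.io/kaniko-project/executor:v1.8.1 \
  --context=dir:///workspace \
  --dockerfile=/workspace/Dockerfile \
  --no-push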
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.