k.kubectl.WaitForDeletions bails on new CRDs
Unless switched off,

    func (c *CLI) WaitForDeletions(ctx context.Context, out io.Writer, manifests ManifestList) error {
        if !c.waitForDeletions.Enabled {
            return nil
        }
        [...]

WaitForDeletions performs a kubectl get on the manifests.Reader()ed output.
Given the case that the manifests.Reader()ed output contains CRDs unbeknownst to the cluster, the call to kubectl get just bails without further ado:
    if err != nil {
        return err
    }
However, when developing the k8s manifests of an application that includes CRDs, one does not want to hit a cryptic error message that forces one to debug skaffold code to see what's going on.
Therefore, either provide an actionable error message instructing the user to disable waitForDeletions on the CLI (e.g. by parsing the kubectl error message), reconcile the CRDs used in those manifests before proceeding with WaitForDeletions, or filter out unavailable CRDs from the manifests.Reader()ed output.
Considerations: given the context and intent of WaitForDeletions, filtering those CRDs out of the input manifest is perhaps the most consistent and safest solution, since at the end of WaitForDeletions all resources shall be deleted anyway.
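For illustration, here is a minimal, self-contained sketch of that filtering idea. It is not skaffold's actual code: the helper names, the naive string-based YAML handling, and the use of kubectl api-resources for discovery are all assumptions. It asks the cluster which kinds it currently serves and drops manifest documents of unknown kinds before any kubectl get.

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // knownKinds asks the cluster which kinds it currently serves.
    // `kubectl api-resources --no-headers` prints one resource per line;
    // the last column is the kind (e.g. "ClusterIssuer").
    func knownKinds() (map[string]bool, error) {
        out, err := exec.Command("kubectl", "api-resources", "--no-headers").Output()
        if err != nil {
            return nil, err
        }
        kinds := map[string]bool{}
        for _, line := range strings.Split(string(out), "\n") {
            if fields := strings.Fields(line); len(fields) > 0 {
                kinds[fields[len(fields)-1]] = true
            }
        }
        return kinds, nil
    }

    // kindOf pulls the top-level "kind:" value out of one YAML document
    // (a naive line scan; real code would use a YAML parser).
    func kindOf(doc string) string {
        for _, line := range strings.Split(doc, "\n") {
            if strings.HasPrefix(line, "kind:") {
                return strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
            }
        }
        return ""
    }

    // filterUnknownKinds drops documents whose kind the cluster does not
    // serve, so a later `kubectl get -f - --ignore-not-found` cannot trip
    // over a discovery error.
    func filterUnknownKinds(manifests string) (string, error) {
        known, err := knownKinds()
        if err != nil {
            return "", err
        }
        var kept []string
        for _, doc := range strings.Split(manifests, "\n---\n") {
            if k := kindOf(doc); k == "" || known[k] {
                kept = append(kept, doc)
            }
        }
        return strings.Join(kept, "\n---\n"), nil
    }

    func main() {
        in, err := io.ReadAll(os.Stdin)
        if err != nil {
            log.Fatal(err)
        }
        filtered, err := filterUnknownKinds(string(in))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(filtered)
    }

In skaffold itself, something equivalent would presumably run on the ManifestList right before the kubectl get call quoted further down.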
Here is the actual log snippet:
exiting dev mode because first deploy failed: running [kubectl --context k3d-k3s-default get -f - --ignore-not-found -ojson]
- stdout: ""
- stderr: "error: unable to recognize \"STDIN\": no matches for kind \"ClusterIssuer\" in version \"cert-manager.io/v1alpha2\"\n"
- cause: exit status 1
Note that it is deceiving that the --ignore-not-found in

    buf, err := c.RunOutInput(ctx, manifests.Reader(), "get", c.args(nil, "-f", "-", "--ignore-not-found", "-ojson")...)

apparently applies only to resources, not to resource definitions. Deceiving as that is, it is kind of consistent, depending on the viewpoint.
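For the "actionable error message" remedy suggested above, a sketch might look like the following; the helper name and the message text are my assumptions, not skaffold's actual code:

    import (
        "fmt"
        "strings"
    )

    // actionableWaitForDeletionsError wraps kubectl's "no matches for kind"
    // discovery error with a hint at the escape hatch (hypothetical helper).
    func actionableWaitForDeletionsError(err error) error {
        if err != nil && strings.Contains(err.Error(), "no matches for kind") {
            return fmt.Errorf("%w; the cluster does not (yet) serve one of the kinds in your manifests - if its CRD is part of this deployment, consider --wait-for-deletions=false", err)
        }
        return err
    }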
/cc @dgageot - since you last modified those lines.
This is a repo to reproduce: https://github.com/philips/crd-skaffold-issue
~work-around: skaffold dev/run --wait-for-deletions=false~
I can't even use the work-around. If CRDs are unknown to the cluster, nothing works.
You're right, me neither. I got one step further, though.
As opposed to @philips' logs, I get:
[...]
- unable to recognize "STDIN": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1beta1"
vs
- Error from server (NotFound): error when creating "STDIN": the server could not find the requested resource (post prometheuses.monitoring.coreos.com)
- philips: apiVersion: skaffold/v2beta5, skaffold version ???
- me: apiVersion: skaffold/v2beta6, skaffold v1.13.1
I've run skaffold run --wait-for-deletions=false --cleanup=false proving that the requested resource is being created with:
$ kubectl apply -f k8s/dev/bases/cluster-issuer.yml
clusterissuer.cert-manager.io/ca-issuer created
EDIT: due to an ongoing bug in k8s I can't simply test re-applying as philips did. So I guess I'm forced to enjoy the weather for the time being... :wink: :sun_behind_rain_cloud:
@philips The last time a user had this issue, we advised them to number the manifest files "01-crd.yaml", "02-resource.yaml". Kubectl respects that ordering.
Can you try that?
Thanks, Tejal
Another repo to reproduce: https://github.com/gsquared94/crd-skaffold-issue
I tried @tejal29's idea (which was reported working at https://github.com/kubernetes/kubernetes/issues/16448#issuecomment-454218437), but that failed with the same errors:
❯ skaffold dev
Listing files to watch...
Generating tags...
Checking cache...
Tags used in deployment:
Starting deploy...
Cleaning up...
WARN[0002] deployer cleanup: reading manifests: kubectl create: running [kubectl --context minikube create --dry-run=client -oyaml -f /Users/gaghosh/Code/Hack/crd-skaffold-issue-simple/kubernetes/0_crontab.definition.yaml -f /Users/gaghosh/Code/Hack/crd-skaffold-issue-simple/kubernetes/1_crontabs.yaml]
- stdout: "apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n name: crontabs.stable.example.com\nspec:\n group: stable.example.com\n names:\n kind: CronTab\n plural: crontabs\n shortNames:\n - ct\n singular: crontab\n scope: Namespaced\n versions:\n - name: v1\n schema:\n openAPIV3Schema:\n properties:\n spec:\n properties:\n cronSpec:\n type: string\n image:\n type: string\n replicas:\n
I also tried listing out the files directly in the skaffold.yaml:
manifests:
- kubernetes/0_crontab.definition.yaml
- kubernetes/1_crontabs.yaml
and even though that passed the files to kubectl in order:
kubectl --context minikube create --dry-run=client -oyaml -f /Users/gaghosh/Code/Hack/crd-skaffold-issue-simple/kubernetes/0_crontab.definition.yaml -f /Users/gaghosh/Code/Hack/crd-skaffold-issue-simple/kubernetes/1_crontabs.yaml
it still resulted in the same failure.
A couple of possible solutions:
- allow multiple kubectl deployers to run in sequence. Hard to implement right now.
- change the kubectl deployer so that it takes a list of lists of manifests; it would deploy/delete in multiple steps. Quite easy (see the sketch after this list).
- teach the kubectl deployer to group manifests based on CRD definition vs CRD usage. Lots of possible side effects.
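A minimal sketch of the second option, using plain kubectl invocations instead of skaffold's actual deployer API (the staging shape, the helper names, and the file names reused from the repro above are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyInStages runs one `kubectl apply` per stage, so CRDs created by an
    // earlier stage exist before a later stage references them. A stage that
    // installs CRDs may additionally need a
    //   kubectl wait --for condition=established crd/<name>
    // before the next stage, since a CRD is not served immediately.
    func applyInStages(stages [][]string) error {
        for i, files := range stages {
            args := []string{"apply"}
            for _, f := range files {
                args = append(args, "-f", f)
            }
            cmd := exec.Command("kubectl", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("stage %d: %w", i+1, err)
            }
        }
        return nil
    }

    func main() {
        if err := applyInStages([][]string{
            {"kubernetes/0_crontab.definition.yaml"},
            {"kubernetes/1_crontabs.yaml"},
        }); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Deletion would walk the same stages in reverse.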
Let me remind everyone that there are actually two failures:
- waitForDeletions --ignore-not-found doesn't ignore missing definitions
- "not found" on kubectl apply
Leaving a quick update on this: we're still trying to figure out the broader story for deploy in Skaffold and how CRDs tie into it. We're hoping to have some improvements soon, once we decide how we want things to work with some upcoming changes. Thanks for being patient with this.
Has the handling of CRDs been improved in Skaffold v2, or is this issue still current?
@jgillich this is still an issue in Skaffold v2. We are prioritizing this work, though, and it will likely go into our v2.3.0 milestone/release, which is expected out in late Feb. I will update the thread here when we start work on the milestone. Thanks for your patience.
I guess there haven't been any updates since the last time? 😞