terraform-provider-kubernetes
Why don’t you want to fix the issue with CRD?
You are correct that this is the way this provider operates, but it is not how Terraform inherently operates. A provider should not require a connection to the running service, in this case a Kubernetes cluster, to run a plan, yet the way this provider is written, it does. The plan stage should only compare against the state and perform built-in validation of the objects.
The Helm provider doesn’t require access to the cluster to run a plan, and the AWS provider doesn’t require access to AWS to run a plan. So why does this provider? Validation of the actual AWS object by AWS happens only at apply time, and that works well.
It may be true that this isn’t a “bug,” but it is something that should be addressed, so I’ll resubmit it as a feature request. This is one specific use case, but the fact that you can’t run a plan without a connection at all is a larger problem: I should be able to run the plan in a pipeline for MRs to allow proper review of what’s going to change, but it fails because the provider requires a connection to the cluster even at plan time.
Originally posted by @jsingleton785 in #2597
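To make the quoted complaint concrete, here is a minimal sketch (the kubeconfig path and the ConfigMap are assumptions, not taken from the original report): `terraform plan` against even this trivial configuration fails when the cluster is unreachable, because `kubernetes_manifest` queries the API server for the resource's schema at plan time.

```hcl
provider "kubernetes" {
  # Assumed kubeconfig location; any unreachable cluster reproduces the behavior.
  config_path = "~/.kube/config"
}

resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "demo"
      namespace = "default"
    }
    data = {
      key = "value"
    }
  }
}
```

This is exactly the CI/MR scenario above: it is the plan itself, not the apply, that needs the live connection.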
So, I’ve left my comment on this topic and would appreciate some constructive suggestions, rather than being told to pull the CRDs out and apply them separately ahead of deploying the release!
Not sure if this is exactly the same issue you're experiencing, but I seem to have encountered the problem specifically when trying to use `kubernetes_manifest` to install CRDs that are not yet present on the cluster. If I were to apply the YAML with `kubectl`, I would have to use the `--server-side` flag to get around the size of the CRD, etc. The error is: `API did not recognize GroupVersionKind from manifest (CRD may not be installed)`.
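For what it's worth, the shape of configuration that triggers it looks roughly like this (the CRD file path, API group, and `Widget` kind below are made-up placeholders):

```hcl
# The CRD itself is part of the same configuration...
resource "kubernetes_manifest" "crd" {
  manifest = yamldecode(file("${path.module}/crds/widgets.example.com.yaml"))
}

# ...but even with an explicit dependency, `terraform plan` already asks the
# live cluster to resolve the custom resource's GroupVersionKind, so it fails
# with "API did not recognize GroupVersionKind from manifest (CRD may not be
# installed)" because the CRD only exists after the first apply.
resource "kubernetes_manifest" "widget" {
  depends_on = [kubernetes_manifest.crd]

  manifest = {
    apiVersion = "example.com/v1"
    kind       = "Widget"
    metadata = {
      name      = "demo"
      namespace = "default"
    }
    spec = {
      replicas = 1
    }
  }
}
```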
Strange, I have the same issue, only with Cert-manager, but never mind, that’s not the point. I know that some charts, like those from Bitnami (Broadcom Inc.), have various workarounds that skip CRD checks, but not all of them follow this approach. I’d like to test this feature, at least in an alpha release, so that CRD validation can be skipped when applying specific manifests via `kubernetes_manifest`, rather than all of them!
Something I noticed that's sort of funny is this terraform OpenFAAS-on-K8s tutorial where they specifically do the OpenFAAS install and then CRD creation as two distinct steps in what appear to be separate terraform modules. If both had been done in a single pass then it'd have failed due to this problem.
From my own experience dealing with this, I'm wondering if the solution here is to just refrain from using terraform for K8s management and instead stick with the native tooling. On top of CRDs being barely supported, the whole exercise of dealing with tfstate (when the K8s apiserver already makes the source of truth readily available!), not to mention mapping the original K8s YAML content into terraform's special syntax (the pain is slightly reduced with `yamldecode` as in that tutorial, but that only supports one document at a time; see the sketch after the next paragraph), isn't exactly a great experience to begin with. Given all this, I don't see how using terraform is an improvement.
In other words, if this concept is so foreign to how terraform wants the world to work, why not just recommend that folks use something else?
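On the `yamldecode` point, a workaround sketch (the manifest file name is an assumption): split a multi-document YAML file on the document separator and feed each document to its own `kubernetes_manifest` resource.

```hcl
locals {
  # Split on the YAML document separator and drop empty chunks
  # (e.g. a leading "---") before decoding each document individually.
  docs = [
    for chunk in split("\n---\n", file("${path.module}/manifests/all-in-one.yaml")) :
    yamldecode(chunk) if trimspace(chunk) != ""
  ]

  # Key each document by kind and name so for_each gets stable addresses.
  manifests = {
    for doc in local.docs :
    "${doc.kind}/${doc.metadata.name}" => doc
  }
}

resource "kubernetes_manifest" "objects" {
  for_each = local.manifests
  manifest = each.value
}
```

This removes the one-document limitation, but it does nothing about the plan-time CRD lookup that this issue is about.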
Of course, this problem affects the whole Kubernetes provider, but today the CRD checks and their workarounds live specifically in individual Helm chart templates. So I don't understand why the Terraform dev team is dragging its feet on an answer, because an easy way to skip this final validation could be implemented in the Terraform provider itself, instead of a thousand issues being opened across many Helm charts (and providers).
Also, I found issue #2187 with a good solution to the current problem: add a flag such as disableCrdCheck=true. That flag could default to false so the existing logic doesn't change and nothing breaks in other releases and manifests.
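Purely to illustrate that proposal (the argument below does not exist in the provider today; the name is taken from the suggestion above and rendered in HCL style):

```hcl
resource "kubernetes_manifest" "widget" {
  # HYPOTHETICAL: proposed opt-in from #2187, not a real argument today.
  # Defaulting to false would keep current behavior for everyone else;
  # setting it to true would skip the plan-time GroupVersionKind lookup
  # for this one resource only.
  disable_crd_check = true

  manifest = {
    apiVersion = "example.com/v1"
    kind       = "Widget"
    metadata = {
      name      = "demo"
      namespace = "default"
    }
  }
}
```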
A fix for this is badly needed and, by the looks of it, very simple to implement. It really makes no sense to even argue about such things, to be honest.
@ironashram I hope someone sends a PR to close this issue.
Come on guys, this has gone on too long already. We are waiting for the fix; it's disrupting our workflows. @arybolovlev
up
Isn't this solved with deferred changes?