Kubernetes cluster/node pool API should be split
Hi! I've spent a decent amount of time playing with the DO K8s Terraform provider and your Kubernetes service, and I keep hitting an issue: the "default" node pool on the K8s cluster is treated specially. Any change to it at all forces replacement of the entire cluster. My current workaround is to keep a fixed one-node pool with the smallest instance type as the default, and then add the real pools separately.
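For concreteness, here's a sketch of that workaround in Terraform. The names, region, version string, and droplet sizes are illustrative, not prescriptive:

```hcl
# Workaround sketch: a minimal placeholder default pool lives on the
# cluster resource, and the real pools are standalone resources.
resource "digitalocean_kubernetes_cluster" "main" {
  name    = "example-cluster"
  region  = "nyc1"
  version = "1.16.2-do.0" # illustrative; use a currently supported version

  # Placeholder pool: changing anything in this block forces
  # replacement of the whole cluster, so never touch it.
  node_pool {
    name       = "placeholder"
    size       = "s-1vcpu-2gb"
    node_count = 1
  }
}

# Real workers live in standalone pools, which can be added, resized,
# and removed without recreating the cluster.
resource "digitalocean_kubernetes_node_pool" "workers" {
  cluster_id = digitalocean_kubernetes_cluster.main.id
  name       = "workers"
  size       = "s-4vcpu-8gb"
  node_count = 3
}
```

The downside is you're paying for one droplet that does nothing except satisfy the cluster schema.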
https://github.com/terraform-providers/terraform-provider-digitalocean/issues/350
I actually looked at fixing the provider to handle this, and realized why it is the way it is: the data model of the Kubernetes cluster API. To create a cluster, you MUST define at least one node pool in the cluster create API call. Once the cluster exists, a standalone node pool resource works fine: you can look up individual pools, add them, delete them, and modify them. But because a pool is required at initial create time, the Terraform node pool has to live on the cluster resource.
Of course, you could move all of the pools into the main cluster resource and manage them there. That has other issues (Terraform-specific ones, not DigitalOcean's problem), but it is probably the right approach given the current API. The third option is the one I assumed existed: I would always have expected "node pool" to be a separate API resource, which would make the handling in external tools trivial. It is probably too late at this point, but perhaps consider splitting the two resources and allowing the creation of clusters with zero nodes, so that pools can be added and configured separately. (I understand there's a business reason for not allowing zero-node clusters, but perhaps charge for the control plane even when a cluster has no nodes.)
Hitting the same issue. Any chance this may be resolved some day?