terraform-provider-kubernetes
kubernetes_manifest imported resources are recreated, not updated (reopening)
Terraform Version, Provider Version and Kubernetes Version
Effectively a duplicate of #1712, but on v2.12.1
Terraform version: 1.1.1 (also tried 1.2.8)
Kubernetes provider version: 2.12.1
Kubernetes version: 1.21
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
resource "kubernetes_manifest" "worker_machine_set" {
for_each = var.az_subnet_map
manifest = yamldecode(templatefile("${path.module}/manifests/machineset/worker-machine-set.yaml", {
az = "${var.region}${each.key}",
region = var.region,
environment = var.environment,
subnet_id = each.value,
ami_id = var.worker_ami_id,
instance_type = var.worker_instance_type
}))
field_manager {
force_conflicts = true
}
lifecycle {
prevent_destroy = true
}
}
worker-machine-set.yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ${environment}-worker-${az}
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: ${environment}
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ${environment}
      machine.openshift.io/cluster-api-machineset: ${environment}-worker-${az}
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ${environment}
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ${environment}-worker-${az}
    spec:
      providerSpec:
        value:
          userDataSecret:
            name: worker-user-data
          placement:
            availabilityZone: ${az}
            region: ${region}
          credentialsSecret:
            name: aws-cloud-credentials
          instanceType: ${instance_type}
          blockDevices:
            - ebs:
                encrypted: true
                iops: 2000
                kmsKey:
                  arn: ""
                volumeSize: 500
                volumeType: io1
          securityGroups:
            - filters:
                - name: "tag:Name"
                  values:
                    - ${environment}-worker-sg*
          kind: AWSMachineProviderConfig
          loadBalancers:
            - name: ${environment}-ingress
              type: network
          tags:
            - name: kubernetes.io/cluster/${environment}
              value: owned
            - name: deployment
              value: worker
          deviceIndex: 0
          ami:
            id: ${ami_id}
          subnet:
            id: ${subnet_id}
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          iamInstanceProfile:
            id: kubic-${environment}-worker
Steps to Reproduce
- terraform import 'module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]' "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"
- terragrunt plan -target "module.kubic_crds"
Expected Behavior
The plan should update the resource. This is the behavior I see with v2.8.0 of the provider.
Actual Behavior
The plan attempts to recreate the resource, which in this case is not allowed because lifecycle.prevent_destroy is true.
Important Factoids
Terragrunt is used to wrap the calls to Terraform, but it is still Terraform that is ultimately being invoked; Terragrunt simply provides a couple of pre/post hooks for running scripts.
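For illustration, such hooks look roughly like this in terragrunt.hcl (a sketch only; the module source and script paths here are hypothetical):

# terragrunt.hcl (sketch; module source and script paths are hypothetical)
terraform {
  source = "../modules//kubic-crds"

  # Run a script before plan and apply, e.g. to refresh cluster credentials.
  before_hook "pre_run" {
    commands = ["plan", "apply"]
    execute  = ["./scripts/pre-run.sh"]
  }

  # Run a script after apply, e.g. to verify the MachineSets came up.
  after_hook "post_apply" {
    commands     = ["apply"]
    execute      = ["./scripts/post-apply.sh"]
    run_on_error = false
  }
}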
References
- #1712
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Hi @zonybob!
We need a few more clarifications before we can look into this.
- Is there any state present for module.kubic_crds before running the import step?
- What is the exact output you are getting when the import happens? Can you paste the exact Terraform output here?
Thanks!
@alexsomesan thanks for the quick reply!
This was initially a completely fresh installation; there was zero state when the imports were first run. I have since reproduced the import/plan error with other state present as well.
When the import ran, there was nothing concerning; the output was normal.
$ terraform import 'module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]' "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Importing from ID "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"...
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Import prepared!
Prepared kubernetes_manifest for import
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Refreshing state...
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
╷
│ Warning: Apply needed after 'import'
│
│ Please run apply after a successful import to realign the resource state to the configuration in Terraform.
╵
Releasing state lock. This may take a few moments...
When the plan runs, the error is:
╷
│ Error: Instance cannot be destroyed
│
│ on ../modules/kubic-crds/machine.tf line 7:
│ 7: resource "kubernetes_manifest" "worker_machine_set" {
│
│ Resource module.kubic_crds.kubernetes_manifest.worker_machine_set["a"] has
│ lifecycle.prevent_destroy set, but the plan calls for this resource to be
│ destroyed. To avoid this error and continue with the plan, either disable
│ lifecycle.prevent_destroy or reduce the scope of the plan using the -target
│ flag.
╵
ERRO[0061] Hit multiple errors:
Hit multiple errors:
exit status 1
I'm seeing a similar issue, even with resources that were created by Terraform (on 2.13.1).
Terraform output of a second plan, run after the initial plan/apply:
Terraform detected the following changes made outside of Terraform since the
last "terraform apply":
# kubernetes_manifest.elasticsearch["elasticsearch"] has changed
~ resource "kubernetes_manifest" "elasticsearch" {
~ object = {
~ metadata = {
~ annotations = null -> {
+ "eck.k8s.elastic.co/orchestration-hints" = jsonencode(
{
+ no_transient_settings = true
+ service_accounts = true
}
)
+ "elasticsearch.k8s.elastic.co/cluster-uuid" = "kq-OmZ83Qj259Q6sTjgA6A"
}
name = "elasticsearch"
# (14 unchanged elements hidden)
}
~ spec = {
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
~ {
name = "rack2"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
~ {
name = "rack5"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
]
# (13 unchanged elements hidden)
}
# (2 unchanged elements hidden)
}
# (2 unchanged attributes hidden)
# (1 unchanged block hidden)
}
...
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# kubernetes_manifest.elasticsearch["elasticsearch"] must be replaced
-/+ resource "kubernetes_manifest" "elasticsearch" {
- computed_fields = [
- "metadata.annotations",
- "spec.nodeSets.podTemplate.metadata.creationTimestamp",
] -> null
~ manifest = {
~ spec = {
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
~ spec = {
~ containers = [
~ {
name = "elasticsearch"
~ resources = {
~ requests = {
~ cpu = "2" -> "1"
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
...
Note that Terraform wants to replace this resource even though nothing of significance has changed with it.
Terraform file:
resource "kubernetes_manifest" "elasticsearch" {
depends_on = [
...
]
for_each = local.ELASTICSEARCH_CONFIG
manifest = {
apiVersion = local.ELASTICSEARCH_API_VERSION
kind = local.ELASTICSEARCH_KIND
metadata = {
name = each.key
namespace = var.ELASTICSEARCH_NAMESPACE
}
spec = {
version = var.ELASTICSEARCH_VERSION
updateStrategy = each.value.updateStrategy
volumeClaimDeletePolicy = each.value.volumeClaimDeletePolicy
secureSettings = local.ELASTICSEARCH_SECURE_SETTINGS
http = {
service = {
spec = {
selector = each.value.http.service.spec.selector
}
}
tls = {
certificate = {
secretName = format("%s%s", "elasticsearch-cert", var.ENVIRONMENT_SUB_NAME != null ? "-${var.ENVIRONMENT_SUB_NAME}" : "")
}
}
}
podDisruptionBudget = {}
nodeSets = [
for node in each.value.nodeSets : {
name = node.name
count = node.count
config = try(merge(node.config, each.value.baseNodeConfig), each.value.baseNodeConfig)
podTemplate = node.podTemplate
volumeClaimTemplates = try(node.volumeClaimTemplates, null)
}
]
}
}
field_manager {
force_conflicts = true
}
}
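For reference, the provider's computed_fields attribute defaults to metadata.annotations and metadata.labels and can be extended with additional server-populated paths. Below is a sketch of what that could look like for the paths appearing in the diff above; this is only a possible mitigation for the drift, not a confirmed fix for the forced replacement:

# Sketch: attach computed_fields to the existing resource above. The default
# is ["metadata.annotations", "metadata.labels"]; the extra entry mirrors the
# server-populated creationTimestamp path from the diff above.
resource "kubernetes_manifest" "elasticsearch" {
  for_each = local.ELASTICSEARCH_CONFIG

  # ... manifest and field_manager exactly as above ...

  computed_fields = [
    "metadata.annotations",
    "metadata.labels",
    "spec.nodeSets.podTemplate.metadata.creationTimestamp",
  ]
}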
Check if the CRD has a schema. If it doesn't, any change requires a replacement.
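For example, one way to check is to pull the CRD and look at whether each served version declares an openAPIV3Schema (kubectl get crd <name> -o yaml works too). Here is a sketch using the provider's kubernetes_resource data source, with the Elasticsearch CRD name from this thread:

# Sketch: fetch the CRD and inspect whether each served version declares an
# openAPIV3Schema. Per the discussion above, if a version has no schema,
# any change forces a replacement.
data "kubernetes_resource" "elasticsearch_crd" {
  api_version = "apiextensions.k8s.io/v1"
  kind        = "CustomResourceDefinition"

  metadata {
    name = "elasticsearches.elasticsearch.k8s.elastic.co"
  }
}

output "elasticsearch_crd_versions" {
  value = data.kubernetes_resource.elasticsearch_crd.object.spec.versions
}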
The CRD has a schema: https://raw.githubusercontent.com/elastic/cloud-on-k8s/main/config/crds/v1/bases/elasticsearch.k8s.elastic.co_elasticsearches.yaml
In the rest of the diff (where you've cut it off with ...), something should be marked with forces replacement. Can you paste the rest of the diff?
Below is the full output of the terraform plan command, but there is no mention of forces replacement.
Terraform Plan Output
terraform plan
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching ">= 2.0.0"...
- Installing hashicorp/kubernetes v2.13.1...
- Installed hashicorp/kubernetes v2.13.1 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
kubernetes_secret.active_directory_secure_settings["devtest"]: Refreshing state... [id=elastic-test/active-directory-secure-settings-devtest]
kubernetes_secret.ca_certificates_crt: Refreshing state... [id=elastic-test/tls-ca]
kubernetes_secret.s3_secure_settings["es_test_snapshots"]: Refreshing state... [id=elastic-test/s3-secure-settings-es-test-snapshots]
kubernetes_secret.watcher_encryption_secure_settings[0]: Refreshing state... [id=elastic-test/watcher-encryption-secure-settings]
kubernetes_manifest.elasticsearch_cert: Refreshing state...
kubernetes_manifest.elasticsearch_virtualservice["istio-system/elastic-test-domain-com"]: Refreshing state...
kubernetes_manifest.elasticsearch_virtualservice["istio-system/elastic-test-domain-com"]: Refreshing state...
kubernetes_manifest.elasticsearch_destinationrule[0]: Refreshing state...
kubernetes_manifest.elasticsearch: Refreshing state...
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# kubernetes_manifest.elasticsearch must be replaced
-/+ resource "kubernetes_manifest" "elasticsearch" {
~ manifest = {
~ spec = {
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
~ spec = {
~ containers = [
~ {
name = "elasticsearch"
~ resources = {
~ requests = {
~ cpu = "1" -> "2"
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
{
config = {
cluster = {
routing = {
allocation = {
awareness = {
attributes = "k8s_node_name,zone"
}
}
}
}
node = {
attr = {
zone = "rack2"
}
}
xpack = {
security = {
audit = {
enabled = true
}
authc = {
realms = {
active_directory = {
devtest = {
<snipped; but no diffs/changes marked here>
}
}
}
}
}
}
}
count = 1
name = "rack2"
podTemplate = {
spec = {
affinity = {
nodeAffinity = {
requiredDuringSchedulingIgnoredDuringExecution = {
nodeSelectorTerms = [
{
matchExpressions = [
{
key = "topology.kubernetes.io/zone"
operator = "In"
values = [
"rack2",
]
},
{
key = "elastic.domain.com/environment-test"
operator = "In"
values = [
"true",
]
},
]
},
]
}
}
podAntiAffinity = {
preferredDuringSchedulingIgnoredDuringExecution = [
{
podAffinityTerm = {
labelSelector = {
matchLabels = {
"elasticsearch.k8s.elastic.co/cluster-name" = "elasticsearch"
}
}
topologyKey = "kubernetes.io/hostname"
}
weight = 100
},
]
}
}
automountServiceAccountToken = true
containers = [
{
name = "elasticsearch"
resources = {
limits = {
cpu = "4"
memory = "8Gi"
}
requests = {
cpu = "1"
memory = "8Gi"
}
}
volumeMounts = [
{
mountPath = "/usr/share/elasticsearch/config/certs"
name = "tls-ca"
},
]
},
]
initContainers = [
{
command = [
"sh",
"-c",
"sysctl -w vm.max_map_count=262144",
]
name = "sysctl"
securityContext = {
privileged = true
}
},
]
volumes = [
{
name = "tls-ca"
secret = {
secretName = "tls-ca"
}
},
]
}
}
volumeClaimTemplates = [
{
metadata = {
name = "elasticsearch-data"
}
spec = {
accessModes = [
"ReadWriteOnce",
]
resources = {
requests = {
storage = "500Gi"
}
}
storageClassName = "openebs-hot"
}
},
]
},
# (1 unchanged element hidden)
]
# (6 unchanged elements hidden)
}
# (3 unchanged elements hidden)
}
~ object = {
~ metadata = {
~ annotations = {
- "eck.k8s.elastic.co/orchestration-hints" = jsonencode(
{
- no_transient_settings = true
- service_accounts = true
}
)
- "elasticsearch.k8s.elastic.co/cluster-uuid" = "kq-OmZ83Qj259Q6sTjgA6A"
} -> (known after apply)
~ clusterName = null -> (known after apply)
~ creationTimestamp = null -> (known after apply)
~ deletionGracePeriodSeconds = null -> (known after apply)
~ deletionTimestamp = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ generateName = null -> (known after apply)
~ generation = null -> (known after apply)
~ labels = null -> (known after apply)
~ managedFields = null -> (known after apply)
name = "elasticsearch"
~ ownerReferences = null -> (known after apply)
~ resourceVersion = null -> (known after apply)
~ selfLink = null -> (known after apply)
~ uid = null -> (known after apply)
# (1 unchanged element hidden)
}
~ spec = {
~ auth = {
~ fileRealm = null -> (known after apply)
~ roles = null -> (known after apply)
}
~ http = {
~ service = {
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
~ name = null -> (known after apply)
~ namespace = null -> (known after apply)
}
~ spec = {
~ allocateLoadBalancerNodePorts = null -> (known after apply)
~ clusterIP = null -> (known after apply)
~ clusterIPs = null -> (known after apply)
~ externalIPs = null -> (known after apply)
~ externalName = null -> (known after apply)
~ externalTrafficPolicy = null -> (known after apply)
~ healthCheckNodePort = null -> (known after apply)
~ internalTrafficPolicy = null -> (known after apply)
~ ipFamilies = null -> (known after apply)
~ ipFamilyPolicy = null -> (known after apply)
~ loadBalancerClass = null -> (known after apply)
~ loadBalancerIP = null -> (known after apply)
~ loadBalancerSourceRanges = null -> (known after apply)
~ ports = null -> (known after apply)
~ publishNotReadyAddresses = null -> (known after apply)
~ sessionAffinity = null -> (known after apply)
~ sessionAffinityConfig = {
~ clientIP = {
~ timeoutSeconds = null -> (known after apply)
}
}
~ type = null -> (known after apply)
# (1 unchanged element hidden)
}
}
~ tls = {
~ selfSignedCertificate = {
~ disabled = null -> (known after apply)
~ subjectAltNames = null -> (known after apply)
}
# (1 unchanged element hidden)
}
}
~ image = null -> (known after apply)
~ monitoring = {
~ logs = {
~ elasticsearchRefs = null -> (known after apply)
}
~ metrics = {
~ elasticsearchRefs = null -> (known after apply)
}
}
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
~ spec = {
~ containers = [
~ {
name = "elasticsearch"
~ resources = {
~ requests = {
~ cpu = "1" -> "2"
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
},
]
# (4 unchanged elements hidden)
}
}
~ volumeClaimTemplates = [
~ {
~ apiVersion = null -> (known after apply)
~ kind = null -> (known after apply)
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
name = "elasticsearch-data"
~ namespace = null -> (known after apply)
}
~ spec = {
~ dataSource = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ dataSourceRef = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ resources = {
~ limits = null -> (known after apply)
# (1 unchanged element hidden)
}
~ selector = {
~ matchExpressions = null -> (known after apply)
~ matchLabels = null -> (known after apply)
}
~ volumeMode = null -> (known after apply)
~ volumeName = null -> (known after apply)
# (2 unchanged elements hidden)
}
~ status = {
~ accessModes = null -> (known after apply)
~ allocatedResources = null -> (known after apply)
~ capacity = null -> (known after apply)
~ conditions = null -> (known after apply)
~ phase = null -> (known after apply)
~ resizeStatus = null -> (known after apply)
}
},
]
# (2 unchanged elements hidden)
},
~ {
name = "rack2"
~ volumeClaimTemplates = [
~ {
~ apiVersion = null -> (known after apply)
~ kind = null -> (known after apply)
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
name = "elasticsearch-data"
~ namespace = null -> (known after apply)
}
~ spec = {
~ dataSource = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ dataSourceRef = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ resources = {
~ limits = null -> (known after apply)
# (1 unchanged element hidden)
}
~ selector = {
~ matchExpressions = null -> (known after apply)
~ matchLabels = null -> (known after apply)
}
~ volumeMode = null -> (known after apply)
~ volumeName = null -> (known after apply)
# (2 unchanged elements hidden)
}
~ status = {
~ accessModes = null -> (known after apply)
~ allocatedResources = null -> (known after apply)
~ capacity = null -> (known after apply)
~ conditions = null -> (known after apply)
~ phase = null -> (known after apply)
~ resizeStatus = null -> (known after apply)
}
},
]
# (3 unchanged elements hidden)
},
~ {
name = "rack5"
~ volumeClaimTemplates = [
~ {
~ apiVersion = null -> (known after apply)
~ kind = null -> (known after apply)
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
name = "elasticsearch-data"
~ namespace = null -> (known after apply)
}
~ spec = {
~ dataSource = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ dataSourceRef = {
~ apiGroup = null -> (known after apply)
~ kind = null -> (known after apply)
~ name = null -> (known after apply)
}
~ resources = {
~ limits = null -> (known after apply)
# (1 unchanged element hidden)
}
~ selector = {
~ matchExpressions = null -> (known after apply)
~ matchLabels = null -> (known after apply)
}
~ volumeMode = null -> (known after apply)
~ volumeName = null -> (known after apply)
# (2 unchanged elements hidden)
}
~ status = {
~ accessModes = null -> (known after apply)
~ allocatedResources = null -> (known after apply)
~ capacity = null -> (known after apply)
~ conditions = null -> (known after apply)
~ phase = null -> (known after apply)
~ resizeStatus = null -> (known after apply)
}
},
]
# (3 unchanged elements hidden)
},
]
~ podDisruptionBudget = {
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
~ name = null -> (known after apply)
~ namespace = null -> (known after apply)
}
~ spec = {
~ maxUnavailable = null -> (known after apply)
~ minAvailable = null -> (known after apply)
~ selector = {
~ matchExpressions = null -> (known after apply)
~ matchLabels = null -> (known after apply)
}
}
}
~ remoteClusters = null -> (known after apply)
~ revisionHistoryLimit = null -> (known after apply)
~ secureSettings = [
~ {
~ entries = null -> (known after apply)
# (1 unchanged element hidden)
},
~ {
~ entries = null -> (known after apply)
# (1 unchanged element hidden)
},
~ {
~ entries = null -> (known after apply)
# (1 unchanged element hidden)
},
]
~ serviceAccountName = null -> (known after apply)
~ transport = {
~ service = {
~ metadata = {
~ annotations = null -> (known after apply)
~ finalizers = null -> (known after apply)
~ labels = null -> (known after apply)
~ name = null -> (known after apply)
~ namespace = null -> (known after apply)
}
~ spec = {
~ allocateLoadBalancerNodePorts = null -> (known after apply)
~ clusterIP = null -> (known after apply)
~ clusterIPs = null -> (known after apply)
~ externalIPs = null -> (known after apply)
~ externalName = null -> (known after apply)
~ externalTrafficPolicy = null -> (known after apply)
~ healthCheckNodePort = null -> (known after apply)
~ internalTrafficPolicy = null -> (known after apply)
~ ipFamilies = null -> (known after apply)
~ ipFamilyPolicy = null -> (known after apply)
~ loadBalancerClass = null -> (known after apply)
~ loadBalancerIP = null -> (known after apply)
~ loadBalancerSourceRanges = null -> (known after apply)
~ ports = null -> (known after apply)
~ publishNotReadyAddresses = null -> (known after apply)
~ selector = null -> (known after apply)
~ sessionAffinity = null -> (known after apply)
~ sessionAffinityConfig = {
~ clientIP = {
~ timeoutSeconds = null -> (known after apply)
}
}
~ type = null -> (known after apply)
}
}
~ tls = {
~ certificate = {
~ secretName = null -> (known after apply)
}
~ otherNameSuffix = null -> (known after apply)
~ subjectAltNames = null -> (known after apply)
}
}
# (3 unchanged elements hidden)
}
# (2 unchanged elements hidden)
}
# (1 unchanged attribute hidden)
# (1 unchanged block hidden)
}
Plan: 1 to add, 0 to change, 1 to destroy.
Just a quick update here: I did some further testing, and it seems like any time there is a change under manifest.spec.(daemonset|deployment).podTemplate, a forces replacement happens. Would anyone know where in the CRD it would tell Terraform that changes here require a replacement?
It's because podTemplate has x-kubernetes-preserve-unknown-fields: true, which effectively makes it schemaless.
Check if the CRD has a schema. If it doesn't, any change requires a replacement.
Is this still the case? Is there any workaround? I'm really at a loss for how to import and manage certain core types that cannot be destroyed and have schemaless sub-objects.
This should be marked as a duplicate of #1928
Here is the definition of the MachineSet CRD; you can see that it uses x-kubernetes-preserve-unknown-fields:
https://github.com/openshift/machine-api-operator/blob/master/install/0000_30_machine-api-operator_03_machineset.crd.yaml#L281-L285