kpt seems unable to manage unnamed "lists"
When trying to create a kpt package of kube-prometheus for a GKE cluster running 1.18.x, it seems that the manifests:
- grafana-dashboardDefinitions.yaml
- prometheus-roleBindingSpecificNamespaces.yaml
- prometheus-roleSpecificNamespaces.yaml
are in fact list types: ConfigMapList, RoleBindingList, and RoleList respectively. When trying to perform a kpt live apply, the following message gets printed out:
$ kpt live apply dev/kube-prometheus --reconcile-timeout=20m --output=table
error: empty name for object
Of course I'm able to apply the manifests manually using kubectl (I did, and it works), but I would prefer using kpt.
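For context, a List manifest like grafana-dashboardDefinitions.yaml has roughly the shape sketched below (a minimal sketch; the ConfigMap name is one of the dashboards from the output further down, and the data is omitted). Note that the top-level ConfigMapList wrapper has no metadata.name of its own; only the items inside it are named, which appears to be the "empty name" kpt is reporting.
apiVersion: v1
kind: ConfigMapList
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-apiserver # illustrative; one of the dashboards below
    namespace: monitoring
  data: {} # dashboard JSON omitted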
Here is a more detailed example:
$ k delete -f dev/kube-prometheus/grafana-dashboardDefinitions.yaml
configmap "grafana-dashboard-apiserver" deleted
configmap "grafana-dashboard-cluster-total" deleted
configmap "grafana-dashboard-controller-manager" deleted
configmap "grafana-dashboard-k8s-resources-cluster" deleted
configmap "grafana-dashboard-k8s-resources-namespace" deleted
configmap "grafana-dashboard-k8s-resources-node" deleted
configmap "grafana-dashboard-k8s-resources-pod" deleted
configmap "grafana-dashboard-k8s-resources-workload" deleted
configmap "grafana-dashboard-k8s-resources-workloads-namespace" deleted
configmap "grafana-dashboard-kubelet" deleted
configmap "grafana-dashboard-namespace-by-pod" deleted
configmap "grafana-dashboard-namespace-by-workload" deleted
configmap "grafana-dashboard-node-cluster-rsrc-use" deleted
configmap "grafana-dashboard-node-rsrc-use" deleted
configmap "grafana-dashboard-nodes" deleted
configmap "grafana-dashboard-persistentvolumesusage" deleted
configmap "grafana-dashboard-pod-total" deleted
configmap "grafana-dashboard-prometheus-remote-write" deleted
configmap "grafana-dashboard-prometheus" deleted
configmap "grafana-dashboard-proxy" deleted
configmap "grafana-dashboard-scheduler" deleted
configmap "grafana-dashboard-statefulset" deleted
configmap "grafana-dashboard-workload-total" deleted
$ k delete -f dev/kube-prometheus/prometheus-roleBindingSpecificNamespaces.yaml
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
$ k delete -f dev/kube-prometheus/prometheus-roleSpecificNamespaces.yaml
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
$ k delete cm inventory-22934648
configmap "inventory-22934648" deleted
$ kpt live apply dev/kube-prometheus --reconcile-timeout=20m
namespace/monitoring unchanged
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
serviceaccount/alertmanager-main unchanged
serviceaccount/grafana unchanged
serviceaccount/kube-state-metrics unchanged
serviceaccount/node-exporter unchanged
serviceaccount/prometheus-adapter unchanged
serviceaccount/prometheus-k8s unchanged
serviceaccount/prometheus-operator unchanged
role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator unchanged
configmap/adapter-config unchanged
configmap/grafana-dashboards unchanged
secret/alertmanager-main configured
secret/grafana-datasources unchanged
service/alertmanager-main unchanged
service/grafana unchanged
service/kube-state-metrics unchanged
service/node-exporter unchanged
service/prometheus-adapter unchanged
service/prometheus-k8s unchanged
service/prometheus-operator unchanged
deployment.apps/grafana configured
deployment.apps/kube-state-metrics unchanged
deployment.apps/prometheus-adapter configured
deployment.apps/prometheus-operator unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
alertmanager.monitoring.coreos.com/main unchanged
daemonset.apps/node-exporter configured
prometheus.monitoring.coreos.com/k8s unchanged
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules unchanged
servicemonitor.monitoring.coreos.com/alertmanager unchanged
servicemonitor.monitoring.coreos.com/coredns unchanged
servicemonitor.monitoring.coreos.com/grafana unchanged
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
servicemonitor.monitoring.coreos.com/prometheus unchanged
servicemonitor.monitoring.coreos.com/prometheus-adapter unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
error when retrieving current configuration of:
Resource: "/v1, Resource=configmaplists", GroupVersionKind: "/v1, Kind=ConfigMapList"
Name: "", Namespace: "monitoring"
from server for: "grafana-dashboardDefinitions.yaml": resource name may not be empty
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindinglists", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBindingList"
Name: "", Namespace: "monitoring"
from server for: "prometheus-roleBindingSpecificNamespaces.yaml": resource name may not be empty
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=rolelists", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleList"
Name: "", Namespace: "monitoring"
from server for: "prometheus-roleSpecificNamespaces.yaml": resource name may not be empty
Of course, according to the official API docs:
- RoleList is read-only in rbac.authorization.k8s.io/v1
- RoleBindingList is read-only in rbac.authorization.k8s.io/v1
- ConfigMapList is read-only in v1
However, since kubectl seems to support applying such manifests, is there a way for kpt to do the same?
Thanks in advance
Just as a note: I've been using kustomize for some specific patches, and running
$ kustomize build dev/kube-prometheus | kpt live apply --reconcile-timeout=2m --output table
seems to work fine :)
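For completeness, a minimal kustomization.yaml for this layout might look like the sketch below; the file names are the ones from the package above, and everything else (including the omitted patches) is an assumption. This presumably works because kustomize build flattens List resources into individual named objects before kpt ever sees them.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- grafana-dashboardDefinitions.yaml
- prometheus-roleBindingSpecificNamespaces.yaml
- prometheus-roleSpecificNamespaces.yaml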
Thank you very much for providing exact steps, your YAML files, and command output. Very, very cool. Glad you got a workaround for now. I will see if we can find someone to take a look, but since you can keep making progress we'll set it up as P1/P2 (instead of P0).
@seans3 can you take a look since you own this area?
@neuromantik33 Can you please show the beginning of the ConfigMapList config YAML? What seems odd in your output is that the "Name" (which is a mandatory field) seems to be missing (at least kpt thinks it's missing).
The raw YAML that is having issues is located here. However, it does seem to have a name:
  metadata:
    labels:
      app.kubernetes.io/component: grafana
      app.kubernetes.io/name: grafana
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 7.3.7
    name: grafana-dashboard-workload-total
    namespace: monitoring
kind: ConfigMapList
@neuromantik33 We intend to figure out why ConfigMapList is having issues in kpt. Here is more context:
- ConfigMapList and other List resources are never stored in the cluster, so when we try to apply a List we get the error (empty name). The cluster does not perform transactions over multiple resources; these List types are usually used to return a group of objects. Other tools handle these Lists transparently by applying the resources within the List one by one. kpt needs to add this List handling functionality.
- Transforming a ConfigMapList into a plain list of resources (without the ConfigMapList wrapper) separated by --- can currently be used as a workaround; see the sketch after this list.
- If possible, breaking each resource out into its own file (without the ConfigMapList) would also be a workaround.
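To illustrate the first workaround, here is a minimal sketch of a List unwrapped into plain documents separated by ---; the names are taken from the delete output above, and the data is omitted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-apiserver
  namespace: monitoring
data: {} # dashboard JSON omitted
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-cluster-total
  namespace: monitoring
data: {} # dashboard JSON omitted
Each document is now a named object, so kpt can apply and track each one individually.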
Leaving open at P2 until we correctly add List handling.
I thought we had handling to unwrap lists when we read files/streams. But looking at it now, the functionality for unwrapping lists in the kyaml library is not sufficient to handle this situation: https://github.com/kubernetes-sigs/kustomize/blob/c9e7f627fe8f75bc60ea76c4974e8d7eada752ec/kyaml/kio/byteio_reader.go#L176