kubeconform
Couldn't Parse the Correct `{{ .ResourceKind }}` Variable Value
Hi all,
When I tried to manage my CRDs and validate them using Kubeconform, I ran into a problem: the Kind parsed from my YAML file did not match the newly generated schema file.
Process
- I dumped all my CRDs from my Cluster
- Downloaded the Python script openapi2jsonschema (from https://raw.githubusercontent.com/yannh/kubeconform/master/scripts/openapi2jsonschema.py) to do the conversion work
- Validated with
kubeconform -schema-location default -schema-location '{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' <crd-name-file.yaml>
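As a quick sanity check of the naming convention that template implies: the schema filename is the lowercased kind joined to the API version with an underscore. A minimal sketch (the KafkaUser values here are illustrative, not taken from this run):

```shell
# kubeconform fills {{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json with
# the manifest's kind (lowercased) and its API version.
kind="KafkaUser"        # illustrative value
version="v1alpha1"      # illustrative value
schema_file="$(printf '%s' "${kind}" | tr '[:upper:]' '[:lower:]')_${version}.json"
echo "${schema_file}"   # kafkauser_v1alpha1.json
```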
Input
Here are some of my CRD schemas ready to be used, named with the format {crd-kind}_{apiversion}.json:
clusterissuer_v1beta1.json
clusterissuer_v1.json
kafkauser_v1alpha1.json
kafkauser_v1beta1.json
...
My Kubeconform version
kubeconform -v
v0.4.13
Expected
With this, I expected all my YAML files to be converted and validated by kubeconform.
Output
kubeconform -summary -output json -schema-location default -schema-location './schemas/{{ .ResourceKind }}_{{ .ResourceAPIVersion }}.json' ./input/prom-crd.yaml
{
  "resources": [
    {
      "filename": "./input/prom-crd.yaml",
      "kind": "CustomResourceDefinition",
      "name": "prometheuses.monitoring.coreos.com",
      "version": "apiextensions.k8s.io/v1",
      "status": "statusError",
      "msg": "could not find schema for CustomResourceDefinition"
    }
  ],
  "summary": {
    "valid": 0,
    "invalid": 0,
    "errors": 1,
    "skipped": 0
  }
}
With this file layout:
.
├── input
│ └── prom-crd.yaml
├── schemas
│ └── prometheus_v1.json
└── script-crd.sh
2 directories, 3 files
And the following CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  conversion:
    strategy: None
  group: monitoring.coreos.com
  names:
    categories:
    - prometheus-operator
    kind: Prometheus
    listKind: PrometheusList
    plural: prometheuses
    singular: prometheus
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: The version of Prometheus
      jsonPath: .spec.version
      name: Version
      type: string
    - description: The desired replicas number of Prometheuses
      jsonPath: .spec.replicas
      name: Replicas
      type: integer
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1
...
Troubleshooting
As the error message shows, I found that the {{ .ResourceKind }} variable contains the value CustomResourceDefinition, while I was expecting it to contain prometheus.
I verified that by passing a static, correct ResourceKind:
kubeconform -summary -output json -schema-location default -schema-location './schemas/prometheus_{{ .ResourceAPIVersion }}.json' ./input/prom-crd.yaml
{
  "resources": [
    {
      "filename": "./input/prom-crd.yaml",
      "kind": "CustomResourceDefinition",
      "name": "prometheuses.monitoring.coreos.com",
      "version": "apiextensions.k8s.io/v1",
      "status": "statusInvalid",
      "msg": "For field spec: Additional property group is not allowed - For field spec: Additional property names is not allowed - For field spec: Additional property scope is not allowed - For field spec: Additional property versions is not allowed - For field spec: Additional property conversion is not allowed - For field status: availableReplicas is required - For field status: paused is required - For field status: replicas is required - For field status: unavailableReplicas is required - For field status: updatedReplicas is required - For field status: Additional property storedVersions is not allowed - For field status: Additional property acceptedNames is not allowed - For field status: Additional property conditions is not allowed"
    }
  ],
  "summary": {
    "valid": 0,
    "invalid": 1,
    "errors": 0,
    "skipped": 0
  }
}
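The behavior above can be illustrated without kubeconform at all: the template is filled from the manifest's own top-level kind, not from spec.names.kind. A minimal, yq-free sketch (the file content is a trimmed stand-in for the real CRD):

```shell
# A CRD manifest's own kind is CustomResourceDefinition; the kind it *defines*
# lives under .spec.names.kind. kubeconform's {{ .ResourceKind }} is filled
# from the former, which is why the lookup never reaches prometheus_v1.json.
cat > /tmp/crd-sketch.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  names:
    kind: Prometheus
EOF
awk '/^kind:/ {print tolower($2)}' /tmp/crd-sketch.yaml   # customresourcedefinition
awk '/^    kind:/ {print $2}' /tmp/crd-sketch.yaml        # Prometheus
```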
Thank you !!
For everyone facing the same issue, I wrote this script to automate the work (it isn't optimized, but it could help):
for file in ./input/*.yaml; do
  check=$(yq e '.items[0]' "${file}")
  if [ "${check}" == "null" ]; then
    # single CRD document
    crd_kind=$(yq e '.spec.names.kind' "${file}" | tr '[:upper:]' '[:lower:]')
    crd_apiversion=$(yq e '.spec.versions[0].name' "${file}" | tr '[:upper:]' '[:lower:]')
  else
    # List dump: read the first item
    crd_kind=$(yq e '.items[0].spec.names.kind' "${file}" | tr '[:upper:]' '[:lower:]')
    crd_apiversion=$(yq e '.items[0].spec.versions[0].name' "${file}" | tr '[:upper:]' '[:lower:]')
  fi
  path="./schemas/${crd_kind}_${crd_apiversion}.json"
  echo "${path}"
  kubeconform -summary -output json -ignore-missing-schemas -schema-location default -schema-location "${path}" "${file}"
done
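The .items[0] check above distinguishes a single exported CRD from a kubectl List dump. A yq-free illustration of that branch, with made-up file contents:

```shell
# A `kubectl get crds -o yaml` dump wraps everything in a List with an
# .items array; a single exported CRD has spec at the top level instead.
# The script's if/else branches on exactly that distinction.
cat > /tmp/single.yaml <<'EOF'
kind: CustomResourceDefinition
spec: {}
EOF
cat > /tmp/list.yaml <<'EOF'
kind: List
items: []
EOF
for f in /tmp/single.yaml /tmp/list.yaml; do
  if grep -q '^items:' "$f"; then
    echo "$f: list dump"
  else
    echo "$f: single CRD"
  fi
done
```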
Hope this can help!! Still looking for a true solution to the {{ .ResourceKind }} variable.
OK, now that you wrote all the details, I'm able to understand why this is not working for you.
First, let's make sure that we are using the same jargon:
- CRD - the CustomResourceDefinition itself (i.e. kind: CustomResourceDefinition)
- CR - the CustomResource that the CRD defines (e.g. kind: KafkaUser)
Your CRs are validated correctly; this is why I wasn't able to reproduce the issue that you described here.
So just to make sure we are aligned - the issue you're having is that you're not able to validate the CRD that defines the Prometheus CR.
@royhadad wrote here why this is happening and suggested a workaround you can use. Because this workaround was already merged into datree, you can also use it to make your script shorter:
datree test ./input/prom-crd.yaml --ignore-missing-schemas --schema-location './schemas/prometheus_{{ .ResourceAPIVersion }}.json'
Specifying the schema locations as local paths works!
example:
kubeconform -summary -output json \
  -schema-location ~/work/github/yannh/kubernetes-json-schema/v1.24.0-local/{{.ResourceKind}}{{.KindSuffix}}.json \
  -schema-location ~/work/github/datreeio/CRDs-catalog/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json \
  -schema-location 'tmp/jsonschema/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' \
  your_yaml_manifests.yaml
{
  "resources": [],
  "summary": {
    "valid": 29,
    "invalid": 0,
    "errors": 0,
    "skipped": 0
  }
}