don't remove empty `spec.auth.fileRealm` array from CRD
Proposal
In order to simplify `kubectl patch --type json` operations, I propose preserving empty array definitions for `fileRealm` and `roles` by removing their `omitempty` annotations (https://github.com/elastic/cloud-on-k8s/blob/032267e5ef7c505c7b5283d91f6a17791ab4919c/pkg/apis/elasticsearch/v1/elasticsearch_types.go#L237-L239).
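For illustration, here is a minimal sketch of why `omitempty` causes this, using simplified stand-in structs rather than the real ECK types (the actual fields in `elasticsearch_types.go` reference secret sources, not strings): with Go's `encoding/json`, a zero-length slice tagged `omitempty` is dropped entirely during marshalling, so an empty `fileRealm: []` never survives a round trip.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the auth fields, only to show the tag's effect.
type AuthWithOmitEmpty struct {
	Roles     []string `json:"roles,omitempty"`
	FileRealm []string `json:"fileRealm,omitempty"`
}

type AuthWithoutOmitEmpty struct {
	Roles     []string `json:"roles"`
	FileRealm []string `json:"fileRealm"`
}

func main() {
	empty := []string{}

	a, _ := json.Marshal(AuthWithOmitEmpty{Roles: empty, FileRealm: empty})
	fmt.Println(string(a)) // {} -- empty arrays are dropped

	b, _ := json.Marshal(AuthWithoutOmitEmpty{Roles: empty, FileRealm: empty})
	fmt.Println(string(b)) // {"roles":[],"fileRealm":[]} -- arrays preserved
}
```

Dropping the `,omitempty` suffix would keep the empty arrays in the serialized object, which gives `kubectl patch --type json` a stable append target.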
Bug Report
What did you do?
Defined a cluster including:

```yaml
spec:
  auth:
    roles: []
    fileRealm: []
```
and ran:
```sh
kubectl patch es/$1 --type json --patch "$(cat << EOF
[
  {
    "op": "add",
    "path": "/spec/auth/fileRealm/-",
    "value": {
      "secretName": "$2-user"
    }
  },
  {
    "op": "add",
    "path": "/spec/auth/roles/-",
    "value": {
      "secretName": "$2-role"
    }
  }
]
EOF
)"
```
What did you expect to see?
I expected a first entry to be added to each array.
What did you see instead? Under which circumstances?
Instead, I discovered that the CRD strips the empty arrays from the object definition:

```sh
# kubectl get -o json es cluster1
"spec": {
  "auth": {},
```
- ECK version: 2.4.0
- Kubernetes information:
  - Cloud: GKE 1.22 (Autopilot)

```sh
kubectl version --short
Client Version: v1.23.5
Server Version: v1.22.11-gke.400
```
- Resource definition:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: cluster1
spec:
  version: 8.4.0
  nodeSets:
  - name: default
    count: 4
  auth:
    roles: []
    fileRealm: []
```
- Logs: the only message I get is from `kubectl patch`: `The request is invalid`
For now, the only solution I found is using `jq` and `kubectl replace`:

```sh
kubectl get "es/$1" -o json \
  | jq --arg secret "$2-user" '.spec.auth.fileRealm += [{"secretName": $secret}]' \
  | jq --arg secret "$2-role" '.spec.auth.roles += [{"secretName": $secret}]' \
  | kubectl replace -f -
```

This works because jq's `+=` creates the array when it is missing, and `kubectl replace` submits the whole object instead of patching a path that may not exist.
---

Hello,
I can't reproduce the issue you describe.
```sh
> kubectl version --short
Client Version: v1.23.10
Server Version: v1.22.13-gke.1000

> kubectl get es hey -o json | jq '.spec.auth'
{
  "fileRealm": [],
  "roles": []
}
```
```sh
> kubectl patch es/hey --type json --patch "$(cat << EOF
[
  {
    "op": "add",
    "path": "/spec/auth/fileRealm/-",
    "value": {
      "secretName": "test-user"
    }
  },
  {
    "op": "add",
    "path": "/spec/auth/roles/-",
    "value": {
      "secretName": "test-role"
    }
  }
]
EOF
)"
elasticsearch.elasticsearch.k8s.elastic.co/hey patched

> kubectl get es hey -o json | jq '.spec.auth'
{
  "fileRealm": [
    {
      "secretName": "test-user"
    }
  ],
  "roles": [
    {
      "secretName": "test-role"
    }
  ]
}
```
Could you check again on your side?
---

Seems like it only happens if you didn't edit the `fileRealm` or `roles` array already. If I add an entry and then remove it, the empty arrays are correctly kept as is. But on a fresh cluster, I still get:

```sh
kubectl get es cluster4 -o json | jq '.spec.auth'
{}
```
---

I don't add/remove an entry or edit the `fileRealm`/`roles` array after the initial creation. I just apply this manifest:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: test
spec:
  version: 8.4.2
  auth:
    roles: []
    fileRealm: []
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
```
Then I execute `kubectl patch` and the `fileRealm`/`roles` fields are populated.

With `auth: {}` in the manifest, the `kubectl patch` fails with `The request is invalid`, which is correct because the paths we want to patch don't exist.
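For reference, that restriction comes from RFC 6902 itself: an `add` to an array append path (`/-`) requires the array to already exist; it cannot create the array. A minimal sketch of both cases, assuming the `github.com/evanphx/json-patch` library is available (the printed error wording is the library's, not the API server's verbatim response):

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// RFC 6902 "add" with a "/-" path appends to an existing array.
	patch, err := jsonpatch.DecodePatch([]byte(
		`[{"op": "add", "path": "/spec/auth/fileRealm/-", "value": {"secretName": "test-user"}}]`))
	if err != nil {
		panic(err)
	}

	// Fails: /spec/auth/fileRealm does not exist under "auth": {}.
	if _, err := patch.Apply([]byte(`{"spec": {"auth": {}}}`)); err != nil {
		fmt.Println("empty auth object:", err)
	}

	// Succeeds: the empty array is a valid append target.
	out, err := patch.Apply([]byte(`{"spec": {"auth": {"fileRealm": []}}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println("empty array kept:", string(out))
}
```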
---

Weird! I noticed we don't have exactly the same GKE/Kubernetes server versions; maybe it comes from that. Thanks for trying to reproduce, though! I'll keep this issue updated if I find something.
---

Closing this due to inactivity. Feel free to reopen if needed.