gcp-compute-persistent-disk-csi-driver
Driver cannot parse tag values containing a comma
The driver cannot parse a tag value containing a comma:
$ oc -n openshift-cluster-csi-drivers get deployment.apps/gcp-pd-csi-driver-controller -o json | jq '.spec.template.spec.containers[0].args[-1]'
"--extra-tags=parent1/key1/value,with,coma/parent2/key2/value2"
$ oc -n openshift-cluster-csi-drivers logs pod/gcp-pd-csi-driver-controller-654845fb89-dqlrp
...
I0815 11:34:35.519878 1 main.go:125] Driver vendor version v1.1.0-rc1-1347-gc770b412-dirty
F0815 11:34:35.524193 1 main.go:166] Bad extra tags: tag "with" is invalid, correct format: 'parent_id1/key1/value1,parent_id2/key2/value2'
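The failure is consistent with the flag value being split on every comma before the per-tag validation runs, so the text between the two commas ends up treated as a standalone tag. A minimal sketch of that behaviour (not the driver's actual code, just an illustration of the split):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	extraTags := "parent1/key1/value,with,coma/parent2/key2/value2"

	// Splitting on every comma breaks the single value "value,with,coma"
	// into three fragments; the middle fragment "with" has no
	// parentID/key/value structure, which matches the `tag "with" is
	// invalid` error above.
	for _, tag := range strings.Split(extraTags, ",") {
		fmt.Printf("%q\n", tag)
	}
	// Output:
	// "parent1/key1/value"
	// "with"
	// "coma/parent2/key2/value2"
}
```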
Escaping does not work either:
$ oc -n openshift-cluster-csi-drivers get deployment.apps/gcp-pd-csi-driver-controller -o json | jq '.spec.template.spec.containers[0].args[-1]'
"--extra-tags=parent1/key1/value\\,with\\,coma/parent2/key2/value2"
$ oc -n openshift-cluster-csi-drivers logs pod/gcp-pd-csi-driver-controller-78495f87c7-qjvj5
Defaulted container "csi-driver" out of: csi-driver, csi-provisioner, provisioner-kube-rbac-proxy, csi-attacher, attacher-kube-rbac-proxy, csi-resizer, resizer-kube-rbac-proxy, csi-snapshotter, snapshotter-kube-rbac-proxy, csi-liveness-probe
I0815 11:41:27.848600 1 main.go:111] Operating compute environment set to: production and computeEndpoint is set to: <nil>
I0815 11:41:27.848714 1 main.go:120] Sys info: NumCPU: 6 MAXPROC: 1
I0815 11:41:27.848721 1 main.go:125] Driver vendor version v1.1.0-rc1-1347-gc770b412-dirty
F0815 11:41:27.851228 1 main.go:166] Bad extra tags: tag value "value\\" for tag "parent1/key1/value\\" is invalid. Tag value can have a maximum of 63 characters and cannot be empty. Tag value must begin and end with an alphanumeric character, and must contain only uppercase, lowercase alphanumeric characters, and the following special characters `_-.@%=+:,*#&(){}[]` and spaces
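Here the rejection comes from the per-value character check rather than from the splitting: backslash is not in the allowed set quoted in the error, so the escape character itself fails validation before it could ever be interpreted. A rough approximation of such a check, reconstructed from the error text (this is an assumption, not the driver's actual regexp):

```go
package main

import (
	"fmt"
	"regexp"
)

// Assumed approximation of the value rule quoted in the error: 1-63
// characters, begins and ends with an alphanumeric character, and the
// middle may contain the listed special characters and spaces.
// Backslash is not in the set, so an escaped comma ("\,") can never pass.
var tagValueRe = regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9 _\-.@%=+:,*#&(){}\[\]]{0,61}[a-zA-Z0-9])?$`)

func main() {
	for _, v := range []string{`value\`, "value,with,comma"} {
		fmt.Printf("%-20q valid=%v\n", v, tagValueRe.MatchString(v))
	}
	// `value\` is rejected (backslash is not allowed), while
	// "value,with,comma" would pass the value check itself:
	// commas are explicitly in the allowed set.
}
```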
According to the GCP documentation, commas are allowed in tag values: https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing#adding_tag_values
OpenShift recently added support for allowing users to set custom GCP labels and tags during installation: https://github.com/openshift/enhancements/blob/master/enhancements/api-review/gcp_user_defined_labels.md#apply-user-defined-labels-to-all-gcp-resources-created-by-openshift
The allowed values documented there match the GCP documentation: https://docs.openshift.com/container-platform/4.16/installing/installing_gcp/installing-gcp-customizations.html#installing-gcp-cluster-creation_installing-gcp-customizations
So OpenShift does not prevent commas in tag values either, and if users attempt to use them, the driver fails. Either escaping support needs to be added to the driver, or commas must be disallowed in tag values and the documentation updated accordingly.
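If the escaping route were chosen, the flag parser would need to split only on unescaped commas and then drop the escape characters. A possible sketch of such a splitter (a hypothetical helper, not an existing function in the driver):

```go
package main

import (
	"fmt"
	"strings"
)

// splitEscapedTags splits an --extra-tags style string on commas, treating
// "\," as a literal comma inside a tag value. Hypothetical helper for
// illustration only.
func splitEscapedTags(s string) []string {
	var tags []string
	var cur strings.Builder
	escaped := false
	for _, r := range s {
		switch {
		case escaped:
			cur.WriteRune(r)
			escaped = false
		case r == '\\':
			escaped = true
		case r == ',':
			tags = append(tags, cur.String())
			cur.Reset()
		default:
			cur.WriteRune(r)
		}
	}
	tags = append(tags, cur.String())
	return tags
}

func main() {
	fmt.Println(splitEscapedTags(`parent1/key1/value\,with\,comma,parent2/key2/value2`))
	// [parent1/key1/value,with,comma parent2/key2/value2]
}
```

The per-value validation would also have to accept (or strip) the escape character for this to work end to end, since the current check rejects backslashes outright.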
NOTE: This can be reproduced easily by creating a tag in GCP manually, passing it to the driver's --extra-tags arg, and then dynamically provisioning a volume.
cc @mattcary @sunnylovestiramisu
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
It's not just commas: extra slashes will also cause problems, and the validation regexp doesn't allow for the UTF-8/Unicode characters described at https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing#creating_tag.
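For the character-set part, Go's regexp package supports Unicode classes, so the value check could accept non-ASCII letters and digits while keeping the structural constraints. This is a sketch only, assuming the GCP rules linked above do permit such characters; the pattern is illustrative, not the driver's current one:

```go
package main

import (
	"fmt"
	"regexp"
)

// \p{L} matches any Unicode letter and \p{N} any Unicode number, so
// non-ASCII alphanumerics are accepted alongside the listed specials.
var unicodeValueRe = regexp.MustCompile(`^[\p{L}\p{N}]([\p{L}\p{N} _\-.@%=+:,*#&(){}\[\]]{0,61}[\p{L}\p{N}])?$`)

func main() {
	for _, v := range []string{"wert,mit,Umlaut-ü", "значение"} {
		fmt.Printf("%q valid=%v\n", v, unicodeValueRe.MatchString(v))
	}
	// Both values match here, whereas an ASCII-only character class
	// would reject them.
}
```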
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.