gcp-compute-persistent-disk-csi-driver
Support `--version` flag
/sig storage /kind feature
What would you like to be added:
It is idiomatic and a widespread practice for a component/binary to have a `--version` flag:
$ docker run k8s.gcr.io/kube-apiserver:v1.18.6 kube-apiserver --version
Kubernetes v1.18.6
$ docker run k8s.gcr.io/kube-apiserver:v1.18.6 kube-apiserver --version=raw
version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$ docker run coredns/coredns:1.7.0 --version
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
$ docker run mcr.microsoft.com/k8s/csi/azuredisk-csi:v0.8.0 --version
Build Date: "2020-08-10T02:31:06Z"
Compiler: gc
Driver Name: disk.csi.azure.com
Driver Version: v0.8.0
Git Commit: 37a0784fbee9a141fc1bed2d26a1196d664f5d6b
Go Version: go1.13.10
Platform: linux/amd64
Topology Key: topology.disk.csi.azure.com/zone
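For context, the metadata shown in these examples (Git Commit, Build Date, Go Version, Platform) is normally compiled into the binary at build time via Go's `-ldflags "-X ..."` mechanism. Below is a minimal sketch of that common pattern; the package path and variable names are hypothetical, not the driver's actual layout:

```go
// Package version holds build metadata stamped in at compile time, e.g.:
//   go build -ldflags "-X example.com/driver/pkg/version.version=v1.3.4 \
//     -X example.com/driver/pkg/version.gitCommit=$(git rev-parse HEAD) \
//     -X example.com/driver/pkg/version.buildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
package version

import (
	"fmt"
	"runtime"
)

// Fallback values for builds that do not pass -ldflags.
var (
	version   = "unknown"
	gitCommit = "unknown"
	buildDate = "unknown"
)

// String renders the stamped metadata in the same spirit as the examples above.
func String() string {
	return fmt.Sprintf(
		"Driver Version: %s\nGit Commit: %s\nBuild Date: %s\nGo Version: %s\nPlatform: %s/%s",
		version, gitCommit, buildDate, runtime.Version(), runtime.GOOS, runtime.GOARCH)
}
```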
Currently gcp-compute-persistent-disk-csi-driver does not support such a flag:
$ docker run k8s.gcr.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver:v1.3.4 --version
flag provided but not defined: -version
Usage of /gce-pd-csi-driver:
-add_dir_header
If true, adds the file directory to the header
-alsologtostderr
log to standard error as well as files
-cloud-config string
Path to GCE cloud provider config
-endpoint string
CSI endpoint (default "unix:/tmp/csi.sock")
-extra-labels string
Extra labels to attach to each PD created. It is a comma separated list of key value pairs like '<key1>=<value1>,<key2>=<value2>'. See https://cloud.google.com/compute/docs/labeling-resources for details
-http-endpoint :8080
The TCP network address where the prometheus metrics endpoint will listen (example: :8080). The default is empty string, which means metrics endpoint is disabled.
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-log_file string
If non-empty, use this log file
-log_file_max_size uint
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
-logtostderr
log to standard error instead of files (default true)
-metrics-path /metrics
The HTTP path where prometheus metrics will be exposed. Default is /metrics. (default "/metrics")
-run-controller-service
If set to false then the CSI driver does not activate its controller service (default: true) (default true)
-run-node-service
If set to false then the CSI driver does not activate its node service (default: true) (default true)
-skip_headers
If true, avoid header prefixes in the log messages
-skip_log_headers
If true, avoid headers when opening log files
-stderrthreshold value
logs at or above this threshold go to stderr (default 2)
-v value
number for the log level verbosity
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
Why is this needed:
- Sometimes an image SHA is used instead of a human-readable tag; in that case the `--version` flag is useful for understanding which component version is behind the image SHA.
- The version flag can also provide more metadata about the component, such as GitCommit, BuildDate, GoVersion, and Platform.
k8s.io/component-base has useful packages (verflag, version) that can be used for this purpose (these packages are used by the core K8s components as well).
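For illustration, a minimal sketch of how the verflag package could be wired in. This assumes the driver's flag parsing is switched to (or wrapped by) spf13/pflag, which verflag expects:

```go
package main

import (
	"github.com/spf13/pflag"

	"k8s.io/component-base/version/verflag"
)

func main() {
	// Register --version (and --version=raw) on the command-line flag set.
	verflag.AddFlags(pflag.CommandLine)
	pflag.Parse()

	// Print the version info and exit if --version was requested;
	// otherwise this is a no-op and driver startup continues.
	verflag.PrintAndExitIfRequested()

	// ... existing driver startup would follow here ...
}
```

Note that the values reported come from k8s.io/component-base/version, so the build would also need to stamp that package's version variables via -ldflags.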
Sounds reasonable; if you can come up with a PR I'll look at it!
My instinct would be to keep it simple: basically just repeat the same information that is emitted to metrics on startup.
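If the component-base dependency is not wanted, here is a minimal sketch of that simpler approach using only the standard library flag package; the variable names and output fields are illustrative, not the driver's actual code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"runtime"
)

// driverVersion is expected to be stamped at build time, e.g.
//   go build -ldflags "-X main.driverVersion=v1.3.4"
var driverVersion = "unknown"

var printVersion = flag.Bool("version", false, "Print the driver version and exit")

func main() {
	flag.Parse()
	if *printVersion {
		// Mirror the same information the driver already reports on startup.
		fmt.Printf("Driver Version: %s\nGo Version: %s\nPlatform: %s/%s\n",
			driverVersion, runtime.Version(), runtime.GOOS, runtime.GOARCH)
		os.Exit(0)
	}
	// ... normal driver startup continues here ...
}
```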
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@ialidzhikov again, PRs are welcome.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.