
OpenShift specific instructions need improvement

Open MallocArray opened this issue 2 years ago • 2 comments

  1. As mentioned in https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/195, the service account listed on line 182 of README.md is not correct for a default installation using Helm: it is actually `nfs-subdir-external-provisioner`, not `nfs-client-provisioner`.

  2. The section on OpenShift permissions is located under the Manual install instructions. Since I was installing with Helm/Kustomize, I never scrolled down into the manual steps and lost several hours of troubleshooting before finding it. Perhaps it should get its own section of the document?

  3. OpenShift is not mentioned at all in the Helm chart's README.md that is linked from the main README.md, so I would suggest adding the same block of text about OpenShift permissions to that file, located at https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/charts/nfs-subdir-external-provisioner/README.md

  4. Finally, there is no mention that changing the release name also changes the ServiceAccount name, which requires adjusting the oc command so that permissions are assigned to the correct account. Perhaps something like this would be clearer:

$ NAMESPACE=`oc project -q`
$ RELEASENAME=nfs-storage
$ helm install $RELEASENAME nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set storageClass.name=$RELEASENAME \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/export1
$ oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:$RELEASENAME-nfs-client-provisioner
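To make the release-name dependency in point 4 concrete, here is a minimal Python sketch of the fullname helper convention that most Helm charts (via the standard `_helpers.tpl`) use to derive resource names such as the ServiceAccount. This is an illustration of the common convention, not the chart's actual template, so the exact name produced for a given release may differ; check `oc get sa` after installing to confirm.

```python
# Sketch of the common Helm "fullname" naming convention
# (an assumption about this chart, not its verbatim template):
# if the release name already contains the chart name, use the
# release name as-is; otherwise join "<release>-<chart>".
# Names are capped at 63 characters (the Kubernetes label limit).

CHART_NAME = "nfs-subdir-external-provisioner"

def fullname(release_name: str, chart_name: str = CHART_NAME) -> str:
    if chart_name in release_name:
        name = release_name
    else:
        name = f"{release_name}-{chart_name}"
    return name[:63].rstrip("-")

# Default install from the README: release name equals chart name,
# so the ServiceAccount is simply "nfs-subdir-external-provisioner".
print(fullname("nfs-subdir-external-provisioner"))

# A custom release name like "nfs-storage" yields a different
# ServiceAccount, so the add-scc-to-user command must change too.
print(fullname("nfs-storage"))
```

This is why a single hard-coded service account name in the README breaks as soon as the release name changes.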

MallocArray (Jun 17 '22)

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot (Sep 15 '22)

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot (Oct 15 '22)

Thanks for the great comments around OCP. How did you handle this error during startup of the pod: F1107 16:49:00.327637 1 provisioner.go:247] Error getting server version: Get "https://x.x.x.x:443/version?timeout=32s": dial tcp x.x.x.x:443: connect: no route to host

joeltraber (Nov 07 '22)

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot (Dec 07 '22)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot (Dec 07 '22)