[artifactory-ha] NFS PV name should include "namespace" so that artifactory can be deployed into more than one namespace
Is this a request for help?: NO, well, maybe :D
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T19:10:21Z", GoVersion:"go1.15.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:41:55Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Which chart: artifactory-ha
What happened: Tried to create a test Artifactory in another namespace, but the deployment failed because the PV for the NFS data clashed. PVs are cluster-scoped, not namespaced.
What you expected to happen: I expected another artifactory to spin up so I could test something without affecting my production instance.
How to reproduce it (as minimally and precisely as possible):
- Create an artifactory-ha release with NFS persistence in "ns1" namespace
- Create another artifactory-ha release with NFS persistence (different path, obviously) in the "ns2" namespace (this fails)
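The steps above roughly correspond to the following helm commands (the chart repo, release name, and values filenames here are illustrative assumptions, not taken from the report):

```shell
# Hypothetical reproduction of the steps above; repo URL, release
# name, and values files are assumptions for illustration.
helm repo add jfrog https://charts.jfrog.io
helm install artifactory jfrog/artifactory-ha -n ns1 --create-namespace -f values-ns1.yaml

# This second install fails: the chart renders a PV with the same
# cluster-scoped name (e.g. artifactory-artifactory-ha-data-pv)
# that the ns1 release already created.
helm install artifactory jfrog/artifactory-ha -n ns2 --create-namespace -f values-ns2.yaml
```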
Anything else we need to know: The problem is the PV.
$ k get pvc
NAME                                    STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
artifactory-artifactory-ha-backup-pvc   Bound    artifactory-artifactory-ha-backup-pv   10Ti       RWO                           4d
artifactory-artifactory-ha-data-pvc     Bound    artifactory-artifactory-ha-data-pv     10Ti       RWO                           4d
...
Usually the PV has a GUID-like name, for example:
$ k get pvc data-artifactory-postgresql-0
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-artifactory-postgresql-0   Bound    pvc-a30087e2-14bc-4c94-8378-f7d6bf2bc13b   50Gi       RWO            vsphere-pure-02   11d
I think Helm probably has a function to generate a GUID, but I am not sure how that would behave during upgrades? :)
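Helm templates do expose Sprig's `uuidv4` function, but it illustrates exactly the upgrade concern raised above. A minimal sketch (the template is hypothetical, not the chart's actual PV template):

```yaml
# Hypothetical PV metadata using Helm's Sprig uuidv4 function.
# Each `helm upgrade` re-renders the template and generates a NEW
# UUID, so the PV name would change on every upgrade -- Helm would
# try to delete the old PV and create a new one, which is why a
# random name is fragile for a resource backing persistent data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Release.Name }}-data-pv-{{ uuidv4 }}
```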
~tommy
@TJM Let me try to reproduce this. If possible can you share how you have created the NFS server and NFS Persistent Volume. Also the values.yaml (NFS configuration part ) you have used for both the Artifactory releases.
@rahulsadanandan Our NFS server is a PureStorage FlashBlade; I just created the shares in the WebUI. The PVs are created by the Helm chart.
Here is the relevant section of the values.yaml (artifactory.persistence):
# First release:
artifactory:
  persistence:
    size: 20Gi # Make smaller PV since we are using NFS
    type: nfs
    nfs:
      ip: 10.9.35.100
      haDataMount: /app-artifactory-sandbox-001/artifactory
      haBackupMount: /app-artifactory-sandbox-001/backup
      capacity: 10Ti
  haDataDir:
    enabled: true
    path: /var/opt/jfrog/artifactory-ha
# Second release:
artifactory:
  persistence:
    size: 20Gi # Make smaller PV since we are using NFS
    type: nfs
    nfs:
      ip: 10.9.35.100
      haDataMount: /app-artifactory-sandbox-002/artifactory
      haBackupMount: /app-artifactory-sandbox-002/backup
      capacity: 10Ti
  haDataDir:
    enabled: true
    path: /var/opt/jfrog/artifactory-ha
Hi @TJM, we are tracking this issue internally. I was able to reproduce it when using the same release name in another namespace; with a different release name in another namespace, there is no conflict. Just wanted to get your thoughts on this. cc @chukka
@rahulsadanandan Thanks, that is a decent workaround for now... and perhaps we can just add the namespace to the PV name instead of generating something random (which feels like a problem waiting to happen).
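The namespace-in-the-name idea could look something like the following sketch. This is not the chart's actual PV template; the metadata/spec shown here is a hypothetical illustration built from the values above:

```yaml
# Hypothetical PV template sketch. Prefixing the (cluster-scoped)
# PV name with .Release.Namespace keeps names unique across
# namespaces while remaining deterministic and stable across
# `helm upgrade`, unlike a randomly generated suffix.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Release.Namespace }}-{{ .Release.Name }}-data-pv
spec:
  capacity:
    storage: {{ .Values.artifactory.persistence.nfs.capacity }}
  accessModes:
    - ReadWriteOnce
  nfs:
    server: {{ .Values.artifactory.persistence.nfs.ip }}
    path: {{ .Values.artifactory.persistence.nfs.haDataMount }}
```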