[artifactory-ha] Enable cache-fs with NFS
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature Request
Version of Helm and Kubernetes: Helm 3.0.31 / Kubernetes 1.15.2
Which chart: artifactory-ha-2.0.29
What happened: CacheFS is removed from the binarystore.xml when artifactory.persistence.type = nfs
What you expected to happen: CacheFS is most useful when the backing store is slow NFS, so it should stay enabled for NFS. I think it would just be a matter of adding another condition to the template (file-system or nfs?); a rough sketch is below the report.
How to reproduce it (as minimally and precisely as possible): set artifactory.persistence.type=nfs
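For example (a minimal sketch; the release and repository names are illustrative):

```
helm upgrade --install artifactory-ha jfrog/artifactory-ha \
  --set artifactory.persistence.type=nfs
```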
Anything else we need to know: See Also #688
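For illustration, the extra condition suggested above might look something like this in the chart's binarystore.xml template (the surrounding template structure here is an assumption, not the actual chart source):

```
{{- if or (eq .Values.artifactory.persistence.type "file-system") (eq .Values.artifactory.persistence.type "nfs") }}
<provider id="cache-fs" type="cache-fs">
  <provider id="file-system" type="file-system"/>
</provider>
{{- end }}
```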
Yes. When using NFS, the default template removes cache-fs. It is a good idea to add it there too.
Until then, you can manage a fully customised artifactory.persistence.binarystoreXml in a dedicated values-storage.yaml and build your own template. This way, you are not coupled to the OOB template we provide.
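For example, a values-storage.yaml along these lines (a minimal sketch; the cache size, cache directory, and filestore directory are illustrative and must match your actual mounts):

```
artifactory:
  persistence:
    binarystoreXml: |
      <config version="2">
        <chain>
          <provider id="cache-fs" type="cache-fs">
            <provider id="file-system" type="file-system"/>
          </provider>
        </chain>
        <!-- Cache on fast local disk; size is in bytes -->
        <provider id="cache-fs" type="cache-fs">
          <maxCacheSize>5000000000</maxCacheSize>
          <cacheProviderDir>cache</cacheProviderDir>
        </provider>
        <!-- Persistent filestore on the NFS mount -->
        <provider id="file-system" type="file-system">
          <fileStoreDir>/var/opt/jfrog/artifactory/data/filestore</fileStoreDir>
        </provider>
      </config>
```

You would then apply it with something like `helm upgrade --install artifactory-ha jfrog/artifactory-ha -f values-storage.yaml`.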
BTW - we encourage using the non-NFS options of file-system sharding or an S3-compatible binary store. You can get better performance without having NFS as your single point of failure.
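For reference, switching is mostly a matter of the persistence type in your values (the exact type names depend on the chart version; check the chart's values.yaml):

```
artifactory:
  persistence:
    type: aws-s3   # assumption: an S3-compatible option; sharded local storage uses type file-system
```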
Thanks @eldada. I am aware of the ability to customize the binarystore.xml, and I sincerely thank you guys for building that in, as we have had to customize (rebuild) the container itself in the past. This was more of a request to change the default configuration. It seems like caching should be the default for NFS, even more so than for sharded "local" filesystem storage.
RE: NFS availability - We have a highly available NFS solution (Pure Storage Flash Blade). The other local alternative, sharding the files across multiple cluster nodes, may indeed perform better, but it doubles or triples our storage footprint with no additional availability. All of the VMs (K8s nodes) eventually store their data in VMDK files, which are backed by the same backend storage array. Going from 8TB to 24TB of storage usage is a tough pill to swallow, and multiple copies of files on the same array are all gone in the unlikely event that the array fails.
Storing our Artifactory data in the cloud seems counter-productive, since the "clients" are local (in our datacenter). If we chose to do that, we might as well just store the images in GCR or whatever and forgo the need for Artifactory. ;)
I am, honestly, way more concerned with the database as a SPOF.