
feat(argo-cd): Add Volume Persistence support for argocd-repo-server

arielly-parussulo opened this issue 2 years ago · 9 comments

  • allow argocd-repo-server to be deployed as a StatefulSet.
  • allow the creation of Persistent Volumes for argocd-repo-server.
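A minimal sketch of what such values might look like, assuming hypothetical key names (not necessarily the chart's final API):

```yaml
# Hypothetical values.yaml keys -- for illustration only,
# not necessarily the chart's final API
repoServer:
  # deploy as a StatefulSet instead of a Deployment
  statefulSet:
    enabled: true
  persistence:
    enabled: true
    size: 10Gi
    storageClass: gp3          # cluster-specific; an assumption here
    accessModes:
      - ReadWriteOnce          # one volume per replica
```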

Checklist:

  • [x] I have bumped the chart version according to versioning
  • [ ] I have updated the documentation according to documentation
  • [x] I have updated the chart changelog with all the changes that come with this pull request according to changelog.
  • [x] Any new values are backwards compatible and/or have sensible default.
  • [x] I have signed off all my commits as required by DCO.
  • [ ] My build is green (troubleshooting builds).

Changes are automatically published when merged to main. They are not published on branches.

arielly-parussulo avatar Nov 16 '22 14:11 arielly-parussulo

Hi @arielly-parussulo, thanks for the contribution.

I personally believe that persistence should be a more complex feature. For the reasons mentioned above, the repo server would ideally use one shared PVC in ReadWriteMany mode so all replicas can share already-downloaded repositories, instead of creating lots of PVCs that will eventually converge to the same content. I believe one disk per replica would only work OK with an NFS backend and a few replicas.

I can imagine that this should have at least the following:

  1. global.persistence with defaults for new PVCs
  2. component level overrides (mapping to volumes)
  3. option to choose between PVC (created, existing), in-memory emptyDir and ephemeral emptyDir
  4. must be compatible with current HPAs

I think this might be a good base, but the feature should be extended.
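As a rough illustration of points 1–3 above, a values layout along these lines could work (all key names are hypothetical):

```yaml
# All key names hypothetical -- a sketch of the proposed structure
global:
  persistence:
    enabled: false        # defaults for any new PVCs (point 1)
    storageClass: ""
    size: 8Gi

repoServer:
  persistence:            # component-level override (point 2)
    # one of: pvc | existingClaim | emptyDir | emptyDirMemory (point 3)
    type: pvc
    existingClaim: ""
    size: 20Gi            # overrides the global default
```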

pdrastil avatar Nov 17 '22 18:11 pdrastil

Cool! I think I can add some of these features in this PR. I added the StatefulSet option because I saw some people mentioning it in this issue, and we ended up using it at my company: we have fewer replicas and a monorepo that causes DiskPressure issues in our pods. So I still think it could be an option in the Helm chart. But I agree with you about the other features, and I will try to improve this PR to add more persistence features for argocd-repo-server.
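For reference, a StatefulSet gives each replica its own PVC through volumeClaimTemplates, which is what makes the per-replica persistence approach work. A trimmed sketch (image tag, mount path, and sizes are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-repo-server
spec:
  serviceName: argocd-repo-server
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: argocd-repo-server
    spec:
      containers:
        - name: repo-server
          image: quay.io/argoproj/argocd   # tag omitted; illustrative
          volumeMounts:
            - name: repo-cache
              mountPath: /tmp              # where repo-server clones repos
  # one PVC is created per replica and survives pod restarts
  volumeClaimTemplates:
    - metadata:
        name: repo-cache
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```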

arielly-parussulo avatar Nov 21 '22 15:11 arielly-parussulo

Please also go through this thread https://github.com/argoproj/argo-cd/issues/7927 as this chart mirrors what's in the upstream.

pdrastil avatar Nov 21 '22 15:11 pdrastil

Sorry for asking @pdrastil, but is it possible to have this feature even if it's still in beta? @arielly-parussulo hasn't pushed in 3 weeks. I can contribute if needed. Thanks

Gianluca755 avatar Dec 12 '22 14:12 Gianluca755

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Feb 11 '23 02:02 github-actions[bot]

Any news about this?

pierluigilenoci avatar Feb 11 '23 16:02 pierluigilenoci

For the reasons mentioned above, the repo server would ideally use one shared PVC in ReadWriteMany mode

Just to note, ReadWriteMany isn't well supported by cloud vendors when pods are on different nodes. Neither GKE nor EKS allows ReadWriteMany when you're using PD or EBS as the volume type.
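For example, a shared claim like the following only binds when the StorageClass backend actually supports ReadWriteMany (e.g. EFS on EKS, Filestore on GKE); the default EBS/PD provisioners will refuse it. The storageClassName here is a hypothetical placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repo-server-shared-cache
spec:
  accessModes:
    - ReadWriteMany        # requires an RWX-capable provisioner
  storageClassName: efs-sc # hypothetical RWX-capable class
  resources:
    requests:
      storage: 50Gi
```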

zswanson avatar Jul 16 '23 18:07 zswanson

@pdrastil using ReadWriteMany necessarily imposes the use of some NFS flavour (for example, EFS on AWS) with a significant reduction in performance (in Azure, SMB is used, which has embarrassing performance), as well as serious limitations for clusters that operate multi-AZ. One day it will be a concrete option, but today it is only a good theoretical idea that collides with the limits of the various cloud providers.

pierluigilenoci avatar Jul 17 '23 07:07 pierluigilenoci

Hello, we also need a PVC on our repo server. Does someone know if this PR will be merged soon?

clement94310 avatar Dec 28 '23 08:12 clement94310