[artifactory-jcr] Default configuration runs out of disk space in 1 day
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: Helm 3.1.3, Kubernetes 1.16
Which chart: jfrog/artifactory-jcr:3.4.1
What happened:
A day after deploying Artifactory Container Registry with the Helm chart jfrog/artifactory-jcr:3.4.1, it stopped working because it ran out of disk space! This is a fresh deployment with no existing PV/PVC, almost all values were left at their defaults, and it is configured as a pull-through cache for Docker images.
Symptoms after a day:
- docker pull on cached images works
- docker pull on new images returns: Error response from daemon: manifest for temp-artifactory.my.com/myorg/image:latest not found: manifest unknown: The named manifest is not known to the registry.
- Artifactory UI is still working
- Artifactory UI/Monitoring/Storage reports:
Binaries Size: 10.46 GB Binaries Count: 246
Artifacts Size: 10.83 GB Artifacts Count: 273
Optimization: 96.62% Items Count: 289
Directory: /opt/jfrog/artifactory/var/data/artifactory/filestore
Used: 19.5 GB / 19.6 GB (99.9%)
- Artifactory UI/Monitoring/System logs/"artifactory-request.log" contains no new entries and its last line is incomplete:
...
2021-02-11T02:03:55.452Z|5c0aa695d6fc7a00|127.0.0.1|anonymous|GET|/api/system/ping|200|-1|0|6|JFrog-Router/7.12.6-1
2021-02-11T02:04:00.453Z|1e679eae981bb770|127.0.0.1|anonymo
- the PV mounted at /var/opt/jfrog/artifactory in the Pod ran out of disk space:
$ kubectl exec -it pod/artifactory-artifactory-0 -- sh
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 98868448 5395656 93456408 5% /
/dev/sdd 20511312 20494928 0 100% /var/opt/jfrog/artifactory
$ du -sh /var/opt/jfrog/artifactory/*
9.0G /var/opt/jfrog/artifactory/backup
36.0K /var/opt/jfrog/artifactory/bootstrap
10.5G /var/opt/jfrog/artifactory/data
500.0K /var/opt/jfrog/artifactory/etc
73.8M /var/opt/jfrog/artifactory/log
52.0K /var/opt/jfrog/artifactory/work
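A quick way to confirm that the daily backup is the biggest consumer (pod name and paths are taken from the output above, so this is just the same du run one level deeper):
# Break down the ~9 GB backup directory; a shell is needed so the glob expands inside the container
kubectl exec pod/artifactory-artifactory-0 -- \
  sh -c 'du -sh /var/opt/jfrog/artifactory/backup/*'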
What you expected to happen:
Default configuration should deploy a working Artifactory Container Registry that manages its own disk usage and keeps working indefinitely. Or there should at least be a warning that you will run out of disk space if you do not configure cleanup of unused artifacts, turn on Storage Quota Control, and optionally turn off Backups.
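For context, a stopgap that only delays the failure (not a real fix) would be to give the filestore volume more headroom than the ~20 GB seen above. A minimal sketch, assuming artifactory.persistence.size is still the value that controls the filestore PVC in this chart version and that the release is named artifactory:
# Hypothetical mitigation: enlarge the filestore volume beyond the default.
# Without artifact cleanup, Storage Quota Control, or backup changes this only postpones the problem.
helm upgrade --install artifactory jfrog/artifactory-jcr --version 3.4.1 \
  -f values.yaml \
  --set artifactory.persistence.size=100Gi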
How to reproduce it (as minimally and precisely as possible):
Helm configuration (configures Ingress):
artifactory:
  nginx:
    enabled: false
  ingress:
    enabled: true
    defaultBackend:
      enabled: false
    annotations:
      kubernetes.io/ingress.class: nginx
      ingress.kubernetes.io/force-ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      ingress.kubernetes.io/proxy-read-timeout: "600"
      ingress.kubernetes.io/proxy-send-timeout: "600"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        rewrite ^/(v1|v2)/token /artifactory/api/docker/null/$1/token;
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/all/$1/$2;
    hosts:
      - temp-artifactory.my.com
    tls:
      - secretName: my-cert
        hosts:
          - temp-artifactory.my.com
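For completeness, the chart was installed roughly like this; the release name artifactory is an assumption inferred from the pod name artifactory-artifactory-0, and values.yaml is the file shown above:
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm upgrade --install artifactory jfrog/artifactory-jcr --version 3.4.1 -f values.yaml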
Artifactory configuration:
- added DockerHub as a remote repository with Enable Foreign Layers Caching (to act as a pull-through cache)
- added GitLab Registry as a remote repository with Enable Foreign Layers Caching (to act as a pull-through cache)
- added a virtual repository all and selected both previous repositories for it
- tested that pulling a few images through it works (see the example pull after this list; total artifacts size: 10.83 GB)
- waited a day for the default "backup-daily" job to run and consume all the remaining disk space
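The test pulls went through the virtual all repository roughly like this; the hostname and image path are the same placeholders used in the error message above:
# Pull an image through the virtual "all" repository exposed by the Ingress rewrites
docker login temp-artifactory.my.com
docker pull temp-artifactory.my.com/myorg/image:latest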
Anything else we need to know: Instructions for setting up Artifactory in a Kubernetes environment using Helm could be improved.