containerdisks
https://quay.io/repository/containerdisks/centos-stream is consuming 187.44 GiB of quota within quay.
What happened:
https://quay.io/repository/containerdisks/centos-stream is consuming 187.44 GiB of quota within quay.
What you expected to happen: A clear and concise description of what you expected to happen.
How to reproduce it (as minimally and precisely as possible): Steps to reproduce the behavior.
Additional context: Add any other context about the problem here.
Environment:
- KubeVirt version (use virtctl version): N/A
- Kubernetes version (use kubectl version): N/A
- VM or VMI specifications: N/A
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): N/A
- Kernel (e.g. uname -a): N/A
- Install tools: N/A
- Others: N/A
How many recent versions are stored on the CentOS Stream mirror? Do they delete old versions at some point?
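For context, one quick way to check how many tags the mirror currently carries is to page through quay.io's public repository tag listing API. A minimal Python sketch, assuming the /api/v1/repository/.../tag/ endpoint and the "tags"/"has_additional" response fields behave as in the Quay v1 API (not confirmed in this thread):

```python
import requests

# Hypothetical sketch: count the active tags on the centos-stream containerdisk
# via quay.io's repository tag API. The endpoint path, query parameters, and
# response fields ("tags", "has_additional") are assumptions about the Quay v1 API.
REPO = "containerdisks/centos-stream"
URL = f"https://quay.io/api/v1/repository/{REPO}/tag/"

def count_tags() -> int:
    page, total = 1, 0
    while True:
        resp = requests.get(
            URL,
            params={"limit": 100, "page": page, "onlyActiveTags": "true"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        total += len(data.get("tags", []))
        if not data.get("has_additional"):
            break
        page += 1
    return total

if __name__ == "__main__":
    print(f"{REPO} currently has {count_tags()} active tags")
```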
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen /remove-lifecycle rotten
@lyarwood: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Referring to Ben's idea in https://github.com/kubevirt/containerdisks/pull/328#issuecomment-2833482340: Quay also offers a "Prune images by the number of tags" policy, which may be suitable for this issue. Thoughts? @lyarwood @codingben
@jcanocan yes, I think that would be more useful. I wouldn't even be against setting it to a low number across the project to start with and seeing what feedback, if any, we get from users. I assume most are just using latest or some other top-level version alias like fedora:42.
What about a maximum of 100-150 tags? Tags beyond that count are from 4 months to a year ago.
I don't see settings for this in the containerdisks org on quay.io.
@brianmcarey @dhiller Is it even possible to change the above-mentioned quota settings on quay.io?
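For what it's worth, recent Quay releases document an auto-prune policy API that can apply "Prune images by the number of tags" at the organization level; whether quay.io exposes this for the containerdisks org is not confirmed here. A hedged sketch, assuming the /autoprunepolicy/ endpoint, the "number_of_tags" method name, and an OAuth token with admin scope are all available:

```python
import os

import requests

# Hypothetical sketch: create an organization-level auto-prune policy that keeps
# only a fixed number of tags per repository. The endpoint, payload shape, and
# required token scope are assumptions based on recent Quay documentation and
# may not apply to quay.io or the containerdisks org.
QUAY_TOKEN = os.environ["QUAY_OAUTH_TOKEN"]  # assumed: OAuth token with org admin scope
ORG = "containerdisks"
URL = f"https://quay.io/api/v1/organization/{ORG}/autoprunepolicy/"

# Keep at most ~100 tags, in line with the 100-150 range suggested above.
policy = {"method": "number_of_tags", "value": 100}

resp = requests.post(
    URL,
    json=policy,
    headers={"Authorization": f"Bearer {QUAY_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```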