
https://quay.io/repository/containerdisks/centos-stream is consuming 187.44 GiB of quota within quay.

Open · lyarwood opened this issue on Apr 16 '24 · 13 comments

What happened:

https://quay.io/repository/containerdisks/centos-stream is consuming 187.44 GiB of quota within quay.

What you expected to happen: N/A

How to reproduce it (as minimally and precisely as possible): N/A

Additional context: N/A

Environment:

  • KubeVirt version (use virtctl version): N/A
  • Kubernetes version (use kubectl version): N/A
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A

lyarwood · Apr 16 '24
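
[Editor's note] As a rough way to see what is actually consuming the quota, the sketch below lists the active tags of the centos-stream containerdisk via the public Quay API and sums the reported sizes. This is a minimal illustration, assuming the documented GET /api/v1/repository/{repo}/tag/ endpoint and its "tags", "size" and "has_additional" response fields; it is not part of the project's tooling.

```python
# Minimal sketch: list active tags of containerdisks/centos-stream on quay.io
# and sum the sizes reported by the API, to estimate quota consumption.
import requests

REPO = "containerdisks/centos-stream"

def iter_tags(repo):
    """Yield active tag entries, following the API's pagination."""
    page = 1
    while True:
        resp = requests.get(
            f"https://quay.io/api/v1/repository/{repo}/tag/",
            params={"limit": 100, "page": page, "onlyActiveTags": "true"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("tags", [])
        if not data.get("has_additional"):
            break
        page += 1

tags = list(iter_tags(REPO))
total_bytes = sum(t.get("size") or 0 for t in tags)  # "size" may be absent on some entries
print(f"{len(tags)} active tags, ~{total_bytes / 2**30:.2f} GiB reported")
```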

How many recent versions are stored on the CentOS Stream mirror? Do they delete old versions at some point?

0xFelix · Apr 16 '24

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot · Jul 15 '24

/remove-lifecycle stale

lyarwood · Jul 15 '24

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot · Oct 13 '24

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot · Nov 12 '24

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot · Dec 12 '24

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

kubevirt-bot · Dec 12 '24

/reopen /remove-lifecycle rotten

lyarwood · Mar 19 '25

@lyarwood: Reopened this issue.

In response to this:

/reopen /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

kubevirt-bot · Mar 19 '25

In reference to Ben's idea in https://github.com/kubevirt/containerdisks/pull/328#issuecomment-2833482340: Quay also offers the policy "Prune images by the number of tags", which may be suitable for this issue. Thoughts? @lyarwood @codingben

jcanocan · Apr 29 '25
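
[Editor's note] The "Prune images by the number of tags" policy can reportedly also be configured through Quay's auto-prune API, as described in the Red Hat Quay auto-pruning documentation. The sketch below is a hedged illustration only: the endpoint, payload, and its availability on quay.io are assumptions (the thread below notes the settings are not visible in the org), and QUAY_TOKEN is a hypothetical OAuth token with organization admin scope.

```python
# Hedged sketch: create an org-level "number_of_tags" auto-prune policy via the
# Quay API, per the Red Hat Quay auto-pruning docs. Endpoint availability on
# quay.io and the required token scope are assumptions, not confirmed here.
import os
import requests

ORG = "containerdisks"
TOKEN = os.environ["QUAY_TOKEN"]  # hypothetical OAuth application token

resp = requests.post(
    f"https://quay.io/api/v1/organization/{ORG}/autoprunepolicy/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"method": "number_of_tags", "value": 100},  # keep at most 100 tags per repo
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```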

@jcanocan Yes, I think that would be more useful. I wouldn't even be against setting it to a low number across the project to start with and seeing what feedback, if any, we get from users. I assume most are just using latest or some other top-level version alias like fedora:42, etc.

lyarwood · Apr 29 '25

> @jcanocan Yes, I think that would be more useful. I wouldn't even be against setting it to a low number across the project to start with and seeing what feedback, if any, we get from users. I assume most are just using latest or some other top-level version alias like fedora:42, etc.

What about a maximum of 100-150 tags? In the repos that currently exceed that, the excess tags are between four months and a year old.

jcanocan · Apr 29 '25
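
[Editor's note] To gauge how many repositories a 100-150 tag limit would affect, one could count active tags per repo in the org. A minimal sketch, assuming the public GET /api/v1/repository listing endpoint and the same tag-listing endpoint as above:

```python
# Sketch: count active tags per repository in the containerdisks org, to see
# which repos would be pruned under a 100-150 tag limit.
import requests

NAMESPACE = "containerdisks"

resp = requests.get(
    "https://quay.io/api/v1/repository",
    params={"namespace": NAMESPACE, "public": "true"},
    timeout=30,
)
resp.raise_for_status()
repos = resp.json().get("repositories", [])

for repo in repos:
    name = f"{NAMESPACE}/{repo['name']}"
    count, page = 0, 1
    while True:
        data = requests.get(
            f"https://quay.io/api/v1/repository/{name}/tag/",
            params={"limit": 100, "page": page, "onlyActiveTags": "true"},
            timeout=30,
        ).json()
        count += len(data.get("tags", []))
        if not data.get("has_additional"):
            break
        page += 1
    print(f"{name}: {count} active tags")
```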

I don't see settings for this in the containerdisks org on quay.io.

@brianmcarey @dhiller Is it even possible to change the above-mentioned quota settings on quay.io?

0xFelix · Apr 29 '25