promo-tools
Recognize Child Images Defined In Promotion Manifests
What would you like to be added:
If a child image is defined within a sub-project's promotion manifest, the Auditor will not recognize it: the incoming child image arrives under the parent image's (manifest list's) name, so it never matches the child definition, which is recorded in the manifest under a different name.
Example
Most manifest lists look something like this:
```yaml
# k8s.io/k8s.gcr.io/images/k8s-staging-sub-project/images.yaml
- name: logger
  dmap:
    "sha256:c4151a15c8439265d98f66d25ef17964e9e975d894822a54ed7e72db78dba6c6": ["parent"]
- name: logger-amd
  dmap:
    "sha256:2c9c8df42ac7525e556bbff81aa9a62960888c69d5faad4aad408893bc95cbc9": ["child_amd"]
- name: logger-arm
  dmap:
    "sha256:a41a91e366e973da0bfd6fce44ba131d561ab435119ff7e1050d1e226a06dbda": ["child_arm"]
```
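Under the proposed change, the Auditor could index every digest that appears anywhere in a promotion manifest, independent of which image name it is listed under. A minimal sketch, assuming the manifest has already been parsed (e.g. from `images.yaml`) into a list of dicts; this is illustrative, not promo-tools' actual data model:

```python
# Sketch: build a digest -> image-entry index over a promotion manifest.
# The digests come from the example manifest above.
manifest = [
    {"name": "logger",
     "dmap": {"sha256:c4151a15c8439265d98f66d25ef17964e9e975d894822a54ed7e72db78dba6c6": ["parent"]}},
    {"name": "logger-amd",
     "dmap": {"sha256:2c9c8df42ac7525e556bbff81aa9a62960888c69d5faad4aad408893bc95cbc9": ["child_amd"]}},
    {"name": "logger-arm",
     "dmap": {"sha256:a41a91e366e973da0bfd6fce44ba131d561ab435119ff7e1050d1e226a06dbda": ["child_arm"]}},
]

def build_digest_index(manifest):
    """Map every sha256 digest in the manifest to (image name, tags)."""
    index = {}
    for image in manifest:
        for digest, tags in image["dmap"].items():
            index[digest] = (image["name"], tags)
    return index
```

With such an index, a lookup is a single dict access per incoming digest, rather than a name-by-name comparison.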
If the `logger` image is of `mediaType: manifest.list` (a parent image) containing both the `logger-amd` and `logger-arm` images, our Auditor does not recognize that these children are actually defined within the promotion manifest. The incoming Pub/Sub message for a child image of `logger` looks like this:
```
gcr.io/k8s-sub-project/logger@sha256:2c9c8df42ac7525e556bbff81aa9a62960888c69d5faad4aad408893bc95cbc9
```
If you look carefully, this image does not exist! But in actuality, this is the child image `logger-amd` we defined within the promotion manifest. Since this is what an incoming Pub/Sub child image looks like, we must widen our criteria for linking images with a promotion manifest.
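Widening the criteria could mean keying the match on the digest alone: strip the name portion of the incoming reference and check whether that digest appears anywhere in the manifest. A self-contained sketch of the idea, using the example digest above; the function name and lookup table are illustrative, not the Auditor's actual API:

```python
# Sketch: match an incoming Pub/Sub image reference against a promotion
# manifest by digest alone, ignoring the image name it arrives under.
digest_index = {
    # digest -> image name, taken from the example manifest
    "sha256:2c9c8df42ac7525e556bbff81aa9a62960888c69d5faad4aad408893bc95cbc9": "logger-amd",
}

def digest_of(image_ref):
    """Extract the digest from 'registry/project/name@sha256:...'."""
    _, sep, digest = image_ref.partition("@")
    return digest if sep else None

incoming = ("gcr.io/k8s-sub-project/logger@"
            "sha256:2c9c8df42ac7525e556bbff81aa9a62960888c69d5faad4aad408893bc95cbc9")
# The reference that "does not exist" under its FQIN is recognized as a
# child defined in the manifest once we key on the digest.
match = digest_index.get(digest_of(incoming))
```

Here `match` resolves to `logger-amd` even though the incoming reference names `logger`, which is exactly the gap described above.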
Why is this needed:
By looking at the sha256 digest instead of the fully qualified image name (FQIN), the Auditor will be able to recognize incoming child images if they are defined in a promotion manifest. Not all sub-projects explicitly define child images; however, for the ones that do follow this convention, verification will not require a full read of the source registry. This change has the potential to dramatically decrease the number of HTTP requests sent to GCR if all child images can be found in the kubernetes/k8s.io repository. This feature would reduce the number of instances where the Auditor exceeds GCR quotas and causes false alarms (Issue: Noisy Auditor).
cc: @listx @amwat @justaugustus @kubernetes-sigs/release-engineering
@tylerferrara -- in "What would you like to be added", would you mind adding a few lines about the requested feature?
You jump into a problem statement, but this should also include a crisp statement about the feature you're interested in seeing implemented.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale