cluster-api-provider-aws
Generate SBOM and sign release artefacts
/kind feature
/area release
/area security
/help
/priority important-soon
/triage accepted
Describe the solution you'd like: We should be generating an SBOM for CAPA and also signing it and any other release artefacts.
Anything else you would like to add: We should probably use sigstore.
Environment:
- Cluster-api-provider-aws version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):
@richardcase: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Do you have any examples of how other projects in the Kubernetes ecosystem do this?
We use it in FluxCD
Relevant to this, there is an effort going on in K8s https://github.com/kubernetes/release/issues/2383
Looks like sigstore is being used there: https://github.com/kubernetes/website/pull/31610/files
Might be worth coming up with a common workflow for cluster-api and other providers too.
I agree @sedefsavas . We'll probably have to make changes to image-builder / the image promoter stuff which would touch all the providers (probably)
There is a nice TGIK talk about what's being done in Kubernetes about this: https://www.youtube.com/watch?v=H1D0fk9sZ8I
Hey I just saw this issue referenced in SIG Release, I'm happy to help out!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
This is important for the future, so
/reopen
@richardcase: Reopened this issue.
Hi @richardcase , I could help with this one. Would you be ok with using the public sigstore? That would allow you to have publicly verifiable signatures without key management. How do you create your releases? I quickly checked the workflows, but couldn't find any dedicated workflow for that.
I'd like to split the SBOM generation ticket into a separate issue because we can take care of that easily once Sigstore is in place. What do you think?
I wonder if there is already infrastructure in place in the kubernetes community that we can just use?
e.g. I noticed that our container images are already signed because signing was added to the Kubernetes image promotion process.
I think you are referring to this, right? That's the same mechanism I would love to use :)
Ah perfect. Thanks for the info, I'm not really familiar with how it works :)
/assign @flxw
@sedefsavas - I saw that your name is on most of the releases. Could you kindly give me context on how those are authored? I couldn't find a GitHub Actions workflow that created the release, so I am assuming it's manual? My idea is to add SBOM generation and Sigstore upload steps to the release process. Looking forward to your answer!
@flxw - we follow these steps when doing a release: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/docs/book/src/development/releasing.md
So manual with some automation.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@richardcase: Reopened this issue.
Hi! My apologies for the long lead time, but I finally got around to making some time for this.
This example commit integrates cosign into the Makefile, adding a separate release-signed target.
While it's entirely possible to run this manually on a local machine, I would recommend implementing it as a GitHub Action. Actions have an OIDC provider that gives us provenance information for free, and that information is recorded in Sigstore during the signing process.
https://github.com/kubernetes-sigs/cluster-api-provider-aws/commit/c340edba366aa101f36e4109ce2477f8174e7b3b
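For reference, a release-signed target along these lines might look like the sketch below. This is an illustration only, not the contents of the linked commit: the RELEASE_DIR variable, the release prerequisite target, and the artifact layout are all assumptions.

```makefile
# Sketch: sign every built release artefact with cosign's keyless flow.
# RELEASE_DIR and the `release` target are assumptions about the build layout.
RELEASE_DIR ?= out

.PHONY: release-signed
release-signed: release
	for f in $(RELEASE_DIR)/*; do \
		cosign sign-blob --yes \
			--output-signature $$f.sig \
			--output-certificate $$f.pem \
			$$f; \
	done
```

With cosign 2.x, --yes skips the interactive confirmation, and each signature/certificate pair can later be checked with cosign verify-blob.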
However, I've also seen that there is a larger movement for Kubernetes artifact signing underway: https://github.com/kubernetes/enhancements/issues/3031
I'll link up with the people on that issue, as I hope to solve this a bit more elegantly and with benefits for the other projects. What do you think?
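As a rough illustration of the GitHub Actions approach mentioned above, a signing job might look like the following sketch. The workflow name, trigger, and artifact paths are assumptions; the essential pieces are the id-token: write permission (which exposes the runner's OIDC identity to cosign) and the keyless sign-blob call.

```yaml
name: release-sign            # hypothetical workflow name
on:
  push:
    tags: ["v*"]
permissions:
  id-token: write             # required for keyless signing via GitHub's OIDC provider
  contents: read
jobs:
  sign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: sigstore/cosign-installer@v3
      - name: Sign a release artefact (keyless)
        run: |
          # out/clusterawsadm is an assumed artifact path
          cosign sign-blob --yes \
            --output-signature out/clusterawsadm.sig \
            --output-certificate out/clusterawsadm.pem \
            out/clusterawsadm
```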
Thanks for the update @flxw.
It would be good to be aligned with the wider Kubernetes community effort on artifact signing.
Hey!
We have a similar tracking issue in CAPI to put this in place, and +1 from my side for a common workflow. But going through the discussion quickly, it seems only the signing of the SBOM was discussed. What about the SBOM generation itself? We had a quick chat earlier with @cpanato and learned that the upstream k8s community uses https://github.com/kubernetes-sigs/bom to generate it, so perhaps that is the workflow we could follow for SBOM generation.
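For the generation side, wiring kubernetes-sigs/bom into the same Makefile-driven release could look roughly like this sketch. The target name, output path, and flags are assumptions based on the bom CLI's documented usage and should be checked against bom generate --help.

```makefile
# Sketch: generate an SPDX SBOM for the repository with kubernetes-sigs/bom,
# then sign it like any other release artefact. Paths are assumptions.
SBOM_OUT ?= out/cluster-api-provider-aws.spdx

.PHONY: sbom
sbom:
	bom generate -d . -o $(SBOM_OUT)
	cosign sign-blob --yes \
		--output-signature $(SBOM_OUT).sig \
		--output-certificate $(SBOM_OUT).pem \
		$(SBOM_OUT)
```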
@furkatgofurov7 - thanks for the input; the link to the k8s community bom tool is really helpful.