Configure Terraform presubmits and postsubmit jobs
We have a number of projects managed by Terraform at https://github.com/kubernetes/k8s.io/tree/main/infra/gcp/terraform.
However, these projects currently require manual deployment by sig-k8s-infra leads and others, which blocks rapid iteration on the GCP infra.
We need to configure some automation to deploy these changes safely.
Google Cloud Changes:
- Create a new project that holds a privileged service account, something like https://github.com/knative/test-infra/tree/main/infra/gcp#bootstrapping-terraform---one-time-setup
- Grant this ServiceAccount some roles on the organization, as in https://github.com/knative/test-infra/blob/main/infra/gcp/iam.tf
- Create a k8s service account
- Complete the Workload Identity Configuration
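The GCP steps above could be sketched in Terraform roughly as follows. This is a hypothetical sketch only: the project ID, service account names, roles, Prow project, and namespace (`test-pods`) are all placeholders, not decided values, and the real setup would follow the knative/test-infra bootstrap linked above.

```hcl
# Sketch only -- all names below are placeholders, not agreed-upon values.

# New project that holds the privileged Terraform service account.
resource "google_project" "terraform" {
  name       = "k8s-infra-terraform"
  project_id = "k8s-infra-terraform"
}

resource "google_service_account" "terraform" {
  project      = google_project.terraform.project_id
  account_id   = "terraform"
  display_name = "Privileged Terraform deployer"
}

# Grant the service account roles on the organization
# (which roles exactly is still to be decided).
resource "google_organization_iam_member" "terraform" {
  org_id = var.org_id
  role   = "roles/owner"
  member = "serviceAccount:${google_service_account.terraform.email}"
}

# Workload Identity: let a Kubernetes ServiceAccount in the Prow
# cluster (namespace/name are placeholders) impersonate the GCP SA.
# The KSA itself would also need the iam.gke.io/gcp-service-account
# annotation pointing at this service account's email.
resource "google_service_account_iam_member" "workload_identity" {
  service_account_id = google_service_account.terraform.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.prow_project}.svc.id.goog[test-pods/terraform]"
}
```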
AWS Changes:
- Implement #3807.
Prow Changes:
- We will need a postsubmit job that runs when changes to infra/gcp/* are merged
- We will also need a presubmit that runs terraform plan only and posts the output on the PR. This would run in the trusted cluster, which isn't allowed by default, so we need to work something out.
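The two jobs above might look roughly like this in Prow job config. This is a sketch under assumptions: the job names, cluster name, container image, and path regex are placeholders, and the presubmit-in-trusted-cluster question is unresolved.

```yaml
# Hypothetical Prow job config sketch -- names, cluster, and image
# are placeholders, not actual values.
presubmits:
  kubernetes/k8s.io:
    - name: pull-k8s-infra-terraform-plan
      # Running presubmits in the trusted cluster is not allowed by
      # default; this is the part that still needs to be worked out.
      cluster: test-infra-trusted
      run_if_changed: '^infra/gcp/terraform/'
      decorate: true
      spec:
        containers:
          - image: hashicorp/terraform:1.3.0
            command: [terraform]
            args: [plan]   # plan only; output is posted to the PR
postsubmits:
  kubernetes/k8s.io:
    - name: post-k8s-infra-terraform-deploy
      cluster: test-infra-trusted
      run_if_changed: '^infra/gcp/terraform/'
      decorate: true
      spec:
        containers:
          - image: hashicorp/terraform:1.3.0
            command: [terraform]
            args: [apply, -auto-approve]
```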
Let's talk about it at the next sig-k8s-infra meeting.
/cc @ameukam
/kind feature
/priority important-soon
Why is this issue scoped to GCP only? We have the same problem with AWS. It would be interesting to introduce the same pattern for the existing cloud providers.
AWS can be enabled by completing #3807. Will add this to the body of the issue.
/milestone v.126
@ameukam: The provided milestone is not valid for this repository. Milestones in this repository: [v1.24, v1.25, v1.26]
Use /milestone clear to clear the milestone.
In response to this:
/milestone v.126
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v1.26
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten