chore(karpenter): upgrade karpenter to v0.32.6
What this PR does / why we need it: Bumps the karpenter version to v0.32.6
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
fixes #16311
Special notes for your reviewer: Release page
/test all
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from hakman.
For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/retest
/retest
@moshevayner: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kops-e2e-aws-upgrade-k127-ko127-to-k128-kolatest-karpenter | 3a76e1ffd710fefb5476b66b5119b56663fba1de | link | false | /test pull-kops-e2e-aws-upgrade-k127-ko127-to-k128-kolatest-karpenter |
| pull-kops-e2e-aws-karpenter | 3a76e1ffd710fefb5476b66b5119b56663fba1de | link | true | /test pull-kops-e2e-aws-karpenter |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
I guess CRDs need updating:
I0208 05:04:17.581270 5067 addon.go:192] Applying update from "s3://k8s-kops-prow/e2e-pr16327.pull-kops-e2e-aws-karpenter.test-cncf-aws.k8s.io/addons/karpenter.sh/k8s-1.19.yaml"
I0208 05:04:17.581294 5067 s3fs.go:378] Reading file "s3://k8s-kops-prow/e2e-pr16327.pull-kops-e2e-aws-karpenter.test-cncf-aws.k8s.io/addons/karpenter.sh/k8s-1.19.yaml"
W0208 05:04:17.942636 5067 health.go:69] expected status.conditions to be list, got <nil>
W0208 05:04:17.989700 5067 health.go:69] expected status.conditions to be list, got <nil>
W0208 05:04:18.004385 5067 health.go:69] expected status.conditions to be list, got <nil>
W0208 05:04:18.037816 5067 health.go:69] expected status.conditions to be list, got <nil>
W0208 05:04:18.104261 5067 health.go:69] expected status.conditions to be list, got <nil>
W0208 05:04:18.185098 5067 health.go:69] expected status.conditions to be list, got <nil>
I0208 05:04:18.279025 5067 health.go:63] status conditions not found for PodDisruptionBudget.policy:kube-system/karpenter
I0208 05:04:18.478384 5067 health.go:63] status conditions not found for Secret:kube-system/karpenter-cert
I0208 05:04:19.883033 5067 health.go:63] status conditions not found for Service:kube-system/karpenter
I0208 05:04:19.982831 5067 health.go:63] status conditions not found for Deployment.apps:kube-system/karpenter
I0208 05:04:20.081517 5067 health.go:63] status conditions not found for MutatingWebhookConfiguration.admissionregistration.k8s.io:defaulting.webhook.karpenter.k8s.aws
I0208 05:04:20.178238 5067 health.go:63] status conditions not found for ValidatingWebhookConfiguration.admissionregistration.k8s.io:validation.webhook.karpenter.sh
I0208 05:04:20.277927 5067 health.go:63] status conditions not found for ValidatingWebhookConfiguration.admissionregistration.k8s.io:validation.webhook.config.karpenter.sh
I0208 05:04:20.376625 5067 health.go:63] status conditions not found for ValidatingWebhookConfiguration.admissionregistration.k8s.io:validation.webhook.karpenter.k8s.aws
W0208 05:04:20.376686 5067 results.go:63] error from apply on karpenter.k8s.aws/v1alpha1, Kind=AWSNodeTemplate /nodes: error getting rest mapping for karpenter.k8s.aws/v1alpha1, Kind=AWSNodeTemplate: no matches for kind "AWSNodeTemplate" in version "karpenter.k8s.aws/v1alpha1"
W0208 05:04:20.376713 5067 results.go:63] error from apply on karpenter.sh/v1beta1, Kind=NodePool /nodes: error getting rest mapping for karpenter.sh/v1beta1, Kind=NodePool: no matches for kind "NodePool" in version "karpenter.sh/v1beta1"
W0208 05:04:20.376746 5067 results.go:56] consistency error (healthy counts): &applyset.ApplyResults{total:30, applySuccessCount:28, applyFailCount:2, healthyCount:28, unhealthyCount:0}
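For anyone reproducing the two `no matches for kind` failures above outside of the e2e job, here is a minimal sketch (not part of this PR, and not the code path kops itself uses) that performs the same rest-mapping lookups with client-go's discovery-based REST mapper. The kubeconfig path and package choices are assumptions; the two group/kind/version tuples are taken directly from the log:

```go
// Minimal sketch: check whether the cluster can resolve the kinds that the
// applyset failed on. Assumes a kubeconfig at $HOME/.kube/config pointing at
// the affected cluster.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build a REST mapper from the cluster's discovery info; a kind only
	// resolves here if its CRD is actually installed on the cluster.
	groupResources, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groupResources)

	// The two lookups that fail in the apply log above.
	lookups := []struct {
		gk      schema.GroupKind
		version string
	}{
		{schema.GroupKind{Group: "karpenter.k8s.aws", Kind: "AWSNodeTemplate"}, "v1alpha1"},
		{schema.GroupKind{Group: "karpenter.sh", Kind: "NodePool"}, "v1beta1"},
	}
	for _, l := range lookups {
		if _, err := mapper.RESTMapping(l.gk, l.version); err != nil {
			fmt.Printf("missing mapping for %s/%s %s: %v\n", l.gk.Group, l.version, l.gk.Kind, err)
		} else {
			fmt.Printf("mapping found for %s/%s %s\n", l.gk.Group, l.version, l.gk.Kind)
		}
	}
}
```

Against the test cluster this should keep reporting both mappings as missing until the CRDs that the 0.32.x manifests expect are installed, which matches the "CRDs need updating" read above.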
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@moshevayner: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kops-e2e-aws-upgrade-k127-ko127-to-k128-kolatest-karpenter | 3a76e1ffd710fefb5476b66b5119b56663fba1de | link | false | /test pull-kops-e2e-aws-upgrade-k127-ko127-to-k128-kolatest-karpenter |
| pull-kops-e2e-aws-karpenter | 3a76e1ffd710fefb5476b66b5119b56663fba1de | link | true | /test pull-kops-e2e-aws-karpenter |
| pull-kops-e2e-k8s-aws-calico-k8s-infra | 3a76e1ffd710fefb5476b66b5119b56663fba1de | link | true | /test pull-kops-e2e-k8s-aws-calico-k8s-infra |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten