[Kwokctl]: Added kube-controller-manager certificate for kind and binary runtime
What type of PR is this?
/kind feature
What this PR does / why we need it:
Currently, all kwokctl components (etcd, kube-apiserver, kube-controller-manager, kwok-controller) share the admin certificate, which differs from kubeadm's behavior, and we try to stay as close to kubeadm's behavior as possible. So I have generated a component-specific certificate for the kube-controller-manager component, for the kind and binary runtimes for now.
After this review I will add certificates for all the remaining components, covering the other runtimes as well.
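For reference, below is a minimal sketch of the approach, using Go's standard crypto/x509 API rather than kwok's actual GeneratePki code: issue a dedicated client certificate signed by the cluster CA, with the CN that kubeadm assigns to the controller manager (`system:kube-controller-manager`), so RBAC ties the credential to the component role rather than to the admin identity.

```go
// Sketch only: issue a component-specific client cert for the controller
// manager, signed by a CA. The throwaway CA below stands in for the
// cluster CA that kwokctl already generates.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Throwaway CA (in kwokctl this would be the existing cluster CA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "kwok-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Component-specific client cert; kubeadm uses exactly this CN for the
	// controller manager's kubeconfig credential.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "system:kube-controller-manager"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```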
Which issue(s) this PR fixes:
Fixes #878
Special notes for your reviewer:
Hi @wzshiming, I need a quick review on this implementation, as I am going to add component-specific certs for the other kwokctl components that currently share the admin cert.
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Manoramsharma. Once this PR has been reviewed and has the lgtm label, please assign wzshiming for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @Manoramsharma. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Deploy Preview for k8s-kwok canceled.
| Name | Link |
|---|---|
| Latest commit | 6b0f7ca9453fab7c423250afa28d606147e248ce |
| Latest deploy log | https://app.netlify.com/sites/k8s-kwok/deploys/665dd1229a49d2000862a258 |
Hi @wzshiming, I have tried to add a component-specific cert for kube-controller-manager. I handled the generation of the component-specific cert by changing the GeneratePki function (you can see the changes), and I changed the configuration parameters of kube-controller-manager for the binary runtime by passing kubeControllerManagerKeyPath and kubeControllerManagerCertPath to the addKubeControllerManager function, but I am seeing some failed checks. Can you please give me a quick review to help me figure out what I am missing?
Note: For the container runtimes (Docker, Podman, nerdctl, Lima) I haven't changed anything for now. A sketch of the binary-runtime wiring follows below.
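For the binary runtime, the wiring looks roughly like the sketch below: build the controller-manager kubeconfig from the component cert instead of the admin one. The path parameters mirror the kubeControllerManagerCertPath/kubeControllerManagerKeyPath names from this PR's description; the helper itself is hypothetical, not kwok's actual code.

```go
// Hypothetical helper: write a kubeconfig whose client credential is the
// component-specific cert, then point kube-controller-manager's
// --kubeconfig flag at the resulting file.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func writeControllerManagerKubeconfig(server, caPath, certPath, keyPath, out string) error {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kwok"] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: caPath,
	}
	cfg.AuthInfos["system:kube-controller-manager"] = &clientcmdapi.AuthInfo{
		ClientCertificate: certPath, // e.g. kubeControllerManagerCertPath
		ClientKey:         keyPath,  // e.g. kubeControllerManagerKeyPath
	}
	cfg.Contexts["kwok"] = &clientcmdapi.Context{
		Cluster:  "kwok",
		AuthInfo: "system:kube-controller-manager",
	}
	cfg.CurrentContext = "kwok"
	return clientcmd.WriteToFile(*cfg, out)
}

func main() {
	// Example paths only; kwokctl lays these out per-cluster.
	_ = writeControllerManagerKubeconfig(
		"https://127.0.0.1:6443",
		"/etc/kubernetes/pki/ca.crt",
		"/etc/kubernetes/pki/kube-controller-manager.crt",
		"/etc/kubernetes/pki/kube-controller-manager.key",
		"/etc/kubernetes/controller-manager.conf",
	)
}
```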
/ok-to-test
You can try to reproduce and debug the failed tests on your machine; please take a look at the logs.
https://github.com/kubernetes-sigs/kwok/actions/runs/9338902266/job/25702619667?pr=1130
```
=== RUN TestHack/Hack_Data
--- FAIL: TestHack (0.00s)
--- FAIL: TestHack/Hack_Data (0.00s)
panic: envconfig: client failed: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined [recovered]
panic: envconfig: client failed: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
goroutine 38 [running]:
testing.tRunner.func1.2({0x16efa80, 0xc0001264c0})
	/opt/hostedtoolcache/go/1.22.3/x64/src/testing/testing.go:1631 +0x24a
testing.tRunner.func1()
	/opt/hostedtoolcache/go/1.22.3/x64/src/testing/testing.go:1634 +0x377
panic({0x16efa80?, 0xc0001264c0?})
	/opt/hostedtoolcache/go/1.22.3/x64/src/runtime/panic.go:770 +0x132
sigs.k8s.io/e2e-framework/pkg/envconf.(*Config).Client(0xc000545c80)
	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/envconf/config.go:139 +0xc5
sigs.k8s.io/kwok/test/e2e.CaseHack.CreateNode.func2({0x1c51350, 0xc00011a120}, 0xc00012e340, 0x1?)
	/home/runner/work/kwok/kwok/test/e2e/helper/utils.go:63 +0x45
sigs.k8s.io/e2e-framework/pkg/env.(*testEnv).executeSteps(0xc000248180, {0x1c51350?, 0xc00011a120?}, 0xc00012e340, {0xc0001264b0?, 0x5413ed?, 0x243ac60?})
	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/env/env.go:444 +0x8b
sigs.k8s.io/e2e-framework/pkg/env.(*testEnv).processTestFeature.(*testEnv).execFeature.func1(0xc00012e340)
	/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/env/env.go:458 +0x166
testing.tRunner(0xc00012e340, 0xc000248720)
	/opt/hostedtoolcache/go/1.22.3/x64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 37
	/opt/hostedtoolcache/go/1.22.3/x64/src/testing/testing.go:1742 +0x390
FAIL sigs.k8s.io/kwok/test/e2e/kwokctl/docker 39.084s
FAIL
```
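For what it's worth, the panic comes from sigs.k8s.io/e2e-framework's envconf: when the test environment resolves no kubeconfig, client-go falls back to in-cluster configuration, which requires the KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT service-account variables that only exist inside a pod. A minimal sketch of that fallback, using the standard client-go API (an illustration of the mechanism, not the e2e-framework source):

```go
// Sketch: reproduce the "must be defined" error outside a cluster.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// If a kubeconfig path is available, client construction succeeds.
	if path := os.Getenv("KUBECONFIG"); path != "" {
		cfg, err := clientcmd.BuildConfigFromFlags("", path)
		fmt.Println(cfg.Host, err)
		return
	}
	// Otherwise in-cluster config is tried; outside a pod this fails with
	// "unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST
	// and KUBERNETES_SERVICE_PORT must be defined".
	_, err := rest.InClusterConfig()
	fmt.Println(err)
}
```

So the usual fix is to make sure the test run resolves the cluster's kubeconfig (e.g. via KUBECONFIG) before the framework constructs a client.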
@Manoramsharma: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-kwok-verify-main | 6b0f7ca9453fab7c423250afa28d606147e248ce | link | true | /test pull-kwok-verify-main |
| pull-kwok-e2e-test-main | 6b0f7ca9453fab7c423250afa28d606147e248ce | link | true | /test pull-kwok-e2e-test-main |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.