controller-tools
Connections to Webhook server refused
I was following https://book.kubebuilder.io/cronjob-tutorial/running-webhook.html on how to create/deploy a webhook. My code for ValidateCreate/ValidateUpdate/ValidateDelete is still empty, all methods returning (nil, nil) (see the sketch below the error output). When I run make install and make deploy against the cluster (a local kind cluster), the API server gets connection refused when it attempts to call the webhook. Also, when I curl/wget from a pod that has access to the webhook service, the host is found, but the connection times out with no response. One of the things I have created is an extra Certificate CR with cert-manager, which I link to the webhook through CA injection (or at least I believe this is what kustomize provides in the end, as it does some black magic substitution behind the scenes). Here are some of the manifests:

manager_webhook_patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: manager
        ports:
        - containerPort: 9443
          name: webhook-server
          protocol: TCP
        volumeMounts:
        - mountPath: /tmp/k8s-webhook-server/webhook-certs
          name: cert
          readOnly: true
      volumes:
      - name: cert
        secret:
          defaultMode: 420
          secretName: webhook-cert
The webhook Certificate (connected to a self-signed ClusterIssuer); here I manually changed the dnsNames before kustomize substitutes them, but those are the values it should generate:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: certificate
    app.kubernetes.io/instance: webhook-cert
    app.kubernetes.io/component: certificate
    app.kubernetes.io/created-by: operator
    app.kubernetes.io/part-of: operator
    app.kubernetes.io/managed-by: kustomize
  name: webhook-cert  # this name should match the one that appears in kustomizeconfig.yaml
  namespace: dev
spec:
  # SERVICE_NAME and SERVICE_NAMESPACE will be substituted by kustomize
  isCA: true
  dnsNames:
  - operator-webhook-service.operator-system.svc
  - operator-webhook-service.operator-system.svc.cluster.local
  issuerRef:
    kind: ClusterIssuer
    name: selfsigned-cluster-issuer
  secretName: webhook-cert  # this secret will not be prefixed, since it's not managed by kustomize
.../webhook/service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service
    app.kubernetes.io/instance: webhook-service
    app.kubernetes.io/component: webhook
    app.kubernetes.io/created-by: operator
    app.kubernetes.io/part-of: operator
    app.kubernetes.io/managed-by: kustomize
  name: webhook-service
  namespace: system
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 9443
  selector:
    control-plane: controller-manager
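For context, the Service's targetPort has to line up with the manager's webhook server. Below is a rough sketch of the relevant part of the scaffolded main.go, not the exact file (it assumes controller-runtime >= v0.15, and the API import path is a placeholder); unless overridden, controller-runtime's webhook server listens on 9443 and reads tls.crt/tls.key from /tmp/k8s-webhook-server/serving-certs:

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	taskv1 "example.com/operator/api/v1" // placeholder module path for the HelloApp API
)

var scheme = runtime.NewScheme()

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(taskv1.AddToScheme(scheme))
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme: scheme,
		// controller-runtime defaults, written out explicitly: port 9443
		// (matches targetPort in webhook-service) and tls.crt/tls.key read
		// from /tmp/k8s-webhook-server/serving-certs.
		WebhookServer: webhook.NewServer(webhook.Options{
			Port:    9443,
			CertDir: "/tmp/k8s-webhook-server/serving-certs",
		}),
	})
	if err != nil {
		os.Exit(1)
	}

	// Register the HelloApp webhook endpoints with the manager's webhook server.
	if err := (&taskv1.HelloApp{}).SetupWebhookWithManager(mgr); err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}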
This is the error I get when I run kubectl apply -f config/sample/helloapp_sample.yaml:
Name: "helloapp-sample", Namespace: "dev" for: "task.com_v1_helloapp.yaml": error when patching "task.com_v1_helloapp.yaml": Internal error occurred: failed calling webhook "vhelloapp.kb.io": failed to call webhook: Post "https://operator-webhook-service.operator-system.svc:443/validate-task-com-task-com-v1-helloapp?timeout=10s": dial tcp 10.96.205.104:443: connect: connection refused
I see the kubebuilder project redirected me here. I don't think this is the place for this potential bug, so I am closing the issue.
Actually, this is probably the place, as it's most likely a webhook configuration issue.
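If it helps to narrow this down, a small in-cluster probe along these lines (just a sketch; the Service DNS name is copied from the error above) separates the two failure modes you are seeing: it first dials the webhook Service over plain TCP and then attempts a TLS handshake.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the failing webhook call in the error message.
	addr := "operator-webhook-service.operator-system.svc:443"

	// Plain TCP dial: "connection refused" usually means nothing is listening
	// behind the Service (no ready endpoints / wrong targetPort), while a
	// timeout usually points at dropped traffic (network policy, wrong selector).
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed:", err)
		return
	}
	defer conn.Close()

	// TLS handshake with verification disabled: we only want to know whether
	// the webhook server serves a certificate at all, not whether the chain is valid.
	tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
	tlsConn.SetDeadline(time.Now().Add(5 * time.Second))
	if err := tlsConn.Handshake(); err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	fmt.Println("TLS handshake OK; server certificate subject:",
		tlsConn.ConnectionState().PeerCertificates[0].Subject)
}

Run from any pod in the cluster (for example via kubectl run with a Go image), this tells you whether the webhook server is not listening on the target port at all, or whether something in between is dropping the traffic.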
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.