cluster-api-provider-openstack
Noisy repeating `webhook` logs for `capo-controller-manager`
/kind bug
What steps did you take and what happened:
Create a cluster using ClusterClass with a managed topology. The controllers appear to hit the API endpoint frequently, which triggers the following log messages repeatedly:
capo-system/capo-controller-manager-85dd599d44-k68bp[manager]: I0615 17:26:07.288416 1 http.go:96] "controller-runtime/webhook/webhooks: received request" webhook="/validate-infrastructure-cluster-x-k8s-io-v1alpha6-openstackmachinetemplate" UID=f3b62f25-5618-4078-85a3-50e872dbd36c kind="infrastructure.cluster.x-k8s.io/v1alpha6, Kind=OpenStackMachineTemplate" resource={Group:infrastructure.cluster.x-k8s.io Version:v1alpha6 Resource:openstackmachinetemplates}
capo-system/capo-controller-manager-85dd599d44-k68bp[manager]: I0615 17:26:07.289025 1 http.go:143] "controller-runtime/webhook/webhooks: wrote response" webhook="/validate-infrastructure-cluster-x-k8s-io-v1alpha6-openstackmachinetemplate" code=200 reason= UID=f3b62f25-5618-4078-85a3-50e872dbd36c allowed=true
A search of our centralized logs for validate-infrastructure-cluster-x-k8s-io in one environment matched almost 1000 entries per hour.
What did you expect to happen:
Less noise :)
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Cluster API Provider OpenStack version (or `git rev-parse HEAD` if manually built):
- Cluster-API version:
- OpenStack version:
- Minikube/KIND version:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
Looks like this log is not from the CAPO repo itself; maybe it comes from the cert-manager code? Anyway, are you running with --v=5 or the default log level?
We're using the default log level in our case.
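For context on the log-level question above: controller-runtime routes its webhook server's logging through whatever logger the manager is configured with, so the verbosity flag passed to the manager binary decides whether these per-request messages surface. The snippet below is a minimal, hypothetical sketch (not CAPO's actual main.go) of a klog-backed controller-runtime manager, where the `--v` flag mentioned above governs library verbosity.

```go
// Minimal sketch only, not CAPO's actual entrypoint: a controller-runtime
// manager whose logging is routed through klog, so klog's -v flag governs
// the verbosity of messages emitted by the library (including the webhook
// server's per-request "received request" / "wrote response" lines).
package main

import (
	"flag"

	"k8s.io/klog/v2"
	"k8s.io/klog/v2/klogr"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Register klog's flags (including -v) and parse the command line.
	klog.InitFlags(nil)
	flag.Parse()

	// Hand the klog-backed logr.Logger to controller-runtime so all of its
	// components, webhooks included, log through klog.
	ctrl.SetLogger(klogr.New())

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		klog.Fatalf("unable to create manager: %v", err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		klog.Fatalf("problem running manager: %v", err)
	}
}
```

Whether controller-runtime emits these particular webhook messages at the default verbosity or only at a higher `-v` depends on the controller-runtime version; that is worth confirming against the release CAPO vendors.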
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.