cluster-api-provider-openstack

Noisy repeating `webhook` logs for `capo-controller-manager`

Open · mnaser opened this issue 2 years ago • 4 comments

/kind bug

What steps did you take and what happened: Create a cluster using ClusterClass with a managed topology. The controllers appear to hit the webhook endpoint frequently, which triggers the following log messages repeatedly:

capo-system/capo-controller-manager-85dd599d44-k68bp[manager]: I0615 17:26:07.288416       1 http.go:96] "controller-runtime/webhook/webhooks: received request" webhook="/validate-infrastructure-cluster-x-k8s-io-v1alpha6-openstackmachinetemplate" UID=f3b62f25-5618-4078-85a3-50e872dbd36c kind="infrastructure.cluster.x-k8s.io/v1alpha6, Kind=OpenStackMachineTemplate" resource={Group:infrastructure.cluster.x-k8s.io Version:v1alpha6 Resource:openstackmachinetemplates}
capo-system/capo-controller-manager-85dd599d44-k68bp[manager]: I0615 17:26:07.289025       1 http.go:143] "controller-runtime/webhook/webhooks: wrote response" webhook="/validate-infrastructure-cluster-x-k8s-io-v1alpha6-openstackmachinetemplate" code=200 reason= UID=f3b62f25-5618-4078-85a3-50e872dbd36c allowed=true
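For reference, these lines appear to be written at verbosity V(1), so whether they show up should depend on the -v / --v value the manager runs with. Below is a minimal klog sketch of that gating, assuming (as the I0615... klog-style prefix suggests) that the manager routes its logs through klog; it is an illustration, not CAPO's actual wiring:

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog registers its -v flag on the default FlagSet; this is the
	// knob the manager container's -v / --v argument controls.
	klog.InitFlags(nil)
	_ = flag.Set("v", "0")
	flag.Parse()

	klog.Info("shown at any verbosity")
	// Suppressed at -v=0; if the webhook request/response lines are
	// emitted at V(1), this is the level they sit at.
	klog.V(1).Info("received request")
	klog.Flush()
}
```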

A search of our centralized logs for validate-infrastructure-cluster-x-k8s-io matched almost 1,000 entries per hour in a single environment.

What did you expect to happen:

Less noise :)

Anything else you would like to add:

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built):
  • Cluster-API version:
  • OpenStack version:
  • Minikube/KIND version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):

mnaser · Jun 15 '23 17:06

It looks like this log is not from the CAPO repo itself; maybe it is from the cert-manager code? Anyway, are you running with --v=5 or the default log level?

jichenjc · Jun 20 '23 06:06

> It looks like this log is not from the CAPO repo itself; maybe it is from the cert-manager code? Anyway, are you running with --v=5 or the default log level?

We're using the default log level in our case.
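(Side note: the http.go source location and the controller-runtime/webhook/webhooks logger name point at controller-runtime's webhook server rather than cert-manager or CAPO code.) One way to double-check which verbosity the manager is actually running with is to read the container args off the Deployment. A rough client-go sketch, with the namespace and deployment name taken from the logs above:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the local kubeconfig (error handling kept minimal for the sketch).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and deployment name come from the log lines in this issue.
	d, err := cs.AppsV1().Deployments("capo-system").
		Get(context.Background(), "capo-controller-manager", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print each container's args; any -v / --v flag shows up here.
	for _, c := range d.Spec.Template.Spec.Containers {
		fmt.Printf("%s: %v\n", c.Name, c.Args)
	}
}
```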

mnaser · Jun 28 '23 15:06

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jan 23 '24 12:01

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Feb 22 '24 12:02

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Mar 23 '24 13:03

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Mar 23 '24 13:03