
[Question] Integration with tracing

Open STRRL opened this issue 3 years ago • 13 comments

Hi! I found that the feature "API Server Tracing" has been available as alpha since Kubernetes v1.22, and this blog mentioned that a simple patch could enable tracing on controller-runtime as well.

I think tracing integration would be a powerful way to enhance the observability of controller-runtime and the many operators built on it.

Is this feature on the roadmap? I am very interested in building it.

STRRL avatar Apr 25 '22 12:04 STRRL

related PR: https://github.com/kubernetes-sigs/controller-runtime/pull/1211

STRRL avatar May 03 '22 07:05 STRRL

Thanks @STRRL. I'm not sure; the patch example in the blog adds an otelhttp handler on top of the existing webhook server. Is that all we have to do?
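
If I read the blog correctly, the shape is roughly like the sketch below. This is my assumption of what the patch does, not the actual patch; the helper name and the span name are placeholders.

```go
// Rough sketch (not the actual patch): put an otelhttp handler in front of a
// webhook endpoint so each admission request gets a server span and picks up
// the trace context sent by the API server.
import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	ctrl "sigs.k8s.io/controller-runtime"
)

// registerTracedWebhook is a made-up helper name for illustration only.
func registerTracedWebhook(mgr ctrl.Manager, path string, hook http.Handler) {
	mgr.GetWebhookServer().Register(path, otelhttp.NewHandler(hook, "webhook"+path))
}
```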

FillZpp avatar May 05 '22 06:05 FillZpp

Is that all we have to do?

No.

IMO, besides the webhook server, there are other components that need tracing integration as well, such as:

  • Controller and Reconciler
  • Client provided by controller-runtime
  • Webhook server
  • Logger/Contextual Logging

The first three could probably be handled by otelhttp plus proper context propagation. The fourth might need upstream updates in logr, but we could still provide a customized logr.LogSink implementation as a preview.
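
To make the client part concrete, here is a minimal sketch of what I have in mind, just an assumption rather than a design; `newTracedClient` is a made-up name. The idea is to wrap the rest.Config transport with otelhttp so every request made from a reconciliation's ctx records a client span and carries the trace context.

```go
// Sketch only: wrap the rest.Config transport with otelhttp so every client
// request gets a client span and injects the current trace context into the
// outgoing headers (which API Server Tracing can then continue).
import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newTracedClient is a made-up helper name for illustration only.
func newTracedClient(cfg *rest.Config, opts client.Options) (client.Client, error) {
	cfg.Wrap(func(rt http.RoundTripper) http.RoundTripper {
		return otelhttp.NewTransport(rt)
	})
	return client.New(cfg, opts)
}
```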

STRRL avatar May 05 '22 09:05 STRRL

I do understand that tracing the webhook server may help users find out the time cost of a request to the apiserver. But I don't understand what we should trace for the controller or reconciler, since they all work asynchronously. Are you going to trace each object from list/watch to reconcile?

FillZpp avatar May 05 '22 09:05 FillZpp

For almost all controllers/operators based on controller-runtime, the Reconciler is the most important part, as it contains their core business logic. I think there is no reason to leave it out of tracing.

But I don't understand what we should trace for the controller or reconciler, since they all work asynchronously. Are you going to trace each object from list/watch to reconcile?

I have not thought through how a tracing context/span would propagate through the api-server and etcd; it might work, or it might not. I am also not sure whether "finding out which previous reconciliation a reconciliation relates to" is practical even in theory, because the current status is the aggregation of all previous updates, so the propagation of different tracing contexts/spans would inevitably overlap. I think this should be clarified when we actually design the tracing integration.
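
Just to illustrate one possible shape, not a proposal: the span context of the reconciliation that performed an update could be carried in an object annotation using the standard W3C propagator, and a later reconciliation could link back to it. The annotation prefix below is made up, and the overlap problem described above is not addressed here.

```go
// Purely illustrative, not a proposal. The annotation prefix is a placeholder.
import (
	"context"
	"strings"

	"go.opentelemetry.io/otel/propagation"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const annotationPrefix = "trace.example.io/" // placeholder

var propagator = propagation.TraceContext{}

// injectTraceContext stores the current span context in the object's annotations.
func injectTraceContext(ctx context.Context, obj metav1.Object) {
	carrier := propagation.MapCarrier{}
	propagator.Inject(ctx, carrier)
	ann := obj.GetAnnotations()
	if ann == nil {
		ann = map[string]string{}
	}
	for _, k := range carrier.Keys() {
		ann[annotationPrefix+k] = carrier.Get(k)
	}
	obj.SetAnnotations(ann)
}

// extractTraceContext restores a span context previously stored by injectTraceContext.
func extractTraceContext(ctx context.Context, obj metav1.Object) context.Context {
	carrier := propagation.MapCarrier{}
	for k, v := range obj.GetAnnotations() {
		if strings.HasPrefix(k, annotationPrefix) {
			carrier.Set(strings.TrimPrefix(k, annotationPrefix), v)
		}
	}
	return propagator.Extract(ctx, carrier)
}
```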

On the other hand, tracing only the operations inside a single reconciliation is already very useful (see the sketch after this list):

  • what kind of event triggered this reconciliation
  • then, which resources were modified/created/deleted
  • maybe other kinds of APIs invoked during the reconciliation
    • for Chaos Mesh, it would invoke chaos-daemon to inject chaos via gRPC
    • for cloud-provider-related controllers, it would invoke the cloud provider's OpenAPI
    • etc.
  • whether this reconciliation "wins" the optimistic lock when updating resources
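
As a sketch of what that could look like per reconciliation, under my assumptions rather than an agreed design (`tracedReconciler` and the attribute keys are placeholders):

```go
// Sketch only: a wrapper that opens one span per reconciliation and records
// the points listed above. Names and attribute keys are placeholders.
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// tracedReconciler wraps any reconcile.Reconciler with an OpenTelemetry span.
type tracedReconciler struct {
	inner reconcile.Reconciler
}

func (t *tracedReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	ctx, span := otel.Tracer("controller-runtime").Start(ctx, "Reconcile")
	defer span.End()
	span.SetAttributes(
		attribute.String("k8s.namespace", req.Namespace),
		attribute.String("k8s.name", req.Name),
	)

	res, err := t.inner.Reconcile(ctx, req)
	if err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
		// Losing the optimistic lock surfaces as a Conflict error from Update.
		span.SetAttributes(attribute.Bool("k8s.update.conflict", apierrors.IsConflict(err)))
	}
	return res, err
}
```

Calls made through a traced client (see the earlier sketch) would then show up as child spans of this reconciliation span, including gRPC or cloud provider calls if those clients are instrumented as well.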

Are you going to trace each object from list/watch to reconcile?

Based on the discussion above, I want to trace every single reconciliation. I am not sure yet, but I lean toward yes.

STRRL avatar May 05 '22 12:05 STRRL

I have been struggling to profile the performance of the Chaos Mesh controller-manager in recent days. It has made me focus much more on tracing for Kubernetes operators.

I will start working on this issue in the next several weeks.

STRRL avatar Jul 22 '22 01:07 STRRL

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 20 '22 01:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 19 '22 02:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 19 '22 02:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 19 '22 02:12 k8s-ci-robot

Can this be re-opened?

mjnovice avatar Apr 08 '24 17:04 mjnovice

/reopen /remove-lifecycle rotten

sbueringer avatar Apr 08 '24 18:04 sbueringer

@sbueringer: Reopened this issue.

In response to this:

/reopen /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 08 '24 18:04 k8s-ci-robot