controller-runtime
                        [Question] Integration with tracing
Hi! I found that the "API Server Tracing" feature has been available as alpha since Kubernetes v1.22, and this blog mentioned that a simple patch could enable tracing on controller-runtime as well.
I think integration with tracing would be a powerful tool to enhance the observability of controller-runtime and the many operators built on it.
Is this feature on the roadmap? I am very interested in building it.
related PR: https://github.com/kubernetes-sigs/controller-runtime/pull/1211
Thanks @STRRL. I'm not sure; the patch example in the blog adds an otelhttp handler on top of the existing webhook server. Is that all we have to do?
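If it helps, the essence of that kind of patch is just wrapping the webhook HTTP handler in otelhttp. A minimal standalone sketch (using a plain net/http mux with a made-up /validate path, not the actual controller-runtime webhook server):

```go
package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/validate", func(w http.ResponseWriter, r *http.Request) {
		// r.Context() carries the span started by otelhttp below, including a
		// remote parent if the apiserver sent a traceparent header.
		w.WriteHeader(http.StatusOK)
	})

	// Wrapping the mux is the whole "simple patch": every incoming admission
	// request is recorded as a server span.
	handler := otelhttp.NewHandler(mux, "webhook-server")
	if err := http.ListenAndServe(":9443", handler); err != nil {
		panic(err)
	}
}
```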
Is that all we have to do?
No.
IMO, there are several components that need integration with tracing, including the webhook server:
- Controller and Reconciler
- Client provided by controller-runtime
- Webhook server
- Logger/Contextual Logging

The first three might be handled by otelhttp plus proper context propagation. The fourth might need upstream updates in logr, but we could still provide a customized logr.LogSink implementation as a preview.
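For the client part, a rough sketch of what "handled by otelhttp" could mean (just an illustration, not a final design): wrap the rest.Config transport before creating the manager, so every apiserver call gets a client span and carries a traceparent header.

```go
package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	cfg := ctrl.GetConfigOrDie()

	// WrapTransport hooks the HTTP round tripper used for every apiserver
	// call; otelhttp.NewTransport starts a client span per request and
	// injects the traceparent header so API Server Tracing can pick it up.
	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
		return otelhttp.NewTransport(rt)
	}

	mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
	if err != nil {
		panic(err)
	}
	_ = mgr // the manager's client and caches now use the wrapped transport
}
```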
I do understand that tracing the webhook server may help users find out the time cost of a request to the apiserver. But I don't understand what we should trace for the controller or reconciler, since they all work asynchronously. Are you going to trace each object from list/watch to reconcile?
For almost all controllers/operators built on controller-runtime, the Reconciler is the most important part; it contains their core business logic. I think there is no reason to leave it out of tracing.
But I don't understand what we should trace for the controller or reconciler, since they all work asynchronously. Are you going to trace each object from list/watch to reconcile?
I have not thought through how a tracing context/span would propagate through the apiserver and etcd; it might work, or it might not. I am also not sure whether "finding out which previous reconciliation a given reconciliation relates to" is practical even in theory: the current status is the aggregation of all previous updates, so the propagation of different tracing contexts/spans would inevitably overlap. I think this should be clarified when we actually design the tracing integration.
On the other hand, tracing only the operations inside a single reconciliation is already very useful (see the sketch after this list):
- what kind of event triggered this reconciliation
- which resources are then modified/created/deleted
- other kinds of APIs invoked during the reconciliation
  - for Chaos Mesh, it invokes chaos-daemon via gRPC to inject chaos
  - for cloud-provider-related controllers, it invokes the cloud provider's OpenAPI
  - etc.
- whether this reconciliation "wins" the optimistic lock when updating resources
Are you going to trace each object from list/watch to reconcile?
Based on the discussion above, I want to trace each single reconciliation. As for tracing from list/watch to reconcile, I am not sure, but I lean toward yes.
I have been struggling with profiling the performance of the Chaos Mesh controller-manager in recent days, which has made me focus much more on tracing for Kubernetes operators.
I will start working on this issue in the next several weeks.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can this be re-opened?
/reopen
/remove-lifecycle rotten
@sbueringer: Reopened this issue.
In response to this:
/reopen
/remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.