rest: misuse of http.DefaultClient in HTTPClientFor
Hello,
I am opening this issue to discuss a potential misuse of `http.DefaultClient` in `rest.HTTPClientFor`. The function appears to assume that `http.DefaultClient` always uses `http.DefaultTransport`, which may not be the case.
For context, this is the function in question: https://github.com/kubernetes/client-go/blob/0cde78477a6d3ec3682b922654942a9a21f3a9eb/rest/transport.go#L38-L45
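For reference, the check at those lines looks roughly like this (paraphrased from the linked revision; see the link above for the exact code):

```go
// Reuse http.DefaultClient only when the resolved transport is exactly
// http.DefaultTransport and no client timeout was requested.
if transport != http.DefaultTransport || config.Timeout > 0 {
	httpClient = &http.Client{
		Transport: transport,
		Timeout:   config.Timeout,
	}
} else {
	httpClient = http.DefaultClient
}
```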
Take this code as an example:

```go
package main

import "net/http"

func init() {
	// Change the default client's transport to something custom.
	http.DefaultClient.Transport = &http.Transport{
		// ...
	}
}

func main() {
	// ...
}
```
In this example, invocations of `rest.HTTPClientFor` that resolve the transport to `http.DefaultTransport` will mistakenly assume that they can reuse the default client: `http.Client` only falls back to `http.DefaultTransport` when its `Transport` field is nil, so once `http.DefaultClient.Transport` has been set, the configuration of `http.DefaultClient` may no longer match `http.DefaultTransport`.
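To make the failure mode concrete, here is a hypothetical, self-contained sketch. The `Host` value and the assumption that such a bare config resolves to `http.DefaultTransport` are mine, not taken from the linked code; under that assumption, `HTTPClientFor` hands back `http.DefaultClient`, which now uses the custom transport rather than the one resolved from the config:

```go
package main

import (
	"fmt"
	"net/http"

	"k8s.io/client-go/rest"
)

func init() {
	// As in the example above: replace the default client's transport.
	http.DefaultClient.Transport = &http.Transport{}
}

func main() {
	// Hypothetical config, assumed to resolve to http.DefaultTransport
	// (no TLS, auth, dialer, or proxy customization).
	cfg := &rest.Config{Host: "http://localhost:8080"}

	client, err := rest.HTTPClientFor(cfg)
	if err != nil {
		panic(err)
	}

	// HTTPClientFor reuses http.DefaultClient because the resolved transport
	// compares equal to http.DefaultTransport...
	fmt.Println(client == http.DefaultClient)

	// ...yet requests made with this client go through the replacement
	// transport from init(), not http.DefaultTransport.
	fmt.Println(client.Transport == http.DefaultTransport)
}
```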
The check to fall back to `http.DefaultClient` in `rest.HTTPClientFor` seems to be intended as an optimization. Fixing the implementation while keeping it could look like this:
```go
if (transport != http.DefaultClient.Transport &&
	!(transport == http.DefaultTransport && http.DefaultClient.Transport == nil)) ||
	config.Timeout > 0 {
	// The resolved transport does not match what http.DefaultClient actually
	// uses (its Transport field, or http.DefaultTransport when that field is
	// nil), or a timeout was requested: build a dedicated client.
	httpClient = &http.Client{
		Transport: transport,
		Timeout:   config.Timeout,
	}
} else {
	httpClient = http.DefaultClient
}
```
However, this fix example highlights the tight coupling between the optimization and the inner workings of `http.Client`. I would suggest removing the optimization and always creating a new `http.Client`, regardless of which transport is resolved from the configuration, to simplify the code and avoid regressions that could be difficult to anticipate or track down.
To summarize, the function would be modified to always do:

```go
return &http.Client{
	Transport: transport,
	Timeout:   config.Timeout,
}
```
Happy to submit the fix if there is a consensus on the issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.