kubernetes-client
Server side apply stuck in retry
Describe the bug
When performing a server-side apply patch of a Deployment containing a container with duplicate environment variables of the same name, the API server returns status code 500 and the client retries the same request indefinitely.
Fabric8 Kubernetes Client version
6.13.5
Steps to reproduce
An example of the server-side apply using kubectl:
$ kubectl -v=10 apply --server-side -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: foo
              value: x
            - name: foo
              value: y
EOF
...
I0303 18:22:47.629465 192491 request.go:1351] Request Body: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"deployment"},"spec":{"template":{"spec":{"containers":[{"env":[{"name":"foo","value":"x"},{"name":"foo","value":"y"}],"name":"app"}]}}}}
I0303 18:22:47.629629 192491 round_trippers.go:466] curl -v -XPATCH -H "Content-Type: application/apply-patch+yaml" -H "User-Agent: kubectl/v1.31.1 (linux/arm64) kubernetes/948afe5" -H "Accept: application/json" 'https://.../apis/apps/v1/namespaces/default/deployments/deployment?fieldManager=kubectl&fieldValidation=Strict&force=false'
I0303 18:22:47.634034 192491 round_trippers.go:553] PATCH https://.../apis/apps/v1/namespaces/default/deployments/deployment?fieldManager=kubectl&fieldValidation=Strict&force=false 500 Internal Server Error in 4 milliseconds
I0303 18:22:47.634062 192491 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 4 ms Duration 4 ms
I0303 18:22:47.634065 192491 round_trippers.go:577] Response Headers:
...
I0303 18:22:47.634109 192491 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"failed to create typed patch object (default/deployment; apps/v1, Kind=Deployment): .spec.template.spec.containers[name=\"app\"].env: duplicate entries for key [name=\"foo\"]","code":500}
I0303 18:22:47.634463 192491 helpers.go:246] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "failed to create typed patch object (default/deployment; apps/v1, Kind=Deployment): .spec.template.spec.containers[name=\"app\"].env: duplicate entries for key [name=\"foo\"]",
"code": 500
}]
...
As seen here, the API server returns 500. When performing the same server-side apply using the Java client:
var deployment = new DeploymentBuilder()
    .withNewMetadata()
        .withName("admin-service")
    .endMetadata()
    .withNewSpec()
        .withNewTemplate()
            .withNewSpec()
                .addNewContainer()
                    .withName("app")
                    .addNewEnv()
                        .withName("foo")
                        .withValue("x")
                    .endEnv()
                    .addNewEnv()
                        .withName("foo")
                        .withValue("y")
                    .endEnv()
                .endContainer()
            .endSpec()
        .endTemplate()
    .endSpec()
    .build();
var pc = new PatchContext.Builder()
    .withForce(true)
    .withPatchType(PatchType.SERVER_SIDE_APPLY)
    .build();
K8S.resource(deployment).patch(pc);
The client gets stuck on the patch(pc) call because StandardHttpClient continuously retries the same REST call in shouldRetry:
if (code == 429 || code >= 500) {
  retryInterval = Math.max(retryAfterMillis(httpResponse), retryInterval);
  LOG.debug(
      "HTTP operation on url: {} should be retried as the response code was {}, retrying after {} millis",
      request.uri(), code, retryInterval);
  return retryInterval;
}
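To make the retry trap concrete, here is a standalone sketch of just the status-code check quoted above (simplified: the real StandardHttpClient also folds in the Retry-After header and the configured backoff; the class and method names here are illustrative, not the client's actual API). Because the server answers 500 rather than a 4xx, the request is classified as transient and retried on every attempt:

```java
public class RetryPredicate {
    // Mirrors the condition in the quoted snippet: 429 (Too Many Requests)
    // and every 5xx response are treated as transient and retried.
    static boolean shouldRetry(int code) {
        return code == 429 || code >= 500;
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(500)); // the duplicate-env response lands here
        System.out.println(shouldRetry(422)); // a 4xx validation error would not be retried
    }
}
```

Had the server returned 422 Unprocessable Entity for the duplicate keys, the same predicate would have let the call fail immediately instead of looping.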
Expected behavior
The call to patch(PatchContext) should fail and not be retried in this case.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
1.25.3@latest
Environment
Linux
Fabric8 Kubernetes Client Logs
Additional context
No response
The problem is that the server responds with a server error (5xx) to what is effectively a client error (4xx), which is why the client automatically retries with the backoff interval.
I'm not sure why the server returns this status code in this case, when the problem clearly lies on the client side: it is sending an invalid body.
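Until the status-code classification changes, one possible mitigation is to cap the client's built-in retries so the 500 surfaces as an exception instead of hanging. This is a sketch, assuming Config.requestRetryBackoffLimit is honored on the server-side apply PATCH path in your client version (verify before relying on it); deployment and pc are the objects from the reproduction above:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

// Assumption: setting the retry backoff limit to 0 disables the retry loop,
// so the failing PATCH throws a KubernetesClientException on the first 500.
Config config = new ConfigBuilder()
    .withRequestRetryBackoffLimit(0)
    .build();
try (KubernetesClient client = new KubernetesClientBuilder().withConfig(config).build()) {
    client.resource(deployment).patch(pc);
}
```

This only trades the hang for a fast failure; the underlying 500-vs-4xx classification still needs to be addressed server-side or in the client's retry predicate.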
This issue has been automatically marked as stale because it has not had any activity for 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!