Exception attempting to re-acquire lock
Describe the bug
After upgrading the client from 10.0.1 to 16.0.0, we get this error:
```
2022-08-24 07:43:27.500 ERROR 12 --- [eduled-worker-1] i.k.c.e.leaderelection.LeaderElector : Unexpected error on acquiring or renewing the lease
java.util.concurrent.ExecutionException: java.time.format.DateTimeParseException: Text '20220804T135255.122Z' could not be parsed at index 0
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at io.kubernetes.client.extended.leaderelection.LeaderElector.lambda$acquire$1(LeaderElector.java:162)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.time.format.DateTimeParseException: Text '20220804T135255.122Z' could not be parsed at index 0
at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2052)
at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1880)
at io.kubernetes.client.openapi.JSON$DateTypeAdapter.read(JSON.java:405)
at io.kubernetes.client.openapi.JSON$DateTypeAdapter.read(JSON.java:363)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:130)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:221)
at com.google.gson.Gson.fromJson(Gson.java:991)
at com.google.gson.Gson.fromJson(Gson.java:956)
at com.google.gson.Gson.fromJson(Gson.java:905)
at io.kubernetes.client.openapi.JSON.deserialize(JSON.java:168)
at io.kubernetes.client.extended.leaderelection.resourcelock.EndpointsLock.get(EndpointsLock.java:78)
at io.kubernetes.client.extended.leaderelection.LeaderElector.tryAcquireOrRenew(LeaderElector.java:262)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
... 3 common frames omitted
```
Client Version: 16.0.0
Kubernetes Version: 1.21 (EKS)
Java Version: Java 17
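The failure is reproducible without a cluster: the 10.0.1 client (Joda-Time based) wrote the lease timestamp in the compact "basic" ISO-8601 form, which the JDK's extended ISO formatters reject. A minimal sketch of the mismatch, using `ISO_OFFSET_DATE_TIME` as a stand-in for the formatter the newer client applies:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class LockTimestampRepro {
    public static void main(String[] args) {
        // Compact timestamp of the kind the Joda-based 10.0.1 client stored in the lock
        String legacy = "20220804T135255.122Z";
        try {
            // The JDK's extended ISO formatter expects dashes and colons, so this throws
            OffsetDateTime.parse(legacy, DateTimeFormatter.ISO_OFFSET_DATE_TIME);
        } catch (DateTimeParseException e) {
            System.out.println(e.getMessage()); // DateTimeParseException, as in the trace above
        }
        // The same instant in extended form parses fine
        System.out.println(OffsetDateTime.parse("2022-08-04T13:52:55.122Z"));
    }
}
```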
To Reproduce
Steps to reproduce the behavior:
Create a lock using client 10.0.1, then upgrade to 16.0.0 (a minimal sketch of such a setup follows).
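For reference, a minimal leader-election setup of the kind that creates this lock; the namespace, lock name, identity, and durations here are placeholders, not values from the report:

```java
import io.kubernetes.client.extended.leaderelection.LeaderElectionConfig;
import io.kubernetes.client.extended.leaderelection.LeaderElector;
import io.kubernetes.client.extended.leaderelection.resourcelock.EndpointsLock;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.util.Config;

import java.time.Duration;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        // Endpoints-backed lock, as in the stack trace (EndpointsLock.get)
        EndpointsLock lock = new EndpointsLock("default", "my-lock", "holder-1");

        LeaderElectionConfig config = new LeaderElectionConfig(
                lock,
                Duration.ofSeconds(15), // lease duration
                Duration.ofSeconds(10), // renew deadline
                Duration.ofSeconds(2)); // retry period

        // tryAcquireOrRenew reads the stored lock record here; a 10.0.1-format
        // timestamp in that record triggers the DateTimeParseException
        new LeaderElector(config).run(
                () -> System.out.println("became leader"),
                () -> System.out.println("lost leadership"));
    }
}
```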
Expected behavior
No exception.
Server (please complete the following information):
- OS: Ubuntu
- Environment: Container in EKS
- Cloud: AWS
How can I delete the lock using kubectl?
We switched from JODA to JDK date across these releases. We should probably be able to handle this, so I think that this is a bug, but I'm not sure when we will fix it.
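One plausible shape for such a fix is a fallback parser that accepts the legacy compact form when the extended form fails. A sketch; the pattern string is my reconstruction of the Joda-era output, not code from the client:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public final class LenientLockTimestamps {
    // Compact "basic" ISO-8601 form written by Joda-based clients, e.g. 20220804T135255.122Z
    private static final DateTimeFormatter BASIC =
            DateTimeFormatter.ofPattern("uuuuMMdd'T'HHmmss.SSSX");

    static OffsetDateTime parse(String text) {
        try {
            return OffsetDateTime.parse(text); // extended form: 2022-08-04T13:52:55.122Z
        } catch (DateTimeParseException e) {
            return OffsetDateTime.parse(text, BASIC); // legacy locks
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("20220804T135255.122Z"));     // legacy form
        System.out.println(parse("2022-08-04T13:52:55.122Z")); // current form
    }
}
```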
For now, you can delete the lock using kubectl delete endpoints <lock-name>
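If you would rather do it from Java than kubectl, something like this should work; the lock name and namespace are placeholders, and the trailing nulls follow the generated deleteNamespacedEndpoints signature in 16.0.0 (parameter lists differ across client versions):

```java
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.util.Config;

public class DeleteStaleLock {
    public static void main(String[] args) throws Exception {
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        // Deletes the Endpoints object backing the lock; the elector recreates
        // it with the new timestamp format on the next successful acquire
        new CoreV1Api().deleteNamespacedEndpoints(
                "my-lock", "default", // name, namespace (placeholders)
                null, null, null, null, null, null);
    }
}
```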
Thanks - that should be fine - just wanted to make sure it was logged
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.