amazon-kinesis-client
activeThreads is higher than maxActiveThreads causing stale workers
I have a Python application that uses amazon-kinesis-client-python, and I'm seeing some odd behavior:
maxActiveThreads is set to 30, but following a rebalance event in the worker cluster (pods on K8s) I see more active threads (42) than the limit I set, and the worker stops processing records for some reason...
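For reference, the limit is set in the daemon's .properties file; maxActiveThreads is the property MultiLangDaemonConfig reads (which matches the "fixed thread pool with 30 max active threads" log line below), and the other values here are placeholders for illustration:

```
# MultiLangDaemon properties (all values except maxActiveThreads are placeholders)
executableName = sample_kclpy_app.py
applicationName = my-python-consumer
streamName = my-stream
regionName = us-east-1

# Use a fixed thread pool of 30 threads instead of an unbounded cached pool
maxActiveThreads = 30
```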
Here are some logs from the worker telling the same story:
2024-08-04 01:42:11,450 [main] INFO s.a.k.m.MultiLangDaemonConfig [NONE] - Using a fixed thread pool with 30 max active threads.
2024-08-04 02:21:50,930 [multi-lang-daemon-0000] INFO s.a.k.c.DiagnosticEventLogger [NONE] - Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=22, coreThreads=0, leasesOwned=22, largestPoolSize=24, maximumPoolSize=2147483647)
2024-08-04 02:22:14,934 [LeaseCoordinator-0000] INFO s.a.k.l.dynamodb.DynamoDBLeaseTaker [NONE] - Taking leases that have been expired for a long time
2024-08-04 02:22:15,322 [LeaseCoordinator-0000] INFO s.a.k.l.dynamodb.DynamoDBLeaseTaker [NONE] - Worker [TRUNCATE] successfully took 19 leases
2024-08-04 02:22:21,512 [multi-lang-daemon-0000] INFO s.a.k.c.DiagnosticEventLogger [NONE] - Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=41, coreThreads=0, leasesOwned=41, largestPoolSize=41, maximumPoolSize=2147483647)
2024-08-04 08:56:32,955 [multi-lang-daemon-0000] INFO s.a.k.c.DiagnosticEventLogger [NONE] - Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=51, coreThreads=0, leasesOwned=26, largestPoolSize=51, maximumPoolSize=2147483647)
What can I configure so that the number of active threads stays at 30? Why doesn't the number of active threads ever go down? In the DynamoDB table I saw that the worker was later assigned 26 shards, so I don't see a reason for 41 active threads... it looks like the worker was stuck...
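For context, this is roughly how I checked the lease assignments per worker (a sketch: the table name is a placeholder, and it relies on the standard KCL lease schema where each shard is one item with a leaseOwner attribute):

```python
import boto3
from collections import Counter

# Sketch: count leases per worker in the KCL lease table.
# KCL names the table after applicationName; placeholder used here.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-python-consumer")  # placeholder

owners = Counter()
scan_kwargs = {"ProjectionExpression": "leaseKey, leaseOwner"}
while True:
    response = table.scan(**scan_kwargs)
    for item in response["Items"]:
        owners[item.get("leaseOwner", "<unassigned>")] += 1
    if "LastEvaluatedKey" not in response:
        break
    scan_kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

for worker, lease_count in owners.most_common():
    print(worker, lease_count)
```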