Excessive task_lists Updates Causing DB Connection Exhaustion
Hi,
We are using Cadence with the Java client and MySQL persistence and are observing continuous execution of the following internal query:
UPDATE task_lists SET range_id = ?, DATA = ?
Under load, this query runs repeatedly for several hours, opening a large number of database connections and eventually exhausting the DB connection pool. This leads to performance degradation and effectively chokes the database.
The query appears to be generated internally by Cadence (matching service), not by application code. We suspect task list contention, high poller concurrency, or frequent task list ownership changes.
Are there known causes or recommended limits/configurations to prevent excessive task_lists updates?
Could you share what version of Cadence you are using, what your load looks like, and what your DB/matching configurations are?
This query runs periodically for each task list to persist its ack level. You can change this dynamic config to decrease the frequency.
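A rough sketch of what that could look like with the file-based dynamic config client is below; the key name (matching.updateAckInterval) and the value format are my best understanding of the relevant knob, not verified here, so please check them against your server version before applying.

```yaml
# dynamicconfig/development.yaml — file-based dynamic config (sketch, not verified)
# Assumed key: matching.updateAckInterval, which controls how often the matching
# service persists a task list's ack level (the UPDATE task_lists query above).
# The default is on the order of a minute; raising it reduces how often the row
# is updated. Duration format ("5m") should be confirmed for your server version.
matching.updateAckInterval:
- value: 5m
  constraints: {}
```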
Could you share what version of Cadence you are using, what your load looks like, and what your DB/matching configurations are?
Versions
• Cadence server: 3.29.5
• Java client: 3.6.2
• Persistence: MySQL
Configuration (summary)
• Mostly default matching service configuration.
• Workers are started without explicit WorkerOptions, so Cadence uses default poller settings.
• Activity/workflow execution concurrency is limited at the application level (batch size 3), but poller concurrency has not been explicitly capped.
• Multiple worker pods (7) are polling the same task list.
• No application code updates task_lists directly; the query appears to be generated internally by Cadence.
Could the high number of polling workers (default pollers × 7 pods) be causing frequent task list ownership changes and excessive task_lists updates, despite low execution concurrency?
If so, are there recommended limits for workflow/activity task pollers for this setup?
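For concreteness, the kind of explicit cap we are considering is sketched below. The builder and setter names reflect our reading of the cadence-java-client WorkerOptions API and may differ in 3.6.2, so treat this as a sketch rather than verified code; the domain and task list names are placeholders.

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;
import com.uber.cadence.worker.Worker;
import com.uber.cadence.worker.WorkerFactory;
import com.uber.cadence.worker.WorkerOptions;

public class CappedWorkerStarter {

  public static void main(String[] args) {
    // "my-domain" and "my-task-list" are placeholders for our real names.
    WorkflowClient client =
        WorkflowClient.newInstance(
            new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
            WorkflowClientOptions.newBuilder().setDomain("my-domain").build());

    WorkerFactory factory = WorkerFactory.newInstance(client);

    // Cap execution concurrency explicitly instead of relying on defaults.
    // Setter names are assumptions and should be checked against client 3.6.2.
    WorkerOptions options =
        new WorkerOptions.Builder()
            .setMaxConcurrentActivityExecutionSize(3)
            .setMaxConcurrentWorkflowExecutionSize(3)
            // We have not found a per-worker poller-count setter we are sure of;
            // if the builder exposes one in 3.6.2, it would be applied here too.
            .build();

    Worker worker = factory.newWorker("my-task-list", options);
    // worker.registerWorkflowImplementationTypes(MyWorkflowImpl.class);
    // worker.registerActivitiesImplementations(new MyActivitiesImpl());
    factory.start();
  }
}
```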