Use sidekiq concurrency field to set DB_POOL
There are really two parts to this PR to consider.
The easy one is allowing deployment-sidekiq to set the DB_POOL env variable per Sidekiq worker definition. The second goes further and sets it to the worker's concurrency value rather than using an independent key. My recollection is that Sidekiq should never need more DB connections than it has threads, so this reduces connection overhead for workers with low duty cycles (e.g. a worker dedicated to one of the scheduler, mailers, or potentially ingress queues).
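Roughly, the second part amounts to the sketch below. The values layout and key names (`workers`, `concurrency`) are illustrative and may not match the chart exactly.

```yaml
# Illustrative only; the chart's actual values layout and template helpers may differ.
# values.yaml
sidekiq:
  workers:
    - name: default
      concurrency: 10
    - name: mailers
      concurrency: 1

# deployment-sidekiq.yaml, inside the range over workers
# (with "." bound to a single worker entry):
env:
  - name: DB_POOL
    value: {{ .concurrency | quote }}   # pool size follows the worker's thread count
```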
As far as I can tell this should be fine, though @renchap is more familiar with exactly how this works and can confirm whether it is alright.
I do not think this is needed; Mastodon already sets the pool size to the Sidekiq concurrency: https://github.com/mastodon/mastodon/blob/main/config/database.yml#L3
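For reference, the behaviour of that line (paraphrased from memory, so the exact expression may differ) is: use DB_POOL if set, otherwise the Sidekiq concurrency when running inside Sidekiq (or MAX_THREADS elsewhere), falling back to 5.

```yaml
# config/database.yml, paraphrased — see the link above for the exact line
default: &default
  adapter: postgresql
  pool: <%= ENV["DB_POOL"] || (Sidekiq.server? ? Sidekiq[:concurrency] : ENV["MAX_THREADS"]) || 5 %>
```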
In a vacuum, you are correct. However, deployment-sidekiq imports configmap-env, which in turn always sets DB_POOL to the largest thread count among sidekiq workers, minimum 5.
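Paraphrasing, the effect of the configmap template is roughly the following (not the chart's literal code; the structure and key names here are illustrative):

```yaml
# configmap-env.yaml, paraphrased sketch of the behaviour described above
{{- $maxConcurrency := 5 }}
{{- range .Values.sidekiq.workers }}
{{- if gt (int .concurrency) $maxConcurrency }}
{{- $maxConcurrency = int .concurrency }}
{{- end }}
{{- end }}
DB_POOL: {{ $maxConcurrency | quote }}
```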
This change was prompted by my single-thread mailers worker getting 10 connections, because that is the concurrency I give the pull worker. The configMap's env var needs to be overridden one way or another. I'm not sure whether deployments can unset an env var beyond just redefining it to "", and picking the value we actually want seemed like a cleaner solution than unsetting it and relying on an upstream implementation detail. I'm not strongly opposed to the latter, though.
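For what it's worth, Kubernetes resolves the conflict in our favour here: entries listed explicitly under `env` take precedence over anything pulled in via `envFrom`, so the per-worker override is just the following (configMap name illustrative):

```yaml
envFrom:
  - configMapRef:
      name: mastodon-env        # supplies DB_POOL from the configMap (largest worker's concurrency)
env:
  - name: DB_POOL               # explicit env entries win over envFrom
    value: "1"                  # match this worker's concurrency
```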
Looking at this with fresh eyes: if we decide to rely on the upstream line, it may be best to remove DB_POOL from the configMap altogether, since we already set MAX_THREADS in deployment-web. If DB_POOL does need to be set independently of the defaults given by MAX_THREADS and per-worker concurrency, it probably needs to be on a per-pod basis anyway.
Ha, this is unfortunate. Why don't we remove DB_POOL from the configmap rather than overriding it there? It should always have the correct value according to our database.yml; I am not sure why it was overridden in the chart in the first place.
If we want to remove DB_POOL, I just opened PR #190, which should accomplish that.
Just merged #190, which removes DB_POOL entirely, so I'm considering this closed as a result.