pgscv
Locks by user?
It would be very useful to see metrics on locks by mode and user. (Trying to find out who's responsible for a bunch of DB contention…)
Is that possible?
It's possible, but... does it really matter?

`pg_locks` exposes gauge-like stats, and stats collected from `pg_locks` (and `pg_stat_activity`, by the way) are "lossy" by design. The view shows a snapshot, and when taking such snapshots (metrics) it's unknown what happened between any two of them. Of course, we can take snapshots more often, but that doesn't solve the problem: only long-held locks will be captured, and lots of short locks stay invisible.

Second, `pg_locks` doesn't contain a user attribute, so `pg_stat_activity` has to be joined, which could slightly increase the performance impact on instances with a high number of connections and activity.
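For illustration only, a rough sketch of such a join (not the exact query pgscv would use) could look like this:

```sql
-- Rough sketch: count locks per user and lock mode by joining
-- pg_locks to pg_stat_activity on the backend PID.
SELECT a.usename,
       l.mode,
       l.granted,
       count(*) AS locks
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
GROUP BY a.usename, l.mode, l.granted;
```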
Will add the `user` label, but can't promise when.
Thanks! Totally understand about the lossiness. We have multiple components sharing a database and trying to figure out which one (or two or…) are experiencing the locks.
> We have multiple components sharing a database and trying to figure out which one (or two or…) are experiencing the locks.
In that case, try enabling `log_lock_waits` in the server configuration; it automatically logs every lock wait longer than `deadlock_timeout` (default 1s). On one hand it could increase the amount of logs (if too many lock waits occur), but on the other hand you get comprehensive logs collected by Postgres itself instead of 10s-15s Prometheus scraping. All you need to do then is analyze the Postgres logs.
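For example, a rough sketch (assumes superuser privileges; both settings take effect on a config reload):

```sql
-- Log every lock wait that lasts longer than deadlock_timeout.
ALTER SYSTEM SET log_lock_waits = on;
-- deadlock_timeout defaults to 1s; raise it if the logs get too noisy.
ALTER SYSTEM SET deadlock_timeout = '1s';
SELECT pg_reload_conf();
```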
Great point. Will do.