Track transaction time
~~I'm not sure if this is the correct way to track it (on the server, with an always-outdated timestamp), but I couldn't see an obvious way to track this on the client side, and logically a transaction must be tied to a particular server connection, right?~~ Now tracking on the client side.
Closes https://github.com/postgresml/pgcat/issues/114 (total_query_time is already implemented).
I am testing this with

```
pgbench -i
pgbench -c 5 -j 10 -T 1800 -P 10 -S
```

and seeing monotonically increasing avg transaction time.
I am trying to figure out why. I would also have expected the tests to fail, so we probably have a coverage gap here that we'll want to address.
Ha, yes, this is definitely wrong. I added a test that runs more queries (not in a transaction), and it fails with:
```
1) Stats SHOW STATS clients connect and make one query updates *_query_time and *_wait_time
   Failure/Error: expect(results["total_xact_time"].to_i).to be_within(200).of(750)
     expected 51397 to be within 200 of 750
   # ./stats_spec.rb:50:in `block (4 levels) in <top (required)>'
```
This metric should not have increased at all during that test, since the queries ran outside a transaction.
@drdrsh could you try again, please? I'm seeing an issue when running pgbench in the dev container where both avg_query_time and avg_xact_time are zero, and I'm not sure if I'm doing something wrong in the setup there. (I don't think I broke it in this PR as I see the same on the main branch.)