Leaking of log contexts: `Re-starting finished log context`
Hello, I tried asking in your Matrix room but got no help, so..
I don't know what kind of problem this is, so I'm not opening a bug report unless you want me to.
I'm looking at `journalctl -f -u matrix-synapse.service` and this WARNING keeps repeating (my server is quite idle):
Jul 22 17:42:54 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 15:42:54,886 - synapse.logging.context - 81 - WARNING - PUT-9253 - Re-starting finished log context PUT-9253
Jul 22 17:44:43 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 15:44:43,186 - synapse.logging.context - 81 - WARNING - PUT-9295 - Re-starting finished log context PUT-9295
Jul 22 17:45:13 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 15:45:13,754 - synapse.logging.context - 81 - WARNING - PUT-9310 - Re-starting finished log context PUT-9310
Jul 22 17:45:45 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 15:45:45,020 - synapse.logging.context - 81 - WARNING - PUT-9323 - Re-starting finished log context PUT-9323
What might the issue be? Thanks
Could you pick a random instance of that message, e.g. `Re-starting finished log context PUT-9323`, and grep for every log line containing `PUT-9323`?
Also do you have the matrix-synapse-ldap3 module installed?
I tried saving the journalctl output to a file and grepping for 9323/9310/9295, but only one line was returned each time. I don't have that module installed.
Looks like the numbers are always increasing:
Jul 22 19:10:07 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 17:10:07,013 - synapse.logging.context - 81 - WARNING - PUT-11771 - Re-starting finished log context PUT-11771
Jul 22 19:11:37 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 17:11:37,615 - synapse.logging.context - 81 - WARNING - PUT-11809 - Re-starting finished log context PUT-11809
Jul 22 19:16:36 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 17:16:36,654 - synapse.logging.context - 81 - WARNING - PUT-11918 - Re-starting finished log context PUT-11918
Jul 22 19:19:01 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-22 17:19:01,275 - synapse.logging.context - 81 - WARNING - PUT-11967 - Re-starting finished log context PUT-11967
It has now reached 37K:
Jul 23 13:37:44 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-23 11:37:44,761 - synapse.logging.context - 81 - WARNING - PUT-37500 - Re-starting finished log context PUT-37500
Jul 23 13:37:44 pandora.woodpeckersnest.space matrix-synapse[90282]: 2022-07-23 11:37:44,991 - synapse.logging.context - 81 - WARNING - PUT-37501 - Re-starting finished log context PUT-37501
From your server logs, it looks like you're logging at WARNING level, which is fine for seeing when something is acting strangely, but not enough to figure out why it's acting strangely, since it doesn't say anything about everything else Synapse is doing. Could you change your logging configuration to log at INFO and repeat the grepping @squahtx suggested, please?
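For reference, that usually means editing the YAML log config that `log_config` in homeserver.yaml points at and restarting Synapse. A minimal sketch of the relevant part (file name, handler names, and the rest of the config will vary per install; keep your existing handlers/formatters as they are):

```yaml
# Excerpt only: the log config referenced by `log_config` in homeserver.yaml.
loggers:
    synapse:
        level: INFO

root:
    level: INFO
    handlers: [file]   # whatever handlers your config already defines
```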
I was able to find a couple of matches:
root@pandora:~# journalctl -u matrix-synapse.service | grep "PUT-622"
Jul 25 17:50:15 pandora.woodpeckersnest.space matrix-synapse[469462]: 2022-07-25 15:50:15,204 - synapse.access.http.8008 - 450 - INFO - PUT-622 - ::ffff:172.18.0.5 - 8008 - {@heisenbridge:woodpeckersnest.space} Processed request: 0.055sec/0.022sec (0.007sec, 0.001sec) (0.002sec/0.003sec/4) 59B 200 "PUT /_matrix/client/v3/rooms/%21ANbHEEbPZgmzFbJZXZ%3Awoodpeckersnest.space/send/m.room.message/mautrix-python_R73%40T1658764215121?user_id=@heisenbridge:woodpeckersnest.space HTTP/1.0" "mautrix-python/0.16.11 aiohttp/3.8.1 Python/3.9.13" [0 dbevts]
Jul 25 17:50:15 pandora.woodpeckersnest.space matrix-synapse[469462]: 2022-07-25 15:50:15,240 - synapse.logging.context - 81 - WARNING - PUT-622 - Re-starting finished log context PUT-622
root@pandora:~# journalctl -u matrix-synapse.service | grep "PUT-752"
Jul 25 17:56:49 pandora.woodpeckersnest.space matrix-synapse[469462]: 2022-07-25 15:56:49,259 - synapse.access.http.8008 - 450 - INFO - PUT-752 - ::ffff:172.18.0.5 - 8008 - {@hbirc_loc-bitlbee_freegamefinding:woodpeckersnest.space} Processed request: 0.047sec/0.009sec (0.013sec, 0.002sec) (0.011sec/0.007sec/5) 59B 200 "PUT /_matrix/client/v3/rooms/%21FbnkqVBVZuwLqtreVF%3Awoodpeckersnest.space/send/m.room.message/mautrix-python_R76%40T1658764609198?user_id=@hbirc_loc-bitlbee_freegamefinding:woodpeckersnest.space HTTP/1.0" "mautrix-python/0.16.11 aiohttp/3.8.1 Python/3.9.13" [0 dbevts]
Jul 25 17:56:49 pandora.woodpeckersnest.space matrix-synapse[469462]: 2022-07-25 15:56:49,299 - synapse.logging.context - 81 - WARNING - PUT-752 - Re-starting finished log context PUT-752
"freegamefinding" is an ignored user.
@roughnecks, for context: this warning is about a problem in how we track resource usage for metrics, so it doesn't have any effect on users. The only consequence is that the metrics might be a bit off, which isn't ideal.
I've looked on matrix.org and it appears we're seeing quite a lot of these as well, so I think this is a general problem.
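(For anyone wondering what a "log context" is here: Synapse attributes CPU and DB time to the request being handled by keeping a per-request context active while that request's code runs, and marks it finished once the request completes. The warning means some piece of work re-entered a context that was already marked finished. A toy illustration of the idea, not Synapse's actual `synapse.logging.context` implementation:)

```python
# Toy illustration of "Re-starting finished log context" (simplified sketch).
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s - %(message)s")
logger = logging.getLogger("synapse.logging.context")


class ToyLogContext:
    """Accumulates resource usage for one request; warns if re-entered after finishing."""

    def __init__(self, name):
        self.name = name
        self.finished = False

    def __enter__(self):
        if self.finished:
            # The situation the warning flags: work is being accounted
            # against a request that has already completed.
            logger.warning("Re-starting finished log context %s", self.name)
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


ctx = ToyLogContext("PUT-9323")
with ctx:
    pass              # the request is processed here
ctx.finished = True   # request done, usage handed off to metrics

with ctx:             # a late callback re-enters the finished context -> warning
    pass
```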
Thanks, does this still apply if I have Matrix metrics disabled?
Yeah, all the machinery is still there even if metrics are disabled; it just means you have even less reason to be concerned by the warnings.
Alright :-)
Hmph, I thought we'd fixed all of these. It would be nice to find out what's causing them. (Though as E says, it's harmless.)
(@squahtx says we didn't fix all these)
Spotted on matrix.org federation inbound 4:
2023-02-20 00:16:46,882 - synapse.crypto.keyring - 847 - WARNING - ServerKeyFetcher-12819075 - Error looking up keys from <HOST>: Failed to send request: TimeoutError: Timed out after 10s
2023-02-20 00:16:46,901 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
2023-02-20 00:17:17,806 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
2023-02-20 00:17:17,806 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
Grepping for that logcontext:
2023-02-20 00:16:36,878 - synapse.http.federation.well_known_resolver - 253 - INFO - ServerKeyFetcher-12819075- - Fetching https://example.com/.well-known/matrix/server
2023-02-20 00:16:46,880 - synapse.http.federation.well_known_resolver - 284 - INFO - ServerKeyFetcher-12819075- - Error fetching https://example.com/.well-known/matrix/server: [<twisted.python.failure.Failure twisted.internet.defer.CancelledError: >]
2023-02-20 00:16:46,882 - synapse.http.matrixfederationclient - 672 - INFO - ServerKeyFetcher-12819075- - {GET-O-12902383} [example.com] Request failed: GET matrix://example.com/_matrix/key/v2/server: TimeoutError('Timed out after 10s')
2023-02-20 00:16:46,882 - synapse.crypto.keyring - 847 - WARNING - ServerKeyFetcher-12819075 - Error looking up keys from example.com: Failed to send request: TimeoutError: Timed out after 10s
2023-02-20 00:16:46,901 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
2023-02-20 00:17:17,806 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
2023-02-20 00:17:17,806 - synapse.http.federation.matrix_federation_agent - 363 - INFO - ServerKeyFetcher-12819075- - Failed to connect to example.com:8448: User timeout caused connection failure.
2023-02-20 00:17:17,806 - synapse.logging.context - 81 - WARNING - ServerKeyFetcher-12819075- - Re-starting finished log context ServerKeyFetcher-12819075-
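That trace looks consistent with the usual shape of these warnings: the caller gives up after the 10s timeout, the request is reported as failed and its log context is finished, and then the still-running connection attempt to port 8448 fails half a minute later and its callbacks run with the old, already-finished context restored. A toy sketch of that timing (asyncio standing in for Twisted, and the same toy context class as above repeated so the snippet is self-contained; this is an assumption about the mechanism, not Synapse's actual code):

```python
# Sketch: a timed-out caller finishes the log context, then leftover
# background work re-enters it when it eventually completes.
import asyncio
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s - %(message)s")
logger = logging.getLogger("synapse.logging.context")


class ToyLogContext:
    def __init__(self, name):
        self.name = name
        self.finished = False

    def __enter__(self):
        if self.finished:
            logger.warning("Re-starting finished log context %s", self.name)
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


async def main():
    ctx = ToyLogContext("ServerKeyFetcher-12819075-")

    async def connection_attempt():
        await asyncio.sleep(2)   # stands in for the connection attempt to port 8448
        with ctx:                # completes long after the caller gave up
            pass

    task = asyncio.create_task(connection_attempt())
    with ctx:
        try:
            # shield() keeps the attempt running after the caller times out
            await asyncio.wait_for(asyncio.shield(task), timeout=0.5)
        except asyncio.TimeoutError:
            pass                 # cf. "Timed out after 10s"
    ctx.finished = True          # request reported as failed, context finished

    await task                   # later: the leftover work re-enters the finished context


asyncio.run(main())
```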