fluent-logger-python
add discard_logs_on_reconnect_error in asyncsender
issue: https://github.com/fluent/fluent-logger-python/issues/175
When using asyncsender, there is a queue that holds the logs to be sent:
- when a socket.gaierror is caught, I clear the queue and log a message
- this prevents the backlog from blocking subsequent sends
- but the record being sent at that point will inevitably still run into this problem
- I'm thinking about other approaches; maybe just print some diagnostic logs
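The approach described above can be sketched roughly like this. This is a minimal illustration, not the actual asyncsender internals: `SketchSender`, `_send`, and `flush` are hypothetical names, and `_send` is a stand-in that always fails with a `socket.gaierror`:

```python
import queue
import socket


class SketchSender:
    """Toy sender: queues records, drops the backlog on a DNS failure."""

    def __init__(self):
        self._queue = queue.Queue()

    def emit(self, record):
        self._queue.put(record)

    def _send(self, record):
        # Stand-in for the real socket send; here it always raises,
        # simulating a host that cannot be resolved.
        raise socket.gaierror("Name or service not known")

    def flush(self):
        """Drain the queue; on a resolution error, discard what is left
        so later emits are not blocked behind the unreachable host.
        Returns the number of records discarded."""
        dropped = 0
        while not self._queue.empty():
            record = self._queue.get()
            try:
                self._send(record)
            except socket.gaierror:
                # Count the failed record plus the remaining backlog,
                # then clear the queue and report the loss.
                dropped = 1 + self._queue.qsize()
                while not self._queue.empty():
                    self._queue.get()
                print("discarded %d pending logs after gaierror" % dropped)
        return dropped
```

This also shows the downside discussed below: every record queued before the error is lost, even though the failure may be a transient reconnect issue.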
This is my solution; it doesn't look particularly good, so feel free to close this if it's not needed.
Coverage remained the same at 100.0% when pulling 6995dbb606e22a16f926e882ef0a5d573b250f43 on enjoy-binbin:connect-hang into ace80f4c2bd0020fc16891440243663c612dd01e on fluent:master.
Coverage decreased (-1.07%) to 98.928% when pulling bb8efb8930c5ed8ece8dbad69ae978f7a51c642d on enjoy-binbin:connect-hang into ace80f4c2bd0020fc16891440243663c612dd01e on fluent:master.
I'll review it in depth when I have time, but I'm inclined not to accept this change.
Unless I'm missing something the error should lead to a reconnect and resending of the logs. Clearing the queue will cause the loss of perfectly good logs due to a simple reconnect issue, which would not be acceptable.
Yep, no problem. I understand it is risky.
Or maybe we should just print some logs to help the user debug.
We can consider a tunable setting which would "discard logs on reconnect" and default to False. Reconnects are a part of life and discarding logs isn't useful in most cases IMO.
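A minimal sketch of how such a flag could gate the behavior. `handle_reconnect_error` is a hypothetical helper for illustration, not necessarily the shape the PR ends up with:

```python
import queue


def handle_reconnect_error(q, discard_logs_on_reconnect_error=False):
    """Decide what to do with the pending queue after a reconnect error.

    Default (False): keep the backlog so the next successful reconnect
    can resend it. When the flag is set: drop the backlog so the sender
    never blocks behind an unreachable host. Returns how many records
    were discarded.
    """
    if not discard_logs_on_reconnect_error:
        return 0  # keep queued logs; a later reconnect attempt resends them
    dropped = q.qsize()
    while not q.empty():
        q.get()
    return dropped
```

Defaulting to False preserves the current behavior, so opting into data loss is an explicit choice by the user.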
I tend to make decisions against arbitrary loss of data.
Right, I'll give it a try. Thanks @arcivanov
@arcivanov I added a discard_logs_on_reconnect_error setting, defaulting to False. Maybe it's not good enough yet.