loguru
Option for choosing `queue.Queue` over the multiprocessing queue?
When setting enqueue=True, loguru sends log records to the writer thread through a multiprocessing queue, so the logging call itself is non-blocking (nice! 😇).
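(As a reminder of the usage in question; the file sink here is just an example.)

```python
from loguru import logger

# With enqueue=True, the logging call only enqueues the record;
# the actual write to the sink happens in loguru's worker thread.
logger.add("app.log", enqueue=True)
logger.info("This call returns without waiting for the write.")
```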
I am wondering if it would be possible to add an option to use queue.Queue instead. This could reduce the loguru overhead on IoT devices, where CPU power is extremely limited. On my device running a Cortex-A53, queue.Queue reduced the overhead per log from 2+ ms to 0.9+ ms. The multiprocessing queue also seemed to cause unpredictable spikes of up to 20 ms (roughly once every 20 logs).
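For reference, here is a rough sketch of how one can compare the per-put cost of the two queue types in isolation (not loguru's actual code path; the payload and iteration count are arbitrary, and absolute numbers will differ per device):

```python
import multiprocessing
import queue
import threading
import time

def time_puts(q, n=1000):
    """Average time a single put() blocks the caller, with a background drain thread."""
    threading.Thread(target=lambda: [q.get() for _ in range(n)], daemon=True).start()
    payload = {"message": "x" * 200, "level": "INFO"}
    start = time.perf_counter()
    for _ in range(n):
        q.put(payload)
    return (time.perf_counter() - start) / n

print("multiprocessing.SimpleQueue:", time_puts(multiprocessing.SimpleQueue()))
print("queue.Queue:                ", time_puts(queue.Queue()))
```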
@Delgan I have prepared this WIP PR
Hey @namoshizun. Thank you for sharing your feedback and a possible workaround.
I remember that during the initial development of Loguru several years ago, I had to replace the Queue with a SimpleQueue due to internal bugs in the former (see 74a418a0558b9dee02ae14cb36bc38c5065ea436). Perhaps the problems I faced back then have since been fixed upstream.
The implementation of Queue is actually much more complex than that of SimpleQueue, notably because it uses an additional internal thread, and I think that is where the reduced overhead comes from. With a SimpleQueue, the message is serialized within the caller thread; with a Queue, serialization happens on that internal thread and therefore does not block the caller.
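A small, artificial illustration of that difference (the slow __getstate__ just stands in for an expensive serialization):

```python
import multiprocessing
import time

class SlowToPickle:
    """Stand-in for a record whose serialization takes ~0.5 s."""
    def __getstate__(self):
        time.sleep(0.5)
        return {}

for q in (multiprocessing.SimpleQueue(), multiprocessing.Queue()):
    start = time.perf_counter()
    q.put(SlowToPickle())  # SimpleQueue pickles here; Queue defers to its internal thread
    print(f"{type(q).__name__}.put() returned after {time.perf_counter() - start:.3f}s")
```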
I do not think it is appropriate to expose this technical detail to the user through a new multiprocessing_queue argument; we have to settle on one or the other internally. Based on your observations, I agree that the approach used by Queue is preferable for performance.
It so happens that I have a branch that has been on hold for a few years (see refactor-enqueue-new), which replaces the SimpleQueue with a custom implementation. For technical reasons, it does not simply use a Queue, but the implementation is very similar in that an internal thread is started where serialization takes place. I think this solution should reduce the overhead you are currently experiencing.
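For anyone curious, the general pattern described (serialization delegated to an internal thread fed by a plain queue.Queue) looks roughly like this; the class and method names below are mine for illustration, not what the branch actually uses:

```python
import pickle
import queue
import threading

class HypotheticalEnqueueSink:
    """Illustrative only: callers append raw records to a plain queue.Queue,
    and a dedicated thread serializes them and forwards them down a pipe."""

    def __init__(self, connection):
        self._queue = queue.Queue()
        self._connection = connection  # e.g. one end of multiprocessing.Pipe()
        self._thread = threading.Thread(target=self._worker, daemon=True)
        self._thread.start()

    def write(self, record):
        # Fast path run by the logging caller: no pickling, just an append.
        self._queue.put(record)

    def _worker(self):
        while True:
            record = self._queue.get()
            if record is None:  # sentinel posted by stop()
                break
            self._connection.send_bytes(pickle.dumps(record))

    def stop(self):
        self._queue.put(None)
        self._thread.join()
```

The caller-side write() then costs about as much as a plain queue.Queue.put, which matches the lower and steadier latency reported above.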
This refactoring has been pending for a long time, but I think I will be able to merge it soon.