Multiprocess run: I set enqueue=True, but the error still happens when the log file rotates.
I use Win10 + Python 3.6 + loguru 0.5.3.
My code is:
import time
from loguru import logger
from concurrent.futures import ProcessPoolExecutor

logger.remove(handler_id=None)
logger.add("./log_files/loguru-test1.log", enqueue=True, rotation="10000 KB")

def f():
    for i in range(200000):
        logger.warning("test multiprocess rotate")

pool = ProcessPoolExecutor(10)

if __name__ == '__main__':
    """
    1,000,000 messages take 115 seconds
    15:12:23
    15:14:18
    2,000,000 messages take 156 seconds
    """
    print(time.strftime("%H:%M:%S"))
    for _ in range(10):
        pool.submit(f)
    pool.shutdown()
    print(time.strftime("%H:%M:%S"))
Traceback (most recent call last):
File "F:\minicondadir\Miniconda2\envs\py36\lib\site-packages\loguru\_handler.py", line 287, in _queued_writer
self._sink.write(message)
File "F:\minicondadir\Miniconda2\envs\py36\lib\site-packages\loguru\_file_sink.py", line 174, in write
self._terminate_file(is_rotating=True)
File "F:\minicondadir\Miniconda2\envs\py36\lib\site-packages\loguru\_file_sink.py", line 205, in _terminate_file
os.rename(old_path, renamed_path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'F:\\coding2\\nb_log\\tests\\log_files\\loguru-test1.log' -> 'F:\\coding2\\nb_log\\tests\\log_files\\loguru-test1.2021-08-25_15-12-23_434270.log'
Does loguru with enqueue=True only work correctly on Linux when using multiprocessing?
Hi.
There is documentation about the caveats and possible misuses of multiprocessing with loguru here: Compatibility with multiprocessing using enqueue argument.
Does it help you a bit?
The enqueue=True argument works on Windows too, but you have to pass the logger to the process or pool you create.
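For example, here is a minimal sketch of that pattern, adapted from the documentation page above to your ProcessPoolExecutor example (note that the initializer/initargs parameters of ProcessPoolExecutor require Python 3.7+, so this assumes a slightly newer interpreter than your 3.6 setup):

from concurrent.futures import ProcessPoolExecutor
from loguru import logger

def set_logger(logger_):
    # Replace the child's own logger with the parent's logger, so records
    # are routed through the parent's queue (enqueue=True) and only the
    # parent process writes to and rotates the file.
    global logger
    logger = logger_

def f():
    for i in range(200000):
        logger.warning("test multiprocess rotate")

if __name__ == '__main__':
    logger.remove()
    logger.add("./log_files/loguru-test1.log", enqueue=True, rotation="10000 KB")
    with ProcessPoolExecutor(10, initializer=set_logger, initargs=(logger,)) as pool:
        for _ in range(10):
            pool.submit(f)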
OK, I understand now. Linux uses fork and Windows uses spawn, and there are many differences between Linux and Windows when using multiprocessing.
But I think passing the logger object as an argument to the process target function on Windows is not convenient.
I think loguru could solve the problem by using a file lock and batch-writing the messages to the file; then there would be no need to pass the logger to the process. For example, something roughly like the sketch below.
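Just to illustrate the idea (this is not how loguru or nb_log actually work; the filelock package, the LockedBatchSink name, and the drain helper are only assumptions for this sketch, and rotation plus a final drain on shutdown are left out):

from loguru import logger
from filelock import FileLock  # pip install filelock

class LockedBatchSink:
    # Illustrative custom sink: buffer formatted messages and append them
    # to the file under an inter-process lock, so multiple processes can
    # share one file without passing a logger object around.
    def __init__(self, path, batch_size=100):
        self.path = path
        self.lock = FileLock(path + ".lock")  # lock file visible to all processes
        self.batch = []
        self.batch_size = batch_size

    def __call__(self, message):
        # loguru calls a function sink with the fully formatted message.
        self.batch.append(message)
        if len(self.batch) >= self.batch_size:
            self.drain()

    def drain(self):
        if not self.batch:
            return
        with self.lock:  # only one process writes at a time
            with open(self.path, "a", encoding="utf-8") as f:
                f.writelines(self.batch)
        self.batch = []

logger.add(LockedBatchSink("./log_files/locked.log"))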
nb_log has solved this problem on Windows; the code runs correctly on both Linux and Windows, and in my benchmark writing logs to a file is 400% faster than loguru. pip install nb_log
from nb_log import get_logger
from concurrent.futures import ProcessPoolExecutor

logger = get_logger('test_nb_log_conccreent', is_add_stream_handler=False,
                    log_filename='test_nb_log_conccreent.log')

def f(x):
    for i in range(200000):
        logger.warning(f'{x} {i}')

if __name__ == '__main__':
    # 2,000,000 messages in 45 seconds
    pool = ProcessPoolExecutor(10)
    print('start')
    for i in range(10):
        pool.submit(f, i)
    pool.shutdown()
    print('end')
I got this error too... on Windows.
Come try nb_log; it surpasses loguru in 10 areas. One of the comparisons I needed to do was multiprocess log-writing performance between loguru and nb_log, and that's how I found that loguru errors out on Windows. Chapter 10 of the nb_log docs demonstrates the 10 places where it beats loguru.
https://github.com/ydf0509/nb_log
pip install nb_log
The author says that on Windows you have to pass the logger into the function started by the subprocess as an argument, which is very inconvenient: if the call chain is deep, you have to keep passing the logger variable down through every function in the chain. The loguru code I wrote above can only run on Linux.
Hey, so you're the author! I was just looking at your GitHub; your projects are quite interesting :). I've already opened quite a few issues on loguru, and it really hasn't been hassle-free...
Simple scripts I just post on cnblogs as notes; the interesting ones tend to be more complex, usually span multiple files, and also need to be uploaded to PyPI, so they go on GitHub.
Closing this issue, as I think it was a problem with how the logger was handled across processes.