
multiprocessing.Queue May Get Full And Raise Errors

Open hcxie20 opened this issue 5 years ago • 1 comment

I have a simple program that spawns 2 processes, and they start logging messages.

Traceback:

Traceback (most recent call last):
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 117, in emit
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 117, in emit
    self._send(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 98, in _send
    self._send(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 98, in _send
    self.queue.put_nowait(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/multiprocessing/queues.py", line 155, in put_nowait
    self.queue.put_nowait(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/multiprocessing/queues.py", line 155, in put_nowait
    return self.put(obj, False)
    return self.put(obj, False)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/multiprocessing/queues.py", line 102, in put
    raise Full
Full
import logging
import logging.handlers
import multiprocessing
import time
import os
import multiprocessing_logging


print 'current pid: {0}'.format(os.getpid())

log_dir = os.path.abspath(os.path.dirname(__file__) + '/logs')
for log_file in os.listdir(log_dir):
    os.remove(log_dir + '/' + log_file)

logger = logging.getLogger()
logfile = os.path.basename(__file__).split('.')[0] + '.log'
logger.setLevel(logging.INFO)
fh = logging.handlers.TimedRotatingFileHandler('{0}/{1}'.format(log_dir, logfile), when='S')
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(processName)s %(process)d %(thread)d %(threadName)s %(filename)s:%(lineno)d -> %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)

multiprocessing_logging.install_mp_handler()

def test_func():
    i = 0

    while True:
        logging.info(i)
        i += 1

def main_logic(num_process=1):
    processes = []

    for _ in range(num_process):
        processes.append(multiprocessing.Process(target=test_func))

    for process in processes:
        process.start()

    time.sleep(2)
    print('woke up here')
    for process in processes:
        process.terminate()

def analyze_logs(num_process=1):
    pass

def main(num_process=1):
    main_logic(num_process)
    analyze_logs(num_process)


if __name__ == '__main__':
    main(num_process=2)

hcxie20 avatar Sep 16 '20 06:09 hcxie20

Yes, this happens in very extreme situations: the parent process which collects log records from its children cannot keep up.
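The failure mode described above can be reproduced in isolation. A minimal sketch (using an artificially small `maxsize` to force the condition; the library's own queue bound may differ):

```python
import multiprocessing
import queue  # multiprocessing queues raise the stdlib queue.Full exception

# A bounded queue with no consumer fills up, and put_nowait() then raises
# Full, just as in the traceback above. maxsize=2 is deliberately tiny.
q = multiprocessing.Queue(maxsize=2)
q.put_nowait("record 1")
q.put_nowait("record 2")

try:
    q.put_nowait("record 3")  # nobody is draining the queue
    overflowed = False
except queue.Full:
    overflowed = True

q.cancel_join_thread()  # don't block interpreter exit on the unread items
print(overflowed)  # → True
```

In the issue's script the consumer is the parent's forwarding thread, and the two children logging in a tight loop outpace it.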

I believe this only happens in buggy programs, or in programs deliberately trying to trigger this behavior. Do you see it differently?

jruere avatar Feb 05 '23 17:02 jruere
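For readers who hit this in practice, one possible mitigation is to drop records instead of letting `Full` propagate to the producer. This is a hypothetical sketch, not multiprocessing-logging's actual code; the `DroppingQueueHandler` class is invented for illustration:

```python
import logging
import multiprocessing
import queue

# Hypothetical handler: puts formatted records on a bounded queue and
# silently discards them when the queue is full, so producers never see
# the Full exception from the traceback above.
class DroppingQueueHandler(logging.Handler):
    def __init__(self, q):
        super().__init__()
        self.queue = q

    def emit(self, record):
        try:
            self.queue.put_nowait(self.format(record))
        except queue.Full:
            pass  # drop the record rather than crash the caller

q = multiprocessing.Queue(maxsize=1)
logger = logging.getLogger("drop-demo")
logger.addHandler(DroppingQueueHandler(q))
logger.setLevel(logging.INFO)

logger.info("first")   # fills the queue
logger.info("second")  # queue is full -> dropped, no exception raised

first = q.get(timeout=2)
print(first)  # → first
```

Dropping log records is lossy by design; the trade-off is that the worker processes keep running instead of raising on every `emit`.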