
TypeError: cannot unpack non-iterable NoneType object

Open kokman opened this issue 1 year ago • 7 comments

When using setup_logging_queues and stop_queue_listeners, the console prints logging exceptions. Can anyone help me? Thanks

--- Logging error ---
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/logging/__init__.py", line 690, in format
    s = self.formatMessage(record)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/logging.py", line 99, in formatMessage
    (
TypeError: cannot unpack non-iterable NoneType object
Call stack:
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/concurrent_log_handler/queue.py", line 54, in _monitor
    super()._monitor()  # type: ignore[misc]
Message: '192.168.88.162:54558 - "GET /api/version HTTP/1.1" 200'
Arguments: None


My code (FastAPI + uvicorn with workers=4):

logger = logging.getLogger()


def my_logging_setup(log_file_name, use_async=False):
    logging_config = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "default": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
            "formatter2": {
                "format": "[%(asctime)s][%(levelname)s][%(filename)s:%(lineno)s]"
                "[%(process)d][%(message)s]",
            },
        },
        # Set up our concurrent logger handler. Need one of these per unique file.
        "handlers": {
            "ch_log": {
                "level": "DEBUG",
                "class": "logging.StreamHandler",
                "formatter": "formatter2",
            },
            "fh_log": {
                "level": "DEBUG",
                "class": "concurrent_log_handler.ConcurrentRotatingFileHandler",
                # Example of a custom format for this log.
                "formatter": "formatter2",
                # 'formatter': 'default',
                "filename": log_file_name,
                # Optional: set an owner and group for the log file
                # 'owner': ['greenfrog', 'admin'],
                # Sets permissions to owner and group read+write
                "chmod": 0o0660,
                # Note: this is abnormally small to make it easier to demonstrate rollover.
                # A more reasonable value might be 10 MiB or 10485760
                "maxBytes": 5 * 1024 * 1024,
                # Number of rollover files to keep
                "backupCount": 10,
                # 'use_gzip': True,
            },
        },
        # Tell root logger to use our concurrent handler
        "root": {
            "handlers": ["ch_log", "fh_log"],
            "level": "DEBUG",
        },
    }

    logging.config.dictConfig(logging_config)

    if use_async:
        setup_logging_queues()

    return


@asynccontextmanager
async def lifespan(app: FastAPI):
    try:
        # logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        # set log file name
        log_file_name = web_server_log_path + f"/api.log"
        log_file_name = os.path.expandvars(log_file_name)

        my_logging_setup(log_file_name, True)

        logger.info("Logger started...")

        yield

        stop_queue_listeners()

        logger.info("Logger ended")

        print("Server Stopped")

    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown(wait=False)

kokman avatar Jun 01 '24 04:06 kokman

Having the same error.

ddorian avatar May 13 '25 18:05 ddorian

The asyncio support there was added a while back by someone else, and I have never really used it, so it hasn't been tested in a long time, if at all.

Based on the example provided in the original report, I tried to piece together a demonstration script that showed the problem. However, I was unable to reproduce the problem with this script:

https://gist.github.com/Preston-Landers/f807125e9d44bbe59c5c1d4d04f8fa11

Everything seemed to log correctly with no errors. Using Python 3.13.3 on Windows (x86-64), Concurrent Log Handler 0.9.26 (the new one).

It's notable that my example ONLY uses asyncio and not gunicorn or similar servers.

If anyone can provide a self-contained working example of the problem, I'd be happy to try to debug it. Please indicate the version of Python, platform, and CLH you're using, as well as any required packages.

Preston-Landers avatar May 13 '25 18:05 Preston-Landers

My setup is pretty complex, but I managed to trim it down to this script. Run it with uv run script.py. structlog has been configured to pass the logs to the logging module. I haven't even enabled writing to a file handler, just the queue listener.

# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "concurrent-log-handler==0.9.26",
#     "structlog==25.3.0",
# ]
# ///
from __future__ import annotations

import copy
import logging
from typing import Callable

import structlog

log_processors_list: list[Callable] = []


def configure_root_logger(root_logger: logging.Logger, loglevel: int) -> None:
    from structlog.typing import Processor

    format_processors: list[Processor] = [
        structlog.stdlib.ProcessorFormatter.remove_processors_meta,
    ]

    formatter = structlog.stdlib.ProcessorFormatter(
        processors=format_processors,
        foreign_pre_chain=copy.deepcopy(log_processors_list),
    )
    log_processors_list.extend(
        [
            # Prepare event dict for `ProcessorFormatter`.
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter
        ]
    )

    handler = logging.StreamHandler()
    ## Use OUR `ProcessorFormatter` to format all `logging` entries.
    handler.setFormatter(formatter)
    root_logger.addHandler(handler)
    root_logger.setLevel(loglevel)


def setup_all_logs(log_names: list[str]):
    from concurrent_log_handler.queue import setup_logging_queues

    # only enable specific loggers
    for log_name in log_names:
        configure_root_logger(logging.getLogger(log_name), logging.INFO)

    structlog.configure(
        context_class=dict,
        cache_logger_on_first_use=True,
        wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
        processors=log_processors_list,
        logger_factory=structlog.stdlib.LoggerFactory(),
    )
    # TODO: comment out this line for the script to work
    setup_logging_queues()


setup_all_logs(["root"])

logger = structlog.get_logger("root")

logger.info("hello")

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.13/logging/__init__.py", line 1150, in emit
    msg = self.format(record)
  File "/usr/lib/python3.13/logging/__init__.py", line 998, in format
    return fmt.format(record)
           ~~~~~~~~~~^^^^^^^^
  File "/home/guru/Desktop/.venv/lib/python3.13/site-packages/structlog/stdlib.py", line 1113, in format
    ed = cast(Dict[str, Any], record.msg).copy()
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'copy'
Call stack:
  File "/usr/lib/python3.13/threading.py", line 1012, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.13/threading.py", line 1041, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.13/threading.py", line 992, in run
    self._target(*self._args, **self._kwargs)
  File "/home/guru/Desktop/.venv/lib/python3.13/site-packages/concurrent_log_handler/queue.py", line 54, in _monitor
    super()._monitor()  # type: ignore[misc]
Message: "{'event': 'hello'}"
Arguments: None

ddorian avatar May 13 '25 19:05 ddorian

It's maybe here https://github.com/Preston-Landers/concurrent-log-handler/blob/9d60675de0269dfc9b1f0336b2b8e4bbb11131be/src/concurrent_log_handler/queue.py#L180
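A minimal stdlib-only sketch of what seems to be going on (this is not CLH's actual code): CLH's queue.py builds on the standard library's QueueHandler, whose prepare() step replaces record.msg with the already-formatted string and sets record.args to None before queuing. When the listener thread later re-formats the record, a formatter that unpacks record.args (uvicorn) or expects record.msg to be an event dict (structlog) sees the clobbered values and raises.

```python
import logging
import queue
from logging.handlers import QueueHandler

# Build a record carrying printf-style args, then run it through the
# stdlib QueueHandler's prepare() step, as the queue listener path does.
q = queue.Queue()
handler = QueueHandler(q)
record = logging.LogRecord(
    name="demo", level=logging.INFO, pathname=__file__, lineno=1,
    msg="GET /api/version -> %s", args=("200",), exc_info=None,
)
prepared = handler.prepare(record)
print(prepared.msg)   # "GET /api/version -> 200" -- already formatted
print(prepared.args)  # None -- the original args are gone
```

Any formatter downstream of the queue that relies on the original msg/args shape will therefore fail, which matches both tracebacks in this thread.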

ddorian avatar May 13 '25 19:05 ddorian

OK, I did run your script and reproduce your results. Though I don't have any quick fixes for you. My initial assessment is that the way the CLH queue.py module works is simply incompatible with a lot of stuff, e.g. custom formatters like structlog, at least in its current form.

If you look over queue.py you'll see that actually nothing in there calls or refers to the CLH log handlers. It's just a generic script to automatically "background-ify" any and all loggers. And it's obviously got some issues in the way it works. Like I said, I've never really even used it extensively.

My suggestion is to keep using CLH as a log file handler in your project, but don't use the queue.py / background logger feature until it gets comprehensively reworked. Given what I said above, I'm strongly considering removing it from the project, or at least from a 1.0 release, unless we can find a way to make it bulletproof. I'm guessing that if you measure the real-world impact of synchronous logging in your app, it will be negligible in most cases. Simply define your loggers as shown in the README and don't call that setup_logging_queues() function.

My preference would be to either have background logging globally enabled by a separate project/codebase and remove it from CLH, or else integrate a BG feature directly into the CLH log handlers (e.g. a kwarg on it) that doesn't require exposing this separate queue.py module and is also included in the unit tests.

Preston-Landers avatar May 13 '25 20:05 Preston-Landers

Agree with you, I'm disabling it.

Another alternative (easier to code but less efficient) would be to run the write operation in a threadpool (with an option to provide the threadpool reference).
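A hypothetical sketch of that threadpool idea (the class name and wiring are made up for illustration): offload only the blocking write to a worker thread, passing the LogRecord through untouched so custom formatters still see the original msg/args.

```python
import io
import logging
from concurrent.futures import ThreadPoolExecutor

class ThreadPoolHandler(logging.Handler):
    """Wraps another handler and runs its handle() on a pool thread."""

    def __init__(self, target: logging.Handler, pool: ThreadPoolExecutor):
        super().__init__()
        self.target = target
        self.pool = pool

    def emit(self, record: logging.LogRecord) -> None:
        # No prepare()-style pre-formatting: the record is not mutated,
        # which avoids the msg/args clobbering seen with queue.py.
        self.pool.submit(self.target.handle, record)

stream = io.StringIO()
pool = ThreadPoolExecutor(max_workers=1)
logger = logging.getLogger("pool-demo")
logger.propagate = False
logger.addHandler(ThreadPoolHandler(logging.StreamHandler(stream), pool))
logger.setLevel(logging.INFO)
logger.info("written from the pool")
pool.shutdown(wait=True)  # drain the pool so the write completes
print(stream.getvalue())
```

Note that a real implementation would still need to handle shutdown ordering and backpressure, which is part of why this is less efficient than a dedicated queue listener.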

ddorian avatar May 14 '25 06:05 ddorian

Thanks for the suggestion. I'm definitely going to look into that and have some ideas of how to make that happen, though I can't promise any timeline. I did create a new issue basically putting folks on notice that this queue.py module is deprecated and will be removed. Though I am completely fine with anyone wanting to vendor it into their own project source if they continue to find it useful. But as it stands it's kind of a utility side-project at best.

Preston-Landers avatar May 14 '25 17:05 Preston-Landers