django-constance
ImportError: cannot import name 'Config' from partially initialized module 'constance.base' (most likely due to a circular import)
As per https://stackoverflow.com/a/69185962/284164 :
Once every few hundred thousand requests I see one of these:
ImportError at /
cannot import name 'Config' from partially initialized module 'constance.base' (most likely due to a circular import) (/usr/local/lib/python3.9/site-packages/constance/base.py)
I cannot identify any rhyme or reason. It doesn't correspond with accessing the constance admin; it just randomly occurs. My best guess is that it's something to do with the `LazyObject` in constance's `__init__.py`, and maybe random race conditions in restarting expired gunicorn workers or something?!
I'm using the following (consolidated into a settings sketch below):
- Python 3.9.2
- Django 3.2
- `django-constance = {extras = ["database"], version = "==2.8.*"}`
- `"constance"` and `"constance.backends.database"` in `INSTALLED_APPS` (at top)
- `CONSTANCE_BACKEND = "constance.backends.database.DatabaseBackend"`
- `"constance.context_processors.config"` in `TEMPLATES[0]["OPTIONS"]["context_processors"]`
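For reference, this is roughly how those pieces fit together in a single settings module. It's a minimal sketch rather than the project's actual settings, and the `CONSTANCE_CONFIG` key (`SITE_BANNER`) is a placeholder I've made up for illustration:

```python
# settings.py -- minimal sketch of the configuration listed above (not the real file).
INSTALLED_APPS = [
    "constance",
    "constance.backends.database",
    # ... django.contrib.* and project apps follow ...
]

CONSTANCE_BACKEND = "constance.backends.database.DatabaseBackend"

# Placeholder field; the project's real constance keys aren't shown in the report.
CONSTANCE_CONFIG = {
    "SITE_BANNER": ("Hello", "Banner text shown on every page"),
}

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                "constance.context_processors.config",
            ],
        },
    },
]
```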
All my code does is `from constance import config` and access `config` attributes in the standard way in Python code and Django templates.
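Concretely, the access pattern looks like this; the view and the `SITE_BANNER` key are hypothetical, just to show the shape:

```python
# views.py -- typical read-only use of the lazy config object.
from django.http import HttpResponse

from constance import config


def banner(request):
    # The first attribute access triggers LazyConfig._setup(), which is
    # where the intermittent ImportError surfaces.
    return HttpResponse(config.SITE_BANNER)

# In a template, via the context processor:  {{ config.SITE_BANNER }}
```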
For what it's worth, we've been using django-constance on this site for years, but never saw this error until we upgraded to 2.8.0 (from 2.6.0). We were using Django 3.1 when it first appeared, but it has also occurred since upgrading to 3.2.
I've managed to reproduce the issue and track down the likely cause.

I managed to trigger it once using Django's `runserver`, on the first request handled, but in hundreds of subsequent restarts/attempts I couldn't make it recur. I then ran a couple of local gunicorn worker threads with a low requests-per-worker threshold instead, and spammed the local port with rapid-fire requests (see the sketch below), and sure enough the `ImportError` occurs every now and then.
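The load-generation side of that reproduction was roughly like the sketch below. It's not the exact setup I used: the URL, request count, and pool size are illustrative, and gunicorn was started separately with a couple of gthread workers and a low `max_requests` so that workers recycle frequently.

```python
# spam_local.py -- hammer the local gunicorn port with concurrent requests so that
# frequent worker restarts (low max_requests) overlap with first-request imports.
import concurrent.futures
import urllib.request


def hit(_):
    try:
        with urllib.request.urlopen("http://127.0.0.1:8000/", timeout=5) as resp:
            return resp.status
    except Exception as exc:  # a 500 caused by the ImportError lands here as HTTPError
        return repr(exc)


with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    for result in pool.map(hit, range(2000)):
        if result != 200:
            print(result)
```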
The issue seems to be this (a condensed sketch of the chain follows the list):

- `constance.__init__` uses a `django.utils.functional.LazyObject` which, once lazily evaluated, imports `constance.base` and instantiates a `Config` from that module.
- `constance.base.Config.__init__` uses `constance.utils.import_module_attr` to import the backend specified in the project settings, which in turn uses `importlib.import_module` to import, in my case, `constance.backends.database.DatabaseBackend`.
- `constance.backends.database.__init__` also imports `config` from `...` (`constance`), thus potentially creating an import loop.
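To make that chain easier to follow, here is a condensed paraphrase of the three modules involved. It is not the library's verbatim source, just the shape of the dependency described above:

```python
# constance/__init__.py (paraphrased)
from django.utils.functional import LazyObject


class LazyConfig(LazyObject):
    def _setup(self):
        from .base import Config   # (1) lazily import constance.base
        self._wrapped = Config()


config = LazyConfig()

# constance/base.py (paraphrased)
#   class Config:
#       def __init__(self):
#           # (2) import the backend named in CONSTANCE_BACKEND, here
#           #     constance.backends.database.DatabaseBackend
#           self._backend = utils.import_module_attr(settings.BACKEND)()

# constance/backends/database/__init__.py (paraphrased)
#   from ... import config   # (3) back into the constance package -> potential loop
```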
It seems there is a rare race condition in which `LazyConfig._setup` tries to import `constance.base.Config` while `constance.base` hasn't been fully initialised. I injected some debugging statements to demonstrate, and the buggy sequence looks like this:
[2021-09-15 09:35:10 +1000] [13504] [INFO] Booting worker with pid: 13504
BEGIN constance.__init__.LazyConfig._setup() vars(self): {'_wrapped': <object object at 0x108da8340>}
BEGIN constance.__init__.LazyConfig._setup() vars(self): {'_wrapped': <object object at 0x108da8340>}
constance.base imported
constance.base.Config.__init__: importing constance.backends.database.DatabaseBackend
ImportError at /
cannot import name 'Config' from partially initialized module 'constance.base' (most likely due to a circular import) (/usr/local/lib/python3.9/site-packages/constance/base.py)
EXIT constance.__init__.LazyConfig._setup() vars(self): {'_wrapped': <constance.base.Config object at 0x10fa7ea30>}
It seems to only happen when `_setup` is called twice in a row like that, before `constance.base` is imported; the two threads then seem to race from that point.
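For anyone who wants to see the timing pattern in isolation, the stand-alone sketch below reproduces the "two threads both enter `_setup`" shape from the log. It is not constance or Django code, just a minimal stand-in showing that the check-then-act on `_wrapped` isn't atomic, so two gthread requests can both start the lazy import:

```python
# race_sketch.py -- minimal stand-in for the double _setup() seen in the log above.
import threading
import time

_sentinel = object()


class LazySketch:
    """Simplified LazyObject-style wrapper (not Django's implementation)."""

    def __init__(self):
        self._wrapped = _sentinel

    def _setup(self):
        print(f"{threading.current_thread().name}: BEGIN _setup, wrapped={self._wrapped!r}")
        time.sleep(0.01)  # stand-in for the slow import of constance.base + backend
        self._wrapped = object()
        print(f"{threading.current_thread().name}: EXIT _setup")

    def touch(self):
        # Check-then-act: both threads can observe the sentinel before either
        # finishes _setup, so both walk into the import path concurrently.
        if self._wrapped is _sentinel:
            self._setup()


lazy = LazySketch()
threads = [threading.Thread(target=lazy.touch, name=f"worker-{i}") for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```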
Unfortunately this issue persists even in the 3.1.0 release.
- Could you provide your gunicorn config and execution args?
- Do you use threads in your application?
Thanks @sergei-iurchenko
pid = "/var/run/gunicorn.pid"
bind = "0.0.0.0:{0}".format(env("PORT", int, default=8000))
workers = env("GUNICORN_WORKERS", int, default=9) # (2*CPU_cores+1)
preload_app = True
max_requests = 100
timeout = env("GUNICORN_TIMEOUT", int, default=180)
# https://pythonspeed.com/articles/gunicorn-in-docker/
worker_tmp_dir = "/dev/shm"
threads = 4
worker_class = "gthread"
if env("GUNICORN_RELOAD", bool, default=False):
reload = True
accesslog = "-"
- Not explicitly elsewhere, only in the gunicorn configuration
- I suppose there can be some problem with threads. Could you change `worker_class` to `sync`? Will the error disappear?
- Try disabling preload (both changes are sketched below): https://docs.gunicorn.org/en/latest/settings.html?highlight=reload#reload
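For concreteness, the two experiments map onto these lines of the gunicorn config file shown above (a sketch; everything else stays the same):

```python
# gunicorn config -- the two suggested experiments, relative to the file above.

# 1. Swap the threaded worker class for plain sync workers. The threads
#    setting must go too: with threads > 1, gunicorn silently uses gthread
#    even when worker_class is "sync".
worker_class = "sync"
# threads = 4

# 2. Disable preloading so each worker imports the application itself rather
#    than forking from a preloaded master copy:
preload_app = False
```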
Thanks @sergei-iurchenko, I'll try changing those and will let you know if the issue recurs.