pytest-django
django_db_setup runs inside transactional_db transaction
If the first db test that runs happens to use transactional_db (or django_db(transaction=True)), then all subsequent db tests will fail.
This appears to be because the db setup (migrations etc.) is all rolled back and will not run again, since django_db_setup is session-scoped.
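The rollback-vs-flush mechanism behind this can be sketched without Django at all. The following is an illustrative simulation using plain sqlite3 (table and column names are made up): session-scoped setup data survives a transaction rollback (what the plain db fixture does) but not a table flush (what a transaction=True test triggers on teardown):

```python
# Standalone sketch (sqlite3 only, no Django) of why transaction=True
# tests erase session-scoped fixture data while ordinary db tests don't.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mymodel (id INTEGER PRIMARY KEY, name TEXT)")

# "django_db_setup": load fixture data once per session and commit it.
conn.execute("INSERT INTO mymodel (name) VALUES ('from-fixture')")
conn.commit()

# Ordinary django_db test: runs inside a transaction that is rolled back.
conn.execute("INSERT INTO mymodel (name) VALUES ('test-row')")
conn.rollback()
assert conn.execute("SELECT COUNT(*) FROM mymodel").fetchone()[0] == 1  # fixture intact

# transaction=True test: changes are committed for real, and teardown
# flushes the tables -- which deletes the committed fixture data too.
conn.execute("INSERT INTO mymodel (name) VALUES ('test-row')")
conn.commit()
conn.execute("DELETE FROM mymodel")  # the flush
conn.commit()
assert conn.execute("SELECT COUNT(*) FROM mymodel").fetchone()[0] == 0  # fixture gone
```

Because django_db_setup is session-scoped, nothing reloads the fixture after the flush, so every later test sees the empty tables.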
Hitting this as well. It appears that fixture data loaded in django_db_setup gets removed for all subsequent tests by a test that runs with @pytest.mark.django_db(transaction=True).
We're loading fixture data in django_db_setup as described in the docs:
```python
# conftest.py
import pytest
from django.core.management import call_command

@pytest.fixture(scope='session')
def django_db_setup(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', 'myfixture.json')
```
Tests that run prior to the transaction=True test are able to access the fixture data with no problem...

```python
@pytest.mark.django_db
def test_data_presence():
    assert MyModel.objects.first() is not None  # Passes
```
... but once we run a transaction test case, subsequent tests are missing the fixture data:

```python
@pytest.mark.django_db(transaction=True)
def test_transaction_behavior():
    ...

@pytest.mark.django_db
def test_data_presence2():
    assert MyModel.objects.first() is not None  # Fails
```
If whatever subset of tests we run (using e.g. py.test -k "expr" or py.test -m "expr") happens to skip all of the transaction=True test cases, the fixture data remains in place.
IIRC "serialized_rollback" is required for this, which I started a while ago in https://github.com/pytest-dev/pytest-django/pull/353 - maybe you want to pick it up?
@yourcelf
Have you tried doing your loaddata in an overridden transactional_db fixture?
Also relevant: https://github.com/pytest-dev/pytest-django/issues/214
Thanks for the suggestions @blueyed. I tried adding this definition to conftest.py:
```python
import pytest
from django.core.management import call_command
from pytest_django.fixtures import transactional_db as orig_transactional_db

@pytest.fixture(scope='session')
def django_db_setup(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', 'myfixture.json')

@pytest.fixture(scope='function')
def transactional_db(request, django_db_setup, django_db_blocker):
    orig_transactional_db(request, django_db_setup, django_db_blocker)
    with django_db_blocker.unblock():
        call_command('loaddata', 'myfixture.json')
```
Is that what you meant by loading data in an overridden transactional_db fixture?
This didn't change the availability of fixture data after running a test with @pytest.mark.django_db(transaction=True) -- the fixture data was still missing. (This was the same whether the "super" call to orig_transactional_db happened first or last.)
Try just this:

```python
import pytest
from django.core.management import call_command

@pytest.fixture(scope='function')
def transactional_db(transactional_db, request, django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', 'myfixture.json')
```
No change... with only the transactional_db override in conftest.py, the fixture data was unavailable to regular @pytest.mark.django_db tests (without transaction=True), no matter what order they ran in. With both the transactional_db override you suggested here and django_db_setup defined, the behavior is the same -- tests running before a transaction=True test work fine, and those running after are missing the fixture data.
> with only the transactional_db override in conftest.py, the fixture was unavailable to regular @pytest.mark.django_db tests (without transaction=True), no matter what order they run in.
That is expected.
Too bad it does not work.
Have you verified that the loaddata command gets run at least once for each test?
You could enable logging of DB queries and then see from there what gets done for a) the first test where it is available, and then b) for the second test where it is missing - and then compare them.
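One way to see the queries (assuming Django's standard logging hooks) is to enable the django.db.backends logger in your test settings. This is a sketch of the usual configuration; note that Django only emits these query logs when DEBUG is True:

```python
# settings.py sketch: log every SQL statement Django executes.
# 'django.db.backends' is Django's own logger name for DB queries;
# it only produces output when DEBUG = True.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
}
```

Comparing the query log of a passing test against the one of a failing test should show whether loaddata ran and where the rows disappeared.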
I think I have a similar problem.
I have a Django app with a FooBarModel and a data migration that inserts a single record with very specific data.
For various reasons, I have tests for database constraints.
I also have some FooBarModel records that are created automatically when instances of some other models are created.
I have two separate test functions, both with @pytest.mark.django_db(transaction=True).
The first function tests all database constraints of FooBarModel.
The second function tests whether the automatic records are created as expected. (I use a fixture to create instances of the other models, which in turn creates some records in the FooBarModel table.)
After the first test function is executed, the database no longer has the data inserted by the data migration (apparently this data was removed), and because of this the second function fails to verify that the data from the migration exists in the database.
I expected @pytest.mark.django_db(transaction=True) to return the database, for each test case (I also use @pytest.mark.parametrize), to the state it was in after running all migrations and data migrations, rather than wiping the data completely.
An update: I use MySQL
Hi @luzfcb I'm experiencing exactly the same issues. Do you have any suggestions as to how to fix it?
Same here!
Hello, currently encountering this problem - is there a workaround that anyone is aware of please? Thank you
This could probably be solved by adding support for serialized_rollback.
On pull-request #721 I have tried to put together some of the solutions, but the approach is not yet 100% functional. It would be nice to have new eyes on that pull-request.
I'm experiencing exactly the same issue.
I was trying to switch from an in-memory SQLite DB to a file-based SQLite DB, because data provisioning in the in-memory DB takes too much time, and I want to reuse a DB with already-provisioned data instead.
A file-based SQLite DB doesn't work with the db fixture, so I used transactional_db but ran into this issue: the first test passes and then the whole DB is erased.
What is the current implementation status of the serialized_rollback feature? Or is there a better approach? I was thinking of simply saving/restoring the SQLite DB file before each test (probably much faster than serialized_rollback).
We have also observed this issue. Are there any known workarounds, or perhaps solutions?
I reproduced this issue and can confirm that it is fixed in #970, using @pytest.mark.django_db(serialized_rollback=True). Tested on pytest-django 4.5.2.
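For anyone landing here, usage per the report above is just a matter of adding the flag to the mark (available in recent pytest-django releases; confirmed above on 4.5.2). The test body here is a placeholder:

```python
import pytest


@pytest.mark.django_db(transaction=True, serialized_rollback=True)
def test_transaction_behavior():
    # With serialized_rollback=True, Django serializes the test database
    # contents before the flush and deserializes them afterwards, so data
    # created during test-database setup survives this test's teardown.
    ...
```

This trades the flush-only teardown for a slower serialize/restore cycle, so it is worth applying only to the transaction=True tests that need it.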