Add lte (less than or equal) filter
closes #69
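A quick usage sketch of the new lookup (purely illustrative; the field and model names here just mirror the gt examples discussed below):

```python
# Illustrative only: the lte lookup added by this PR, following the same
# field__lookup syntax as the existing gt filter used in the thread below.
q_set = model_mock.objects.filter(id__lte=2)  # instances with id <= 2
q_set._execute()                              # materialize from the database
```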
@AlexPozh Hi! Are you still in?
@makridenko Hi, yes. I'm sorry for the delay, I have a test week at uni, so I don't have much time, but I think I can finish this pull request by the end of the week. Sorry again!
@AlexPozh There's no need to apologise! This is all voluntary and I appreciate your help! I just wanted to know whether the pull request will be continued.
@makridenko Hi! I'm having trouble testing the gt filter. I wrote it like this:
```python
def test_greater_than_filter(self, model_mock: type['ModelMock']):
    # Prepare data
    expected_q_set = QSet(
        model_class=model_mock,
        cache=[
            model_mock(id=2, name="unique_name"),
            model_mock(id=3, name="test_name"),
            model_mock(id=4, name="new_name"),
        ],
    )

    # Execution
    actual_q_set = model_mock.objects.filter(id__gt=1)
    actual_q_set._execute()

    # Testing
    assert actual_q_set == expected_q_set
```
I get an assertion error, so I decided to print the expected_q_set and actual_q_set variables. expected_q_set also had ModelMock(id=1, ...), but actual_q_set only has ModelMock(id=2, ...), ModelMock(id=3, ...), ModelMock(id=4, ...).
And I don't understand where it takes ModelMock(id=1, name='test_name', ...) from. It's fixture data, but I didn't find any code that creates these instances: ModelMock(id=1, ...), ModelMock(id=2, ...), ModelMock(id=3, ...), ModelMock(id=4, ...).
@AlexPozh Oh, I see... when you try to get all elements of a QSet, it hits the database (the test cache in this case), so there is no need to print the queryset inside the test.
There is an issue here, I guess; I will think about how to avoid this.
There is a fixture called fill_cache_db_and_clear that creates them (btw, there is no clear in it, my bad :D).
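Roughly, it pre-populates the test database with the four instances you saw in the printed output. A sketch only: the save() calls and the decorator details here are assumptions, not the actual fixture code.

```python
# Rough sketch of what fill_cache_db_and_clear does; the save() calls are an
# assumption about the model API, so check the actual fixture for details.
import pytest


@pytest.fixture
def fill_cache_db_and_clear(model_mock: type['ModelMock']) -> None:
    # Pre-populate the test database with the four instances that later
    # appear in the printed queryset.
    model_mock(id=1, name='test_name').save()
    model_mock(id=2, name='unique_name').save()
    model_mock(id=3, name='test_name').save()
    model_mock(id=4, name='new_name').save()
    # NB: despite the name, there is no clean-up step here (as mentioned above).
```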
@AlexPozh btw, if you get stuck, push your code here so I can run it locally and help you.
> There is a fixture called fill_cache_db_and_clear (btw, there is no clear in it, my bad :D)

Do you mean this?
```python
@pytest.fixture(autouse=True, scope='function')
def clean_db_cache(model_mock: type['ModelMock']) -> 'Generator':
    yield
    model_mock.objects._cache = []
    model_mock.objects.all().delete()
```
@AlexPozh No, actually the fixture is in the test_q_set file.
Ah, yes, I see. Got it now :)
@makridenko I solved it, but of course that is not a great solution. I just cleaned the cache inside the test function:
```python
model_mock.objects._cache = []
model_mock.objects.all().delete()
```
Because, as I understood it, pytest first calls clean_db_cache from fixture/model.py, and then it calls fill_cache_db_and_clear and creates a new cache :)
@AlexPozh could you add an issue for that please?
Yes, what should I call it?
@AlexPozh I don't know, just something like "test fixtures bug", with a description of what's going on.
@makridenko I would like to clarify something about cache. So when we provide this argument to QSet, it must create models with the provided data, yes?
Referring to the docstring: "cache (list[_M] | None): An optional initial cache of model instances."
@AlexPozh Not really, we just define it as if it had already been created. The trick is not to call _execute, because it will hit the database.
So when we pass args to cache in a test, we create a mock QSet object with already defined models.
@AlexPozh So, in your example, everything should be fine:
```python
def test_greater_than_filter(self, model_mock: type['ModelMock']):
    # Prepare data
    expected_q_set = QSet(
        model_class=model_mock,
        cache=[
            model_mock(id=2, name="unique_name"),
            model_mock(id=3, name="test_name"),
            model_mock(id=4, name="new_name"),
        ],
    )

    # Execution
    actual_q_set = model_mock.objects.filter(id__gt=1)
    actual_q_set._execute()

    # Testing
    assert actual_q_set == expected_q_set
```
But if we hit the database, like this, for example:
```python
def test_greater_than_filter(self, model_mock: type['ModelMock']):
    # Prepare data
    expected_q_set = QSet(
        model_class=model_mock,
        cache=[
            model_mock(id=2, name="unique_name"),
            model_mock(id=3, name="test_name"),
            model_mock(id=4, name="new_name"),
        ],
    )
    print(expected_q_set)  # or create a breakpoint

    # Execution
    actual_q_set = model_mock.objects.filter(id__gt=1)
    actual_q_set._execute()

    # Testing
    assert actual_q_set == expected_q_set
```
It will run _execute, and because of fill_cache_db_and_clear, the QSet would fill its cache with database values.
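Concretely, with the ids from the printed output earlier in the thread, the mismatch looks like this (plain lists standing in for the QSet caches, purely for illustration):

```python
# Illustration only: plain lists standing in for the two QSet caches,
# using the ids from the printed output quoted earlier in this thread.
expected_ids = [1, 2, 3, 4]  # expected_q_set after print() refills it from the test database
actual_ids = [2, 3, 4]       # actual_q_set from filter(id__gt=1)
assert expected_ids != actual_ids  # this difference is the reported AssertionError
```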
I tried the version without the print, and interesting, now it works... 😀
@makridenko Done!
@AlexPozh thank you so much!