pytest-snapshot
Patch to make the Snapshot more convenient
I want to share a little monkey patch I apply to the Snapshot class. It fixes three small annoyances:

- If the current value is different, it creates a `*.dump` file with the actual result. This can be used to compare the expected result with the actual result, or you can just copy the actual result over to fix the test.
- If there is no snapshot, it creates the snapshot. This eliminates the need to pass `--snapshot-update` each time you add a new test.
- If the snapshot is empty, it re-creates it. This is helpful if you want to re-create a snapshot: just empty the file. (Deleting it may cause it to be removed from git in some IDEs, so emptying it is the simpler solution.)
```python
from pathlib import Path
from typing import Union

from pytest_snapshot.plugin import Snapshot


# see https://github.com/joseph-roitman/pytest-snapshot/issues/54
def patch_snapshot() -> None:
    """
    Patches the Snapshot class to be easier to use.

    1. If the current value is different, then create a `*.dump` file with the actual result.
       This can be used to compare the expected result with the actual result. Or you can just
       copy the actual result to fix the test.
    2. If there is no snapshot, it will create the snapshot. This eliminates the need to use
       `--snapshot-update` each time you add a new test.
    3. If the snapshot is empty, it will re-create it. This is helpful if you want to re-create
       the snapshot: just empty the file. (Deleting it may cause it to be removed from git in
       some IDEs, therefore emptying it is a simpler solution.)
    """
    # Patch the class at most once!
    if hasattr(Snapshot, '_original_assert_match'):
        return
    original_assert_match = Snapshot.assert_match

    # this is our patched function
    def assert_match(self: Snapshot, value: Union[str, bytes], snapshot_name: Union[str, Path]) -> None:
        # if there is a stale dump file from a previous run, remove it
        dump_file = self._snapshot_path(str(snapshot_name) + '.dump')
        dump_file.unlink(missing_ok=True)
        # try to do the comparison
        try:
            original_assert_match(self, value, snapshot_name)
        except AssertionError as e:
            # ok, we have a failure. There can be two reasons:
            # - the file does not exist
            # - the file exists, and it is different
            # check the error message to see if the snapshot needs to be created
            snapshot_exists = 'run pytest with --snapshot-update to create it' not in str(e)
            if snapshot_exists:
                snapshot_path = self._snapshot_path(snapshot_name)
                encoded_expected_value = snapshot_path.read_bytes()
                # if the file is not empty, we assume it must be different
                if len(encoded_expected_value):
                    # the snapshot exists and is not empty, so we write a dump file instead
                    snapshot_name = dump_file
            # now pretend we want to create the snapshot
            # (we may create the snapshot in the dump_file)
            orig_snapshot_update = self._snapshot_update
            self._snapshot_update = True
            try:
                # run the original method again; this time it updates the snapshot
                # or creates a dump file
                original_assert_match(self, value, snapshot_name)
            finally:
                self._snapshot_update = orig_snapshot_update

    Snapshot._original_assert_match = original_assert_match
    Snapshot.assert_match = assert_match


patch_snapshot()
```
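For anyone who wants to try it, here is a minimal sketch of how the patch can be wired into a test suite. The module name `snapshot_patch` and the sample test are only my illustration, not part of the patch itself:

```python
# conftest.py -- apply the patch once, before any test runs
from snapshot_patch import patch_snapshot  # hypothetical module holding the code above

patch_snapshot()


# test_example.py -- an ordinary pytest-snapshot test; the tests themselves need no change
def test_lowercase(snapshot):
    snapshot.snapshot_dir = 'snapshots'
    # first run: the snapshot file is created without --snapshot-update;
    # on a later mismatch, a lowercase.txt.dump file appears next to the snapshot
    snapshot.assert_match('Example Text'.lower(), 'lowercase.txt')
```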
I would like to further understand your pain points. The workflow that I like to use is always running tests with `--snapshot-update` enabled. It seems to me that this avoids all your pain points.
- If a change happens, the snapshot is updated and the test fails. I can then view the changes with the git diff tool, and commit the changes if they are correct, or git reset them if they are incorrect. This is easier than having to manually call a diff tool with the snapshot and the dump, and then copying the dump if you want the changes.
- Solved
- Solved
Can you explain why you prefer your workflow?
- Let's say you make a change that breaks 10 tests:
  - your solution: 10 failures
  - my solution: 10 failures
- You make a fix that fixes 2 of the 10 tests:
  - your solution: 2 failures (the fixed ones!?)
  - my solution: 8 failures (the 8 still-failing tests)
- You make a fix that has no effect:
  - your solution: 0 failures: everything is OK (but it is not, because to figure out the 8 failing tests, you have to check the modified snapshots)
  - my solution: still 8 failing tests
- You realize that one of the snapshots was actually not what should have been expected, and you fix the snapshot file plus make a small fix in the test:
  - your solution: 0 failures and 8 modified snapshot files (which one is the one you manually fixed?)
  - my solution: 7 failures and 1 changed snapshot file (the one I have manually fixed)
- You make a fix that fixes 5 of the files, but breaks one new file:
  - your solution: 6 failures
  - my solution: 3 failures (2 old ones, and the new one)
- You make a change that changes nothing:
  - your solution: 0 failures
  - my solution: 3 failures
- You make a change that breaks the file that you have manually fixed:
  - your solution: 1 failure and 4 modified snapshots
  - my solution: 4 failures and one modified snapshot
- ...and so on...
- Your way of development makes the assumption that if a snapshot test fails, the snapshot must be wrong, and therefore you "fix" it by overwriting it. My solution makes the assumption that a failed snapshot test is actually a failed test: re-running it does not succeed.
- I want to decide how to fix the tests one by one: in some cases I want to update the snapshot, and in others I want to fix the code.
- Your solution lies most of the time about the failing tests, because it silently fixes the expectation. So why do you need the tests in the first place, if you constantly change the expectation to the (maybe wrong) actual value?
To add to this discussion, having a dump file is really useful in situations where an alternate diff tool is needed (think image comparison, large array comparison, etc.). If used with the current `snapshot.assert_match_dir()` function, dump file cleanup is also quite straightforward using `--allow-snapshot-deletion`.
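For illustration, a rough sketch of that combination; the directory name and file contents here are made up, but `assert_match_dir` is the existing pytest-snapshot API:

```python
def test_report_dir(snapshot):
    snapshot.snapshot_dir = 'snapshots'
    # compares every file under snapshots/report against the given dict;
    # with the patch above, each mismatching file would leave a *.dump sibling
    snapshot.assert_match_dir(
        {
            'summary.txt': 'total: 3\n',
            'details.txt': 'a, b, c\n',
        },
        'report',
    )
```

Running pytest with `--snapshot-update --allow-snapshot-deletion` then deletes snapshot files that are no longer asserted, which would also sweep up stale dumps living in the snapshot directory.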
Just wanted to follow up on this issue: @joseph-roitman, is this still open? Adding dump files (or, as the ApprovalTests libraries call them, "received" files) for external diff tools is an essential part of the debugging process toward making judgments about how to move forward with a failed test.
https://github.com/approvals/ApprovalTests.Python